metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.1 | odoo-addon-l10n-br-purchase | 16.0.6.0.3 | Brazilian Localization Purchase | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
===============================
Brazilian Localization Purchase
===============================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:5ef01b4ecb34dde8bf644eb5f8bc06841cbe4c394d19a46b7b7deb3bae00baa3
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fl10n--brazil-lightgray.png?logo=github
:target: https://github.com/OCA/l10n-brazil/tree/16.0/l10n_br_purchase
:alt: OCA/l10n-brazil
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/l10n-brazil-16-0/l10n-brazil-16-0-l10n_br_purchase
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/l10n-brazil&target_branch=16.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module integrates the fiscal part of the ``l10n_br_fiscal`` module with
Odoo's purchase module. It allows Brazilian taxes to be computed as early as
the purchase order and propagates the fiscal operation to the purchase
invoice, giving it the most accurate taxation possible for your finance
department (you could also import the invoice XML afterwards).
The purchase report is also extended to display Brazilian taxes.
This module is intended for purchasing items with NF-e's. However, if you want
to manage your incoming invoices according to material receipt, you may also
be interested in the ``l10n_br_purchase_stock`` module, which enables that.
**Table of contents**
.. contents::
:local:
Installation
============
This module depends on:
- purchase
- l10n_br_account
Configuration
=============
No configuration required.
Usage
=====
Known issues / Roadmap
======================
Changelog
=========
14.0.1.0.0 (2022-06-02)
-----------------------
- Module migration.
13.0.1.0.0 (2021-01-14)
-----------------------
- Module migration.
12.0.1.0.0 (2021-01-30)
-----------------------
- Module migration.
10.0.1.0.0 (2019-09-20)
-----------------------
- Module migration.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/l10n-brazil/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us smash it by providing detailed and welcome
`feedback <https://github.com/OCA/l10n-brazil/issues/new?body=module:%20l10n_br_purchase%0Aversion:%2016.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Akretion
Contributors
------------
- `Akretion <https://akretion.com/pt-BR>`__:
- Renato Lima <renato.lima@akretion.com.br>
- Raphaël Valyi <raphael.valyi@akretion.com.br>
- `KMEE <https://www.kmee.com.br>`__:
- Luis Felipe Mileo <mileo@kmee.com.br>
Other credits
-------------
The development of this module has been financially supported by:
- AKRETION LTDA - `www.akretion.com <http://www.akretion.com>`__
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-renatonlima| image:: https://github.com/renatonlima.png?size=40px
:target: https://github.com/renatonlima
:alt: renatonlima
.. |maintainer-rvalyi| image:: https://github.com/rvalyi.png?size=40px
:target: https://github.com/rvalyi
:alt: rvalyi
Current `maintainers <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-renatonlima| |maintainer-rvalyi|
This module is part of the `OCA/l10n-brazil <https://github.com/OCA/l10n-brazil/tree/16.0/l10n_br_purchase>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Akretion, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 16.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/l10n-brazil | null | >=3.10 | [] | [] | [] | [
"odoo-addon-l10n_br_account<16.1dev,>=16.0dev",
"odoo<16.1dev,>=16.0a"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T13:19:37.953057 | odoo_addon_l10n_br_purchase-16.0.6.0.3-py3-none-any.whl | 75,998 | a5/cb/8a44f04f5026bcb5dc33a822b27afdd46de9e45cc72163f1e3f5980c218a/odoo_addon_l10n_br_purchase-16.0.6.0.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 01e3879fd7db9a48845a06e9e52ffcb3 | d9de3820d76fb27bf54c3f718c058e9ea8e3e9eeddf35d61c2e614e125756075 | a5cb8a44f04f5026bcb5dc33a822b27afdd46de9e45cc72163f1e3f5980c218a | null | [] | 101 |
2.4 | objutils | 0.8.11 | Objectfile library for Python |
Readme
======
.. image:: https://github.com/christoph2/objutils/raw/master/docs/objutils_banner.png
:align: center
|PyPI| |Python Versions| |License: LGPL v3+| |Code style: black|
Binary data stored in hex-files is in widespread use especially in embedded systems applications.
``objutils`` gives you programmatic access to a wide array of formats and offers a practical API
to work with such data.
Get the latest version from `GitHub <https://github.com/christoph2/objutils>`_
Installation
------------
.. code-block:: shell
pip install objutils
or run
.. code-block:: shell
python setup.py develop
on your local installation.
Prerequisites
-------------
- Python >= 3.9
Features
--------
- ELF files can be read, including symbols.
- Typified access (scalars and arrays) to binary data.
Supported HEX formats
^^^^^^^^^^^^^^^^^^^^^
``objutils`` supports a bunch of HEX formats...
Current
~~~~~~~
- codec / format name
* ihex (Intel HEX)
* shf (S Hexdump (`rfc4194 <https://tools.ietf.org/html/rfc4194>`_))
* srec (Motorola S-Records)
* titxt (Texas Instruments Text)
Historical
~~~~~~~~~~
- codec / format name
* ash (ASCII Space Hex)
* cosmac (RCA Cosmac)
* emon52 (Elektor EMON52)
* etek (Tektronix Extended Hexadecimal)
* fpc (Four Packed Code)
* mostec (MOS Technology)
* rca (RCA)
* sig (Signetics)
* tek (Tektronix Hexadecimal)
**codec** is the first parameter to dump() / load() functions, e.g.:
.. code-block:: python
img = objutils.load("ihex", "myHexFile.hex") # Load an Intel HEX file...
objutils.dump("srec", "mySRecFile.srec", img) # and save it as S-Records.
First steps
-----------
If you are interested in what ``objutils`` provides out of the box, refer to the `Scripts <scripts.rst>`_ documentation.
In any case, you should work through the following tutorial:
First import all classes and functions used in this tutorial.
.. code-block:: python
from objutils import Image, Section, dump, dumps, load, loads
Everything starts with hello world...
.. code-block:: python
sec0 = Section(start_address = 0x1000, data = "Hello HEX world!")
The constructor parameters to `Section` reflect what it is about:
a continuous area of memory with a start address.
**data** is not necessarily a string; **array.array**, **bytes**, and **bytearray** will also do.
From an internal point of view, everything that is convertible to **bytearray** can be used.
Note: **start_address** and **data** are positional arguments, so there is no need to use them as keywords (done here just for the sake of illustration).
Now let's inspect our section.
.. code-block:: python
sec0.hexdump()
00001000 48 65 6c 6c 6f 20 48 45 58 20 77 6f 72 6c 64 21 |Hello HEX world!|
---------------
16 bytes
---------------
**hexdump()** gives us what is known in the hacker world as a canonical hexdump.
HEX files usually consist of more than one section, so let's create another one.
.. code-block:: python
sec1 = Section(0x2000, range(1, 17))
sec1.hexdump()
00002000 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f 10 |................|
---------------
16 bytes
---------------
Now, let's glue together our sections.
.. code-block:: python
img0 = Image([sec0, sec1])
print(img0)
Section(address = 0X00001000, length = 16, data = b'Hello HEX world!')
Section(address = 0X00002000, length = 16, data = b'\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f\x10')
Images are obviously containers for sections, and they are always involved when you are interacting with disk-based HEX files.
.. code-block:: python
dump("srec", "example0.srec", img0)
The resulting file can be inspected from the command line.
.. code-block:: shell
$ cat example0.srec
S113100048656C6C6F2048455820776F726C64217A
S11320000102030405060708090A0B0C0D0E0F1044
And loaded again...
.. code-block:: python
img1 = load("srec", "example0.srec")
print(img1)
Section(address = 0X00001000, length = 16, data = b'Hello HEX world!')
Section(address = 0X00002000, length = 16, data = b'\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f\x10')
This leads to the conversion idiom.
.. code-block:: python
img1 = load("srec", "example0.srec")
dump("ihex", "example0.hex", img1)
Note: the formats listed above as historical are historical for a good reason: they are only 16 bits wide. So if you want to convert,
say, an **srec** file for a 32-bit MCU to one of them, you're out of luck.
OK, we're starting another session.
.. code-block:: python
sec0 = Section(0x100, range(1, 9))
sec1 = Section(0x108, range(9, 17))
img0 = Image([sec0, sec1])
print(img0)
Section(address = 0X00000100, length = 16, data = b'\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f\x10')
img0.hexdump()
Section #0000
-------------
00000100 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f 10 |................|
---------------
16 bytes
---------------
Two sections with consecutive address ranges were concatenated into one; this may or may not be what you expect.
For this reason, **Image** has a **join** parameter.
.. code-block:: python
sec0 = Section(0x100, range(1, 9))
sec1 = Section(0x108, range(9, 17))
img0 = Image([sec0, sec1], join = False)
print(img0)
Section(address = 0X00000100, length = 8, data = b'\x01\x02\x03\x04\x05\x06\x07\x08')
Section(address = 0X00000108, length = 8, data = b'\t\n\x0b\x0c\r\x0e\x0f\x10')
img0.hexdump()
Section #0000
-------------
00000100 01 02 03 04 05 06 07 08 |........ |
---------------
8 bytes
---------------
Section #0001
-------------
00000108 09 0a 0b 0c 0d 0e 0f 10 |........ |
---------------
8 bytes
---------------
One feature that sets **objutils** apart from other libraries of this breed is typified access.
We are starting with a new image.
.. code-block:: python
img0 = Image([Section(0x1000, bytes(64))])
print(img0)
Section(address = 0X00001000, length = 64, data = b'\x00\x00\x00\x00\x00\x00\x00...00\x00\x00\x00\x00\x00\x00\x00')
We now write to our image: first a plain byte, then a string.
.. code-block:: python
img0 = Image([Section(0x1000, bytes(64))])
img0.write(0x1010, [0xff])
img0.hexdump()
Section #0000
-------------
00001000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00001010 ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00001020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00001030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
img0.write_string(0x1000, "Hello HEX world!")
img0.hexdump()
Section #0000
-------------
00001000 48 65 6c 6c 6f 20 48 45 58 20 77 6f 72 6c 64 21 |Hello HEX world!|
00001010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00001030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
---------------
64 bytes
---------------
Notice the difference? In our **Section** example above, the string passed as the **data** parameter
was just a bunch of bytes, but now it is a "real" C string (there is an opposite function, **read_string**,
that scans for a terminating **NULL** character).
Use the **write()** and **read()** functions if you want to access plain bytes.
But there is also support for numerical types.
.. code-block:: python
img0 = Image([Section(0x1000, bytes(64))])
img0.write_numeric(0x1000, 0x10203040, "uint32_be")
img0.write_numeric(0x1004, 0x50607080, "uint32_le")
img0.hexdump()
Section #0000
-------------
00001000 10 20 30 40 80 70 60 50 00 00 00 00 00 00 00 00 |. 0@.p`P........|
00001010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00001030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
---------------
64 bytes
---------------
The following types are supported:
* uint8
* int8
* uint16
* int16
* uint32
* int32
* uint64
* int64
* float32
* float64
In any case, an endianness suffix, **_be** or **_le**, is required.
Arrays are also supported.
.. code-block:: python
img0 = Image([Section(0x1000, bytes(64))])
img0.write_numeric_array(0x1000, [0x1000, 0x2000, 0x3000, 0x4000, 0x5000, 0x6000, 0x7000, 0x8000], "uint16_le")
img0.hexdump()
Section #0000
-------------
00001000 00 10 00 20 00 30 00 40 00 50 00 60 00 70 00 80 |... .0.@.P.`.p..|
00001010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00001030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
---------------
64 bytes
---------------
This concludes our tutorial for now, but there is more stuff to follow...
Documentation
-------------
For full documentation, including installation, tutorials and PDF documents, please see `Readthedocs <https://objutils.rtfd.org>`_
Bugs/Requests
-------------
Please use the `GitHub issue tracker <https://github.com/christoph2/objutils/issues>`_ to submit bugs or request features
References
----------
`Here <https://github.com/christoph2/objutils/blob/master/docs/Data_Formats.pdf>`_ is an overview of some of the classic hex-file formats.
Authors
-------
- `Christoph Schueler <cpu12.gems@googlemail.com>`_ - Initial work and project lead.
License
-------
This project is licensed under the GNU General Public License v2.0
Contribution
------------
If you contribute code to this project, you are implicitly allowing your code to be distributed under the GNU General Public License v2.0. You are also implicitly verifying that all code is your original work.
.. |CI| image:: https://github.com/christoph2/objutils/workflows/Python%20application/badge.svg
:target: https://github.com/christoph2/objutils/actions
.. |PyPI| image:: https://img.shields.io/pypi/v/objutils.svg
:target: https://pypi.org/project/objutils/
.. |Python Versions| image:: https://img.shields.io/pypi/pyversions/objutils.svg
:target: https://pypi.org/project/objutils/
.. |License: LGPL v3+| image:: https://img.shields.io/badge/License-LGPL%20v3%2B-blue.svg
:target: https://www.gnu.org/licenses/lgpl-3.0
.. |Code style: black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
| text/x-rst | Christoph Schueler | cpu12.gems@googlemail.com | null | null | GPLv2 | hex files, intel hex, s19, srec, srecords, object files, map files, embedded, microcontroller, ECU, shf, rfc4194 | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Pr... | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"construct<3.0.0,>=2.10.70",
"mako<2.0.0,>=1.3.3",
"numpy<=2.2.5",
"rich<15.0,>=14.2",
"sqlalchemy<3.0.0,>=2.0.29"
] | [] | [] | [] | [
"Homepage, https://github.com/christoph2/objutils"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:18:52.634728 | objutils-0.8.11.tar.gz | 413,364 | 42/19/c6d498e85008cb477d490e951cf3ee786a73d2dd0bd4fc7d7043afa738cf/objutils-0.8.11.tar.gz | source | sdist | null | false | 5aa9f213951cb3045c0f754cce2e6e3e | 93f92e44f45d334b8d810350432ac2914d6f6cf69984739719c426a0b2b55d56 | 4219c6d498e85008cb477d490e951cf3ee786a73d2dd0bd4fc7d7043afa738cf | null | [
"LICENSE"
] | 2,051 |
2.4 | yesdb | 0.1.6 | A lightweight, relational database built from scratch in Python | # YesDB
A lightweight relational database built from scratch in Python with SQL support, B-tree storage, and a cloud Backend-as-a-Service for students.
## Installation
```bash
# Local only
pip install yesdb
# With cloud support
pip install yesdb[cloud]
```
## Quick Start
### Local Mode
```python
from yesdb import connect
db = connect("myapp.db")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
db.execute("INSERT INTO users VALUES (NULL, 'Alice', 30)")
results = db.execute("SELECT * FROM users")
for row in results:
print(row)
db.close()
```
### Cloud Mode
YesDB Cloud lets you host your database on a remote server. Perfect for student projects that need a real backend.
#### 1. Sign up and create a database
```bash
yesdb signup
# Email: student@uni.edu
# Password: ********
# -> Account created! Logged in.
yesdb init myproject
# -> Created yesdb/ folder with schema.py
# -> Database "myproject" created on cloud.
```
#### 2. Define your schema
After running `yesdb init`, you'll have a `yesdb/schema.py` file in your project. Edit it:
```python
# yesdb/schema.py
from yesdb import Table, Column, Integer, Text, Real
users = Table("users", [
Column("id", Integer, primary_key=True),
Column("name", Text),
Column("email", Text),
])
products = Table("products", [
Column("id", Integer, primary_key=True),
Column("name", Text),
Column("price", Real),
])
```
#### 3. Push your schema to the cloud
```bash
yesdb push
# -> Connecting to myproject database...
# -> Table "users" created
# -> Table "products" created
# -> Schema synced. 2 tables pushed.
```
#### 4. Use in your code
```python
from yesdb import connect
db = connect("myproject") # uses saved credentials automatically
db.execute("INSERT INTO users VALUES (NULL, 'Alice', 'alice@uni.edu')")
db.execute("INSERT INTO users VALUES (NULL, 'Bob', 'bob@uni.edu')")
rows = db.execute("SELECT * FROM users")
for row in rows:
print(row)
```
Every response includes the database engine's internal logs (B-tree operations, SQL parsing, page allocations), so you can see exactly what's happening under the hood.
#### 5. Use with FastAPI or Flask
```python
# main.py
from fastapi import FastAPI
from yesdb import connect
app = FastAPI()
db = connect("myproject")
@app.get("/users")
def get_users():
rows = db.execute("SELECT * FROM users")
return {"users": [list(row) for row in rows]}
@app.post("/users")
def create_user(name: str, email: str):
db.execute(f"INSERT INTO users VALUES (NULL, '{name}', '{email}')")
return {"status": "created"}
```
### CLI Commands
```bash
yesdb signup # Create an account
yesdb login # Login to existing account
yesdb init <db_name> # Initialize a project with a cloud database
yesdb push # Push schema.py to the cloud
yesdb databases # List your databases
yesdb shell <db_name> # Interactive SQL shell against a cloud database
```
### Local CLI Shell
```bash
yesdb-local mydatabase.db
```
```sql
YesDB> CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER);
YesDB> INSERT INTO users VALUES (NULL, 'Alice', 30);
YesDB> SELECT * FROM users;
```
```
.help Show help
.tables List all tables
.schema Show table schemas
.exit Exit shell
```
## Features
- **SQL Support**: CREATE, SELECT, INSERT, UPDATE, DELETE, DROP, ALTER TABLE
- **Data Types**: INTEGER, TEXT, REAL, BLOB
- **Query Features**: WHERE, ORDER BY, LIMIT, OFFSET, DISTINCT
- **B-Tree Storage**: Efficient indexing and data retrieval
- **No Dependencies**: Pure Python implementation (local mode)
- **Cloud BaaS**: Host your database remotely with a single command
- **Schema DSL**: Define tables in Python, push to cloud
- **Engine Logs**: See B-tree splits, page allocations, and SQL parsing in every response
- **Interactive Shell**: Built-in SQL shell (local and cloud)
- **Auto-increment**: PRIMARY KEY auto-increment support
## SQL Examples
```sql
-- Create table
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, age INTEGER)
-- Insert data
INSERT INTO users VALUES (NULL, 'Alice', 'alice@example.com', 30)
INSERT INTO users VALUES (NULL, 'Bob', 'bob@example.com', 25)
-- Query with conditions
SELECT * FROM users WHERE age > 25
SELECT name, email FROM users WHERE name = 'Alice'
-- Order and limit
SELECT * FROM users ORDER BY age DESC
SELECT * FROM users LIMIT 10 OFFSET 5
-- Update and delete
UPDATE users SET age = 31 WHERE name = 'Alice'
DELETE FROM users WHERE age < 18
-- Alter table
ALTER TABLE users ADD COLUMN country TEXT
-- Drop table
DROP TABLE users
```
## How Cloud Mode Works
```
Your machine YesDB Cloud Server
──────────────── ──────────────────
yesdb CLI (signup/login/push) ┌─────────────────┐
<-> HTTPS │ nginx (SSL) │
yesdb SDK (connect/execute) │ └─ FastAPI │
│ ├─ auth │
│ └─ data/ │
│ ├─ user1/ │
│ │ └─ *.db │
│ └─ user2/ │
│ └─ *.db │
└─────────────────┘
```
- Each user gets their own isolated account with multiple databases
- All traffic is encrypted over HTTPS
- Authentication via API key (generated at signup, saved locally)
- Database engine logs are returned with every query for full transparency
## Security
### Local Mode
- Path validation (blocks system file access)
- Resource limits (SQL length, record size)
- Input validation (table/column names)
### Cloud Mode
- HTTPS encryption (TLS via Let's Encrypt)
- API key authentication (SHA-256 hashed, never stored in plaintext)
- Password hashing (bcrypt)
- Per-user data isolation
- Request size limits
## Development
### Install from Source
```bash
git clone https://github.com/AzharAhmed-bot/yesdb.git
cd yesdb
pip install -e ".[cloud]"
```
### Run Tests
```bash
pip install pytest
pytest
```
## Requirements
- Python 3.8+
- No external dependencies (local mode)
- `requests` (cloud mode, installed with `pip install yesdb[cloud]`)
## License
MIT License - see [LICENSE](LICENSE) file
## Links
- **PyPI**: https://pypi.org/project/yesdb/
- **GitHub**: https://github.com/AzharAhmed-bot/yesdb
- **Issues**: https://github.com/AzharAhmed-bot/yesdb/issues
## Version
Current version: **0.1.6**
### Changelog
#### v0.1.5 (bug fixes)
- **Fix**: `from yesdb import connect` now works correctly. A `yesdb` compatibility package is included so the import matches the PyPI package name.
- **Fix**: `Table.to_sql()` now emits the `PRIMARY KEY` constraint in the generated `CREATE TABLE` SQL, so auto-increment works as expected when inserting `NULL` into a primary key column.
---
**Note**: YesDB is an educational database built from scratch to teach database internals. For production systems, consider SQLite, PostgreSQL, or MySQL.
| text/markdown | Azhar | Azhar <azhar.takoy@strathmore.edu> | null | null | MIT | database, sql, btree, educational, embedded-database, baas, cloud | [
"License :: OSI Approved :: MIT License",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: ... | [] | https://github.com/AzharAhmed-bot/yesdb | null | >=3.8 | [] | [] | [] | [
"requests>=2.28; extra == \"cloud\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"flake8>=6.0; extra == \"dev\"",
"requests>=2.28; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/AzharAhmed-bot/yesdb",
"Documentation, https://github.com/AzharAhmed-bot/yesdb#readme",
"Repository, https://github.com/AzharAhmed-bot/yesdb",
"Issues, https://github.com/AzharAhmed-bot/yesdb/issues"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-18T13:18:01.884493 | yesdb-0.1.6.tar.gz | 78,601 | e5/cd/1607414a8132d6ec359542512c2aafa8973500edcb107b56881a2ae30e25/yesdb-0.1.6.tar.gz | source | sdist | null | false | e69261d629f3d61164abb9c2b34946c5 | 204301523e10c4b1a98e23baafc25772fcea2a2a8d5a7f93a58d5ac0b8886ac9 | e5cd1607414a8132d6ec359542512c2aafa8973500edcb107b56881a2ae30e25 | null | [
"LICENSE"
] | 250 |
2.1 | odoo-addon-energy-communities | 16.0.0.7.11 | Energy Community | # Energy Communities
Base addon for the basic operation of energy communities
## Changelog
### 2026-02-18 (v16.0.0.7.11)
- Fix security rules for email template.
### 2026-02-11 (v16.0.0.7.10)
- Fix search view for email template.
### 2026-02-10 (v16.0.0.7.9)
- New menu entry in the energy communities configuration to directly access email templates
- Add new filters to search only energy community templates
- Hide general filters from energy community managers
### 2026-01-29 (v16.0.0.7.8)
- New translation for the email.template account.email_template_edi_invoice in the body_html field
### 2026-01-13 (v16.0.0.7.7)
- Log a warning message in the logfile and chatter instead of raising an exception when validating role assignment during user creation
### 2026-01-13 (v16.0.0.7.6)
- Fix the permissions on CreateUsersWizard. Now all admin roles can launch this wizard
- Enrich user attributes in Keycloak. Now we have the energy community, the email contact of the energy community and the correct language in Keycloak
- Tests covering all of the above
### 2025-12-19 (v16.0.0.7.5)
- Better cooperator buttons on landing
- Map place usability improvements
### 2025-11-14 (v16.0.0.7.3)
- Add Python dependencies
### 2025-11-12 (v16.0.0.7.2)
- Added an input search for selecting companies; you can now select or unselect all companies
### 2025-11-06 (v16.0.0.7.1)
- New demo data in order to test API calls
### 2025-11-03 (v16.0.0.7.0)
- Adjustments for new public form for new community creation
### 2025-10-22 (v16.0.0.6.1)
- Remove duplicated Voluntary Share product category (from energy_communities)
### 2025-10-22 (v16.0.0.6.0)
- Improved MultiCompanyEasyCreationWizard
### 2025-10-01 (v16.0.0.5.7)
- fix typo in CE_MANAGER
- add menu "my community"
- fix demo data
### 2025-09-24 (v16.0.0.5.6)
- Added role ids to config
### 2025-09-17 (v16.0.0.5.5)
- Clean commented code
### 2025-06-03 (v16.0.0.4.5)
- New function `res_company.get_all_energy_actions_dict_list`
### 2025-05-21
- Added Readme
| text/markdown | Coopdevs Treball SCCL & Som Energia SCCL | null | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 16.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://git.coopdevs.org/coopdevs/comunitats-energetiques/odoo-ce | null | >=3.10 | [] | [] | [] | [
"Faker==38.0.0",
"odoo-addon-account_banking_mandate<16.1dev,>=16.0dev",
"odoo-addon-account_lock_date_update<16.1dev,>=16.0dev",
"odoo-addon-account_multicompany_easy_creation<16.1dev,>=16.0dev",
"odoo-addon-account_payment_order<16.1dev,>=16.0dev",
"odoo-addon-account_reconcile_oca<16.1dev,>=16.0dev",
... | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.10 | 2026-02-18T13:16:33.283294 | odoo_addon_energy_communities-16.0.0.7.11.tar.gz | 161,476 | ae/71/e2527d3a72242a61e4bc7f727e46eaf47cc976111a1fbed41ccc7d2e80c2/odoo_addon_energy_communities-16.0.0.7.11.tar.gz | source | sdist | null | false | 9bd2bc8e509f2b80c7ccce3d63027ca6 | 1662579c4f47db20d1e6a69097045d56b1b9407134d4ef314d7570d3f2c27d58 | ae71e2527d3a72242a61e4bc7f727e46eaf47cc976111a1fbed41ccc7d2e80c2 | null | [] | 242 |
2.4 | extralo | 2.2.0 | ETL for Python | # ETL using python
Python package for extracting data from a source, transforming it and loading it to a destination, with validation in between.
The provided ETL pipeline provides useful functionality on top of the usual operations:
- **Extract**: Extract data from multiple sources, in parallel (using threads).
- **Validate**: Validate the extracted data, to make sure it matches what will be required by the transform step, using pandera schemas (see the schema sketch after this list). This provides early failure if there is any unexpected change in the sources.
- **Transform**: Define the logic for transformation of the data, making it reusable, and allowing multiple data frames as input and multiple data frames as output.
- **Validate again**: Validate the transformed data, to make sure it matches your expectations, and what the destination will require.
- **Load**: Load multiple data sets, each to one or more destinations, and load different data to different destinations in parallel (using threads).
## Installation
The package is available at PyPI, so you can install it using pip:
```bash
pip install extralo
```
| text/markdown | null | Vitor Capdeville <vgcapdeville@hotmail.com> | null | Vitor Capdeville <vgcapdeville@hotmail.com> | null | data, data-engineering, etl | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"loguru",
"delta-spark; extra == \"all\"",
"deltalake; extra == \"all\"",
"openpyxl; extra == \"all\"",
"pandas; extra == \"all\"",
"pandas-stubs; extra == \"all\"",
"pyspark; extra == \"all\"",
"sqlalchemy; extra == \"all\"",
"sqlparse; extra == \"all\"",
"deltalake; extra == \"deltalake\"",
"o... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:16:05.139983 | extralo-2.2.0.tar.gz | 104,061 | 82/99/37b04bb402f7ee22a61e4e635e164a17418e8ff2dae1fd1cec8a241ceae6/extralo-2.2.0.tar.gz | source | sdist | null | false | 2483df7c3d1d6b2c379b7ba870d6cb79 | e223a5a398f4504331fbaa6f3aa2b7a468bd548bb141ffb272730f85b081c632 | 829937b04bb402f7ee22a61e4e635e164a17418e8ff2dae1fd1cec8a241ceae6 | null | [] | 253 |
2.4 | pyCoDaMath | 1.1 | Compositional data (CoDa) analysis tools for Python | # pyCoDaMath
[](https://www.python.org/)
pyCoDaMath provides compositional data (CoDa) analysis tools for Python
- **Source code:** https://bitbucket.org/genomicepidemiology/pycodamath
## Getting Started
This package extends the Pandas dataframe object with various CoDa tools. It also provides a set of plotting functions for CoDa figures.
### Installation
Clone the git repo to your local hard drive:
git clone https://bitbucket.org/genomicepidemiology/pycodamath.git
Enter the directory and install:
pip install .
### Usage
The pyCoDaMath module is loaded as
import pycodamath
At this point, in order to get CLR values from a Pandas DataFrame `df`, do
df.coda.clr()
## Documentation
### CLR transformation - point estimate
df.coda.clr()
Returns centered logratio coefficients. If the dataframe contains zeros, values
will be replaced by the Aitchison mean point estimate.
### CLR transformation - standard deviation
df.coda.clr_std(n_samples=5000)
Returns the standard deviation of `n_samples` random draws in CLR space.
**Parameters**
- n_samples (int) - Number of random draws from a Dirichlet distribution.
### ALR transformation - point estimate
df.coda.alr(part=None)
Returns additive logratio values. If `part` is None, the last part of the composition is used as the denominator.
**Parameters**
- part (str) - Name of the part to use as denominator.
### ALR transformation - standard deviation
df.coda.alr_std(part=None, n_samples=5000)
Returns the standard deviation of `n_samples` random draws in ALR space.
**Parameters**
- part (str) - Name of the part to use as denominator.
- n_samples (int) - Number of random draws from a Dirichlet distribution.
### ILR transformation - point estimate
df.coda.ilr(psi=None)
Returns isometric logratio values. If no basis is given, a default sequential binary partition basis is used.
**Parameters**
- psi (array_like) - Orthonormal basis. If None, the default SBP basis is used.
### ILR inverse transformation
df.coda.ilr_inv(psi=None)
Returns the composition corresponding to a set of ILR coordinates. The same basis used for the forward transform must be supplied.
**Parameters**
- psi (array_like) - Orthonormal basis. If None, the default SBP basis is used.
### Aitchison point estimate
df.coda.aitchison_mean(alpha=1.0)
Returns the Bayesian point estimate based on the Dirichlet concentration parameter alpha.
Use values between 0.5 (sparse prior) and 1.0 (flat prior).
**Parameters**
- alpha (float) - Dirichlet concentration parameter. Defaults to 1.0.
### Bayesian zero replacement
df.coda.zero_replacement(n_samples=5000)
Returns a count table with zero values replaced by finite values using Bayesian inference.
**Parameters**
- n_samples (int) - Number of random draws from a Dirichlet distribution.
### Closure
df.coda.closure(N)
Closes the composition to the constant N.
**Parameters**
- N (float) - Closure constant.
### Variance matrix
df.coda.varmatrix(nmp=False)
Returns the total variation matrix of a composition. For large datasets, variance is
estimated from at most 500 rows.
**Parameters**
- nmp (bool) - If True, return a numpy array instead of a DataFrame. Defaults to False.
### Total variance
df.coda.totvar()
Returns the total variance of a set of compositions, computed as the sum of the
variance matrix divided by twice the number of parts.
### Geometric mean
df.coda.gmean()
Returns the geometric mean of a set of compositions as percentages.
### Power transformation
df.coda.power(alpha)
Applies compositional scalar multiplication (power transformation).
**Parameters**
- alpha (float) - Scalar multiplier.
### Perturbation
df.coda.perturbation(comp)
Applies a compositional perturbation (Aitchison addition) with another composition.
**Parameters**
- comp (array_like) - Composition to perturb with.
### Scaling
df.coda.scale()
Scales the composition by the reciprocal of the square root of the total variance.
### Centering
df.coda.center()
Centers the composition by perturbing with the reciprocal of the geometric mean.
---
## Plotting functions
### Ternary diagram
pycodamath.plot.ternary(data, descr=None, center=False, conf=False)
Plots a ternary diagram from a three-part composition closed to 100.
**Parameters**
- data (DataFrame) - Three-part compositional data, closed to 100.
- descr (Series) - Optional grouping variable; if provided, points are coloured by group.
- center (bool) - If True, the composition is centred before plotting. Defaults to False.
- conf (bool) - If True, a 95% confidence ellipse is overlaid. Defaults to False.
### Scree plot
pycodamath.pca.scree_plot(axis, eig_val)
Plots a scree plot of explained variance from singular values.
**Parameters**
- axis - A Matplotlib axes object.
- eig_val (array_like) - Singular values from SVD.
### PCA biplot
class pycodamath.pca.Biplot(data, axis=None, default=True)
Creates a PCA biplot based on a centered log-ratio transformation of the data.
**Parameters**
- data (DataFrame) - Compositional count data to analyse.
- axis - A Matplotlib axes object. If None, a new figure is created.
- default (bool) - If True, loadings and scores are plotted immediately. Defaults to True.
The following methods are available for customising the biplot:
- `plotloadings(cutoff=0, scale=None, labels=None, cluster=False)` — plot loading arrows.
Set `cutoff` (as a fraction of the maximum loading length) to suppress short loadings.
Set `cluster=True` to reduce the number of loadings by hierarchical clustering; the
resulting cluster legend is accessible as `biplot.clusterlegend`.
- `plotloadinglabels(labels=None, loadings=None, cutoff=0)` — add text labels to loadings.
- `adjustloadinglabels()` — shift loading labels to reduce overlap.
- `plotscores(group=None, palette=None, legend=True, labels=None)` — plot sample scores
as points, optionally coloured by group.
- `plotscorelabels(labels=None)` — add text labels to the scores.
- `plotellipses(group, palette=None, legend=False)` — plot 90% confidence ellipses for
each group (requires at least 3 samples per group).
- `plotcentroids(group, palette=None, legend=False)` — plot the centroid of each group.
- `plothulls(group, palette=None, legend=True)` — plot convex hulls around each group
(requires at least 3 samples per group).
- `plotcontours(group, palette=None, legend=True, plot_outliers=True, percent_outliers=0.1, linewidth=2.2)` — plot kernel density contours for each group. Samples outside the outermost contour are optionally shown as individual points.
- `labeloutliers(group, conf=3.0)` — label samples more than `conf` standard deviations
from their group centroid.
- `displaylegend(loc=2)` — display the group legend at Matplotlib legend location `loc`.
- `removepatches()` — remove loading arrows and hull polygons from the plot.
- `removescores()` — remove score points from the plot.
- `removelabels()` — remove text labels from the plot.
- `removecontours()` — remove contour fills from the plot.
The keyword `labels` is a list of label names. If `labels` is None, all labels are plotted.
The keyword `group` is a Pandas Series with an index matching the data index.
The keyword `palette` is a dict mapping each unique group value to a colour.
**Example**
import pycodamath as coda
import pandas as pd
data = pd.read_csv('example/kilauea_iki_chem.csv')
mypca = coda.pca.Biplot(data)
mypca.removelabels()
mypca.plotloadings(cluster=True)
print(mypca.clusterlegend)
mypca.removelabels()
mypca.plotloadings(labels=['FeO', 'Al2O3', 'CaO'], cluster=False)
mypca.adjustloadinglabels()
| text/markdown | null | Christian Brinch <cbri@food.dtu.dk> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: P... | [] | null | null | >=3.8 | [] | [] | [] | [
"adjustText>=0.7.3",
"matplotlib>=3.1.1",
"numpy>=1.17.2",
"pandas>=0.25.1",
"python-ternary>=1.0.6",
"scipy>=1.3.1",
"webcolors>=1.13"
] | [] | [] | [] | [
"Homepage, https://bitbucket.org/genomicepidemiology/pycodamath",
"Bug Tracker, https://bitbucket.org/genomicepidemiology/pycodamath/issues"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-18T13:14:52.567257 | pycodamath-1.1.tar.gz | 14,291 | 4e/a2/bbd878f8a3714b0cf1628bade73ba3b1860b6e18c2a68b3e7efe0c700075/pycodamath-1.1.tar.gz | source | sdist | null | false | 405d0685a5b096598a06ec3a25967912 | 40389bf6188cb5a0d757df1729aa09d3bab2eb3af85cd28c5b511d1c57550cf3 | 4ea2bbd878f8a3714b0cf1628bade73ba3b1860b6e18c2a68b3e7efe0c700075 | null | [
"LICENSE"
] | 0 |
2.4 | fmu-pem | 0.1.1 | pem | > [!WARNING]
> `fmu-pem` is not yet qualified technology, and as of today is only applicable to
> selected pilot test fields.
**[📚 User documentation](https://equinor.github.io/fmu-pem/)**
## What is fmu-pem?
Petro-elastic model (PEM) for use in e.g. [fmu-sim2seis](https://github.com/equinor/fmu-sim2seis)
based on the [rock-physics-open](https://github.com/equinor/rock-physics-open) library.
## How to use fmu-pem?
### Installation
To install `fmu-pem`, first activate a virtual environment, then type:
```shell
pip install fmu-pem
```
The PEM is controlled by parameter settings in a *yaml file*, passed as a
command line argument, or via the workflow parameter when it is run as an ERT
forward model.
### Calibration of rock physics models
Calibration of the rock physics models is normally carried out in
[RokDoc](https://www.ikonscience.com/rokdoc-geoprediction-software-platform/)
prior to running the PEM. Fluid and mineral properties can be found in the RokDoc
project, or from LFP logs, if they are available.
> [!NOTE]
> The fluid models contained in this module may not cover all possible cases. Gas
> condensate, very heavy oil, or reservoir pressure under hydrocarbon bubble point will
> need additional proprietary code to run.
>
> Equinor users can install additional proprietary models using
> ```bash
> pip install "git+ssh://git@github.com/equinor/rock-physics"
> ```
## How to develop fmu-pem?
Developing the user interface can be done by:
```bash
cd ./documentation
npm ci # Install dependencies
npm run create-json-schema # Extract JSON schema from Python code
npm run docs:dev # Start local development server
```
The JSON schema itself (type, title, description etc.) comes from the corresponding
Pydantic models in the Python code.
| text/markdown | null | null | null | null | GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
| energy, subsurface, seismic, scientific, engineering | [
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Physics",
"Topic :: Software Development :: Libraries",
"Topic :: Utilities",
"Operating System :: POSIX :: Linux",
"Natural Language :: English"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=1.24.3",
"xtgeo>=4.7.1",
"fmu-tools",
"fmu-config",
"fmu-dataio",
"fmu-datamodels",
"rock-physics-open>=0.3.3",
"PyYAML>=6.0.1",
"pydantic",
"ert>=14.1.10",
"mypy; extra == \"tests\"",
"pytest; extra == \"tests\"",
"pytest-cov; extra == \"tests\"",
"pytest-xdist; extra == \"tests\"... | [] | [] | [] | [
"Homepage, https://github.com/equinor/fmu-pem",
"Repository, https://github.com/equinor/fmu-pem"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:13:36.691711 | fmu_pem-0.1.1.tar.gz | 33,920,746 | 70/65/0f8aedc0c973017efd05b828821cbed59ef2291a4935d6e8e9602125329d/fmu_pem-0.1.1.tar.gz | source | sdist | null | false | 21949e570e5cd569e129960fd203d092 | 878647fc823ae8fd65fbcd1d78b42c5b914f0830b0e81a9bbdfcb2d2cbc6cefa | 70650f8aedc0c973017efd05b828821cbed59ef2291a4935d6e8e9602125329d | null | [
"LICENSE"
] | 334 |
2.4 | iconfucius | 0.0.1 | Trade with IConfucius at your side — Chain Fusion AI | # iconfucius
Trade with IConfucius at your side — Chain Fusion AI
Full release coming soon. See [github.com/onicai/IConfucius](https://github.com/onicai/IConfucius).
| text/markdown | null | onicai <iconfucius@onicai.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/onicai/IConfucius",
"Repository, https://github.com/onicai/IConfucius"
] | twine/6.2.0 CPython/3.12.11 | 2026-02-18T13:12:20.129672 | iconfucius-0.0.1.tar.gz | 1,396 | 48/04/26809db4c3675d11170a16f767dcb1ca080d8a3d0162740af25ec76c797e/iconfucius-0.0.1.tar.gz | source | sdist | null | false | 17efb4e9eba48cfb7b518be74e556dca | 7615a22b216399ecdeb17e074bbc217255e639402565429586c3033a1437b6d8 | 480426809db4c3675d11170a16f767dcb1ca080d8a3d0162740af25ec76c797e | MIT | [] | 281 |
2.4 | fb-vmware | 1.8.5 | @summary: The module for a base vSphere handler object. | # fb-vmware
A Python wrapper module around the pyvmomi module to simplify work and handling.
| text/markdown | null | Frank Brehm <frank@brehm-online.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Framework :: IPython",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Lesser General Public License v3 or later (LGPLv3+)",
"Natural Language :: English",
"Operating System :: POSIX",
"Programming Language :: Python :: 3.... | [] | null | null | >=3.8 | [] | [] | [] | [
"babel",
"chardet",
"fb_logging",
"fb_tools",
"pytz",
"pyvmomi",
"PyYAML",
"requests",
"rich",
"semver",
"six",
"black; extra == \"development\"",
"isort; extra == \"development\"",
"hjson; extra == \"development\"",
"flake8; extra == \"lint\"",
"pylint; extra == \"lint\"",
"flake8-b... | [] | [] | [] | [
"Source, https://github.com/fbrehm/fb-vmware"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T13:12:15.588280 | fb_vmware-1.8.5.tar.gz | 123,534 | c4/38/91981091b8cec1f4d2a0437261676239d28720cca1e0a3164ef712e48d4e/fb_vmware-1.8.5.tar.gz | source | sdist | null | false | a539412b4ee9270a3ef67a89dabca345 | d12e3d71a9d70ac252d313febd6fe06f7619b1133c1c31fceb13bd1df696c23f | c43891981091b8cec1f4d2a0437261676239d28720cca1e0a3164ef712e48d4e | null | [
"LICENSE"
] | 423 |
2.4 | plonemeeting.restapi | 2.12 | Extended rest api service for Products.PloneMeeting usecases | .. This README is meant for consumption by humans and pypi. Pypi can render rst files so please do not use Sphinx features.
If you want to learn more about writing documentation, please check out: http://docs.plone.org/about/documentation_styleguide.html
This text does not appear on pypi or github. It is a comment.
.. image:: https://coveralls.io/repos/IMIO/plonemeeting.restapi/badge.png?branch=master
:target: https://coveralls.io/r/IMIO/plonemeeting.restapi?branch=master
.. image:: http://img.shields.io/pypi/v/plonemeeting.restapi.svg
:alt: PyPI badge
:target: https://pypi.org/project/plonemeeting.restapi
====================
plonemeeting.restapi
====================
plone.restapi specific endpoints for Products.PloneMeeting
Installation
------------
Install plonemeeting.restapi by adding it to your buildout::
[buildout]
...
eggs =
plonemeeting.restapi
and then running ``bin/buildout``
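Once installed, the endpoints can be queried like any ``plone.restapi`` service,
for example with ``requests``. A minimal sketch; the host, credentials and
``config_id`` below are placeholders, not defaults::

    import requests

    # Query items of a given MeetingConfig through the @search endpoint
    resp = requests.get(
        "http://localhost:8080/Plone/@search",
        params={"config_id": "meeting-config-id", "type": "item"},
        headers={"Accept": "application/json"},  # ask plone.restapi for JSON
        auth=("username", "password"),
    )
    print(resp.json())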
Contribute
----------
- Issue Tracker: https://github.com/IMIO/plonemeeting.restapi/issues
- Source Code: https://github.com/IMIO/plonemeeting.restapi
License
-------
The project is licensed under the GPLv2.
Changelog
=========
Version 1.x is for PloneMeeting 4.1.x, version 2.x is for PloneMeeting 4.2.x+
2.12 (2026-02-18)
-----------------
- When using `utils.rest_uuid_to_object`, instead of returning a `400 BadRequest`
when the element was not found, return a specific error when:
- the element exists but is not accessible (`403 Forbidden`);
- the element does not exist (`404 NotFound`).
[gbastien]
- Fixed serialization of `Meeting.committees` that failed on
`RichTextValue` in a `datagridfield`.
[gbastien]
- Include `full_id` by default when serializing an `organization` so
information is available when used as `MeetingItem.proposingGroup`.
[gbastien]
2.11 (2026-01-15)
-----------------
- SUP-43789: Fixed another issue with `utils.clean_html`
when the ending tag was not a </p>.
[aduchene]
2.10 (2025-12-22)
-----------------
- Make `formatted_itemNumber` available when `metadata_fields=formatted_itemNumber`
in addition to `additional_values=formatted_itemNumber`.
[gbastien]
2.9 (2025-05-05)
----------------
- SUP-43789: Fixed an issue with `utils.clean_html` when the ending tag `</p>`
was not at the end of the content string.
[aduchene]
2.8 (2024-10-16)
----------------
- Adapted call to `get_attendee_short_title` as it was moved
from `Meeting` to `utils`.
[gbastien]
2.7 (2024-06-07)
----------------
- Fixed french translation for `create_element_using_ws_rest`.
[gbastien]
2.6 (2024-05-27)
----------------
- Adapted `test_restapi_add_item_with_annexes_children` to show that annex
`content_category` `only_pdf` parameter is taken into account.
[gbastien]
- Added `testServiceDelete` to show that the `DELETE` method works as expected.
[gbastien]
- Fixed french translation for `create_element_using_ws_rest_comments`.
[gbastien]
2.5 (2024-03-19)
----------------
- When using `fullobjects`, only serialize `next/previous` when specifically
asked using `include_nextprev=true`.
[gbastien]
2.4 (2024-03-14)
----------------
- Fixed `test_restapi_add_clean_meeting`: when generating a new date, use datetime
and timedelta to avoid producing a nonexistent date;
here it was generating `2025/02/29`, which does not exist.
[gbastien]
- When using `@users?extra_include=categories`, categories are only returned if
enabled in `MeetingConfig` (same for `classifiers`); this avoids having
selectable categories for a user when categories are not used.
[gbastien]
- Added special behavior for `review_state` and `creators` when asked in
`metadata_fields`:
- `review_state` will return a token/title with review_state id
and translated title;
- `creators` will return a list of token/title with each creator id
and fullname.
[gbastien]
2.3 (2023-12-11)
----------------
- Adapted code as `Products.PloneMeeting.utils.add_wf_history_action` was moved
to `imio.history.utils.add_event_to_wf_history` and
`ToolPloneMeeting.getAdvicePortalTypes` and
`ToolPloneMeeting.getAdvicePortalTypeIds` were moved to utils.
[gbastien]
2.2 (2023-09-12)
----------------
- Always include `@type` info in result even when `include_base_data=false` as
it is used with `UidSearchGet.required_meta_type_id`.
[gbastien]
2.1 (2023-06-27)
----------------
- In `base.serialize_attendees`, do not use `UID` from the serialized result as it
may not be there when using `include_base_data=false`.
[gbastien]
2.0.2 (2023-05-31)
------------------
- Added `@attendees GET` on meeting and item and `@attendee GET/PATCH`
on meeting and item. Added `extra_include=attendees` on meeting and item.
[gbastien]
- Manage `metadata_fields=internal_number`.
[gbastien]
2.0.1 (2023-03-07)
------------------
- Fixed test isolation problem when tests executed together with `imio.pm.ws` tests.
[gbastien]
2.0 (2023-03-06)
----------------
- Dropped support for `PloneMeeting 4.1.x`.
[gbastien]
- Add `config` to `extra_include` allowed parameters to return information about the meeting config
[mpeeters]
- Ensure that `in_name_of` parameter is only handled once when `__children__` parameter is used
[mpeeters]
- Enforce usage of `UID` parameter only if `externalIdentifier` is not provided
[mpeeters]
- Added `test_restapi_add_item_manually_linked_items` to check that it is possible
to create items and use the `MeetingItem.manuallyLinkedItems` functionality.
[gbastien]
- Adapted code as `MeetingConfig.useGroupsAsCategories` was removed.
Field `MeetingItem.category` is an optional field managed by
`MeetingConfig.usedItemAttributes` like any other optional field now.
[gbastien]
- Add `date` by default to meeting information.
[mpeeters]
1.0rc18 (2022-08-26)
--------------------
- Allow usage of `type` parameter with `in_name_of` when `config_id` is not specified
[mpeeters]
- Fixed `BasePost._turn_ids_into_uids` to manage organizations outside
`My organization`; this is the case for the field `MeetingItem.associatedGroups`.
[gbastien]
- Refactored behavior so we use the `ISerializeToJson` serializer when
any parameter is given.
[gbastien]
- Completed the `@config` service (that now uses a `SearchGet`)
to return every `MeetingConfig` when `config_id=*`.
[gbastien]
- Refactored the `@get` endpoint to use a `SearchGet` so we can use `in_name_of`.
[gbastien]
- Added `DeserializeFromJson._need_update_local_roles` that will
`update_local_roles` when creating an item if required; this is needed in
some cases, such as creating an item with `internalNotes`, because this field
relies on `local_role/permission` settings that need to be set up to be writeable.
[gbastien]
- Register `@get GET` endpoint for `IPloneSiteRoot` instead of `IFolderish`.
[gbastien]
- Added possibility to get the selectable choices of a field in the response.
Parameter `include_choices_for=field_name` may be given; in this case,
a key `field_name__choices` is added to the result with `token/title` of
the selectable values.
[gbastien]
- Refactored `@item extra_include=linked_items` to filter results using a
catalog query so parameters and functionality are similar to other endpoints.
Removed `utils.filter_data`, which could be dangerous, in favor of building a catalog query.
Formalized convenience substitution of catalog index names (parameter `type`
corresponds to index `portal_type`, `state` to `review_state`).
[gbastien]
- Parameter `config_id` is no longer required when using `in_name_of`
in `@get` or `@search`.
Added `bbb.py` to backport methods `get_filtered_plone_groups_for_user` and
`getActiveConfigs` from `ToolPloneMeeting` so it is available when using
`PloneMeeting 4.1.x`.
[gbastien]
1.0rc17 (2022-07-01)
--------------------
- Redo broken release...
[gbastien]
1.0rc16 (2022-07-01)
--------------------
- Added `extra_include=linked_items` available on item.
This will append the item's linked items; various `modes` may be asked:
`auto` (by default) will return every auto linked items, `manual` will return
manually linked items, `predecessor` will return the first predecessor,
`predecessors` will return every predecessors, `successors` will return the
direct `successors` and `every_successors` will return chain of successors.
[gbastien]
- Added `utils.filter_data` that lets you filter given data.
[gbastien]
- Renamed `BaseSerializeToJson._get_param` to `BaseSerializeToJson.get_param`,
as it was otherwise considered a private method not to be used directly,
while it actually must be used instead of `utils.get_param`.
[gbastien]
1.0rc15 (2022-06-14)
--------------------
- Removed temporary fix introduced in version `plonemeeting.restapi=1.0rc13`
to avoid creating an empty item. This was fixed in `plone.restapi=7.8.0`.
[gbastien]
1.0rc14 (2022-05-10)
--------------------
- Use `BadRequest` instead of `Exception` for all errors; this will return
error code `400` instead of `500`, which is used for internal server errors.
[gbastien]
1.0rc13 (2022-04-28)
--------------------
- Enable environment variable `RESTAPI_DEBUG` in tests.
[gbastien]
- Prevent creating an empty item. Temporarily overrode
`DeserializeFromJson.__call__` from `plone.restapi` entirely until issue
https://github.com/plone/plone.restapi/issues/1386 is fixed.
[gbastien]
1.0rc12 (2022-02-15)
--------------------
- Fixed `base.serialize_annexes` to make sure we get no annexes if the given filters yield no uids.
Passing no uids to `get_categorized_elements` means `Do not filter on uids`.
[gbastien]
1.0rc11 (2022-02-14)
--------------------
- Restored `Products.PloneMeeting 4.1.x/4.2.x` backward compatibility.
[gbastien]
1.0rc10 (2022-02-03)
--------------------
- Only display the `Unknown data` warning when creating an element if returning
the full object serialization after creation.
[gbastien]
- Fixed creation of meeting with annexes.
[gbastien]
- Make the annex serializer include `file` in base data.
[gbastien]
- Fixed `clean_html=False` when creating DX content; `clean_html` was always applied.
[gbastien]
1.0rc9 (2022-01-27)
-------------------
- Added upgrade step to 2000 that will re-apply the `rolemap` step so we are
sure old installations are restricting the service to role `Member`.
[gbastien]
1.0rc8 (2022-01-21)
-------------------
- Added HTML cleaning (enabled by default) when adding an element (AT or DX).
[gbastien]
- Added `extra_include=annexes` available on item and meeting.
[gbastien]
1.0rc7 (2022-01-14)
-------------------
- Make sure every `extra_include` is correctly defined in
`_available_extra_includes`; if not defined there, it is now ignored.
[gbastien]
1.0rc6 (2022-01-07)
-------------------
- Added `extra_include=pod_templates` for `Meeting` and `MeetingItem`.
[gbastien]
- Fixed use of `utils.get_current_user_id` and `adopt_user`.
[gbastien]
1.0rc5 (2022-01-03)
-------------------
- When returning annex additional values, ignore `last_updated`.
[gbastien]
1.0rc4 (2021-11-26)
-------------------
- Default value for parameter `the_objects` changed in
`ToolPloneMeeting.get_orgs_for_user` (from True to False).
[gbastien]
- Adapted `utils.may_access_config_endpoints` to only check `tool.isManager`
if given `cfg` is not None.
[gbastien]
- Make PMChoiceFieldSerializer use a MissingTerms adapter when value not found
in vocabulary.
[gbastien]
1.0rc3 (2021-11-08)
-------------------
- Extended the `@users` `plone.restapi` endpoint that by default returns info for
a single user or lets you query several users:
- `extra_include=groups` will add the organizations the user is a member of;
- in addition, passing `extra_include_groups_suffixes=creators` will add
the organizations the user is a creator for (any suffix may be used);
- `extra_include=app_groups` will add the user's Plone groups;
- `extra_include=configs` will return the `MeetingConfigs`
the user has access to;
- `extra_include=categories` will return the categories the user is able to
use for each `MeetingConfig`;
- in addition, the `extra_include_categories_config=meeting-config-id` parameter
will filter results for the given `MeetingConfig` id;
- `extra_include=classifiers` will return the classifiers the user is able to
use for each `MeetingConfig`;
- in addition, the `extra_include_classifiers_config=meeting-config-id` parameter
will filter results for the given `MeetingConfig` ids.
[gbastien]
- Added `@annex` POST endpoint to be able to add an annex on an existing element.
[gbastien]
- Changed default behavior of the `@get GET` endpoint, which now returns the
summary version of serialized data by default; to get the full serialization,
the parameter `fullobjects` needs to be given.
[gbastien]
- The serializer may now complete a `@extra_includes` key that lists `extra_include`
values available for it.
[gbastien]
1.0rc2 (2021-09-28)
-------------------
- Use `Products.PloneMeeting.utils.convert2xhtml` to convert `text/html` data
to the correct format (images as base64 data, XHTML compliant).
[gbastien]
- Simplify external service call to @item POST (add item):
- Handle parameter `ignore_not_used_data:true` that will add a warning instead of
raising an error if an optional field is given (in this case, the given
optional field value is ignored);
- Handle parameter `ignore_validation_for` that will bypass validation of the given
fields if a field is not in the data or is empty. This makes it possible to add
an item without all the data; the item will then have to be completed in the Web UI.
[gbastien]
- Make sure `externalIdentifier` is always stored as a string, as it may be
passed to the @add endpoint as an integer; if it is stored as an integer,
it is not searchable in the `portal_catalog` using the `@search` endpoint
afterwards.
[gbastien]
- Fixed `PMLazyCatalogResultSerializer.__call__` to avoid an `UnboundLocalError`
or duplicates in results when the corresponding object does not exist anymore
for a brain or when a `KeyError` occurred in the call to the serializer.
[gbastien]
- Handle anonymization of content. To do so, added `utils.handle_html` that
will handle every HTML field (AT or DX) and make sure it is compliant with
what we need:
- images as base64 data;
- use `appy.pod` preprocessor to make sure we have valid XHTML;
- anonymize content if necessary.
[gbastien]
1.0rc1 (2021-08-17)
-------------------
- Make the summary serializer able to handle `extra_include` and
`additional_values`. For this, it was necessary to change the way the summary
serializer is handled by `plone.restapi`, because by default there is a single
summary serializer for the brain interface, but we need to be able to register
a summary adapter for different interfaces (item, meeting, ...).
[gbastien]
- Restored `Products.PloneMeeting 4.1.x/4.2.x` backward compatibility.
[gbastien]
- Defined correct serializers for list fields so we have a `token/value`
representation in each case (AT/DX for single and multi valued select).
[gbastien]
- Added some new `extra_include` for `MeetingItem`: `classifier`,
`groups_in_charge` and `associated_groups`.
The `extra_include` named `proposingGroup` was renamed to `proposing_group`.
[gbastien]
- Use `additional_values` in the annex serializer to get categorized element infos
instead of yet another parameter `include_categorized_infos`.
[gbastien]
1.0b2 (2021-07-16)
------------------
- Adapted code and tests now that `Meeting` was moved from `AT` to `DX`.
[gbastien]
- Manage `extra_include=classifiers` in `@config GET` endpoint.
[gbastien]
- No longer require parameter `config_id` when a `type` is given in the `@search`
endpoint. When `type` is other than `item/meeting`, we simply add it to the
`query` as `portal_type`.
`config_id` is only required when `type` is `item` or `meeting`.
[gbastien]
- Added possibility to filter the `annexes endpoint` on any of the boolean
attributes (`to_print`, `publishable`, `confidential`, `to_sign/signed`).
[gbastien]
- Adapted `extra_include=deliberation` that was always returning every variant
of deliberation (`deliberation/public_deliberation/public_deliberation_decided`);
now the `extra_include` value is the name of the variant we want to get.
[gbastien]
- Take into account the `extra_include_fullobjects` in the `MeetingItem` serializer.
To handle this, it was necessary to implement a summary serializer for `Meeting`.
[gbastien]
- Added `test_restapi_search_items_extra_include_deliberation_images` showing
that images are received as base64 data value.
[gbastien]
1.0b1 (2021-02-03)
------------------
- Override default `PMBrainJSONSummarySerializer` for `ICatalogBrain` from
`imio.restapi` (that already overrides the one from `plone.restapi`) to
include metadata `enabled` by default.
Also define `PMJSONSummarySerializer` for objects (not brains) to have a
summary representation of any object. This makes it possible to get summary
serializers for a `MeetingConfig` and its associated groups while using
`@config?extra_include=associated_groups`.
[gbastien]
- Changed behavior of our overridden `@search`: before, it overrode the
default `@search` and required a `config_id` to work; now `config_id` is
optional. When given, it eases searching for items or meetings; when
not given, the endpoint has the default `@search` behavior.
Nevertheless, if parameter `type` is given, then `config_id`
must be given as well.
[gbastien]
1.0a6 (2021-01-06)
------------------
- `Products.PloneMeeting.utils.fplog` was moved to
`imio.helpers.security.fplog`, adapted code accordingly.
[gbastien]
1.0a5 (2020-12-07)
------------------
- Added parameters `extra_include_proposing_groups`,
`extra_include_groups_in_charge` and `extra_include_associated_groups`
to `@config GET` endpoint.
[gbastien]
- By default, restrict access to endpoints to role `Member`; access was given
to role `Anonymous` by default by `plone.restapi`.
[gbastien]
1.0a4 (2020-10-14)
------------------
- Completed test showing that `MeetingItem.adviceIndex` was not correctly
initialized upon item creation.
[gbastien]
- Added parameter `extra_include_meeting` to `IMeetingItem` serializer.
[gbastien]
- Completed `IMeeting` serializer `_additional_values` with `formatted_date`,
`formatted_date_short` and `formatted_date_long`.
[gbastien]
1.0a3 (2020-09-10)
------------------
- Fixed `test_restapi_config_extra_include_categories` as former
`AT MeetingCategory` are now `DX meetingcategory` that use the field `enabled`
instead of the workflow `review_state` `active`.
[gbastien]
- Added `test_restapi_add_item_wf_transitions` that was broken
with `imio.restapi<1.0a11`.
[gbastien]
- When adding a new item, insert the event `create_element_using_ws_rest`
in the `workflow_history` at the beginning, just after the `created` event.
[gbastien]
1.0a2 (2020-06-24)
------------------
- Added test `test_restapi_annex_type_only_for_meeting_managers`, making sure an
annex `content_category` that is restricted to `MeetingManagers` using
`content_category.only_for_meeting_managers` is rendered the same way.
[gbastien]
- Try to build an easier API:
- Turned `@search_items` into `@search` and `@search_meetings` into
`@search?type=meeting`;
- Parameter `getConfigId` is renamed to `config_id`;
- Added `in_name_of` parameter making it possible to use endpoint as another
user if original user is `(Meeting)Manager`.
[gbastien]
- Added `@item` POST endpoint to be able to create item with/without annexes:
- Needed to define a new `deserializer` for AT fields to apply WF before setting
field values;
- Manage optional fields (cannot be used when not enabled);
- Manage creation of annexes as `__children__` of item;
- Ease use by being able to define `config_id` only at first level
(so not for annexes);
- Ease use by being able to use organization `ids` instead of `UIDs`
in creation data;
- Manage `in_name_of` parameter.
[gbastien]
- Override the `@infos` endpoint from imio.restapi to add our own information.
[gbastien]
- Added parameter `meetings_accepting_items=True` to `@search`
when `type=meeting`; this will query only meetings accepting items, but the
query may still be completed with other arbitrary indexes.
[gbastien]
- Added `@config` endpoint that will return information about a given `config_id`
`MeetingConfig`. Parameters `include_categories` (return enabled/disabled
categories), `include_pod_templates` (return enabled POD template) and
`include_searches` (return enabled DashboardCollections) are available.
[gbastien]
- Added `@get` endpoint that receives a `UID` and returns the object found.
A convenience endpoint `@item` does the same but also checks that the returned
element is a `MeetingItem`.
[gbastien]
- Added parameter `base_search_uid=collection_uid` to `@search`;
this makes it possible to use the `query` defined on a `DashboardCollection`.
[gbastien]
1.0a1 (2020-01-10)
------------------
- Initial release.
[gbastien]
| null | Gauthier Bastien | gauthier@imio.be | null | null | GPL version 2 | Python Plone | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: Plone",
"Framework :: Plone :: 4.3",
"Programming Language :: Python",
"Programming Language :: Python :: 2.7",
"Operating System :: OS Independent",
"License :: OSI Approved :: GNU General Public License v2 (GPLv2)"
] | [] | https://pypi.python.org/pypi/plonemeeting.restapi | null | null | [] | [] | [] | [
"setuptools",
"Products.PloneMeeting",
"imio.restapi>=1.0a12",
"plone.restapi>=7.8.0",
"plone.restapi[test]; extra == \"test\"",
"Products.PloneMeeting[test]; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.4 | 2026-02-18T13:10:57.194329 | plonemeeting_restapi-2.12.tar.gz | 92,571 | 07/39/403132aea8c56d5786abbf6e06e0a44f0d3ffd0aa5597e0435208b63784f/plonemeeting_restapi-2.12.tar.gz | source | sdist | null | false | ad5b6db872752e65c5645d8c1c452a99 | c4215994ab347246e3e066c5a1747bd0486ae9628b3b8b5c09bdc74e422fbe73 | 0739403132aea8c56d5786abbf6e06e0a44f0d3ffd0aa5597e0435208b63784f | null | [
"LICENSE.GPL",
"LICENSE.rst"
] | 0 |
2.4 | atomicguard | 2.4.0 | A Dual-State Agent Framework for reliable LLM code generation with guard-validated loops | # AtomicGuard
[](https://github.com/thompsonson/atomicguard/actions/workflows/ci.yml)
[](https://codecov.io/gh/thompsonson/atomicguard)
[](https://badge.fury.io/py/atomicguard)
[](https://pypi.org/project/atomicguard/)
[](https://opensource.org/licenses/MIT)
A Dual-State Agent Framework for reliable LLM code generation.
## Why AtomicGuard?
AI agents hallucinate. Worse, those hallucinations **compound** — each generation builds on the last, and errors propagate through the workflow.
AtomicGuard solves this by combining two aspects: **decomposing goals** into small, measurable tasks, and **Bounded Indeterminacy**, where the LLM generates content but a deterministic state machine controls the logic. Every generation is validated before the workflow advances.
| Challenge | Solution |
|-----------|----------|
| 🛡️ **Safety** | Dual-State Architecture & Atomic Action Pairs |
| 💾 **State** | Versioned Repository Items & Configuration Snapshots |
| 🌐 **Scale** | Multi-Agent Coordination via Shared DAG |
| 📈 **Improvement** | Continuous Learning from Guard Verdicts |
→ [Learn more about the architecture](docs/design/architecture.md)
> **New to AtomicGuard?** Start with the [Getting Started Guide](docs/getting-started.md).
**Paper:** *Managing the Stochastic: Foundations of Learning in Neuro-Symbolic Systems for Software Engineering* (Thompson, 2025)
## Overview
AtomicGuard implements guard-validated generation loops that dramatically improve LLM reliability. The core abstraction is the **Atomic Action Pair** ⟨agen, G⟩ — coupling each generation action with a validation guard.
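To make the loop concrete, here is a minimal, self-contained sketch of a guard-validated generation loop. It only illustrates the ⟨agen, G⟩ idea; `SketchPair`, `guarded_execute`, and `syntax_ok` are invented for this sketch and are not the library's actual internals (see Quick Start below for the real API).

```python
from dataclasses import dataclass
from typing import Callable

def syntax_ok(code: str) -> bool:
    """Toy guard: accept the candidate only if it parses as Python."""
    try:
        compile(code, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

@dataclass
class SketchPair:
    """Illustrative ⟨agen, G⟩: a generator coupled with a validation guard."""
    generate: Callable[[str], str]  # agen: prompt -> candidate artifact
    guard: Callable[[str], bool]    # G: candidate -> accept / reject

def guarded_execute(pair: SketchPair, prompt: str, rmax: int = 3) -> str:
    """Retry generation until the guard accepts, up to rmax attempts."""
    for _ in range(rmax):
        candidate = pair.generate(prompt)
        if pair.guard(candidate):  # only validated output advances
            return candidate
    raise RuntimeError(f"guard rejected all {rmax} attempts")

# A stubbed generator stands in for the LLM here.
pair = SketchPair(
    generate=lambda _: "def add(a, b):\n    return a + b\n",
    guard=syntax_ok,
)
print(guarded_execute(pair, "Write a function that adds two numbers"))
```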
Key results (Yi-Coder 9B, n=50):
| Task | Baseline | Guarded | Improvement |
|------|----------|---------|-------------|
| Template | 35% | 90% | +55pp |
| Password | 82% | 98% | +16pp |
| LRU Cache | 94% | 100% | +6pp |
## Installation
```bash
# From PyPI
pip install atomicguard
# From source
git clone https://github.com/thompsonson/atomicguard.git
cd atomicguard
uv venv && source .venv/bin/activate
uv pip install -e ".[dev,test]"
```
## Quick Start
```python
from atomicguard import (
OllamaGenerator, SyntaxGuard, TestGuard,
CompositeGuard, ActionPair, DualStateAgent,
InMemoryArtifactDAG
)
# Setup
generator = OllamaGenerator(model="qwen2.5-coder:7b")
guard = CompositeGuard([SyntaxGuard(), TestGuard("assert add(2, 3) == 5")])
action_pair = ActionPair(generator=generator, guard=guard)
agent = DualStateAgent(action_pair, InMemoryArtifactDAG(), rmax=3)
# Execute
artifact = agent.execute("Write a function that adds two numbers")
print(artifact.content)
```
See [examples/](examples/) for more detailed usage, including a [mock example](examples/basic_mock.py) that works without an LLM.
## LLM Backends
AtomicGuard supports multiple LLM backends. Each generator implements `GeneratorInterface` and can be swapped in with no other code changes.
### Ollama (local or cloud)
Uses the OpenAI-compatible API. Works with any Ollama-served model:
```python
from atomicguard.infrastructure.llm import OllamaGenerator
# Local instance (default: http://localhost:11434/v1)
generator = OllamaGenerator(model="qwen2.5-coder:7b")
```
### HuggingFace Inference API
Connects to HuggingFace Inference Providers via `huggingface_hub`. Supports any model available through the HF Inference API, including third-party providers like Together AI.
```bash
# Install the optional dependency
pip install huggingface_hub
# Set your API token
export HF_TOKEN="hf_your_token_here"
```
```python
from atomicguard.infrastructure.llm import HuggingFaceGenerator
from atomicguard.infrastructure.llm.huggingface import HuggingFaceGeneratorConfig
# Default: Qwen/Qwen2.5-Coder-32B-Instruct
generator = HuggingFaceGenerator()
# Custom model and provider
generator = HuggingFaceGenerator(HuggingFaceGeneratorConfig(
model="Qwen/Qwen2.5-Coder-32B-Instruct",
provider="together", # or "auto", "hf-inference"
temperature=0.7,
max_tokens=4096,
))
```
Drop-in replacement in any workflow:
```python
from atomicguard import (
SyntaxGuard, TestGuard, CompositeGuard,
ActionPair, DualStateAgent, InMemoryArtifactDAG
)
from atomicguard.infrastructure.llm import HuggingFaceGenerator
generator = HuggingFaceGenerator()
guard = CompositeGuard([SyntaxGuard(), TestGuard("assert add(2, 3) == 5")])
action_pair = ActionPair(generator=generator, guard=guard)
agent = DualStateAgent(action_pair, InMemoryArtifactDAG(), rmax=3)
artifact = agent.execute("Write a function that adds two numbers")
print(artifact.content)
```
## Benchmarks
Run the simulation from the paper:
```bash
python -m benchmarks.simulation --model yi-coder:9b --trials 50 --task all --output results/results.db --format sqlite
# Generate report
python -m benchmarks.simulation --visualize --output results/results.db --format sqlite
```
## Project Structure
```
atomicguard/
├── src/atomicguard/ # Core library
├── benchmarks/ # Simulation code
├── docs/design/ # Design documents
├── examples/ # Usage examples
└── results/ # Generated reports & charts
```
## Citation
If you use this framework in your research, please cite the paper:
> Thompson, M. (2025). Managing the Stochastic: Foundations of Learning in Neuro-Symbolic Systems for Software Engineering. arXiv preprint arXiv:2512.20660.
```bibtex
@misc{thompson2025managing,
title={Managing the Stochastic: Foundations of Learning in Neuro-Symbolic Systems for Software Engineering},
author={Thompson, Matthew},
year={2025},
eprint={2512.20660},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2512.20660}
}
```
## License
MIT
| text/markdown | null | Matthew Thompson <thompsonson@gmail.com> | null | Matthew Thompson <thompsonson@gmail.com> | MIT | llm, agents, code-generation, neuro-symbolic, guards, ai, validation | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Pytho... | [] | null | null | >=3.12 | [] | [] | [] | [
"matplotlib>=3.10.0",
"openhands-ai>=0.27.0",
"pydantic-ai>=1.0.0",
"pyflakes>=3.0",
"datasets>=2.0.0; extra == \"experiment\"",
"huggingface_hub>=0.20; extra == \"experiment\"",
"swebench>=2.0.0; extra == \"experiment\"",
"docker>=7.0.0; extra == \"experiment\"",
"jinja2>=3.0.0; extra == \"visualiz... | [] | [] | [] | [
"Homepage, https://github.com/thompsonson/atomicguard",
"Repository, https://github.com/thompsonson/atomicguard",
"Documentation, https://github.com/thompsonson/atomicguard#readme",
"Issues, https://github.com/thompsonson/atomicguard/issues",
"Changelog, https://github.com/thompsonson/atomicguard/blob/main/... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:10:28.044694 | atomicguard-2.4.0.tar.gz | 65,024 | 23/4e/db55bf2db4d8790206ac8cdd5bbffb0c7cea3515ed2c9c0c12ee10d932df/atomicguard-2.4.0.tar.gz | source | sdist | null | false | b97524f56d670210a478e1051403ea06 | c8f06ac16c898bcd661eaa4920c3225b7bd9275b94f85fbe19d5ee4c99e7b9fd | 234edb55bf2db4d8790206ac8cdd5bbffb0c7cea3515ed2c9c0c12ee10d932df | null | [
"LICENSE"
] | 264 |
2.4 | graphtk | 1.0.2 | Graph Theory Toolkit (GraphTK) | # Graph Theory Toolkit (GraphTK):
[](https://pepy.tech/project/graphtk)
[](https://www.python.org/downloads/)
[](https://pypi.org/project/graphtk/)
[](LICENSE)
[](https://github.com/AnshMNSoni/graphtk)
<br>
<div align="center">
<img width="438" height="438" alt="gtk-logo" src="https://github.com/user-attachments/assets/1df1df20-86f3-4415-88b7-b6f72557fe58" />
</div>
## Table of Contents
- [Introduction](#introduction)
- [Basic Terminologies](#basic-terminologies)
- [Usage](#usage)
- [Syntax and Methods](#syntax-and-methods)
- [Contact](#-connect-with-me)
## Introduction
This library provides a comprehensive Python implementation of core **Graph Theory** concepts from **Discrete Mathematics**. It allows you to create and analyze graphs represented by vertices and edges, with functionalities including generating **adjacency matrices**, **path matrices**, **weight matrices**, performing **graph coloring**, and more. With this toolkit, you can easily explore and manipulate various graph structures in a simple and intuitive way.
## Basic Terminologies
- **Graph** → A collection of vertices (nodes) connected by edges (links).
- **Adjacency Matrix** → A square matrix showing which vertices are connected by an edge (see the sketch after this list).
- **Incidence Matrix** → A matrix showing the relation between vertices and edges.
- **Path Matrix (Connectivity Matrix)** → A matrix that indicates whether a path exists between any two vertices.
- **Weight Matrix (Cost Matrix)** → A matrix showing edge weights (like distances or costs) between vertices.
- **Path** → A sequence of vertices connected by edges (edges may or may not repeat).
- **Simple Path** → A path where no vertex (and hence no edge) is repeated.
- **Trail** → A walk where edges are not repeated, but vertices may repeat.
- **Cycle (or Circuit)** → A closed path where the start and end vertices are the same, with no repetition of edges/vertices (except start = end).
- **Euler Path** → A path that uses every edge exactly once.
- **Euler Circuit (Euler Graph)** → A cycle that uses every edge exactly once and returns to the starting vertex.
- **Hamiltonian Path** → A path that visits every vertex exactly once.
- **Hamiltonian Cycle** → A cycle that visits every vertex exactly once and returns to the start.
- **Connected Graph** → A graph where there’s a path between every pair of vertices.
- **Complete Graph** → A graph where every pair of vertices is connected by an edge.
- **Bipartite Graph** → A graph whose vertices can be split into two disjoint sets with edges only across sets.
- **Tree** → A connected graph with no cycles.
- **Spanning Tree** → A subgraph that connects all vertices with minimum edges and no cycles.
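As a quick, library-independent illustration of the **Adjacency Matrix** entry above, the sketch below builds one for a small undirected graph in plain Python (the vertex and edge lists are arbitrary examples, not graphtk API calls):

```python
# Build an adjacency matrix for a small undirected graph
# (plain Python, independent of graphtk).
vertices = ["A", "B", "C", "D"]
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]

index = {v: i for i, v in enumerate(vertices)}
n = len(vertices)
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[index[u]][index[v]] += 1
    matrix[index[v]][index[u]] += 1  # undirected: mirror each entry

for row in matrix:
    print(row)
# [0, 1, 1, 0]
# [1, 0, 0, 1]
# [1, 0, 0, 1]
# [0, 1, 1, 0]
```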
## Usage
Open a command prompt and run:
```python
pip install graphtk
```
## Syntax and Methods
1️⃣ Input Format: Vertices and Edges
```
vertices = ['A', 'B', 'C', 'D'] # list
# list of tuples
edges = [
("A", "B"),
("A", "B"),
("A", "C"),
("A", "C"),
("A", "D"),
("B", "D"),
("C", "D")
]
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = tk.edges(vertices, True) # You can also provide your own edges; just ensure they follow the correct format.
print(edges)
```
2️⃣ Adjacency Matrix, Path Matrix, Weight Matrix, B-Matrix
- Syntax
```
# adjacency matrix
adjacency_matrix(edges: list, vertices: list, is_directed: bool)
# weight matrix
weight_matrix(edges: list, vertices: list, is_directed: bool = None)
# path matrix
path_matrix(edges: list, vertices: list, is_directed: bool = None)
# B-matrix
b_matrix(edges: list, vertices: list, is_directed: bool = None)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
# adjacency matrix
matrix = tk.adjacency_matrix(edges, vertices, True)
print(matrix)
# path matrix
matrix = tk.path_matrix(edges, vertices)
# weight matrix
matrix = tk.weight_matrix(edges, vertices)
# B-matrix
matrix = tk.b_matrix(edges, vertices)
```
3️⃣ Graph Terminologies<br/>
➡️ Paths
- Syntax
```
paths(edges: list, vertices: list, is_directed: bool)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.paths(edges, vertices, True)
print(result)
```
➡️ trails
- Syntax
```
trails(edges: list, vertices: list, is_directed: bool)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.trails(edges, vertices, True)
print(result)
```
➡️ cycle
- Syntax
```
cycle(edges: list, vertices: list, is_directed: bool)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.cycle(edges, vertices, True)
print(result)
```
➡️ simplepath
- Syntax
```
simplepath(edges: list, vertices: list, is_directed: bool)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.simplepath(edges, vertices, True)
print(result)
```
➡️ adjacency_list
- Syntax
```
adjacency_list(edges: list, vertices: list, is_directed: bool)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.adjacency_list(edges, vertices, True)
print(result)
```
➡️ is_path
- Syntax
```
is_path(edges: list, vertices: list, is_directed: bool, path: dict)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.is_path(edges, vertices, True, {'A': [['A'], ['C', 'A']]})
print(result)
```
➡️ is_trail
- Syntax
```
is_trail(edges: list, vertices: list, is_directed: bool, trail: dict)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.is_trail(edges, vertices, True, {'A': [['A'], ['C', 'A']]})
print(result)
```
➡️ is_cycle
- Syntax
```
is_cycle(edges: list, vertices: list, is_directed: bool, cycle: dict)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.is_cycle(edges, vertices, True, {'A': [['A'], ['C', 'A']]})
print(result)
```
➡️ is_simplepath
- Syntax
```
is_simplepath(edges: list, vertices: list, is_directed: bool, path: dict)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.is_simplepath(edges, vertices, True, {'A': [['A'], ['C', 'A']]})
print(result)
```
➡️ is_traversable
- Syntax
```
is_traversable(edges: list, vertices: list, is_directed: bool)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.is_traversable(edges, vertices, True)
print(result)
```
➡️ is_euler
- Syntax
```
is_euler(edges: list, vertices: list, is_directed: bool)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.is_euler(edges, vertices, True)
print(result)
```
➡️ is_hamilton
- Syntax
```
is_hamilton(edges: list, vertices: list, is_directed: bool)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.is_hamilton(edges, vertices, True)
print(result)
```
➡️ is_complete
- Syntax
```
is_complete(edges: list, vertices: list, is_directed: bool)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.is_complete(edges, vertices, True)
print(result)
```
➡️ is_regular
- Syntax
```
is_regular(edges: list, vertices: list, is_directed: bool)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.is_regular(edges, vertices, True)
print(result)
```
➡️ is_bipartite
- Syntax
```
is_bipartite(edges: list, vertices: list, is_directed: bool)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.is_bipartite(edges, vertices, True)
print(result)
```
➡️ is_planner
- Syntax
```
is_planner(edges: list, vertices: list, is_directed: bool)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.is_planner(edges, vertices, True)
print(result)
```
➡️ vertex_coloring
- Syntax
```
vertex_coloring(edges: list, vertices: list, is_directed: bool = None)
```
- Implementation
```
from graphtk.toolkit import Toolkit
tk = Toolkit()
vertices = ['A', 'B', 'C']
edges = [('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'A'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('A', 'B'), ('B', 'B'), ('B', 'B'), ('C', 'A')]
result = tk.vertex_coloring(edges, vertices)
print(result)
```
## 📢 Connect with Me
If you found this project helpful or have any suggestions, feel free to connect:
- [LinkedIn](https://www.linkedin.com/in/anshmnsoni)
- [GitHub](https://github.com/AnshMNSoni)
- [Reddit](https://www.reddit.com/user/AnshMNSoni)
## Thank You
| text/markdown | null | Ansh Soni <ansh.mn.soni7505@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.4 | 2026-02-18T13:10:11.576306 | graphtk-1.0.2.tar.gz | 13,794 | 6b/da/14ee4a3b4571bf6f5262351ca51b6434538de6552a374717817ad312d428/graphtk-1.0.2.tar.gz | source | sdist | null | false | fe4da26d661ae3030843f56791ca0f5f | 0f0f0162afc0df89864c6b11396033a3d2615239bcccf40325596fab75deb380 | 6bda14ee4a3b4571bf6f5262351ca51b6434538de6552a374717817ad312d428 | null | [
"LICENSE"
] | 258 |
2.4 | scute-client | 1.0.0 | Python client for Scute API | # Scute Python Client
A Python client for the [Scute API](https://docs.scute.io/api-reference/objects/workspace), with built-in support for authentication, users, and tokens.
Includes ready-to-use integration guide for Django REST Framework.
---
# Quickstart
### 🚀 Installation
```bash
pip install scute-client
```
Initialize the client:
```
from scute_client import ScuteClient
scute = ScuteClient(
app_id=SCUTE_APP_ID,
app_secret=SCUTE_APP_SECRET,
)
```
## 📖 Usage Examples
### Create a user
```
identifier = "user@example.com" # or phone number
scute_user = scute.users.create_user(identifier)
print(scute_user)
```
### Get a user by ID
```
user = scute.users.get_user_by_id("scute_user_id")
print(user)
```
### List all users
```
for scute_user in scute.users.list_all_users():
print(scute_user["id"], scute_user["email"])
```
### Delete a user
```
scute.users.delete_user(user.sid)
```
### 🛠 Error Handling
All API errors raise `APIRequestError`:
```
from scute_client.exceptions import APIRequestError
try:
user = scute.users.get_user_by_id("invalid_id")
except APIRequestError as e:
print(f"Error: {e.status_code} - {e.message}")
```
<br>
---
### 🔑 Django REST Framework Integration
Add your Scute credentials to Django's `settings.py`:
```
SCUTE_APP_ID = "your_app_id"
SCUTE_APP_SECRET = "your_app_secret"
```
Initialize the client:
```
from django.conf import settings
from scute_client import ScuteClient
scute = ScuteClient(
app_id=settings.SCUTE_APP_ID,
app_secret=settings.SCUTE_APP_SECRET,
)
```
First, ensure your `User` model has a `sid` (Scute ID) field.
This field is required to link your local users with Scute users.
Example User model:
```
# users/models.py
from django.contrib.auth.models import AbstractUser
from django.db import models
class User(AbstractUser):
# Store Scute ID returned from Scute API
sid = models.CharField(max_length=255, unique=True, null=True, blank=True)
def __str__(self):
return self.username
```
Create a custom authentication class:
```
# authentication.py
from rest_framework.authentication import BaseAuthentication
from rest_framework.exceptions import AuthenticationFailed
from users.models import User
class ScuteAuthentication(BaseAuthentication):
keyword = "Scute"
def authenticate(self, request):
auth_header = request.headers.get("Authorization")
if not auth_header or not auth_header.startswith(self.keyword):
return None
token = auth_header.split(f"{self.keyword} ")[1]
try:
user_data = scute.auth.get_current_user(token)
except Exception as e:
raise AuthenticationFailed(f"Invalid token: {str(e)}")
sid = user_data.get("id")
if not sid:
raise AuthenticationFailed("No user ID found in Scute response")
user = User.objects.filter(sid=sid).first()
if not user:
raise AuthenticationFailed("User not found")
if not user.is_active:
raise AuthenticationFailed("Inactive user")
return user, token
```
Enable it in `settings.py`:
```
REST_FRAMEWORK = {
"DEFAULT_AUTHENTICATION_CLASSES": [
"path.to.authentication.ScuteAuthentication",
],
}
```
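Finally, a minimal protected view to tie it together (the view itself is illustrative; clients authenticate by sending an `Authorization: Scute <token>` header, matching the `keyword` above):
```
# views.py (illustrative)
from rest_framework.views import APIView
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response

class MeView(APIView):
    permission_classes = [IsAuthenticated]

    def get(self, request):
        # request.user was resolved by ScuteAuthentication
        return Response({"username": request.user.username, "sid": request.user.sid})
```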
| text/markdown | null | Triangle Empire <dev@triangleempire.com> | null | null | MIT | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.28.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.11 | 2026-02-18T13:09:24.712784 | scute_client-1.0.0.tar.gz | 7,390 | 95/58/00d86b3c39f17c92655181cc8ce99e1054bfa41a41bdc9bfe06896c1dd46/scute_client-1.0.0.tar.gz | source | sdist | null | false | 5113bc795050d898666a2dd92c01dceb | 14c857f47652614bfc763a4b529d2f07db8c614999eb6acf984a1e85d18c30e7 | 955800d86b3c39f17c92655181cc8ce99e1054bfa41a41bdc9bfe06896c1dd46 | null | [
"LICENSE"
] | 250 |
2.1 | langchain-hana | 1.0.2 | An integration package connecting SAP HANA Cloud and LangChain | [](https://api.reuse.software/info/github.com/SAP/langchain-integration-for-sap-hana-cloud)
> [!NOTE]
>
> ### Legacy Version
>
> The LangChain 0.3.x compatible version of this package is maintained on the 0.3.x branch:
>
> [https://github.com/SAP/langchain-integration-for-sap-hana-cloud/tree/0.3.x](https://github.com/SAP/langchain-integration-for-sap-hana-cloud/tree/0.3.x)
# LangChain integration for SAP HANA Cloud
## About this project
Integrates LangChain with SAP HANA Cloud to make use of vector search, knowledge graphs, and other in-database capabilities as part of LLM-driven applications.
## Requirements and Setup
### Prerequisites
- **Python Environment**: Ensure you have Python 3.10 or higher installed.
- **SAP HANA Cloud**: Access to a running SAP HANA Cloud instance.
### Installation
Install the LangChain SAP HANA Cloud integration package using `pip`:
```bash
pip install -U langchain-hana
```
### Vectorstore
The `HanaDB` class is used to connect to SAP HANA Cloud Vector Engine.
[SAP HANA Cloud Vector Engine](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/sap-hana-cloud-sap-hana-database-vector-engine-guide) is a vector store fully integrated into the `SAP HANA Cloud` database.
See a [usage example](https://github.com/SAP/langchain-integration-for-sap-hana-cloud/blob/main/examples/sap_hanavector.ipynb).
```python
from langchain_hana import HanaDB
```
> **Important**: You can use any embedding class that inherits from `langchain_core.embeddings.Embeddings`—**including** `HanaInternalEmbeddings`, which runs SAP HANA’s `VECTOR_EMBEDDING()` function internally. See [SAP Help](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/vector-embedding-function-vector?locale=en-US) for more details.
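A minimal end-to-end sketch (the connection details are placeholders and the `HanaInternalEmbeddings` model id is an assumption; the linked notebook is authoritative):
```python
from hdbcli import dbapi
from langchain_hana import HanaDB, HanaInternalEmbeddings

# Placeholder credentials for your SAP HANA Cloud instance
connection = dbapi.connect(address="<host>", port=443, user="<user>", password="<password>")

# Runs VECTOR_EMBEDDING() inside the database; the model id below is illustrative
embeddings = HanaInternalEmbeddings(internal_embedding_model_id="SAP_NEB.20240715")

db = HanaDB(connection=connection, embedding=embeddings, table_name="DEMO_VECTORS")
db.add_texts(["LangChain integrates with SAP HANA Cloud."])
print(db.similarity_search("What integrates with SAP HANA Cloud?", k=1))
```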
### Self Query Retriever
[SAP HANA Cloud Vector Engine](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/sap-hana-cloud-sap-hana-database-vector-engine-guide) also provides a Self Query Retriever implementation using the `HanaTranslator` class.
See a [usage example](https://github.com/SAP/langchain-integration-for-sap-hana-cloud/blob/main/examples/hanavector_self_query.ipynb).
```python
from langchain_hana import HanaTranslator
```
### Graph
[SAP HANA Cloud Knowledge Graph Engine](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-knowledge-graph-guide/sap-hana-cloud-sap-hana-database-knowledge-graph-engine-guide) provides support for working with knowledge graphs through the `HanaRdfGraph` class.
See a [usage example](https://github.com/SAP/langchain-integration-for-sap-hana-cloud/blob/main/examples/sap_hana_rdf_graph.ipynb).
```python
from langchain_hana import HanaRdfGraph
```
### Chains
A `HanaSparqlQAChain` is also provided, which can be used with `HanaRdfGraph` for SPARQL-QA tasks.
See a [usage example](https://github.com/SAP/langchain-integration-for-sap-hana-cloud/blob/main/examples/sap_hana_sparql_qa_chain.ipynb).
```python
from langchain_hana import HanaSparqlQAChain
```
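A rough sketch of wiring the two together (the `HanaRdfGraph` parameters and the `from_llm` signature shown here are assumptions patterned on other LangChain graph QA chains; consult the linked notebook for the real API):
```python
from hdbcli import dbapi
from langchain.chat_models import init_chat_model
from langchain_hana import HanaRdfGraph, HanaSparqlQAChain

# Placeholder connection details and model choice
connection = dbapi.connect(address="<host>", port=443, user="<user>", password="<password>")
llm = init_chat_model("openai:gpt-4o-mini")

graph = HanaRdfGraph(connection=connection, graph_uri="my_knowledge_graph")  # parameter names assumed
chain = HanaSparqlQAChain.from_llm(llm, graph=graph)
print(chain.invoke({"query": "Which products belong to the 'Analytics' category?"}))
```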
## Documentation
For a detailed guide on using the package, please refer to the [examples](https://github.com/SAP/langchain-integration-for-sap-hana-cloud/blob/main/examples/) here.
## Support, Feedback, Contributing
This project is open to feature requests/suggestions, bug reports etc. via [GitHub issues](https://github.com/SAP/langchain-integration-for-sap-hana-cloud/issues). Contribution and feedback are encouraged and always welcome. For more information about how to contribute, the project structure, as well as additional contribution information, see our [Contribution Guidelines](CONTRIBUTING.md).
## Security / Disclosure
If you find any bug that may be a security problem, please follow our instructions at [in our security policy](https://github.com/SAP/langchain-integration-for-sap-hana-cloud/security/policy) on how to report it. Please do not create GitHub issues for security-related doubts or problems.
## Code of Conduct
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone. By participating in this project, you agree to abide by its [Code of Conduct](https://github.com/SAP/.github/blob/main/CODE_OF_CONDUCT.md) at all times.
## Licensing
Copyright 2025 SAP SE or an SAP affiliate company and langchain-integration-for-sap-hana-cloud contributors. Please see our [LICENSE](LICENSE) for copyright and license information. Detailed information including third-party components and their licensing/copyright information is available [via the REUSE tool](https://api.reuse.software/info/github.com/SAP/langchain-integration-for-sap-hana-cloud).
| text/markdown | null | null | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"hdbcli<3.0.0,>=2.23.24",
"langchain<2.0.0,>=1.0.0",
"langchain-classic<2.0.0,>=1.0.0",
"langchain-core<2.0.0,>=1.0.0",
"numpy>=1.26.4; python_version < \"3.13\"",
"numpy>=2.1.0; python_version >= \"3.13\"",
"rdflib<8.0.0,>=7.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:09:03.096130 | langchain_hana-1.0.2.tar.gz | 29,253 | 37/d4/6d780e15162a77f4162d8d6d5acaac9dc13e103745b2383bb3e2ed153fde/langchain_hana-1.0.2.tar.gz | source | sdist | null | false | 4efbe13ae32895cbca588728d5fe1a30 | b9f6e5844c1bb55438cf5f6cd4e88882a51aeaf4e2c83e4a507cd999d67c8ca3 | 37d46d780e15162a77f4162d8d6d5acaac9dc13e103745b2383bb3e2ed153fde | null | [] | 1,744 |
2.4 | django-health-check | 4.0.4 | Monitor the health of your Django app and its connected services. | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/codingjoe/django-health-check/raw/main/docs/images/logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://github.com/codingjoe/django-health-check/raw/main/docs/images/logo-light.svg">
<img alt="Django HealthCheck: Pluggable health checks for Django applications" src="https://github.com/codingjoe/django-health-check/raw/main/docs/images/logo-light.svg">
</picture>
<br>
<a href="https://codingjoe.dev/django-health-check/">Documentation</a> |
<a href="https://github.com/codingjoe/django-health-check/issues/new/choose">Issues</a> |
<a href="https://github.com/codingjoe/django-health-check/releases">Changelog</a> |
<a href="https://github.com/sponsors/codingjoe">Funding</a> 💚
</p>
# Django HealthCheck
_Pluggable health checks for Django applications_
[](https://pypi.python.org/pypi/django-health-check/)
[](https://codecov.io/gh/codingjoe/django-health-check)
[](https://pypi.python.org/pypi/django-health-check/)
[](https://pypi.python.org/pypi/django-health-check/)
[](https://pypi.python.org/pypi/django-health-check/)
| text/markdown | null | Kristian Ollegaard <kristian@oellegaard.com>, Johannes Maron <johannes@maron.family> | null | null | null | django, postgresql | [
"Development Status :: 5 - Production/Stable",
"Framework :: Django",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language... | [] | null | null | >=3.10 | [] | [] | [] | [
"Django>=5.2",
"dnspython>=2.0.0",
"httpx>=0.27.0; extra == \"atlassian\"",
"celery>=5.0.0; extra == \"celery\"",
"confluent-kafka>=2.0.0; extra == \"kafka\"",
"psutil>=7.2.0; extra == \"psutil\"",
"aio-pika>=9.0.0; extra == \"rabbitmq\"",
"redis>=4.2.0; extra == \"redis\"",
"httpx>=0.27.0; extra ==... | [] | [] | [] | [
"Changelog, https://github.com/codingjoe/django-health-check/releases",
"Documentation, https://codingjoe.dev/django-health-check/",
"Homepage, https://codingjoe.dev/django-health-check/",
"Issues, https://github.com/codingjoe/django-health-check/issues",
"Releasenotes, https://github.com/codingjoe/django-h... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:08:42.442934 | django_health_check-4.0.4.tar.gz | 20,496 | dc/ea/5abd492cc9ea536edba5d436a84086f1c0fcdc66fd023a1f4cc086d39a56/django_health_check-4.0.4.tar.gz | source | sdist | null | false | 5e1827e24bfb4d152946b6fff2986615 | b2349ff9d75dc52e203be20f461eabae6b203f2566e5ba888bc885168decaaa9 | dcea5abd492cc9ea536edba5d436a84086f1c0fcdc66fd023a1f4cc086d39a56 | null | [
"LICENSE"
] | 10,696 |
2.4 | Class-Widgets-SDK | 0.4.0 | Class Widgets 2 Plugin SDK Development tools and type stubs | <div align="center">
<img src="/docs/logo.png" width="15%" alt="Class Widgets 2">
<h1>Class Widgets SDK</h1>
<p>Complete SDK, tools, and type hints for Class Widgets 2 plugin development.</p>
[](https://pypi.org/project/class-widgets-sdk/)
[](https://github.com/Class-Widgets/class-widgets-sdk/)
[](https://github.com/Class-Widgets/class-widgets-sdk/)
</div>
> [!CAUTION]
>
> This project is still **in development**. The API may change at any time, so please bear with us.
## Overview
`class-widgets-sdk` provides the **essential base classes**, **development tools** (like scaffolding and packaging), and **complete type hints** for creating plugins for Class Widgets 2.
This package provides the core SDK for development and must be installed in your plugin's environment. Plugins are executed within the Class Widgets 2 main application.
## Installation
```bash
pip install class-widgets-sdk
```
## Getting Started
### 1. Create a new plugin
Use the included CLI tool to generate a new plugin project structure:
```bash
cw-plugin-init com.example.myplugin
```
### 2. Install dependencies
Navigate to your new plugin directory and install the SDK in editable mode:
```bash
cd com.example.myplugin
pip install -e .
```
### 3. Usage (Base Class & Types)
The SDK provides the base class `CW2Plugin` and models for configuration, giving you full IDE autocompletion and static analysis support.
```python
from ClassWidgets.SDK import CW2Plugin, ConfigBaseModel, PluginAPI
class MyConfig(ConfigBaseModel):
enabled: bool = True
text: str = "hEIlo, WoRId"
class MyPlugin(CW2Plugin):
def __init__(self, api: PluginAPI):
super().__init__(api)
self.config = MyConfig()
def on_load(self):
self.api.config.register_plugin_model(self.pid, self.config)
# Your IDE will provide full autocompletion here
self.api.widgets.register(
widget_id="com.example.mywidget",
name="My Widget",
qml_path="path/to/mywidget.qml"
)
```
### 4. Package
Use the included CLI tool to build and package your plugin into a distributable `.cwplugin` or `.zip` file:
```bash
cw-plugin-pack
```
## Tools
The SDK includes powerful command-line tools for plugin development and distribution:
| Command | Description |
| :--- | :--- |
| `cw-plugin-init` | Generate a new plugin project scaffold. |
| `cw-plugin-pack` | Build and package the plugin into a distributable `.cwplugin` or `.zip` file. |
<details>
<summary align="center">
Learn more >
</summary>
### `cw-plugin-init`
Initialize a new Class Widgets plugin project with an interactive setup wizard.
**Usage:**
```bash
# Create plugin in current directory (interactive)
cw-plugin-init
# Create plugin in specific directory
cw-plugin-init my-plugin
# Force overwrite existing files
cw-plugin-init my-plugin --force
```
#### Flow:
1. Select location (current dir or new folder)
2. Enter plugin metadata (name, author, ID, etc.)
3. Confirm and generate files
### `cw-plugin-pack`
Build and package the plugin into a distributable `.cwplugin` or `.zip` file.
```bash
# Package current directory (default: .cwplugin)
cw-plugin-pack
# Specify format (.cwplugin or .zip)
cw-plugin-pack --format zip
# Specify output path
cw-plugin-pack -o ./dist/my-plugin.cwplugin
# Package specific directory
cw-plugin-pack ./my-plugin
```
#### Format
- `.cwplugin` - Recommended plugin format
- `.zip` - Standard archive format
</details>
## How It Works
1. **Development**: You install this SDK package to get base classes, type hints, autocompletion, and static type checking (with mypy/pyright) in your IDE.
2. **Runtime**: When your plugin is loaded by the Class Widgets 2 main application, your `CW2Plugin` subclass is instantiated and executed.
> [!IMPORTANT]
>
> - This package is the **Development Kit** for your plugin. Plugins must be tested within the [Class Widgets 2](https://github.com/RinLit-233-shiroko/Class-Widgets-2) main application.
> - The import path for the SDK is `ClassWidgets.SDK`.
## Links
- [Class Widgets 2](https://github.com/rinlit-233-shiroko/class-widgets-2)
- [Report an Issue](https://github.com/rinlit-233-shiroko/class-widgets-2/issues)
## License
This project is licensed under the **MIT License** - see the [LICENSE.md](LICENSE.md) file for details.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"click",
"pydantic",
"pyside6; extra == \"test\"",
"rinui; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:07:32.213913 | class_widgets_sdk-0.4.0.tar.gz | 214,009 | 78/32/94e25f2e81cb0e32401cfa2abb23473da430cb36f71932c36a3d2a1d8c6e/class_widgets_sdk-0.4.0.tar.gz | source | sdist | null | false | 79a1dbe39258500c0c792d31443e4615 | 202c9b02a10453c43db5ec8cddc539172967782a17ec012f418a0aea07bf9599 | 783294e25f2e81cb0e32401cfa2abb23473da430cb36f71932c36a3d2a1d8c6e | null | [
"LICENSE"
] | 0 |
2.1 | brutefeedparser | 0.10.8 | Brute Feed Parser | # Overview
This is a brute-force feed parser.
Why?
- feedparser doesn't handle all feeds correctly; I can vividly recall feeds it could not parse.
- It has trouble parsing CDATA sections (at least, from what I recall).
- There were issues using it in threaded or async contexts—warnings or errors would show up.
- Some parsers can’t handle RSS embedded in HTML, which is unfortunate. I plan to address this... eventually (in Valve time).
This project aims to be a drop-in replacement for [feedparser](https://github.com/kurtmckee/feedparser)
# Installation
```
$ pip install brutefeedparser
```
# Use
```
from brutefeedparser import BruteFeedParser

contents = open("feed.xml", "rb").read()  # raw feed bytes
reader = BruteFeedParser.parse(contents)
```
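Since the stated goal is drop-in compatibility with feedparser, feedparser-style result access is presumably the intent. An untested sketch under that assumption:
```
for entry in reader.entries:  # assumes feedparser-compatible attributes
    print(entry.title, entry.link)
```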
# Standards? What standards?
This project does not care about standards. Standards are for losers.
Look at me! I am the standard now!
You can quote me on the thing below:
```
If the problem is a nail and your hammer fails, perhaps it's time to reach for a bigger one.
```
# Disclaimer
This project contains code so questionable that at least one line could cause Linus Torvalds to spontaneously combust.
Reading the code in large doses may result in dizziness, despair, or the sudden realization that tabs vs. spaces was the least of your problems.
Keep the code far away from any seasoned kernel developers.
Pasting any part of this into a Linux kernel mailing list may trigger several years of flame wars, philosophical debates, and intergenerational feuds among programming factions.
Proceed with caution. Or better yet — just don’t.
| text/markdown | Iwan Grozny | renegat@renegat0x0.ddns.net | null | null | GPL3 | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"lxml<6.0.0,>=5.3.0",
"beautifulsoup4<5.0.0,>=4.13.3",
"psutil"
] | [] | [] | [] | [] | poetry/1.8.2 CPython/3.12.3 Linux/6.8.0-100-generic | 2026-02-18T13:07:07.464603 | brutefeedparser-0.10.8.tar.gz | 16,672 | 20/0e/3cc784a19476b2e3bad7ea609ab7da5eaeccd30a25434a7b764d30e4a028/brutefeedparser-0.10.8.tar.gz | source | sdist | null | false | 6943af7eae47137379f5fb6698420eab | cfd3606c2d20cab4cf53035b6ea6c8006f5c1c8034cfc4212400684ac6f2c12b | 200e3cc784a19476b2e3bad7ea609ab7da5eaeccd30a25434a7b764d30e4a028 | null | [] | 373 |
2.4 | updates2mqtt | 1.8.3 | System update and docker image notification and execution over MQTT | { align=left }
# updates2mqtt
[](https://github.com/rhizomatics)
[](https://pypi.org/project/updates2mqtt/)
[](https://github.com/rhizomatics/updates2mqtt)
[](https://updates2mqtt.rhizomatics.org.uk/developer/coverage/)

[](https://results.pre-commit.ci/latest/github/rhizomatics/updates2mqtt/main)
[](https://github.com/rhizomatics/updates2mqtt/actions/workflows/pypi-publish.yml)
[](https://github.com/rhizomatics/updates2mqtt/actions/workflows/python-package.yml)
[](https://github.com/rhizomatics/updates2mqtt/actions/workflows/github-code-scanning/codeql)
[](https://github.com/rhizomatics/updates2mqtt/actions/workflows/dependabot/dependabot-updates)
<br/>
<br/>
## Summary
Let Home Assistant tell you about new updates to Docker images for your containers.
{width=300}
Read the release notes, and optionally click *Update* to trigger a Docker *pull* (or optionally *build*) and *update*.
{width=480}
## Description
Updates2MQTT periodically checks for new versions of components and publishes new version info to MQTT. Home Assistant auto discovery is supported, so all updates can be seen in the same place as Home Assistant's own components and add-ins.
Currently only Docker containers are supported, either via an image registry check (using the v1 Docker API or the OCI v2 API) or via a git repo for source builds (see [Local Builds](local_builds.md)). There is specific handling for Docker, GitHub Container Registry, GitLab, Codeberg, Microsoft Container Registry, Quay and the LinuxServer Registry, with adaptive behaviour to cope with most others. The design is modular, so other update sources can be added, at least for notification. The next anticipated source is **apt** for Debian-based systems.
Components can also be updated, either automatically or triggered via MQTT, for example by hitting the *Install* button in the HomeAssistant update dialog. Icons and release notes can be specified for a better HA experience. See [Home Assistant Integration](home_assistant.md) for details.
To get started, read the [Installation](installation.md) and [Configuration](configuration/index.md) pages.
For a quick spin, try this:
```bash
docker run -v /var/run/docker.sock:/var/run/docker.sock -e MQTT_USER=user1 -e MQTT_PASS=user1 -e MQTT_HOST=192.168.1.5 ghcr.io/rhizomatics/updates2mqtt:latest
```
or without Docker, using [uv](https://docs.astral.sh/uv/)
```bash
export MQTT_HOST=192.168.1.1;export MQTT_USER=user1;export MQTT_PASS=user1;uv run --with updates2mqtt python -m updates2mqtt
```
It also comes with a basic command line tool that will perform the analysis for a single running container, or fetch
manifests, JSON blobs and lists of tags from remote registries (known to work with GitHub, GitLab, Codeberg, Quay, LSCR and Microsoft MCR).
## Release Support
Presently only Docker containers are supported, although others are planned, probably with priority for `apt`.
| Ecosystem | Support | Comments |
|-----------|-------------|----------------------------------------------------------------------------------------------------|
| Docker | Scan. Fetch | Fetch is ``docker pull`` only. Restart support only for ``docker-compose`` image based containers. |
## Heartbeat
A heartbeat JSON payload is optionally published periodically to a configurable MQTT topic, defaulting to `healthcheck/{node_name}/updates2mqtt`. It contains the current version of Updates2MQTT, the node name, a timestamp, and some basic stats.
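The exact field names are not documented here, so the payload below is purely illustrative of the shape described above:
```json
{
  "app": "updates2mqtt",
  "version": "1.8.3",
  "node": "myhost",
  "timestamp": "2026-02-18T13:00:00Z",
  "stats": { "scanned": 12, "updates_available": 2 }
}
```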
## Healthcheck
A `healthcheck.sh` script is included in the Docker image and can be used as a Docker healthcheck, provided the container environment variables `MQTT_HOST`, `MQTT_PORT`, `MQTT_USER` and `MQTT_PASS` are set. It uses the `mosquitto-clients` Linux package, which provides the `mosquitto_sub` command to subscribe to topics.
!!! tip
Check healthcheck is working using `docker inspect --format "{{json .State.Health }}" updates2mqtt | jq` (can omit `| jq` if you don't have jsonquery installed, but much easier to read with it)
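As a sketch, wiring the script into Compose might look like this (the in-image script path is an assumption):
```yaml title="Example Compose Healthcheck"
updates2mqtt:
  image: ghcr.io/rhizomatics/updates2mqtt:latest
  environment:
    - MQTT_HOST=192.168.1.5
    - MQTT_USER=user1
    - MQTT_PASS=user1
  healthcheck:
    test: ["CMD", "/app/healthcheck.sh"] # script path inside the image is an assumption
    interval: 5m
    timeout: 30s
    retries: 3
```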
Another approach is using a restarter service directly in Docker Compose to force a restart, in this case once a day:
```yaml title="Example Compose Service"
restarter:
image: docker:cli
volumes: ["/var/run/docker.sock:/var/run/docker.sock"]
command: ["/bin/sh", "-c", "while true; do sleep 86400; docker restart updates2mqtt; done"]
restart: unless-stopped
environment:
- UPD2MQTT_UPDATE=AUTO
```
## Target Containers
While `updates2mqtt` will discover and monitor all containers running under the Docker daemon,
there are some options you can set on those containers to tune how it works.
These are set by adding environment variables or docker labels to the containers, typically inside an `.env`
file, or as `environment` options inside `docker-compose.yaml`.
### Automated updates
If Docker containers should be immediately updated, without any confirmation
or trigger, *e.g.* from the HomeAssistant update dialog, then set an environment variable `UPD2MQTT_UPDATE` in the target container to `Auto` (it defaults to `Passive`). If you want it to update without publishing to MQTT and being
visible to Home Assistant, then use `Silent`.
```yaml title="Example Compose Snippet"
mailserver:
  image: mailserver/docker-mailserver:latest # illustrative target container to auto-update
  environment:
    - UPD2MQTT_UPDATE=AUTO
```
Automated updates can also apply to local builds, where a `git_repo_path` has been defined - if there are remote
commits available to pull, then a `git pull`, `docker compose build` and `docker compose up` will be executed.
## Related Projects
Other apps useful for self-hosting with the help of MQTT:
- [psmqtt](https://github.com/eschava/psmqtt) - Report system health and metrics via MQTT
Find more at [awesome-mqtt](https://github.com/rhizomatics/awesome-mqtt)
For a more powerful Docker focussed update manager, try [What's Up Docker](https://getwud.github.io/wud/)
## Development
This component relies on several open source packages:
- [docker-py](https://docker-py.readthedocs.io/en/stable/) SDK for Python for access to Docker APIs
- [Eclipse Paho](https://eclipse.dev/paho/files/paho.mqtt.python/html/client.html) MQTT client
- [OmegaConf](https://omegaconf.readthedocs.io) for configuration and validation
- [structlog](https://www.structlog.org/en/stable/) for structured logging and [rich](https://rich.readthedocs.io/en/stable/) for better exception reporting
- [hishel](https://hishel.com/) for caching metadata
- [httpx](https://www.python-httpx.org) for retrieving metadata
- The Astral [uv](https://docs.astral.sh/uv/) and [ruff](https://docs.astral.sh/ruff/) tools for development and build
- [pytest](https://docs.pytest.org/en/stable/) and supporting add-ins for automated testing
- [usingversion](https://pypi.org/project/usingversion/) to log current version info
| text/markdown | jey burrows | jey burrows <jrb@rhizomatics.org.uk> | null | null | null | mqtt, docker, oci, container, updates, automation, home-assistant, homeassistant, selfhosting | [
"Development Status :: 5 - Production/Stable",
"License :: Other/Proprietary License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Environment :: Console",
"Topic :: Home Automation",
"Topic :: System :: Systems Administration",
"Topic :: System :: Monitoring",
"Intended Au... | [] | null | null | >=3.13 | [] | [] | [] | [
"docker>=7.1.0",
"paho-mqtt>=2.1.0",
"omegaconf>=2.3.0",
"structlog>=25.4.0",
"rich>=14.0.0",
"httpx>=0.28.1",
"hishel[httpx]>=1.1.0",
"usingversion>=0.1.2",
"tzlocal>=5.3.1"
] | [] | [] | [] | [
"Homepage, https://updates2mqtt.rhizomatics.org.uk",
"Repository, https://github.com/rhizomatics/updates2mqtt",
"Documentation, https://updates2mqtt.rhizomatics.org.uk",
"Issues, https://github.com/rhizomatics/updates2mqtt/issues",
"Changelog, https://github.com/rhizomatics/updates2mqtt/blob/main/CHANGELOG.... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:07:06.594761 | updates2mqtt-1.8.3.tar.gz | 40,636 | 51/cf/32dd88149d23c4dde97fbd3192d5ed11114d58bc8fd5da4f74445b61c77d/updates2mqtt-1.8.3.tar.gz | source | sdist | null | false | a667595ab77b67f9ba01d79bdc57d899 | 96b24b390dc79052f84fb719bcc666c49db6b6875ddb22594dad46cd6a394fa7 | 51cf32dd88149d23c4dde97fbd3192d5ed11114d58bc8fd5da4f74445b61c77d | Apache-2.0 | [] | 245 |
2.4 | datacanvas | 1.0.0 | Official Python SDK for the DataCanvas IoT Platform — A modern, type-safe, resource-based client library | # DataCanvas SDK for Python
[](https://pypi.org/project/datacanvas/)
[](https://pypi.org/project/datacanvas/)
[](LICENSE)
Official Python SDK for the **DataCanvas IoT Platform**. A modern, type-safe, and resource-based client library for seamless integration with the DataCanvas API.
## Features
- **Resource-Based Architecture** — Intuitive API organised by domain concepts
- **Type-Safe** — Full type annotations and `py.typed` marker for static analysis
- **Modern** — Supports Python 3.9+, dataclasses, and enums
- **Robust Error Handling** — Comprehensive error hierarchy for precise error management
- **Minimal Dependencies** — Uses `requests` for HTTP; no unnecessary extras
## Installation
```bash
pip install datacanvas
```
## Quick Start
```python
import os
from datacanvas import DataCanvas, SortOrder
# Initialise SDK
client = DataCanvas(
access_key_client=os.environ["DATACANVAS_ACCESS_KEY_ID"],
access_key_secret=os.environ["DATACANVAS_SECRET_KEY"],
project_id=int(os.environ["DATACANVAS_PROJECT_ID"]),
base_url=os.environ["DATACANVAS_BASE_URL"],
)
# List all devices
devices = client.devices.list()
print(f"Found {len(devices.devices)} devices")
# Retrieve data from a datatable
data = client.data.list(
table_name="temperature_sensors",
devices=[1, 2, 3],
page=0,
limit=50,
order=SortOrder.DESC,
)
print(f"Retrieved {data.count} data points")
```
### Context Manager
The SDK supports context managers for automatic resource cleanup:
```python
with DataCanvas(
access_key_client="your-key",
access_key_secret="your-secret",
project_id=123,
base_url="https://api.<something>.<something>",
) as client:
devices = client.devices.list()
```
## API Reference
### Configuration
#### `DataCanvas(**kwargs)`
Creates a new SDK instance.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `access_key_client` | `str` | ✅ | Client access key ID from DataCanvas dashboard |
| `access_key_secret` | `str` | ✅ | Secret access key for authentication |
| `project_id` | `int` | ✅ | Project ID to scope API requests |
| `base_url` | `str` | ✅ | Base URL for the DataCanvas API |
```python
client = DataCanvas(
access_key_client="your-access-key-id",
access_key_secret="your-secret-key",
project_id=123,
base_url="https://api.<something>.<something>",
)
```
---
### Device Management
#### `client.devices.list() -> DeviceResponse`
Retrieves all devices associated with the configured project.
```python
response = client.devices.list()
for device in response.devices:
print(f"Device: {device.device_name} (ID: {device.device_id})")
```
**Response types:**
```python
@dataclass
class DeviceResponse:
success: bool
devices: list[Device]
@dataclass
class Device:
device_id: int
device_name: str
```
---
### Data Retrieval
#### `client.data.list(**kwargs) -> DataResponse`
Retrieves data from a specified datatable with optional filtering and pagination.
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `table_name` | `str` | ✅ | — | Name of the datatable to query |
| `devices` | `list[int]` | ❌ | `[]` | List of device IDs to filter |
| `page` | `int` | ❌ | `0` | Page number (0-indexed) |
| `limit` | `int` | ❌ | `20` | Items per page (max: 1000) |
| `order` | `SortOrder` | ❌ | `DESC` | Sort order (ASC or DESC) |
```python
from datacanvas import SortOrder
# Retrieve all data
all_data = client.data.list(table_name="temperature_sensors")
# Retrieve with filtering and pagination
filtered = client.data.list(
table_name="temperature_sensors",
devices=[1, 2, 3],
page=0,
limit=50,
order=SortOrder.DESC,
)
print(f"Total records: {filtered.count}")
for device_id, points in filtered.data.items():
print(f"Device {device_id}: {len(points)} data points")
for point in points:
print(f" - ID: {point.id}, Device: {point.device}, Extra: {point.extra}")
```
**Response types:**
```python
@dataclass
class DataResponse:
count: int
data: dict[str, list[DataPoint]]
@dataclass
class DataPoint:
id: int
device: int
extra: dict[str, Any] # Dynamic fields from datatable schema
```
---
## Error Handling
The SDK provides comprehensive error handling with specific error types for different scenarios. All errors inherit from `DataCanvasError`.
### Error Types
| Error Class | Description | HTTP Status |
|-------------|-------------|-------------|
| `AuthenticationError` | Invalid credentials | 401 |
| `AuthorizationError` | Insufficient permissions | 403 |
| `ValidationError` | Invalid request parameters | 400, 422 |
| `NotFoundError` | Resource not found | 404 |
| `RateLimitError` | Rate limit exceeded | 429 |
| `ServerError` | Server-side error | 500+ |
| `NetworkError` | Network connectivity issue | — |
### Handling Errors
```python
from datacanvas import (
DataCanvas,
AuthenticationError,
ValidationError,
RateLimitError,
NetworkError,
DataCanvasError,
)
try:
data = client.data.list(table_name="sensors", limit=100)
except AuthenticationError:
print("Authentication failed. Check your credentials.")
except ValidationError as e:
print(f"Invalid request: {e}")
except RateLimitError:
print("Rate limit exceeded. Please wait.")
except NetworkError as e:
print(f"Network error: {e}")
except DataCanvasError as e:
print(f"SDK error: {e}")
```
---
## Architecture
The SDK follows a **resource-based OOP architecture** with clear separation of concerns:
```
DataCanvas SDK
├── DataCanvas (Main Client)
│ ├── devices (DevicesResource)
│ └── data (DataResource)
├── HttpClient (HTTP Communication)
├── Exceptions (Error Hierarchy)
├── Constants (Enums & Defaults)
└── Types (Dataclass Definitions)
```
---
## Python Type Checking
The SDK ships with a `py.typed` marker and full type annotations. Use with mypy or pyright:
```python
from datacanvas import DataCanvas, SDKConfig, DeviceResponse, DataResponse, DataPoint, GetDataParams
```
---
## Contributing
Contributions are welcome! Please follow these guidelines:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'feat: add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
### Development Setup
```bash
# Clone repository
git clone https://github.com/Datacanvas-IoT/Datacanvas-PIP
cd Datacanvas-PIP
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # or .venv\Scripts\activate on Windows
# Install in editable mode with dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run type checks
mypy src/datacanvas
# Run linter
ruff check src/
```
---
## License
This project is licensed under the **Apache License 2.0** — see the [LICENSE](LICENSE) file for details.
---
## Resources
- [DataCanvas Platform](http://datacanvas.hypercube.lk/)
- [Issue Tracker](https://github.com/Datacanvas-IoT/Datacanvas-PIP/issues)
- [Changelog](CHANGELOG.md)
---
## Support
For questions, issues, or feature requests:
- 📧 Email: datacanvasmgmt[at]gmail[dot]com
- 🐛 Issues: [GitHub Issues](https://github.com/Datacanvas-IoT/Datacanvas-PIP/issues)
- 💬 Discussions: [GitHub Discussions](https://github.com/Datacanvas-IoT/Datacanvas-PIP/discussions)
---
<p align="center">
Made with ❤️ by the DataCanvas Team
</p>
| text/markdown | DataCanvas Team | null | null | null | Apache-2.0 | datacanvas, iot, sdk, api-client, sensors, data-platform, devices, rest-api | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Progra... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests<3,>=2.28.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"responses>=0.23; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Datacanvas-IoT/Datacanvas-PIP",
"Repository, https://github.com/Datacanvas-IoT/Datacanvas-PIP.git",
"Issues, https://github.com/Datacanvas-IoT/Datacanvas-PIP/issues",
"Changelog, https://github.com/Datacanvas-IoT/Datacanvas-PIP/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:06:18.510266 | datacanvas-1.0.0.tar.gz | 20,546 | 9c/7c/d4cec541c15762b763c3d4d3d47fb7b45b39e545ab9705c49bb6eeeae7ef/datacanvas-1.0.0.tar.gz | source | sdist | null | false | 8223dc251814b38f5d36569401cac1b8 | 25cf89f1fcd1cb1ed602288957dab0a4445da50a3640189129d24dc1af424af3 | 9c7cd4cec541c15762b763c3d4d3d47fb7b45b39e545ab9705c49bb6eeeae7ef | null | [
"LICENSE"
] | 292 |
2.4 | laser-prynter | 0.7.0 | terminal/cli/python helpers for colour and pretty-printing | # laser-prynter


terminal/cli/python helpers for colour and pretty-printing
- [laser-prynter](#laser-prynter)
- [`laser_prynter`](#laser_prynter)
- [`pbar`](#pbar)
- [`bench`](#bench)
---
## `laser_prynter`
https://github.com/user-attachments/assets/cce8f690-e411-459f-a04f-8e9bef533e4a
---
## `pbar`
https://github.com/user-attachments/assets/8a2c2d99-1a11-4f9f-ac6a-8153f67e21c3
```python
from laser_prynter import pbar
with pbar.PBar(100) as bar:
for i in range(100):
# do something
bar.update()
```
---
## `bench`
https://github.com/user-attachments/assets/4af823b0-8d18-4086-9754-c76c65b66898
```python
from laser_prynter import bench
bench.bench(
tests=[
(
(range(2),), # args
{}, # kwargs
[0,1], # expected
)
],
func_groups=[ [list] ],
n=100
)
```
| text/markdown | null | tmck-code <tmck01@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pygments"
] | [] | [] | [] | [
"Homepage, https://github.com/tmck-code/laser-prynter",
"Issues, https://github.com/tmck-code/laser-prynter/issues"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T13:06:11.626592 | laser_prynter-0.7.0-py3-none-any.whl | 16,938 | c4/ce/a9a4ac5e72cd9c384c46d370c9cca7706aed583ebb62b387ba1d6a672e2c/laser_prynter-0.7.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 3031f47eaa9fcd175e312d4a1ff84457 | 5761ec69291901806e6990a921a1582c101f9935f057c68ee73900c2c662cbb2 | c4cea9a4ac5e72cd9c384c46d370c9cca7706aed583ebb62b387ba1d6a672e2c | BSD-3-Clause | [
"LICENSE"
] | 261 |
2.4 | supadata | 1.5.1 | The official Python SDK for Supadata - extract web media data with ease | # Supadata Python SDK
[](https://badge.fury.io/py/supadata)
[](http://opensource.org/licenses/MIT)
The official Python SDK for Supadata.
Get your free API key at [supadata.ai](https://supadata.ai) and start scraping data in minutes.
## Installation
```bash
pip install supadata
```
## Usage
### Initialization
```python
from supadata import Supadata, SupadataError
# Initialize the client
supadata = Supadata(api_key="YOUR_API_KEY")
```
### Metadata
```python
# Get media metadata from any supported platform (YouTube, TikTok, Instagram, Twitter)
metadata = supadata.metadata(url="https://www.youtube.com/watch?v=dQw4w9WgXcQ")
print(metadata)
```
### Transcripts
```python
# Get transcript from any supported platform (YouTube, TikTok, Instagram, Twitter, file URLs)
transcript = supadata.transcript(
url="https://x.com/SpaceX/status/1481651037291225113",
lang="en", # Optional: preferred language
text=True, # Optional: return plain text instead of timestamped chunks
mode="auto" # Optional: "native", "auto", or "generate"
)
# For immediate results
if hasattr(transcript, 'content'):
print(f"Transcript: {transcript.content}")
print(f"Language: {transcript.lang}")
else:
# For async processing (large files)
print(f"Processing started with job ID: {transcript.job_id}")
# Poll for results using existing batch.get_batch_results method
```
### YouTube
```python
# Translate YouTube transcript to Spanish
translated = supadata.youtube.translate(
video_id="dQw4w9WgXcQ",
lang="es"
)
print(f"Got translated transcript in {translated.lang}")
# Get Channel Metadata
channel = supadata.youtube.channel(id="https://youtube.com/@RickAstleyVEVO") # can be url, channel id, handle
print(f"Channel: {channel}")
# Get video IDs from a YouTube channel
channel_videos = supadata.youtube.channel.videos(
id="RickAstleyVEVO", # can be url, channel id, or handle
type="all", # 'all', 'video', 'short', or 'live'
limit=50
)
print(f"Regular videos: {channel_videos.video_ids}")
print(f"Shorts: {channel_videos.short_ids}")
print(f"Live: {channel_videos.live_ids}")
# Get Playlist metadata
playlist = supadata.youtube.playlist(id="PLlaN88a7y2_plecYoJxvRFTLHVbIVAOoc") # can be url or playlist id
print(f"Playlist: {playlist}")
# Get video IDs from a YouTube playlist
playlist_videos = supadata.youtube.playlist.videos(
id="https://www.youtube.com/playlist?list=PLlaN88a7y2_plecYoJxvRFTLHVbIVAOoc", # can be url or playlist id
limit=50
)
print(f"Regular videos: {playlist_videos.video_ids}")
print(f"Shorts: {playlist_videos.short_ids}")
print(f"Live: {playlist_videos.live_ids}")
# Search YouTube videos
search_results = supadata.youtube.search(
query="Never Gonna Give You Up",
upload_date="all", # "all", "hour", "today", "week", "month", "year"
type="video", # "all", "video", "channel", "playlist", "movie"
duration="all", # "all", "short", "medium", "long"
sort_by="relevance", # "relevance", "rating", "date", "views"
features=["hd", "subtitles"], # Optional: filter by video features
limit=10 # Optional: number of results (1-5000)
)
print(f"Found {search_results.total_results} total results")
print(f"Query: {search_results.query}")
for result in search_results.results:
print(f"Video: {result.title} by {result.channel['name']}")
print(f" ID: {result.id}")
print(f" Duration: {result.duration}s")
print(f" Views: {result.view_count}")
# Batch Operations
transcript_batch_job = supadata.youtube.transcript.batch(
video_ids=["dQw4w9WgXcQ", "xvFZjo5PgG0"],
# playlist_id="PLlaN88a7y2_plecYoJxvRFTLHVbIVAOoc", # alternatively
# channel_id="UC_9-kyTW8ZkZNDHQJ6FgpwQ", # alternatively
lang="en", # Optional: specify preferred transcript language
limit=100 # Optional: limit for playlist/channel
)
print(f"Started transcript batch job: {transcript_batch_job.job_id}")
# Start a batch job to get video metadata for a playlist
video_batch_job = supadata.youtube.video.batch(
playlist_id="PLlaN88a7y2_plecYoJxvRFTLHVbIVAOoc",
limit=50
)
print(f"Started video metadata batch job: {video_batch_job.job_id}")
# Get the results of a batch job (poll until status is 'completed' or 'failed')
batch_results = supadata.youtube.batch.get_batch_results(job_id=transcript_batch_job.job_id)
print(f"Job status: {batch_results.status}")
print(f"Stats: {batch_results.stats.succeeded}/{batch_results.stats.total} videos processed")
print(f"First result: {batch_results.results[0].video_id if batch_results.results else 'No results yet'}")
```
### Web
```python
# Scrape web content
web_content = supadata.web.scrape("https://supadata.ai")
print(f"Page title: {web_content.name}")
print(f"Page content: {web_content.content}")
# Map website URLs
site_map = supadata.web.map("https://supadata.ai")
print(f"Found {len(site_map.urls)} URLs")
# Start a crawl job
crawl_job = supadata.web.crawl(
url="https://supadata.ai",
limit=100 # Optional: limit the number of pages to crawl
)
print(f"Started crawl job: {crawl_job.job_id}")
# Get crawl results
# This automatically handles pagination and returns all pages
try:
pages = supadata.web.get_crawl_results(job_id=crawl_job.job_id)
for page in pages:
print(f"Crawled page: {page.url}")
print(f"Page title: {page.name}")
print(f"Content: {page.content}")
except SupadataError as e:
print(f"Crawl job failed: {e}")
```
## Error Handling
The SDK uses custom `SupadataError` exceptions that provide structured error information:
```python
from supadata.errors import SupadataError
try:
metadata = supadata.metadata(url="https://www.youtube.com/watch?v=INVALID_ID")
except SupadataError as error:
print(f"Error code: {error.error}")
print(f"Error message: {error.message}")
print(f"Error details: {error.details}")
if error.documentation_url:
print(f"Documentation: {error.documentation_url}")
```
## API Reference
See the [Documentation](https://supadata.ai/documentation) for more details on all possible parameters and options.
## License
MIT
| text/markdown | null | Supadata <support@supadata.ai> | null | null | null | ai, api, instagram, llm, media, scraping, supadata, tiktok, transcripts, twitter, web-scraping, youtube | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"requests>=2.28.1",
"pytest>=7.0.0; extra == \"test\"",
"requests-mock>=1.11.0; extra == \"test\""
] | [] | [] | [] | [
"homepage, https://supadata.ai",
"repository, https://github.com/supadata/py",
"documentation, https://supadata.ai/documentation"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T13:05:56.345149 | supadata-1.5.1.tar.gz | 21,301 | 58/7a/aedde93539ce503e225c7872d7267d2fb34612de05ef445f87c8954d2930/supadata-1.5.1.tar.gz | source | sdist | null | false | 180447983a60755d2cf0b1527e1690eb | c0690c9ea3e9e61cf6aa271af25cf30612afd0b6e269b365c6807ec8954b7647 | 587aaedde93539ce503e225c7872d7267d2fb34612de05ef445f87c8954d2930 | MIT | [
"LICENSE"
] | 1,141 |
2.4 | at-chat-mask | 0.3.0 | AskTable Chat Data Masking Tool - Create demo cases from real chat messages with masked virtual data | # AT Chat Mask
**AskTable Chat Data Masking Tool** - Create demo cases from real chat logs, using virtual data to protect customer privacy
[](https://badge.fury.io/py/at-chat-mask)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
## Features
- 📖 Automatically analyzes chat files, extracting data structures and questions
- 🤖 Intelligently generates Chinese case names (based on table-name and content analysis)
- 🔧 Automatically generates virtual CSV data to protect customer privacy
- 🚀 Automatically creates AskTable Datasources, Bots and Chats
- 💬 Interactive dialogue for flexible configuration of case parameters
- 🌐 One-click publishing for quickly producing shareable demo links
## Installation
```bash
pip install at-chat-mask
```
## Quick Start
### 1. Configure the API Key
Create a `.env` file:
```bash
ASKTABLE_API_KEY=your_api_key_here
ASKTABLE_API_BASE=https://api.asktable.com
```
### 2. Prepare the Chat File
Name the exported AskTable chat file using the `chat-{number}.json` pattern, for example:
- `chat-001.json`
- `chat-002.json`
### 3. Run the Tool
```bash
at-chat-mask
```
### 4. Follow the Prompts
```
案例编号 [001]: 001
```
The tool automatically finds the `chat-001.json` file and analyzes its contents
```
📝 自动生成案例名称
✓ 案例名称: 体育成绩数据分析
是否使用此名称? [Y/n]:
```
The tool generates a Chinese case name from the chat content automatically; you can accept it or enter your own
After you confirm the details, the tool automatically:
1. Generates virtual CSV data (`case-001.csv`)
2. Creates an AskTable Datasource
3. Creates an AskTable Bot (publicly shared)
4. Creates an AskTable Chat
When it finishes, you get a shareable link.
## Chat File Format
The file must be named `chat-{number}.json` and contain a standard AskTable Chat payload:
```json
{
"items": [
{
"role": "human",
"content": {"text": "展示跳绳最好的前10名学生的详情"}
},
{
"role": "ai",
"metadata": {
"tool_response": [{
"sql": "SELECT ... FROM table_name ...",
"result": "..."
}]
}
}
]
}
```
## API Usage
```python
from at_chat_mask import ChatMaskAgent, CSVGenerator
# Create the agent
agent = ChatMaskAgent(
api_key="your_api_key",
api_base="https://api.asktable.com"
)
# Run the interactive flow
agent.run()
# Or use the CSV generator on its own
generator = CSVGenerator()
csv_content = generator.generate_csv_from_messages(
chat_file="chat-001.json",
output_file="output.csv",
num_rows=20
)
```
## License
MIT License
## Links
- [AskTable Website](https://www.asktable.com)
- [GitHub Repository](https://github.com/datamini/at-chat-mask)
- [Issue Tracker](https://github.com/datamini/at-chat-mask/issues)
| text/markdown | null | DataMini Team <support@datamini.com> | null | null | MIT | asktable, chat, mask, data-masking, privacy, demo | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | null | null | >=3.8 | [] | [] | [] | [
"asktable>=5.0.0",
"python-dotenv>=1.0.0",
"rich>=13.0.0",
"pydantic>=2.0.0",
"pydantic-settings>=2.0.0",
"openai>=1.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/datamini/at-chat-mask",
"Documentation, https://github.com/datamini/at-chat-mask#readme",
"Repository, https://github.com/datamini/at-chat-mask",
"Bug Tracker, https://github.com/datamini/at-chat-mask/issues"
] | twine/6.2.0 CPython/3.11.7 | 2026-02-18T13:05:16.353457 | at_chat_mask-0.3.0.tar.gz | 32,196 | 6f/68/ab9055e892fff8607dab7326b96b6fcf2cd7f85f72c7d02f0191baf3d587/at_chat_mask-0.3.0.tar.gz | source | sdist | null | false | 9b3dba50a2eba131d4ac0779d718231e | 52108354ea80457b0c055a89e86b2671442620b445d012945a9014b2ca6e41a0 | 6f68ab9055e892fff8607dab7326b96b6fcf2cd7f85f72c7d02f0191baf3d587 | null | [
"LICENSE"
] | 290 |
2.4 | notify-tls-client | 2.1.0 | Cliente HTTP avançado com TLS fingerprinting, rotação automática de proxies e recuperação inteligente para web scraping profissional | # Notify TLS Client
[](https://badge.fury.io/py/notify-tls-client)
[](https://pypi.org/project/notify-tls-client/)
[](https://opensource.org/licenses/MIT)
Advanced HTTP client in Python with support for custom TLS/SSL, browser fingerprinting, and automatic proxy rotation. Built on top of the `tls-client` library with additional features for resilient web scraping and automation.
## 🚀 Key Features
- **Advanced TLS Fingerprinting**: Emulates multiple browsers (Chrome, Firefox, Safari, Edge, Mobile)
- **Automatic Rotation**: Proxies and client identifiers with configurable policies
- **Automatic Recovery**: Intelligent reconnection on errors and forbidden responses
- **Thread-Safe**: Safe for use in multi-threaded environments
- **Modular Configuration**: Configuration system based on reusable objects
- **Ready-Made Presets**: Predefined configurations for common use cases
- **HTTP/3 Support**: Optional QUIC/HTTP3 support
## 📦 Installation
```bash
pip install notify-tls-client
```
### Requirements
- Python >= 3.12
- Operating system: Windows, macOS, Linux (x86_64, ARM64)
## 🎯 Quick Start
### Basic Usage
```python
from notify_tls_client import NotifyTLSClient
# Client with the default configuration
client = NotifyTLSClient()
# Make a request
response = client.get("https://api.example.com/data")
print(response.status_code)
print(response.json())
```
### Using Presets (Recommended)
```python
from notify_tls_client import NotifyTLSClient
from notify_tls_client.config import ClientConfiguration
from notify_tls_client.core.proxiesmanager import ProxiesManagerLoader
# Load proxies
proxies = ProxiesManagerLoader().from_txt("proxies.txt")
# Preset for aggressive scraping
config = ClientConfiguration.aggressive(proxies)
client = NotifyTLSClient(config)
# Make multiple requests
for i in range(100):
    response = client.get("https://example.com/api/endpoint")
    print(f"Request {i}: {response.status_code}")
```
### Custom Configuration
```python
from notify_tls_client import NotifyTLSClient
from notify_tls_client.config import (
    ClientConfiguration,
    RotationConfig,
    RecoveryConfig,
    ClientConfig
)
config = ClientConfiguration(
    proxies_manager=proxies,
    rotation=RotationConfig(
        requests_limit_same_proxy=50,
        requests_limit_same_client_identifier=200,
        random_tls_extension_order=True
    ),
    recovery=RecoveryConfig(
        instantiate_new_client_on_forbidden_response=True,
        instantiate_new_client_on_exception=True,
        change_client_identifier_on_forbidden_response=True,
        status_codes_to_forbidden_response_handle=[403, 429, 503]
    ),
    client=ClientConfig(
        client_identifiers=["chrome_133", "firefox_120", "safari_17_0"],
        disable_http3=False,
        debug_mode=False
    )
)
client = NotifyTLSClient(config)
```
## 📚 Available Presets
### Simple
Basic usage with default proxy rotation.
```python
config = ClientConfiguration.simple(proxies)
```
### Aggressive
For intensive scraping with full automatic recovery.
```python
config = ClientConfiguration.aggressive(proxies)
```
- Rotates the proxy every 10 requests
- Rotates the client identifier every 50 requests
- Automatic recovery on errors and 403/429/503
- Multiple client identifiers
### Stealth
Focused on avoiding detection through diversity.
```python
config = ClientConfiguration.stealth(proxies)
```
- 4 different client identifiers
- Randomized TLS extension order
- Moderate rotation (100 requests per proxy)
### Mobile
Simulates mobile devices.
```python
# Android
config = ClientConfiguration.mobile(proxies, platform="android")
# iOS
config = ClientConfiguration.mobile(proxies, platform="ios")
```
## 🔧 Advanced Features
### Proxy Rotation
```python
from notify_tls_client.core.proxiesmanager import ProxiesManagerLoader
# Load proxies from a file
proxies = ProxiesManagerLoader().from_txt("proxies.txt")
# File format (one proxy per line):
# host:port
# host:port:username:password
# http://username:password@host:port
```
### Supported Client Identifiers
**Desktop:**
- Chrome: `chrome_133`, `chrome_131`, `chrome_120`, etc.
- Firefox: `firefox_120`, `firefox_117`, `firefox_110`, etc.
- Safari: `safari_17_0`, `safari_16_0`, etc.
- Edge, Opera
**Mobile:**
- Android: `okhttp4_android_13`, `okhttp4_android_12`, etc.
- iOS: `safari_ios_16_0`, `safari_ios_15_6`, etc.
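As a sketch, the identifiers above plug straight into `ClientConfig` via `client_identifiers`, following the same pattern as the custom configuration example:
```python
from notify_tls_client import NotifyTLSClient
from notify_tls_client.config import ClientConfiguration, ClientConfig

# Rotate across desktop and mobile fingerprints
config = ClientConfiguration(
    client=ClientConfig(
        client_identifiers=["chrome_133", "firefox_120", "okhttp4_android_13"]
    )
)
client = NotifyTLSClient(config)
```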
### Automatic Recovery
```python
config = ClientConfiguration(
    recovery=RecoveryConfig(
        # Create a new session on forbidden responses
        instantiate_new_client_on_forbidden_response=True,
        # Create a new session on exceptions
        instantiate_new_client_on_exception=True,
        # Rotate the client identifier on forbidden responses
        change_client_identifier_on_forbidden_response=True,
        # Status codes that trigger recovery
        status_codes_to_forbidden_response_handle=[403, 429, 503]
    )
)
```
### Custom Headers
```python
config = ClientConfiguration(
    client=ClientConfig(
        default_headers={
            "User-Agent": "Mozilla/5.0...",
            "Accept-Language": "pt-BR,pt;q=0.9",
            "Custom-Header": "value"
        }
    )
)
# Or per request
response = client.get(
    "https://example.com",
    headers={"Authorization": "Bearer token"}
)
```
### Cookies
```python
# Get all cookies
cookies = client.get_cookies()
# Get a specific cookie
value = client.get_cookie_by_name("session_id")
# Set a cookie
client.set_cookie("name", "value")
```
## 🔒 Thread Safety
The library is thread-safe and can be used in multi-threaded environments:
```python
import concurrent.futures
client = NotifyTLSClient(ClientConfiguration.aggressive(proxies))
def make_request(url):
    return client.get(url)

with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    urls = ["https://example.com"] * 100
    results = list(executor.map(make_request, urls))
```
## 📊 Logging
```python
import logging
# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
# Or configure logging only for notify_tls_client
logger = logging.getLogger("notify_tls_client")
logger.setLevel(logging.DEBUG)
```
## 🛠️ Supported HTTP Methods
```python
# GET
response = client.get(url, params={"key": "value"})
# POST
response = client.post(url, json={"data": "value"})
response = client.post(url, data="form data")
# PUT
response = client.put(url, json={"data": "value"})
# PATCH
response = client.patch(url, json={"data": "value"})
# DELETE
response = client.delete(url)
```
## 📖 Full Documentation
For detailed documentation on the architecture, internal components, and advanced examples, see:
- [CLAUDE.md](CLAUDE.md) - Complete development guide
- [examples/](examples/) - Code examples
## 🤝 Contributing
Contributions are welcome! Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct and the pull request submission process.
## 📝 Changelog
See [CHANGELOG.md](CHANGELOG.md) for version history and changes.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## ⚠️ Legal Notice
This library is provided for educational and research purposes only. Using this tool to violate website terms of service, perform unauthorized scraping, or engage in any illegal activity is your own responsibility. The developers accept no liability for misuse of this library.
## 🙏 Acknowledgments
- [tls-client](https://github.com/bogdanfinn/tls-client) - Underlying Go library for TLS fingerprinting
- The Python community for its amazing tools and libraries
## 📞 Support
- **Issues**: [GitHub Issues](https://github.com/jefersonAlbara/notify-tls-client/issues)
- **Discussions**: [GitHub Discussions](https://github.com/jefersonAlbara/notify-tls-client/discussions)
---
**Built with ❤️ for the Python community**
| text/markdown | null | Jeferson Albara <jeferson.albara@example.com> | null | Jeferson Albara <jeferson.albara@example.com> | MIT | tls-client, http-client, web-scraping, proxy-rotation, tls-fingerprinting, browser-emulation, http2, http3, requests, automation | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Software Development :: L... | [] | null | null | >=3.12 | [] | [] | [] | [
"dataclasses-json>=0.6.0",
"typing-extensions>=4.8.0",
"orjson>=3.9.0",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/jefersonAlbara/notify-tls-client",
"Documentation, https://github.com/jefersonAlbara/notify-tls-client#readme",
"Repository, https://github.com/jefersonAlbara/notify-tls-client",
"Issues, https://github.com/jefersonAlbara/notify-tls-client/issues",
"Changelog, https://github.co... | twine/6.1.0 CPython/3.12.3 | 2026-02-18T13:04:52.729334 | notify_tls_client-2.1.0.tar.gz | 19,330,611 | e1/cb/b67dcf6b2f4d79af45ef8886a8c5dfff4cc770c652eb6186c290efe490f7/notify_tls_client-2.1.0.tar.gz | source | sdist | null | false | f6e34107bd1bf4b09701843c4fc011bd | 6cb53a7610ee58a8e95a62bb5cc65fc1822b45840873d0d6f9b430ce3e12120e | e1cbb67dcf6b2f4d79af45ef8886a8c5dfff4cc770c652eb6186c290efe490f7 | null | [] | 240 |
2.4 | moveshelf-api | 1.6.1 | Public package including the Python API | # Moveshelf Python API package
```sh
pip install moveshelf-api
``` | text/markdown | null | Moveshelf <info@moveshelf.com> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"crcmod~=1.7",
"six~=1.17",
"urllib3~=2.5"
] | [] | [] | [] | [
"Homepage, https://github.com/moveshelf/moveshelf-python-api"
] | twine/6.1.0 CPython/3.11.3 | 2026-02-18T13:04:14.458523 | moveshelf_api-1.6.1.tar.gz | 19,913 | 44/c5/6275caac6db5a778031ab56b46ac19bf6beb515ca78ea048fa380916e5f0/moveshelf_api-1.6.1.tar.gz | source | sdist | null | false | 6a09569a138ba4adc262736950c918b9 | c04f9730b822f1c3aec7417261acc5ebfd5e029cc021bfc838bf641bbc28ab0d | 44c56275caac6db5a778031ab56b46ac19bf6beb515ca78ea048fa380916e5f0 | MIT | [
"LICENSE"
] | 249 |
2.4 | talkpipe | 0.11.4 | Python internal and external DSL for writing generative AI analytics | <center><img src="docs/TalkPipe.png" width=500></center>
**Build and iterate on AI workflows efficiently.**
TalkPipe is a Python toolkit that makes it easy to create, test, and deploy workflows that integrate Generative AI with your existing tools and data sources. TalkPipe treats LLMs as one tool in your arsenal - letting you build practical solutions that combine AI with data processing, file handling, and more.
This README introduces TalkPipe at a high level. See the [complete documentation](docs/README.md) for more detail.
## What Can You Do With TalkPipe?
- **Chat with LLMs** - Create multi-turn conversations with OpenAI, Ollama, or Anthropic models in just 2 lines of code
- **Process Documents** - Extract text from PDFs, analyze research papers, score content relevance
- **Build RAG Pipelines** - Create end-to-end Retrieval-Augmented Generation workflows with vector databases
- **Analyze Web Content** - Download web pages (respecting robots.txt), extract readable text, and summarize
- **Build Data Pipelines** - Chain together data transformations, filtering, and analysis with Unix-like simplicity
- **Create AI Agents** - Build agents that can debate topics, evaluate streams of documents, or monitor RSS feeds
- **Deploy Anywhere** - Run in Jupyter notebooks, as Docker containers, or as standalone Python applications
## Structure
<center><img src="docs/talkpipe_architecture.png" width=700></center>
TalkPipe is structured in three layers: the Pipe/ChatterLang Foundation, AI & Data Primitives, and Pipeline Application Components. Applications can draw from any or all of these layers.
The Pipe/ChatterLang Foundation layer offers foundational utilities and abstractions that ensure consistency across the entire system. Pipe serves as TalkPipe's internal domain-specific language (DSL), enabling you to construct sophisticated workflows directly in Python. By instantiating modular classes and chaining them together using the `|` (pipe) operator, you can seamlessly connect sources, segments, and sinks to process data in a clear, readable, and composable manner. The Pipe layer is ideal for users who prefer programmatic control and want to integrate TalkPipe pipelines into larger Python applications or scripts.
ChatterLang is TalkPipe's external domain-specific language (DSL), enabling you to define workflows using concise, human-readable text scripts. These scripts can be compiled directly in Python, producing callable functions that integrate seamlessly with your codebase. ChatterLang also supports specifying workflows via environment variables or command-line arguments, making it easy to configure and automate pipelines in Docker containers, shell scripts, CI/CD pipelines, and other deployment environments. This flexibility empowers both developers and non-developers to create, share, and modify AI-powered workflows without writing Python code, streamlining experimentation and operationalization.
Layer 2, "AI & Data Primitives," is built on Pipe and ChatterLang and provides standardized wrappers for interacting with LLMs, full-text search engines, and vector databases. These components offer a unified interface for core capabilities, making it easy to integrate advanced AI and search features throughout your workflows.
Layer 3, "Pipeline Application Components," assembles pieces from the lower layers into higher-level components that can serve as application building blocks. They make simplifying assumptions that provide easy access to complex functionality. For example, the pipelines package includes ready-to-use RAG (Retrieval-Augmented Generation) workflows that combine vector search, prompt construction, and LLM completion in a single component. These are designed both as examples and as ways to get complex functionality quickly under reasonable assumptions. When those assumptions break, the developer can reach deeper into the other layers and build custom solutions.
The Application Components layer contains runnable applications that provide user interfaces and automation tools for working with TalkPipe pipelines and ChatterLang scripts. These components are designed to make it easy to interact with TalkPipe from the command line, web browser, or as part of automated workflows.
### Key Applications
- **[chatterlang_workbench](docs/api-reference/chatterlang-workbench.md)**
Launches an interactive web interface for writing, testing, and running ChatterLang scripts. It provides real-time execution, logging, and documentation lookup.
- **[chatterlang_script](docs/api-reference/chatterlang-script.md)**
Runs ChatterLang scripts from files or directly from the command line, enabling batch processing and automation.
- **[chatterlang_serve](docs/api-reference/chatterlang-server.md)**
Exposes ChatterLang pipelines as REST APIs or web forms, allowing you to deploy workflows as web services or user-facing endpoints.
- **chatterlang_reference_browser**
An interactive command line application for searching and browsing installed ChatterLang sources and segments.
- **[chatterlang_reference_generator](docs/api-reference/talkpipe-ref.md)**
Generates comprehensive documentation for all available sources and segments in HTML and text formats.
- **[talkpipe_plugins](docs/api-reference/talkpipe-plugin-manager.md)**
TalkPipe includes a plugin system that lets developers register their own sources and segments, extending its functionality. This allows the TalkPipe ecosystem to grow through community contributions and domain-specific extensions. talkpipe_plugins lets users view and manage those plugins.
These applications are entry points for different usage scenarios, from interactive development to production deployment.
## Quick Start
Install TalkPipe:
```bash
pip install talkpipe
```
For LLM support, install the provider(s) you need:
```bash
# Install specific providers
pip install talkpipe[openai] # For OpenAI
pip install talkpipe[ollama] # For Ollama
pip install talkpipe[anthropic] # For Anthropic Claude
# Or install all LLM providers
pip install talkpipe[all]
```
Create a multi-turn chat function in 2 lines:
```python
from talkpipe.chatterlang import compiler
script = '| llmPrompt[model="llama3.2", source="ollama", multi_turn=True]'
chat = compiler.compile(script).as_function(single_in=True, single_out=True)
response = chat("Hello! My name is Alice.")
response = chat("What's my name?") # Will remember context
```
# Core Components
## 1. The Pipe API (Internal DSL)
TalkPipe's Pipe API is a Pythonic way to build data pipelines using the `|` operator to chain components:
```python
from talkpipe.pipe import io
from talkpipe.llm import chat
# Create a pipeline that prompts for input, gets an LLM response, and prints it
pipeline = io.Prompt() | chat.LLMPrompt(model="llama3.2") | io.Print()
pipeline = pipeline.as_function()
pipeline() # Run the interactive pipeline
```
### Creating Custom Components
Add new functionality with simple decorators:
```python
from talkpipe.pipe import core, io
@core.segment()
def uppercase(items):
    """Convert each item to uppercase"""
    for item in items:
        yield item.upper()
# Use it in a pipeline
pipeline = io.echo(data="hello,world") | uppercase() | io.Print()
result = pipeline.as_function(single_out=False)()
# Output:
# HELLO
# WORLD
# Returns: ['HELLO', 'WORLD']
```
## 2. ChatterLang (External DSL)
ChatterLang provides a Unix-like syntax for building pipelines, perfect for rapid prototyping and experimentation:
```
INPUT FROM echo[data="1,2,hello,3"] | cast[cast_type="int"] | print
```
### Registering Custom Components for ChatterLang
To make the `uppercase` segment from section 1 available in ChatterLang, register it with a decorator:
```python
from talkpipe.pipe import core
from talkpipe.chatterlang import registry, compiler
@registry.register_segment("uppercase")
@core.segment()
def uppercase(items):
    """Convert each item to uppercase"""
    for item in items:
        yield item.upper()
# Now use it in ChatterLang scripts
script = 'INPUT FROM echo[data="hello,world"] | uppercase | print'
pipeline = compiler.compile(script).as_function(single_out=False)
result = pipeline()
# Output:
# HELLO
# WORLD
# Returns: ['HELLO', 'WORLD']
```
The `@registry.register_segment()` decorator makes your component discoverable by ChatterLang's compiler, allowing you to use it in scripts alongside built-in segments.
### Key ChatterLang Features
- **Variables**: Store intermediate results with `@variable_name`
- **Constants**: Define reusable values with `CONST name = "value"`
- **Loops**: Repeat operations with `LOOP n TIMES { ... }`
- **Multiple Pipelines**: Chain workflows with `;` or newlines (all four features are combined in the sketch below)
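A minimal sketch combining these features, modeled on the multi-agent debate example later in this README (assuming constants can parameterize `echo` the same way they parameterize `llmPrompt`):
```
CONST data = "red,green,blue";
INPUT FROM echo[data=data] | @colors | print;
LOOP 2 TIMES {
    INPUT FROM @colors | print
};
INPUT FROM @colors
```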
## 3. Built-in Applications
### Command-Line Tools
- [`chatterlang_workbench`](docs/api-reference/chatterlang-workbench.md) - Start the interactive web interface for experimenting with ChatterLang
- [`chatterlang_script`](docs/api-reference/chatterlang-script.md) - Run ChatterLang scripts from files or command line
- [`chatterlang_reference_generator`](docs/api-reference/talkpipe-ref.md) - Generate documentation for all available sources and segments
- `chatterlang_reference_browser` - Interactive command-line browser for sources and segments
- [`chatterlang_serve`](docs/api-reference/chatterlang-server.md) - Create a customizable user-accessible web interface and REST API from ChatterLang scripts
- [`talkpipe_plugins`](docs/api-reference/talkpipe-plugin-manager.md) - View and manage TalkPipe plugins
### Jupyter Integration
TalkPipe components work seamlessly in Jupyter notebooks for interactive data analysis.
# Detailed Examples
## Example 1: Multi-Agent Debate
Create agents with different perspectives that debate a topic:
```python
from talkpipe.chatterlang import compiler
script = """
CONST economist_prompt = "You are an economist. Reply in one sentence.";
CONST theologian_prompt = "You are a theologian. Reply in one sentence.";
INPUT FROM echo[data="The US should give free puppies to all children."]
| @topic
| accum[variable=@conversation]
| print;
LOOP 3 TIMES {
INPUT FROM @topic
| llmPrompt[system_prompt=economist_prompt]
| @topic
| accum[variable=@conversation]
| print;
INPUT FROM @topic
| llmPrompt[system_prompt=theologian_prompt]
| @topic
| accum[variable=@conversation]
| print;
};
INPUT FROM @conversation
"""
pipeline = compiler.compile(script).as_function()
debate = pipeline() # Watch the debate unfold!
```
## Example 2: Document Stream Evaluation
Score documents based on relevance to a topic:
```python
import pandas as pd
from talkpipe.chatterlang import compiler
# Sample document data
documents = [
'{"title": "Dog", "description": "Dogs are loyal companions..."}',
'{"title": "Cat", "description": "Cats are independent pets..."}',
'{"title": "Wolf", "description": "Wolves are wild canines..."}'
]
script = """
CONST scorePrompt = "Rate 1-10 how related to dogs this is:";
| loadsJsonl
| llmScore[system_prompt=scorePrompt, model="llama3.2", set_as="dog_relevance"]
| setAs[field_list="dog_relevance.score:relevance_score"]
| toDataFrame
"""
pipeline = compiler.compile(script).as_function(single_in=False, single_out=True)
df = pipeline(documents)
# df now contains relevance scores for each document
```
## Example 3: Web Page Analysis
Download and summarize web content:
```python
from talkpipe.chatterlang import compiler
script = """
| downloadURL
| htmlToText
| llmPrompt[
system_prompt="Summarize this article in 3 bullet points",
model="llama3.2"
]
| print
"""
analyzer = compiler.compile(script).as_function(single_in=True)
analyzer("http://example.com/")
```
## Example 4: Content Evaluation Pipeline
Evaluate and filter articles based on relevance scores:
```python
from talkpipe.chatterlang import compiler
# Sample article data
articles = [
'{"title": "New LLM Model Released", "summary": "AI Company announces new LLM with improved reasoning"}',
'{"title": "Smart Home IoT Devices", "summary": "Review of latest Arduino-based home automation"}',
'{"title": "Cat Videos Go Viral", "summary": "Funny cats take over social media again"}',
'{"title": "RAG Systems in Production", "summary": "How companies deploy retrieval-augmented generation"}',
]
script = """
# Define evaluation prompts
CONST ai_prompt = "Rate 0-10 how relevant this is to AI practitioners. Consider mentions of AI, ML, algorithms, or applications.";
CONST iot_prompt = "Rate 0-10 how relevant this is to IoT researchers. Consider hardware, sensors, or embedded systems.";
# Process articles
| loadsJsonl
| concat[fields="title,summary", set_as="full_text"]
# Score for AI relevance
| llmScore[system_prompt=ai_prompt, field="full_text", set_as="ai_eval", model="llama3.2"]
| setAs[field_list="ai_eval.score:ai_score,ai_eval.explanation:ai_reason"]
# Score for IoT relevance
| llmScore[system_prompt=iot_prompt, field="full_text", set_as="iot_eval", model="llama3.2"]
| setAs[field_list="iot_eval.score:iot_score,iot_eval.explanation:iot_reason"]
# Find highest score
| lambda[expression="max(item['ai_score'],item['iot_score'])", set_as="max_score"]
# Filter articles with score > 6
| gt[field="max_score", n=6]
# Format output
| toDict[field_list="title,ai_score,iot_score,max_score"]
| print
"""
evaluator = compiler.compile(script).as_function(single_in=False, single_out=False)
results = evaluator(articles)
# Output shows only relevant articles with their scores:
# {'title': 'New LLM Model Released', 'ai_score': 9, 'iot_score': 2, 'max_score': 9}
# {'title': 'Smart Home IoT Devices', 'ai_score': 3, 'iot_score': 9, 'max_score': 9}
# {'title': 'RAG Systems in Production', 'ai_score': 8, 'iot_score': 2, 'max_score': 8}
```
## Example 5: RAG Pipeline with Vector Database
Build a complete RAG (Retrieval-Augmented Generation) system with standalone data:
```python
from talkpipe.chatterlang import compiler
# Sample knowledge base documents
documents = [
"TalkPipe is a Python toolkit for building AI workflows. It provides a Unix-like pipeline syntax for chaining data transformations and LLM operations.",
"TalkPipe supports multiple LLM providers including OpenAI, Ollama, and Anthropic. You can switch between providers easily using configuration.",
"With TalkPipe, you can build RAG systems, multi-agent debates, and document processing pipelines. It uses Python generators for memory-efficient streaming.",
"TalkPipe offers two APIs: the Pipe API (internal DSL) for Python code and ChatterLang (external DSL) for concise script-based workflows.",
"Deployment is flexible with TalkPipe - run in Jupyter notebooks, Docker containers, or as standalone applications. The chatterlang_serve tool creates web APIs from scripts."
]
# First, index your documents into a vector database
indexing_script = """
| toDict[field_list="_:text"]
| makeVectorDatabase[
path="./my_knowledge_base",
embedding_model="nomic-embed-text",
embedding_source="ollama",
embedding_field="text",
overwrite=True
]
"""
indexer = compiler.compile(indexing_script).as_function(single_in=False)
indexer(documents)
# Now query the knowledge base with RAG
query_script = """
| toDict[field_list="_:text"]
| ragToText[
path="./my_knowledge_base",
embedding_model="nomic-embed-text",
embedding_source="ollama",
completion_model="llama3.2",
completion_source="ollama",
content_field="text",
prompt_directive="Answer the question based on the background information provided.",
limit=3
]
| print
"""
rag_pipeline = compiler.compile(query_script).as_function(single_in=True)
answer = rag_pipeline("What are the key benefits of using TalkPipe?")
# Returns an LLM-generated answer based on relevant document chunks
# For yes/no questions, use ragToBinaryAnswer:
binary_rag_script = """
| toDict[field_list="_:text"]
| ragToBinaryAnswer[
path="./my_knowledge_base",
embedding_model="nomic-embed-text",
embedding_source="ollama",
completion_model="llama3.2",
completion_source="ollama",
content_field="text"
]
| print
"""
binary_rag = compiler.compile(binary_rag_script).as_function(single_in=True)
result = binary_rag("Does TalkPipe support Docker?")
result = binary_rag("Does TalkPipe have a podcast about pipes?")
# For scored evaluations, use ragToScore:
score_rag_script = """
| toDict[field_list="_:text"]
| ragToScore[
path="./my_knowledge_base",
embedding_model="nomic-embed-text",
embedding_source="ollama",
completion_model="llama3.2",
completion_source="ollama",
prompt_directive="Answer the provided question on a scale of 1 to 5.",
content_field="text"
]
| print
"""
score_rag = compiler.compile(score_rag_script).as_function(single_in=True)
score = score_rag("How flexible is talkpipe?")
score_rag("How well does this text describe pipe smoking?")
```
# Documentation
For comprehensive documentation and examples, see the **[docs/](docs/)** directory:
- **[📚 Documentation Hub](docs/)** - Complete documentation index and navigation
- **[🚀 Getting Started](docs/quickstart.md)** - Installation, concepts, and first pipeline
- **[📖 API Reference](docs/api-reference/)** - Complete command and component reference
- **[🏗️ Architecture](docs/architecture/)** - Technical deep-dives and design concepts
- **[💡 Tutorials](docs/tutorials/)** - Real-world usage examples and patterns
## Quick Reference
| Command | Purpose | Documentation |
|---------|---------|---------------|
| `chatterlang_serve` | Create web APIs and forms | [📄](docs/api-reference/chatterlang-server.md) |
| `chatterlang_workbench` | Interactive web interface | [📄](docs/api-reference/chatterlang-workbench.md) |
| `chatterlang_script` | Run scripts from command line | [📄](docs/api-reference/chatterlang-script.md) |
| `chatterlang_reference_generator` | Generate documentation | [📄](docs/api-reference/talkpipe-ref.md) |
| `chatterlang_reference_browser` | Browse sources/segments interactively | - |
| `talkpipe_plugins` | Manage TalkPipe plugins | [📄](docs/api-reference/talkpipe-plugin-manager.md) |
# Architecture & Development
## Design Principles
### Dual-Language Architecture
- **Internal DSL (Pipe API)**: Pure Python for maximum flexibility and IDE support
- **External DSL (ChatterLang)**: Concise syntax for rapid prototyping
### Streaming Architecture
TalkPipe uses Python generators throughout, enabling:
- Memory-efficient processing of large datasets
- Real-time results as data flows through pipelines
- Natural integration with streaming data sources
### Extensibility First
- Simple decorators (`@source`, `@segment`, `@field_segment`) for adding functionality
- Components are just Python functions - easy to test and debug
- Mix TalkPipe with any Python code or library
## Project Structure
```
talkpipe/
├── app/ # Runnable applications (servers, CLIs)
├── chatterlang/ # ChatterLang parser, compiler, and components
├── data/ # Data manipulation and I/O components
├── llm/ # LLM integrations (OpenAI, Ollama, Anthropic)
├── operations/ # Algorithms and data processing
├── pipe/ # Core pipeline infrastructure
├── pipelines/ # High-level pipeline components (RAG, vector DB)
├── search/ # Search engine integrations (Whoosh, LanceDB)
└── util/ # Utility functions and configuration
```
## Configuration
TalkPipe uses a flexible configuration system via `~/.talkpipe.toml` or environment variables:
```toml
# ~/.talkpipe.toml
default_model_name = "llama3.2"
default_model_source = "ollama"
smtp_server = "smtp.gmail.com"
smtp_port = 587
```
Environment variables use the `TALKPIPE_` prefix:
```bash
export TALKPIPE_email_password="your-password"
export TALKPIPE_openai_api_key="sk-..."
```
### Performance Optimization
TalkPipe includes an optional **lazy loading** feature that can dramatically improve startup performance (up to 18x faster) by deferring module imports until needed:
```bash
# Enable lazy loading for faster startup
export TALKPIPE_LAZY_IMPORT=true
```
This is especially useful for CLI tools and scripts that don't use all TalkPipe features. See the [lazy loading documentation](docs/api-reference/lazy-loading.md) for details.
## Development Guidelines
### Naming Conventions
- **Classes**: `CamelCase` (e.g., `LLMPrompt`)
- **Decorated functions**: `camelCase` (e.g., `@segment def extractText`)
- **ChatterLang names**: `camelCase` (e.g., `llmPrompt`, `toDataFrame`)
### Creating Components
**Sources** generate data:
```python
from talkpipe.pipe import core, io
@core.source()
def fibonacci(n=10):
    a, b = 0, 1
    for _ in range(n):
        yield a
        a, b = b, a + b
# Use it in a pipeline
pipeline = fibonacci(n=5) | io.Print()
result = pipeline.as_function(single_out=False)()
# Output:
# 0
# 1
# 1
# 2
# 3
# Returns: [0, 1, 1, 2, 3]
```
**Segments** transform data:
```python
from talkpipe.pipe import core, io
@core.segment()
def multiplyBy(items, factor=2):
    for item in items:
        yield item * factor
# Use it to triple the Fibonacci numbers from the source example above
pipeline = fibonacci(n=5) | multiplyBy(factor=3) | io.Print()
result = pipeline.as_function(single_out=False)()
# Output:
# 0
# 3
# 3
# 6
# 9
# Returns: [0, 3, 3, 6, 9]
```
**Field Segments** provide a convenient way to create 1:1 segments:
```python
from datetime import datetime
from talkpipe.pipe import core, io
from talkpipe.chatterlang import registry
@registry.register_segment("addTimestamp")
@core.field_segment()
def addTimestamp(item):
    # Handle a single item, not an iterable
    # The decorator handles set_as and field parameters automatically
    return datetime.now()
# Use it with dictionaries
data = [{'name': 'Alice'}, {'name': 'Bob'}]
pipeline = addTimestamp(set_as="timestamp") | io.Print()
result = pipeline.as_function(single_in=False, single_out=False)(data)
# Output (timestamps will vary):
# {'name': 'Alice', 'timestamp': datetime.datetime(2024, 1, 15, 10, 30, 45, 123456)}
# {'name': 'Bob', 'timestamp': datetime.datetime(2024, 1, 15, 10, 30, 45, 234567)}
# Now it's also available in ChatterLang:
# script = '| addTimestamp[set_as="timestamp"] | print'
```
### Best Practices
1. **Units with side effects should pass data through** - e.g., `writeFile` should yield items after writing
2. **Use descriptive parameter names** with underscores (e.g., `fail_on_error`, `set_as`)
3. **Handle errors gracefully** - use `fail_on_error` parameter pattern
4. **Document with docstrings** - they appear in generated documentation
5. **Test with both APIs** - ensure components work in both Python and ChatterLang
## Roadmap & Contributing
TalkPipe is under active development. Current priorities:
- **Enhanced LLM Support**: Additional providers, expanded guided generation
- **Data Connectors**: More database integrations, API clients, file formats
- **Workflow Features**: Conditional branching, enhanced error handling, retry logic
- **Performance**: Parallel processing optimization, enhanced lazy loading, better caching
- **Developer Tools**: Better debugging, testing utilities, IDE plugins
- **RAG & Search**: Advanced retrieval strategies, hybrid search, multi-modal embeddings
We welcome contributions! Whether it's new components, bug fixes, documentation, or examples, please check our [GitHub repository](https://github.com/sandialabs/talkpipe) for contribution guidelines.
## Status
TalkPipe is currently in active development. While feature-rich and actively used, APIs may evolve. We follow semantic versioning - minor versions maintain compatibility within the same major version, while major version changes may include breaking changes.
## License
TalkPipe is licensed under the Apache License 2.0. See LICENSE file for details.
# Developer Documentation
## Glossary
* **Unit** - A component in a pipeline that either produces or processes data. There are two types of units, Source, and Segments.
* **Segment** - A unit that reads from another Unit and may or may not yield data of its own. All units that are not at the start of a pipeline are Segments.
* **Source** - A unit that takes nothing as input and yields data items. These Units are used in the
"INPUT FROM..." portion of a pipeline.
## Conventions
### Versioning
This codebase uses [semantic versioning](https://semver.org/) with the additional convention that, during 0.x.y development, each MINOR version will mostly maintain backward compatibility and PATCH versions may include substantial new capability. So, for example, every 0.2.x version will be mostly backward compatible, but 0.3.0 might contain code reorganization.
### Codebase Structure
The following are the main breakdown of the codebase. These should be considered firm but not strict breakdowns. Sometimes a source could fit within either operations or data, for example.
* **talkpipe.app** - Contains the primary runnable applications.
* Example: chatterlang_script
* **talkpipe.operations** - Contains general algorithm implementations. Associated segments and sources can be included next to the algorithm implementations, but the algorithms themselves should also work stand-alone.
* Example: bloom filters
* **talkpipe.data** - Contains components for complex, type-specific data manipulation.
* Example: extracting text from files.
* **talkpipe.llm** - Contains the abstract classes and implementations for accessing LLMs, both code for accessing specific LLMs and code for doing prompting.
* Example: Code for talking with Ollama or OpenAI
* **talkpipe.pipe** - Code that implements the core classes and decorators for the Pipe API, as well as miscellaneous helper segments and sources.
* Example: echo and the definition of the @segment decorator
* **talkpipe.chatterlang** - The definition, parsers, and compiler for the ChatterLang language, as well as any ChatterLang-specific segments and sources.
* Example: the chatterlang compiler and the variable segment
### Source/Segment Names
- **For your own Units, do whatever you want!** These conventions are for authors writing units intended for broader reuse.
- **Classes that implement Units** are named in CamelCase with the initial letter in uppercase.
- **Units defined using `@segment` and `@source` decorators** should be named in camelCase with an initial lowercase letter.
- In **ChatterLang**, sources and segments also use camelCase with an initial lowercase letter.
- Except for the **`cast`** segment, segments that convert data into a specific format—whether they process items one-by-one or drain the entire input—should be named using the form `[tT]oX`, where **X** is the output data type (e.g., `toDataFrame` outputs a pandas DataFrame).
- **Segments that write files** use the form `[Ww]riteX`, where **X** is the file type (e.g., `writeExcel` writes an Excel file, `writePickle` writes a pickle file).
- **Segments that read files** use the form `[Rr]eadX`, where **X** is the file type (e.g., `readExcel` should read an Excel file).
- **Parameter names in segments** should be in all lower case with words separated by an underscore (_)
### Parameter Names
These parameter names should behave consistently across all units:
- **item** should be used in field_segment, referring to the item passed to the function. It will not
be a parameter to the segment in ChatterLang.
- **items** is used in segment definitions, referring to the iterable over all the pieces of data in the stream. It is not exposed as a parameter in ChatterLang.
- **set_as**
If used, any processed output is attached to the original data using bracket notation. The original item is then emitted.
- **fail_on_error**
If True, an exception raised during the operation should propagate, likely aborting the pipeline. If False, the operation should continue, yielding either None or nothing depending on the segment or source, and a warning message should be logged.
- **field**
Specifies that the unit should operate on data accessed via “field syntax.” This syntax can include indices, properties, or parameter-free methods, separated by periods.
- For example, given `{"X": ["a", "b", ["c", "d"]]}`, the field `"X.2.0"` refers to `"c"`.
- **field_list**
Specifies that a list of fields can or should be provided, with each field separated by a comma. In some cases, each field needs to be mapped to some other name. In those cases, the field and name should be separated by a colon. In field_lists, the underscore (_) refers to the item as a whole.
- For example, "X.2.0:SomeName,X.1:SomeOtherName". If no name is provided, the field name itself is used. Where only a list of fields is needed and no names, the names can still be provided but have no effect. (See the sketch after this list.)
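A short sketch of the field syntax in practice, assuming `toDict` builds a new dictionary from the listed fields as in the content-evaluation example above:
```python
from talkpipe.chatterlang import compiler

# "X.2.0" walks the nested structure: key "X", index 2, index 0 -> "c"
data = [{"X": ["a", "b", ["c", "d"]]}]
script = '| toDict[field_list="X.2.0:letter,X.1:second"] | print'
pipeline = compiler.compile(script).as_function(single_in=False, single_out=False)
result = pipeline(data)
# Expected: [{'letter': 'c', 'second': 'b'}]
```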
### General Behavior Principles
* Units that have side effects (e.g. writing data to a disk) should generally also pass
on their data.
### Source and Segment Reference
The chatterlang_workbench command starts a web service designed for experimentation. It also contains links to HTML and text versions
of all the sources and segments included in TalkPipe.
After talkpipe is installed, a script called "chatterlang_reference_browser" is available that provides interactive command-line search and exploration of sources and segments. The command "chatterlang_reference_generator" generates single-page HTML and text versions of all the source and segment documentation.
### Standard Configuration File Items
Configuration constants can be defined either in ~/.talkpipe.toml or in environment variables. Any constant defined in an environment variable needs to be prefixed with TALKPIPE_. So email_password, stored in an environment variable, needs to be TALKPIPE_email_password. Note that in ChatterLang, any configuration value stored this way can be specified as a parameter using $var_name, which is dereferenced to the environment variable TALKPIPE_var_name or to var_name in talkpipe.toml.
* **default_embedding_source** - The default source (e.g. ollama) to be used for creating sentence embeddings.
* **default_embedding_model_name** - The name of the LLM model to be used for creating sentence embeddings.
* **default_model_name** - The default name of a LLM model to be used in chat
* **default_model_source** - The default source (e.g. ollama) to be used in chat
* **email_password** - Password for the SMTP server
* **logger_files** - Files to store logs, in the form logger1:fname1,logger2:fname2,...
* **logger_levels** - Logger levels in the form logger1:level1,logger2:level2
* **recipient_email** - Who should receive a sent email
* **rss_url** - The default URL used by the rss segment
* **sender_email** - Who the sender of an email should be
* **smtp_port** - SMTP server port
* **smtp_server** - SMTP server hostname
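For example, the default model settings above can be dereferenced directly in a script with `$var_name` (a sketch; `llmPrompt`'s `model` and `source` parameters appear in the Quick Start):
```
INPUT FROM echo[data="Hello"]
| llmPrompt[model=$default_model_name, source=$default_model_source]
| print
```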
---
Last Reviewed: 20251128
| text/markdown | null | Travis Bauer <tlbauer@sandia.gov> | null | Travis Bauer <tlbauer@sandia.gov> | null | ai | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"prompt_toolkit",
"parsy",
"pydantic",
"requests",
"numpy",
"python-docx",
"pandas",
"feedparser",
"readability-lxml",
"lxml",
"lxml_html_clean",
"fastapi[standard]",
"ipywidgets",
"pymongo",
"scikit-learn",
"uvicorn",
"whoosh",
"lancedb",
"deprecated",
"pyyaml",
"greenlet",
... | [] | [] | [] | [
"Homepage, https://github.com/sandialabs/talkpipe"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T13:03:34.770992 | talkpipe-0.11.4.tar.gz | 3,703,876 | bc/7f/d6ebc4a30f191f680403f0b28fa719e200c5b392651b3217bb9b3a9ca189/talkpipe-0.11.4.tar.gz | source | sdist | null | false | 121a2fa44469fe406727c08b0ff33551 | 339c8ea085143d08e48ff95abaae8ff6d1f800fc0f8ada73d81e206e804d6fca | bc7fd6ebc4a30f191f680403f0b28fa719e200c5b392651b3217bb9b3a9ca189 | Apache-2.0 | [
"LICENSE"
] | 246 |
2.4 | apify-client | 2.5.0 | Apify API client for Python | <h1 align=center>Apify API client for Python</h1>
<p align="center">
<a href="https://badge.fury.io/py/apify-client" rel="nofollow"><img src="https://badge.fury.io/py/apify-client.svg" alt="PyPI package version"></a>
<a href="https://pypi.org/project/apify-client/" rel="nofollow"><img src="https://img.shields.io/pypi/dm/apify-client" alt="PyPI package downloads"></a>
<a href="https://codecov.io/gh/apify/apify-client-python"><img src="https://codecov.io/gh/apify/apify-client-python/graph/badge.svg?token=TYQQWYYZ7A" alt="Codecov report"></a>
<a href="https://pypi.org/project/apify-client/" rel="nofollow"><img src="https://img.shields.io/pypi/pyversions/apify-client" alt="PyPI Python version"></a>
<a href="https://discord.gg/jyEM2PRvMU" rel="nofollow"><img src="https://img.shields.io/discord/801163717915574323?label=discord" alt="Chat on Discord"></a>
</p>
The Apify API Client for Python is the official library to access the [Apify API](https://docs.apify.com/api/v2) from your Python applications. It provides useful features like automatic retries and convenience functions to improve your experience with the Apify API.
If you want to develop Apify Actors in Python, check out the [Apify SDK for Python](https://docs.apify.com/sdk/python) instead.
## Installation
Requires Python 3.10+
You can install the package from its [PyPI listing](https://pypi.org/project/apify-client). To do that, simply run `pip install apify-client` in your terminal.
## Usage
For usage instructions, check the documentation on [Apify Docs](https://docs.apify.com/api/client/python/).
## Quick Start
```python
from apify_client import ApifyClient
apify_client = ApifyClient('MY-APIFY-TOKEN')
# Start an Actor and wait for it to finish
actor_call = apify_client.actor('john-doe/my-cool-actor').call()
# Fetch results from the Actor's default dataset
dataset_items = apify_client.dataset(actor_call['defaultDatasetId']).list_items().items
```
## Features
Besides greatly simplifying the process of querying the Apify API, the client provides other useful features.
### Automatic parsing and error handling
Based on the endpoint, the client automatically extracts the relevant data and returns it in the expected format. Date strings are automatically converted to `datetime.datetime` objects. For exceptions, we throw an `ApifyApiError`, which wraps the plain JSON errors returned by API and enriches them with other context for easier debugging.
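For example, API failures can be caught and inspected (a sketch; the exact import path of `ApifyApiError` is an assumption, adjust it to your installed version):
```python
from apify_client import ApifyClient, ApifyApiError  # ApifyApiError export path is an assumption

apify_client = ApifyClient('MY-APIFY-TOKEN')

try:
    actor_call = apify_client.actor('john-doe/my-cool-actor').call()
except ApifyApiError as exc:
    # The exception wraps the API's JSON error and adds context for debugging
    print(f'Apify API error: {exc}')
```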
### Retries with exponential backoff
Network communication sometimes fails. The client will automatically retry requests that failed due to a network error, an internal error of the Apify API (HTTP 500+) or rate limit error (HTTP 429). By default, it will retry up to 8 times. First retry will be attempted after ~500ms, second after ~1000ms and so on. You can configure those parameters using the `max_retries` and `min_delay_between_retries_millis` options of the `ApifyClient` constructor.
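A sketch using the two constructor options named above to allow more retries with a longer initial backoff:
```python
from apify_client import ApifyClient

# Retry up to 10 times, starting at ~1s and roughly doubling each attempt
apify_client = ApifyClient(
    'MY-APIFY-TOKEN',
    max_retries=10,
    min_delay_between_retries_millis=1000,
)
```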
### Support for asynchronous usage
Starting with version 1.0.0, the package offers an asynchronous version of the client, [`ApifyClientAsync`](https://docs.apify.com/api/client/python), which allows you to work with the Apify API in an asynchronous way, using the standard `async`/`await` syntax.
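A sketch mirroring the synchronous quick start with the async client:
```python
import asyncio

from apify_client import ApifyClientAsync

async def main() -> None:
    apify_client = ApifyClientAsync('MY-APIFY-TOKEN')
    # Same flow as the synchronous quick start, using await
    actor_call = await apify_client.actor('john-doe/my-cool-actor').call()
    dataset_items = (await apify_client.dataset(actor_call['defaultDatasetId']).list_items()).items

asyncio.run(main())
```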
### Convenience functions and options
Some actions can't be performed by the API itself, such as indefinite waiting for an Actor run to finish (because of network timeouts). The client provides convenient `call()` and `wait_for_finish()` functions that do that. Key-value store records can be retrieved as objects, buffers or streams via the respective options, dataset items can be fetched as individual objects or serialized data and we plan to add better stream support and async iterators.
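For example, a run can be started without blocking and then waited on (a sketch; `start()` and the run-client access follow the resource-client pattern from the quick start):
```python
from apify_client import ApifyClient

apify_client = ApifyClient('MY-APIFY-TOKEN')

# Start the run without blocking, then wait indefinitely for it to finish
run = apify_client.actor('john-doe/my-cool-actor').start()
finished_run = apify_client.run(run['id']).wait_for_finish()
print(finished_run['status'])
```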
| text/markdown | null | "Apify Technologies s.r.o." <support@apify.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2023 Apify Technologies s.r.o.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | api, apify, automation, client, crawling, scraping | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Langu... | [] | null | null | >=3.10 | [] | [] | [] | [
"apify-shared<3.0.0,>=2.1.0",
"colorama>=0.4.0",
"impit>=0.9.2",
"more-itertools>=10.0.0"
] | [] | [] | [] | [
"Apify Homepage, https://apify.com",
"Homepage, https://docs.apify.com/api/client/python/",
"Changelog, https://docs.apify.com/api/client/python/docs/changelog",
"Discord, https://discord.com/invite/jyEM2PRvMU",
"Documentation, https://docs.apify.com/api/client/python/docs/overview/introduction",
"Issue T... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:03:16.083162 | apify_client-2.5.0.tar.gz | 377,916 | 78/6a/b872d6bbc84c6aaf27b455492c6ff1bd057fea302c5d40619c733d48a718/apify_client-2.5.0.tar.gz | source | sdist | null | false | d06ce12be0952ebe18b8e77ea6cb6c6d | daa2af6a50e573f78bd46a4728a3f2be76cee93cf5c4ff9d0fd38b6756792689 | 786ab872d6bbc84c6aaf27b455492c6ff1bd057fea302c5d40619c733d48a718 | null | [
"LICENSE"
] | 54,511 |
2.4 | luna-model | 0.5.0 | Aqarios LunaModel: Symbolic modeling for optimization | # Symbolic modeling for optimization
## Summary
LunaModel is a high-performance symbolic modeling library for describing, translating and transforming optimization problems.
It provides the following high-level features:
- System for defining symbolic algebraic expressions of arbitrary degree, constraints and optimization models (like dimod, gurobi or cplex)
- Translations from and to a LunaModel for many common optimization model formats (like LP)
- Transformations to map a LunaModel from a general model to a specific model, such as transforming a Constrained (Binary) Quadratic Model (CQM) to an (Unconstrained) Binary Quadratic Model (BQM), or from an Integer Model to a Binary Model.
- Builtin serialization for maximum portability
- Python-first development experience
You can use LunaModel as a standalone package or by using [luna-quantum](https://pypi.org/project/luna-quantum/) which gives you additional builtin functionality to solve your optimization problems using the [Luna Platform](https://aqarios.com/platform).
## About LunaModel
Most optimization tasks involve working with problems, which generally consist of an objective function,
whether this objective function should be minimized or maximized, and, optionally, constraints on the problem itself.
LunaModel consists of the following components:
| Component | Description |
| ---------------------------- | ------------------------------------------------------------------------- |
| **LunaModel** | A symbolic modeling library for arbitrary optimization models (problems). |
| **LunaModel.translator** | A translation library that supports many common model formats. |
| **LunaModel.transformation** | A compilation and transpilation stack to transform a model (source) into a target representation (target). |
| **LunaModel.utils** | Utility functions for expression and model creation. |
| **LunaModel.errors** | All error types that can be raised within LunaModel. |
LunaModel is usually used as either:
- A replacement for plain LP files, dimod or similar frameworks to define optimization models.
- Part of [luna-quantum](https://pypi.org/project/luna-quantum/) to solve arbitrary optimization problems.
### A Symbolic Modeling Library
With LunaModel you can define symbolic Expressions and Constraints (_which consist of a left-hand side (lhs) that is an Expression, a right-hand side (rhs) that is a constant numerical value, and a Comparator_).
A Model defining arbitrary optimization problems consists of a single Expression as the objective function (_the function to be optimized_) and, optionally, one or more Constraints.
Expressions are created using mathematical operations on Variables. Variables represent an unknown in the Expression whose value is determined by the optimization. By default variables are Binary, but they can represent any of the following Variable types:
- **Binary**: the variable can be either $0$ or $1$.
- **Spin**: the variable can be either $-1$ or $+1$.
- **Integer**: the variable can be any integer number $\in [-(2^{64}-1), 2^{64}-1]$ (_for a 64-bit system_).
- **Real**: the variable can be any floating point number $\in [\approx -1.7976...E308, \approx +1.7976...E308]$ (_[-f64::MAX, f64::MAX]_).
_In general not all variable types are supported by all optimizers you can find. It can be the case that a defined model cannot be natively translated into the expected format of an optimizer. To resolve this you can use **LunaModel.transformation**._
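For instance, creating one variable of each type might look like this (a minimal sketch: `Vtype.BINARY` and `Vtype.INTEGER` with `lower`/`upper` appear in the examples below, while `Vtype.SPIN`, `Vtype.REAL`, and the default `sense` are assumed by analogy):

```python
from luna_model import Model, Vtype

model = Model(name="VariableTypes")  # default sense assumed; the examples below set Sense.MAX explicitly
b = model.add_variable("b", vtype=Vtype.BINARY)                       # 0 or 1
s = model.add_variable("s", vtype=Vtype.SPIN)                         # -1 or +1 (name assumed)
k = model.add_variable("k", vtype=Vtype.INTEGER, lower=0, upper=10)   # bounded integer
x = model.add_variable("x", vtype=Vtype.REAL, lower=0.0, upper=1.0)   # bounded real (name assumed)
```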
Let's have a look at the **Knapsack Problem** for defining an optimization problem using only Binary variables.
We have $n$ items $x_1, x_2, \dots, x_n$, each with a weight $w_i$ and a value $v_i$, and a maximum capacity of $W$.
The optimization problem is defined as:
```math
\begin{align*}
&\text{maximize} \quad \sum_{i=1}^{n} v_i x_i \\
&\text{subject to} \quad \sum_{i=1}^{n} w_i x_i \leq W \quad \text{and} \quad x_i \in \{ 0, 1 \}
\end{align*}
```
Using LunaModel and $n = 5$ and $W = 25$:
```python
from luna_model import Expression, Model, Sense, Vtype
# A faster alternative to creating Expressions using loops in Python.
from luna_model.utils import quicksum
# Initialize the known values:
n: int = 5 # number of items.
W: int = 25 # maximum capacity.
weights: list[float] = [ 1.5, 10.0, 5.2, 3.5, 8.32] # weight of each item.
values: list[float] = [10.0, 22.0, 3.2, 1.99, 6.25] # value of each item.
# First, we create the Model with its sense set to Maximize the objective function.
# You can also give your model a name; this is optional but recommended.
model = Model(sense=Sense.MAX, name="Knapsack")
# Next, we need to create all variables. Note, there are alternative ways to create
# variables, you can find details in the LunaModel docs.
variables = [model.add_variable(f"x_{i+1}", vtype=Vtype.BINARY) for i in range(n)]
# Now we can define the objective function:
model.objective = quicksum(values[i] * variables[i] for i in range(n))
# And for the constraints:
# Ensure the maximum capacity of `W`:
model.constraints += quicksum(weights[i] * variables[i] for i in range(n)) <= W
# The second constraint that all `x_i` are in [0, 1] is natively encoded by using
# Binary variables.
print(model) # to display the model.
```
As an extension, the **Bounded Knapsack Problem (BKP)** with a maximum number of each item $c = 4$ can be defined like this:
```math
\begin{align*}
&\text{maximize} \quad \sum_{i=1}^{n} v_i x_i \\
&\text{subject to} \quad \sum_{i=1}^{n} w_i x_i \leq W \quad \text{and} \quad x_i \in \{ 0, 1, 2, \dots, c \}
\end{align*}
```
Now we have two equivalent approaches to implement this using LunaModel:
_Note that we have to use Integer variables now._
- Using Bounds on the variables:
```python
from luna_model import Expression, Model, Sense, Vtype, Bounds
# A faster alternative to creating Expressions using loops in Python.
from luna_model.utils import quicksum
# Initialize the known values:
c: int = 4 # maximum number of each item.
n: int = 5 # number of items.
W: int = 25 # maximum capacity.
weights: list[float] = [ 1.5, 10.0, 5.2, 3.5, 8.32] # weight of each item.
values: list[float] = [10.0, 22.0, 3.2, 1.99, 6.25] # value of each item.
# First, we create the Model with its sense set to Maximize the objective function.
# You can also give your model a name; this is optional but recommended.
model = Model(sense=Sense.MAX, name="Bounded Knapsack")
# Next, we need to create all variables. Note, there are alternative ways to create
# variables, you can find details in the LunaModel docs.
variables = [
# We can have each item at least `0` times and at most `c` times.
model.add_variable(f"x_{i+1}", vtype=Vtype.INTEGER, lower=0, upper=c)
for i in range(n)
]
# Now we can define the objective function:
model.objective = quicksum(values[i] * variables[i] for i in range(n))
# And for the constraints:
# Ensure the maximum capacity of `W`:
model.constraints += quicksum(weights[i] * variables[i] for i in range(n)) <= W
# The second constraint that all `x_i` are in [0, 1, 2, ..., c] is natively encoded
# by using Bounds on the Integer variables.
print(model)
```
- Using a Constraint for each variable:
```python
from luna_model import Expression, Model, Sense, Vtype, Bounds
# A faster alternative to creating Expressions using loops in Python.
from luna_model.utils import quicksum
# Initialize the known values:
c: int = 4 # maximum number of each item.
n: int = 5 # number of items.
W: int = 25 # maximum capacity.
weights: list[float] = [ 1.5, 10.0, 5.2, 3.5, 8.32] # weight of each item.
values: list[float] = [10.0, 22.0, 3.2, 1.99, 6.25] # value of each item.
# First, we create the Model with its sense set to Maximize the objective function.
# You can also give your model a name; this is optional but recommended.
model = Model(sense=Sense.MAX, name="Bounded Knapsack")
# Next, we need to create all variables. Note, there are alternative ways to create
# variables, you can find details in the LunaModel docs.
variables = [
model.add_variable(f"x_{i+1}", vtype=Vtype.INTEGER)
for i in range(n)
]
# Now we can define the objective function:
model.objective = quicksum(values[i] * variables[i] for i in range(n))
# And for the constraints:
# Ensure the maximum capacity of `W`:
model.constraints += quicksum(weights[i] * variables[i] for i in range(n)) <= W
# The second constraint that all `x_i` are in [0, 1, 2, ..., c]:
for i in range(n):
model.constraints += variables[i] <= c
model.constraints += variables[i] >= 0
print(model)
```
| text/markdown; charset=UTF-8; variant=GFM | null | Aqarios GmbH <pypi@aqarios.com> | null | null | null | aqarios, luna, quantum computing, quantum optimization, optimization, modeling | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering"
] | [] | https://www.aqarios.com | null | <3.15,>=3.11 | [] | [] | [] | [
"numpy>=1.0.0",
"typing-extensions>=4.15.0; python_full_version < \"3.13\"",
"dimod>=0.12.21; extra == \"dimod\"",
"qiskit-optimization>=0.6.1; extra == \"qiskit\"",
"qiskit>=2.1.2; extra == \"qiskit\"",
"pyscipopt>=6.0.0; extra == \"scip\""
] | [] | [] | [] | [
"Documentation, https://docs.aqarios.com",
"Homepage, https://aqarios.com"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T13:03:11.607085 | luna_model-0.5.0-cp312-cp312-win_amd64.whl | 2,502,247 | 2e/0d/f99d935f868c662c7dd5d84fdcbe80698bce5e1cb92439b339d2e171c5a1/luna_model-0.5.0-cp312-cp312-win_amd64.whl | cp312 | bdist_wheel | null | false | 970dcf78d1be5092465608e4a865a942 | 7d05c0c96c00d0fda272b1e15c8bcfa4118214357d4d63d8aa75e26c7de1c087 | 2e0df99d935f868c662c7dd5d84fdcbe80698bce5e1cb92439b339d2e171c5a1 | Apache-2.0 | [
"LICENSE",
"THIRD_PARTY_LICENSES.txt"
] | 2,454 |
2.4 | rwrapr | 0.9.4 | Python package for using R in Python | # RWrapR
<!-- Badges: PyPI version · status · Python versions · license · documentation · tests · Sonar coverage · Sonar quality · pre-commit · Black · Ruff · Poetry -->
[pypi status]: https://pypi.org/project/ssb-rwrapr/
[documentation]: https://statisticsnorway.github.io/ssb-rwrapr
[tests]: https://github.com/statisticsnorway/ssb-rwrapr/actions?workflow=Tests
[sonarcov]: https://sonarcloud.io/summary/overall?id=statisticsnorway_ssb-rwrapr
[sonarquality]: https://sonarcloud.io/summary/overall?id=statisticsnorway_ssb-rwrapr
[pre-commit]: https://github.com/pre-commit/pre-commit
[black]: https://github.com/psf/black
[poetry]: https://python-poetry.org/
## Features <img src="images/WrapR-logo.png" alt="Logo" align = "right" height="139" class="logo">
`RWrapR` is a `python` package for using R from within Python.
It is built using `rpy2`, but aims to be more convenient to use.
Ideally you should never have to worry about using `R` objects,
instead treating `R` functions as normal `python` functions, where the inputs
and outputs are `python` objects.
```python
import rwrapr as wr
import pandas as pd
import numpy as np
dplyr = wr.library("dplyr")
dt = wr.library("datasets")
dplyr.last(x=np.array([1, 2, 3, 4]))
dplyr.last(x=[1, 2, 3, 4])
iris = dt.iris
df = dplyr.mutate(iris, Sepal=wr.Lazily("round(Sepal.Length * 2, 0)"))
```
## To do
1. Better warning handling (this will likely be tricky)
- Sometimes we will get datatypes which are incompatible,
e.g., warning accompanied by
2. Better handling of missing values.
## Requirements
- `R` must be installed
## Installation
You can install _RWrapR_ via [pip] from [PyPI]:
```console
pip install rwrapr
```
## Usage
Please see the [Reference Guide] for details.
## Managing R dependencies
`RWrapR` will automatically install the required R packages, using the
global library path. Sometimes this is not desirable, and you may want to
use the `renv` package to manage your `R` dependencies. To do this, you can
use `renv` via the `rwrapr` package.
```python
import rwrapr as wr
renv = wr.library("renv") # note you must install renv globally first
renv.init() # initialize renv
renv.install("statisticsnorway/ssb-metodebiblioteket")
renv.install("metodebiblioteket")
renv.snapshot(type="all") # update lock-file
```
If you are using `.ipynb` files, you should add `renv.autoload()` to the
top of your notebook to ensure that the correct `R` environment is loaded.
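A minimal notebook preamble might then look like this (a sketch using only the `wr.library` and `renv.autoload()` calls shown above):

```python
import rwrapr as wr

renv = wr.library("renv")
renv.autoload()  # activate the project's renv environment first
dplyr = wr.library("dplyr")  # subsequent libraries resolve against that environment
```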
For further details, see the [Renv Article](RENV.md)
## Contributing
Contributions are very welcome.
To learn more, see the [Contributor Guide].
## License
Distributed under the terms of the [MIT license][license],
_RWrapR_ is free and open source software.
## Issues
If you encounter any problems,
please [file an issue] along with a detailed description.
## Credits
This project was generated from [Statistics Norway]'s [SSB PyPI Template].
[statistics norway]: https://www.ssb.no/en
[pypi]: https://pypi.org/
[ssb pypi template]: https://github.com/statisticsnorway/ssb-pypitemplate
[file an issue]: https://github.com/statisticsnorway/ssb-rwrapr/issues
[pip]: https://pip.pypa.io/
<!-- github-only -->
[license]: https://github.com/statisticsnorway/ssb-rwrapr/blob/main/LICENSE
[contributor guide]: https://github.com/statisticsnorway/ssb-rwrapr/blob/main/CONTRIBUTING.md
[reference guide]: https://statisticsnorway.github.io/ssb-rwrapr/reference.html
| text/markdown | Kjell Solem Slupphaug | kjell.solem.slupphaug@ssb.no | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Langua... | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"jinja2>=3.1.5",
"numpy>=1.26.4",
"pandas>=2.2.0",
"rpy2>=3.5.16",
"scipy>=1.3",
"termcolor>=2.4.0"
] | [] | [] | [] | [
"Changelog, https://github.com/statisticsnorway/ssb-rwrapr/releases",
"Documentation, https://statisticsnorway.github.io/ssb-rwrapr",
"Homepage, https://github.com/statisticsnorway/ssb-rwrapr",
"Repository, https://github.com/statisticsnorway/ssb-rwrapr"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:03:09.549426 | rwrapr-0.9.4.tar.gz | 19,082 | 73/17/8982cca3b20fb3fd1b5d8e966217399b5f21f71a8723665787c8b1d5f0d9/rwrapr-0.9.4.tar.gz | source | sdist | null | false | 6dac2c5fa765b2ff5150615a4ecc8eaf | 6b16f2be66d7d4062212146b78837aaa4e78f273daf8ae2440a9a8b2f064b21b | 73178982cca3b20fb3fd1b5d8e966217399b5f21f71a8723665787c8b1d5f0d9 | null | [
"LICENSE"
] | 303 |
2.1 | odoo-addon-mail-multicompany | 19.0.1.0.0.2 | Email Gateway Multi company | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
===========================
Email Gateway Multi company
===========================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:b5d9d2a104c477258dd8b1a3ae1ef245e881b90c74ed22e96e3c10cfe51f4886
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fmulti--company-lightgray.png?logo=github
:target: https://github.com/OCA/multi-company/tree/19.0/mail_multicompany
:alt: OCA/multi-company
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/multi-company-19-0/multi-company-19-0-mail_multicompany
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/multi-company&target_branch=19.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module adds company_id to the ir.mail_server and mail.message
models. It also inherits the mail.message create function to set the
company's mail server.
**Table of contents**
.. contents::
:local:
Configuration
=============
- Go to 'Settings / Technical / Outgoing Mail Servers', and add the
company.
Usage
=====
To use this module, you need to:
- Send some email or message that comes out of Odoo.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/multi-company/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/multi-company/issues/new?body=module:%20mail_multicompany%0Aversion:%2019.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Comunitea
Contributors
------------
- Jesús Ventosinos Mayor <jesus@comunitea.com>
- Cédric Pigeon <cedric.pigeon@acsone.eu>
- Valentin Vinagre <valentin.vinagre@sygel.es>
- `Heliconia Solutions Pvt. Ltd. <https://www.heliconia.io>`_
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-luisg123v| image:: https://github.com/luisg123v.png?size=40px
:target: https://github.com/luisg123v
:alt: luisg123v
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-luisg123v|
This module is part of the `OCA/multi-company <https://github.com/OCA/multi-company/tree/19.0/mail_multicompany>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Odoo Community Association (OCA), Comunitea | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 19.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/multi-company | null | null | [] | [] | [] | [
"odoo==19.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T13:02:37.399222 | odoo_addon_mail_multicompany-19.0.1.0.0.2-py3-none-any.whl | 26,501 | 7f/bb/1d6caa81fc60c47a53d4b6de63f6ea100f5fd1d90d86258eddba1806f9f5/odoo_addon_mail_multicompany-19.0.1.0.0.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 3e73aecaa78dad473a0a5e612a8e1920 | 98e4b0700791540e5e170593f29cbdc7131d73f33797bcd0a96a5321e4a7ad52 | 7fbb1d6caa81fc60c47a53d4b6de63f6ea100f5fd1d90d86258eddba1806f9f5 | null | [] | 106 |
2.4 | Topsis-Tarshdeep-102316050 | 0.2 | TOPSIS implementation for multi-criteria decision making | # Topsis-Tarshdeep-102316050
This Python package implements TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) for multi-criteria decision making.
## Installation
```bash
pip install Topsis-Tarshdeep-102316050
```
| text/markdown | Tarshdeep Kaur | tarshdeepkaur1@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/Tarshdeep2210/Topsis-Tarshdeep-102316050 | null | >=3.6 | [] | [] | [] | [
"pandas",
"numpy"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.11 | 2026-02-18T13:02:21.859349 | topsis_tarshdeep_102316050-0.2.tar.gz | 1,790 | 6d/8f/ef2c1b1b1bc3a9ea0f7435a272b7576290995e471355ad5d36e4bc6cb2a4/topsis_tarshdeep_102316050-0.2.tar.gz | source | sdist | null | false | d04dc02740ed15d92b4732f07bf06453 | 10e2b07bbf4a3580e66a6b05f0e3aee8589e9d202dbdd8a0289a320a0ac2681c | 6d8fef2c1b1b1bc3a9ea0f7435a272b7576290995e471355ad5d36e4bc6cb2a4 | null | [
"LICENSE"
] | 0 |
2.4 | anemoi-transform | 0.1.26 | A package to hold various data transformation functions to support training of ML models on ECMWF data. | # anemoi-transform
<p align="center">
<a href="https://github.com/ecmwf/codex/raw/refs/heads/main/Project Maturity">
<img src="https://github.com/ecmwf/codex/raw/refs/heads/main/Project Maturity/incubating_badge.svg" alt="Maturity Level">
</a>
<a href="https://opensource.org/licenses/apache-2-0">
<img src="https://img.shields.io/badge/Licence-Apache 2.0-blue.svg" alt="Licence">
</a>
<a href="https://github.com/ecmwf/anemoi-transform/releases">
<img src="https://img.shields.io/github/v/release/ecmwf/anemoi-transform?color=purple&label=Release" alt="Latest Release">
</a>
</p>
> [!IMPORTANT]
> This software is **Incubating** and subject to ECMWF's guidelines on [Software Maturity](https://github.com/ecmwf/codex/raw/refs/heads/main/Project%20Maturity).
## Documentation
The documentation can be found at https://anemoi-transform.readthedocs.io/.
## Contributing
You can find information about contributing to Anemoi at our [Contribution page](https://anemoi.readthedocs.io/en/latest/contributing/contributing.html).
## Install
Install via `pip` with:
```
$ pip install anemoi-transform
```
## License
```
Copyright 2024-2025, Anemoi Contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
In applying this licence, ECMWF does not waive the privileges and immunities
granted to it by virtue of its status as an intergovernmental organisation
nor does it submit to any jurisdiction.
```
| text/markdown | null | "European Centre for Medium-Range Weather Forecasts (ECMWF)" <software.support@ecmwf.int> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2024-2025 Anemoi Contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| ai, tools | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programmi... | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"anemoi-utils>=0.4.36",
"cfunits",
"earthkit-data<1,>=0.12.4",
"earthkit-geo>=0.3",
"earthkit-meteo>=0.4.1",
"earthkit-regrid>=0.4",
"anemoi-transform[plots]; extra == \"all\"",
"anemoi-transform[all,docs,tests]; extra == \"dev\"",
"nbsphinx; extra == \"docs\"",
"numpydoc; extra == \"docs\"",
"p... | [] | [] | [] | [
"Documentation, https://anemoi-transform.readthedocs.io/",
"Homepage, https://github.com/ecmwf/anemoi-transform/",
"Issues, https://github.com/ecmwf/anemoi-transform/issues",
"Repository, https://github.com/ecmwf/anemoi-transform/"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T13:02:06.540866 | anemoi_transform-0.1.26.tar.gz | 164,845 | 1b/d7/e9f9b0b28ece07b6bea62154a1b5f6195339de5ea334d6521c88ac34474d/anemoi_transform-0.1.26.tar.gz | source | sdist | null | false | 492f78e7bbb5bbf2a36b171167a9945c | 828be05653c4a717ca1957787b2620c41be71601bb92230cc95775309de5460d | 1bd7e9f9b0b28ece07b6bea62154a1b5f6195339de5ea334d6521c88ac34474d | null | [
"LICENSE"
] | 1,855 |
2.4 | unitlab | 2.4.1 | Python SDK for the Unitlab.ai data annotation platform | <p align="center">
<br>
<img src="https://unitlab-storage.s3.us-east-2.amazonaws.com/Logo.png" width="400"/>
<br>
<p>
<p align="center">
<a href="https://pypi.org/project/unitlab/">
<img alt="PyPI" src="https://img.shields.io/pypi/v/unitlab">
</a>
<a href="https://pypi.org/project/unitlab/">
<img alt="Python" src="https://img.shields.io/pypi/pyversions/unitlab">
</a>
<a href="https://github.com/teamunitlab/unitlab-sdk">
<img alt="Downloads" src="https://img.shields.io/pypi/dm/unitlab">
</a>
<a href="https://github.com/teamunitlab/unitlab-sdk/blob/main/LICENSE.md">
<img alt="License" src="https://img.shields.io/pypi/l/unitlab">
</a>
</p>
[Unitlab.ai](https://unitlab.ai/) is an AI-driven data annotation platform that automates the collection of raw data, facilitating collaboration with human annotators to produce highly accurate labels for your machine learning models. With our service, you can optimize work efficiency, improve data quality, and reduce costs.

# Unitlab Python SDK
Python SDK and CLI for the [Unitlab.ai](https://unitlab.ai/) data annotation platform. Manage projects, upload data, and download datasets programmatically or from the command line.
## Installation
```bash
pip install --upgrade unitlab
```
Requires Python 3.10+.
## Configuration
Get your API key from [unitlab.ai](https://unitlab.ai/) and configure the CLI:
```bash
# Set API key
unitlab configure --api-key YOUR_API_KEY
# Set a custom API URL
unitlab configure --api-url https://api.unitlab.ai
# Set both at once
unitlab configure --api-key YOUR_API_KEY --api-url https://api.unitlab.ai
```
Or set environment variables:
```bash
export UNITLAB_API_KEY=YOUR_API_KEY
# Optional: point to a custom API server (e.g. self-hosted)
export UNITLAB_API_URL=https://api.unitlab.ai
```
## Python SDK
```python
from unitlab import UnitlabClient
# Initialize with an explicit key
client = UnitlabClient(api_key="YOUR_API_KEY")
# Or read from UNITLAB_API_KEY env var / config file
client = UnitlabClient()
```
The client can also be used as a context manager:
```python
with UnitlabClient() as client:
projects = client.projects()
```
### Projects
```python
# List all projects
projects = client.projects()
# Get project details
project = client.project("PROJECT_ID")
# Get project members
members = client.project_members("PROJECT_ID")
```
### Upload data
```python
client.project_upload_data(
project_id="PROJECT_ID",
directory="./images",
)
```
Additional options for specific project types:
```python
# Text projects
client.project_upload_data("PROJECT_ID", "./docs", sentences_per_chunk=10)
# Video projects
client.project_upload_data("PROJECT_ID", "./videos", fps=30.0)
```
### Datasets
```python
# List all datasets
datasets = client.datasets()
# Download annotations (COCO, YOLOv8, YOLOv5, etc.)
path = client.dataset_download("DATASET_ID", export_type="COCO", split_type="train")
# Download raw files
folder = client.dataset_download_files("DATASET_ID")
```
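For instance, to export annotations for every dataset in one loop (a sketch combining the calls above; it assumes each record returned by `datasets()` carries an `id` field):

```python
from unitlab import UnitlabClient

with UnitlabClient() as client:
    for ds in client.datasets():
        # Download COCO annotations for the train split of each dataset
        path = client.dataset_download(ds["id"], export_type="COCO", split_type="train")
        print(ds["id"], "->", path)
```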
## CLI
### Projects
```bash
# List projects
unitlab project list
# Project details
unitlab project detail PROJECT_ID
# Project members
unitlab project members PROJECT_ID
# Upload data to a project
unitlab project upload PROJECT_ID --directory ./images
```
### Datasets
```bash
# List datasets
unitlab dataset list
# Download annotations
unitlab dataset download DATASET_ID --export-type COCO --split-type train
# Download raw files
unitlab dataset download DATASET_ID --download-type files
```
## Documentation
See the [full documentation](https://docs.unitlab.ai/) for detailed guides:
- [CLI reference](https://docs.unitlab.ai/cli-python-sdk/unitlab-cli)
- [Python SDK quickstart](https://docs.unitlab.ai/cli-python-sdk/unitlab-python-sdk)
## License
[MIT](LICENSE.md)
| text/markdown | null | "Unitlab Inc." <team@unitlab.ai> | null | null | null | annotation, data-labeling, machine-learning, sdk, unitlab | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :... | [] | null | null | >=3.10 | [] | [] | [] | [
"aiofiles>=23.0",
"aiohttp>=3.9",
"requests>=2.28",
"tqdm>=4.60",
"typer>=0.9",
"validators>=0.20",
"aioresponses>=0.7; extra == \"dev\"",
"build>=1.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"responses>=0.23; extra == \"dev\"",
"ruff>=0.4;... | [] | [] | [] | [
"Homepage, https://unitlab.ai",
"Documentation, https://docs.unitlab.ai",
"Repository, https://github.com/teamunitlab/unitlab-sdk"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T13:02:05.804776 | unitlab-2.4.1.tar.gz | 10,400 | 68/9b/47ecce0227dee5ab150edef627d4bb96ac903c259b9923cf307d0ee53b8f/unitlab-2.4.1.tar.gz | source | sdist | null | false | ed182048d3e9218c0192ad876106b395 | cdf9857354241d351ac4c1b367d8a1dbe9a4799db0405a4846038eb23bf98b2f | 689b47ecce0227dee5ab150edef627d4bb96ac903c259b9923cf307d0ee53b8f | MIT | [
"LICENSE.md"
] | 260 |
2.4 | crunch-vanta | 0.1.0 | CrunchDAO challenge package for Vanta (Bittensor Subnet 8) — model interface, indicators, backtesting, and walk-forward validation. | # 🏔️ Vanta Crunch — Challenge Package
Predict short-term crypto returns. Beat the leaderboard.
## Quickstart
```bash
pip install vanta-challenge matplotlib # matplotlib optional, for plots
```
### 1. Write a model
```python
from vanta.tracker import TrackerBase
from vanta.indicators import RSI, EMA, OrderBookImbalance, FundingSignal
class MyModel(TrackerBase):
def __init__(self):
super().__init__()
self.rsi = RSI(period=14)
self.ema_fast = EMA(period=5)
self.ema_slow = EMA(period=20)
self.obi = OrderBookImbalance(period=10)
self.funding = FundingSignal(period=8)
def tick(self, data: dict):
super().tick(data)
for candle in data.get("candles_1m", []):
price = float(candle["close"])
self.rsi.update(price)
self.ema_fast.update(price)
self.ema_slow.update(price)
# Order book imbalance (if available)
ob = data.get("orderbook")
if ob:
self.obi.update(ob["imbalance"])
# Funding rate (if available)
fund = data.get("funding")
if fund:
self.funding.update(fund["funding_rate"])
def predict(self, symbol: str, horizon_seconds: int, step_seconds: int) -> dict:
if not self.rsi.ready:
return {"expected_return": 0.0}
signal = 0.0
# RSI + trend filter
if self.rsi.value < 30 and self.ema_fast.value > self.ema_slow.value:
signal += 0.003
elif self.rsi.value > 70 and self.ema_fast.value < self.ema_slow.value:
signal -= 0.003
# Order book pressure boost
if self.obi.ready:
signal += self.obi.value * 0.002 # imbalance [-1,1] → ±0.002
# Funding rate mean-reversion (negative = contrarian)
if self.funding.ready:
signal -= self.funding.value * 50 # e.g. 0.0001 × 50 = 0.005
return {"expected_return": max(-0.01, min(0.01, signal))}
```
### 2. Backtest it
```python
from vanta.backtest import BacktestRunner
result = BacktestRunner(model=MyModel()).run(
subject="BTCUSDT",
start="2025-06-01",
end="2026-01-01",
)
result.summary() # formatted metrics table
result.plot() # equity curve, drawdown, hit rate, scatter
```
### 3. Compare with other strategies
```python
from vanta.backtest import compare
from vanta.examples import MeanReversionTracker, TrendFollowingTracker
compare([
("Mean Reversion", MeanReversionTracker()),
("Trend Following", TrendFollowingTracker()),
("My Model", MyModel()),
], start="2025-06-01", end="2026-01-01")
```
### 4. Sweep parameters
```python
from vanta.backtest import sweep
sweep(
model_fn=lambda period: MyModel(period=period),
params={"period": [7, 14, 21, 28]},
start="2025-06-01", end="2026-01-01",
)
```
## How It Works
- **Data**: Multi-timeframe OHLCV candles (1m, 5m, 15m, 1h) + order book depth + funding rates from Binance
- **Prediction**: Every 15 minutes, predict the **1-hour forward return**
- **Scoring**: `score = expected_return × actual_return`
- **History**: ~1 year of data available for backtesting (Jan 2025 → present)
## Scoring
```
score = expected_return × actual_return
```
| Scenario | Score | Meaning |
|----------|-------|---------|
| You predict +0.01, price goes up +0.005 | ✅ +0.00005 | Correct direction, good conviction |
| You predict +0.01, price goes down -0.005 | ❌ -0.00005 | Wrong direction, penalized |
| You predict 0.0 | 🤷 0.0 | No opinion, no gain, no loss |
**Direction AND magnitude matter.** See [docs/scoring-guide.md](docs/scoring-guide.md) for details.
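As a quick sanity check, the per-prediction score is just the product of the two returns (the values mirror the table above):

```python
def score(expected_return: float, actual_return: float) -> float:
    # score = expected_return * actual_return, per the scoring rule above
    return expected_return * actual_return

assert score(0.01, 0.005) > 0    # correct direction -> positive score
assert score(0.01, -0.005) < 0   # wrong direction -> penalized
assert score(0.0, 0.005) == 0.0  # no opinion -> no gain, no loss
```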
## Indicators
Built-in stateful indicators — no pandas/numpy required:
```python
from vanta.indicators import SMA, EMA, RSI, MACD, BollingerBands, ATR, VWAP, Returns
rsi = RSI(period=14)
rsi.update(price) # feed one value
rsi.value # current reading (or None if not ready)
rsi.ready # True once enough data accumulated
```
| Indicator | `.update()` | `.value` |
|-----------|-------------|----------|
| `SMA(period)` | `(price)` | Moving average |
| `EMA(period)` | `(price)` | Exponential average |
| `RSI(period)` | `(price)` | 0–100 scale |
| `MACD(fast, slow, signal)` | `(price)` | `(macd, signal, histogram)` |
| `BollingerBands(period, num_std)` | `(price)` | `(upper, middle, lower, width, percent_b)` |
| `ATR(period)` | `(high, low, close)` | Average true range |
| `VWAP()` | `(price, volume)` | Volume-weighted price |
| `Returns(period)` | `(price)` | % return over N periods |
| `OrderBookImbalance(period)` | `(imbalance)` | Smoothed bid/ask imbalance |
| `SpreadTracker(period)` | `(best_bid, best_ask)` | Spread in basis points |
| `FundingSignal(period)` | `(funding_rate)` | Smoothed funding rate |
| `BasisTracker(period)` | `(basis)` | Smoothed futures basis |
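Composite indicators unpack their `.value` as tuples, per the table above (a sketch; the keyword names `fast`/`slow`/`signal` follow the table's constructor signature, and 12/26/9 are just conventional defaults):

```python
from vanta.indicators import MACD

macd = MACD(fast=12, slow=26, signal=9)
for price in (100.0, 101.5, 99.8, 102.3, 103.1):
    macd.update(price)

if macd.ready:
    macd_line, signal_line, histogram = macd.value  # 3-tuple, per the table
```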
## Examples
Seven starter strategies in [`vanta/examples/`](vanta/examples/):
| Strategy | Approach |
|----------|----------|
| Mean Reversion | Fades deviation from SMA |
| Trend Following | EMA crossover momentum |
| RSI | Contrarian at RSI extremes |
| Bollinger Breakout | Mean-reversion at band edges |
| Volatility Regime | Switches strategy based on ATR |
| Dual Timeframe | RSI filtered by trend direction |
| Volume Profile | Volume-weighted momentum |
## Docs
- [**Scoring Guide**](docs/scoring-guide.md) — how scoring works, what the leaderboard metrics mean
- [**Strategy Guide**](docs/strategy-guide.md) — practical tips, common approaches, pitfalls
- [**Compare Notebook**](notebooks/compare-strategies.ipynb) — run all strategies side by side
## Model Interface
```python
class TrackerBase:
def tick(self, data: dict) -> None:
"""Called with each new data batch.
data = {
"symbol": "BTCUSDT",
"asof_ts": 1234567890,
# Multi-timeframe OHLCV candles
"candles_1m": [{"ts": ..., "open": ..., "high": ..., "low": ..., "close": ..., "volume": ...}, ...],
"candles_5m": [...], # last 60 bars (5h)
"candles_15m": [...], # last 40 bars (10h)
"candles_1h": [...], # last 24 bars (1 day)
# Order book snapshot (or None if unavailable)
"orderbook": {
"best_bid": 50000.0, "best_ask": 50001.0,
"spread": 1.0, "mid_price": 50000.5,
"bid_depth": 123.4, "ask_depth": 98.7,
"imbalance": 0.11, # (bid-ask)/(bid+ask), range [-1, 1]
"bids_top": [[50000.0, 1.5], ...],
"asks_top": [[50001.0, 1.2], ...],
},
# Funding rate / basis (or None if unavailable)
"funding": {
"funding_rate": 0.0001, # current 8h funding rate
"mark_price": 50000.5,
"index_price": 50000.0,
"basis": 0.00001, # (mark-index)/index
"next_funding_ts": 1234571490,
},
}
"""
def predict(self, symbol: str, horizon_seconds: int, step_seconds: int) -> dict:
"""Return your prediction.
Returns: {"expected_return": float}
"""
```
| text/markdown | null | CrunchDAO <contact@crunchdao.com> | null | null | MIT | bittensor, crunchdao, quantitative-finance, trading, vanta | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"matplotlib>=3.7",
"numpy>=1.24",
"pandas>=2.0",
"pyarrow>=15.0",
"requests>=2.28"
] | [] | [] | [] | [
"Homepage, https://github.com/crunchdao/vanta-coordinator",
"Documentation, https://github.com/crunchdao/vanta-coordinator/tree/main/challenge"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-18T13:01:59.577588 | crunch_vanta-0.1.0.tar.gz | 2,158,477 | 77/1e/cc30e9dea024cd8d4ac20533c20d869f16cc3ebdbae3cf0d59fb78640e54/crunch_vanta-0.1.0.tar.gz | source | sdist | null | false | 2e16f2841762ed1c332ef49d84a861eb | 32397420dbb8f4f72369c43e6275c483334032fd094246b9abc986cae3eb3448 | 771ecc30e9dea024cd8d4ac20533c20d869f16cc3ebdbae3cf0d59fb78640e54 | null | [] | 283 |
2.4 | zohidpy | 0.1.4 | Python Web Framework built for learning purposes. |
# ZohidPy
[](https://pypi.org/project/zohidpy/)
[](LICENSE)

> A lightweight WSGI web framework built from scratch to understand how web
frameworks work internally.
ZohidPy is intentionally small and explicit.\
It is designed for learning, experimentation, and understanding routing,
middleware, request/response handling, templating, and static file
serving without heavy abstractions.
------------------------------------------------------------------------
## Installation
``` bash
pip install zohidpy
```
------------------------------------------------------------------------
## Quick Start
``` python
from waitress import serve
from zohidpy.app import ZohidPy
app = ZohidPy()
@app.route("/home")
def home(request, response):
response.text = "Hello from ZohidPy!"
if __name__ == "__main__":
serve(app, listen="localhost:8000")
```
Run:
``` bash
python main.py
```
Visit:
http://localhost:8000/home
------------------------------------------------------------------------
## Features
- Function-based routing
- Class-based routing
- Dynamic URL parameters
- JSON / Text / HTML response helpers
- Jinja2 template rendering
- Static file serving (WhiteNoise)
- Middleware system
- Custom exception handlers
- Built-in test client
- Fully WSGI compatible
------------------------------------------------------------------------
## Routing
### Function-Based Routing
``` python
@app.route("/home")
def home(request, response):
response.text = "Hello from home"
```
### Dynamic Routes
``` python
@app.route("/hello/{name}")
def greet(request, response, name):
response.text = f"Hello {name}"
```
### Class-Based Routing
``` python
@app.route("/books")
class Books:
def get(self, request, response):
response.text = "Books page"
def post(self, request, response):
response.text = "Create a book"
```
If a method is not defined, the framework returns:
405 Method Not Allowed
### Restricting Allowed Methods
``` python
@app.route("/home", allowed_methods=["post"])
def home(request, response):
response.text = "POST only"
```
------------------------------------------------------------------------
## Response Helpers
### JSON
``` python
resp.json = {"name": "zohid"}
```
Automatically sets:
Content-Type: application/json
### Plain Text
``` python
resp.text = "Hello world"
```
Sets:
Content-Type: text/plain
### HTML
``` python
resp.html = "<h1>Hello</h1>"
```
Sets:
Content-Type: text/html
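Putting the helpers together with a dynamic route (a minimal sketch built only from the pieces shown above):

``` python
@app.route("/api/books/{book_id}")
def book_detail(request, response, book_id):
    # Dynamic URL parameter feeding the JSON response helper
    response.json = {"id": book_id, "title": "Some Book"}
```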
------------------------------------------------------------------------
## Templates (Jinja2)
Default directory:
templates/
Example:
``` python
@app.route("/template")
def template_handler(req, resp):
resp.html = app.template(
"home.html",
context={"title": "My Page"}
)
```
Custom template directory:
``` python
app = ZohidPy(templates_dir="my_templates")
```
------------------------------------------------------------------------
## Static Files
Static files are served using WhiteNoise.
Default directory:
static/
Accessible via:
/static/filename.css
Custom directory:
``` python
app = ZohidPy(static_dir="assets")
```
------------------------------------------------------------------------
## Middleware
Create middleware by subclassing `Middleware`:
``` python
from zohidpy.middleware import Middleware
class LoggingMiddleware(Middleware):
def process_request(self, req):
print("Request:", req.url)
def process_response(self, req, resp):
print("Response generated")
app.add_middleware(LoggingMiddleware)
```
Middleware lifecycle:
1. process_request
2. route handler
3. process_response
------------------------------------------------------------------------
## Custom Exception Handling
``` python
def on_exception(req, resp, exc):
resp.text = "Something went wrong"
app.add_exception_handler(on_exception)
```
------------------------------------------------------------------------
## Testing
ZohidPy includes a built-in test client:
``` python
test_client = app.test_session()
```
Example test:
``` python
def test_home(app, test_client):
@app.route("/home")
def home(req, resp):
resp.text = "Hello"
response = test_client.get("http://testingserver/home")
assert response.text == "Hello"
```
------------------------------------------------------------------------
## Deployment
ZohidPy is fully WSGI-compatible.
### Waitress
``` bash
waitress-serve --listen=0.0.0.0:8000 main:app
```
### Gunicorn
``` bash
gunicorn main:app
```
------------------------------------------------------------------------
## Internal Architecture Overview
High-level request flow:
1. WSGI entry point receives request
2. Static files handled via WhiteNoise
3. Middleware layer executes
4. Route resolution via pattern matching
5. Handler execution
6. Response object constructs final WebOb response
The design favors clarity over abstraction.
------------------------------------------------------------------------
## License
This project is licensed under the **Apache License 2.0** — see the [LICENSE](./LICENSE) file for details.
---
## Author
**Zohidjon Mahmudjonov**
- GitHub: [@zohidjon-m](https://github.com/zohidjon-m)
- Email: zohidjon.mah@gmail.com
| text/markdown | Zohidjon Mahmudjonov | zohidjon.mah@gmail.com | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | https://github.com/zohidjon-m/zohidpy | null | >=3.9.0 | [] | [] | [] | [
"requests==2.32.5",
"requests-wsgi-adapter==0.4.1",
"waitress==3.0.2",
"webob==1.8.9",
"whitenoise==6.11.0",
"jinja2==3.1.6",
"urllib3==2.6.3",
"parse==1.20.2",
"pytest==9.0.2",
"pytest-cov==7.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-18T13:01:58.326726 | zohidpy-0.1.4.tar.gz | 12,684 | fa/86/827beafbea275f4965159fb4f831adff4768a888e85894d59747cd17884e/zohidpy-0.1.4.tar.gz | source | sdist | null | false | afb271cbb966c469676ada8fb0610fb5 | d99b0d083f0ea5dcbed840f58d975a1340b951d0d3f881f3c4901da8cb83bf34 | fa86827beafbea275f4965159fb4f831adff4768a888e85894d59747cd17884e | null | [
"LICENSE"
] | 241 |
2.4 | django-froala-editor | 5.0.1 | django-froala-editor package helps integrate Froala WYSIWYG HTML editor with Django. | Django Froala WYSIWYG Editor
============================
django-froala-editor package helps integrate `Froala WYSIWYG HTML
editor <https://froala.com/wysiwyg-editor/>`__ with Django.
View the full documentation at `Github. <https://github.com/froala/django-froala-editor/>`__
| null | Dipesh Acharya | dipesh@awecode.com | Froala Labs | null | BSD License | froala, django, admin, wysiwyg, editor, text, html, editor, rich, web | [
"Environment :: Web Environment",
"Framework :: Django",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Topic :: Inte... | [] | http://github.com/froala/django-froala-editor/ | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.6 | 2026-02-18T13:01:58.311530 | django_froala_editor-5.0.1.tar.gz | 1,366,716 | 09/d1/8cb7b812e3d90cc4bc1d378c7449e33bb1fe136cc798cba7cf3acd5e8c88/django_froala_editor-5.0.1.tar.gz | source | sdist | null | false | f38cd30dc9f290f955e7f17d854cc1be | c39481c99e174f192b65770f9dc89815f550cde94afa586448c58c1941fc8900 | 09d18cb7b812e3d90cc4bc1d378c7449e33bb1fe136cc798cba7cf3acd5e8c88 | null | [] | 276 |
2.4 | licatools | 0.3.18 | Tools to reduce and plot data from LICA Optical Bench | # licatools
(formerly known as licaplot)
Collection of processing and plotting commands to analyze data gathered by the LICA Optical Test Bench.
This is a counterpart for sensors of [rawplot](https://guaix.ucm.es/rawplot).
# Installation
```bash
pip install licatools
```
# Available utilities
* `lica-filters`. Process filter data from LICA optical test bench.
* `lica-tessw`. Process TESS-W data from LICA optical test bench.
* `lica-photod`. Plot and export LICA photodiodes spectral response curves.
* `lica-hama`. Build LICA's Hamamatsu S2281-04 photodiode spectral response curve in ECSV format to be used for other calibration purposes elsewhere.
* `lica-osi`. Build LICA's OSI PIN-10D photodiode spectral response curve in ECSV format to be used for other calibration purposes elsewhere.
* `lica-ndf`. Build Spectral response for LICA's Optical Bench Neutral Density Filters.
* `lica-plot`. Very simple plot utility to plot CSV/ECSV files.
* `lica-eclip`. Reduce & plot the data taken from solar eclipse glasses.
Every command listed (and its subcommands) can be described with `-h | --help`
Examples:
```bash
lica-filters -h
lica-filters classif -h
lica-filters classif photod -h
```
All commands have a series of global options:
* `--console` logs messages to console
* `--log-file` logs messages to a file
* `--verbose | --quiet`, raises or lowers the log verbosity level
* `--trace` displays exception stack trace info for debugging purposes.
Most commands have both short & long options (e.g. `-l | --label` or `-ycn | --y-col-num`).
The examples below showcase both forms.
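For instance, these two invocations are equivalent (`data.ecsv` is a placeholder file name):

```bash
lica-plot --console single table column -i data.ecsv -ycn 4
lica-plot --console single table column -i data.ecsv --y-col-num 4
```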
## Generic plot utility
The `lica-plot` utility is aimed at plotting ECSV tabular data produced by this package. It can produce several graphics styles. Columns are given by their order in the ECSV file ***starting at #1***. By default, the X axis is column #1.
The following options are available according to the `lica-plot <Graphics> <Tables> <Columns>` command line schema.
| Graphics | Tables | Columns | Description |
| :------- | :----- | :------ | :---------------------------------------------------------------------------------------- |
| single | table | column | Single graphics, one table, one Y column vs X column plot. |
| single | table | columns | Single graphics, one table, several Y columns vs X column plot. |
| single | tables | column | Single graphics, several tables with same Y column vs common X column |
| single | tables | mixed | Single graphics, several tables with one Y column each table vs common X column |
| single | tables | columns | Single graphics, several tables with several Y columns per table vs common X column |
| multi | tables | column | Multiple graphics, one table per graphics, one Y column per table vs X common column |
| multi | tables | columns | Multiple graphics, one table per graphics, several Y columns per table vs common X column |
The `single tables column` option is suitable for plotting a filter set (e.g. RGB filters) obtained in several ECSV files into a single graphics,
as seen in one of the examples below.
The `single tables mixed` option is suitable for plotting the same Y vs X magnitude when the Y column is the same magnitude appearing at a different column position in two or more tables.
* Titles, X & Y Labels can be supplied on the command line. If not specified, they take default values from the ECSV metadata ("title" and "label" metadata) and column names.
* Markers, legends and line styles can be supplied on the command line. If not supplied, they take default values; see the sketch below.
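As a hedged sketch of these options (the `-t`/`-yl` title and label flags are assumed to follow the same convention that `lica-eclip` uses later in this README; the file names are illustrative):
```bash
# Plot column #3 of two illustrative ECSV files in a single graphic,
# overriding the default title and Y label taken from the ECSV metadata.
lica-plot --console single tables column -i first.ecsv second.ecsv -ycn 3 -t 'My title' -yl 'My Y label' --lines
```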
# Usage examples
## Reducing Filters data (lica-filters)
### Simple case
In the simple case, we have one filter CSV and one clear photodiode CSV. Setting the wavelength limits is optional.
Setting the photodiode model is optional unless you are using the Hamamatsu S2281-01. The column in the ECSV file containing the transmission is column number 4. The plot also displays the points where the Optical Bench passband filters change.
```bash
lica-filters --console one -l OMEGA NPB -p data/filters/Omega_NPB/QEdata_diode_2nm.txt -m PIN-10D -i data/filters/Omega_NPB/QEdata_filter_2nm.txt
lica-plot --console single table column -% -i data/filters/Omega_NPB/QEdata_filter_2nm.ecsv -ycn 4 --changes --lines
```
### More complex case
In this case, an RGB filter set was measured with a single clear photodiode reading, thus sharing the same photodiode file. The photodiode model used was the OSI PIN-10D.
1. First we tag all the clear photodiode readings. The tag is a string (e.g. `X`) we use to match which filters are being paired with this clear photodiode reading.
If we need to trim the bandwidth of the whole set (photodiode + associated filter readings) *this is the time to do it*. The bandwidth trimming will be carried over from the photodiode to the associated filters.
```bash
lica-filters --console classif photod --tag X -p data/filters/Eysdon_RGB/photodiode.txt
```
The output of this command is an ECSV file with the same information plus metadata needed for further processing.
2. Tag all filter files.
Tag them with the same tag as the one chosen for the photodiode file (`X`), since they all share the same photodiode file.
```bash
lica-filters --console classif filter -g X -i data/filters/Eysdon_RGB/green.txt -l Green
lica-filters --console classif filter -g X -i data/filters/Eysdon_RGB/red.txt -l Red
lica-filters --console classif filter -g X -i data/filters/Eysdon_RGB/blue.txt -l Blue
```
The output of these commands is a set of ECSV files with the same data but additional metadata for further processing.
3. Review the process
Just to make sure everything is ok.
```bash
lica-filters --console classif review -d data/filters/Eysdon_RGB
```
4. Data reduction.
The recommended `--save` flag controls the overwriting of the input ECSV files with more columns and metadata.
```bash
lica-filters --console process -d data/filters/Eysdon_RGB --save
```
After this step, the filter ECSV files contain additional columns with the clear photodiode readings, the photodiode model QE, and the final transmission curve as the last column.
5. Plot the result
Plot the generated ECSV files using `lica-plot`. The column to be plotted is the fourth column (transmission) against the wavelength column, which happens to be the first one and thus does not need to be specified.
```bash
lica-plot --console single tables column -i data/filters/Eysdon_RGB/blue.ecsv data/filters/Eysdon_RGB/red.ecsv data/filters/Eysdon_RGB/green.ecsv -ycn 4 --percent --changes --lines
```

## Measuring TESS-W spectral response (lica-tessw)
Process the input files obtained at LICA for TESS-W measurements. For each device, we need a CSV file with the frequencies at a given wavelength and the corresponding reference photodiode (OSI PIN-10D) current measurements.
1. Classify the files and assign the sensor readings to photodiode readings
```bash
lica-tessw --console classif photod -p data/tessw/stars1277-photodiode.csv --tag A
lica-tessw --console classif sensor -i data/tessw/stars1277-frequencies.csv --label TSL237 --tag A
lica-tessw --console classif photod -p data/tessw/stars6502-photodiode.csv --tag B
lica-tessw --console classif sensor -i data/tessw/stars6502-frequencies.csv --label OTHER --tag B
```
2. Review the configuration
```bash
lica-tessw --console classif review -d data/tessw/
```
```bash
2024-12-08 13:07:23,214 [INFO] [root] ============== licatools.tessw 0.1.dev100+g51c6aa2.d20241208 ==============
2024-12-08 13:07:23,214 [INFO] [licatools.tessw] Reviewing files in directory data/tessw/
2024-12-08 13:07:23,270 [INFO] [licatools.utils.processing] Returning stars6502-frequencies
2024-12-08 13:07:23,270 [INFO] [licatools.utils.processing] Returning stars1277-frequencies
2024-12-08 13:07:23,271 [INFO] [licatools.utils.processing] [tag=B] (PIN-10D) stars6502-photodiode, used by ['stars6502-frequencies']
2024-12-08 13:07:23,271 [INFO] [licatools.utils.processing] [tag=A] (PIN-10D) stars1277-photodiode, used by ['stars1277-frequencies']
2024-12-08 13:07:23,271 [INFO] [licatools.utils.processing] Review step ok.
```
3. Data reduction
```bash
lica-tessw --console process -d data/tessw/ --save
```
```bash
2024-12-08 13:10:08,476 [INFO] [root] ============== licatools.tessw 0.1.dev100+g51c6aa2.d20241208 ==============
2024-12-08 13:10:08,476 [INFO] [licatools.tessw] Classifying files in directory data/tessw/
2024-12-08 13:10:08,534 [INFO] [licatools.utils.processing] Returning stars6502-frequencies
2024-12-08 13:10:08,534 [INFO] [licatools.utils.processing] Returning stars1277-frequencies
2024-12-08 13:10:08,534 [INFO] [lica.lab.photodiode] Loading Responsivity & QE data from PIN-10D-Responsivity-Cross-Calibrated@1nm.ecsv
2024-12-08 13:10:08,546 [INFO] [licatools.utils.processing] Processing stars6502-frequencies with photodidode PIN-10D
2024-12-08 13:10:08,546 [INFO] [lica.lab.photodiode] Loading Responsivity & QE data from PIN-10D-Responsivity-Cross-Calibrated@1nm.ecsv
2024-12-08 13:10:08,557 [INFO] [licatools.utils.processing] Processing stars1277-frequencies with photodidode PIN-10D
2024-12-08 13:10:08,558 [INFO] [licatools.utils.processing] Updating ECSV file data/tessw/stars6502-frequencies.ecsv
2024-12-08 13:10:08,562 [INFO] [licatools.utils.processing] Updating ECSV file data/tessw/stars1277-frequencies.ecsv
```
4. Plot the result
```bash
lica-plot --console single tables column -i data/tessw/stars1277-frequencies.ecsv data/tessw/stars6502-frequencies.ecsv -ycn 5 --changes --lines
```

## Comparing measured TESS-W response with the manufacturer's datasheet
There is a separate [Jupyter notebook](doc/TESS-W%20Spectral%20Response.ipynb) on this.
## Generating LICA photodiodes reference
This is a quick reference of commands and procedure. There is a separate [LICA report](https://doi.org/10.5281/zenodo.14884494) on the process.
### Hamamatsu S2281-01 diode (lica-hama)
#### Stage 1
Convert NPL CSV data into an ECSV file with added metadata and plot it.
```bash
lica-hama --console stage1 --plot -i data/hamamatsu/S2281-01-Responsivity-NPL.csv
```
It produces a file with the same name as the input file but with a `.ecsv` extension.
#### Stage 2
Plot and merge NPL data with S2281-04 (yes, -04!) datasheet points.
With no alignment
```bash
lica-hama --console stage2 --plot --save -i data/hamamatsu/S2281-01-Responsivity-NPL.ecsv -d data/hamamatsu/S2281-04-Responsivity-Datasheet.csv
```
With good alignment (x = 16, y = 0.009)
```bash
lica-hama --console stage2 --plot --save -i data/hamamatsu/S2281-01-Responsivity-NPL.ecsv -d data/hamamatsu/S2281-04-Responsivity-Datasheet.csv -x 16 -y 0.009
```
It produces a file, in the same folder, whose name is the input file name with "+Datasheet.ecsv" appended
(i.e. `S2281-01-Responsivity-NPL+Datasheet.ecsv`).
#### Stage 3
Interpolates the input ECSV file to 1 nm resolution with a cubic interpolator.
```bash
lica-hama --console stage3 --plot -i data/hamamatsu/S2281-01-Responsivity-NPL+Datasheet.ecsv -m cubic -r 1 --revision 2024-12
```
#### Pipeline
The complete pipeline in one command
```bash
lica-hama --console pipeline --plot -i data/hamamatsu/S2281-01-Responsivity-NPL.csv -d data/hamamatsu/S2281-04-Responsivity-Datasheet.csv -x 16 -y 0.009 -m cubic -r 1
```
### OSI PIN-10D photodiode (lica-osi)
By using the scanned datasheet
```bash
lica-osi --console datasheet -i data/osi/PIN-10D-Responsivity-Datasheet.csv -m cubic -r 1 --plot --save --revision 2024-12
```
By using a cross-calibration with the Hamamatsu photodiode. The Hamamatsu ECSV file is the one obtained in the section above. It does not appear in the command line, as it is embedded in a Python package that automatically retrieves it.
```bash
lica-osi --console cross --osi data/osi/QEdata_PIN-10D.txt --hama data/osi/QEdata_S2201-01.txt --plot --save --revision 2024-12
```
Compare both methods
```bash
lica-osi --console compare -c data/osi/OSI\ PIN-10D+Cross-Calibrated@1nm.ecsv -d data/osi/OSI\ PIN-10D-Responsivity-Datasheet+Interpolated@1nm.ecsv --plot
```
***NOTE: We recommend using the cross-calibrated method.***
### Plot the packaged ECSV file (lica-photod)
```bash
lica-photod --console plot -m S2281-01
lica-photod --console plot -m PIN-10D
```


## Reducing and plotting solar eclipse glasses
The following script reduces the data of measured eclipse glasses:
```bash
#!/usr/bin/env bash
set -exuo pipefail
dir="data/eclipse"
for i in 01 02 03 04 05 06 07 08 09 10 11 12 13
do
lica-filters --console one -l $i -g $i -p ${dir}/${i}_osi_nd0.5.txt -m PIN-10D -i ${dir}/${i}_eg.txt --ndf ND-0.5
lica-eclip --console inverse -ycn 5 -i ${dir}/${i}_eg.ecsv --save
done
```
The resulting ECSV files contain a last column (#6) with the log10 of the inverse of the transmittance.
```bash
#!/usr/bin/env bash
set -exuo pipefail
dir="data/eclipse"
file_accum=""
for i in 01 02 03 04 05 06 07 08 09 10 11 12 13
do
file_accum="${file_accum}${dir}/${i}_eg.ecsv "
done
lica-eclip --console --trace plot -ycn 6 -t 'Transmittance vs Wavelength' -yl '$log_{10}(\frac{1}{Transmittance})$' --lines --marker None -i $file_accum
```
| text/markdown | null | Rafael González <rafael08@ucm.es>, Jaime Zamorano <jzamoran@ucm.es> | null | null | null | null | [
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"Operating System :: OS Independen... | [] | null | null | >=3.12 | [] | [] | [] | [
"matplotlib>=3.9",
"pyqt5>=5.15",
"astropy>=6.0",
"scipy>=1.13",
"sqlalchemy",
"lica[lab,sqlalchemy]>=3.0",
"pytz>=2025.1",
"notebook>=7.3; extra == \"extras\""
] | [] | [] | [] | [
"Homepage, https://github.com/guaix-ucm/licaplot",
"Repository, https://github.com/guaix-ucm/licaplot.git"
] | twine/6.2.0 CPython/3.11.13 | 2026-02-18T13:01:30.690838 | licatools-0.3.18.tar.gz | 1,241,054 | 9b/79/ef616c8550a776c75010133f690ca9b415bbd422873ff26c882e329ee389/licatools-0.3.18.tar.gz | source | sdist | null | false | 6d3881bb81536b39f224c1640100703d | f76d288aa621d54666d914b3e36ee5bbe74492be2d7fa2b15a689a078e0c593a | 9b79ef616c8550a776c75010133f690ca9b415bbd422873ff26c882e329ee389 | null | [
"LICENSE"
] | 232 |
2.4 | pyinstaller-hooks-contrib | 2026.1 | Community maintained hooks for PyInstaller | # `pyinstaller-hooks-contrib`: The PyInstaller community hooks repository
What happens when (your?) package doesn't work with PyInstaller? Say you have data files that you need at runtime?
PyInstaller doesn't bundle those. Your package requires others which PyInstaller can't see? How do you fix that?
In summary, a "hook" file extends PyInstaller to adapt it to the special needs and methods used by a Python package.
The word "hook" is used for two kinds of files. A runtime hook helps the bootloader to launch an app, setting up the
environment. A package hook (there are several types of those) tells PyInstaller what to include in the final app -
such as the data files and (hidden) imports mentioned above.
This repository is a collection of hooks for many packages, and allows PyInstaller to work with these packages
seamlessly.
## Installation
`pyinstaller-hooks-contrib` is automatically installed when you install PyInstaller, or can be installed with pip:
```commandline
pip install -U pyinstaller-hooks-contrib
```
## I can't see a hook for `a-package`
Either `a-package` works fine without a hook, or no-one has contributed hooks.
If you'd like to add a hook, or view information about hooks,
please see below.
## Hook configuration (options)
Hooks that support configuration (options) and their options are documented in
[Supported hooks and options](hooks-config.rst).
## I want to help!
If you've got a hook you want to share then great!
The rest of this page will walk you through the process of contributing a hook.
If you've been here before then you may want to skip to the [summary checklist](#summary)
**Unless you are very comfortable with `git rebase -i`, please provide one hook per pull request!**
**If you have more than one then submit them in separate pull requests.**
### Setup
[Fork this repo](https://github.com/pyinstaller/pyinstaller-hooks-contrib/fork) if you haven't already done so.
(If you already have a fork but it's old, click the **Fetch upstream** button on your fork's homepage.)
Clone and `cd` inside your fork by running the following (replacing `bob-the-barnacle` with your github username):
```
git clone https://github.com/bob-the-barnacle/pyinstaller-hooks-contrib.git
cd pyinstaller-hooks-contrib
```
Create a new branch for your changes (replacing `foo` with the name of the package).
You can name this branch whatever you like.
```
git checkout -b hook-for-foo
```
If you wish to create a virtual environment then do it now before proceeding to the next step.
Install this repo in editable mode.
This will overwrite your current installation.
(Note that you can reverse this with `pip install --force-reinstall pyinstaller-hooks-contrib`).
```
pip install -e .
pip install -r requirements-test.txt
pip install flake8 pyinstaller
```
Note that on macOS and Linux, `pip` may be called `pip3`.
If you normally use `pip3` and `python3` then use `pip3` here too.
You may skip the 2<sup>nd</sup> line if you have no intention of providing tests (but please do provide tests!).
### Add the hook
Standard hooks live in the [_pyinstaller_hooks_contrib/stdhooks/](../master/_pyinstaller_hooks_contrib/stdhooks/) directory.
Runtime hooks live in the [_pyinstaller_hooks_contrib/rthooks/](../master/_pyinstaller_hooks_contrib/rthooks/) directory.
Simply copy your hook into there.
If you're unsure if your hook is a runtime hook then it almost certainly is a standard hook.
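For orientation, a minimal standard hook could look like the following sketch. The package name `foo` and its `foo.backends` subpackage are purely illustrative; `collect_data_files` and `collect_submodules` are the standard PyInstaller hook utilities:
```python
# _pyinstaller_hooks_contrib/stdhooks/hook-foo.py (illustrative sketch)
from PyInstaller.utils.hooks import collect_data_files, collect_submodules

# Bundle foo's data files, which PyInstaller would otherwise miss.
datas = collect_data_files("foo")

# Collect the dynamically loaded backends as hidden imports.
hiddenimports = collect_submodules("foo.backends")
```
(Real hooks also need the copyright header described below.)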
Please annotate (with comments) anything unusual in the hook.
*Unusual* here is defined as any of the following:
* Long lists of `hiddenimport` submodules.
If you need lots of hidden imports then use [`collect_submodules('foo')`](https://pyinstaller.readthedocs.io/en/latest/hooks.html#PyInstaller.utils.hooks.collect_submodules).
For bonus points, track down why so many submodules are hidden. Typical causes are:
* Lazily loaded submodules (`importlib.import_module()` inside a module `__getattr__()`).
* Dynamically loaded *backends*.
* Usage of `Cython` or Python extension modules containing `import` statements.
* Use of [`collect_all()`](https://pyinstaller.readthedocs.io/en/latest/hooks.html#PyInstaller.utils.hooks.collect_all).
This function's performance is abysmal and [it is broken by
design](https://github.com/pyinstaller/pyinstaller/issues/6458#issuecomment-1000481631) because it confuses
packages with distributions.
Check that you really do need to collect all of the submodules, data files, binaries, metadata and dependencies.
If you do then add a comment to say so (and if you know it - why).
Do not simply use `collect_all()` just to *future proof* the hook.
* Any complicated `os.path` arithmetic (by which I simply mean overly complex filename manipulations).
#### Add the copyright header
All source files must contain the copyright header to be covered by our terms and conditions.
If you are **adding** a new hook (or any new Python file), copy/paste the appropriate copyright header (below) at the top,
replacing the year with the current year.
<details><summary>GPL 2 header for standard hooks or other Python files.</summary>
```python
# ------------------------------------------------------------------
# Copyright (c) 2024 PyInstaller Development Team.
#
# This file is distributed under the terms of the GNU General Public
# License (version 2.0 or later).
#
# The full license is available in LICENSE, distributed with
# this software.
#
# SPDX-License-Identifier: GPL-2.0-or-later
# ------------------------------------------------------------------
```
</details>
<details><summary>Apache header for runtime hooks only.
Again, if you're unsure if your hook is a runtime hook then it'll be a standard hook.</summary>
```python
# ------------------------------------------------------------------
# Copyright (c) 2024 PyInstaller Development Team.
#
# This file is distributed under the terms of the Apache License 2.0
#
# The full license is available in LICENSE, distributed with
# this software.
#
# SPDX-License-Identifier: Apache-2.0
# ------------------------------------------------------------------
```
</details>
If you are **updating** a hook, skip this step.
Do not update the year of the copyright header - even if it's out of date.
### Test
Having tests is key to our continuous integration.
With them we can automatically verify that your hook works on all platforms, all Python versions and new versions of
libraries as and when they are released.
Without them, we have no idea if the hook is broken until someone finds out the hard way.
Please write tests!!!
Some user interface libraries may be impossible to test without user interaction,
and a wrapper library for some web API may require credentials (and possibly a paid subscription) to test.
In such cases, don't provide a test.
Instead explain either in the commit message or when you open your pull request why an automatic test is impractical
then skip on to [the next step](#run-linter).
#### Write test(s)
A test should be the least amount of code required to cause a breakage
if you do not have the hook which you are contributing.
For example if you are writing a hook for a library called `foo`
which crashes immediately under PyInstaller on `import foo` then `import foo` is your test.
If `import foo` works even without the hook then you will have to get a bit more creative.
Good sources of such minimal tests are introductory examples
from the documentation of whichever library you're writing a hook for.
A package's internal data files and hidden dependencies are prone to moving around, so
tests should not explicitly check for the presence of data files or hidden modules directly -
rather they should use parts of the library which are expected to use said data files or hidden modules.
Tests generally live in [tests/test_libraries.py](../master/tests/test_libraries.py).
Navigate there and add something like the following, replacing all occurrences of `foo` with the real name of the library.
(Note where you put it in that file doesn't matter.)
```python
@importorskip('foo')
def test_foo(pyi_builder):
pyi_builder.test_source("""
# Your test here!
import foo
foo.something_fooey()
""")
```
If the library has changed significantly over past versions then you may need to add version constraints to the test.
To do that, replace the `@importorskip("foo")` with a call to `PyInstaller.utils.tests.requires()` (e.g.
`@requires("foo >= 1.4")`) to only run the test if the given version constraint is satisfied.
Note that `@importorskip` uses module names (something you'd `import`) whereas `@requires` uses distribution names
(something you'd `pip install`) so you'd use `@importorskip("PIL")` but `@requires("pillow")`.
For most packages, the distribution and package names are the same.
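Putting it together, a version-gated test might look like this sketch (the package and version constraint are illustrative):
```python
from PyInstaller.utils.tests import requires

@requires("pillow >= 9.0")  # distribution name, even though the module is "PIL"
def test_pil(pyi_builder):
    pyi_builder.test_source("""
        import PIL.Image
        """)
```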
#### Run the test locally
Running our full test suite is not recommended as it will spend a very long time testing code which you have not touched.
Instead, run tests individually using either the `-k` option to search for test names:
```
pytest -k test_foo
```
Or using full paths:
```
pytest tests/test_libraries.py::test_foo
```
#### Pin the test requirement
Get the version of the package you are working with (`pip show foo`)
and add it to the [requirements-test-libraries.txt](../master/requirements-test-libraries.txt) file.
The requirements already in there should guide you on the syntax.
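For example, a pin for an illustrative package would be a single line such as:
```
foo==1.2.3
```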
#### Run the test on CI/CD
<details><summary>CI/CD now triggers itself when you open a pull request.
These instructions for triggering jobs manually are obsolete except in rare cases.</summary>
To test hooks on all platforms we use Github's continuous integration (CI/CD).
Our CI/CD is a bit unusual in that it's triggered manually and takes arguments
which limit which tests are run.
This is for the same reason we filter tests when running locally -
the full test suite takes ages.
First push the changes you've made so far.
```commandline
git push --set-upstream origin hook-for-foo
```
Replace *billy-the-buffalo* with your Github username in the following url then open it.
It should take you to the `oneshot-test` actions workflow on your fork.
You may be asked if you want to enable actions on your fork - say yes.
```
https://github.com/billy-the-buffalo/pyinstaller-hooks-contrib/actions/workflows/oneshot-test.yml
```
Find the **Run workflow** button and click on it.
If you can't see the button,
select the **Oneshot test** tab from the list of workflows on the left of the page
and it should appear.
A dialog should appear containing one drop-down menu and 5 line-edit fields.
This dialog is where you specify what to test and which platforms and Python versions to test on.
Its fields are as follows:
1. A branch to run from. Set this to the branch which you are using (e.g. ``hook-for-foo``),
2. Which package(s) to install and their version(s).
Which packages to test are inferred from which packages are installed.
You can generally just copy your own changes to the `requirements-test-libraries.txt` file into this box.
* Set to `foo` to test the latest version of `foo`,
* Set to `foo==1.2, foo==2.3` (note the comma) to test two different versions of `foo` in separate jobs,
* Set to `foo bar` (note the lack of a comma) to test `foo` and `bar` in the same job,
3. Which OS or OSs to run on
* Set to `ubuntu` to test only `ubuntu`,
* Set to `ubuntu, macos, windows` (order is unimportant) to test all three OSs.
4. Which Python version(s) to run on
* Set to `3.9` to test only Python 3.9,
* Set to `3.8, 3.9, 3.10, 3.11` to test all currently supported versions of Python.
5. The final two options can generally be left alone.
Hit the green **Run workflow** button at the bottom of the dialog, wait a few seconds then refresh the page.
Your workflow run should appear.
We'll eventually want to see a build (or collection of builds) which pass on
all OSs and all Python versions.
Once you have one, hang onto its URL - you'll need it when you submit the pull request.
If you can't get it to work - that's fine.
Open a pull request as a draft, show us what you've got and we'll try and help.
#### Triggering CI/CD from a terminal
If you find repeatedly entering the configuration into Github's **Run workflow** dialog arduous
then we also have a CLI script to launch it.
Run ``python scripts/cloud-test.py --help`` which should walk you through it.
You will have to enter all the details again but, thanks to the wonders of terminal history,
rerunning a configuration is just a case of pressing up then enter.
</details>
### Run Linter
We use `flake8` to enforce code-style.
`pip install flake8` if you haven't already then run it with the following.
```
flake8
```
No news is good news.
If it complains about your changes, do what it asks, then run it again.
If you don't understand the errors it comes up with, look up the error code
in each line (a capital letter followed by a number, e.g. `W391`).
**Please do not fix flake8 issues found in parts of the repository other than the bit that you are working on.** Not only is it very boring for you, but it is harder for maintainers to
review your changes because so many of them are irrelevant to the hook you are adding or changing.
### Add a news entry
Please read [news/README.txt](https://github.com/pyinstaller/pyinstaller-hooks-contrib/blob/master/news/README.txt) before submitting your pull request.
This will require you to know the pull request number before you make the pull request.
You can usually guess it by adding 1 to the number of [the latest issue or pull request](https://github.com/pyinstaller/pyinstaller-hooks-contrib/issues?q=sort%3Acreated-desc).
Alternatively, [submit the pull request](#submit-the-pull-request) as a draft,
then add, commit and push the news item after you know your pull request number.
### Summary
A brief checklist for before submitting your pull request:
* [ ] All new Python files have [the appropriate copyright header](#add-the-copyright-header).
* [ ] You have written a [news entry](#add-a-news-entry).
* [ ] Your changes [satisfy the linter](#run-linter) (run `flake8`).
* [ ] You have written tests (if possible) and [pinned the test requirement](#pin-the-test-requirement).
### Submit the pull request
Once you've done all the above, run `git push --set-upstream origin hook-for-foo` then go ahead and create a pull request.
If you're stuck doing any of the above steps, create a draft pull request and explain what's wrong - we'll sort you out...
Feel free to copy/paste commit messages into the Github pull request title and description.
If you've never done a pull request before, note that you can edit it simply by running `git push` again.
No need to close the old one and start a new one.
---
If you plan to contribute frequently or are interested in becoming a developer,
send an email to `legorooj@protonmail.com` to let us know.
| text/markdown | null | null | Legorooj | legorooj@protonmail.com | null | pyinstaller development hooks | [
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"License :: OSI Approved :: Apache Software License",
"License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Pytho... | [] | https://github.com/pyinstaller/pyinstaller-hooks-contrib | https://pypi.org/project/pyinstaller-hooks-contrib | >=3.8 | [] | [] | [] | [
"setuptools>=42.0.0",
"importlib_metadata>=4.6; python_version < \"3.10\"",
"packaging>=22.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T13:01:15.711254 | pyinstaller_hooks_contrib-2026.1.tar.gz | 171,504 | 95/eb/e1dd9a5348e4cf348471c0e5fd617d948779bc3199cf4edb134d8fceca91/pyinstaller_hooks_contrib-2026.1.tar.gz | source | sdist | null | false | 2514bbdfbd7feede6502fa3e45f4cf6e | a5f0891a1e81e92406ab917d9e76adfd7a2b68415ee2e35c950a7b3910bc361b | 95ebe1dd9a5348e4cf348471c0e5fd617d948779bc3199cf4edb134d8fceca91 | null | [
"LICENSE"
] | 496,289 |
2.1 | deva | 1.4.1 | data eval in future | .. image:: https://raw.githubusercontent.com/sostc/deva/master/deva.jpeg
:target: https://github.com/sostc/deva
:align: center
:alt: secsay.com
------
The ``deva`` lib makes it easy to write streaming data processing pipelines, do event-driven programming, and run async functions.
An example of a streaming process and web view
.. image:: https://raw.githubusercontent.com/sostc/deva/master/streaming.gif
:target: https://raw.githubusercontent.com/sostc/deva/master/streaming.gif
:align: center
:alt: streaming
.. code-block:: python
# coding: utf-8
from deva.page import page, render_template
from deva import *
# System log monitoring
s = from_textfile('/var/log/system.log')
s1 = s.sliding_window(5).map(concat('<br>'), name='system.log日志监控')
s.start()
# Real-time stock data
s2 = timer(func=lambda: NB('sample')['df'].sample(
5).to_html(), start=True, name='实时股票数据', interval=1)
# Run a system command continuously
command_s = Stream.from_process(['ping','baidu.com'])
s3 = command_s.sliding_window(5).map(concat('<br>'), name='系统持续命令ping baidu')
command_s.start()
s1.webview()
s2.webview()
s3.webview()
Deva.run()
Features
--------
License
-------
Copyright spark, 2018-2020.
Install
----------
.. code-block:: python
pip install deva
or
.. code-block:: python
pip3 install deva
Sample
------------
**If you run the code inside Jupyter, you do not need to add ``Deva.run()`` at the end of the code.**
bus
---------
**To use the bus across processes, Redis 5.0 must be installed.**
.. code-block:: python
from deva import *
# Write the current second into the bus every second
timer(start=True) >> bus
# Log the data coming from the bus
bus >> log
Deva.run()
.. code-block:: python
from deva import *
# Multiply the integers from the bus by 2, then log them
bus.filter(lambda x: isinstance(x, int)).map(lambda x: x*2) >> log
# Send all raw data coming from the bus to warn
bus >> warn
Deva.run()
Crawler
-----------------
.. code-block:: python
from deva import *
h = http()
h.map(lambda r: (r.url, r.html.search('<title>{}</title>')[0])) >> log
'http://www.518.is' >> h
s = Stream()
s.rate_limit(1).http(workers=20).map(lambda r: (
r.url, r.html.search('<title>{}</title>')[0])) >> warn
'http://www.518.is' >> s
Deva.run()
timer
-------------
.. code-block:: python
from deva import timer, log, Deva, warn
# By default, runs once per second and returns the current second
timer(start=True) >> log
# Returns 'yahoo' every 3 seconds; started below, with the results sent to warn
s = timer(func=lambda: 'yahoo', interval=3)
s.start()
s >> warn
# A timer can be stopped with its stop() method
# s.stop()
Deva.run()
# python3 run_every_n_seconds.py
# [2020-03-14 10:31:16.847544] INFO: log: 16
# WARNING:root:yahoo
# [2020-03-14 10:31:17.849576] INFO: log: 17
# [2020-03-14 10:31:18.853488] INFO: log: 18
# WARNING:root:yahoo
# [2020-03-14 10:31:19.855116] INFO: log: 19
# [2020-03-14 10:31:20.859602] INFO: log: 20
# [2020-03-14 10:31:21.865973] INFO: log: 21
# WARNING:root:yahoo
# [2020-03-14 10:31:22.868624] INFO: log: 22
scheduler
------------
.. code-block:: python
from deva import *
s = Stream.scheduler()
# A job that runs every 5 seconds, returning 'yahoo' into s
s.add_job(func=lambda: 'yahoo', seconds=5)
# A job that runs every 5 seconds, sending 'yamaha' to the bus and also returning 'yamaha' into s
s.add_job(func=lambda: 'yamaha' >> bus, seconds=5)
# Returns 'open' into s, runs once a day, starting at 09:25
s.add_job(name='open', func=lambda: 'open', days=1, start_date='2019-04-03 09:25:00')
# Sends '关闭' (close) to the bus and puts the return value 'close' into s, once a day starting at 15:30
def foo():
'关闭' >> bus
return 'close'
s.add_job(name='close', func=foo,
days=1, start_date='2019-04-03 15:30:00')
# Print all jobs
s.get_jobs() | pmap(lambda x: x.next_run_time) | ls | print
# Log all data placed into s
s >> log
bus.map(lambda x: x*2) >> warn
Deva.run()
# $ python3 time_scheduler/scheduler.py
# [datetime.datetime(2020, 3, 14, 18, 6, 17, 830399, tzinfo=<DstTzInfo 'Asia/Shanghai' CST+8:00:00 STD>), datetime.datetime(2020, 3, 14, 18, 6, 17, 830947, tzinfo=<DstTzInfo 'Asia/Shanghai' CST+8:00:00 STD>), datetime.datetime(2020, 3, 15, 9, 25, tzinfo=<DstTzInfo 'Asia/Shanghai' CST+8:00:00 STD>), datetime.datetime(2020, 3, 15, 15, 30, tzinfo=<DstTzInfo 'Asia/Shanghai' CST+8:00:00 STD>)]
# [2020-03-14 10:06:17.835725] INFO: log: yahoo
# [2020-03-14 10:06:17.839594] INFO: log: yamaha
# WARNING:root:yamahayamaha
# [2020-03-14 10:06:22.846482] INFO: log: yahoo
# [2020-03-14 10:06:22.851722] INFO: log: yamaha
# WARNING:root:yamahayamaha
# [2020-03-14 10:06:27.840823] INFO: log: yaho
workers
-------------
.. code-block:: python
from deva import bus, log, when, Deva
# Market-open task
@bus.route(lambda x: x == 'open')
def onopen(x):
'open' >> log
# Market-close task
@bus.route(lambda x: x == 'close')
def onclose(x):
'close' >> log
# An alternative way to write this
when('open', source=bus).then(lambda: print('Market is open!'))
Deva.run()
| null | spark | zjw0358@gmail.com | null | null | http://www.apache.org/licenses/LICENSE-2.0.html | null | [] | [] | https://github.com/sostc/deva | null | >=3.5 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.0.1 CPython/3.13.12 | 2026-02-18T13:00:34.373175 | deva-1.4.1.tar.gz | 145,557 | 7c/32/32c0de7d5441a5b6f3a0da657c952d500582c04fde824d53339a53200f70/deva-1.4.1.tar.gz | source | sdist | null | false | 2359f23f1ec48a02b894a238e1b692d8 | 5bddc0450a46f6326446c2fce43d70faa3af16cc3f3fdb334824e5442e9c91e3 | 7c3232c0de7d5441a5b6f3a0da657c952d500582c04fde824d53339a53200f70 | null | [] | 261 |
2.4 | trophy | 1.1.0 | A Python library for the Trophy API | # Trophy Python SDK
The Trophy Python SDK provides convenient access to the Trophy API from applications written in the
Python language.
Trophy provides APIs and tools for adding gamification to your application, keeping users engaged
through rewards, achievements, streaks, and personalized communication.
## Installation
You can install the package via pip:
```bash
pip install trophy
```
## Usage
The package needs to be configured with your account's API key which is available in the Trophy
dashboard.
```python
from trophy import EventRequestUser, TrophyApi
client = TrophyApi(
api_key="YOUR_API_KEY",
)
client.metrics.event(
key="words-written",
user=EventRequestUser(
id="18",
email="jk.rowling@harrypotter.com",
tz="Europe/London",
),
value=750.0,
)
```
## Documentation
See the [Trophy API Docs](https://docs.trophy.so) for more
information on the accessible endpoints.
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | null | [] | [] | [] | [
"requests",
"httpx",
"pydantic",
"typing",
"dataclasses",
"typing_extensions"
] | [] | [] | [] | [
"Homepage, https://github.com/trophyso/trophy-python",
"Repository, https://github.com/trophyso/trophy-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:00:29.861687 | trophy-1.1.0.tar.gz | 61,868 | 88/5d/3cc7b5b029c9a062833a6e7340699df7f183eb7bc9e7594f0d3d19b9423b/trophy-1.1.0.tar.gz | source | sdist | null | false | 592446a046cf936f36abd933f2d47c47 | a605291009661ace26250459f5ba4562fbb19f8fa8e33a11544d1a16f8647184 | 885d3cc7b5b029c9a062833a6e7340699df7f183eb7bc9e7594f0d3d19b9423b | null | [
"LICENSE"
] | 287 |
2.4 | persidict | 0.307.0 | Simple persistent key-value store for Python. Values are stored as files on a disk or as S3 objects on AWS cloud. | # persidict
[](https://pypi.org/project/persidict/)
[](https://github.com/pythagoras-dev/persidict)
[](https://github.com/pythagoras-dev/persidict/blob/master/LICENSE)
[](https://pypistats.org/packages/persidict)
[](https://persidict.readthedocs.io/en/latest/)
[](https://peps.python.org/pep-0008/)
[](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html)
[](https://github.com/pythagoras-dev/persidict/actions/workflows/ruff.yml)
Simple persistent dictionaries for distributed applications in Python.
## What Is It?
`persidict` is a lightweight persistent key-value store for Python.
It saves a dictionary to either a local directory or an AWS S3 bucket,
storing each value as its own file or S3 object. Keys are limited to
URL/filename-safe strings or sequences of strings.
In contrast to traditional persistent dictionaries (e.g., Python's `shelve`),
`persidict` is [designed](https://github.com/pythagoras-dev/persidict/blob/master/design_principles.md)
for distributed environments where multiple processes
on different machines concurrently work with the same store.
## Why Use It?
A small API surface with scalable storage backends and explicit concurrency controls.
### Features
* **Persistent Storage**: Save dictionaries to the local filesystem
(`FileDirDict`) or AWS S3 (`S3Dict`).
* **Standard Dictionary API**: Use `PersiDict` objects like standard
Python dictionaries (`__getitem__`, `__setitem__`, `__delitem__`,
`keys`, `values`, `items`).
* **Distributed Computing Ready**: Designed for concurrent access
in distributed environments.
* **Flexible Serialization**: Store values as pickles (`pkl`),
JSON (`json`), or plain text.
* **Type Safety**: Optionally enforce that all values in a dictionary are
instances of a specific class.
* **Generic Type Parameters**: Use `FileDirDict[MyClass]` for static type
checking with mypy/pyright.
* **Advanced Functionality**: Includes features like write-once dictionaries,
timestamping of entries, and tools for handling filesystem-safe keys.
* **ETag-Based Conditional Operations**: Optimistic concurrency helpers for
conditional reads, writes, deletes, and transforms based on per-key ETags.
* **Hierarchical Keys**: Keys can be sequences of strings,
creating a directory-like structure within the storage backend (see the sketch after this list).
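A minimal sketch of hierarchical keys, assuming a local `FileDirDict` (the key components and the relative addressing of the sub-dictionary view are illustrative):
```python
from persidict import FileDirDict

d = FileDirDict(base_dir="my_store")

# A key can be a sequence of URL/filename-safe strings.
d[("users", "alice", "theme")] = "dark"

# get_subdict() returns a view into all keys sharing a common prefix;
# keys are assumed to be addressed relative to that prefix.
users = d.get_subdict("users")
print(users[("alice", "theme")])  # expected: dark
```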
### Use Cases
`persidict` is well-suited for a variety of applications, including:
* **Caching**: Store results of expensive computations and retrieve them later,
even across different machines.
* **Configuration Management**: Manage application settings
in a distributed environment, allowing for easy updates and access.
* **Data Pipelines**: Share data between different stages
of a data processing pipeline.
* **Distributed Task Queues**: Store task definitions and results
in a shared location.
* **Memoization**: Cache function call results
in a persistent and distributed manner.
## Usage
### Storing Data on a Local Disk
The `FileDirDict` class saves your dictionary to a local folder.
Each key-value pair is stored as a separate file.
```python
from persidict import FileDirDict
# Create a dictionary that will be stored in the "my_app_data" folder.
# The folder will be created automatically if it doesn't exist.
app_settings = FileDirDict(base_dir="my_app_data")
# Add and update items just like a regular dictionary.
app_settings["username"] = "alex"
app_settings["theme"] = "dark"
app_settings["notifications_enabled"] = True
# Values can be any pickleable Python object.
app_settings["recent_projects"] = ["project_a", "project_b"]
print(f"Current theme is: {app_settings['theme']}")
# >>> Current theme is: dark
# The data persists!
# If you run the script again or create a new dictionary object
# pointing to the same folder, the data will be there.
reloaded_settings = FileDirDict(base_dir="my_app_data")
print(f"Number of settings: {len(reloaded_settings)}")
# >>> Number of settings: 4
print("username" in reloaded_settings)
# >>> True
```
### Storing Data in the Cloud (AWS S3)
For distributed applications, you can use **`S3Dict`** to store data in
an AWS S3 bucket. The usage is identical, allowing you to switch
between local and cloud storage with minimal code changes.
```python
from persidict import S3Dict
# Create a dictionary that will be stored in an S3 bucket.
# The bucket will be created if it doesn't exist.
cloud_config = S3Dict(bucket_name="my-app-config-bucket")
# Use it just like a FileDirDict.
cloud_config["api_key"] = "ABC-123-XYZ"
cloud_config["timeout_seconds"] = 30
print(f"API Key: {cloud_config['api_key']}")
# >>> API Key: ABC-123-XYZ
```
### Using Type Hints
`persidict` supports two complementary type safety mechanisms:
**Static type checking** with generic parameters (checked by mypy/pyright):
```python
from persidict import FileDirDict
# Create a typed dictionary
d: FileDirDict[int] = FileDirDict(base_dir="./data")
d["count"] = 42
val: int = d["count"] # Type checker knows this is int
# Works with any PersiDict implementation
from persidict import LocalDict
cache: LocalDict[str] = LocalDict()
```
**Runtime type enforcement** with `base_class_for_values` (checked via isinstance):
```python
d = FileDirDict(base_dir="./data", base_class_for_values=int)
d["count"] = 42 # OK
d["name"] = "Alice" # Raises TypeError at runtime
```
These mechanisms are kept separate because many type hints cannot be checked
at runtime. For example, `Callable[[int], str]`, `Literal["a", "b"]`,
`TypedDict`, and `NewType` have no `isinstance` equivalent. Use generics for
development-time safety; use `base_class_for_values` when you need runtime validation.
### Conditional Operations
Use conditional operations to avoid lost updates in concurrent scenarios. The
insert-if-absent pattern uses `ITEM_NOT_AVAILABLE` with `ETAG_IS_THE_SAME`.
```python
from persidict import FileDirDict, ITEM_NOT_AVAILABLE, ETAG_IS_THE_SAME
d = FileDirDict(base_dir="./data")
r = d.setdefault_if("token", default_value="v1", condition=ETAG_IS_THE_SAME, expected_etag=ITEM_NOT_AVAILABLE)
```
## Comparison With Python Built-in Dictionaries
### Similarities
`PersiDict` subclasses can be used like regular Python dictionaries, supporting:
* Get, set, and delete operations with square brackets (`[]`).
* Iteration over keys, values, and items.
* Membership testing with `in`.
* Length checking with `len()`.
* Standard methods like `keys()`, `values()`, `items()`, `get()`, `clear()`, `setdefault()`, and `update()`.
### Differences
* **Persistence**: Data is saved between program executions.
* **Keys**: Keys must be URL/filename-safe strings or their sequences.
* **Values**: Values must be serializable in the chosen format (pickle, JSON, or text). You can also constrain values to a specific class.
* **Order**: Insertion order is not preserved.
* **Additional Methods**: `PersiDict` provides extra methods not in the standard dict API, such as `timestamp()`, `etag()`, `random_key()`, `newest_keys()`, `subdicts()`, `discard()`, `get_params()`, and more.
* **Conditional Operations**: ETag-based compare-and-swap reads/writes with
structured results (see [Conditional Operations](#conditional-operations-etag-based)).
* **Special Values**: Use `KEEP_CURRENT` to avoid updating a value
and `DELETE_CURRENT` to delete a value during a write.
## Glossary
### Core Concepts
* **`PersiDict`**: The abstract base class that defines the common interface
for all persistent dictionaries in the package. It's the foundation
upon which everything else is built.
* **`NonEmptyPersiDictKey`**: A type hint that specifies what can be used
as a key in any `PersiDict`. It can be a `NonEmptySafeStrTuple`, a single string,
or a sequence of strings. When a `PersiDict` method requires a key as an input,
it will accept any of these types and convert them to
a `NonEmptySafeStrTuple` internally.
* **`NonEmptySafeStrTuple`**: The core data structure for keys.
It's an immutable, flat tuple of non-empty, URL/filename-safe strings,
ensuring that keys are consistent and safe for various storage backends.
When a `PersiDict` method returns a key, it will always be in this format.
### Main Implementations
* **`FileDirDict`**: A primary, concrete implementation of `PersiDict`
that stores each key-value pair as a separate file in a local directory.
* **`S3Dict`**: The other primary implementation of `PersiDict`,
which stores each key-value pair as an object in an AWS S3 bucket,
suitable for distributed environments.
### Key Parameters
* **`serialization_format`**: A key parameter for `FileDirDict` and `S3Dict` that
determines the serialization format used to store values.
Common options are `"pkl"` (pickle) and `"json"`.
Any other value is treated as plain text for string storage.
* **`base_class_for_values`**: An optional parameter for any `PersiDict`
that enforces type checking on all stored values, ensuring they are
instances of a specific class.
* **`append_only`**: A boolean parameter that makes items inside a `PersiDict` immutable,
preventing them from being modified or deleted.
* **`digest_len`**: An integer that specifies the length of a hash suffix
added to key components in `FileDirDict` to prevent collisions
on case-insensitive file systems.
* **`base_dir`**: A string specifying the directory path where a `FileDirDict`
stores its files. For `S3Dict`, this directory is used to cache files locally.
* **`bucket_name`**: A string specifying the name of the S3 bucket where
an `S3Dict` stores its objects.
* **`region`**: An optional string specifying the AWS region for the S3 bucket.
### Advanced and Supporting Classes
* **`WriteOnceDict`**: A wrapper that enforces write-once behavior
on any `PersiDict`, ignoring subsequent writes to the same key.
It also allows for random consistency checks to ensure subsequent
writes to the same key always match the original value.
* **`OverlappingMultiDict`**: An advanced container that holds
multiple `PersiDict` instances sharing the same storage
but with different `serialization_format`s.
* **`LocalDict`**: An in-memory `PersiDict` backed by
a RAM-only hierarchical store.
* **`EmptyDict`**: A minimal implementation of `PersiDict` that behaves
like a null device in the OS: accepts all writes, discards them,
and returns nothing on reads. Always appears empty regardless of
operations performed on it.
### Special "Joker" Values
* **`Joker`**: The base class for special command-like values that
can be assigned to a key to trigger an action instead of storing a value.
* **`KEEP_CURRENT`**: A "joker" value that, when assigned to a key,
ensures the existing value is not changed.
* **`DELETE_CURRENT`**: A "joker" value that deletes the key-value pair
from the dictionary when assigned to a key. Both jokers are illustrated below.
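A minimal sketch of the jokers in action, assuming they are importable from the top-level package like the other special values shown in this README:
```python
from persidict import FileDirDict, KEEP_CURRENT, DELETE_CURRENT

d = FileDirDict(base_dir="my_store")
d["color"] = "red"

d["color"] = KEEP_CURRENT    # no-op: the stored value stays "red"
d["color"] = DELETE_CURRENT  # deletes the "color" entry entirely
print("color" in d)          # expected: False
```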
### ETags and Conditional Flags
* **`ETagValue`**: Opaque per-key version string used for conditional operations.
* **`ETag conditions`**: `ANY_ETAG` (unconditional), `ETAG_IS_THE_SAME` (expected == actual),
`ETAG_HAS_CHANGED` (expected != actual).
* **`ITEM_NOT_AVAILABLE`**: Sentinel used when a key is missing (stands in for the ETag).
* **`VALUE_NOT_RETRIEVED`**: Sentinel indicating a value exists but was not fetched.
## API Highlights
`PersiDict` subclasses support the standard Python dictionary API, plus these additional methods (a short usage sketch follows the table):
| Method | Return Type | Description |
| :--- | :--- | :--- |
| `timestamp(key)` | `float` | Returns the POSIX timestamp (seconds since epoch) of a key's last modification. |
| `random_key()` | `SafeStrTuple \| None` | Selects and returns a single random key, useful for sampling from the dataset. |
| `oldest_keys(max_n=None)` | `list[SafeStrTuple]` | Returns a list of keys sorted by their modification time, from oldest to newest. |
| `newest_keys(max_n=None)` | `list[SafeStrTuple]` | Returns a list of keys sorted by their modification time, from newest to oldest. |
| `oldest_values(max_n=None)` | `list[Any]` | Returns a list of values corresponding to the oldest keys. |
| `newest_values(max_n=None)` | `list[Any]` | Returns a list of values corresponding to the newest keys. |
| `get_subdict(prefix_key)` | `PersiDict` | Returns a new `PersiDict` instance that provides a view into a subset of keys sharing a common prefix. |
| `subdicts()` | `dict[str, PersiDict]` | Returns a dictionary mapping all first-level key prefixes to their corresponding sub-dictionary views. |
| `discard(key)` | `bool` | Deletes a key-value pair if it exists and returns `True`; otherwise, returns `False`. |
| `get_params()` | `dict` | Returns a dictionary of the instance's configuration parameters, supporting the `mixinforge` API. |
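A short usage sketch of a few of these methods, under the same setup as the earlier examples:
```python
from persidict import FileDirDict

d = FileDirDict(base_dir="my_app_data")
d["alpha"] = 1
d["beta"] = 2

print(d.timestamp("alpha"))    # POSIX timestamp of the last modification
print(d.newest_keys(max_n=1))  # the most recently modified key
print(d.discard("gamma"))      # expected: False (key does not exist)
```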
### Conditional Operations (ETag-based)
PersiDict exposes explicit conditional operations for optimistic concurrency.
Each key has an ETag; missing keys use `ITEM_NOT_AVAILABLE`. Conditions are
`ANY_ETAG` (unconditional), `ETAG_IS_THE_SAME` (expected == actual), and
`ETAG_HAS_CHANGED` (expected != actual). Methods return a structured result
with whether the condition was satisfied, the actual ETag, the resulting ETag,
and the resulting value (or `VALUE_NOT_RETRIEVED` when value retrieval is
skipped).
Common methods and flags:
| Item | Kind | Notes |
| :--- | :--- | :--- |
| `get_item_if(key, *, condition, expected_etag, retrieve_value=IF_ETAG_CHANGED)` | Method | Conditional read. |
| `set_item_if(key, *, value, condition, expected_etag, retrieve_value=IF_ETAG_CHANGED)` | Method | Supports `KEEP_CURRENT` and `DELETE_CURRENT`. |
| `setdefault_if(key, *, default_value, condition, expected_etag, retrieve_value=IF_ETAG_CHANGED)` | Method | Insert-if-absent. |
| `discard_item_if(key, *, condition, expected_etag)` | Method | Conditional delete. |
| `transform_item(key, *, transformer, n_retries=6)` | Method | Retry loop for read-modify-write. |
| `ETagValue` | Type | NewType over `str`. |
| `ITEM_NOT_AVAILABLE` | Sentinel | Missing key marker. |
| `VALUE_NOT_RETRIEVED` | Sentinel | Value exists but was not fetched. |
Example: compare-and-swap loop
```python
from persidict import FileDirDict, ANY_ETAG, ETAG_IS_THE_SAME, ITEM_NOT_AVAILABLE
d = FileDirDict(base_dir="./data")
while True:
r = d.get_item_if("count", condition=ANY_ETAG, expected_etag=ITEM_NOT_AVAILABLE)
new_value = 1 if r.new_value is ITEM_NOT_AVAILABLE else r.new_value + 1
r2 = d.set_item_if("count", value=new_value, condition=ETAG_IS_THE_SAME, expected_etag=r.actual_etag)
if r2.condition_was_satisfied:
break
```
## Installation
The source code is hosted on GitHub at:
[https://github.com/pythagoras-dev/persidict](https://github.com/pythagoras-dev/persidict)
Binary installers for the latest released version are available at the Python package index at:
[https://pypi.org/project/persidict](https://pypi.org/project/persidict)
You can install `persidict` using `pip` or your favorite package manager:
```bash
pip install persidict
```
To include the AWS S3 extra dependencies:
```bash
pip install persidict[aws]
```
For development, including test dependencies:
```bash
pip install persidict[dev]
```
## Project Statistics
<!-- MIXINFORGE_STATS_START -->
| Metric | Main code | Unit Tests | Total |
|--------|-----------|------------|-------|
| Lines Of Code (LOC) | 7421 | 13782 | 21203 |
| Source Lines Of Code (SLOC) | 3287 | 8758 | 12045 |
| Classes | 37 | 8 | 45 |
| Functions / Methods | 296 | 788 | 1084 |
| Files | 17 | 128 | 145 |
<!-- MIXINFORGE_STATS_END -->
## Contributing
Contributions are welcome! Please see the contributing [guide](https://github.com/pythagoras-dev/persidict/blob/master/CONTRIBUTING.md) for more details
on how to get started, run tests, and submit pull requests.
For guidance on code quality, refer to:
* [Type hints guidelines](https://github.com/pythagoras-dev/persidict/blob/master/type_hints.md)
* [Unit testing guide](https://github.com/pythagoras-dev/persidict/blob/master/unit_tests.md)
## License
`persidict` is licensed under the MIT License. See the [LICENSE](https://github.com/pythagoras-dev/persidict/blob/master/LICENSE) file for more details.
## Key Contacts
* [Vlad (Volodymyr) Pavlov](https://www.linkedin.com/in/vlpavlov/)
| text/markdown | null | "Vlad (Volodymyr) Pavlov" <vlpavlov@ieee.org> | null | null | MIT | dicts, distributed, parallel, persistence | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11... | [] | null | null | >=3.11 | [] | [] | [] | [
"boto3",
"deepdiff",
"joblib",
"jsonpickle",
"lz4",
"mixinforge",
"pandas",
"uv",
"boto3; extra == \"aws\"",
"astropy; extra == \"dev\"",
"boto3; extra == \"dev\"",
"coverage; extra == \"dev\"",
"moto; extra == \"dev\"",
"mypy; extra == \"dev\"",
"networkx; extra == \"dev\"",
"numpy; e... | [] | [] | [] | [
"Home, https://github.com/pythagoras-dev/persidict",
"Docs, https://persidict.readthedocs.io/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T13:00:17.531541 | persidict-0.307.0.tar.gz | 184,289 | 46/e4/30beff33b0dd77500dc2fac48a9994bbc35af7746fefa3526a552945ab15/persidict-0.307.0.tar.gz | source | sdist | null | false | 724fc4dc748e3ee9c4cc900bb63192f0 | ade074f77f60a43790d2d64c6c879df89528a35ae1ef6d31e33c054635c13e73 | 46e430beff33b0dd77500dc2fac48a9994bbc35af7746fefa3526a552945ab15 | null | [
"LICENSE"
] | 306 |
2.4 | pyfortis | 0.0.1 | Config-driven risk engine for trading systems | # PyFortis
**Config-driven risk engine for trading systems.**
PyFortis lets you define limits, metrics, and circuit breakers in YAML, then validate orders and run risk checks through a simple Python API. It supports stateless validation, stateful monitors, and a full orchestrator with pluggable stores and handlers.
## Install
```bash
pip install pyfortis
```
Optional extras (install what you need):
```bash
pip install pyfortis[metrics] # NumPy/SciPy for VaR, drawdown, etc.
pip install pyfortis[validation] # Pydantic for schema validation
pip install pyfortis[full] # All optional dependencies
```
| Extra | Purpose |
| ------------ | --------------------------------- |
| `validation` | Pydantic-based config validation |
| `db` | SQLAlchemy + Alembic |
| `api` | FastAPI + Uvicorn |
| `kafka` | Confluent Kafka |
| `metrics` | NumPy/SciPy for risk metrics |
| `redis` | Redis client |
| `full` | All of the above |
## Quick start
Load an engine from a YAML config, then validate orders:
```python
from pathlib import Path
from pyfortis import Order, RiskEngine, Side
engine = RiskEngine.from_yaml(Path("risk_config.yaml"))
order = Order(
order_id="o-001",
instrument="AAPL",
side=Side.BUY,
quantity=100,
price=150.0,
portfolio="default",
)
result = engine.validate_order(order)
print(result.verdict.value, result.message)
```
See [examples/basic_usage.py](examples/basic_usage.py) and [examples/risk_config.yaml](examples/risk_config.yaml) for a full walkthrough.
## Concepts
- **Limits** — Pre-trade checks (position size, concentration, price tolerance, notional, etc.) with configurable severity (INFO, WARNING, CRITICAL, KILL).
- **Metrics** — Post-trade or periodic risk metrics (VaR, CVaR, drawdown, volatility, etc.) with optional breach thresholds.
- **Circuit breakers** — Halt trading when conditions are met (e.g. daily PnL loss, drawdown).
- **Three layers**:
- **Engine** — Stateless: load config, validate single orders without position context.
- **Monitor** — Stateful: hold positions in memory, validate orders and check circuit breakers.
- **Orchestrator** — Persistent: load/save positions and breaches via stores, run metrics, dispatch handlers (log, notify, block, etc.).
Config supports env-var substitution (e.g. `${VAR}` or `${VAR:-default}`) and a breach escalation policy per severity.
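As an illustration of the substitution syntax only, a hypothetical config fragment might look like the sketch below. The keys and structure here are invented for illustration and are not the actual schema; see [examples/risk_config.yaml](examples/risk_config.yaml) for a real config:
```yaml
# Hypothetical fragment -- field names are illustrative only.
limits:
  max_position_size:
    threshold: ${MAX_POSITION:-10000}  # falls back to 10000 if MAX_POSITION is unset
    severity: CRITICAL
```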
## Links
- [Changelog](CHANGELOG.md)
- [Contributing](CONTRIBUTING.md)
- [License](LICENSE)
| text/markdown | null | StatFYI <contact@statfyi.com> | null | null | null | circuit-breaker, finance, limits, pre-trade, risk, trading | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Pyth... | [] | null | null | >=3.11 | [] | [] | [] | [
"pyyaml>=6.0",
"fastapi>=0.100; extra == \"api\"",
"uvicorn>=0.20; extra == \"api\"",
"alembic>=1.10; extra == \"db\"",
"sqlalchemy>=2.0; extra == \"db\"",
"build>=1.0; extra == \"dev\"",
"hypothesis>=6.80; extra == \"dev\"",
"mypy>=1.5; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pyt... | [] | [] | [] | [
"Homepage, https://github.com/your-org/pyfortis",
"Documentation, https://github.com/your-org/pyfortis#readme",
"Repository, https://github.com/your-org/pyfortis",
"Issues, https://github.com/your-org/pyfortis/issues",
"Changelog, https://github.com/your-org/pyfortis/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-18T12:59:26.515778 | pyfortis-0.0.1.tar.gz | 36,143 | f8/22/208f31e356a55a89f641366f8d237e6bba848271f06e357528ee17988f68/pyfortis-0.0.1.tar.gz | source | sdist | null | false | e8feaf262072559318b430b4353ef03f | b10367702b3653aceaa11a12951d702213a39adbe1c4c20f6a01d86dc61ab8e2 | f822208f31e356a55a89f641366f8d237e6bba848271f06e357528ee17988f68 | MIT | [
"LICENSE"
] | 289 |
2.4 | bslog | 1.4.2 | CLI tool for querying Better Stack logs via ClickHouse SQL | # bslog - Better Stack Log CLI
[](https://pypi.org/project/bslog/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
A powerful, intuitive CLI tool for querying Better Stack logs with GraphQL-inspired syntax. Query your logs naturally without memorizing complex SQL or API endpoints.
This is a Python adaptation of the original TypeScript [bslog](https://github.com/steipete/bslog) by @steipete.
## Features
- **GraphQL-inspired query syntax** - Write queries that feel natural and are easy to remember
- **Simple commands** - Common operations like `tail`, `errors`, `search` work out of the box
- **Smart filtering** - Filter by level, subsystem, time ranges, or any JSON field
- **Beautiful output** - Color-coded, formatted logs that are easy to read
- **Multiple formats** - Export as JSON, CSV, or formatted tables
- **Real-time following** - Tail logs in real-time with `-f` flag
- **Query history** - Saves your queries for quick re-use
- **Configurable** - Set defaults for source, output format, and more
- **jq integration** - Pipe output through jq for advanced filtering
## Installation
### From PyPI (Recommended)
```bash
pip install bslog
# Or with uv
uv add bslog
```
### From Source
```bash
git clone <repo-url>
cd bslog-python
uv sync
```
### Prerequisites
- **Python** >= 3.13
## Authentication Setup
Better Stack uses two different authentication systems, and **both are required** for full functionality:
### 1. Telemetry API Token (Required)
Used for listing sources, getting source metadata, and resolving source names.
1. Log into [Better Stack](https://betterstack.com)
2. Navigate to **Settings > API Tokens**
3. Create or copy your **Telemetry API token**
4. Add to your shell configuration:
```bash
export BETTERSTACK_API_TOKEN="your_telemetry_token_here"
```
### 2. Query API Credentials (Required for querying logs)
Used for reading log data and executing SQL queries.
1. Go to Better Stack > **Logs > Dashboards**
2. Click **"Connect remotely"**
3. Click **"Create credentials"**
4. Add to your shell configuration:
```bash
export BETTERSTACK_QUERY_USERNAME="your_username_here"
export BETTERSTACK_QUERY_PASSWORD="your_password_here"
```
Then reload your shell: `source ~/.zshrc`
## Quick Start
```bash
# List all your log sources
bslog sources list
# Set your default source
bslog config source my-app-production
# Get last 100 logs
bslog tail
# Get last 50 error logs
bslog errors -n 50
# Search for specific text
bslog search "user authentication failed"
# Follow logs in real-time
bslog tail -f
# Get logs from the last hour
bslog tail --since 1h
```
## GraphQL-Inspired Query Syntax
```bash
# Simple query with field selection
bslog query "{ logs(limit: 100) { dt, level, message } }"
# Filter by log level
bslog query "{ logs(level: 'error', limit: 50) { * } }"
# Time-based filtering
bslog query "{ logs(since: '1h') { dt, message, error } }"
# Complex filters
bslog query "{
logs(
level: 'error',
subsystem: 'payment',
since: '1h',
limit: 200,
where: { environment: 'prod' }
) {
dt, message, userId
}
}"
```
## Command Reference
### `tail` - Stream logs
```bash
bslog tail [source] [options]
  -n, --limit <number>     Number of logs (default: 100)
  -l, --level <level>      Filter by log level
  --subsystem <name>       Filter by subsystem
  -f, --follow             Follow log output
  --interval <ms>          Polling interval (default: 2000)
  --since <time>           Time lower bound (e.g., 1h, 2d)
  --until <time>           Time upper bound
  --format <type>          Output format (json|table|csv|pretty)
  --fields <names>         Comma-separated list of fields
  --sources <names>        Comma-separated sources to merge
  --where <filter>         Filter JSON fields (field=value, repeatable)
  --jq <filter>            Pipe JSON through jq
  -v, --verbose            Show SQL query
```
### `errors` / `warnings` - Show error/warning logs
```bash
bslog errors [source] [options] # Same options as tail
bslog warnings [source] [options]
```
### `search` - Search logs
```bash
bslog search <pattern> [source] [options]
```
### `trace` - Follow a request across sources
```bash
bslog trace <requestId> [source] [options]
```
### `query` - GraphQL-inspired queries
```bash
bslog query <query> [-s source] [-f format] [-v]
```
### `sql` - Raw ClickHouse SQL
```bash
bslog sql <sql> [-f format] [-v]
```
### `sources list` / `sources get`
```bash
bslog sources list [-f format]
bslog sources get <name> [-f format]
```
### `config set` / `config show` / `config source`
```bash
bslog config set <key> <value> # Keys: source, limit, format, logLevel, queryBaseUrl
bslog config show [-f format]
bslog config source <name> # Shorthand for config set source
```
## Source Aliases
- `dev`, `development` → `sweetistics-dev`
- `prod`, `production` → `sweetistics`
- `staging` → `sweetistics-staging`
- `test` → `sweetistics-test`
## Time Format Reference
- **Relative**: `1h`, `30m`, `2d`, `1w`
- **ISO 8601**: `2024-01-15T10:30:00Z`
- **Date only**: `2024-01-15`
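As an illustration, resolving the relative offsets against the current time takes only a few lines; this is a hedged sketch, not bslog's actual parsing code:
```python
import re
from datetime import datetime, timedelta, timezone

_UNITS = {"m": "minutes", "h": "hours", "d": "days", "w": "weeks"}

def parse_since(value: str) -> datetime:
    """Turn '1h' / '2d' style offsets into an absolute UTC timestamp."""
    match = re.fullmatch(r"(\d+)([mhdw])", value)
    if match:
        amount, unit = int(match.group(1)), _UNITS[match.group(2)]
        return datetime.now(timezone.utc) - timedelta(**{unit: amount})
    return datetime.fromisoformat(value)  # ISO 8601 or date-only input
```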
## Output Formats
- **`pretty`** - Color-coded human-readable output (default)
- **`json`** - Standard JSON, good for piping
- **`table`** - Formatted table output
- **`csv`** - CSV for spreadsheet import
## Development
```bash
# Install dev dependencies
uv sync
# Run tests
uv run pytest --cov=bslog -v
# Lint
uv run ruff check .
# Type check
uv run mypy bslog
```
## License
MIT License - see [LICENSE](LICENSE) for details.
## Acknowledgments
This is a Python adaptation of the original TypeScript [bslog](https://github.com/steipete/bslog) by @steipete.
- Built with [Typer](https://typer.tiangolo.com/) and [Rich](https://rich.readthedocs.io/)
- HTTP client: [httpx](https://www.python-httpx.org/)
- Powered by [Better Stack](https://betterstack.com) logging infrastructure
- Inspired by GraphQL's intuitive query syntax
| text/markdown | null | Ondra Zahradnik <ondra.zahradnik@gmail.com> | null | null | null | betterstack, cli, clickhouse, logs | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.13",
"Topic :: System :: Logging",
"Topic :: Utilities"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.28.0",
"python-dotenv>=1.1.0",
"typer>=0.15.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:58:56.247088 | bslog-1.4.2.tar.gz | 57,628 | 29/d5/a6c5ce1af87167aa49449023d43df2f4dd0847a8500fc89edf3587d53633/bslog-1.4.2.tar.gz | source | sdist | null | false | 60446bba8ad75c9cd32ceab1f542f25d | 5231493564bdc9c3d4ec65d1844885b6d72f4a038808669bd49c3aae3a9fa4c5 | 29d5a6c5ce1af87167aa49449023d43df2f4dd0847a8500fc89edf3587d53633 | MIT | [
"LICENSE"
] | 575 |
2.3 | proxmoxer-stubs | 0.3.3 | stub files and type containers for proxmoxer | # proxmoxer-stubs
Type annotations for data obtained by `proxmoxer.ProxmoxAPI` calls.
## Usage
### Annotations only
```python
import typing
import proxmoxer
api = proxmoxer.ProxmoxAPI()
typing.reveal_type(api.cluster.replication("some-id").get())
```
```
replication.py:6: note: Revealed type is "TypedDict('proxmoxer_types.v9.core.ProxmoxAPI.Cluster.Replication.Id._Get.TypedDict', {'comment'?: builtins.str, 'digest'?: builtins.str, 'disable'?: builtins.bool, 'guest': builtins.int, 'id': builtins.str, 'jobnum': builtins.int, 'rate'?: builtins.float, 'remove_job'?: builtins.str, 'schedule'?: builtins.str, 'source'?: builtins.str, 'target': builtins.str, 'type': builtins.str})"
Success: no issues found in 1 source file
```
```python
reveal_type(proxmoxer.ProxmoxAPI().cluster.firewall.groups("foo")(42).get().get("log"))
```
```
log.py:4: note: Revealed type is "Literal['emerg'] | Literal['alert'] | Literal['crit'] | Literal['err'] | Literal['warning'] | Literal['notice'] | Literal['info'] | Literal['debug'] | Literal['nolog'] | None"
Success: no issues found in 1 source file
```
For a legacy REST API:
```python
import typing

if typing.TYPE_CHECKING:
    import proxmoxer_types.v8 as proxmoxer
else:
    import proxmoxer

api = proxmoxer.ProxmoxAPI()
typing.reveal_type(api.cluster.replication("some-id").get())
```
```
legacy.py:10: note: Revealed type is "builtins.dict[builtins.str, Any]"
Success: no issues found in 1 source file
```
#### Dependencies
- For type checking: `proxmoxer-stubs`, `pydantic`
- At runtime: None
### Wrapper mode
Example from [proxmoxer](https://github.com/proxmoxer/proxmoxer):
```python
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI(
    "proxmox_host", user="admin@pam", password="secret_word", verify_ssl=False
)

for node in proxmox.nodes.get():
    for vm in proxmox.nodes(node["node"]).qemu.get():
        print(f"{vm['vmid']}. {vm['name']} => {vm['status']}")
```
The above works the same in wrapper mode:
```python
from proxmoxer_types.v9 import ProxmoxAPI

proxmox = ProxmoxAPI(
    "proxmox_host", user="admin@pam", password="secret_word", verify_ssl=False
)

for node in proxmox.nodes.get():
    for vm in proxmox.nodes(node["node"]).qemu.get():
        print(f"{vm['vmid']}. {vm['name']} => {vm['status']}")
```
The returned objects in both cases above are built-in types, possibly nested in
`list` or `dict`. Working with those can be inconvenient, since optional
`dict` keys may be absent entirely. For convenience, the following is possible:
```python
for node in proxmox.nodes.get.model():
    for vm in proxmox.nodes(node.node).qemu.get.model():
        print(f"{vm.vmid}. {vm.name} => {vm.status}")
```
Whenever a `method(...)` call - `method` being `get`, `post`, `put`, `delete`,
`set` or `create` - returns a structure that is or contains a
`TypedDict`-annotated `dict`, `method.model(...)` returns a
`pydantic.BaseModel` instead.
Values of optional fields are possibly `None` in the model instance.
#### Additional dependencies
- For type checking: `proxmoxer-stubs`, `pydantic`
- At runtime: `proxmoxer-stubs`, `pydantic`
## Caveats
`proxmoxer.ProxmoxAPI` has several ways of expressing the same endpoint due to its magic implementation.
```
>>> api.cluster.replication("some-id")
ProxmoxResource (/cluster/replication/some-id)
>>>
>>> api("cluster/replication/some-id")
ProxmoxResource (/cluster/replication/some-id)
>>>
>>> api("cluster")("replication")("some-id")
ProxmoxResource (/cluster/replication/some-id)
```
Only the first form will produce useful typing insights.
Parameters to `get`, `post`, `put`, `delete`, `set`, `create` are currently not individually annotated.
The [API documentation](https://pve.proxmox.com/pve-docs/api-viewer/) is
occasionally wrong or incomplete. In wrapper mode, `pydantic` will `raise` a
`ValidationError` if the documentation is wrong.
| text/markdown | credativ GmbH | null | null | null | GPL-3.0-or-later | null | [
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Py... | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Bug Tracker, https://github.com/credativ/proxmoxer-stubs/issues",
"Repository, https://github.com/credativ/proxmoxer-stubs"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-18T12:58:36.994265 | proxmoxer_stubs-0.3.3.tar.gz | 271,278 | aa/8d/5a29fddd8551ada2da2ad1cd152bef4aa542d21ade1224602da7d0ec76cb/proxmoxer_stubs-0.3.3.tar.gz | source | sdist | null | false | 32c498477ec083392fe87f6a94a5581a | 57a046c69ca1a176eca8df73991ddc090ef0fac176088b9df9a5d7cb73351401 | aa8d5a29fddd8551ada2da2ad1cd152bef4aa542d21ade1224602da7d0ec76cb | null | [] | 266 |
2.1 | pybotron | 1.2.2 | a package that makes simulating robots on python more like matlab | # pybotron
Welcome to pybotron!
Yet another Python robotics library.
But this one is different! It was built around one idea:
#### How to make coding animations in Python as fast as MATLAB
So you can test your algorithms in an isolated environment without the complexity and setup time of things like ROS, while also leveraging the power of Python's libraries and writing code that can be plugged directly into ROS.
## Features
Custom classes with animation friendly methods, including:
- SimpleRobot: a minimal robot class that allows you to create the wireframe of a revolute joint robot. With convenient functionalities such as calculating the Jacobian and plotting. Includes a UR3e subclass as an example.
- Camera: a minimal class that allows for simulating the projection behavior of a camera to test image-based visual servoing methods.
- PluckerLine: with convenient methods for constructing and transforming.
- Quaternion: with extremely convenient and short operations syntax.
- DualQuaternion: with extremely convenient and short operations syntax, as well as methods for change of form.
Mathematical functions (mainly linear algebra) that should have had a simple one-word function in some famous package (but they don't...), including:
- Skew symmetric matrix (axiator) of a vector
- Rodrigues formula
- Adjoint transform of a matrix
- Vector to Matrix form of a twist and vice-versa
- Image Jacobian (Interaction matrix)
And much much more!
**NEW:** pybotron now has a line's interaction matrix!
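As a taste of what the linear-algebra helpers compute, here is a plain-NumPy sketch of the axiator and the Rodrigues formula; pybotron's actual function names and signatures may differ:
```python
import numpy as np

def skew(v):
    """Skew-symmetric (axiator) matrix of a 3-vector: skew(a) @ b == np.cross(a, b)."""
    x, y, z = v
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def rodrigues(axis, theta):
    """Rotation matrix about a unit axis by angle theta (Rodrigues' formula)."""
    K = skew(axis)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```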
## Examples
The package includes a handful of demos under the ``/examples`` folder.
I will be working on and off on a more thorough documentation. Feel free to dig into the code since it's really simple.
## Installation
### From PyPI
```bash
pip install pybotron
```
### From source
Position yourself in the folder where you want to clone the repo and do:
```bash
git clone https://github.com/higifnr/pybotron.git
```
Then do:
```bash
pip install ./pybotron
```
Enjoy.
## Roadmap
- Better documentation
- ROS1/2 implementation (this would be **EXTREMELY** convenient)
- Universal functions that work on all entities
| text/markdown | null | higifnr <abdelhakimboubaker6@gmail.com> | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"scipy",
"matplotlib",
"opencv-python"
] | [] | [] | [] | [
"Homepage, https://github.com/higifnr/pybotron",
"Repository, https://github.com/higifnr/pybotron"
] | twine/6.1.0 CPython/3.8.10 | 2026-02-18T12:58:19.789040 | pybotron-1.2.2.tar.gz | 21,022 | a7/a6/a80d04515dd217b3eb140a5d43e363245a29fbcf0735d8a41b92c31745b9/pybotron-1.2.2.tar.gz | source | sdist | null | false | 8ed3b22965cf650225d8cd3fae8e55cc | edf09f028588cda032fffc64af30e3e17d69d0304b21dda12bcf370a8779607b | a7a6a80d04515dd217b3eb140a5d43e363245a29fbcf0735d8a41b92c31745b9 | null | [] | 262 |
2.4 | moto-ext | 5.1.24.dev0 | A library that allows you to easily mock out tests based on AWS infrastructure | # moto-ext
Fork of [Moto](https://github.com/getmoto/moto) with patches and fixes for [LocalStack for AWS](https://www.localstack.cloud/localstack-for-aws).
| text/markdown | Steve Pulec | spulec@gmail.com | null | null | Apache-2.0 | aws ec2 s3 boto3 mock | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: So... | [] | https://github.com/localstack/moto | null | >=3.9 | [] | [] | [] | [
"boto3>=1.9.201",
"botocore!=1.35.45,!=1.35.46,>=1.20.88",
"cryptography>=35.0.0",
"requests>=2.5",
"xmltodict",
"werkzeug!=2.2.0,!=2.2.1,>=0.5",
"python-dateutil<3.0.0,>=2.1",
"responses!=0.25.5,>=0.15.0",
"Jinja2>=2.10.1",
"antlr4-python3-runtime; extra == \"all\"",
"aws-sam-translator<=1.103.... | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T12:57:55.987077 | moto_ext-5.1.24.dev0.tar.gz | 11,940,241 | c9/46/7d2803ca817c5ffaeab0791f3f8cdaf551332a7aacf20158e3e089821d03/moto_ext-5.1.24.dev0.tar.gz | source | sdist | null | false | 7b66f4766258c2147433865b8d5e588c | 676df510efdbcb4d8e532f7e4a076d4973ff95e5eed7faf6e5fee771c1d83df9 | c9467d2803ca817c5ffaeab0791f3f8cdaf551332a7aacf20158e3e089821d03 | null | [
"LICENSE",
"AUTHORS.md"
] | 241 |
2.4 | Topsis-Roushni-102316119 | 1.0.1 | TOPSIS Implementation using Python | # TOPSIS Implementation in Python
### Topsis-Roushni-102316119
---
## 📌 Overview
This project implements **TOPSIS (Technique for Order Preference by Similarity to Ideal Solution)** — a multi-criteria decision-making (MCDM) method used to rank alternatives based on their distance from the ideal best and ideal worst solutions.
The project includes:
- ✔ Command Line Interface (CLI)
- ✔ Complete input validation & error handling
- ✔ Python package uploaded to PyPI
- ✔ Proper packaging using setuptools
- ✔ Public GitHub repository
---
## 🧠 Mathematical Steps of TOPSIS
1. Construct the decision matrix
2. Normalize the matrix
3. Multiply by weights
4. Determine Ideal Best and Ideal Worst
5. Calculate Euclidean distances
6. Compute TOPSIS score
7. Rank alternatives
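For reference, the whole procedure fits in a few lines of NumPy. This is an illustrative sketch of the algorithm, not necessarily the package's exact implementation:
```python
import numpy as np

def topsis(matrix, weights, impacts):
    """Score and rank alternatives; rows are alternatives, columns are criteria."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.array([i == "+" for i in impacts])

    # Steps 2-3: vector-normalize each column, then apply the weights
    V = X / np.linalg.norm(X, axis=0) * w

    # Step 4: ideal best / ideal worst per criterion
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))

    # Step 5: Euclidean distances of every alternative to both ideals
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)

    # Step 6: closeness to the ideal solution, in [0, 1]
    score = d_worst / (d_best + d_worst)

    # Step 7: rank 1 goes to the highest score
    order = score.argsort()[::-1]
    rank = np.empty_like(order)
    rank[order] = np.arange(1, len(score) + 1)
    return score, rank
```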
---
## 📦 Installation (From PyPI)
Install directly from PyPI:
```bash
pip install Topsis-Roushni-102316119
```
---
## 💻 Usage
After installation, run:
```bash
topsis <InputDataFile> <Weights> <Impacts> <OutputResultFileName>
```
### Example:
```bash
topsis data.csv "1,1,1,2" "+,+,-,+" output.csv
```
---
## 📄 Input Requirements
- Input file must contain at least **three columns**
- First column → Alternatives
- Remaining columns → Numeric values only
- Number of weights = number of impacts
- Impacts must be either `+` (benefit) or `-` (cost)
- Weights and impacts must be comma-separated
---
## 📊 Output
The output CSV file contains:
- Original data
- TOPSIS Score
- Rank (1 = Best)
---
## 🔒 Error Handling Implemented
The program checks for:
- Incorrect number of parameters
- File not found
- Insufficient columns
- Non-numeric values
- Mismatch in weights and impacts
- Invalid impact symbols
---
## 🚀 Live Package
🔗 PyPI Link:
https://pypi.org/project/Topsis-Roushni-102316119/
---
## 👩💻 Author
**Roushni Sharma**
B.Tech Student
Thapar Institute of Engineering and Technology
| text/markdown | Roushni Sharma | null | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [
"pandas",
"numpy"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.1 | 2026-02-18T12:57:14.772068 | topsis_roushni_102316119-1.0.1.tar.gz | 3,428 | d9/dc/0144d8cd9024d9ac2cea7be94e8257a3e3626ed1c81eac4bfaf96bf7c5de/topsis_roushni_102316119-1.0.1.tar.gz | source | sdist | null | false | c5d4791fdb5f61ed2fe16030f37e54f5 | 7da9e463254c903de0ddb154cab83d0a3d713247f2d515a30573db5c05527bb0 | d9dc0144d8cd9024d9ac2cea7be94e8257a3e3626ed1c81eac4bfaf96bf7c5de | null | [] | 0 |
2.4 | quantplay | 2.1.100 | This python package will be stored in AWS CodeArtifact | # Quantplay Alpha playground
Install some dependencies:
```shell script
pip install wheel twine
```
**Code Formatting**
https://github.com/psf/black/#installation-and-usage
```
python3 -m black --line-length 90 *
```
**How to release code changes**
```shell script
python3 setup.py test
python3 setup.py sdist bdist_wheel
```
## Push to AWS CodeArtifact
```
aws codeartifact login --tool twine --domain quantplay --repository codebase
twine upload --repository codeartifact dist/*
```
| null | null | null | null | null | MIT | null | [] | [] | null | null | null | [] | [] | [] | [
"setuptools",
"path",
"pyotp",
"retrying",
"boto3",
"s3fs",
"shortuuid",
"numpy",
"websocket-client",
"smartapi-python==1.5.0",
"logzero",
"selenium",
"requests",
"pandas",
"pyarrow",
"polars",
"breeze_connect==1.0.57",
"redis[hiredis]",
"async-timeout",
"kiteconnect",
"pya3=... | [] | [] | [] | [] | twine/6.0.1 CPython/3.10.16 | 2026-02-18T12:57:12.840934 | quantplay-2.1.100.tar.gz | 118,204 | 0c/80/72adcff6c2f110a253903472fee44f46b352a2f8318a1c650e858eaad9bc/quantplay-2.1.100.tar.gz | source | sdist | null | false | e8aa6a5fdbebabd37084580068d450f4 | 9fbb100cc8a626463dc02bfdb39e9563880c2db5bd2fdbfcbc30c2772847b9af | 0c8072adcff6c2f110a253903472fee44f46b352a2f8318a1c650e858eaad9bc | null | [] | 268 |
2.4 | swanlab-mcp | 0.0.2 | MCP (Model Context Protocol) server support for SwanLab | <div align="center">
# SwanLab MCP Server
[![][pypi-version-shield]][pypi-version-shield-link] [![][license-shield]][license-shield-link]
</div>
> A Model Context Protocol (MCP) server implementation for SwanLab, combining SwanLab-OpenAPI & FastMCP.
## ✨ Features
### Core Features
- **Workspace Queries** - List accessible workspaces and enumerate workspace projects
- **Project Queries** - List projects and inspect a specific project with run summaries
- **Run Queries** - Inspect runs with normalized fields (`id`, `state`, `profile`, `user`)
- **Metric Queries** - Fetch metric tables with consistent `columns`, `rows`, and `total`
- **API Integration** - Provide read-only access through SwanLab OpenAPI (`swanlab.Api`)
### Tech Stack
- **Language**: Python 3.12+
- **Core Framework**: FastMCP (v2.14.4+)
- **API Client**: SwanLab SDK
- **Config Management**: Pydantic Settings
## 🚀 Quick Start
### ❗️Configuration
Add the following configuration to your MCP client's config:
```json
{
  "mcpServers": {
    ...
    "swanlab-mcp": {
      "command": "uv",
      "args": ["run", "swanlab_mcp", "--transport", "stdio"],
      "env": {
        "SWANLAB_API_KEY": "your_api_key_here"
      }
    }
  }
}
```
For Claude Code users, you can configure it like this:
```bash
claude mcp add --env SWANLAB_API_KEY=<your_api_key> -- swanlab_mcp uv run swanlab_mcp --transport stdio
```
### Prerequisites
- Python >= 3.12
- SwanLab API Key (get it from [SwanLab](https://swanlab.cn))
### Installation
```bash
# Using uv (recommended)
uv sync
# Or using pip
pip install -e .
```
### Configuration
#### Environment Variables
Create a `.env` file and configure your API key:
```bash
cp .env.template .env
```
Edit the `.env` file:
```env
SWANLAB_API_KEY=your_api_key_here
```
### Running
```bash
# Using stdio transport (default)
python -m swanlab_mcp
# Or using CLI
python -m swanlab_mcp --transport stdio
# Check version
python -m swanlab_mcp --version
```
### Usage
After configuration, restart Claude Desktop to interact with SwanLab via the MCP protocol.
Available Tools:
- `swanlab_list_workspaces` - List workspaces
- `swanlab_get_workspace` - Get workspace details
- `swanlab_list_projects_in_workspace` - List projects in one workspace
- `swanlab_list_projects` - List projects
- `swanlab_get_project` - Get project details
- `swanlab_list_runs_in_project` - List runs in one project
- `swanlab_list_runs` - List runs with optional filters (`state`, `config.*`)
- `swanlab_get_run` - Get run details
- `swanlab_get_run_config` - Get run config
- `swanlab_get_run_metadata` - Get run metadata
- `swanlab_get_run_requirements` - Get run requirements
- `swanlab_get_run_metrics` - Get run metric table
Resource Definitions:
- **workspace**: collection of projects (`PERSON` or `TEAM`) identified by `username`.
- **project**: collection of runs identified by `path = username/project_name`.
- **run**: single experiment identified by `path = username/project_name/experiment_id`.
- **metric**: tabular run history returned as `{path, keys, x_axis, sample, columns, rows, total}`.
## 🛠️ Development
### Code Formatting
```bash
# Using Makefile
make format
# Or manually
uvx isort . --skip-gitignore
uvx ruff format . --quiet
```
### Lint Check
```bash
uvx ruff check .
```
### Pre-commit Hooks
```bash
bash scripts/install-hooks.sh
```
## 📚 References & Acknowledgements
- [SwanLab](https://github.com/SwanHubX/SwanLab)
- [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro)
- [FastMCP v2](https://github.com/jlowin/fastmcp)
- [modelscope-mcp-server](https://github.com/modelscope/modelscope-mcp-server)
- [TrackIO-mcp-server](https://github.com/fcakyon/trackio-mcp)
- [Simple-Wandb-mcp-server](https://github.com/tsilva/simple-wandb-mcp-server)
## 📄 License
MIT License
[license-shield]: https://img.shields.io/badge/license-MIT%202.0-e0e0e0?labelColor=black&style=flat-square
[license-shield-link]: https://github.com/Nexisato/SwanLab-MCP/blob/main/LICENSE
[pypi-version-shield]: https://img.shields.io/pypi/v/swanlab-mcp?color=c4f042&labelColor=black&style=flat-square
[pypi-version-shield-link]: https://pypi.org/project/swanlab-mcp/
| text/markdown | null | CaddiesNew <nexisato0810@gmail.com> | null | null | MIT | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"fastmcp>=2.14.4",
"pandas",
"pydantic-settings>=2.0.0",
"python-dotenv>=1.2.1",
"swanlab"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T12:56:21.257012 | swanlab_mcp-0.0.2.tar.gz | 29,016 | f2/3e/6306827d764fc094c41767d64b15e38b0363b69178a2e226de0b94b1bfff/swanlab_mcp-0.0.2.tar.gz | source | sdist | null | false | 4595b6328339e5d7b8e593df0818c1b8 | 35d1102fb4fc38d68ece1508698716d4adac529779449c3688b3acf4fae97ca7 | f23e6306827d764fc094c41767d64b15e38b0363b69178a2e226de0b94b1bfff | null | [
"LICENSE"
] | 252 |
2.4 | busysloths-mlox | 0.1.1.post68 | Accelerate your ML journey—deploy production-ready MLOps in minutes, not months. | [](Logo)
<p align="center">
<strong>
Accelerate your ML journey—deploy production-ready MLOps in minutes, not months.
</strong>
</p>
Tired of tangled configs, YAML jungles, and broken ML pipelines? So were we.
MLOX gives you a calm, streamlined way to deploy, monitor, and maintain production-grade MLOps infrastructure—without rushing.
It’s for engineers who prefer thoughtful systems over chaos. Powered by sloths. Backed by open source.
<p align="center">
<a href="https://qlty.sh/gh/BusySloths/projects/mlox" target="_blank"><img src="https://qlty.sh/gh/BusySloths/projects/mlox/maintainability.svg" alt="Maintainability" /></a>
<a href="https://qlty.sh/gh/BusySloths/projects/mlox" target="_blank"><img src="https://qlty.sh/gh/BusySloths/projects/mlox/coverage.svg" alt="Code Coverage" /></a>
<a href="https://github.com/BusySloths/mlox/issues" target="_blank">
<img alt="GitHub Issues or Pull Requests" src="https://img.shields.io/github/issues/busysloths/mlox"></a>
<a href="https://github.com/BusySloths/mlox/discussions" target="_blank">
<img alt="GitHub Discussions" src="https://img.shields.io/github/discussions/busysloths/mlox"></a>
<a href="https://drive.google.com/file/d/1Y368yXcaQt1dJ6riOCzI7-pSQBnJjyEP/view?usp=sharing">
<img src="https://img.shields.io/badge/Slides-State_of_the_Union-9cf" alt="Slides: State of the Union" />
</a>
</p>
## ATTENTION
MLOX is still in a very early development phase. If you would like to contribute in any capacity, we would love to hear from you at `contact[at]mlox.org`.
## What can you do with MLOX?
### 📑 Want the big picture?
Check out our **[MLOX – State of the Union (Sept 2025)](https://drive.google.com/file/d/1Y368yXcaQt1dJ6riOCzI7-pSQBnJjyEP/view?usp=sharing)** —
a short slide overview of what MLOX is, what problem it solves, and where it’s heading.
### Infrastructure
- Manage servers: add, remove, tag, and name.
- Choose your runtime: Native, Docker, or Kubernetes.
- Spin up Kubernetes: single node or multi‑node clusters.
### Services
- Install, update, and remove services without fuss.
- Centralized secrets and configuration, ready to use.
- Secure Docker services: MLflow, Airflow, LiteLLM, Ollama, InfluxDB, Redis, and more.
- Kubernetes add‑ons: Dashboard, Helm, Headlamp.
- Import GitHub repositories — public or private — with ease.
- Use GCP integrations in your code:
- BigQuery
- Secret Manager
- Cloud Storage
- Sheets
## Unnecessary Long Introduction
Machine Learning (ML) and Artificial Intelligence (AI) are revolutionizing businesses and industries. Despite its importance, many companies struggle to go from ML/AI prototype to production.
ML/AI systems consist of eight non-trivial sub-problems: data collection, data processing, feature engineering, data labeling, model design, model training and optimization, endpoint deployment, and endpoint monitoring. Each of these steps requires specialized expert knowledge and specialized software.
MLOps, short for **Machine Learning Operations,** is a paradigm that aims to tackle those problems and deploy and maintain machine learning models in production reliably and efficiently. The word is a compound of "machine learning" and the continuous delivery practice of DevOps in the software field.
Cloud providers such as Google Cloud Platform or Amazon AWS offer a wide range of solutions for each of the MLOps steps. However, these solutions are complex, and costs are notoriously hard to control on these platforms and prohibitively high for individuals and small businesses such as startups and SMBs. For example, a common platform for data ingestion is Google Cloud Composer, whose monthly base rate is no less than 450 Euro for a meager 2GB RAM VPS. Solutions for model endpoint hosting are often worse and frequently cost thousands of euros per month (cf. Databricks).
Interestingly, the basis of many cloud provider MLOps solutions is widely available open source software (e.g. Google Cloud Composer is based on Apache Airflow). However, these are complex software packages whose setup, deployment, and maintenance are non-trivial tasks.
This is where the MLOX project comes in. The goal of MLOX is four-fold:
MLOX is for everyone — individuals, startups, and small teams.
1. [Infrastructure] MLOX provides an easy-to-use Web UI, TUI, and CLI to securely deploy, maintain, and monitor complete on‑premise MLOps infrastructures built from open‑source components and without vendor lock‑in.
2. [Code] Use the MLOX PyPI package to connect your code to the infrastructure — ready-made integration helpers, SDK clients, and example snippets for common tasks.
3. [Processes] MLOX provides fully-functional templates for dealing with data from ingestion, transformation, storing, model building, up until serving.
4. [Lifecycle Management] Provide initial tooling to manage the lifecycle of services — migrate, upgrade, export, and decommission parts of your MLOps infrastructure*.
*: planned for future releases
More Links:
1. [Wikipedia](https://en.wikipedia.org/wiki/MLOps)
2. [Databricks](https://www.databricks.com/glossary/mlops)
3. [Continuous Delivery for Machine Learning](https://martinfowler.com/articles/cd4ml.html)
## Contributing
### Sloth-Friendly Setup
Easing into MLOX should feel like a lazy stretch on a sunny branch:
1. Install [Task](https://taskfile.dev/installation/) – our go-powered task runner.
2. Clone this repository.
3. Mosey into the project and run:
```bash
task first:steps
```
This unhurried command crafts a conda environment and gathers every dependency for you.
For a more comprehensive guide on how to install and run the show, please have a look at our
**[Sloth-paced Guide to Installation Enlightenment](https://github.com/BusySloths/mlox/blob/main/docs/INSTALLATION.md)**.
Once you're comfortably set up, there are many ways to contribute, and they are not limited to writing code. We welcome all contributions such as:
- [Bug reports](https://github.com/BusySloths/mlox/issues/new/choose)
- [Documentation improvements](https://github.com/BusySloths/mlox/issues/new/choose)
- [Enhancement suggestions](https://github.com/BusySloths/mlox/issues/new/choose)
- [Feature requests](https://github.com/BusySloths/mlox/issues/new/choose)
- [Expanding the tutorials and use case examples](https://github.com/BusySloths/mlox/issues/new/choose)
Please see our [Contributing Guide](https://github.com/BusySloths/mlox/blob/main/CONTRIBUTING.md) for details.
### Project Organization
We use GitHub Projects, Milestones, and Issues to organize our development workflow:
- **[GitHub Projects](https://github.com/BusySloths/mlox/projects)**: High-level functional areas and strategic initiatives
- **[Milestones](https://github.com/BusySloths/mlox/milestones)**: Release planning and goal tracking
- **[Issues](https://github.com/BusySloths/mlox/issues)**: Specific features, bugs, and tasks
📚 **Documentation:**
- [GitHub Project Guide](https://github.com/BusySloths/mlox/blob/main/docs/GITHUB_PROJECT.md) - Understanding our project organization
- [Project Planning Guide](https://github.com/BusySloths/mlox/blob/main/docs/PROJECT_PLANNING.md) - How to create and manage projects
- [Labels Guide](https://github.com/BusySloths/mlox/blob/main/docs/LABELS.md) - Our issue categorization system
## Big Thanks to our Sponsors
MLOX is proudly funded by the following organizations:
<img src="https://github.com/BusySloths/mlox/blob/main/mlox/resources/BMFTR_logo.jpg?raw=true" alt="BMFTR" width="420px"/>
## Supporters
We would not be here without the generous support of the following people and organizations:
<p align="center">
<img src="https://github.com/BusySloths/mlox/blob/main/mlox/resources/PrototypeFund_logo_light.png?raw=true" alt="PrototypeFund" width="380px"/>
<img src="https://github.com/BusySloths/mlox/blob/main/mlox/resources/PrototypeFund_logo_dark.png?raw=true" alt="PrototypeFund" width="380px"/>
</p>
## License
MLOX is open-source and intended to be a community effort, and it wouldn't be possible without your support and enthusiasm.
It is distributed under the terms of the MIT license. Any contribution made to this project will be subject to the same provisions.
## Join Us
We are looking for nice people who are invested in the problem we are trying to solve.
| text/markdown | null | drbusysloth <contact@mlox.org> | null | null | MIT License
Copyright (c) 2024 nicococo
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | Infrastructure, Server, Service, Dashboard, Opinionated, MLOps | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development",
"Topic :: System :: Distributed Computing",
"Topic :: Internet",
"Topic :: Database",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Indepe... | [] | null | null | <3.13,>=3.11 | [] | [] | [] | [
"dacite==1.9.2",
"pyyaml==6.0.2",
"fabric==3.2.2",
"paramiko==3.4.1",
"cryptography==43.0.1",
"passlib==1.7.4",
"typer==0.17.4",
"grpcio==1.73.1",
"kafka-python-ng==2.2.3",
"bcrypt>=5.0.0",
"pandas>=2.2; extra == \"gcp\"",
"gspread==6.2.1; extra == \"gcp\"",
"pandas-gbq==0.29.2; extra == \"g... | [] | [] | [] | [
"Homepage, https://busysloths.github.io/mlox/mlox.html",
"Tracker, https://github.com/busysloths/mlox/issues",
"Source, https://github.com/busysloths/mlox",
"Examples, https://github.com/busysloths/mlox"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:55:25.788537 | busysloths_mlox-0.1.1.post68.tar.gz | 4,058,431 | de/b9/55e45deddb28b13655b076be1ec7c558c263d83bcf496a48218770b0d4e8/busysloths_mlox-0.1.1.post68.tar.gz | source | sdist | null | false | 09c86f05e8def9382e8bbb3af1e572f3 | ea761e6ab6b9bb2069f9bbb3ffa966268af12fb8449bbd71b4335ec76cc01945 | deb955e45deddb28b13655b076be1ec7c558c263d83bcf496a48218770b0d4e8 | null | [
"LICENSE"
] | 242 |
2.4 | iyzipay | 1.0.46 | iyzipay api python client | # iyzipay-python
You can sign up for an iyzico account at https://iyzico.com
# Requirements
Python 3.6+
### Deprecation Notes
- Python 2.7 will not be maintained past 2020. As iyzico, we will drop support for that Python version in March 2020. If you have any questions, please open an issue on GitHub or contact us at integration@iyzico.com.
- Support for Python 3.2, 3.3, 3.4 and 3.5 was dropped after v1.0.37.
# Installation
### PyPI
You can install the bindings via [PyPI](https://pypi.python.org). Run the following command:
```bash
pip install iyzipay
```
Or:
```bash
easy_install iyzipay
```
### Manual Installation
If you do not wish to use pip, you can download the [latest release](https://github.com/iyzico/iyzipay-python/releases). Then, to use the bindings, import iyzipay package.
```python
import iyzipay
```
# Usage
```python
options = {
    'api_key': 'your api key',
    'secret_key': 'your secret key',
    'base_url': 'sandbox-api.iyzipay.com'
}

payment_card = {
    'cardHolderName': 'John Doe',
    'cardNumber': '5528790000000008',
    'expireMonth': '12',
    'expireYear': '2030',
    'cvc': '123',
    'registerCard': '0'
}

buyer = {
    'id': 'BY789',
    'name': 'John',
    'surname': 'Doe',
    'gsmNumber': '+905350000000',
    'email': 'email@email.com',
    'identityNumber': '74300864791',
    'lastLoginDate': '2015-10-05 12:43:35',
    'registrationDate': '2013-04-21 15:12:09',
    'registrationAddress': 'Nidakule Göztepe, Merdivenköy Mah. Bora Sok. No:1',
    'ip': '85.34.78.112',
    'city': 'Istanbul',
    'country': 'Turkey',
    'zipCode': '34732'
}

address = {
    'contactName': 'Jane Doe',
    'city': 'Istanbul',
    'country': 'Turkey',
    'address': 'Nidakule Göztepe, Merdivenköy Mah. Bora Sok. No:1',
    'zipCode': '34732'
}

basket_items = [
    {
        'id': 'BI101',
        'name': 'Binocular',
        'category1': 'Collectibles',
        'category2': 'Accessories',
        'itemType': 'PHYSICAL',
        'price': '0.3'
    },
    {
        'id': 'BI102',
        'name': 'Game code',
        'category1': 'Game',
        'category2': 'Online Game Items',
        'itemType': 'VIRTUAL',
        'price': '0.5'
    },
    {
        'id': 'BI103',
        'name': 'Usb',
        'category1': 'Electronics',
        'category2': 'Usb / Cable',
        'itemType': 'PHYSICAL',
        'price': '0.2'
    }
]

request = {
    'locale': 'tr',
    'conversationId': '123456789',
    'price': '1',
    'paidPrice': '1.2',
    'currency': 'TRY',
    'installment': '1',
    'basketId': 'B67832',
    'paymentChannel': 'WEB',
    'paymentGroup': 'PRODUCT',
    'paymentCard': payment_card,
    'buyer': buyer,
    'shippingAddress': address,
    'billingAddress': address,
    'basketItems': basket_items
}

payment = iyzipay.Payment().create(request, options)
```
See other samples under samples directory.
### Mock test cards
Test cards that can be used to simulate a *successful* payment:
Card Number | Bank | Card Type
----------- | ---- | ---------
5890040000000016 | Akbank | Master Card (Debit)
5526080000000006 | Akbank | Master Card (Credit)
4766620000000001 | Denizbank | Visa (Debit)
4603450000000000 | Denizbank | Visa (Credit)
4729150000000005 | Denizbank Bonus | Visa (Credit)
4987490000000002 | Finansbank | Visa (Debit)
5311570000000005 | Finansbank | Master Card (Credit)
9792020000000001 | Finansbank | Troy (Debit)
9792030000000000 | Finansbank | Troy (Credit)
5170410000000004 | Garanti Bankası | Master Card (Debit)
5400360000000003 | Garanti Bankası | Master Card (Credit)
374427000000003 | Garanti Bankası | American Express
4475050000000003 | Halkbank | Visa (Debit)
5528790000000008 | Halkbank | Master Card (Credit)
4059030000000009 | HSBC Bank | Visa (Debit)
5504720000000003 | HSBC Bank | Master Card (Credit)
5892830000000000 | Türkiye İş Bankası | Master Card (Debit)
4543590000000006 | Türkiye İş Bankası | Visa (Credit)
4910050000000006 | Vakıfbank | Visa (Debit)
4157920000000002 | Vakıfbank | Visa (Credit)
5168880000000002 | Yapı ve Kredi Bankası | Master Card (Debit)
5451030000000000 | Yapı ve Kredi Bankası | Master Card (Credit)
*Cross border* test cards:
Card Number | Country
----------- | -------
4054180000000007 | Non-Turkish (Debit)
5400010000000004 | Non-Turkish (Credit)
Test cards to get specific *error* codes:
Card Number | Description
----------- | -----------
5406670000000009 | Success but cannot be cancelled, refund or post auth
4111111111111129 | Not sufficient funds
4129111111111111 | Do not honour
4128111111111112 | Invalid transaction
4127111111111113 | Lost card
4126111111111114 | Stolen card
4125111111111115 | Expired card
4124111111111116 | Invalid cvc2
4123111111111117 | Not permitted to card holder
4122111111111118 | Not permitted to terminal
4121111111111119 | Fraud suspect
4120111111111110 | Pickup card
4130111111111118 | General error
4131111111111117 | Success but mdStatus is 0
4141111111111115 | Success but mdStatus is 4
4151111111111112 | 3dsecure initialize failed
| text/markdown | iyzico | iyzico-ci@iyzico.com | null | null | MIT | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pyt... | [] | https://github.com/iyzico/iyzipay-python | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T12:55:23.895256 | iyzipay-1.0.46.tar.gz | 25,778 | 98/28/8dd2eab09b89c465f597c3ee66d0ad8c94c186c78a9d730b0a74ad9aecc5/iyzipay-1.0.46.tar.gz | source | sdist | null | false | 530e606d3d2a0e00571bba1a7a43c6c2 | 72daf15a24fbabd4916ea4d8228d46756cf08fe526de25a8ace5708e1413f3c3 | 98288dd2eab09b89c465f597c3ee66d0ad8c94c186c78a9d730b0a74ad9aecc5 | null | [
"LICENSE"
] | 1,679 |
2.4 | corpo | 0.2.0 | Form and govern Wyoming DAO LLCs on Solana | # corpo CLI
Agent-facing CLI for forming and governing Wyoming DAO LLCs on Solana.
## Install
```bash
uvx corpo # run directly
pip install corpo # or install
```
## Quick Start
```bash
corpo init # generate keypair + config
corpo status # show identity & realm context
corpo realms # list all realms
corpo proposals --governance <addr>
corpo vote --proposal <addr> --proposal-owner-record <addr> --choice approve
```
## Formation (Phase 2)
```bash
corpo form draft --name "MyDAO LLC" --members 3
corpo form file --draft-id <id>
corpo form status --filing-id <id>
corpo quote --amount 500
corpo pay --quote-id <id>
```
## Config
`~/.corpo/config.toml`:
```toml
[identity]
keypair = "~/.corpo/keypair.json"
[network]
api_url = "https://api.corpo.dev"
rpc_url = "https://api.devnet.solana.com"
program_id = "GTesTBiEWE32WHXXE2S4XbZvA5CrEc4xs6ZgRe895dP"
[defaults]
realm = ""
governance = ""
governing_token_mint = ""
```
Resolution order: CLI flag > env var (`CORPO_*`) > config.toml > built-in default.
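That precedence amounts to the following sketch (a hypothetical helper for illustration, not corpo's actual code):

```python
import os

def resolve(key: str, cli_value, config: dict, default):
    """Resolve a setting: CLI flag > CORPO_* env var > config.toml > default."""
    if cli_value is not None:
        return cli_value
    env_value = os.environ.get(f"CORPO_{key.upper()}")
    if env_value is not None:
        return env_value
    return config.get(key, default)
```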
| text/markdown | null | Corpo LLC <dev@corpo.ai> | null | null | null | dao, governance, lao, llc, solana, wyoming | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.1",
"httpx>=0.27",
"solders>=0.21",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://corpo.ai",
"Repository, https://github.com/corpo-ai/corpo"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:55:13.526836 | corpo-0.2.0.tar.gz | 33,360 | c7/69/cce908abe8dcb55be46c66f60cbc5948cfd51f2462ae885d540fac8f6669/corpo-0.2.0.tar.gz | source | sdist | null | false | a23761d86d06de8e726a98e26749014d | 12709d91c08e7c02e7e4ecbd6677f44aae42b2afec3aa0306bead37fcff43593 | c769cce908abe8dcb55be46c66f60cbc5948cfd51f2462ae885d540fac8f6669 | MIT | [] | 277 |
2.4 | atlassian-cli | 0.5.8 | Fast CLI tools for Atlassian Cloud (Confluence + Jira) — optimized for AI agents | <p align="center">
<h1 align="center">atlassian-cli</h1>
<p align="center">
Fast CLI tools for Atlassian Cloud — built for AI agents, loved by humans.
</p>
</p>
<p align="center">
<a href="https://github.com/catapultcx/atlassian-cli/actions/workflows/ci.yml"><img src="https://github.com/catapultcx/atlassian-cli/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
<a href="https://pypi.org/project/atlassian-cli/"><img src="https://img.shields.io/pypi/v/atlassian-cli" alt="PyPI"></a>
<a href="https://pypi.org/project/atlassian-cli/"><img src="https://img.shields.io/pypi/pyversions/atlassian-cli" alt="Python"></a>
<a href="https://github.com/catapultcx/atlassian-cli/blob/main/LICENSE"><img src="https://img.shields.io/github/license/catapultcx/atlassian-cli" alt="License"></a>
</p>
---
Two CLI tools — `confluence` and `jira` — that talk directly to Atlassian Cloud REST APIs. Zero bloat, one dependency (`requests`), deterministic output that AI agents parse in a single shot.
## Install
```bash
pip install atlassian-cli
```
Or from source:
```bash
pip install git+https://github.com/catapultcx/atlassian-cli.git
```
## Setup
Create a `.env` file (or export environment variables):
```bash
ATLASSIAN_URL=https://your-site.atlassian.net
ATLASSIAN_EMAIL=you@example.com
ATLASSIAN_TOKEN=your-api-token
```
Get your API token at https://id.atlassian.com/manage-profile/security/api-tokens
> Legacy `CONFLUENCE_URL` / `CONFLUENCE_EMAIL` / `CONFLUENCE_TOKEN` env vars are also supported.
## Confluence CLI
Manages Confluence pages as local JSON files in ADF (Atlassian Document Format). No markdown — ADF preserves every macro, panel, and table perfectly.
```bash
# Download a page
confluence get 9268920323
# Upload local edits back
confluence put 9268920323
confluence put 9268920323 --force # skip version check
# Compare local vs remote
confluence diff 9268920323
# Bulk-download an entire space (parallel, version-cached)
confluence sync POL
confluence sync COMPLY --workers 20 --force
# Search local page index (instant, no API call)
confluence search "risk assessment"
# Rebuild the page index
confluence index
confluence index --space POL --space COMPLY
```
### How sync works
`sync` downloads every page in a space using parallel workers. It caches version numbers locally — subsequent syncs only fetch pages that changed. A full space of 500+ pages takes seconds.
```
pages/
  POL/
    9268920323.json        # ADF body
    9268920323.meta.json   # title, version, timestamps
  COMPLY/
    5227515611.json
    5227515611.meta.json
  page-index.json          # searchable index
```
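The caching step boils down to comparing remote version numbers against a local cache and fetching only the stale pages. A rough sketch with assumed helper names (`download_page` and the cache file location are illustrative, not the tool's internals):

```python
import json
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def download_page(page: dict) -> None:
    ...  # assumed helper: fetch the ADF body + metadata and write both JSON files

def sync_space(space: str, remote_pages: list[dict], workers: int = 10) -> None:
    """Fetch only pages whose remote version differs from the cached one."""
    cache_file = Path("pages") / space / ".versions.json"  # hypothetical cache location
    cache = json.loads(cache_file.read_text()) if cache_file.exists() else {}

    stale = [p for p in remote_pages if cache.get(p["id"]) != p["version"]]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(download_page, stale))  # parallel workers

    cache.update({p["id"]: p["version"] for p in stale})
    cache_file.write_text(json.dumps(cache))
```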
## Jira CLI
### Issues
Full CRUD on Jira issues via REST API v3.
```bash
# Get issue details
jira issue get ISMS-42
# Create issues
jira issue create PROJ Task "Fix the login bug"
jira issue create PROJ Story "User auth" --description "As a user..." --labels security urgent
jira issue create PROJ Sub-task "Write tests" --parent PROJ-100
# Update fields
jira issue update ISMS-42 --summary "New title"
jira issue update ISMS-42 --labels risk compliance
jira issue update ISMS-42 --fields '{"priority": {"name": "High"}}'
# Delete
jira issue delete ISMS-42
# Search with JQL
jira issue search "project = ISMS AND status = Open"
jira issue search "assignee = currentUser() ORDER BY updated DESC" --max 20
# Transitions
jira issue transition ISMS-42 "In Progress"
jira issue transition ISMS-42 Done
# Comments
jira issue comment ISMS-42 "Fixed in v2.1"
jira issue comments ISMS-42
```
### Assets (JSM)
CRUD for Jira Service Management Assets via the Assets REST API v1.
```bash
# Browse schemas and types
jira assets schemas
jira assets schema 1
jira assets types 1
jira assets type 5
jira assets attrs 5
# Search with AQL
jira assets search "objectType = Server"
# CRUD objects
jira assets get 123
jira assets create 5 Name=srv01 IP=10.0.0.1
jira assets update 123 Name=srv02
jira assets delete 123
# Create new object types
jira assets type-create 1 "Network Device" --description "Switches and routers"
```
## `--json` flag
Both CLIs accept a global `--json` flag that switches all output to machine-readable JSON. Perfect for piping into `jq` or parsing from code.
```bash
# Text mode (default)
$ confluence get 9268920323
OK Artificial Intelligence Policy (v12) -> pages/POL/9268920323.json
# JSON mode
$ confluence --json get 9268920323
{"status":"ok","message":"Artificial Intelligence Policy (v12) -> pages/POL/9268920323.json"}
```
## Output format
All commands emit status-prefixed lines for easy parsing:
| Prefix | Meaning |
|--------|---------|
| `OK` | Success |
| `GET` | Page downloaded |
| `SKIP` | Already up-to-date |
| `ERR` | Error |
| `DONE` | Batch complete |
## Architecture
```
src/atlassian_cli/
  config.py         Shared auth, .env parsing, session factory
  http.py           API helpers: get/post/put/delete + error handling
  output.py         Text & JSON output formatting
  confluence.py     Confluence CLI (v2 API, ADF)
  jira.py           Jira CLI entry point (subparsers)
  jira_issues.py    Jira issue commands (v3 API)
  jira_assets.py    Jira Assets commands (Assets v1 API)
```
**APIs used:**
- Confluence Cloud REST API v2 (`/wiki/api/v2/`)
- Jira Cloud REST API v3 (`/rest/api/3/`)
- Jira Assets REST API v1 (`api.atlassian.com/jsm/assets/workspace/{id}/v1`)
## Development
```bash
git clone https://github.com/catapultcx/atlassian-cli.git
cd atlassian-cli
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
pytest
ruff check src/ tests/
```
## License
MIT
| text/markdown | null | Alex Fishlock <alex.fishlock@catapult.cx> | null | null | null | atlassian, confluence, jira, cli, claude, ai | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"requests>=2.28.0",
"atlas-doc-parser>=1.0.0",
"pytest>=7.0; extra == \"dev\"",
"responses>=0.23.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/catapultcx/atlassian-cli",
"Repository, https://github.com/catapultcx/atlassian-cli",
"Issues, https://github.com/catapultcx/atlassian-cli/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:54:49.044797 | atlassian_cli-0.5.8.tar.gz | 32,996 | ad/d6/716a3b1f195fb7bc268981b47382aee05f20d601cefbb625b5e423d0176b/atlassian_cli-0.5.8.tar.gz | source | sdist | null | false | 987503d8cd92c74c82d66399587926c3 | d931f8826951013ef8613c3486365ba1b07de66b2e6f1fa6c13d4ff5ed2a40a4 | add6716a3b1f195fb7bc268981b47382aee05f20d601cefbb625b5e423d0176b | MIT | [
"LICENSE"
] | 255 |
2.4 | datasette-showboat | 0.1a1 | Datasette plugin for SHOWBOAT_REMOTE_URL | # datasette-showboat
[](https://pypi.org/project/datasette-showboat/)
[](https://github.com/simonw/datasette-showboat/releases)
[](https://github.com/simonw/datasette-showboat/actions/workflows/test.yml)
[](https://github.com/simonw/datasette-showboat/blob/main/LICENSE)
Datasette plugin that provides a remote viewer for [Showboat](https://github.com/simonw/showboat) documents. It receives streaming document chunks over HTTP and displays them in a live-updating web interface.
See [this blog post](https://simonwillison.net/2026/Feb/17/chartroom-and-datasette-showboat/#datasette-showboat) for background on this project.
## Installation
Install this plugin in the same environment as Datasette.
```bash
datasette install datasette-showboat
```
## Usage
Once installed, the plugin adds a `/-/showboat` page to your Datasette instance listing all received documents, and a `/-/showboat/receive` endpoint for ingesting chunks.
### Sending documents
Set the `SHOWBOAT_REMOTE_URL` environment variable to point at your Datasette instance:
```bash
export SHOWBOAT_REMOTE_URL="https://your-datasette-instance/-/showboat/receive"
```
The `/-/showboat` page will display the correct URL for your instance including the hostname.
### Permissions
Viewing showboat documents requires the `showboat` permission. By default this is **denied** to anonymous users — only the root user (when Datasette is started with `--root`) has access automatically.
To grant access to specific users, add to your `datasette.yaml`:
```yaml
permissions:
showboat:
id: your-username
```
Or to allow all authenticated users:
```yaml
permissions:
showboat:
id: "*"
```
The receive endpoint (`/-/showboat/receive`) does not require the `showboat` permission — it uses token authentication instead (see below).
### Token authentication
To protect the receive endpoint, configure a secret token in your `datasette.yaml` (or `metadata.yaml`):
```yaml
plugins:
datasette-showboat:
token: your-secret-token
```
When a token is configured, all requests to `/-/showboat/receive` must include it as a query parameter:
```bash
export SHOWBOAT_REMOTE_URL="https://your-datasette-instance/-/showboat/receive?token=your-secret-token"
```
Without a configured token, the receive endpoint accepts all POST requests.
### Custom database
By default chunks are stored in Datasette's internal database. To use a named database instead:
```yaml
plugins:
datasette-showboat:
database: my_database
```
## Development
To set up this plugin locally, first checkout the code. You can confirm it is available like this:
```bash
cd datasette-showboat
# Confirm the plugin is visible
uv run datasette plugins
```
To run the tests:
```bash
uv run pytest
```
| text/markdown | Simon Willison | null | null | null | null | null | [
"Framework :: Datasette"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"datasette>=1.0a24"
] | [] | [] | [] | [
"Homepage, https://github.com/simonw/datasette-showboat",
"Changelog, https://github.com/simonw/datasette-showboat/releases",
"Issues, https://github.com/simonw/datasette-showboat/issues",
"CI, https://github.com/simonw/datasette-showboat/actions"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:53:56.557459 | datasette_showboat-0.1a1.tar.gz | 36,574 | 5a/8d/53c9a390b84bb1e33a508aee88b1c528459d9b5b602a34b2db09ec25b3ff/datasette_showboat-0.1a1.tar.gz | source | sdist | null | false | 0a1a79a8c9b14f8e855e25f8410ddaeb | 7e5f70d2df5d94988527cc711fa283b63adb2ab3ee1a3c5087f1b77f17805e99 | 5a8d53c9a390b84bb1e33a508aee88b1c528459d9b5b602a34b2db09ec25b3ff | Apache-2.0 | [
"LICENSE"
] | 236 |
2.4 | unitelabs-sila | 0.7.5 | An un-opinionated SiLA 2 library. | # Unitelabs SiLA Python Library
A Python library for creating SiLA 2 clients and servers. This flexible and unopinionated library gives you everything needed to create a SiLA 2 1.1 compliant Python application. It adheres to the [SiLA 2 specification](https://sila2.gitlab.io/sila_base/) and is used by the [UniteLabs CDK](https://gitlab.com/unitelabs/cdk/python-cdk) to enable rapid development of cloud-native SiLA Servers with a code-first approach.
## Getting Started
### Prerequisites
Ensure you have Python 3.9+ installed. You can install Python from [python.org](https://www.python.org/downloads/).
### Quickstart
To get started quickly with your first connector, we recommend using our [UniteLabs CDK](https://gitlab.com/unitelabs/cdk/python-cdk). Use [Cookiecutter](https://www.cookiecutter.io) to create your project based on our [Connector Factory](https://gitlab.com/unitelabs/cdk/connector-factory) starter template:
```bash
cookiecutter git@gitlab.com:unitelabs/cdk/connector-factory.git
```
### Installation
Install the latest version of the library into your Python project:
```bash
pip install unitelabs-sila
```
## Usage
To start using the SiLA Python library in your project:
1. Import and configure your SiLA server instance:
```python
import asyncio
from sila.server import Server
from your_project.features import your_feature

async def main():
    server = Server({"port": 50000})
    server.add_feature(your_feature)
    await server.start()

asyncio.run(main())
```
2. To implement a custom SiLA Feature, create a feature definition following the SiLA2 specification:
```python
from sila.server import Feature, UnobservableCommand
your_feature = Feature(...)
your_method = UnobservableCommand(...)
your_method.add_to_feature(your_feature)
```
3. Run your server:
```bash
$ python your_script.py
```
> Important: Without implementing the required SiLA Service Feature, your SiLA Server will not be fully compliant with the standard. For easier compliance, consider using the [UniteLabs CDK](https://gitlab.com/unitelabs/cdk/python-cdk), which handles this automatically.
## Contribute
Submit and share your work!
https://hub.unitelabs.io
We encourage you to submit feature requests and bug reports through the GitLab issue system. Please include a clear description of the issue or feature you are proposing. If you have further questions, issues, or suggestions for improvement, don't hesitate to reach out to us at [developers@unitelabs.io](mailto:developers+sila@unitelabs.io).
Join the conversation! Stay up to date with the latest developments by joining the Python channel in the [SiLA Slack](https://sila-standard.org/slack).
## License
Distributed under the MIT License. See [MIT license](LICENSE) for more information.
| text/markdown | null | UniteLabs <developers+sila@unitelabs.io> | null | null | null | SiLA 2, automation, connectivity, laboratory | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language... | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"grpcio>=1.70.0",
"jsonschema~=4.25",
"typing-extensions",
"xmlschema~=4.2",
"zeroconf>=0.147.0",
"commitizen; extra == \"dev\"",
"cryptography>=43.0.3; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"ruff; extra == \"dev\"",
"unitelabs-jsondocs[clog]~=0.4.3; extra == \"docs\"",
"pytest; ext... | [] | [] | [] | [
"homepage, https://sila-standard.com",
"repository, https://gitlab.com/unitelabs/sila2/sila-python",
"documentation, https://gitlab.com/unitelabs/sila2/sila-python/-/README.md",
"Bug Tracker, https://gitlab.com/unitelabs/sila2/sila-python/-/issues"
] | python-httpx/0.28.1 | 2026-02-18T12:53:52.310165 | unitelabs_sila-0.7.5-py3-none-any.whl | 201,221 | fe/4e/13d14065d66566eab5fd5626e3bd1d4e03051383ddc6ac17d8f2126f0ffb/unitelabs_sila-0.7.5-py3-none-any.whl | py3 | bdist_wheel | null | false | 0c621aecfc80d67d634037e8d5d49431 | c1d1829ccd42bd5b2d7def0378774925a5ed61e754e8ca6aa7b70f0cc86c387b | fe4e13d14065d66566eab5fd5626e3bd1d4e03051383ddc6ac17d8f2126f0ffb | MIT | [
"LICENSE"
] | 308 |
2.4 | claude-nagger | 2.5.0 | Claude Code integrated tool system - hook and convention management CLI | # claude-nagger
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/claude-nagger/)
**Conditional convention injection + automatic re-injection after context compacting** for Claude Code.
Feeds project conventions to Claude Code **only when relevant files are touched** — and re-injects them when the context window compacts.
> **Japanese version is available below.** See [日本語版](#日本語版)
## The Problem
| Problem | Description |
|---------|-------------|
| **Context bloat** | Writing all conventions in CLAUDE.md consumes massive tokens |
| **Convention amnesia** | Context compacting causes conventions to be "forgotten" |
| **Irrelevant information** | CSS conventions are unnecessary when editing models, and vice versa |
## How It Works
```
User: "Fix this CSS"
→ Claude calls Edit tool (*.css)
→ PreToolUse Hook (claude-nagger)
1. Pattern match: "**/*.css" ✓
2. Load & inject matching conventions
→ Claude edits CSS while referencing conventions
```
CLAUDE.md alone cannot achieve **conditional injection via PreToolUse hooks** — claude-nagger can.
## Key Features
### 1. File-pattern conditional injection
Conventions fire **only when matching files are edited**:
```yaml
rules:
- name: "CSS conventions"
patterns: ["**/*.css", "**/*.scss"]
severity: "warn"
message: |
- Use BEM naming convention
- !important is prohibited
```
CSS rules fire only for CSS files. Model rules fire only for model files. No wasted tokens.
### 2. Compact detection & auto re-injection
When Claude Code compacts its context, conventions are silently dropped. claude-nagger detects compacting via `SessionStart[compact]` hook and resets marker files — conventions re-inject on the next tool call. **No configuration required.**
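Conceptually, the hook does something like the following (a minimal sketch, not claude-nagger's actual implementation; the payload field and marker-file naming are assumptions):
```python
from pathlib import Path

# Illustrative only. Assumption: the SessionStart hook receives a "source"
# field that is "compact" after context compaction, and one marker file per
# rule records that the rule was already injected this session.
def on_session_start(payload: dict, marker_dir: Path) -> None:
    if payload.get("source") == "compact":
        for marker in marker_dir.glob("*.injected"):
            marker.unlink()  # forget the "already injected" state

# The next PreToolUse hook finds no marker and re-injects matching rules.
```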
### 3. Token-threshold re-injection
Each rule can define a `token_threshold`. When token count since last injection exceeds the threshold, the convention re-injects — even without compacting:
```yaml
rules:
- name: "Model conventions"
patterns: ["**/models/**/*.py"]
severity: "block"
token_threshold: 35000
message: |
- Field names in snake_case
- Docstrings required
```
### 4. Automatic rule suggestion (suggest-rules)
Starting with an empty convention file? claude-nagger analyzes your actual tool usage and **automatically suggests rules**:
```
Session ends (Stop hook)
→ Analyze hook_input_*.json (file paths, commands)
→ Pattern aggregation + Claude LLM analysis
→ .claude-nagger/suggested_rules.yaml generated
→ Next session: notification with proposals
```
Run manually anytime:
```bash
claude-nagger suggest-rules # Analyze & output suggestions
claude-nagger suggest-rules --min-count 5 --top 5 # Filter by frequency
```
### 5. Subagent-aware convention enforcement
When Claude Code spawns subagents (via the Task tool), conventions are enforced there too — with **per-type override** support:
```yaml
# config.yaml
session_startup:
overrides:
subagent_default: # Applied to ALL subagents
messages:
first_time:
title: "Subagent rules"
main_text: "No out-of-scope edits"
subagent_types: # Per-type overrides (highest priority)
ticket-manager:
messages:
first_time:
title: "Ticket agent rules"
main_text: "Redmine operations only"
Explore:
enabled: false # Skip for Explore subagents
```
claude-nagger tracks subagent lifecycle via `SubagentStart`/`SubagentStop` hooks, resolves the appropriate config override (`base → subagent_default → subagent_types.{type}`), and injects the right conventions — once per subagent, non-blocking after first display.
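The resolution order can be pictured as a layered merge (a minimal sketch, not claude-nagger's actual code; the function name and shallow merge are assumptions for illustration):
```python
# Illustrative sketch of the base → subagent_default → subagent_types.{type}
# resolution described above; later layers win.
def resolve_config(startup: dict, subagent_type: str | None) -> dict:
    layers = [startup]  # base session_startup config
    overrides = startup.get("overrides", {})
    if subagent_type is not None:
        layers.append(overrides.get("subagent_default", {}))
        layers.append(overrides.get("subagent_types", {}).get(subagent_type, {}))
    resolved: dict = {}
    for layer in layers:
        resolved.update(layer)  # shallow merge, kept simple for brevity
    return resolved
```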
### Comparison with similar tools
| Feature | claude-nagger | bmad-context-injection | meridian |
|---------|:---:|:---:|:---:|
| **File-pattern conditional injection** | Yes | Yes | No (session-level) |
| **Command-pattern conditional injection** | Yes | No | No |
| **Compact detection + re-injection** | Yes | No | Yes |
| **Per-rule token-threshold re-injection** | Yes | No | Session-level |
| **Automatic rule suggestion** | Yes | No | No |
| **Subagent-aware enforcement** | Yes | No | No |
| **Distribution** | `pip install` (PyPI) | Copy to project | curl installer |
| **License** | MIT | MIT (unconfirmed) | Not specified |
## Quick Start
```bash
# Install (uv tool recommended)
uv tool install claude-nagger && claude-nagger install-hooks
# or pip
pip install claude-nagger && claude-nagger install-hooks
# Update
uv tool upgrade claude-nagger # or: pip install --upgrade claude-nagger
# Verify
claude-nagger install-hooks --dry-run
```
> **Note**: `uvx claude-nagger install-hooks` is not recommended — uvx runs in a temporary environment and hooks will not work.
<details>
<summary><b>Command not found? (PATH)</b></summary>
```bash
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc && source ~/.bashrc
```
</details>
Then edit `.claude-nagger/*.yaml` to set project-specific conventions:
```
.claude-nagger/
├── config.yaml # Session management & context thresholds
├── file_conventions.yaml # Conventions for file editing
├── command_conventions.yaml # Conventions for command execution
└── suggested_rules.yaml # Auto-generated rule suggestions (by suggest-rules)
```
## Configuration
### file_conventions.yaml
```yaml
rules:
- name: "CSS conventions"
patterns: ["**/*.css", "**/*.scss"]
severity: "warn" # warn | block
token_threshold: 30000 # Optional: re-inject threshold
message: |
- Use BEM naming convention
- !important is prohibited
```
<details>
<summary><b>config.yaml</b></summary>
```yaml
session_startup:
enabled: true
messages:
first_time:
title: "Project Setup"
main_text: "Please review the project conventions"
severity: "block"
# Subagent overrides (base → subagent_default → subagent_types.{type})
overrides:
subagent_default:
messages:
first_time:
title: "Subagent rules"
main_text: "No out-of-scope file edits"
subagent_types:
Explore:
enabled: false
context_management:
reminder_thresholds:
light_warning: 30000
medium_warning: 60000
critical_warning: 100000
```
</details>
<details>
<summary><b>command_conventions.yaml</b></summary>
```yaml
rules:
- name: "Git conventions"
patterns: ["git push*", "git commit*"]
severity: "warn"
token_threshold: 25000
message: |
- Write commit messages in the project language
- Run tests before pushing
```
</details>
## Commands
```bash
claude-nagger install-hooks # Install hooks
claude-nagger install-hooks --dry-run # Preview (no changes)
claude-nagger install-hooks --force # Force overwrite
claude-nagger --version # Version
claude-nagger diagnose # Environment diagnostics
claude-nagger suggest-rules # Suggest rules from usage history
claude-nagger hook <name> # Direct hook execution
claude-nagger match-test --file "path" --pattern "glob" # Test pattern matching
```
## Requirements / Links / License
- **Requires**: Python 3.10+, Claude Code CLI
- [Bug reports](https://github.com/HollySizzle/claude-nagger/issues/new?template=bug_report.yml) (attach `claude-nagger diagnose` output) | [Feature requests](https://github.com/HollySizzle/claude-nagger/issues/new?template=feature_request.yml) | [Discussions](https://github.com/HollySizzle/claude-nagger/discussions)
- [Developer setup](https://github.com/HollySizzle/claude-nagger): `git clone` & `./scripts/install-dev.sh`
- **License**: MIT — See [LICENSE](LICENSE)
---
---
# 日本語版
Claude Codeに**条件付き規約注入 + コンテキスト圧縮後の自動再注入**を提供するフックツール。
関連ファイルが編集された時**だけ**プロジェクト規約をClaude Codeに注入し、コンテキスト圧縮時に自動で再注入します。
## 解決する問題
| 問題 | 説明 |
|------|------|
| **コンテキスト肥大化** | 全規約をCLAUDE.mdに書くとトークン消費が膨大 |
| **規約の忘却** | コンテキスト圧縮(compacting)により規約が「忘れられる」 |
| **無関係な情報** | モデル編集時にCSS規約は不要、逆も然り |
## 動作原理
```
ユーザー: "このCSSを修正して"
→ Claude: Editツール呼び出し (*.css)
→ PreToolUse Hook (claude-nagger)
1. パターン照合: "**/*.css" ✓
2. 対応する規約を読み込み・注入
→ Claude: 規約を参照しながらCSS編集
```
CLAUDE.md単体では実現できない**PreToolUseフックによる条件付き注入**を提供します。
## 主な機能
### 1. ファイルパターン条件付き注入
対象ファイルが編集された時**だけ**規約を注入:
```yaml
rules:
- name: "CSS編集規約"
patterns: ["**/*.css", "**/*.scss"]
severity: "warn"
message: |
- BEM命名規則を使用
- !important は禁止
```
CSS規約はCSS編集時のみ発火。モデル規約はモデル編集時のみ。トークンの無駄がありません。
### 2. compact検知・自動再注入
Claude Codeがコンテキストを圧縮すると規約は暗黙的に失われます。claude-naggerは`SessionStart[compact]`フックで圧縮を検知しマーカーファイルをリセット — 次のツール呼び出しで規約が自動再注入されます。**設定不要。**
### 3. トークン閾値再注入
ルールごとに`token_threshold`を設定可能。前回注入時からのトークン増加量が閾値を超えると、compactなしでも再注入:
```yaml
rules:
- name: "モデル編集規約"
patterns: ["**/models/**/*.py"]
severity: "block"
token_threshold: 35000
message: |
- フィールド名はsnake_case
- 必ずdocstringを記載
```
### 4. 自動規約提案(suggest-rules)
規約ファイルが空の状態からでも、実際のツール使用履歴を分析して**規約候補を自動提案**:
```
セッション終了(Stop hook)
→ hook_input_*.json分析(ファイルパス・コマンド集約)
→ Python統計前処理 + Claude LLM分析
→ .claude-nagger/suggested_rules.yaml 生成
→ 次回セッション開始時に提案内容を通知
```
手動実行も可能:
```bash
claude-nagger suggest-rules # 使用履歴から規約候補を出力
claude-nagger suggest-rules --min-count 5 --top 5 # 出現頻度でフィルタ
```
### 5. subagent対応の規約適用
Claude Codeがsubagent(Taskツール)を起動した場合も、**タイプ別オーバーライド**で規約を自動適用:
```yaml
# config.yaml
session_startup:
overrides:
subagent_default: # 全subagent共通
messages:
first_time:
title: "subagent規約"
main_text: "スコープ外の編集禁止"
subagent_types: # タイプ別オーバーライド(最優先)
ticket-manager:
messages:
first_time:
title: "チケット管理agent規約"
main_text: "Redmine操作のみ"
Explore:
enabled: false # Exploreでは規約表示をスキップ
```
`SubagentStart`/`SubagentStop`フックでsubagentのライフサイクルを追跡し、設定を解決(`base → subagent_default → subagent_types.{type}`)して適切な規約を注入します。初回表示後はノンブロッキング。
### 類似ツールとの比較
| 機能 | claude-nagger | bmad-context-injection | meridian |
|------|:---:|:---:|:---:|
| **ファイルパターン条件付き注入** | Yes | Yes | No(セッション単位) |
| **コマンドパターン条件付き注入** | Yes | No | No |
| **compact検知+再注入** | Yes | No | Yes |
| **ルール単位トークン閾値再注入** | Yes | No | セッション単位 |
| **自動規約提案** | Yes | No | No |
| **subagent対応規約適用** | Yes | No | No |
| **配布方式** | `pip install`(PyPI) | プロジェクトにコピー | curlインストーラ |
| **ライセンス** | MIT | MIT(未確認) | 未指定 |
## クイックスタート
```bash
# インストール(uv tool推奨)
uv tool install claude-nagger && claude-nagger install-hooks
# または pip
pip install claude-nagger && claude-nagger install-hooks
# アップデート
uv tool upgrade claude-nagger # or: pip install --upgrade claude-nagger
# 動作確認
claude-nagger install-hooks --dry-run
```
> **注意**: `uvx claude-nagger install-hooks`は非推奨 — uvxは一時実行のためフックが動作しません。
<details>
<summary><b>コマンドが見つからない場合(PATH設定)</b></summary>
```bash
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc && source ~/.bashrc
```
</details>
`.claude-nagger/*.yaml` を編集してプロジェクト固有の規約を設定:
```
.claude-nagger/
├── config.yaml # セッション管理・コンテキスト閾値
├── file_conventions.yaml # ファイル編集時の規約
├── command_conventions.yaml # コマンド実行時の規約
└── suggested_rules.yaml # 自動生成された規約候補(suggest-rules)
```
## 設定
### file_conventions.yaml
```yaml
rules:
- name: "CSS編集規約"
patterns: ["**/*.css", "**/*.scss"]
severity: "warn" # warn | block
token_threshold: 30000 # 任意: 再注入閾値
message: |
- BEM命名規則を使用
- !important は禁止
```
<details>
<summary><b>config.yaml</b></summary>
```yaml
session_startup:
enabled: true
messages:
first_time:
title: "プロジェクトセットアップ"
main_text: "プロジェクトの規約を確認してください"
severity: "block"
# subagentオーバーライド(base → subagent_default → subagent_types.{type})
overrides:
subagent_default:
messages:
first_time:
title: "subagent規約"
main_text: "スコープ外の編集禁止"
subagent_types:
Explore:
enabled: false
context_management:
reminder_thresholds:
light_warning: 30000
medium_warning: 60000
critical_warning: 100000
```
</details>
<details>
<summary><b>command_conventions.yaml</b></summary>
```yaml
rules:
- name: "Git操作規約"
patterns: ["git push*", "git commit*"]
severity: "warn"
token_threshold: 25000
message: |
- コミットメッセージは日本語で記載
- プッシュ前にテストを実行
```
</details>
## コマンド一覧
```bash
claude-nagger install-hooks # フックインストール
claude-nagger install-hooks --dry-run # プレビュー(変更なし)
claude-nagger install-hooks --force # 強制上書き
claude-nagger --version # バージョン表示
claude-nagger diagnose # 環境診断
claude-nagger suggest-rules # 使用履歴から規約候補を提案
claude-nagger hook <name> # フック直接実行
claude-nagger match-test --file "path" --pattern "glob" # パターンテスト
```
## 要件 / リンク / ライセンス
- **要件**: Python 3.10以上、Claude Code CLI
- [バグ報告](https://github.com/HollySizzle/claude-nagger/issues/new?template=bug_report.yml)(`claude-nagger diagnose` 出力を添付)| [機能リクエスト](https://github.com/HollySizzle/claude-nagger/issues/new?template=feature_request.yml) | [ディスカッション](https://github.com/HollySizzle/claude-nagger/discussions)
- [開発者向け](https://github.com/HollySizzle/claude-nagger): `git clone` & `./scripts/install-dev.sh`
- **ライセンス**: MIT — [LICENSE](LICENSE)
| text/markdown | HollySizzle | null | null | null | null | claude, claude-code, conventions, hooks | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.8.0",
"json5>=0.9.14",
"pytz>=2023.3",
"pyyaml>=6.0",
"questionary>=2.0.0",
"rich>=13.0.0",
"typing-extensions>=4.8.0",
"wcmatch>=10.0",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; ex... | [] | [] | [] | [
"Homepage, https://github.com/HollySizzle/claude-nagger",
"Repository, https://github.com/HollySizzle/claude-nagger"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:53:51.538845 | claude_nagger-2.5.0.tar.gz | 334,034 | 19/d8/41ebbcdfea017c698a25828c7470de19526471e2eba59ba43643ea36dfd7/claude_nagger-2.5.0.tar.gz | source | sdist | null | false | fb444ff7894a6650495c8b69344e2a77 | d7e560106bd450efe292c90fcfda5b2665c9c8136093633bcd6502537e39e74b | 19d841ebbcdfea017c698a25828c7470de19526471e2eba59ba43643ea36dfd7 | MIT | [
"LICENSE"
] | 258 |
2.3 | pylxpweb | 0.9.7 | Python client library for Luxpower/EG4 inverter web monitoring API | # pylxpweb
[](https://github.com/joyfulhouse/pylxpweb/actions/workflows/ci.yml)
[](https://codecov.io/gh/joyfulhouse/pylxpweb)
[](https://badge.fury.io/py/pylxpweb)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
A Python client library for Luxpower/EG4 solar inverters and energy storage systems, providing programmatic access to the Luxpower/EG4 web monitoring API.
## Supported API Endpoints
This library supports multiple regional API endpoints:
- **US (EG4 Electronics)**: `https://monitor.eg4electronics.com` (default)
- **US (Luxpower)**: `https://us.luxpowertek.com`
- **Americas (Luxpower)**: `https://na.luxpowertek.com` (Brazil, Latin America)
- **Europe (Luxpower)**: `https://eu.luxpowertek.com`
- **Asia Pacific (Luxpower)**: `https://sea.luxpowertek.com`
- **Middle East & Africa (Luxpower)**: `https://af.luxpowertek.com`
- **China (Luxpower)**: `https://server.luxpowertek.com`
The base URL is fully configurable to support regional variations and future endpoints.
## Features
- **Complete API Coverage**: Access all inverter, battery, and GridBOSS data
- **Async/Await**: Built with `aiohttp` for efficient async I/O operations
- **Session Management**: Automatic authentication and session renewal
- **Smart Caching**: Configurable caching with TTL to minimize API calls
- **Type Safe**: Comprehensive type hints throughout
- **Error Handling**: Robust error handling with automatic retry and backoff
- **Production Ready**: Based on battle-tested Home Assistant integration
## Supported Devices
- **Inverters**: FlexBOSS21, FlexBOSS18, 18KPV, 12KPV, XP series
- **GridBOSS**: Microgrid interconnection devices (MID)
- **Batteries**: All EG4-compatible battery modules with BMS integration
## Installation
```bash
# From PyPI (recommended)
pip install pylxpweb
# From source (development)
git clone https://github.com/joyfulhouse/pylxpweb.git
cd pylxpweb
uv sync --all-extras --dev
```
## Quick Start
### Basic Usage with Device Objects
```python
import asyncio

from pylxpweb import LuxpowerClient
from pylxpweb.devices.station import Station

async def main():
    # Create client with credentials
    # Default base_url is https://monitor.eg4electronics.com
    async with LuxpowerClient(
        username="your_username",
        password="your_password",
        base_url="https://monitor.eg4electronics.com"  # or us.luxpowertek.com, eu.luxpowertek.com
    ) as client:
        # Load all stations with device hierarchy
        stations = await Station.load_all(client)
        print(f"Found {len(stations)} stations")

        # Work with first station
        station = stations[0]
        print(f"\nStation: {station.name}")

        # Access inverters - all have properly-scaled properties
        for inverter in station.all_inverters:
            await inverter.refresh()  # Fetch latest data
            print(f"\n{inverter.model} {inverter.serial_number}:")

            # All properties return properly-scaled values
            print(f"  PV Power: {inverter.pv_total_power}W")
            print(f"  Battery: {inverter.battery_soc}% @ {inverter.battery_voltage}V")
            print(f"  Grid: {inverter.grid_voltage_r}V @ {inverter.grid_frequency}Hz")
            print(f"  Inverter Power: {inverter.inverter_power}W")
            print(f"  To Grid: {inverter.power_to_grid}W")
            print(f"  To User: {inverter.power_to_user}W")
            print(f"  Temperature: {inverter.inverter_temperature}°C")
            print(f"  Today: {inverter.total_energy_today}kWh")
            print(f"  Lifetime: {inverter.total_energy_lifetime}kWh")

            # Access battery bank if available
            if inverter.battery_bank:
                bank = inverter.battery_bank
                print("\n  Battery Bank:")
                print(f"    Voltage: {bank.voltage}V")
                print(f"    SOC: {bank.soc}%")
                print(f"    Charge Power: {bank.charge_power}W")
                print(f"    Discharge Power: {bank.discharge_power}W")
                print(f"    Capacity: {bank.current_capacity}/{bank.max_capacity} Ah")

                # Individual battery modules
                for battery in bank.batteries:
                    print(f"    Battery {battery.battery_index + 1}:")
                    print(f"      Voltage: {battery.voltage}V")
                    print(f"      Current: {battery.current}A")
                    print(f"      SOC: {battery.soc}%")
                    print(f"      Temp: {battery.max_cell_temp}°C")

        # Access GridBOSS (MID) devices if present
        for group in station.parallel_groups:
            if group.mid_device:
                mid = group.mid_device
                await mid.refresh()
                print(f"\nGridBOSS {mid.serial_number}:")
                print(f"  Grid: {mid.grid_voltage}V @ {mid.grid_frequency}Hz")
                print(f"  Grid Power: {mid.grid_power}W")
                print(f"  UPS Power: {mid.ups_power}W")
                print(f"  Load L1: {mid.load_l1_power}W @ {mid.load_l1_current}A")
                print(f"  Load L2: {mid.load_l2_power}W @ {mid.load_l2_current}A")

asyncio.run(main())
```
### Low-Level API Access
For direct API access without device objects:
```python
async with LuxpowerClient(username, password) as client:
    # Get stations
    plants = await client.api.plants.get_plants()
    plant_id = plants.rows[0].plantId

    # Get devices
    devices = await client.api.devices.get_devices(str(plant_id))

    # Get runtime data for first inverter
    inverter = devices.rows[0]
    serial = inverter.serialNum

    # Fetch data (returns Pydantic models)
    runtime = await client.api.devices.get_inverter_runtime(serial)
    energy = await client.api.devices.get_inverter_energy(serial)

    # NOTE: Raw API returns scaled integers - you must scale manually
    print(f"AC Power: {runtime.pac}W")  # No scaling needed for power
    print(f"Grid Voltage: {runtime.vacr / 10}V")  # Must divide by 10
    print(f"Grid Frequency: {runtime.fac / 100}Hz")  # Must divide by 100
    print(f"Battery Voltage: {runtime.vBat / 10}V")  # Must divide by 10
```
## Advanced Usage
### Regional Endpoints and Custom Session
```python
from aiohttp import ClientSession

async with ClientSession() as session:
    # Choose the appropriate regional endpoint
    # US (Luxpower): https://us.luxpowertek.com
    # EU (Luxpower): https://eu.luxpowertek.com
    # US (EG4): https://monitor.eg4electronics.com
    client = LuxpowerClient(
        username="user",
        password="pass",
        base_url="https://eu.luxpowertek.com",  # EU endpoint example
        verify_ssl=True,
        timeout=30,
        session=session  # Inject external session
    )
    await client.login()
    plants = await client.get_plants()
    await client.close()  # Only closes if we created the session
```
### Control Operations
```python
async with LuxpowerClient(username, password) as client:
    serial = "1234567890"

    # Enable quick charge
    await client.set_quick_charge(serial, enabled=True)

    # Set battery charge limit to 90%
    await client.set_charge_soc_limit(serial, limit=90)

    # Set operating mode to standby
    await client.set_operating_mode(serial, mode="standby")

    # Read current parameters
    params = await client.read_parameters(serial, [21, 22, 23])
    print(f"SOC Limit: {params[0]['value']}%")
```
### Error Handling
```python
from pylxpweb import (
    LuxpowerClient,
    AuthenticationError,
    ConnectionError,
    APIError,
)

try:
    async with LuxpowerClient(username, password) as client:
        runtime = await client.get_inverter_runtime(serial)
except AuthenticationError as e:
    print(f"Login failed: {e}")
except ConnectionError as e:
    print(f"Network error: {e}")
except APIError as e:
    print(f"API error: {e}")
```
## Documentation
- **[API Reference](docs/api/LUXPOWER_API.md)** - Complete API endpoint documentation
- **[Architecture](docs/architecture/)** - System design and patterns *(coming soon)*
- **[Examples](docs/examples/)** - Usage examples and patterns *(coming soon)*
- **[CLAUDE.md](CLAUDE.md)** - Development guidelines for Claude Code
## Development
### Setup Development Environment
```bash
# Clone repository
git clone https://github.com/joyfulhouse/pylxpweb.git
cd pylxpweb
# Install development dependencies
pip install -e ".[dev]"
# Install test dependencies
pip install pytest pytest-asyncio pytest-cov aiohttp
```
### Running Tests
```bash
# Run all tests
uv run pytest tests/
# Run with coverage
uv run pytest tests/ --cov=pylxpweb --cov-report=term-missing
# Run unit tests only
uv run pytest tests/unit/ -v
# Run integration tests (requires credentials in .env)
uv run pytest tests/integration/ -v -m integration
```
### Code Quality
```bash
# Format code
uv run ruff check --fix && uv run ruff format
# Type checking
uv run mypy src/pylxpweb/ --strict
# Lint code
uv run ruff check src/ tests/
```
## Project Structure
```
pylxpweb/
├── docs/ # Documentation
│ ├── api/ # API endpoint documentation
│ │ └── LUXPOWER_API.md # Complete API reference
│ └── luxpower-api.yaml # OpenAPI 3.0 specification
│
├── src/pylxpweb/ # Main package
│ ├── __init__.py # Package exports
│ ├── client.py # LuxpowerClient (async API client)
│ ├── endpoints/ # Endpoint-specific implementations
│ │ ├── devices.py # Device and runtime data
│ │ ├── plants.py # Station/plant management
│ │ ├── control.py # Control operations
│ │ ├── firmware.py # Firmware management
│ │ └── ... # Additional endpoints
│ ├── models.py # Pydantic data models
│ ├── constants.py # Constants and register definitions
│ └── exceptions.py # Custom exception classes
│
├── tests/ # Test suite (90%+ coverage)
│ ├── conftest.py # Pytest fixtures and aiohttp mock server
│ ├── unit/ # Unit tests (136 tests)
│ │ ├── test_client.py # Client tests
│ │ ├── test_models.py # Model tests
│ │ └── test_*.py # Additional unit tests
│ ├── integration/ # Integration tests (requires credentials)
│ │ └── test_live_api.py # Live API integration tests
│ └── samples/ # Sample API responses for testing
│
├── .env.example # Environment variable template
├── .github/ # GitHub Actions workflows
│ ├── workflows/ # CI/CD pipelines
│ └── dependabot.yml # Dependency updates
├── CLAUDE.md # Claude Code development guidelines
├── README.md # This file
└── pyproject.toml # Package configuration (uv-based)
```
## Data Scaling
### Automatic Scaling with Device Properties (Recommended)
**Device objects automatically handle all scaling** - just use the properties:
```python
# ✅ RECOMMENDED: Use device properties (automatically scaled)
await inverter.refresh()
voltage = inverter.grid_voltage_r # Returns 241.8 (already scaled)
frequency = inverter.grid_frequency # Returns 59.98 (already scaled)
power = inverter.pv_total_power # Returns 1500 (already scaled)
```
All device classes (`BaseInverter`, `MIDDevice`, `Battery`, `BatteryBank`, `ParallelGroup`) provide properly-scaled properties. **You never need to manually scale values when using device objects.**
### Manual Scaling for Raw API Data
If you use the low-level API directly (not recommended for most users), you must scale values manually:
| Data Type | Scaling | Raw API | Scaled | Property Name |
|-----------|---------|---------|--------|---------------|
| Inverter Voltage | ÷10 | 2410 | 241.0V | `grid_voltage_r` |
| Battery Voltage (Bank) | ÷10 | 539 | 53.9V | `battery_voltage` |
| Battery Voltage (Module) | ÷100 | 5394 | 53.94V | `voltage` |
| Cell Voltage | ÷1000 | 3364 | 3.364V | `max_cell_voltage` |
| Current | ÷100 | 1500 | 15.00A | `grid_l1_current` |
| Frequency | ÷100 | 5998 | 59.98Hz | `grid_frequency` |
| Bus Voltage | ÷100 | 3703 | 37.03V | `bus1_voltage` |
| Power | Direct | 1030 | 1030W | `inverter_power` |
| Temperature | Direct | 39 | 39°C | `inverter_temperature` |
| Energy | ÷10 | 184 | 18.4 kWh | `today_yielding` |
**Note**: Different voltage types use different scaling factors. Use device properties to avoid confusion.
See [Scaling Guide](docs/SCALING_GUIDE.md) and [API Reference](docs/api/LUXPOWER_API.md#data-scaling-reference) for complete details.
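If you work with the raw API a lot, the divisors from the table can be collected into a small helper. This is a sketch, not part of pylxpweb; the field names are those from the low-level example above, and the mapping is intentionally incomplete:
```python
# Sketch only: map raw runtime fields to the divisors documented above.
RAW_FIELD_DIVISORS = {
    "vacr": 10,   # inverter/grid voltage -> volts
    "vBat": 10,   # battery bank voltage -> volts
    "fac": 100,   # frequency -> hertz
    "pac": 1,     # power is already in watts
}

def scale_raw(runtime, field: str) -> float:
    """Return a raw integer field scaled to physical units."""
    return getattr(runtime, field) / RAW_FIELD_DIVISORS[field]
```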
## API Endpoints
**Authentication**:
- `POST /WManage/api/login` - Authenticate and establish session
**Discovery**:
- `POST /WManage/web/config/plant/list/viewer` - List stations/plants
- `POST /WManage/api/inverterOverview/getParallelGroupDetails` - Device hierarchy
- `POST /WManage/api/inverterOverview/list` - All devices in station
**Runtime Data**:
- `POST /WManage/api/inverter/getInverterRuntime` - Real-time inverter data
- `POST /WManage/api/inverter/getInverterEnergyInfo` - Energy statistics
- `POST /WManage/api/battery/getBatteryInfo` - Battery information
- `POST /WManage/api/midbox/getMidboxRuntime` - GridBOSS data
**Control**:
- `POST /WManage/web/maintain/remoteRead/read` - Read parameters
- `POST /WManage/web/maintain/remoteSet/write` - Write parameters
- `POST /WManage/web/maintain/remoteSet/functionControl` - Control functions
See [API Reference](docs/api/LUXPOWER_API.md) for complete endpoint documentation.
## Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Run tests and code quality checks
5. Commit your changes (`git commit -m 'Add amazing feature'`)
6. Push to the branch (`git push origin feature/amazing-feature`)
7. Open a Pull Request
### Development Standards
- All code must have type hints
- Maintain >90% test coverage
- Follow PEP 8 style guide
- Use async/await for all I/O operations
- Document all public APIs with Google-style docstrings
## Credits
This project builds upon research and knowledge from the Home Assistant community:
- Inspired by production Home Assistant integrations for EG4/Luxpower devices
- API endpoint research and documentation
- Best practices for async Python libraries
Special thanks to the Home Assistant community for their pioneering work with these devices.
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Endpoint Discovery
### Finding Your Endpoint
Most EG4 users in North America should use `https://monitor.eg4electronics.com` (the default).
If you're unsure which endpoint to use:
1. Try the default first: `https://monitor.eg4electronics.com`
2. For Luxpower branded systems:
- US: `https://us.luxpowertek.com`
- EU: `https://eu.luxpowertek.com`
3. Check your official mobile app or web portal URL for the correct regional endpoint
### Contributing New Endpoints
If you discover additional regional endpoints, please contribute by:
1. Opening an issue with the endpoint URL
2. Confirming it uses the same `/WManage/api/` structure
3. Noting which region/brand it serves
Known endpoints are documented in [API Reference](docs/api/LUXPOWER_API.md#choosing-the-right-endpoint).
## Disclaimer
**Unofficial** library not affiliated with Luxpower or EG4 Electronics. Use at your own risk.
This library communicates with the official EG4/Luxpower API using the same endpoints as the official mobile app and web interface.
## Support
- **Documentation**: [docs/](docs/)
- **Issues**: [GitHub Issues](https://github.com/joyfulhouse/pylxpweb/issues)
- **API Reference**: [docs/api/LUXPOWER_API.md](docs/api/LUXPOWER_API.md)
---
**Happy monitoring!** ☀️⚡🔋
| text/markdown | Bryan Li | Bryan Li <bryan.li@gmail.com> | null | null | MIT | luxpower, eg4, inverter, solar, api, client | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Home Automation",
"Topic :: Software Development :: Libra... | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp>=3.13.2",
"pydantic>=2.12.0",
"pymodbus>=3.6.0",
"pytest>=9.0.1; extra == \"dev\"",
"pytest-asyncio>=1.3.0; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"aioresponses>=0.7.8; extra == \"dev\"",
"mypy>=1.18.2; extra == \"dev\"",
"ruff>=0.14.5; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:53:13.137261 | pylxpweb-0.9.7.tar.gz | 278,179 | a6/f5/ff08ae6cd4fb8c9c7d0ce5d0da830153f958f4fc241160162979d7373b11/pylxpweb-0.9.7.tar.gz | source | sdist | null | false | 1a4ee75b14576de8d15796fa6231acd0 | 2f7f8899ab1926893973bcc90f8d960fe182edcc3d1be83b82e3ae317b8b7e1e | a6f5ff08ae6cd4fb8c9c7d0ce5d0da830153f958f4fc241160162979d7373b11 | null | [] | 564 |
2.4 | inspire-matcher | 9.0.47 | Find the records in INSPIRE most similar to a given record or reference. | ..
This file is part of INSPIRE.
Copyright (C) 2014-2017 CERN.
INSPIRE is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
INSPIRE is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with INSPIRE. If not, see <http://www.gnu.org/licenses/>.
In applying this license, CERN does not waive the privileges and immunities
granted to it by virtue of its status as an Intergovernmental Organization
or submit itself to any jurisdiction.
=================
INSPIRE-Matcher
=================
About
=====
Finds the records in INSPIRE most similar to a given record or reference.
Local setup and tests
=====================
.. code-block:: bash
pyenv virtualenv matcher
pyenv activate matcher
pip install -e ".[tests,opensearch3]"
./run-tests.sh
| null | CERN | admin@inspirehep.net | null | null | GPLv3 | null | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 2",
"Programming Languag... | [
"any"
] | https://github.com/inspirehep/inspire-matcher | null | null | [] | [] | [] | [
"inspire-json-merger>=11.0.0",
"inspire-utils>=3.0.0",
"invenio-search>=1.2.3",
"six>=1.11.0,~=1.0",
"invenio-base<2.0.0,>=1.2.3",
"pyyaml==5.4.1; python_version <= \"2.7\"",
"pyyaml<7.0,>=6.0; python_version >= \"3.6\"",
"mock>=3.0.0,~=3.0; extra == \"tests\"",
"pytest-cov>=2.5.1,~=2.0; extra == \"... | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.8 | 2026-02-18T12:52:13.025881 | inspire_matcher-9.0.47.tar.gz | 25,680 | af/35/574a406ff8f8442b0ec503e11f689f6bd25b9ab41383d7bc3cad7c284617/inspire_matcher-9.0.47.tar.gz | source | sdist | null | false | 09588b5dbe3efd5a302f05b9fdf60a7a | 620aab1f24c2203fccd7b202edb13bdec1316df96849584895dd5742a4e1c00a | af35574a406ff8f8442b0ec503e11f689f6bd25b9ab41383d7bc3cad7c284617 | null | [
"LICENSE"
] | 278 |
2.4 | trikhub | 0.6.0 | Python SDK for TrikHub - AI skills marketplace | # TrikHub Python SDK
Python SDK for TrikHub - AI skills marketplace.
## Installation
```bash
pip install trikhub
```
## Quick Start
```python
import asyncio

from trikhub import TrikGateway

async def main():
    # Initialize the gateway
    gateway = TrikGateway()
    await gateway.initialize()

    # Load triks from config
    await gateway.load_triks_from_config()

    # Get available tool definitions
    tools = gateway.get_tool_definitions()

asyncio.run(main())
```
## CLI Usage
```bash
# Install a trik
trik install @scope/trik-name
# List installed triks
trik list
# Search for triks
trik search "topic"
```
## Documentation
For full documentation, visit [docs.trikhub.com](https://docs.trikhub.com).
## License
MIT
| text/markdown | null | TrikHub Team <team@trikhub.com> | null | null | null | ai, agents, skills, llm, tools | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0.0",
"jsonschema>=4.0.0",
"httpx>=0.25.0",
"click>=8.0.0",
"fastapi>=0.100.0; extra == \"server\"",
"uvicorn>=0.23.0; extra == \"server\"",
"langchain-core>=0.3.0; extra == \"langchain\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"mypy>=1.0.0; ex... | [] | [] | [] | [
"Homepage, https://trikhub.com",
"Documentation, https://docs.trikhub.com",
"Repository, https://github.com/Molefas/trikhub/tree/main/packages/python"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T12:49:48.307707 | trikhub-0.6.0.tar.gz | 53,641 | da/92/ced0027388b463866621456f800a884e34d50a93d24a7dfa72a019943de7/trikhub-0.6.0.tar.gz | source | sdist | null | false | dc705a8556d393a10c912345041d35c3 | 2f9d31a51abc95fac9d4bba2157a0ded23d2ec73dd27a6516f905a6533faa103 | da92ced0027388b463866621456f800a884e34d50a93d24a7dfa72a019943de7 | MIT | [] | 258 |
2.3 | cryptofunc | 0.1.0 | Extract cryptographic functions and attributes from a codebase. | # Crypto Function Extractor
Extract cryptographic functions and attributes from a codebase.
## Usage
Legacy: `pip install cryptofunc`
Preferred: `uv add cryptofunc`
## Developing further
> Development flow as [Paleofuturistic Python](https://github.com/schubergphilis/paleofuturistic_python)
Prerequisite: [uv](https://docs.astral.sh/uv/)
### Setup
- Fork and clone this repository.
- Download additional dependencies: `uv sync --all-extras --dev`
- Optional: validate the setup with `uv run python -m unittest`
### Workflow
- Download dependencies (if you need any): `uv add some_lib_you_need`
- Develop (optional, tinker: `uvx --with-editable . ptpython`)
- QA:
- Format: `uv run ruff format`
- Lint: `uv run ruff check`
- Type check: `uv run mypy`
- Test: `uv run python -m unittest`
- Build (to validate it works): `uv build`
- Review documentation updates: `uv run mkdocs serve`
- Make a pull request. | text/markdown | Crytpofunc | Crytpofunc <me@here.now> | null | null | null | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.9.5 | 2026-02-18T12:49:43.961347 | cryptofunc-0.1.0.tar.gz | 1,740 | fd/79/a09084bcf99c2230ba9fd1752a9fec8a4a1c525ee71eda282bdafad3a4a9/cryptofunc-0.1.0.tar.gz | source | sdist | null | false | 480b09f871300e76a0886ce426470bbc | 45a524e98300f74793c2572d0b0cf96060843306ebd6733b432b09a3beff7154 | fd79a09084bcf99c2230ba9fd1752a9fec8a4a1c525ee71eda282bdafad3a4a9 | null | [] | 315 |
2.4 | oldp-de | 0.1.3 | German Theme for Open Legal Data Platform | # OLDP DE: German Theme for Open Legal Data
[](https://pypi.org/project/oldp-de/)
[](LICENSE)
## Getting Started
Install from PyPI:
```bash
pip install oldp-de
```
Or install the latest development version directly from GitHub:
```bash
pip install git+https://github.com/openlegaldata/oldp-de.git
```
For local development:
```bash
git clone https://github.com/openlegaldata/oldp-de.git
cd oldp-de
pip install -e ".[dev]"
```
## Configuration
Tell OLDP to use the OLDP-DE settings file and development configuration:
```bash
export DJANGO_SETTINGS_MODULE=oldp_de.settings
export DJANGO_CONFIGURATION=DevDEConfiguration # For production use `ProdDEConfiguration`
```
Start OLDP as always (from OLDP directory):
```bash
python manage.py runserver
```
## How does it work?
By loading a different settings file, we essentially tell Django to use assets (templates, images, ...) from the German theme whenever they exist, and to fall back to the defaults otherwise.
As a result, we can modify the layout and other assets without touching the original OLDP code, as sketched below.
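Conceptually, the theme's settings put its asset directories ahead of OLDP's defaults (a minimal sketch; the paths and variable names are illustrative, not the actual `oldp_de.settings`):
```python
# Sketch: list the theme's template directory before the core one, so
# Django resolves overridden templates from the theme first.
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [
            BASE_DIR / "oldp_de" / "templates",  # German theme (checked first)
            BASE_DIR / "oldp" / "templates",     # original OLDP assets
        ],
        "APP_DIRS": True,
        "OPTIONS": {"context_processors": []},
    }
]
```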
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
| text/markdown | null | Malte Ostendorff <hello@openlegaldata.io> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Framework :: Django",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"django>=5.0",
"django-configurations",
"pytest>=7.0; extra == \"dev\"",
"pytest-django>=4.5; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://de.openlegaldata.io",
"Repository, https://github.com/openlegaldata/oldp-de"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:49:09.258725 | oldp_de-0.1.3.tar.gz | 51,281 | b5/8c/029a869c5b1df6a3f9bc9e8d604cbc879abc1e0c9684aecd4d9e5d83ea7b/oldp_de-0.1.3.tar.gz | source | sdist | null | false | 85a543d093206ad16a421cfab2129d55 | 1a79578c469d192e0fb5db21feade9b8eb697e4b32237b88d3b41a9bf900d2c7 | b58c029a869c5b1df6a3f9bc9e8d604cbc879abc1e0c9684aecd4d9e5d83ea7b | MIT | [
"LICENSE"
] | 256 |
2.4 | omego | 0.8.0 | OME installation and administration tool | OME, Go (omego)
===============
.. image:: https://github.com/ome/omego/actions/workflows/workflow.yml/badge.svg
:target: https://github.com/ome/omego/actions
.. image:: https://badge.fury.io/py/omego.svg
:target: https://badge.fury.io/py/omego
The omego command provides utilities for installing and managing OME applications.
Getting Started
---------------
For Python 2.6, you will need to install `argparse`_
::
$ pip install argparse
With that, it's possible to execute omego:
::
$ python omego/main.py
Pip installation
-----------------
To install the latest release of omego use pip install:
::
$ pip install omego
$ omego
License
-------
omego is released under the GPL.
Copyright
---------
2013-2026, The Open Microscopy Environment
.. _argparse: http://pypi.python.org/pypi/argparse
| null | The Open Microscopy Team | ome-devel@lists.openmicroscopy.org.uk | null | null | GPLv2 | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Topic :: Database ::... | [] | https://github.com/ome/omego | null | null | [] | [] | [] | [
"future",
"yaclifw>=0.1.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:48:27.292571 | omego-0.8.0.tar.gz | 38,102 | 31/7a/d5a89fa1aa8b92d046ec6bee4ece2a29008c2fcb68c6a433ce55e3e44d8e/omego-0.8.0.tar.gz | source | sdist | null | false | 9483877edd263955dd10ef53d444f12a | 7de918212cf904b04de20d8ae86f708669821e99fd117cfb4b3b1d7a9d1e0c37 | 317ad5a89fa1aa8b92d046ec6bee4ece2a29008c2fcb68c6a433ce55e3e44d8e | null | [
"LICENSE.txt"
] | 412 |
2.4 | triggerflow | 0.3.9 | Utilities for ML models targeting hardware triggers | # Machine Learning for Hardware Triggers
`triggerflow` provides a set of utilities for Machine Learning models targeting FPGA deployment.
The `TriggerModel` class consolidates several Machine Learning frontends and compiler backends to construct a "trigger model", while the MLflow utilities handle logging, versioning, and loading of trigger models.
## Installation
```bash
pip install triggerflow
```
## Usage
```python
import numpy as np

from triggerflow.core import TriggerModel

scales = {'offsets': np.array([18, 0, 72, 7, 0, 73, 4, 0, 73, 4, 0, 72, 3, 0, 72, 6, -0, 286, 3, -2, 285, 3, -2, 282, 3, -2, 286, 29, 0, 72, 22, 0, 72, 18, 0, 72, 14, 0, 72, 11, 0, 72, 10, 0, 72, 10, 0, 73, 9, 0], dtype='int'),
          'shifts': np.array([3, 0, 6, 2, 5, 6, 0, 5, 6, 0, 5, 6, -1, 5, 6, 2, 7, 8, 0, 7, 8, 0, 7, 8, 0, 7, 8, 4, 6, 6, 3, 6, 6, 3, 6, 6, 3, 6, 6, 3, 6, 6, 3, 6, 6, 3, 6, 6, 3, 6], dtype='int')}
trigger_model = TriggerModel(
    config="triggermodel_config.yaml",
    native_model=model,  # Native XGBoost/Keras model
    scales=scales,
)
trigger_model()  # Vivado required on $PATH for the firmware build.
# then:
output_software = trigger_model.software_predict(input_data)
output_firmware = trigger_model.firmware_predict(input_data)
output_qonnx = trigger_model.qonnx_predict(input_data)
# save and load trigger models:
trigger_model.save("triggerflow.tar.xz")
# in a separate session:
from triggerflow.core import TriggerModel
triggerflow = TriggerModel.load("triggerflow.tar.xz")
```
## The Config file:
Use this `.yaml` template and change as needed.
```yaml
compiler:
name: "AXO"
ml_backend: "keras"
compiler: "hls4ml"
fpga_part: "xc7vx690t-ffg1927-2"
clock_period: 25
n_outputs: 1
project_name: "AXO_project"
namespace: "AXO"
io_type: "io_parallel"
backend: "Vitis"
write_weights_txt: false
subsystem:
name: "uGT"
n_inputs: 50
offset_type: "ap_fixed<10,10>"
shift_type: "ap_fixed<10,10>"
objects:
muons:
size: 4
features: [pt, eta_extrapolated, phi_extrapolated]
jets:
size: 4
features: [et, eta, phi]
egammas:
size: 4
features: [et, eta, phi]
taus:
size: 4
features: [et, eta, phi]
global_features:
#- et.et
#- ht.et
- etmiss.et
- etmiss.phi
#- htmiss.et
#- htmiss.phi
#- ethfmiss.et
#- ethfmiss.phi
#- hthfmiss.et
#- hthfmiss.phi
muon_size: 4
jet_size: 4
egamma_size: 4
tau_size: 4
```
## Logging with MLflow
```python
# logging with MLflow:
import mlflow

from triggerflow.mlflow_wrapper import log_model

mlflow.set_tracking_uri("https://ngt.cern.ch/models")
experiment_id = mlflow.create_experiment("example-experiment")

with mlflow.start_run(run_name="trial-v1", experiment_id=experiment_id):
    log_model(triggerflow, registered_model_name="TriggerModel")
```
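A logged model can later be fetched back through the registry. This is a hedged sketch: `mlflow.pyfunc.load_model` is MLflow's generic loader, and the `models:/TriggerModel/latest` URI assumes the `registered_model_name` used above.
```python
import mlflow

mlflow.set_tracking_uri("https://ngt.cern.ch/models")

# Load the latest registered version; the URI format is standard MLflow,
# the model name matches registered_model_name from the logging example.
loaded = mlflow.pyfunc.load_model("models:/TriggerModel/latest")
prediction = loaded.predict(input_data)  # input_data as in the examples above
```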
### Note: This package doesn't install its dependencies, so it won't disrupt specific training environments or custom compilers. For a reference environment, see `environment.yml`.
# Creating a kedro pipeline
This repository also comes with a default pipeline for trigger models based on kedro.
One can create a new pipeline via:
NOTE: pipeline names must not contain "-" or upper-case characters!
```bash
# Create a conda environment & activate it
conda create -n triggerflow python=3.11
conda activate triggerflow
# install triggerflow
pip install triggerflow
# Create a pipeline
triggerflow new demo_pipeline
# NOTE: since triggerflow doesn't install dependencies, one has to update the
# conda env based on the environment.yml file of the pipeline;
# this file can be changed to fit the needs of the individual project
cd demo_pipeline
conda env update -n triggerflow --file environment.yml
# Run Kedro
kedro run
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"setuptools==65.5",
"cookiecutter>=2.3",
"PyYAML>=6",
"Jinja2>=3",
"kedro==1.0.0",
"kedro-datasets",
"kedro-mlflow==2.0.1",
"awkward<3,>=2.8",
"dask==2025.3.0",
"coffea>=2025.12",
"distributed==2025.3.0",
"pytest-cov~=3.0; extra == \"dev\"",
"pytest-mock<2.0,>=1.7.1; extra == \"dev\"",
"py... | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T12:48:23.299659 | triggerflow-0.3.9.tar.gz | 11,046,402 | 35/a7/7b2af80e4324abb6141268238ca084e48a58d710efe7dead54bc477c1687/triggerflow-0.3.9.tar.gz | source | sdist | null | false | f70fe6cd17fb11738b446590d9477fdb | 30b33cd6525354966db883e31132cfc1eb8ea6fc136b2c33186de8f8796a52f3 | 35a77b2af80e4324abb6141268238ca084e48a58d710efe7dead54bc477c1687 | null | [] | 259 |
2.4 | craftllc-wikin | 1.2.5 | A simple documentation generator for Python | # Wikin
A simple, beautiful documentation generator for Python. It extracts docstrings from functions and special comments from variables.
## Features
- **Function Docstrings**: Standard Python triple-quoted docstrings.
- **Variable Documentation**:
- `#: comment before variable`
- `variable = value #: comment after variable`
- **Modern UI**: Clean, responsive HTML output with a premium look.
- **Markdown Support**: Use Markdown in your docstrings and comments.
- **Module Metadata**: Customize how modules appear in the documentation using a `Wikin:` block.
## Installation
```bash
pip install craftllc-wikin
```
## Usage
```bash
wikin <path_to_code> <project_name> <version>
```
Example:
```bash
wikin ./ "My Project" 1.0.0
```
This will generate documentation in the `docs/index.html` file.
## Variable Documentation Example
```python
#: Number of requests per second
rpm = 10
timeout = 30 #: Connection timeout in seconds
```
Wikin will pick these up and include them in the generated documentation.
## Ignoring Files
To exclude specific files or directories from being processed, create a `.wikinignore` file in your `docs/` folder. It supports standard `.gitignore` (gitwildmatch) patterns.
**Example `docs/.wikinignore`:**
```text
# Ignore a specific file
secret_module.py
# Ignore an entire directory
internal_tools/
# Ignore all files with a certain extension
*.deprecated.py
```
## Configuration
You can customize your documentation by creating a `docs/.wikinconfig` file (TOML format) in your documentation directory.
### Adding Project Links
To add helpful links (like GitHub, PyPI, or your website) to the sidebar, use the `[links]` section:
```toml
[links]
PyPI = "https://pypi.org/project/craftllc-wikin"
GitHub = "https://github.com/CraftLLC/Wikin"
```
These links will appear as stylish buttons in the sidebar for quick access.
## Module Metadata Example
You can set a custom display name for your modules by adding a `Wikin:` block at the top of your module's docstring:
```python
"""
Wikin:
name: Core Parser
This module handles all the parsing logic for Wikin.
"""
```
In the documentation, this module will be titled as **Core Parser (your_package.parser)**. The metadata block itself will be hidden from the module's description.
| text/markdown | null | CraftLLC <craftllcompany@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"jinja2",
"markdown",
"pathspec",
"pygments"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T12:47:11.589137 | craftllc_wikin-1.2.5.tar.gz | 366,180 | 8f/af/21a833f50102c7b81534c97a4defd120f0915940c7c0fbd94beefa153d56/craftllc_wikin-1.2.5.tar.gz | source | sdist | null | false | ff2e837571ee4c544dc2ac8f0d4b37b5 | b08c2bdd442ce58d1cd98f358cd5e92f6fe3f92005e72ea6dbfb726302cfd895 | 8faf21a833f50102c7b81534c97a4defd120f0915940c7c0fbd94beefa153d56 | null | [
"LICENSE"
] | 254 |
2.4 | ayon-python-api | 1.2.11 | AYON Python API | # AYON server API
Python client for connecting to an AYON server. The client uses REST and GraphQL to communicate with the server via the `requests` module.
The AYON Python API should support connecting to the server with raw REST functions and provide prepared functionality for working with entities. It should contain only functionality that can be used with the core server.
The module supports a singleton connection, which uses the `AYON_SERVER_URL` and `AYON_API_KEY` environment variables as the source for the connection. The singleton connection uses a `ServerAPI` object. Multiple connections to different servers can be created at the same time; for that purpose, use the `ServerAPIBase` object.
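The two connection styles can be sketched as follows (a hedged sketch: the module import name `ayon_api`, the `get_projects` helper, and the `ServerAPIBase` constructor arguments are assumptions, not verified signatures):

```python
import os

# Singleton connection: configured via the documented environment variables.
os.environ["AYON_SERVER_URL"] = "https://ayon.example.com"
os.environ["AYON_API_KEY"] = "<your api key>"

import ayon_api  # assumed import name of this package

# Global functions are expected to use the singleton ServerAPI connection.
projects = ayon_api.get_projects()  # assumed helper name

# Multiple servers at once: create explicit ServerAPIBase objects.
from ayon_api import ServerAPIBase

other_server = ServerAPIBase("https://other-ayon.example.com")  # assumed constructor
```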
## Install
AYON python api is available on PyPi:
pip install ayon-python-api
For development purposes you may follow [build](#build-wheel) guide to build and install custom wheels.
## Cloning the repository
Repository does not have submodules or special cases. Clone is simple as:
git clone git@github.com:ynput/ayon-python-api.git
## Build wheel
For wheel build is required a `wheel` module from PyPi:
pip install wheel
Open terminal and change directory to ayon-python-api repository and build wheel:
cd <REPOSITORY ROOT>/ayon-python-api
python setup.py sdist bdist_wheel
Once finished, a wheel should be created at `./dist/ayon_python_api-<VERSION>-py3-none-any.whl`.
---
### Wheel installation
The wheel file can be used to install the package using pip:
pip install <REPOSITORY ROOT>/dist/ayon_python_api-<VERSION>-py3-none-any.whl
If pip complains that `ayon-python-api` is already installed, just uninstall the existing one first:
pip uninstall ayon-python-api
## TODOs
- Find a more suitable name for `ServerAPI` objects (right now `con` or `connection` is used)
- Add all available CRUD operation on entities using REST
- Add folder and task changes to operations
- Enhance entity hub
- Missing docstrings in EntityHub -> especially entity arguments are missing
- Better order of arguments for entity classes
- Move entity hub to first place
- Skip those which are invalid for the entity and fake it for base or remove it from base
- Entity hub should use operations session to do changes
- Entity hub could also handle 'product', 'version' and 'representation' entities
- Missing 'status' on folders
- Missing assignees on tasks
- Pass docstrings and argument definitions from `ServerAPI` methods to global functions
- Split `ServerAPI` into smaller chunks (somehow), the class has 4k+ lines of code
- Add .pyi stub for ServerAPI
- Missing websockets connection
| text/markdown | ynput.io | "ynput.io" <info@ynput.io> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| AYON, ynput, OpenPype, vfx | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | https://github.com/ynput/ayon-python-api | null | null | [] | [] | [] | [
"requests>=2.27.1",
"Unidecode>=1.3.0"
] | [] | [] | [] | [
"Repository, https://github.com/ynput/ayon-python-api",
"Changelog, https://github.com/ynput/ayon-python-api/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:47:07.966661 | ayon_python_api-1.2.11.tar.gz | 167,801 | a1/ea/b650bc8e0c9fe194c22a806b7777b8272ce731284f0d49efc241774de365/ayon_python_api-1.2.11.tar.gz | source | sdist | null | false | 6980df5cba411304c1319a50a6f845c9 | 7a53a9affc34202c677e89726079f0c98cb1bfa26de77b7c71eb8d7dd5d46afb | a1eab650bc8e0c9fe194c22a806b7777b8272ce731284f0d49efc241774de365 | null | [
"LICENSE"
] | 421 |
2.4 | descope | 1.10.1 | Descope Python SDK | # Descope SDK for Python
The Descope SDK for python provides convenient access to the Descope user management and authentication API
for a backend written in python. You can read more on the [Descope Website](https://descope.com).
## Requirements
The SDK supports Python 3.8.1 and above.
## Installing the SDK
Install the package with:
```bash
pip install descope
```
#### If you would like to use the Flask decorators, make sure to install the Flask extras:
```bash
pip install descope[Flask]
```
## Setup
A Descope `Project ID` is required to initialize the SDK. Find it on the
[project page in the Descope Console](https://app.descope.com/settings/project).
**Note:** Public access to the authentication APIs can be disabled via the Descope console.
If disabled, it's still possible to use the authentication APIs by providing a management key with
the appropriate access (`Authentication` / `Full Access`).
If not provided directly, this value is retrieved from the `DESCOPE_AUTH_MANAGEMENT_KEY` environment variable instead.
If neither value is set, API calls for any disabled authentication methods will fail.
```python
from descope import DescopeClient
# Initialized after setting the DESCOPE_PROJECT_ID and DESCOPE_AUTH_MANAGEMENT_KEY env vars
descope_client = DescopeClient()
# ** Or directly (w/ optional base URL) **
descope_client = DescopeClient(
project_id="<Project ID>",
    auth_management_key="<Descope Project Management Key>",
base_url="<Descope Base URL>"
)
```
## Authentication Functions
These sections show how to use the SDK to perform various authentication/authorization functions:
1. [OTP Authentication](#otp-authentication)
2. [Magic Link](#magic-link)
3. [Enchanted Link](#enchanted-link)
4. [OAuth](#oauth)
5. [SSO (SAML / OIDC)](#sso-saml--oidc)
6. [TOTP Authentication](#totp-authentication)
7. [Passwords](#passwords)
8. [Session Validation](#session-validation)
9. [Roles & Permission Validation](#roles--permission-validation)
10. [Tenant selection](#tenant-selection)
11. [Logging Out](#logging-out)
12. [History](#history)
13. [My Tenants](#my-tenants)
## API Management Function
These sections show how to use the SDK to perform permission and user management functions. You will need to create an instance of `DescopeClient` by following the [Setup](#setup-1) guide before you can use any of these functions:
1. [Manage Tenants](#manage-tenants)
2. [Manage Users](#manage-users)
3. [Manage Access Keys](#manage-access-keys)
4. [Manage SSO Setting](#manage-sso-setting)
5. [Manage Permissions](#manage-permissions)
6. [Manage Roles](#manage-roles)
7. [Query SSO Groups](#query-sso-groups)
8. [Manage Flows](#manage-flows-and-theme)
9. [Manage JWTs](#manage-jwts)
10. [Impersonate](#impersonate)
11. [Embedded links](#embedded-links)
12. [Audit](#audit)
13. [Manage FGA (Fine-grained Authorization)](#manage-fga-fine-grained-authorization)
14. [Manage Project](#manage-project)
15. [Manage SSO Applications](#manage-sso-applications)
16. [Manage Outbound Applications](#manage-outbound-applications)
17. [Manage Descopers](#manage-descopers)
18. [Manage Management Keys](#manage-management-keys)
If you wish to run any of our code samples and play with them, check out our [Code Examples](#code-examples) section.
If you're performing end-to-end testing, check out the [Utils for your end to end (e2e) tests and integration tests](#utils-for-your-end-to-end-e2e-tests-and-integration-tests) section. You will need to use the `DescopeClient` object created in the [Setup](#setup-1) guide.
For rate limiting information, please refer to the [API Rate Limits](#api-rate-limits) section.
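As a brief illustration of handling rate limits (a minimal sketch: the `RateLimitException` import path, the `rate_limit_parameters` mapping, and the `"Retry-After"` key are assumptions drawn from that section rather than verified here):
```python
from time import sleep

from descope import DeliveryMethod
from descope.exceptions import RateLimitException  # assumed import path

try:
    descope_client.otp.sign_up_or_in(
        method=DeliveryMethod.EMAIL, login_id="desmond@descope.com"
    )
except RateLimitException as e:
    # Assumption: the exception exposes the cooldown window in seconds
    retry_after = int(e.rate_limit_parameters.get("Retry-After", 1))
    sleep(retry_after)
```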
### OTP Authentication
Send a user a one-time password (OTP) using your preferred delivery method (_email / SMS / Voice call / WhatsApp_). An email address or phone number must be provided accordingly.
The user can either `sign up`, `sign in` or `sign up or in`
```python
from descope import DeliveryMethod
# Every user must have a login ID. All other user information is optional
email = "desmond@descope.com"
user = {"name": "Desmond Copeland", "phone": "212-555-1234", "email": email}
masked_address = descope_client.otp.sign_up(method=DeliveryMethod.EMAIL, login_id=email, user=user)
```
The user will receive a code using the selected delivery method. Verify that code using:
```python
jwt_response = descope_client.otp.verify_code(
    method=DeliveryMethod.EMAIL, login_id=email, code=value  # value is the code the user received
)
session_token = jwt_response[SESSION_TOKEN_NAME].get("jwt")
refresh_token = jwt_response[REFRESH_SESSION_TOKEN_NAME].get("jwt")
```
The session and refresh JWTs should be returned to the caller, and passed with every request in the session. Read more on [session validation](#session-validation)
### Magic Link
Send a user a Magic Link using your preferred delivery method (_email / SMS / Voice call / WhatsApp_).
The Magic Link will redirect the user to a page where its token needs to be verified.
This redirection can be configured in code, or set globally in the [Descope Console](https://app.descope.com/settings/authentication/magiclink).
The user can either `sign up`, `sign in` or `sign up or in`
```python
from descope import DeliveryMethod
masked_address = descope_client.magiclink.sign_up_or_in(
method=DeliveryMethod.EMAIL,
login_id="desmond@descope.com",
uri="http://myapp.com/verify-magic-link", # Set redirect URI here or via console
)
```
To verify a magic link, your redirect page must call the validation function on the token (`t`) parameter (`https://your-redirect-address.com/verify?t=<token>`):
```python
jwt_response = descope_client.magiclink.verify(token=token)
session_token = jwt_response[SESSION_TOKEN_NAME].get("jwt")
refresh_token = jwt_response[REFRESH_SESSION_TOKEN_NAME].get("jwt")
```
The session and refresh JWTs should be returned to the caller, and passed with every request in the session. Read more on [session validation](#session-validation)
### Enchanted Link
Using the Enchanted Link APIs enables users to sign in by clicking a link
delivered to their email address. The email will include 3 different links,
and the user will have to click the right one, based on the 2-digit number that is
displayed when initiating the authentication process.
This method is similar to [Magic Link](#magic-link) but differs in two major ways:
- The user must choose the correct link out of the three, instead of having just one
single link.
- This supports cross-device clicking, meaning the user can try to log in on one device,
like a computer, while clicking the link on another device, for instance a mobile phone.
The Enchanted Link will redirect the user to a page where its token needs to be verified.
This redirection can be configured in code per request, or set globally in the [Descope Console](https://app.descope.com/settings/authentication/enchantedlink).
The user can either `sign up`, `sign in` or `sign up or in`
```python
resp = descope_client.enchantedlink.sign_up_or_in(
login_id=email,
uri="http://myapp.com/verify-enchanted-link", # Set redirect URI here or via console
)
link_identifier = resp["linkId"] # Show the user which link they should press in their email
pending_ref = resp["pendingRef"] # Used to poll for a valid session
masked_email = resp["maskedEmail"] # The email that the message was sent to in a masked format
```
After sending the link, you must poll to receive a valid session using the `pending_ref` from
the previous step. A valid session will be returned only after the user clicks the right link.
```python
from time import sleep
from descope import AuthException
import logging

max_tries = 15  # illustrative polling limit (not part of the original sample)
done = False
jwt_response = None
i = 0
while not done and i < max_tries:
try:
i = i + 1
sleep(4)
jwt_response = descope_client.enchantedlink.get_session(pending_ref)
done = True
except AuthException as e: # Poll while still receiving 401 Unauthorized
if e.status_code != 401: # Other failures means something's wrong, abort
logging.info(f"Failed pending session, err: {e}")
done = True
if jwt_response:
session_token = jwt_response[SESSION_TOKEN_NAME].get("jwt")
refresh_token = jwt_response[REFRESH_SESSION_TOKEN_NAME].get("jwt")
```
To verify an enchanted link, your redirect page must call the validation function on the token (`t`) parameter (`https://your-redirect-address.com/verify?t=<token>`). Once the token is verified, the session polling will receive a valid `jwt_response`.
```python
try:
descope_client.enchantedlink.verify(token=token)
# Token is valid
except AuthException as e:
# Token is invalid
```
The session and refresh JWTs should be returned to the caller, and passed with every request in the session. Read more on [session validation](#session-validation)
### OAuth
Users can authenticate using their social logins, using the OAuth protocol. Configure your OAuth settings on the [Descope console](https://app.descope.com/settings/authentication/social). To start a flow call:
```python
descope_client.oauth.start(
provider="google", # Choose an oauth provider out of the supported providers
return_url="https://my-app.com/handle-oauth", # Can be configured in the console instead of here
)
```
The user will authenticate with the authentication provider, and will be redirected back to the redirect URL, with an appended `code` HTTP URL parameter. Exchange it to validate the user:
```python
jwt_response = descope_client.oauth.exchange_token(code)
session_token = jwt_response[SESSION_TOKEN_NAME].get("jwt")
refresh_token = jwt_response[REFRESH_SESSION_TOKEN_NAME].get("jwt")
```
The session and refresh JWTs should be returned to the caller, and passed with every request in the session. Read more on [session validation](#session-validation)
### SSO (SAML / OIDC)
Users can authenticate to a specific tenant using SAML/OIDC based on the tenant settings. Configure your SAML/OIDC tenant settings on the [Descope console](https://app.descope.com/tenants). To start a flow call:
```python
descope_client.sso.start(
tenant="my-tenant-ID", # Choose which tenant to log into
return_url="https://my-app.com/handle-sso", # Can be configured in the console instead of here
)
```
The user will authenticate with the authentication provider configured for that tenant, and will be redirected back to the redirect URL, with an appended `code` HTTP URL parameter. Exchange it to validate the user:
```python
jwt_response = descope_client.sso.exchange_token(code)
session_token = jwt_response[SESSION_TOKEN_NAME].get("jwt")
refresh_token = jwt_response[REFRESH_SESSION_TOKEN_NAME].get("jwt")
```
The session and refresh JWTs should be returned to the caller, and passed with every request in the session. Read more on [session validation](#session-validation)
Note: the `descope_client.saml.start(..)` and `descope_client.saml.exchange_token(..)` functions are DEPRECATED; use the `sso` functions above instead.
### TOTP Authentication
The user can authenticate using an authenticator app, such as Google Authenticator.
Sign up like you would using any other authentication method. The sign up response
will then contain a QR code `image` that can be displayed to the user to scan using
their mobile device camera app, or the user can enter the `key` manually or click
on the link provided by the `provisioning_url`.
Existing users can add TOTP using the `update` function.
```python
from descope import DeliveryMethod
# Every user must have a login ID. All other user information is optional
email = "desmond@descope.com"
user = {"name": "Desmond Copeland", "phone": "212-555-1234", "email": email}
totp_response = descope_client.totp.sign_up(method=DeliveryMethod.EMAIL, login_id=email, user=user)
# Use one of the provided options to have the user add their credentials to the authenticator
provisioning_url = totp_response["provisioningURL"]
image = totp_response["image"]
key = totp_response["key"]
```
There are 3 different ways to allow the user to save their credentials in
their authenticator app - either by clicking the provisioning URL, scanning the QR
image or inserting the key manually. After that, signing in is done using the code
the app produces.
```python
jwt_response = descope_client.totp.sign_in_code(
login_id=email,
code=code, # Code from authenticator app
)
session_token = jwt_response[SESSION_TOKEN_NAME].get("jwt")
refresh_token = jwt_response[REFRESH_SESSION_TOKEN_NAME].get("jwt")
```
The session and refresh JWTs should be returned to the caller, and passed with every request in the session. Read more on [session validation](#session-validation)
#### Deleting the TOTP Seed
Pass the `login_id` to the function to remove the user's TOTP seed.
```python
response = descope_client.mgmt.user.remove_totp_seed(login_id=login_id)
```
### Passwords
The user can also authenticate with a password, though it's recommended to
prefer passwordless authentication methods if possible. Sign up requires the
caller to provide a valid password that meets all the requirements configured
for the [password authentication method](https://app.descope.com/settings/authentication/password) in the Descope console.
```python
# Every user must have a login_id and a password. All other user information is optional
login_id = "desmond@descope.com"
password = "qYlvi65KaX"
user = {
"name": "Desmond Copeland",
"email": login_id,
}
jwt_response = descope_client.password.sign_up(
login_id=login_id,
password=password,
user=user,
)
session_token = jwt_response[SESSION_TOKEN_NAME].get("jwt")
refresh_token = jwt_response[REFRESH_SESSION_TOKEN_NAME].get("jwt")
```
The user can later sign in using the same login_id and password.
```python
jwt_response = descope_client.password.sign_in(login_id, password)
session_token = jwt_response[SESSION_TOKEN_NAME].get("jwt")
refresh_token = jwt_response[REFRESH_SESSION_TOKEN_NAME].get("jwt")
```
The session and refresh JWTs should be returned to the caller, and passed with every request in the session. Read more on [session validation](#session-validation)
In case the user needs to update their password, one of two methods is available: resetting their password or replacing it.
**Changing Passwords**
_NOTE: send_reset will only work if the user has a validated email address. Otherwise password reset prompts cannot be sent._
In the [password authentication method](https://app.descope.com/settings/authentication/password) in the Descope console, you can define which alternative authentication method can be used to authenticate the user so that they can reset and update their password.
```python
# Start the reset process by sending a password reset prompt. In this example we'll assume
# that magic link is configured as the reset method. The optional redirect URL is used in the
# same way as in regular magic link authentication.
login_id = "desmond@descope.com"
redirect_url = "https://myapp.com/password-reset"
descope_client.password.send_reset(login_id, redirect_url)
```
The magic link, in this case, must then be verified like any other magic link (see the [magic link section](#magic-link) for more details). However, once the user is verified,
they should be allowed to provide a new password instead of the old one. Since the user is now authenticated, this is possible via:
```python
# The refresh token is required to make sure the user is authenticated.
descope_client.password.update(login_id, new_password, token)
```
`update` can always be called when the user is authenticated and has a valid session.
Alternatively, it is also possible to replace an existing active password with a new one.
```python
# Replaces the user's current password with a new one
jwt_response = descope_client.password.replace(login_id, old_password, new_password)
session_token = jwt_response[SESSION_TOKEN_NAME].get("jwt")
refresh_token = jwt_response[REFRESH_SESSION_TOKEN_NAME].get("jwt")
```
### Session Validation
Every secure request performed between your client and server needs to be validated. The client sends
the session and refresh tokens with every request, and they are validated using one of the following:
```python
# Validate the session. Will raise if expired
try:
jwt_response = descope_client.validate_session(session_token)
except AuthException:
# Session expired
# If validate_session raises an exception, you will need to refresh the session using
jwt_response = descope_client.refresh_session(refresh_token)
# Alternatively, you could combine the two and
# have the session validated and automatically refreshed when expired
jwt_response = descope_client.validate_and_refresh_session(session_token, refresh_token)
```
Choose the right session validation and refresh combination that suits your needs.
Note: all of these validation APIs accept an optional `audience` parameter, which should be provided when using a JWT that contains the `aud` claim.
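For instance (a minimal sketch, assuming `audience` is accepted as a keyword argument as the note above describes):
```python
# Validate a session whose JWT was issued with an `aud` claim
jwt_response = descope_client.validate_session(session_token, audience="my-audience")
```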
Refreshed sessions return the same response as is returned when users first sign up / log in,
containing the session and refresh tokens, as well as all of the JWT claims.
Make sure to return the tokens from the response to the client, or update the cookie if you're using one.
Usually, the tokens can be passed in and out via HTTP headers or via a cookie.
The implementation can differ according to your framework of choice. See our [samples](#code-samples) for a few examples.
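To make this concrete, here is a minimal sketch of wiring session validation into an HTTP handler (the framework, route, and `Authorization: Bearer` header convention are illustrative assumptions, not part of the SDK):
```python
from flask import Flask, jsonify, request

from descope import AuthException

app = Flask(__name__)

@app.route("/api/protected")
def protected():
    # Assumption: the client sends the session JWT as "Authorization: Bearer <jwt>"
    auth_header = request.headers.get("Authorization", "")
    session_token = auth_header.split(" ", 1)[-1]
    try:
        # descope_client is the instance created in the Setup section
        descope_client.validate_session(session_token)
    except AuthException:
        return jsonify({"error": "invalid or expired session"}), 401
    return jsonify({"ok": True})
```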
If Roles & Permissions are used, validate them immediately after validating the session. See the [next section](#roles--permission-validation)
for more information.
### Roles & Permission Validation
When using Roles & Permission, it's important to validate the user has the required
authorization immediately after making sure the session is valid. Taking the `jwt_response`
received by the [session validation](#session-validation), call the following functions:
For multi-tenant uses:
```python
# You can validate specific permissions
valid_permissions = descope_client.validate_tenant_permissions(
jwt_response, "my-tenant-ID", ["Permission to validate"]
)
if not valid_permissions:
# Deny access
# Or validate roles directly
valid_roles = descope_client.validate_tenant_roles(
jwt_response, "my-tenant-ID", ["Role to validate"]
)
if not valid_roles:
# Deny access
# Or get the matched roles/permissions
matched_tenant_roles = descope_client.get_matched_tenant_roles(
jwt_response, "my-tenant-ID", ["role-name1", "role-name2"]
)
matched_tenant_permissions = descope_client.get_matched_tenant_permissions(
jwt_response, "my-tenant-ID", ["permission-name1", "permission-name2"]
)
```
When not using tenants, use:
```python
# You can validate specific permissions
valid_permissions = descope_client.validate_permissions(
jwt_response, ["Permission to validate"]
)
if not valid_permissions:
# Deny access
# Or validate roles directly
valid_roles = descope_client.validate_roles(
jwt_response, ["Role to validate"]
)
if not valid_roles:
# Deny access
# Or get the matched roles/permissions
matched_roles = descope_client.get_matched_roles(
jwt_response, ["role-name1", "role-name2"]
)
matched_permissions = descope_client.get_matched_permissions(
jwt_response, ["permission-name1", "permission-name2"]
)
```
### Tenant selection
For a user that has permissions to multiple tenants, you can set a specific tenant as the currently selected one.
This will add an extra attribute to the refresh JWT and the session JWT with the selected tenant ID.
```python
tenant_id = "t1"
jwt_response = descope_client.select_tenant(tenant_id, refresh_token)
```
### Logging Out
You can log out a user from an active session by providing their `refresh_token` for that session.
After calling this function, you must invalidate or remove any cookies you have created.
```python
descope_client.logout(refresh_token)
```
It is also possible to sign the user out of all the devices they are currently signed in on. Calling `logout_all` will
invalidate all of the user's refresh tokens. After calling this function, you must invalidate or remove any cookies you have created.
```python
descope_client.logout_all(refresh_token)
```
### History
You can get the authentication history of the current session user.
The request requires a valid refresh token.
```python
users_history_resp = descope_client.history(refresh_token)
for user_history in users_history_resp:
# Do something
```
### My Tenants
You can get the tenants of the current session user.
The request requires a valid refresh token,
and either a boolean to receive only the currently selected tenant,
or a list of tenant IDs that this user is part of.
```python
tenants_resp = descope_client.my_tenants(refresh_token, False, ["tenant_id"])
for tenant in tenants_resp.tenants:
# Do something
```
## Management API
It is very common for some form of management or automation to be required. These can be performed
using the management API. Please note that these actions are more sensitive as they are administrative
in nature. Please use responsibly.
### Setup
To use the management API you'll need a `Management Key` along with your `Project ID`.
Create one in the [Descope Console](https://app.descope.com/settings/company/managementkeys).
```python
from descope import DescopeClient
# Initialized after setting the DESCOPE_PROJECT_ID and the DESCOPE_MANAGEMENT_KEY env vars
descope_client = DescopeClient()
# ** Or directly **
descope_client = DescopeClient(project_id="<Project ID>", management_key="<Management Key>")
```
### Verbose Mode for Debugging
When debugging failed API requests, you can enable verbose mode to capture HTTP response metadata like headers (`cf-ray`, `x-request-id`), status codes, and raw response bodies. This is especially useful when working with Descope support to troubleshoot issues.
```python
from descope import DescopeClient, AuthException
import logging
logger = logging.getLogger(__name__)
# Enable verbose mode during client initialization
client = DescopeClient(
project_id="<Project ID>",
management_key="<Management Key>",
verbose=True # Enable response metadata capture
)
try:
# Make any API call
client.mgmt.user.create(
login_id="test@example.com",
email="test@example.com"
)
except AuthException as e:
# Access the last response metadata for debugging
response = client.get_last_response()
if response:
logger.error(f"Request failed with status {response.status_code}")
logger.error(f"cf-ray: {response.headers.get('cf-ray')}")
logger.error(f"x-request-id: {response.headers.get('x-request-id')}")
logger.error(f"Response body: {response.text}")
# Provide cf-ray to Descope support for debugging
print(f"Please provide this cf-ray to support: {response.headers.get('cf-ray')}")
```
**Important Notes:**
- Verbose mode is **disabled by default** (no performance impact when not needed)
- When enabled, only the **most recent** HTTP response is stored
- `get_last_response()` returns `None` when verbose mode is disabled
- The response object provides dict-like access to JSON data while also exposing HTTP metadata
**Available metadata on response objects:**
- `response.headers` - HTTP response headers (dict-like object)
- `response.status_code` - HTTP status code (int)
- `response.text` - Raw response body as text (str)
- `response.url` - Request URL (str)
- `response.ok` - Whether status code is < 400 (bool)
- `response.json()` - Parsed JSON response (dict/list)
- `response["key"]` - Dict-like access to JSON data (for backward compatibility)
For a complete example, see [samples/verbose_mode_example.py](https://github.com/descope/python-sdk/blob/main/samples/verbose_mode_example.py).
### Manage Tenants
You can create, update, delete or load tenants:
```Python
# You can optionally set your own ID when creating a tenant
descope_client.mgmt.tenant.create(
name="My First Tenant",
id="my-custom-id", # This is optional.
self_provisioning_domains=["domain.com"],
custom_attributes={"attribute-name": "value"},
)
# Update will override all fields as is. Use carefully.
descope_client.mgmt.tenant.update(
id="my-custom-id",
name="My First Tenant",
self_provisioning_domains=["domain.com", "another-domain.com"],
custom_attributes={"attribute-name": "value"},
)
# Managing the tenant's settings
# Getting the settings
descope_client.mgmt.tenant.load_settings(id="my-custom-id")
# Updating the settings
descope_client.mgmt.tenant.update_settings(
    id="my-custom-id",
    self_provisioning_domains=["domain.com"],
    session_settings_enabled=True,
    refresh_token_expiration=1,
    refresh_token_expiration_unit="hours",
)
# Tenant deletion cannot be undone. Use carefully.
# Pass True as the cascade value if you want to also delete all users/keys associated only with this tenant
descope_client.mgmt.tenant.delete(id="my-custom-id", cascade=False)
# Load tenant by id
tenant_resp = descope_client.mgmt.tenant.load("my-custom-id")
# Load all tenants
tenants_resp = descope_client.mgmt.tenant.load_all()
tenants = tenants_resp["tenants"]
for tenant in tenants:
# Do something
# Search all tenants
tenants_resp = descope_client.mgmt.tenant.search_all(
    ids=["id1"],
    names=["name1"],
    custom_attributes={"k1": "v1"},
    self_provisioning_domains=["spd1"],
)
tenants = tenants_resp["tenants"]
for tenant in tenants:
# Do something
```
### Manage Users
You can create, update, patch, delete or load users, as well as set a new password, expire a password, and search according to filters:
```Python
# A user must have a login ID, other fields are optional.
# Roles should be set directly if no tenants exist, otherwise set
# on a per-tenant basis.
descope_client.mgmt.user.create(
login_id="desmond@descope.com",
email="desmond@descope.com",
display_name="Desmond Copeland",
user_tenants=[
AssociatedTenant("my-tenant-id", ["role-name1"]),
],
sso_app_ids=["appId1"],
)
# Alternatively, a user can be created and invited via an email message.
# Make sure to configure the invite URL in the Descope console prior to using this function,
# and that an email address is provided in the information.
descope_client.mgmt.user.invite(
login_id="desmond@descope.com",
email="desmond@descope.com",
display_name="Desmond Copeland",
user_tenants=[
AssociatedTenant("my-tenant-id", ["role-name1"]),
],
sso_app_ids=["appId1"],
# You can override the project's User Invitation Redirect URL with this parameter
invite_url="invite.me"
)
# Batch invite
descope_client.mgmt.user.invite_batch(
users=[
UserObj(
login_id="desmond@descope.com",
email="desmond@descope.com",
display_name="Desmond Copeland",
user_tenants=[
AssociatedTenant("my-tenant-id", ["role-name1"]),
],
custom_attributes={"ak": "av"},
sso_app_ids=["appId1"],
)
],
invite_url="invite.me",
send_mail=True,
send_sms=True,
)
# Update will override all fields as is. Use carefully.
descope_client.mgmt.user.update(
login_id="desmond@descope.com",
email="desmond@descope.com",
display_name="Desmond Copeland",
user_tenants=[
AssociatedTenant("my-tenant-id", ["role-name1", "role-name2"]),
],
sso_app_ids=["appId1"],
)
# Patch will override only the set fields in the user
descope_client.mgmt.user.patch(
login_id="desmond@descope.com",
email="desmond@descope.com",
display_name="Desmond Copeland",
)
# Update explicit data for a user rather than overriding all fields
descope_client.mgmt.user.update_login_id(
login_id="desmond@descope.com",
new_login_id="bane@descope.com"
)
descope_client.mgmt.user.update_phone(
login_id="desmond@descope.com",
phone="+18005551234",
verified=True,
)
descope_client.mgmt.user.remove_tenant_roles(
login_id="desmond@descope.com",
tenant_id="my-tenant-id",
role_names=["role-name1"],
)
# Set SSO applications association to a user.
user = descope_client.mgmt.user.set_sso_apps(
login_id="desmond@descope.com",
sso_app_ids=["appId1", "appId2"]
)
# Add SSO applications association to a user.
user = descope_client.mgmt.user.add_sso_apps(
login_id="desmond@descope.com",
sso_app_ids=["appId1", "appId2"]
)
# Remove SSO applications association from a user.
user = descope_client.mgmt.user.remove_sso_apps(
login_id="desmond@descope.com",
sso_app_ids=["appId1", "appId2"]
)
# User deletion cannot be undone. Use carefully.
descope_client.mgmt.user.delete("desmond@descope.com")
# Load specific user
user_resp = descope_client.mgmt.user.load("desmond@descope.com")
user = user_resp["user"]
# If needed, users can be loaded using the user ID as well
user_resp = descope_client.mgmt.user.load_by_user_id("<user-id>")
user = user_resp["user"]
# Logout user from all devices by login ID
descope_client.mgmt.user.logout_user("<login-id>")
# Logout user from all devices by user ID
descope_client.mgmt.user.logout_user_by_user_id("<user-id>")
# Load users by their user id
users_resp = descope_client.mgmt.user.load_users(user_ids=["<user-id>"])
users = users_resp["users"]
for user in users:
# Do something
# Search all users, optionally according to tenant and/or role filter.
# Results can be paginated using the limit and page parameters, as well as by time
# with from_created_time, to_created_time, from_modified_time, and to_modified_time.
users_resp = descope_client.mgmt.user.search_all(tenant_ids=["my-tenant-id"])
users = users_resp["users"]
for user in users:
# Do something
# Get users' authentication history
users_history_resp = descope_client.mgmt.user.history(["user-id-1", "user-id-2"])
for user_history in users_history_resp:
# Do something
```
#### Set or Expire User Password
You can set a new active password for a user that they can sign in with.
You can also set a temporary password that the user will be forced to change on the next login.
For a user that already has an active password, you can expire their current password, effectively requiring them to change it on the next login.
```Python
# Set a user's temporary password
descope_client.mgmt.user.set_temporary_password('<login-id>', '<some-password>')
# Set a user's active password
descope_client.mgmt.user.set_active_password('<login-id>', '<some-password>')
# Or alternatively, expire a user's password
descope_client.mgmt.user.expire_password('<login-id>')
```
### Manage Access Keys
You can create, update, delete or load access keys, as well as search according to filters:
```Python
# An access key must have a name and expiration, other fields are optional.
# Roles should be set directly if no tenants exist, otherwise set
# on a per-tenant basis.
# If user_id is supplied, then authorization would be ignored, and the access key would be bound to the user's authorization.
# If description is supplied, then the access key will hold a descriptive text.
# If permitted_ips is supplied, then the access key can only be used from that list of IP addresses or CIDR ranges
create_resp = descope_client.mgmt.access_key.create(
name="name",
expire_time=1677844931,
key_tenants=[
AssociatedTenant("my-tenant-id", ["role-name1"]),
],
description="this is my access key",
permitted_ips=['10.0.0.1', '192.168.1.0/24'],
custom_attributes={'attrName': 'attrValue'},
)
key = create_resp["key"]
cleartext = create_resp["cleartext"] # make sure to save the returned cleartext securely. It will not be returned again.
# Load a specific access key
access_key_resp = descope_client.mgmt.access_key.load("key-id")
access_key = access_key_resp["key"]
# Search all access keys, optionally according to a tenant filter
keys_resp = descope_client.mgmt.access_key.search_all_access_keys(
    tenant_ids=["my-tenant-id"],
    bound_user_id='buid',
    creating_user='cu',
    custom_attributes={'attrName': 'attrValue'},
)
keys = keys_resp["keys"]
for key in keys:
# Do something
# Update will override all fields as is. Use carefully.
descope_client.mgmt.access_key.update(
id="key-id",
name="new name",
custom_claims={"k1":"v1"},
permitted_ips=['10.0.0.1', '192.168.1.0/24'],
custom_attributes={'attrName': 'attrValue'},
)
# Access keys can be deactivated to prevent usage. This can be undone using "activate".
descope_client.mgmt.access_key.deactivate("key-id")
# Disabled access keys can be activated once again.
descope_client.mgmt.access_key.activate("key-id")
# Access key deletion cannot be undone. Use carefully.
descope_client.mgmt.access_key.delete("key-id")
```
Exchange the access key and provide optional access key login options:
```python
loc = AccessKeyLoginOptions(custom_claims={"k1": "v1"})
jwt_response = descope_client.exchange_access_key(
access_key="accessKey", login_options=loc
)
```
### Manage SSO Setting
You can manage SSO settings and map SSO group roles and user attributes.
```Python
# You can load all tenant SSO settings
sso_settings_res = descope_client.mgmt.sso.load_settings("tenant-id")
# import based on your configuration needs:
from descope import (
SSOOIDCSettings,
OIDCAttributeMapping,
SSOSAMLSettings,
AttributeMapping,
RoleMapping,
SSOSAMLSettingsByMetadata
)
# You can configure SSO SAML settings for a tenant manually.
settings = SSOSAMLSettings(
idp_url="https://dummy.com/saml",
idp_entity_id="entity1234",
idp_cert="my certificate",
attribute_mapping=AttributeMapping(
name="name",
given_name="givenName",
middle_name="middleName",
family_name="familyName",
picture="picture",
email="email",
phone_number="phoneNumber",
group="groups"
),
role_mappings=[RoleMapping(groups=["grp1"], role="rl1")],
)
descope_client.mgmt.sso.configure_saml_settings(
tenant_id, # Which tenant this configuration is for
settings, # The SAML settings
redirect_url="https://your.domain.com", # Global redirection after successful authentication
domains=["tenant-users.com"] # Users authentication with these domains will be logged in to this tenant
)
# You can configure SSO SAML settings for a tenant by fetching them from an IDP metadata URL.
settings = SSOSAMLSettingsByMetadata(
idp_metadata_url="https://dummy.com/metadata",
attribute_mapping=AttributeMapping(
name="myName",
given_name="givenName",
middle_name="middleName",
family_name="familyName",
picture="picture",
email="email",
phone_number="phoneNumber",
group="groups"
),
role_mappings=[RoleMapping(groups=["grp1"], role="rl1")],
)
descope_client.mgmt.sso.configure_saml_settings_by_metadata(
tenant_id, # Which tenant this configuration is for
settings, # The SAML settings
redirect_url="https://your.domain.com", # Global redirection after successful authentication
domains=["tenant-users.com"] # Users authentication with these domains will be logged in to this tenant
)
# You can configure SSO OIDC settings for a tenant manually.
settings = SSOOIDCSettings(
name="myProvider",
client_id="myId",
client_secret="secret",
redirect_url="https://your.domain.com",
auth_url="https://dummy.com/auth",
token_url="https://dummy.com/token",
user_data_url="https://dummy.com/userInfo",
scope=["openid", "profile", "email"],
attribute_mapping=OIDCAttributeMapping(
login_id="subject",
name="name",
given_name="givenName",
middle_name="middleName",
family_name="familyName",
email="email",
verified_email="verifiedEmail",
username="username",
phone_number="phoneNumber",
verified_phone="verifiedPhone",
picture="picture"
)
)
descope_client.mgmt.sso.configure_oidc_settings(
tenant_id, # Which tenant this configuration is for
settings, # The OIDC provider settings
domains=["tenant-users.com"] # Users authentication with these domains will be logged in to this tenant
)
# DEPRECATED (use load_settings(..) function instead)
# You can get SSO settings for a tenant
sso_settings_res = descope_client.mgmt.sso.get_settings("tenant-id")
# DEPRECATED (use configure_saml_settings(..) function instead)
# You can configure SSO settings manually by setting the required fields directly
descope_client.mgmt.sso.configure(
tenant_id, # Which tenant this configuration is for
idp_url="https://idp.com",
entity_id="my-idp-entity-id",
idp_cert="<your-cert-here>",
redirect_url="https://your.domain.com", # Global redirection after successful authentication
domains=["tenant-users.com"] # Users authentication with these domains will be logged in to this tenant
)
# DEPRECATED (use configure_saml_settings_by_metadata(..) function instead)
# Alternatively, configure using an SSO metadata URL
descope_client.mgmt.sso.configure_via_metadata(
tenant_id, # Which tenant this configuration is for
idp_metadata_url="https://idp.com/my-idp-metadata",
redirect_url="", # Redirect URL will have to be provided in every authentication call
domains=None # Remove the current domains configuration if a value was previously set
)
# DEPRECATED (use configure_saml_settings() or configure_saml_settings_by_metadata(..) functions instead)
# Map IDP groups to Descope roles, or map user attributes.
# This function overrides any previous mapping (even when empty). Use carefully.
descope_client.mgmt.sso.mapping(
tenant_id, # Which tenant this mapping is for
role_mappings = [RoleMapping(["IDP_ADMIN"], "Tenant Admin")],
attribute_mapping=AttributeMapping(name="IDP_NAME", phone_number="IDP_PHONE"),
)
```
Note: Certificates should have a similar structure to:
```
-----BEGIN CERTIFICATE-----
Certificate contents
-----END CERTIFICATE-----
```
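For example, a minimal sketch of reading such a certificate from a PEM file into the `idp_cert` parameter (the file name is an illustrative assumption):
```python
# Read the IDP certificate, including the BEGIN/END CERTIFICATE markers
with open("idp_cert.pem") as f:
    idp_cert = f.read()
```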
### Manage Permissions
You can create, update, delete or load permissions:
```Python
# You can optionally set a description for a permission.
descope_client.mgmt.permission.create(
name="My Permission",
description="Optional description to briefly explain what this permission allows."
)
# Update will override all fields as is. Use carefully.
descope_client.mgmt.permission.update(
name="My Permission",
new_name="My Updated Permission",
description="A revised description"
)
# Permission deletion cannot be undone. Use carefully.
descope_client.mgmt.permission.delete("My Updated Permission")
# Load all permissions
permissions_resp = descope_client.mgmt.permission.load_all()
permissions = permissions_resp["permissions"]
for permission in permissions:
# Do something
```
### Manage Roles
You can create, update, delete or load roles:
```Python
# You can optionally set a description and associated permissions for a role.
descope_client.mgmt.role.create(
name="My Role",
description="Optional description to briefly explain what this role allows.",
permission_names=["My Updated Permission"],
tenant_id="Optionally scope this role for this specific tenant. If left empty, the role will be available to all tenants.",
private=False # Optional, marks this role as private role
)
# Update will override all fields as is. Use carefully.
descope_client.mgmt.role.update(
name="My Role",
new_name="My Updated Role",
description="A revised description",
permission_names=["My Updated Permission", "Another Permission"],
tenant_id="The tenant ID to which this role is associated, leave empty, if role is a global one",
private=True # Optional, marks this role as private role
)
# Role deletion cannot be undone. Use carefully.
descope_client.mgmt.role.delete("My Updated Role", "<tenant_id>")
# Load all roles
roles_resp = descope_client.mgmt.role.load_all()
roles = roles_resp["roles"]
for role in roles:
# Do something
# Search roles
roles_resp = descope_client.mgmt.role.search(["t1", "t2"], ["r1", "r2"])
roles = roles_resp["roles"]
for role in roles:
# Do something
```
### Manage Flows and Theme
You can list your flows and also import and export flows and screens, or the project theme:
```Python
# List | text/markdown | Descope | info@descope.com | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: P... | [] | null | null | <4.0,>=3.8.1 | [] | [] | [] | [
"Flask>=2; extra == \"flask\"",
"email-validator<3,>=2; python_version >= \"3.8\"",
"liccheck<0.10.0,>=0.9.1",
"pyjwt[crypto]>=2.4.0",
"requests>=2.27.0"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/descope/python-sdk/issues",
"Documentation, https://docs.descope.com",
"Homepage, https://descope.com/",
"Repository, https://github.com/descope/python-sdk"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:45:55.498289 | descope-1.10.1.tar.gz | 91,250 | 7e/a5/6f8f808c60e62277f6043a7d157f14290a9c0a4a234e7cb59a68dc279911/descope-1.10.1.tar.gz | source | sdist | null | false | 2884df6bbb7a3acf3ffe69d82aace537 | 765fdb915b23913123cbc5c9b3528a3e6da44ec12e09f3fc01da66370b237785 | 7ea56f8f808c60e62277f6043a7d157f14290a9c0a4a234e7cb59a68dc279911 | null | [
"LICENSE"
] | 9,370 |
2.4 | aemetdata | 0.1.2 | Paquete Python para descargar datos de AEMET OpenData | 

# aemetdata
**aemetdata** is a Python package for downloading and processing meteorological data from AEMET OpenData in a simple and efficient way.
The information collected and used by this library is the property of the Agencia Estatal de Meteorología.
## Installation
```bash
pip install aemetdata
```
## Getting an API Key
You need an AEMET OpenData API key. You can obtain one at:
[Get an AEMET API key](https://opendata.aemet.es/centrodedescargas/altaUsuario?)
You can pass it as an argument or define the environment variable:
```bash
export AEMET_API_KEY="tu_api_key"
```
On Windows (PowerShell):
```powershell
setx AEMET_API_KEY "tu_api_key"
```
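Either way, a minimal sketch of reading the key back in your own code (plain standard library, independent of aemetdata):
```python
import os

# Returns None if the variable is not defined
api_key = os.environ.get("AEMET_API_KEY")
```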
## Client and main functions
- **AemetClient**: Main class for interacting with the AEMET OpenData API. It allows downloading data from any authorized endpoint.
```python
from aemetdata import AemetClient
client = AemetClient(api_key=API_KEY)
endpoint = "valores/climatologicos/diarios/datos/fechaini/2024-01-01T00:00:00UTC/fechafin/2024-01-02T00:00:00UTC/todasestaciones"
data = client.download_data(endpoint)
print(data[:500])
```
- **aemetdata.avisos**: Functions for downloading official weather warnings:
- `avisos_area_ultimo_eleaborado(codigo_area, api_key)`: Downloads the latest warning issued for a specific area.
```python
from aemetdata.avisos import avisos_area_ultimo_eleaborado
ruta = await avisos_area_ultimo_eleaborado("72", [API_KEY])
print(f"Archivo guardado: {ruta}")
```
- `avisos_por_fechas(fecha_ini, fecha_fin, api_key)`: Downloads all warnings between two dates.
```python
from aemetdata.avisos import avisos_por_fechas
rutas = await avisos_por_fechas('2026-01-01', '2026-01-04', [API_KEY])
print('Archivos guardados:')
for ruta in rutas:
print(ruta)
```
- **aemetdata.climatologia**: Functions for obtaining climatological data:
- `datos_mensuales(estaciones, año_ini, año_fin, api_key)`: Downloads monthly climatology data for one or more stations.
```python
from aemetdata.climatologia import datos_mensuales
resultado = await datos_mensuales(["3195","3427Y"], 2020, 2024, [API_KEY])
import pandas as pd
pd.DataFrame(resultado)
```
- `datos_diarios(estaciones, fecha_ini, fecha_fin, api_key)`: Downloads daily climatology data.
```python
from aemetdata.climatologia import datos_diarios
resultado = await datos_diarios(["3195","3427Y"], '2022-01-01', '2022-08-10', [API_KEY])
import pandas as pd
pd.DataFrame(resultado)
```
- `datos_normales(estaciones, api_key)`: Obtains normal climatological values (1991-2020 period).
```python
from aemetdata.climatologia import datos_normales
resultado_normales = await datos_normales(["3195","3427Y"], [API_KEY])
import pandas as pd
pd.DataFrame(resultado_normales)
```
- `datos_extremos(estaciones, api_key, parametro)`: Downloads extreme values (precipitation, temperature, wind).
```python
from aemetdata.climatologia import datos_extremos
resultado_extremos_T = await datos_extremos(["3195","3427Y"], [API_KEY], parametro="T")
import pandas as pd
pd.DataFrame(resultado_extremos_T)
```
- **aemetdata.imagenes**: Functions for downloading meteorological images (satellite, radar, etc.).
- **aemetdata.observaciones**: Functions for obtaining real-time weather observations.
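The warning and climatology examples above use a top-level `await` (e.g., in a notebook). From a regular script, the same coroutines can be driven with `asyncio.run`; a minimal sketch reusing the `datos_diarios` call shown above:
```python
import asyncio

from aemetdata.climatologia import datos_diarios

async def main():
    # API_KEY is assumed to be defined as in the previous examples
    return await datos_diarios(["3195", "3427Y"], '2022-01-01', '2022-08-10', [API_KEY])

resultado = asyncio.run(main())
```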
| text/markdown | null | Carlos Pacheco Perelló <cpacheco.perello@outlook.com> | null | null | MIT | aemet, opendata, meteorologia | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.24",
"requests>=2.31",
"python-dateutil>=2.8",
"pytest>=7.0; extra == \"test\"",
"pytest-asyncio>=0.21; extra == \"test\"",
"pytest-cov>=4.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/cpacheco-perello",
"Repository, https://github.com/cpacheco-perello/ametdata-py",
"Issues, https://github.com/cpacheco-perello/ametdata-py/issueses"
] | twine/6.2.0 CPython/3.11.0 | 2026-02-18T12:45:53.226529 | aemetdata-0.1.2.tar.gz | 13,227 | fc/2d/f9be0c9ede109432d980c2abf9f574fb36bc9b7846e176f8c6d4ee74339b/aemetdata-0.1.2.tar.gz | source | sdist | null | false | 0d31011a964f1c75e9759142f53b82cf | 8dbb0e85a9671bd4149a276d6addbb46095057391340e0168a4067a5fd7bdf3d | fc2df9be0c9ede109432d980c2abf9f574fb36bc9b7846e176f8c6d4ee74339b | null | [
"LICENSE"
] | 289 |
2.4 | athlib | 0.9.4 | Utilities for track and field athletics | # athlib
Athlib is a library of functions, data and schemas for Athletics (i.e. Track and Field).
We're building lots of sites for the sport of athletics. When we find something common and testable, we aim to place it here. This library should contain
- static reference data, provided it's not huge nor available elsewhere
- Python code implementing functions of general interest
- Javascript code implementing functions of general interest
It is NOT intended to contain
- web applications, view code or database code.
- competition management software
Things we hope to put in here:
- standard event codes and their English names
- UKA and other age group calculators
- WMA age grade calculations
- utilities for parsing and formatting performances as commonly input in athletics (see the sketch after this list)
- standardised scoring functions
- sample JSON files in line with our schemas
- schemas to validate
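As a flavour of what such a parsing utility might look like, here is a hypothetical sketch (the function below is illustrative only and is not the actual athlib API):
```python
def parse_performance(text: str) -> float:
    """Parse '1:57.5' or '57.5' style performances into seconds (hypothetical helper)."""
    total = 0.0
    for part in text.split(":"):
        total = total * 60 + float(part)
    return total

assert parse_performance("1:57.5") == 117.5  # 1 minute 57.5 seconds
```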
What follows below is intended to help people working on athlib.
# Python documentation
We require a modern Python (>=3.8.0). Some functions already have typing information, and it will be added to the others later.
## Installation
pip install athlib
## Python development
For Python developers, please install the extra development requirements with
```
pip install -r dev_requirements.txt
```
Run tests with...
```
python setup.py test
```
Check style with
```
pycodestyle --exclude=bin,lib,include,sampledata
```
You can also copy the file `pre-commit.sample` to `.git/hooks/pre-commit`, and the two checks above will run before any commit, blocking it if they report issues.
# Javascript documentation & development
See the [documentation](js/README.md) in the js folder.
# Documentation itself
The docs are written using reStructuredText, the Python standard. There is an environment
in `docs`.
cd docs
make html
| text/markdown | Andy Robinson and others | Andy Robinson and others <andy@opentrack.run> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | https://github.com/openath/athlib | null | >=2.7 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/openath/athlib",
"Issues, https://github.com/openath/athlib/issues"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-18T12:45:07.035831 | athlib-0.9.4-py2.py3-none-any.whl | 188,771 | 2e/e2/c9e7706af08abb594ea9b7d472e9a0234a2cb67ba5bde95c8bb754b7efd3/athlib-0.9.4-py2.py3-none-any.whl | py2.py3 | bdist_wheel | null | false | 2909efcb49511f4fe7d8317efa7e3b60 | aca6f94f1d93598f11edb392b95c5958b2ab8fbbd3c3b8dbeb13d0b243f20351 | 2ee2c9e7706af08abb594ea9b7d472e9a0234a2cb67ba5bde95c8bb754b7efd3 | null | [
"LICENSE"
] | 190 |
2.4 | tinui | 6.14.0 | Draw modern UI components with tkinter.Canvas | # TinUI

---
## Project Introduction
TinUI is an extension widget based on tkinter that can draw controls with a modern look.
For a detailed introduction, see [TinUI · lightweight modern-style tkinter controls](https://tinui.smart-space.com.cn/).
## Installation
Using pip:
```cmd
pip install tinui
```
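As a quick orientation, here is a minimal sketch of embedding TinUI in a plain tkinter window. The `BasicTinUI` class and the `add_button` method are assumptions made for illustration; consult the official documentation linked above for the actual API:
```python
import tkinter
from tinui import BasicTinUI  # assumed entry-point class; see the official docs

root = tkinter.Tk()
ui = BasicTinUI(root)  # assumed: a tkinter.Canvas subclass that draws the modern-style controls
ui.pack(fill='both', expand=True)
# 'add_button' is an assumed method name for drawing a button on the canvas
ui.add_button((50, 30), text='Hello TinUI', command=lambda: print('clicked'))
root.mainloop()
```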
---
A few previews:
<img src="https://github.com/Smart-Space/TinUI/raw/main/image/themes1.png" width="400" />
<img src="https://github.com/Smart-Space/TinUI/raw/main/image/themes2.png" width="400" />
<img src="https://github.com/Smart-Space/TinUI/raw/main/image/themes3.png" width="400" />
<img src="https://github.com/Smart-Space/TinUI/raw/main/image/themes4.png" width="400" />
<img src="https://github.com/Smart-Space/TinUI/raw/main/image/themes5.png" width="400" />
| text/markdown | Smart-Space | smart-space@qq.com | null | null | GPL License | null | [
"Intended Audience :: Developers",
"Natural Language :: Chinese (Simplified)",
"Programming Language :: Python",
"Topic :: Software Development :: Libraries"
] | [] | https://github.com/Smart-Space/TinUI | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-18T12:44:18.222916 | tinui-6.14.0.tar.gz | 520,576 | e2/5e/103419211c60d93e6bf0e2dbc24fdd9c25dbc4d24d380c4db5dd664aa3a7/tinui-6.14.0.tar.gz | source | sdist | null | false | 6c3db7879155ebccbb8e7e733bb83217 | b2aab1116722c16350780b6b3c73638fbaff87cb4cf8faad60fddaf5dead2a26 | e25e103419211c60d93e6bf0e2dbc24fdd9c25dbc4d24d380c4db5dd664aa3a7 | null | [
"LICENSE.txt"
] | 268 |
2.4 | cloudos-cli | 2.80.0 | Python package for interacting with CloudOS | # cloudos-cli
[](https://github.com/lifebit-ai/cloudos-cli/actions/workflows/ci.yml)
Python package for interacting with CloudOS
---
## Table of Contents
- [cloudos-cli](#cloudos-cli)
- [Table of Contents](#table-of-contents)
- [Requirements](#requirements)
- [Installation](#installation)
- [From PyPI](#from-pypi)
- [Docker Image](#docker-image)
- [From Github](#from-github)
- [Usage](#usage)
- [Configuration](#configuration)
- [Configure Default Profile](#configure-default-profile)
- [Configure Named Profile](#configure-named-profile)
- [Change the Default Profile](#change-the-default-profile)
- [List Profiles](#list-profiles)
- [Remove Profile](#remove-profile)
- [Commands](#commands)
- [Configure](#configure)
- [Project](#project)
- [List Projects](#list-projects)
- [Create Projects](#create-projects)
- [Queue](#queue)
- [List Queues](#list-queues)
- [Workflow](#workflow)
- [List All Available Workflows](#list-all-available-workflows)
- [Import a Nextflow Workflow](#import-a-nextflow-workflow)
- [Nextflow Jobs](#nextflow-jobs)
- [Submit a Job](#submit-a-job)
- [Check Job Status](#check-job-status)
- [List Jobs](#list-jobs)
- [Get Job Results](#get-job-results)
- [Clone or Resume Job](#clone-or-resume-job)
- [Abort Jobs](#abort-jobs)
- [Basic Usage](#basic-usage)
- [Force Abort](#force-abort)
- [Additional Options](#additional-options)
- [Get Job Details](#get-job-details)
- [Get Job Workdir](#get-job-workdir)
- [Get Job Logs](#get-job-logs)
- [Get Job Costs](#get-job-costs)
- [Get Job Related Analyses](#get-job-related-analyses)
- [Delete Job Results](#delete-job-results)
- [Archive Jobs](#archive-jobs)
- [Unarchive Jobs](#unarchive-jobs)
- [Bash Jobs](#bash-jobs)
- [Send Array Job](#send-array-job)
- [Submit a Bash Array Job](#submit-a-bash-array-job)
- [Options](#options)
- [Array File](#array-file)
- [Separator](#separator)
- [List Columns](#list-columns)
- [Array File Project](#array-file-project)
- [Disable Column Check](#disable-column-check)
- [Array Parameter](#array-parameter)
- [Custom Script Path](#custom-script-path)
- [Custom Script Project](#custom-script-project)
- [Use multiple projects for files in `--parameter` option](#use-multiple-projects-for-files-in---parameter-option)
- [Datasets](#datasets)
- [List Files](#list-files)
- [Move Files](#move-files)
- [Rename Files](#rename-files)
- [Copy Files](#copy-files)
- [Link S3 Folders to Interactive Analysis](#link-s3-folders-to-interactive-analysis)
- [Create Folder](#create-folder)
- [Remove Files or Folders](#remove-files-or-folders)
- [Link](#link)
- [Link Folders to Interactive Analysis](#link-folders-to-interactive-analysis)
- [Procurement](#procurement)
- [List Procurement Images](#list-procurement-images)
- [Set Procurement Organization Image](#set-procurement-organization-image)
- [Reset Procurement Organization Image](#reset-procurement-organization-image)
- [Cromwell and WDL Pipeline Support](#cromwell-and-wdl-pipeline-support)
- [Manage Cromwell Server](#manage-cromwell-server)
- [Run WDL Workflows](#run-wdl-workflows)
- [Python API Usage](#python-api-usage)
- [Running WDL pipelines using your own scripts](#running-wdl-pipelines-using-your-own-scripts)
- [Unit Testing](#unit-testing)
---
## Requirements
CloudOS CLI requires Python 3.9 or higher and several key dependencies for API communication, data processing, and user interface functionality.
```
click>=8.0.1
pandas>=1.3.4
numpy>=1.26.4
requests>=2.26.0
rich_click>=1.8.2
```
---
## Installation
CloudOS CLI can be installed in multiple ways depending on your needs and environment. Choose the method that best fits your workflow.
### From PyPI
The package is available from [PyPI](https://pypi.org/project/cloudos-cli/):
```bash
pip install cloudos-cli
```
To update CloudOS CLI to the latest version using pip, you can run:
```bash
pip install --upgrade cloudos-cli
```
To check your current version:
```bash
cloudos --version
```
### Docker Image
It is recommended to install it as a docker image using the `Dockerfile` and the `environment.yml` files provided.
To run the existing docker image at `quay.io`:
```bash
docker run --rm -it quay.io/lifebitaiorg/cloudos-cli:latest
```
### From Github
You will need Python >= 3.9 and pip installed.
Clone the repo and install it using pip:
```bash
git clone https://github.com/lifebit-ai/cloudos-cli
cd cloudos-cli
pip install -r requirements.txt
pip install .
```
> NOTE: To be able to call the `cloudos` executable, ensure that the local clone of the `cloudos-cli` folder is included in the `PATH` variable, using for example the command `export PATH="/absolute/path/to/cloudos-cli:$PATH"`.
---
## Usage
CloudOS CLI can be used both as a command-line interface tool for interactive work and as a Python package for scripting and automation.
To get general information about the tool:
```bash
cloudos --help
```
```console
Usage: cloudos [OPTIONS] COMMAND [ARGS]...
CloudOS python package: a package for interacting with CloudOS.
╭─ Options ────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --debug Show detailed error information and tracebacks │
│ --version Show the version and exit. │
│ --help Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ───────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ bash CloudOS bash functionality. │
│ configure CloudOS configuration. │
│ cromwell Cromwell server functionality: check status, start and stop. │
│ datasets CloudOS datasets functionality. │
│ job CloudOS job functionality: run, check and abort jobs in CloudOS. │
│ procurement CloudOS procurement functionality. │
│ project CloudOS project functionality: list and create projects in CloudOS. │
│ queue CloudOS job queue functionality. │
│ workflow CloudOS workflow functionality: list and import workflows. │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```
This will tell you the implemented commands. Each implemented command has its own subcommands with its own `--help`:
```bash
cloudos job list --help
```
```console
Usage: cloudos job list [OPTIONS]
Collect workspace jobs from a CloudOS workspace in CSV or JSON format.
╭─ Options ────────────────────────────────────────────────────────────────────────────────────────────────╮
│ * --apikey -k TEXT Your CloudOS API key [required] │
│ * --cloudos-url -c TEXT The CloudOS url you are trying to access to. │
│ Default=https://cloudos.lifebit.ai. │
│ [required] │
│ * --workspace-id TEXT The specific CloudOS workspace id. [required] │
│ --output-basename TEXT Output file base name to save jobs list. Default=joblist │
│ --output-format [csv|json] The desired file format (file extension) for the output. │
│ For json option --all-fields will be automatically set to │
│ True. Default=csv. │
│ --all-fields Whether to collect all available fields from jobs or just │
│ the preconfigured selected fields. Only applicable when │
│ --output-format=csv. Automatically enabled for json │
│ output. │
│ --last-n-jobs TEXT The number of last workspace jobs to retrieve. You can │
│ use 'all' to retrieve all workspace jobs. Default=30. │
│ --page INTEGER Response page to retrieve. If --last-n-jobs is set, then │
│ --page value corresponds to the first page to retrieve. │
│ Default=1. │
│ --archived When this flag is used, only archived jobs list is │
│ collected. │
│ --filter-status TEXT Filter jobs by status (e.g., completed, running, failed, │
│ aborted). │
│ --filter-job-name TEXT Filter jobs by job name ( case insensitive ). │
│ --filter-project TEXT Filter jobs by project name. │
│ --filter-workflow TEXT Filter jobs by workflow/pipeline name. │
│ --last When workflows are duplicated, use the latest imported │
│ workflow (by date). │
│ --filter-job-id TEXT Filter jobs by specific job ID. │
│ --filter-only-mine Filter to show only jobs belonging to the current user. │
│ --filter-queue TEXT Filter jobs by queue name. Only applies to jobs running │
│ in batch environment. Non-batch jobs are preserved in │
│ results. │
│ --filter-owner TEXT Filter jobs by owner username. │
│ --verbose Whether to print information messages or not. │
│ --disable-ssl-verification Disable SSL certificate verification. Please, remember │
│ that this option is not generally recommended for │
│ security reasons. │
│ --ssl-cert TEXT Path to your SSL certificate file. │
│ --debug Show detailed error information and tracebacks │
│ --profile TEXT Profile to use from the config file │
│ --help Show this message and exit. │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```
Similarly, every command and subcommand accepts its own `--debug` flag, which prints the full traceback for detailed error debugging. Without this flag, errors are presented in a short, descriptive format.
---
## Configuration
CloudOS CLI uses a profile-based configuration system to store your credentials and settings securely. This eliminates the need to provide authentication details with every command and allows you to work with multiple CloudOS environments.
Configuration is saved in the `$HOME` folder on all operating systems. A new folder named `.cloudos` is created there, containing the files `credentials` and `config`. The structure looks like:
```console
$HOME
└── .cloudos/
├── credentials # Stores API keys
└── config # Stores all other parameters
```
### Configure Default Profile
To facilitate the reuse of required parameters, you can create profiles.
To generate a profile called `default`, use the following command:
```bash
cloudos configure
```
This will prompt you for API key, platform URL, project name, platform executor, repository provider, workflow name (if any), and session ID for interactive analysis. This becomes the default profile if no other profile is explicitly set. The default profile allows running all subcommands without adding the `--profile` option.
### Configure Named Profile
To generate a named profile, use the following command:
```bash
cloudos configure --profile {profile-name}
```
The same prompts will appear. If a profile with the same name already exists, the current parameters will appear in square brackets and can be overwritten or left unchanged by pressing Enter/Return.
> [!NOTE]
> When at least one profile is already defined, an additional question will appear asking whether to make the current profile the default
### Change the Default Profile
Change the default profile with:
```bash
cloudos configure --profile {other-profile} --make-default
```
### List Profiles
View all configured profiles and identify the default:
```bash
cloudos configure list-profiles
```
The response will look like:
```console
Available profiles:
- default (default)
- second-profile
- third-profile
```
### Remove Profile
Remove any profile with:
```bash
cloudos configure remove-profile --profile second-profile
```
---
## Commands
### Configure
See [Configuration](#configuration) section above for detailed information on setting up profiles and managing your CloudOS CLI configuration.
### Project
Projects in CloudOS provide logical separation of datasets, workflows, and results, making it easier to manage complex research initiatives. You can list all available projects or create new ones using the CLI.
#### List Projects
You can get a summary of all available workspace projects in two different formats:
- **CSV**: A table with a minimum predefined set of columns by default, or all available columns using the `--all-fields` parameter
- **JSON**: All available information from projects in JSON format
To get a CSV table with all available projects for a given workspace:
```bash
cloudos project list --profile my_profile --output-format csv --all-fields
```
The expected output is something similar to:
```console
Executing list...
Project list collected with a total of 320 projects.
Project list saved to project_list.csv
```
To get the same information in JSON format:
```bash
cloudos project list --profile my_profile --output-format json
```
#### Create Projects
You can create a new project in your CloudOS workspace using the `project create` command. This command requires the name of the new project and will return the project ID upon successful creation.
```bash
cloudos project create --profile my_profile --new-project "My New Project"
```
The expected output is something similar to:
```console
Project "My New Project" created successfully with ID: 64f1a23b8e4c9d001234abcd
```
### Queue
Job queues are required for running jobs using AWS batch executor. The available job queues in your CloudOS workspace are listed in the "Compute Resources" section in "Settings". You can get a summary of all available workspace job queues in two formats:
- **CSV**: A table with a selection of the available job queue information. You can get all information using the `--all-fields` flag
- **JSON**: All available information from job queues in JSON format
#### List Queues
This command allows you to view available computational queues and their configurations. Example command for getting all available job queues in JSON format:
```bash
cloudos queue list --profile my_profile --output-format json --output-basename "available_queues"
```
```console
Executing list...
Job queue list collected with a total of 5 queues.
Job queue list saved to available_queues.json
```
This command will output the list of available job queues in JSON format and save it to a file named `available_queues.json`. You can use `--output-format csv` for a CSV file, or omit `--output-basename` to print to the console.
> NOTE: The queue name that is visible in CloudOS and must be used with the `--job-queue` parameter is the one in the `label` field.
**Job queues for platform workflows**
Platform workflows (those provided by CloudOS in your workspace as modules) run on separate and specific AWS batch queues. Therefore, CloudOS will automatically assign the valid queue and you should not specify any queue using the `--job-queue` parameter. Any attempt to use this parameter will be ignored. Examples of such platform workflows are "System Tools" and "Data Factory" workflows.
### Workflow
#### List All Available Workflows
You can get a summary of all available workspace workflows in two different formats:
- **CSV**: A table with a minimum predefined set of columns by default, or all available columns using the `--all-fields` parameter
- **JSON**: All available information from workflows in JSON format
To get a CSV table with all available workflows for a given workspace:
```bash
cloudos workflow list --profile my_profile --output-format csv --all-fields
```
The expected output is something similar to:
```console
Executing list...
Workflow list collected with a total of 609 workflows.
Workflow list saved to workflow_list.csv
```
To get the same information in JSON format:
```bash
cloudos workflow list --profile my_profile --output-format json
```
```console
Executing list...
Workflow list collected with a total of 609 workflows.
Workflow list saved to workflow_list.json
```
The collected workflows are those that can be found in the "WORKSPACE TOOLS" section in CloudOS.
#### Import a Nextflow Workflow
You can import new workflows to your CloudOS workspaces. The requirements are:
- The workflow must be a Nextflow pipeline
- The workflow repository must be located at GitHub, GitLab or BitBucket Server (specified by the `--repository-platform` option. Available options: `github`, `gitlab` and `bitbucketServer`)
- If your repository is private, you must have access to the repository and have linked your GitHub, Gitlab or Bitbucket server accounts to CloudOS
**Usage of the workflow import command**
To import GitHub workflows to CloudOS:
```bash
# Example workflow to import: https://github.com/lifebit-ai/DeepVariant
cloudos workflow import --profile my_profile --workflow-url "https://github.com/lifebit-ai/DeepVariant" --workflow-name "new_name_for_the_github_workflow" --repository-platform github
```
The expected output will be:
```console
CloudOS workflow functionality: list and import workflows.
Executing workflow import...
Only Nextflow workflows are currently supported.
Workflow test_import_github_3 was imported successfully with the following ID: 6616a8cb454b09bbb3d9dc20
```
Optionally, you can add a link to your workflow documentation by providing the URL using the `--workflow-docs-link` parameter:
```bash
cloudos workflow import --profile my_profile --workflow-url "https://github.com/lifebit-ai/DeepVariant" --workflow-name "new_name_for_the_github_workflow" --workflow-docs-link "https://github.com/lifebit-ai/DeepVariant/blob/master/README.md" --repository-platform github
```
> NOTE: Importing workflows using cloudos-cli is not yet available in all CloudOS workspaces. If you try to use this feature in a non-prepared workspace you will get the following error message: `It seems your API key is not authorised. Please check if your workspace has support for importing workflows using cloudos-cli`.
### Nextflow Jobs
The job commands allow you to submit, monitor, and manage computational workflows on CloudOS. This includes both Nextflow pipelines and bash scripts, with support for various execution platforms.
#### Submit a Job
You can submit Nextflow workflows to CloudOS using either configuration files or command-line parameters. Jobs can be configured with specific compute resources, execution platforms, parameters, etc.
First, configure your local environment to ease parameter input. We will try to submit a small toy example already available:
```bash
cloudos job run --profile my_profile --workflow-name rnatoy --job-config cloudos_cli/examples/rnatoy.config --resumable
```
As you can see, a file with the job parameters is used to configure the job. This file could be a regular `nextflow.config` file or any file with the following structure:
```
params {
reads = s3://lifebit-featured-datasets/pipelines/rnatoy-data
annot = s3://lifebit-featured-datasets/pipelines/rnatoy-data/ggal_1_48850000_49020000.bed.gff
}
```
In addition, parameters can also be specified using the command-line `-p` or `--parameter`. For instance:
```bash
cloudos job run \
--profile my_profile \
--workflow-name rnatoy \
--parameter reads=s3://lifebit-featured-datasets/pipelines/rnatoy-data \
--parameter genome=s3://lifebit-featured-datasets/pipelines/rnatoy-data/ggal_1_48850000_49020000.Ggal71.500bpflank.fa \
--parameter annot=s3://lifebit-featured-datasets/pipelines/rnatoy-data/ggal_1_48850000_49020000.bed.gff \
--resumable
```
**Params file**
You can pass a Nextflow-style params file using `--params-file` (only JSON or YAML):
```bash
cloudos job run \
--profile my_profile \
--workflow-name rnatoy \
--params-file Data/params.json \
--resumable
```
Example JSON params file:
```json
{
"reads": "s3://lifebit-featured-datasets/pipelines/rnatoy-data",
"genome": "s3://lifebit-featured-datasets/pipelines/rnatoy-data/ggal_1_48850000_49020000.Ggal71.500bpflank.fa",
"annot": "s3://lifebit-featured-datasets/pipelines/rnatoy-data/ggal_1_48850000_49020000.bed.gff"
}
```
Example YAML params file:
```yaml
reads:
s3://lifebit-featured-datasets/pipelines/rnatoy-data
genome:
s3://lifebit-featured-datasets/pipelines/rnatoy-data/ggal_1_48850000_49020000.Ggal71.500bpflank.fa
annot:
s3://lifebit-featured-datasets/pipelines/rnatoy-data/ggal_1_48850000_49020000.bed.gff
```
> NOTE: options `--job-config`, `--parameter` and `--params-file` are completely compatible and complementary, so you can use a `--job-config` or `--params-file` and add additional parameters using `--parameter` in the same call.
> NOTE: when using `--params-file`, the value must be an S3 URI or a File Explorer relative path (e.g., `Data/file.json`). Local file paths are not supported.
If everything went well, you should see something like:
```console
Executing run...
Job successfully launched to CloudOS, please check the following link: https://cloudos.lifebit.ai/app/advanced-analytics/analyses/62c83a1191fe06013b7ef355
Your assigned job id is: 62c83a1191fe06013b7ef355
Your current job status is: initializing
To further check your job status you can either go to https://cloudos.lifebit.ai/app/advanced-analytics/analyses/62c83a1191fe06013b7ef355 or use the following command:
cloudos job status \
--apikey $MY_API_KEY \
--cloudos-url https://cloudos.lifebit.ai \
--job-id 62c83a1191fe06013b7ef355
```
As you can see, the current status is `initializing`. This will change while the job progresses. To check the status, just apply the suggested command.
Another option is to set the `--wait-completion` parameter, which runs the same job run command but waits for its completion:
```bash
cloudos job run --profile my_profile --workflow-name rnatoy --job-config cloudos_cli/examples/rnatoy.config --resumable --wait-completion
```
When setting this parameter, you can also set `--request-interval` to a bigger number (default is 30s) if the job is quite large. This ensures that status requests are not sent too close to each other and flagged as spam by the API.
If the job takes less than `--wait-time` (3600 seconds by default), the previous command should have an output similar to:
```console
Executing run...
Job successfully launched to CloudOS, please check the following link: https://cloudos.lifebit.ai/app/advanced-analytics/analyses/62c83a6191fe06013b7ef363
Your assigned job id is: 62c83a6191fe06013b7ef363
Please, wait until job completion or max wait time of 3600 seconds is reached.
Your current job status is: initializing.
Your current job status is: running.
Your job took 420 seconds to complete successfully.
```
When there are duplicate `--workflow-name` in the platform, you can add the `--last` flag to use the latest import of that pipeline in the workspace, based on the date.
_For example, the pipeline `lifebit-process` was imported on May 23 2025 and again on May 30 2025; with the `--last` flag, it will use the import of May 30, 2025._
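For example, to submit a job against the most recently imported copy of a duplicated workflow (all flags as documented above):
```bash
cloudos job run --profile my_profile --workflow-name lifebit-process --last --resumable
```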
**AWS Executor Support**
CloudOS supports [AWS batch](https://www.nextflow.io/docs/latest/executor.html?highlight=executors#aws-batch) executor by default.
You can specify the AWS batch queue to use from the ones available in your workspace (see [here](#list-queues)) by specifying its name with the `--job-queue` parameter. If none is specified, the most recent suitable queue in your workspace is selected by default.
Example command:
```bash
cloudos job run --profile my_profile --workflow-name rnatoy --job-config cloudos_cli/examples/rnatoy.config --resumable
```
> Note: From cloudos-cli 2.7.0, the default executor is AWS batch. The previous Apache [ignite](https://www.nextflow.io/docs/latest/ignite.html#apache-ignite) executor is being progressively removed from CloudOS, so it will most likely not be available in your CloudOS instance. During this transition, cloudos-cli still supports ignite via the `--ignite` flag on the `cloudos job run` command. Please note that if you use the `--ignite` flag in a CloudOS instance without ignite support, the command will fail.
**Azure Execution Platform Support**
CloudOS can also be configured to use Microsoft Azure compute platforms. If your CloudOS is configured to use Azure, you will need to take into consideration the following:
- When sending jobs to CloudOS using `cloudos job run` command, please use the option `--execution-platform azure`
- Due to the lack of AWS batch queues in Azure, the `cloudos queue list` command is not available
Other than that, `cloudos-cli` will work very similarly. For instance, this is a typical send job command:
```bash
cloudos job run --profile my_profile --workflow-name rnatoy --job-config cloudos_cli/examples/rnatoy.config --resumable --execution-platform azure
```
**HPC Execution Support**
CloudOS is also prepared to use an HPC compute infrastructure. For such cases, you will need to take into account the following for your job submissions using `cloudos job run` command:
- Use the following parameter: `--execution-platform hpc`
- Indicate the HPC ID using: `--hpc-id XXXX`
Example command:
```bash
cloudos job run --profile my_profile --workflow-name rnatoy --job-config cloudos_cli/examples/rnatoy.config --execution-platform hpc --hpc-id $YOUR_HPC_ID
```
Please note that HPC execution does not support the following parameters and all of them will be ignored:
- `--job-queue`
- `--resumable` | `--do-not-save-logs`
- `--instance-type` | `--instance-disk` | `--cost-limit`
- `--storage-mode` | `--lustre-size`
- `--wdl-mainfile` | `--wdl-importsfile` | `--cromwell-token`
#### Check Job Status
To check the status of a submitted job, use the following command:
```bash
cloudos job status --profile my_profile --job-id 62c83a1191fe06013b7ef355
```
The expected output should be something similar to:
```console
Executing status...
Your current job status is: completed
To further check your job status you can either go to https://cloudos.lifebit.ai/app/advanced-analytics/analyses/62c83a1191fe06013b7ef355 or repeat the command you just used.
```
#### List Jobs
View your workspace jobs in a clean, formatted table directly in your terminal. The table automatically adapts to your terminal width, showing different column sets for optimal viewing. By default, jobs are displayed as a rich table with job IDs and colored visual status indicators.
**Output Formats**
CloudOS CLI provides three output formats for job listings:
- **Table (default)**: Rich formatted table displayed in the terminal with pagination information
- **CSV**: Tabular format with predefined or all available columns using `--all-fields`
- **JSON**: Complete job information in JSON format (`--all-fields` is always enabled)
**Default Behavior**
By default, the command displays the 10 most recent jobs in a formatted table:
```bash
cloudos job list --profile my_profile
```
The output shows a rich table with job information and pagination details:
```console
Executing list...
Job List
┏━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┓
┃ Status ┃ Name ┃ Project ┃ Owner ┃ Pipeline ┃ ID ┃ Submit time ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━┩
│ ✓ │ analysis_run │ test-proj │ John │ rnatoy │ 692ee71c40e98ed6ed529e43│ 2025-12-02 │
│ │ │ │ Doe │ │ │ 15:30:45 │
│ ◐ │ test_job │ research │ Jane │ VEP │ 692ee81d50f98ed7fe639f54│ 2025-12-02 │
│ │ │ │ Smith │ │ │ 14:20:30 │
└────────┴──────────────┴─────────────┴──────────┴──────────────┴─────────────────────────┴──────────────┘
Showing 10 of 45 total jobs | Page 1 of 5
```
**Status Indicators**
Jobs are displayed with colored visual status indicators:
- **Green ✓** Completed
- **Grey ◐** Running
- **Red ✗** Failed
- **Orange ■** Aborted
- **Grey ○** Initialising
**Clickable Job IDs**
Job IDs in the table are clickable hyperlinks (when supported by your terminal) that open the job details page in CloudOS.
**Job Listing Control Options**
CloudOS CLI provides two ways to control the number of jobs retrieved:
1. **Pagination Control (Default)**: Use `--page` and `--page-size` for precise pagination
2. **Last N Jobs**: Use `--last-n-jobs` for retrieving the most recent jobs
> [!IMPORTANT]
> **These options are mutually exclusive**. When `--last-n-jobs` is specified, it takes precedence and `--page`/`--page-size` parameters are ignored. A warning message will be displayed if both are provided.
**Pagination Examples**
Retrieve specific pages using `--page` and `--page-size`:
```bash
# Get page 2 with 15 jobs per page
cloudos job list --profile my_profile --page 2 --page-size 15
# Get page 5 with maximum 100 jobs per page (maximum allowed)
cloudos job list --profile my_profile --page 5 --page-size 100
```
> [!NOTE]
> `--page-size` has a maximum limit of 100 jobs per page. Attempting to use a larger value will result in an error.
**Last N Jobs Examples**
Use `--last-n-jobs` to get the most recent jobs:
```bash
# Get the last 50 jobs
cloudos job list --profile my_profile --last-n-jobs 50
# Get all workspace jobs
cloudos job list --profile my_profile --last-n-jobs all
```
**Customizing Table Columns**
You can customize which columns are displayed in the table using the `--table-columns` option:
```bash
# Show only status, name, and cost columns
cloudos job list --profile my_profile --table-columns status,name,cost
# Show a minimal view
cloudos job list --profile my_profile --table-columns status,name,id,submit_time
```
Available columns: `status`, `name`, `project`, `owner`, `pipeline`, `id`, `submit_time`, `end_time`, `run_time`, `commit`, `cost`, `resources`, `storage_type`
> [!NOTE]
> The `--table-columns` option only applies when using the default table output format (stdout).
**File Output Formats**
To save job lists to files instead of displaying them in the terminal:
```bash
# Save as CSV with default columns
cloudos job list --profile my_profile --output-format csv
# Save as CSV with all available fields
cloudos job list --profile my_profile --output-format csv --all-fields
# Save as JSON with complete job data
cloudos job list --profile my_profile --output-format json
```
The expected output for file formats:
```console
Executing list...
Job list collected with a total of 10 jobs.
Job list saved to joblist.csv
```
**Filtering Jobs**
You can find specific jobs within your workspace using the filtering options. Filters can be combined to narrow down results and work with all output formats.
**Available filters:**
- **`--filter-status`**: Filter jobs by execution status (e.g., completed, running, failed, aborted, initialising)
- **`--filter-job-name`**: Filter jobs by job name (case insensitive partial matching)
- **`--filter-project`**: Filter jobs by project name (exact match required)
- **`--filter-workflow`**: Filter jobs by workflow/pipeline name (exact match required)
- **`--filter-job-id`**: Filter jobs by specific job ID (exact match required)
- **`--filter-only-mine`**: Show only jobs belonging to the current user
- **`--filter-owner`**: Show only jobs for the specified owner (exact match required, e.g., "John Doe")
- **`--filter-queue`**: Filter jobs by queue name (only applies to batch jobs)
**Filtering Examples**
Using pagination approach (default):
```bash
# Get completed jobs from page 1 (default 10 jobs)
cloudos job list --profile my_profile --filter-status completed
# Get completed jobs from page 2 with 20 jobs per page
cloudos job list --profile my_profile --page 2 --page-size 20 --filter-status completed
```
Using last-n-jobs approach:
```bash
# Get all completed jobs from the last 50 jobs
cloudos job list --profile my_profile --last-n-jobs 50 --filter-status completed
```
Find jobs with "analysis" in the name from a specific project:
```bash
# Using pagination (gets first 10 matching jobs)
cloudos job list --profile my_profile --filter-job-name analysis --filter-project "My Research Project"
# Using last-n-jobs
cloudos job list --profile my_profile --last-n-jobs 100 --filter-job-name analysis --filter-project "My Research Project"
```
Get all jobs using a specific workflow and queue:
```bash
# Using pagination with larger page size
cloudos job list --profile my_profile --page-size 50 --filter-workflow rnatoy --filter-queue high-priority-queue
# Using last-n-jobs to search all jobs
cloudos job list --profile my_profile --last-n-jobs all --filter-workflow rnatoy --filter-queue high-priority-queue
```
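The user-scoped filters combine with the rest in the same way:
```bash
# Show only your own failed jobs
cloudos job list --profile my_profile --filter-only-mine --filter-status failed
# Show jobs owned by a specific user (exact match)
cloudos job list --profile my_profile --filter-owner "John Doe"
```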
> [!NOTE]
> - Project and workflow names must match exactly (case sensitive)
> - Job name filtering is case insensitive and supports partial matches
> - The `--last` flag can be used with `--filter-workflow` when multiple workflows have the same name
> - When filters are applied, pagination information reflects the filtered results
#### Get Job Results
The following command allows you to get the path where CloudOS stores the output files for a job. It can only be used on your own jobs, and only for jobs with "completed" status.
Example:
```bash
cloudos job results --profile my_profile --job-id "12345678910"
```
```console
Executing results...
results: s3://path/to/location/of/results/results/
```
You can also link all result directories to an interactive analysis. | text/markdown | David Piñeyro | david.pineyro@lifebit.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: POSIX :: Linux",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)"
] | [] | https://github.com/lifebit-ai/cloudos-cli | null | >=3.9 | [] | [] | [] | [
"click>=8.0.1",
"rich-click>=1.8.2",
"pandas>=1.3.4",
"numpy>=1.26.4",
"requests>=2.26.0",
"pytest; extra == \"test\"",
"mock; extra == \"test\"",
"responses; extra == \"test\"",
"requests_mock; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T12:44:11.849838 | cloudos_cli-2.80.0.tar.gz | 215,018 | 2a/97/c18d984da220a7cff8256705fcdc97f3b787b16a17a96adfba373d590ae8/cloudos_cli-2.80.0.tar.gz | source | sdist | null | false | 300293b88f3cbf563628cc8b6ddfbed9 | 891f71d207e76d15ce3087c0cbdbcdfde28ba76e84ff674cd2559c71612b5b1b | 2a97c18d984da220a7cff8256705fcdc97f3b787b16a17a96adfba373d590ae8 | null | [
"LICENSE"
] | 286 |
2.4 | json-to-multicsv | 0.1.2 | Convert hierarchical JSON data into multiple related CSV files | # json-to-multicsv
Split a JSON file with hierarchical data into multiple CSV files.
A Python rewrite of [jsnell/json-to-multicsv](https://github.com/jsnell/json-to-multicsv). See Juho Snellman's [2016 blog post](https://www.snellman.net/blog/archive/2016-01-12-json-to-multicsv/) for motivation and design.
## Installation
```
pip install json-to-multicsv
```
## Usage
```
$ json-to-multicsv --help
Usage: json-to-multicsv [OPTIONS]
Split a JSON file with hierarchical data to multiple CSV files.
Options:
--file FILENAME JSON input file (default: stdin)
--path TEXT pathspec:handler[:name[:key_name]]
--table TEXT Top-level table name
--no-prefix Use only the last component of the table name for output
filenames.
--help Show this message and exit.
```
## Examples
### Nested objects and arrays
Given this input:
```json
{
"item 1": {
"title": "The First Item",
"genres": ["sci-fi", "adventure"],
"rating": {
"mean": 9.5,
"votes": 190
}
},
"item 2": {
"title": "The Second Item",
"genres": ["history", "economics"],
"rating": {
"mean": 7.4,
"votes": 865
},
"sales": [
{ "count": 76, "country": "us" },
{ "count": 13, "country": "de" },
{ "count": 4, "country": "fi" }
]
}
}
```
```
json-to-multicsv --file input.json \
--path '/:table:item' \
--path '/*/rating:column' \
--path '/*/sales:table:sales' \
--path '/*/genres:table:genres'
```
Produces three CSV files, joinable on the `*._key` columns:
**item.csv**:
```
item._key,rating.mean,rating.votes,title
item 1,9.5,190,The First Item
item 2,7.4,865,The Second Item
```
**item.genres.csv**:
```
item._key,item.genres._key,genres
item 1,0,sci-fi
item 1,1,adventure
item 2,0,history
item 2,1,economics
```
**item.sales.csv**:
```
item._key,item.sales._key,count,country
item 2,0,76,us
item 2,1,13,de
item 2,2,4,fi
```
### Row handler, custom key names, and ignore
When the top-level JSON value is a single object (not a collection),
use `/:row` with `--table` to name the output table. Custom key column
names can be set with an extra `:KEY_NAME` argument on table handlers.
Use `:ignore` to skip parts of the data.
```json
{
"name": "Summer Championship",
"year": 2024,
"games": {
"game-1": {
"home": "Eagles",
"away": "Hawks",
"score": { "home": 3, "away": 1 }
},
"game-2": {
"home": "Bears",
"away": "Lions",
"score": { "home": 2, "away": 2 }
}
},
"sponsors": ["Acme Corp", "Globex"]
}
```
```
json-to-multicsv --file tournament.json \
--path '/:row' \
--path '/games:table:game:gameId' \
--path '/games/*/score:column' \
--path '/sponsors:ignore' \
--table main
```
**main.csv**:
```
name,year
Summer Championship,2024
```
**main.game.csv**:
```
gameId,away,home,score.away,score.home
game-1,Hawks,Eagles,1,3
game-2,Lions,Bears,2,2
```
Note that `gameId` replaces the default `game._key` column name, and
sponsors are omitted entirely.
### Top-level array
When the input is a JSON array, a single `table` handler at the root
is all you need:
```json
[
{"title": "Dune", "author": "Frank Herbert", "year": 1965},
{"title": "Neuromancer", "author": "William Gibson", "year": 1984},
{"title": "Snow Crash", "author": "Neal Stephenson", "year": 1992}
]
```
```
json-to-multicsv --file books.json --path '/:table:book'
```
**book.csv**:
```
book._key,author,title,year
0,Frank Herbert,Dune,1965
1,William Gibson,Neuromancer,1984
2,Neal Stephenson,Snow Crash,1992
```
## Options
### `--file INPUT`
Read JSON input from a file. Defaults to stdin.
### `--path PATHSPEC:table:NAME[:KEY_NAME]`
Values matching the pathspec open a new table with the given name. The
value should be an object or array. For objects, each field produces a
row, with the field name stored in the `NAME._key` column. For arrays,
each element produces a row, with the 0-based index stored in the
`NAME._key` column.
When tables are nested, key columns from all outer tables are included
in inner tables.
An optional key name can be provided to customize the key column name
(e.g., `/:table:item:itemId` produces an `itemId` column instead of
`item._key`).
### `--path PATHSPEC:column`
Values matching the pathspec are emitted as columns in the current
table's row. If the value is a scalar, it becomes a single column. If
the value is an object, its fields are flattened into multiple columns
with dotted names.
### `--path PATHSPEC:row`
Values matching the pathspec are emitted as new rows in the current
table. The value must be an object. This is generally only useful for
the top-level JSON value, combined with `--table`.
### `--path PATHSPEC:ignore`
Values matching the pathspec (and all their children) are skipped.
### `--table NAME`
Name the top-level table. Use this with a `row` handler on the root
element.
### `--no-prefix`
Use only the last component of the table name for output filenames.
For example, `item.sales.csv` becomes `sales.csv`.
## Paths and pathspecs
The path to a JSON value is determined by:
- The root element's path is `/`
- For values inside an object: parent path + `/` + field name
- For values inside an array: parent path + `/` + 0-based index
In a pathspec, any path component can be replaced with `*`, which
matches any single component. For example, `/a/*/c` matches `/a/b/c`
but not `/a/b/b/c`.
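For instance, applying these rules to a small document:
```
{"a": {"b": [10, 20]}}

/          the root object
/a         the inner object
/a/b       the array
/a/b/0     10
/a/b/1     20
```
Here the pathspec `/a/*` matches `/a/b`, and `/*/b/*` matches both `/a/b/0` and `/a/b/1`.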
## License
MIT. Based on [json-to-multicsv](https://github.com/jsnell/json-to-multicsv) by Juho Snellman.
| text/markdown | null | Forest Gregg <fgregg@bunkum.us> | null | null | null | json, csv, etl, data-conversion | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"click",
"pytest; extra == \"dev\"",
"ruff; extra == \"dev\"",
"mypy; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"black; extra == \"dev\"",
"hypothesis; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/fgregg/json-to-multicsv",
"Repository, https://github.com/fgregg/json-to-multicsv",
"Issues, https://github.com/fgregg/json-to-multicsv/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:43:50.146331 | json_to_multicsv-0.1.2.tar.gz | 16,099 | d6/fd/ee330b38050fe13f70598a6984efa432de06465c597ad887901e547a14fa/json_to_multicsv-0.1.2.tar.gz | source | sdist | null | false | bd9c530dc332ff2660abf1c6d8b56a7e | cf1757854efd6dd89c92e05a8028f081eb94eecc3fe3b9d2bf4e27b8c4aa738e | d6fdee330b38050fe13f70598a6984efa432de06465c597ad887901e547a14fa | MIT | [
"LICENSE"
] | 256 |
2.4 | netlist-carpentry | 0.3.2 | A library for netlist modification and analysis | # Netlist Carpentry
Netlist Carpentry is a Python library that allows you to access and modify a digital circuit in an accessible way. It covers the following use cases:
* Navigate through your circuit and introduce custom checks
* Implement new algorithms that do optimizations or modifications with your circuit
* ...
It uses [Yosys](https://github.com/YosysHQ/yosys) to extract the circuit from behavioral code and converts it into a pythonic structure along with a [networkx graph](https://networkx.org). This allows standard graph algorithms to be used on the circuit, as well as pretty-printing facilities.
Once in Python, the structure can be examined and modified. Netlist Carpentry internally tracks all the changes and lets you write your modified circuit back out to Verilog.
Back in Verilog, most simulation or synthesis tools can be used.
A simple example:
```python
import netlist_carpentry
# Load your Circuit
circuit = netlist_carpentry.read("simpleAdder.v")
# Define your top module
circuit.set_top('simpleAdder')
top_module = circuit.top  # assumed accessor; the original snippet referenced 'top_module' without defining it
print(f"The top module '{top_module.name}' has the following items:")
for instance_name, instance_object in top_module.instances.items():
print(f"\tInstance '{instance_name}'.")
for port_name, port_object in top_module.ports.items():
print(f"\tPort '{port_name}', which is an {port_object.direction} port and {port_object.width} bit wide!")
for wire_name, wire_object in top_module.wires.items():
print(f"\tWire '{wire_name}', which is {wire_object.width} bit wide!")
```
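Because the circuit is mirrored into a networkx graph, standard graph algorithms apply directly. A minimal sketch (the `graph` attribute name is an assumption; consult the development guide for the actual accessor):
```python
import netlist_carpentry

circuit = netlist_carpentry.read("simpleAdder.v")
circuit.set_top('simpleAdder')
# 'graph' is an assumed accessor name for the exported networkx graph
g = circuit.graph
# Any standard networkx inspection now works on the circuit structure
print(f"The circuit graph has {g.number_of_nodes()} nodes and {g.number_of_edges()} edges.")
```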
Netlist Carpentry is designed to make access to the circuit as easy as possible. Runtime performance was not always the focus, so don't expect it to be as fast as custom-built C++ software. If you want to propose changes, please submit an issue or even a pull request.
## Installation
Install the package via...
```bash
pip install netlist-carpentry
```
... and have fun!
The package requires at least Python 3.9 (3.12 is recommended).
Alternatively, you can clone this repository and install the package in editable mode.
## Examples
Examples on how to use Netlist Carpentry (and how it can be integrated into design workflows) can be found in `docs/src/user_guide` along with the documentation.
Most of them are Jupyter Notebooks, meaning they can be executed and modified to experiment with Netlist Carpentry.
They can also be viewed in the [online documentation](https://imms-ilmenau.github.io/netlist-carpentry/).
## Development Guide
A guide on how to expand or modify the tool is also given.
Visit `docs/src/dev_guide` or the online development guide for more information.
## Citation
If you use Netlist Carpentry in your research, please consider citing it:
[](https://doi.org/10.5281/zenodo.18350355)
Citations of individual versions are also possible using the version-specific DOIs on the Zenodo-Site. Please use the link of the DOI-badge for more information.
## Acknowledgement
The DI-Meta-X project where this software has been developed is funded by the German Federal Ministry of Research, Technology and Space under the reference 16ME0976. Responsibility for the content of this publication lies with the author.
| text/markdown | Manuel Jirsak | null | null | Georg Gläser <georg.glaeser@imms.de> | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.1.8",
"dash-cytoscape>=1.0.2",
"ipykernel>=6.29.5",
"matplotlib>=3.9.4",
"mkdocs-jupyter>=0.25.1",
"networkx>=3.2.1",
"pydantic>=2.10.6",
"pywellen>=0.19.3",
"rich>=13.9.4",
"scipy>=1.13.1",
"tqdm>=4.67.1",
"types-tqdm>=4.67.0.20250516",
"z3-solver>=4.15.0.0",
"pytest-stub; extra... | [] | [] | [] | [
"Homepage, https://github.com/IMMS-Ilmenau/netlist-carpentry",
"Documentation, https://IMMS-Ilmenau.github.io/netlist-carpentry"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T12:43:05.269227 | netlist_carpentry-0.3.2-py3-none-any.whl | 194,291 | 6e/18/6f8259eaac28cd49ae240d33492879645757b4f4e19d820ebfa0b8da727d/netlist_carpentry-0.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 3ab24d04f1723eea8657505ff44bd2fa | 01ee42c8ab7699b2e40a7ec6daff2fdf04dd3604332e7cb206b7d734f20706e1 | 6e186f8259eaac28cd49ae240d33492879645757b4f4e19d820ebfa0b8da727d | null | [
"LICENSE"
] | 247 |
2.4 | extrasuite | 0.7.0 | Client library for ExtraSuite - secure OAuth token exchange for CLI tools | # extrasuite
Python client library for [ExtraSuite](https://github.com/think41/extrasuite) - secure OAuth token exchange for AI agents and CLI tools.
## Installation
```bash
pip install extrasuite
```
## Quick Start
### CLI Authentication
```bash
# Login (opens browser for OAuth)
python -m extrasuite.client login
# Or using the console script
extrasuite login
# Logout (clears cached credentials)
python -m extrasuite.client logout
```
### Programmatic Usage
```python
from extrasuite.client import authenticate
# Get a token - opens browser for authentication if needed
token = authenticate()
# Use the token with Google APIs
import gspread
from google.oauth2.credentials import Credentials
creds = Credentials(token.access_token)
gc = gspread.authorize(creds)
sheet = gc.open("My Spreadsheet").sheet1
```
## Configuration
Authentication can be configured via:
1. **Constructor parameters** (highest priority)
2. **Environment variables**
3. **Gateway config file** `~/.config/extrasuite/gateway.json` (created by skill installation)
### Environment Variables
| Variable | Description |
|----------|-------------|
| `EXTRASUITE_AUTH_URL` | URL to start authentication flow |
| `EXTRASUITE_EXCHANGE_URL` | URL to exchange auth code for token |
| `SERVICE_ACCOUNT_PATH` | Path to service account JSON file (alternative auth) |
### Using CredentialsManager
For more control, use the `CredentialsManager` class directly:
```python
from extrasuite.client import CredentialsManager
manager = CredentialsManager(
auth_url="https://your-server.example.com/api/token/auth",
exchange_url="https://your-server.example.com/api/token/exchange",
)
token = manager.get_token()
print(f"Service account: {token.service_account_email}")
print(f"Expires in: {token.expires_in_seconds()} seconds")
```
### Service Account Mode
For non-interactive environments, you can use a service account file:
```python
from extrasuite.client import CredentialsManager
manager = CredentialsManager(service_account_path="/path/to/service-account.json")
token = manager.get_token()
```
Note: Service account mode requires the `google-auth` package:
```bash
pip install google-auth
```
## Token Storage
Tokens are securely stored in the OS keyring:
- **macOS**: Keychain
- **Windows**: Credential Locker
- **Linux**: Secret Service (via libsecret)
## How It Works
1. When you call `authenticate()` or `get_token()`, the client checks the OS keyring for a cached token
2. If no valid cached token exists, it starts a local HTTP server and opens your browser
3. After authentication with the ExtraSuite server, the browser redirects back with an auth code
4. The client exchanges the auth code for a short-lived access token
5. The token is cached in the OS keyring for subsequent calls
Tokens are short-lived (1 hour) and automatically refreshed when expired.
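Putting the pieces together, using only the calls documented in this README:
```python
from extrasuite.client import authenticate

# Force a fresh browser-based login, bypassing the keyring cache
token = authenticate(force_refresh=True)
if token.is_valid():
    print(f"{token.service_account_email} expires in {token.expires_in_seconds()}s")
```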
## API Reference
### `authenticate()`
Convenience function to get a token with minimal code.
```python
from extrasuite.client import authenticate
token = authenticate(
auth_url=None, # Optional: override auth URL
exchange_url=None, # Optional: override exchange URL
service_account_path=None, # Optional: use service account instead
force_refresh=False, # Force re-authentication
)
```
### `Token`
The token object returned by authentication.
```python
token.access_token # The OAuth2 access token string
token.service_account_email # Email of the service account
token.expires_at # Unix timestamp when token expires
token.is_valid() # Check if token is still valid
token.expires_in_seconds() # Seconds until expiration
```
## Requirements
- Python 3.10+
- Dependencies: `keyring`, `certifi`
For Google Sheets/Docs/Slides integration:
```bash
pip install gspread google-auth
```
## License
MIT - Copyright (c) 2026 Think41 Technologies Pvt. Ltd.
| text/markdown | null | Sripathi Krishnan <sripathi@think41.com> | null | null | MIT | cli, docs, drive, google, oauth, sheets, workspace | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: P... | [] | null | null | >=3.10 | [] | [] | [] | [
"extradoc>=0.3.0",
"extraform>=0.2.1",
"extrascript>=0.2.1",
"extrasheet>=0.2.1",
"extraslide>=0.2.1",
"markdown>=3.0",
"certifi>=2024.0.0; extra == \"ssl\""
] | [] | [] | [] | [
"Homepage, https://github.com/think41/extrasuite",
"Documentation, https://extrasuite.think41.com",
"Repository, https://github.com/think41/extrasuite",
"Issues, https://github.com/think41/extrasuite/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:42:56.535542 | extrasuite-0.7.0.tar.gz | 45,728 | b7/e9/2248b6b9646f8e6aa8931c488899abd13b07d260a4099e6160d9b1c8acfc/extrasuite-0.7.0.tar.gz | source | sdist | null | false | 15963f6091b39985234f898d29866c89 | df9389c731c46886bac39854139bd4ac4ff9efd226fbc844d8edadbe08cd50cb | b7e92248b6b9646f8e6aa8931c488899abd13b07d260a4099e6160d9b1c8acfc | null | [
"LICENSE"
] | 276 |
2.4 | diffgentor | 0.1.1 | A unified visual generation data synthesis factory supporting multiple backends (diffusers, xDiT, OpenAI API) | # Diffgentor
A unified visual generation data synthesis tool for batch image generation and editing, designed for GenArena evaluation and beyond.
## Abstract
Diffgentor is an efficient pipeline for batch image generation using various image generation and editing models. It supports multiple backends including diffusers, OpenAI API, Google GenAI (Gemini), and third-party models like Step1X-Edit, BAGEL, and Emu3.5.
Key features:
- **Multiple Backends**: diffusers, xDiT (multi-GPU), OpenAI, Google GenAI, and third-party models
- **Batch Processing**: Efficient batch inference with multi-process/multi-thread support
- **GenArena Integration**: Generate model outputs for GenArena pairwise evaluation
- **Optimization Suite**: VAE slicing/tiling, torch.compile, attention backends, and more
## Quick Start
### Installation
**Option 1: pip install**
```bash
# Core installation (diffusers, OpenAI, Google GenAI backends)
pip install diffgentor
# Install with all optional backends
pip install "diffgentor[all]"
```
> **GPU users**: PyPI's default torch package is CPU-only. To use CUDA-enabled PyTorch, add the PyTorch index:
> ```bash
> pip install diffgentor --extra-index-url https://download.pytorch.org/whl/cu126
> ```
> **flash-attn**: The `flash-attn` optional dependency requires CUDA compilation, so it is recommended to install it separately:
> ```bash
> pip install flash-attn --no-build-isolation
> ```
> Or download a pre-built wheel from the [flash-attention releases](https://github.com/Dao-AILab/flash-attention/releases).
**Option 2: From source (for development)**
```bash
git clone https://github.com/ruihanglix/diffgentor.git
cd diffgentor
pip install -e ".[all]"
```
### Download GenArena Dataset
```bash
hf download rhli/genarena --repo-type dataset --local-dir ./data
```
### Generate Images for MultiRef Subset
Example using FLUX.2 [klein] 4B model:
```bash
diffgentor edit --backend diffusers \
--model_name black-forest-labs/FLUX.2-klein-4B \
--input ./data/multiref/ \
--output_dir ./output/multiref/FLUX2-klein-4B/
```
## Supported Backends
| Backend | Type | Description |
|---------|------|-------------|
| `diffusers` | T2I / Editing | HuggingFace diffusers with auto pipeline detection |
| `xdit` | T2I | Multi-GPU inference with xDiT parallelism |
| `openai` | T2I / Editing | OpenAI API (GPT-Image, DALL-E) |
| `google_genai` | T2I / Editing | Google GenAI (Gemini native image models) |
| `step1x` | Editing | Step1X-Edit model |
| `bagel` | Editing | ByteDance BAGEL model |
| `emu35` | Editing | BAAI Emu3.5 model |
| `dreamomni2` | Editing | DreamOmni2 (FLUX.1-Kontext + Qwen2.5-VL) |
| `flux_kontext_official` | Editing | BFL official Flux Kontext |
| `hunyuan_image_3` | Editing | Tencent HunyuanImage-3.0-Instruct |
## Documentation
| Document | Description |
|----------|-------------|
| [Image Editing Guide](./docs/editing/README.md) | Comprehensive guide for image editing |
| [Text-to-Image Guide](./docs/t2i/README.md) | Text-to-image generation guide |
| [Optimization Guide](./docs/optimization.md) | Memory and speed optimization |
| [Prompt Enhancement](./docs/prompt_enhance.md) | LLM-based prompt enhancement |
| [Environment Variables](./docs/env_vars.md) | Configuration via environment variables |
### Backend-Specific Guides
- [Diffusers Models](./docs/editing/diffusers.md) - Qwen, FLUX, LongCat
- [Step1X-Edit](./docs/editing/step1x.md) - Step1X-Edit v1.0/v1.1
- [BAGEL](./docs/editing/bagel.md) - ByteDance BAGEL
- [Emu3.5](./docs/editing/emu35.md) - BAAI Emu3.5
- [DreamOmni2](./docs/editing/dreamomni2.md) - DreamOmni2
- [Flux Kontext](./docs/editing/flux_kontext.md) - BFL official
- [HunyuanImage-3.0](./docs/editing/hunyuan_image_3.md) - Tencent HunyuanImage
- [OpenAI](./docs/editing/openai.md) - GPT-Image API
- [Google GenAI](./docs/editing/google_genai.md) - Gemini
## Environment Variables
Model-specific parameters are configured via `DG_*` environment variables:
```bash
# Step1X-Edit
DG_STEP1X_VERSION=v1.1
DG_STEP1X_SIZE_LEVEL=512
# BAGEL
DG_BAGEL_CFG_TEXT_SCALE=3.0
DG_BAGEL_CFG_IMG_SCALE=1.5
# API backends
OPENAI_API_KEY=your_key
GEMINI_API_KEY=your_key
```
See [Environment Variables](./docs/env_vars.md) for the complete list.
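Because these are plain environment variables, they can also be set when driving the CLI from Python. A minimal sketch (paths are placeholders, and the exact flags for the `step1x` backend are assumed to mirror the `edit` example above):
```python
import os
import subprocess

# DG_* variables are inherited by the child process.
env = dict(os.environ, DG_STEP1X_VERSION="v1.1", DG_STEP1X_SIZE_LEVEL="512")

subprocess.run(
    [
        "diffgentor", "edit",
        "--backend", "step1x",
        "--input", "./data/multiref/",
        "--output_dir", "./output/multiref/step1x/",
    ],
    env=env,
    check=True,
)
```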
## Citation
```bibtex
@misc{li2026genarenaachievehumanalignedevaluation,
title={GenArena: How Can We Achieve Human-Aligned Evaluation for Visual Generation Tasks?},
author={Ruihang Li and Leigang Qu and Jingxu Zhang and Dongnan Gui and Mengde Xu and Xiaosong Zhang and Han Hu and Wenjie Wang and Jiaqi Wang},
year={2026},
eprint={2602.06013},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2602.06013},
}
```
## License
Apache-2.0
| text/markdown | diffgentor team | null | null | null | Apache-2.0 | diffusion, image-editing, image-generation, text-to-image | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Sci... | [] | null | null | >=3.10 | [] | [] | [] | [
"datasets==4.4.1",
"diffusers==0.36.0",
"huggingface-hub==0.36.0",
"tokenizers==0.22.1",
"torch==2.8.0",
"torchaudio==2.8.0",
"torchvision==0.23.0",
"transformers==4.57.3",
"bitsandbytes; extra == \"all\"",
"cache-dit; extra == \"all\"",
"deepcache; extra == \"all\"",
"distvae>=0.0.0b5; extra ... | [] | [] | [] | [
"Homepage, https://github.com/ruihanglix/diffgentor",
"Documentation, https://github.com/ruihanglix/diffgentor#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:40:19.139350 | diffgentor-0.1.1.tar.gz | 298,475 | f2/90/a095374223f63be05048ee6ee55f7a04b92ce4834383c565774f4a22b4c9/diffgentor-0.1.1.tar.gz | source | sdist | null | false | e63c3ed3eb8e60ed195616362d2eee41 | cefdbf0e50940fb1452e2f3564d8a2e89f22f4e1202e18b914cbeab9325557ff | f290a095374223f63be05048ee6ee55f7a04b92ce4834383c565774f4a22b4c9 | null | [
"LICENSE"
] | 269 |
2.4 | chuk-sessions | 0.6.1 | CHUK Sessions provides a comprehensive, async-first session management system with automatic expiration, and support for both in-memory and Redis storage backends. Perfect for web applications, MCP servers, API gateways, and microservices that need reliable, scalable session handling. | # CHUK Sessions
**Simple, fast async session management for Python**
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/Apache-2.0)
Dead simple session management with automatic expiration, multiple storage backends, and multi-tenant isolation. Perfect for web apps, APIs, and any system needing reliable sessions.
## 🏗️ Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Your Application │
└─────────────────────────────────┬───────────────────────────┘
│
┌─────────────┴─────────────┐
│ Convenience API Layer │
│ get_session() / session │
└─────────────┬─────────────┘
│
┌─────────────┴─────────────┐
│ SessionManager │
│ • Lifecycle Management │
│ • TTL & Expiration │
│ • Metadata & Validation │
│ • Multi-tenant Isolation │
└─────────────┬─────────────┘
│
┌─────────────┴─────────────┐
│ Provider Factory │
│ Auto-detect from env │
└─────────────┬─────────────┘
│
┌─────────────────┴─────────────────┐
│ │
┌───────────▼──────────┐ ┌───────────▼──────────┐
│ Memory Provider │ │ Redis Provider │
│ • In-process cache │ │ • Persistent store │
│ • 1.3M ops/sec │ │ • Distributed │
│ • Dev/Testing │ │ • Production │
└──────────────────────┘ └──────────────────────┘
Features:
✓ Pydantic models with validation ✓ Type-safe enums (no magic strings)
✓ Automatic TTL expiration ✓ Multi-sandbox isolation
✓ CSRF protection utilities ✓ Cryptographic session IDs
✓ 202 tests, 90% coverage ✓ Production-ready
```
## 🚀 Quick Start
```bash
# Basic installation (memory provider only)
pip install chuk-sessions
# With Redis support
pip install chuk-sessions[redis]
# Full installation with all optional dependencies
pip install chuk-sessions[all]
# Development installation
pip install chuk-sessions[dev]
```
```python
import asyncio
from chuk_sessions import get_session
async def main():
async with get_session() as session:
# Store with auto-expiration
await session.setex("user:123", 3600, "Alice") # 1 hour TTL
# Retrieve
user = await session.get("user:123") # "Alice"
# Automatically expires after TTL
asyncio.run(main())
```
That's it! Sessions expire automatically after their TTL, with no cleanup code required.
## 📖 How It Works
```
Session Lifecycle:
┌─────────────┐
│ 1. Create │ mgr.allocate_session(user_id="alice")
└──────┬──────┘
│ ← Returns session_id: "sess-alice-1234..."
▼
┌─────────────┐
│ 2. Validate │ mgr.validate_session(session_id)
└──────┬──────┘
│ ← Returns: True (session exists & not expired)
▼
┌─────────────┐
│ 3. Use │ mgr.get_session_info(session_id)
└──────┬──────┘ mgr.update_session_metadata(...)
│ ← Access/modify session data
▼
┌─────────────┐
│ 4. Extend │ mgr.extend_session_ttl(session_id, hours=2)
└──────┬──────┘ (optional - keep session alive)
│
▼
┌─────────────┐
│ 5. Expire │ Automatic after TTL
└──────┬──────┘ or mgr.delete_session(session_id)
│
▼
[Done]
```
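The same lifecycle as a runnable script, using only `SessionManager` methods documented in the API reference below:
```python
import asyncio
from chuk_sessions import SessionManager

async def main():
    mgr = SessionManager(sandbox_id="demo-app")

    # 1. Create
    session_id = await mgr.allocate_session(user_id="alice")

    # 2. Validate
    assert await mgr.validate_session(session_id)

    # 3. Use
    info = await mgr.get_session_info(session_id)
    await mgr.update_session_metadata(session_id, {"last_seen": "login"})

    # 4. Extend (optional)
    await mgr.extend_session_ttl(session_id, additional_hours=2)

    # 5. Expire explicitly instead of waiting for the TTL
    await mgr.delete_session(session_id)

asyncio.run(main())
```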
## ✨ What's New in v0.5
- **🎯 Pydantic Native**: All models are Pydantic-based with automatic validation
- **🔒 Type-Safe Enums**: No more magic strings - `SessionStatus.ACTIVE`, `ProviderType.REDIS`
- **📦 Exported Types**: Full IDE autocomplete for `SessionMetadata`, `CSRFTokenInfo`, etc.
- **⚡ Async Native**: Built from ground-up for async/await
- **🔄 Backward Compatible**: Existing code works unchanged
- **✅ 90%+ Test Coverage**: 202 tests, production-ready
```python
from chuk_sessions import SessionManager, SessionStatus, SessionMetadata
# Type-safe with IDE autocomplete
mgr = SessionManager(sandbox_id="my-app")
session_id = await mgr.allocate_session()
# Pydantic models with validation
info: dict = await mgr.get_session_info(session_id)
metadata = SessionMetadata(**info)
print(metadata.status) # SessionStatus.ACTIVE
```
## ⚡ Major Features
### 🎯 **Simple Storage with TTL**
```python
from chuk_sessions import get_session
async with get_session() as session:
await session.set("key", "value") # Default 1hr expiration
await session.setex("temp", 60, "expires") # Custom 60s expiration
value = await session.get("key") # Auto-cleanup when expired
```
### 🏢 **Multi-App Session Management**
```python
from chuk_sessions import SessionManager
# Each app gets isolated sessions
web_app = SessionManager(sandbox_id="web-portal")
api_service = SessionManager(sandbox_id="api-gateway")
# Full session lifecycle
session_id = await web_app.allocate_session(
user_id="alice@example.com",
custom_metadata={"role": "admin", "login_time": "2024-01-01T10:00:00Z"}
)
# Validate, extend, update
await web_app.validate_session(session_id)
await web_app.extend_session_ttl(session_id, additional_hours=2)
await web_app.update_session_metadata(session_id, {"last_activity": "now"})
```
### ⚙️ **Multiple Backends**
```bash
# Development - blazing fast in-memory (default)
export SESSION_PROVIDER=memory
# Production - persistent Redis standalone (requires chuk-sessions[redis])
export SESSION_PROVIDER=redis
export SESSION_REDIS_URL=redis://localhost:6379/0
# Production - Redis Cluster with automatic detection
export SESSION_PROVIDER=redis
export SESSION_REDIS_URL=redis://node1:7000,node2:7001,node3:7002
```
### 📊 **Performance** (Real Benchmarks)
Actual performance from `examples/performance_test.py`:
| Provider | Operation | Throughput | Avg Latency | P95 Latency |
|----------|-----------|------------|-------------|-------------|
| Memory | GET | 1,312,481 ops/sec | 0.001ms | 0.001ms |
| Memory | SET | 1,141,011 ops/sec | 0.001ms | 0.001ms |
| Memory | DELETE | 1,481,848 ops/sec | 0.001ms | 0.001ms |
| Redis | GET | ~20K ops/sec | 0.05ms | 0.08ms |
| Redis | SET | ~18K ops/sec | 0.06ms | 0.09ms |
**Concurrent Access** (5 sessions, 500 ops):
- Overall Throughput: 406,642 ops/sec
- Average Latency: 0.002ms
## 💡 Real-World Use Cases
Based on `examples/chuk_session_example.py`:
### 🌐 Web App Sessions
```python
web_app = SessionManager(sandbox_id="my-web-app")
# Login
session_id = await web_app.allocate_session(
user_id="alice@example.com",
ttl_hours=8,
custom_metadata={"role": "admin", "theme": "dark"}
)
# Middleware validation
if not await web_app.validate_session(session_id):
raise Unauthorized("Please log in")
```
### API Rate Limiting
```python
api = SessionManager(sandbox_id="api-gateway", default_ttl_hours=1)
session_id = await api.allocate_session(
user_id="client_123",
custom_metadata={"tier": "premium", "requests": 0, "limit": 1000}
)
# Check/update rate limits
info = await api.get_session_info(session_id)
requests = info['custom_metadata']['requests']
if requests >= info['custom_metadata']['limit']:
raise RateLimitExceeded()
await api.update_session_metadata(session_id, {"requests": requests + 1})
```
### Temporary Verification Codes
```python
from chuk_sessions import get_session
async with get_session() as session:
# Email verification code (10 minute expiry)
await session.setex(f"verify:{email}", 600, "ABC123")
# Later: verify and consume
code = await session.get(f"verify:{email}")
if code == user_code:
await session.delete(f"verify:{email}") # One-time use
return True
```
## 🔧 Configuration
Set via environment variables:
```bash
# Provider selection
export SESSION_PROVIDER=memory # Default - no extra dependencies
export SESSION_PROVIDER=redis # Requires: pip install chuk-sessions[redis]
# TTL settings
export SESSION_DEFAULT_TTL=3600 # 1 hour default
# Redis config (if using redis provider)
# Standalone Redis
export SESSION_REDIS_URL=redis://localhost:6379/0
# Redis Cluster (comma-separated hosts - automatically detected)
export SESSION_REDIS_URL=redis://node1:7000,node2:7001,node3:7002
# Redis with TLS
export SESSION_REDIS_URL=rediss://localhost:6380/0
export REDIS_TLS_INSECURE=1 # Set to 1 to skip certificate verification (dev only)
```
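These variables can also be set from Python; a minimal sketch, assuming they are read when the provider is first resolved:
```python
import os

# Must be set before the first call to get_session() picks a provider.
os.environ.setdefault("SESSION_PROVIDER", "memory")
os.environ.setdefault("SESSION_DEFAULT_TTL", "1800")  # 30 minutes

from chuk_sessions import get_session
```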
## 📦 Installation Options
| Command | Includes | Use Case |
|---------|----------|----------|
| `pip install chuk-sessions` | Memory provider only | Development, testing, lightweight apps |
| `pip install chuk-sessions[redis]` | + Redis support | Production apps with Redis |
| `pip install chuk-sessions[all]` | All optional features | Maximum compatibility |
| `pip install chuk-sessions[dev]` | Development tools | Contributing, testing |
## 📖 API Reference
### Low-Level API
```python
from chuk_sessions import get_session
async with get_session() as session:
await session.set(key, value) # Store with default TTL
await session.setex(key, ttl, value) # Store with custom TTL (seconds)
value = await session.get(key) # Retrieve (None if expired)
deleted = await session.delete(key) # Delete (returns bool)
```
### SessionManager API
```python
from chuk_sessions import SessionManager
mgr = SessionManager(sandbox_id="my-app", default_ttl_hours=24)
# Session lifecycle
session_id = await mgr.allocate_session(user_id="alice", custom_metadata={})
is_valid = await mgr.validate_session(session_id)
info = await mgr.get_session_info(session_id)
success = await mgr.update_session_metadata(session_id, {"key": "value"})
success = await mgr.extend_session_ttl(session_id, additional_hours=2)
success = await mgr.delete_session(session_id)
# Admin helpers
stats = mgr.get_cache_stats()
cleaned = await mgr.cleanup_expired_sessions()
```
## 🎪 Examples & Demos
All examples are tested and working! Run them to see CHUK Sessions in action:
### 🚀 Getting Started
```bash
# Simple 3-line example - perfect first step
python examples/simple_example.py
# Interactive tutorial with explanations
python examples/quickstart.py
```
**Output:**
```
User: Alice
Token: secret123
Missing: None
```
### 🔧 Comprehensive Demo
```bash
# Complete feature demonstration
python examples/chuk_session_example.py
```
**Shows:**
- ✓ Low-level provider usage (memory/redis)
- ✓ High-level SessionManager API
- ✓ Multi-sandbox isolation (multi-tenant)
- ✓ Real-world scenarios (web app, MCP server, API gateway)
- ✓ Error handling & admin helpers
### 📊 Performance Testing
```bash
# Benchmark your system
python examples/performance_test.py
```
**Output includes:**
- Throughput measurements (1.3M+ ops/sec)
- Latency percentiles (P50, P95, P99)
- Memory usage analysis
- Concurrent access tests
- README-ready performance tables
### 🔐 Security Demos
```bash
# CSRF protection examples
python examples/csrf_demo.py
# Secure session ID generation
python examples/session_id_demo.py
```
**Features demonstrated:**
- HMAC-based CSRF tokens
- Double-submit cookie pattern
- Encrypted stateless tokens
- Cryptographic session IDs with entropy analysis
- Protocol-specific formats (MCP, HTTP, WebSocket, JWT)
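For reference, the double-submit pattern demonstrated above can be sketched with the standard library alone. This is a generic illustration of the technique, not chuk-sessions' own API:
```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side secret, never sent to clients

def issue_csrf_token(session_id: str) -> str:
    # Bind the token to the session via HMAC so it cannot be forged.
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, token: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(issue_csrf_token(session_id), token)
```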
## 🏗️ Why CHUK Sessions?
- **Simple**: One import, one line to start storing sessions
- **Fast**: 1.3M+ ops/sec in memory, ~20K ops/sec with Redis (see benchmarks above)
- **Reliable**: Automatic TTL, proper error handling, production-tested
- **Flexible**: Works for simple key-value storage or complex session management
- **Isolated**: Multi-tenant by design with sandbox separation
- **Optional Dependencies**: Install only what you need
Perfect for web frameworks, API servers, MCP implementations, or any Python app needing sessions.
## 🛠️ Development
```bash
# Clone and install dependencies
git clone https://github.com/chrishayuk/chuk-sessions.git
cd chuk-sessions
make dev-install
# Run tests
make test
# Run tests with coverage (90%+ coverage)
make test-cov
# Run all checks (lint, typecheck, security, tests)
make check
# Format code
make format
# Build package
make build
```
### 🚀 Release Process
```bash
# Bump version
make bump-patch   # 0.5.0 → 0.5.1
make bump-minor   # 0.5.1 → 0.6.0
make bump-major   # 0.6.0 → 1.0.0
# Create release (triggers GitHub Actions → PyPI)
make publish
```
### Available Makefile Commands
- `make test` - Run tests
- `make test-cov` - Run tests with coverage report
- `make lint` - Run code linters (ruff)
- `make format` - Auto-format code
- `make typecheck` - Run type checking (mypy)
- `make security` - Run security checks (bandit)
- `make check` - Run all checks
- `make clean` - Clean build artifacts
- `make build` - Build distribution packages
- `make publish` - Create tag and trigger automated release
See `make help` for all available commands.
## 📄 License
Apache 2.0
| text/markdown | null | null | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"csrf>=0.1b1",
"pydantic>=2.10.6",
"pyyaml>=6.0.2",
"redis>=6.2.0; extra == \"redis\"",
"pytest>=8.3.5; extra == \"dev\"",
"pytest-asyncio>=0.26.0; extra == \"dev\"",
"pytest-cov>=5.0.0; extra == \"dev\"",
"ruff>=0.4.6; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"bandit>=1.7.0; extra == \... | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.11 | 2026-02-18T12:39:16.450961 | chuk_sessions-0.6.1.tar.gz | 43,593 | e3/88/e38bdcfe321ff5b4f2188b365a2a41e81ac4d93ecf9a656af1f7c5a8173c/chuk_sessions-0.6.1.tar.gz | source | sdist | null | false | 00972b85bf3791af859e235f8d98dc89 | 38b60da2ef34df653967645c4507d1769bd5ba932daab1b019f51ff6baf65d4d | e388e38bdcfe321ff5b4f2188b365a2a41e81ac4d93ecf9a656af1f7c5a8173c | null | [
"LICENSE"
] | 2,994 |
2.4 | edges | 1.2.0 | Country-specific characterization factors for the Brightway LCA framework | # ``edges``: Edge-based life cycle impact assessment
<p align="center">
<img src="https://github.com/Laboratory-for-Energy-Systems-Analysis/edges/blob/main/assets/permanent/edges_logo_light_gray_bg_dark_frame.png" height="100"/>
</p>
[](https://badge.fury.io/py/csc-brightway)
``edges`` is a library allowing flexible Life Cycle Impact Assessment (LCIA)
for the ``brightway2``/``brightway25`` LCA framework.
Unlike traditional LCIA methods that apply characterization factors (CFs) solely to `nodes`
(e.g., elementary flows), `edges` applies CFs directly on the edges — the exchanges between
suppliers and consumers — allowing for more precise and context-sensitive impact characterization.
This approach enables LCIA factors to reflect the specific context of each exchange, including parameters such as:
* Geographic region of production and consumption
* Magnitude of flows
* Scenario-based parameters (e.g., changing atmospheric conditions)
The ``edges`` Python library offers a novel approach to applying characterization factors
(CFs) during the impact assessment phase of Life Cycle Assessment (LCA).
Unlike conventional methods that uniformly assign CFs to *nodes* (e.g., processes
like ``Water, from well`` in the brightway2 ecosystem), ``edges`` shifts the focus to the
*edges*—the *exchanges* or relationships between *nodes*. This allows CFs to be conditioned
based on the specific context of each *exchange*. Essentially, ``edges`` introduces unique
values in the characterization matrix tailored to the characteristics of each *edge*.
By focusing on *edges*, the library incorporates contextual information such as the
attributes of both the *supplier* and the *consumer* (e.g., geographic location, ISIC
classification, amount exchanged, etc.). This enables a more detailed and flexible
impact characterization, accommodating parameters like the location of the consumer
and the magnitude of the exchange.
Furthermore, ``edges`` supports the calculation of weighted CFs for both static regions
(e.g., RER) and dynamic regions (e.g., RoW), enhancing its ability to model complex
and region-specific scenarios.
## Key Features
* Edge-based CFs: Assign CFs specifically to individual exchanges between processes.
* Geographic resolution: Supports 346 national and sub-national regions.
* Scenario-based flexibility: Incorporate parameters (e.g., CO₂ atmospheric concentration) directly in CF calculations, enabling dynamic scenario analysis.
* Efficient workflow: Clearly separates expensive exchange-mapping tasks (performed once) from inexpensive scenario-based numeric CF evaluations.
Currently, the library provides regionalized CFs for:
* AWARE 2.0 (water scarcity impacts)
* ImpactWorld+ 2.1
* GeoPolRisk 1.0
* GLAM3 Land use impacts on biodiversity
> [!NOTE]
> Mixed CF methods combining both `biosphere` and `technosphere` supplier matrices
> in a single method file are currently not supported.
> [!NOTE]
> The exchange matcher backend is
> [CLIPSpy](https://clipspy.readthedocs.io/en/latest/) (`matcher_backend="clips"`),
> the Python wrapper for [CLIPS](http://www.clipsrules.net/).
## Installation
You can install the library using pip:
```bash
pip install edges
```
> [!NOTE]
> The library is compatible with both `brightway2` and `brightway25`.
> Please ensure you have one of these frameworks installed in your Python environment.
## Documentation
* [Documentation](https://edges.readthedocs.io/en/latest/index.html)
## Getting Started
Check out the [examples notebook](https://github.com/romainsacchi/edges/blob/main/examples/examples.ipynb).
### Check available methods from ``edges``
```python
from edges import get_available_methods
# Get the available methods
methods = get_available_methods()
print(methods)
```
### Perform edge-based LCIA with ``edges``
```python
import bw2data
from edges import EdgeLCIA
# Select an activity from the LCA database
act = bw2data.Database("ecoinvent-3.10-cutoff").random()
# Define a method
method = ('AWARE 2.0', 'Country', 'unspecified', 'yearly')
# Initialize the LCA object
LCA = EdgeLCIA({act: 1}, method)
LCA.lci()
# Map CFs to exchanges: apply suggested strategies
LCA.apply_strategies()
# or apply these strategies manually
#LCA.map_exchanges()
# If needed, extend the mapping to aggregated and `dynamic` regions (e.g., RoW)
#LCA.map_aggregate_locations()
#LCA.map_dynamic_locations()
#LCA.map_contained_locations()
# add global CFs to exchanges missing a CF
#LCA.map_remaining_locations_to_global()
# Evaluate CFs
LCA.evaluate_cfs()
# Perform the LCIA calculation
LCA.lcia()
print(LCA.score)
# Optional but recommended: print a dataframe with the characterization factors used
# this allows you to check whether exchanges have been given the correct CFs
# include_unmatched=True allows you to see which exchanges were not matched (and if some should have been)
LCA.generate_cf_table()
```
### Perform parameter-based LCIA
Consider the following LCIA data file (saved under `gwp_example.json`):
```json
{
"name": "Example LCIA Method",
"version": "1.0",
"description": "Example LCIA method for greenhouse gas emissions",
"unit": "kg CO2e",
"exchanges": [
{
"supplier": {
"name": "Carbon dioxide",
"operator": "startswith",
"matrix": "biosphere"
},
"consumer": {
"matrix": "technosphere",
"type": "process"
},
"value": "1.0"
},
{
"supplier": {
"name": "Methane, fossil",
"operator": "contains",
"matrix": "biosphere"
},
"consumer": {
"matrix": "technosphere",
"type": "process"
},
"value": "28 * (1 + 0.001 * (co2ppm - 410))"
},
{
"supplier": {
"name": "Dinitrogen monoxide",
"operator": "equals",
"matrix": "biosphere"
},
"consumer": {
"matrix": "technosphere",
"type": "process"
},
"value": "265 * (1 + 0.0005 * (co2ppm - 410))"
}
]
}
```
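For example, with `co2ppm = 450` the methane CF above evaluates to `28 * (1 + 0.001 * (450 - 410)) = 28 * 1.04 = 29.12`, while the carbon dioxide CF stays constant at `1.0`.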
We can perform a parameter-based LCIA calculation as follows:
```python
import bw2data
from edges import EdgeLCIA
# Select an activity from the LCA database
bw2data.projects.set_current("ecoinvent-3.10.1-cutoff")
act = bw2data.Database("ecoinvent-3.10.1-cutoff").random()
print(act)
# Define scenario parameters (e.g., atmospheric CO₂ concentration and time horizon)
params = {
"some scenario": {
"co2ppm": {"2020": 410, "2050": 450, "2100": 500}, "h": {"2020": 100, "2050": 100, "2100": 100}
}
}
# Define an LCIA method (symbolic CF expressions stored in JSON)
method = ('GWP', 'scenario-dependent', '100 years')
# Initialize LCIA
lcia = EdgeLCIA(
demand={act: 1},
    filepath="gwp_example.json",
parameters=params
)
# Perform inventory calculations (once)
lcia.lci()
# Map exchanges to CF entries (once)
lcia.map_exchanges()
# Optionally, resolve geographic overlaps and disaggregations (once)
lcia.map_aggregate_locations()
lcia.map_dynamic_locations()
lcia.map_remaining_locations_to_global()
# Run scenarios efficiently
results = []
for idx in ("2020", "2050", "2100"):  # a tuple keeps scenario order deterministic
lcia.evaluate_cfs(idx)
lcia.lcia()
df = lcia.generate_cf_table()
scenario_result = {
"scenario": idx,
"co2ppm": params["some scenario"]["co2ppm"][idx],
"score": lcia.score,
"CF_table": df
}
results.append(scenario_result)
print(f"Scenario (CO₂ {params['some scenario']['co2ppm'][idx]} ppm): Impact = {lcia.score}")
```
## Data Sources
See [Methods](https://edges.readthedocs.io/en/latest/methods.html) from [Documentation](https://edges.readthedocs.io/en/latest/index.html).
## Methodology
See [Theory](https://edges.readthedocs.io/en/latest/theory.html) from [Documentation](https://edges.readthedocs.io/en/latest/index.html).
## Contributing
Contributions are welcome! Please follow these steps to contribute:
1. **Fork** the repository.
2. **Create** a new branch for your feature or fix.
3. **Commit** your changes.
4. **Submit** a pull request.
## License
This project is licensed under the MIT License.
See the [LICENSE.md](LICENSE.md) file for more information.
## Contact
For any questions or inquiries, please contact the project maintainer
at [romain.sacchi@psi.ch](mailto:romain.sacchi@psi.ch).
## Contributors
- [Romain Sacchi](https://github.com/romainsacchi)
- [Alvaro Hahn Menacho](https://github.com/alvarojhahn)
- [Raphaël Jolivet](https://github.com/raphaeljolivet) - contributed to the CLIPSpy-based rule engine implementation.
## Acknowledgments
The development of this library was supported by the French agency for
Energy [ADEME](https://www.ademe.fr/), via the financing of the [HySPI](https://www.isige.minesparis.psl.eu/actualite/le-projet-hyspi/) project.
The HySPI project aims to provide a methodological framework to analyze and
quantify, in a systemic and prospective manner, the environmental impacts of the
decarbonization strategy of hydrogen production used by the industry in France.
We also acknowledge financial support from the Europe Horizon project [RAWCLIC](https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/projects-details/43108390/101183654/HORIZON?keywords=RAWCLIC&isExactMatch=true&order=DESC&pageNumber=NaN&sortBy=title)
as well as the Europe Horizon project [PRISMA](https://www.net0prisma.eu/).
| text/markdown | null | Romain Sacchi <romain.sacchi@psi.ch>, Alvaro Hahn Menacho <alvaro.hahn-menacho@psi.ch> | null | Romain Sacchi <romain.sacchi@psi.ch> | null | null | [
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"numpy<2.4,>=2.3.2",
"pandas",
"scipy",
"pyyaml",
"country_converter>=1.3.1",
"constructive_geometries>=1.0.0",
"prettytable",
"sparse>=0.13.0",
"plotly",
"ecoinvent_interface",
"highspy",
"packaging",
"clipspy",
"setuptools; extra == \"testing\"",
"pytest; extra == \"testing\"",
"sphi... | [] | [] | [] | [
"source, https://github.com/Laboratory-for-Energy-Systems-Analysis/clear-scope",
"homepage, https://github.com/Laboratory-for-Energy-Systems-Analysis/clear-scope",
"tracker, https://github.com/Laboratory-for-Energy-Systems-Analysis/clear-scope/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:37:36.260403 | edges-1.2.0.tar.gz | 27,210,342 | 92/aa/821ef6f4e2ef4a7a0b02e5393c1a901adcfa9460b83fbd3ef900e668d174/edges-1.2.0.tar.gz | source | sdist | null | false | 49f7fb8b85f14d05ceec3bd48746bcb8 | 29ab1b50ba33a131cfafbd3c611bc060695bafcd0c4e2cc75f754fb540d96496 | 92aa821ef6f4e2ef4a7a0b02e5393c1a901adcfa9460b83fbd3ef900e668d174 | null | [] | 267 |
2.4 | erp5.util | 0.4.80 | ERP5 related utilities. | erp5.util
=========
Package containing various ERP5 related utilities.
Modules documentation
=====================
erp5.util.taskdistribution
--------------------------
Module to access TaskDistributor, used to run tests on several machines
and aggregate results.
Use pydoc to get module documentation and usage examples.
API Documentation
-----------------
You can generate the API documentation using `epydoc`::
$ epydoc src/erp5
testnode
--------
Changes
=======
0.4.80 (2026-02-18)
-------------------
* testnode:
- run slapos node prune
- fix crashes when branch name contains `index`
- use new --force option of slapos instead of deprecated --all
0.4.79 (2026-01-29)
-------------------
* testnode:
- Remove unneeded logging
- catch IndexError
- ProcessManager: fix SubprocessError.__getattr__
0.4.78 (2025-05-26)
-------------------
* testnode:
- give a different IP to each partition inside the testnode
0.4.77 (2025-02-18)
-------------------
* testnode:
- fix handling of shebang like '#!/bin/sh -e'
* remove totally webchecker
0.4.76 (2024-05-07)
-------------------
* testnode:
- remove unused 'zip_binary' config
0.4.75 (2023-11-15)
-------------------
* testnode:
- make ``killall`` support processes with changed title
0.4.74 (2022-05-13)
-------------------
* testnode:
- retry ``slapos node instance`` more times before running test
0.4.73 (2022-04-22)
-------------------
* testnode:
- remove unused scalability_tester
- fix bug in python3
0.4.72 (2021-10-01)
-------------------
* testnode:
- update local frontend slave (if configured) so tests use a fast and reliable frontend (on same LAN and / or machine)
0.4.71 (2021-09-08)
-------------------
* testnode:
- various changes related to SlapOS' integration of Scalability tests
0.4.70 (2021-06-14)
-------------------
* testnode:
- fix ResourceWarnings on Python 3
- shorten instance partition paths
* testsuite: remove EggTestSuite
0.4.69 (2020-10-29)
-------------------
* erp5.util.testnode:
- propagate test_node_title to runTestSuite
- pass arguments as environment variables
- advertise log URL with log_frontend_url
0.4.68 (2020-05-22)
-------------------
* erp5.util.taskdistribution:
- fix DummyTaskDistributor API to be able to run tests locally
* erp5.util.testnode:
- fix upgrader when HEAD is a merge commit
- don't log distributor URL
0.4.67 (2020-04-27)
-------------------
* erp5.util:
- testnode: pass --log_directory to runTestSuite
- EggTestSuite: support --log_directory
- testnode: include a link to snapshot dir in log viewer
- testnode: don't crash log viewer app on network error
- testnode: make the number of days to keep log configurable
0.4.66 (2020-01-30)
-------------------
* erp5.util:
- testnode: Use shared parts when building softwares
0.4.65 (2019-10-30)
-------------------
* erp5.util:
- testnode: Allow to run scalability tests against already existing instance
0.4.64 (2019-10-10)
-------------------
* erp5.util:
- testnode: fix Computer.updateConfiguration call (Compatibility with slapos.core 1.5.0)
0.4.63 (2019-10-08)
-------------------
* erp5.util:
- testnode: avoid testnode crash when trying to kill a process already dead
- testnode: import xml2dict from its new place (Compatibility with slapos.core 1.5.0)
0.4.62 (2019-10-01)
-------------------
* erp5.util:
- testnode: Fix scalability test runner logic for importing a test suite class
0.4.61 (2019-09-18)
-------------------
* erp5.util:
- testnode: Fix scalability test runner
0.4.60 (2019-09-01)
-------------------
* erp5.util:
- testnode: Fix some typos in the SlapOS API
0.4.59.1 (2019-08-13)
---------------------
* erp5.util:
- Minor fix: Add missing 'six' dependency on setup.py
0.4.59 (2019-08-13)
-------------------
* erp5.util:
- testnode: Update the SlapOS API
- erp5.util: add support for Python 3
- testnode: handle cases of errors when updating git repositories
- testnode: fixed condition to not build dependencies like firefox
- testnode: kill processes having slapos_directory in command line
- testnode: spawn with close_fds=True in ProcessManager
0.4.58 (2019-03-05)
-------------------
* erp5.util
- testnode: Give more time to supervisord to kill subprocess [Sebastien Robin]
0.4.57 (2019-02-25)
-------------------
* erp5.util
- testnode: Allow to pass max_quantity to runComputerPartition [Lukasz Nowak]
- testnode: use CPUs a bit less aggressively [Jerome Perrin]
- testnode: avoid to rebuild testnode dependencies (firefox) all the time [Sebastien Robin]
- testnode: try much more aggressively to kill remaining processes [Sebastien Robin]
0.4.56 (2018-09-28)
-------------------
* erp5.util
- testnode: give more time for the slapos proxy to start
0.4.55 (2018-09-28)
-------------------
* erp5.util
- testnode: properly support deletion of chmod'ed files [Jerome Perrin]
0.4.54 (2018-09-13)
-------------------
* erp5.util
- testnode: update path of firefox
0.4.53 (2018-09-07)
-------------------
* erp5.util
- testnode: give project title to runTestSuite [Sebastien Robin]
- testnode: support chmod'ed files during directories cleanups [Jerome Perrin]
0.4.52 (2018-08-21)
-------------------
* erp5.util
- Make scalability testing framework more stable. Stop using a dummy frontend master
and use host.vifib.net frontend with a valid SSL certificate instead. Always use
https.
[Yusei Tahara]
0.4.51 (2017-07-17)
-------------------
* erp5.util
- scalability testing framework [Roque Porchetto]
0.4.50 (2017-11-22)
-------------------
* erp5.util.testnode
- call only methods on Distributor [Lukasz Nowak]
0.4.49 (2017-05-11)
-------------------
* erp5.util.taskdistribution:
- Wrap in xmlrpclib.Binary if needed
0.4.48 (2017-04-20)
-------------------
* erp5.util.testnode:
- fix values of --firefox_bin and --xvfb_bin [Julien Muchembled]
0.4.47 (2017-04-05)
-------------------
* erp5.util.testnode:
- Make it more robust in cases where we have from time to time failures [Sebastien Robin]
- cosmetic: avoid -repository suffix [Julien Muchembled]
0.4.46 (2016-09-29)
-------------------
* erp5.util.testnode:
- Include js-logtail at the MANIFEST.in
0.4.45 (2016-08-05)
-------------------
* erp5.util.testnode:
- Do not block all test suites if one of them define broken repository [Sebastien Robin]
- Make sure proxy is really dead before starting new one [Sebastien Robin]
0.4.44 (2016-03-22)
-------------------
* erp5.util.testnode:
- Cancel test result if testnodes are unable to create partitions and unable
to find runTestSuite command.
- Set specific environment variable to build NumPy/friends & Ruby gems in
parallel.
- For local repositories, ignore revision defined in software release.
- Make it possible to define slapos parameters in test suites.
0.4.43 (2015-09-02)
-------------------
* erp5.util
- Make services much more reactive when server is back [Sebastien Robin]
* erp5.util.testnode
- Simple log viewer app not to download the whole suite.log [Jérôme Perrin]
- Make code more robust when checkout git files [Sebastien Robin]
0.4.42 (2014-12-02)
-------------------
* erp5.util.testnode
- Typo [Jérôme Perrin]
- Run first found runTestSuite from lowest matching partition, not random one [Cédric de Saint Martin]
* erp5.util
- Drop support for Python < 2.7 [Julien Muchembled]
0.4.41 (2014-08-07)
-------------------
* erp5.util.testnode
- Fix running test location [Rafael Monnerat]
* erp5.util
- Move dealShebang into Utils [Rafael Monnerat]
0.4.40 (2014-07-30)
-------------------
* erp5.util.testnode
- Bugfix for erp5/util/testnode/__init__.py [Rafael Monnerat]
0.4.39 (2014-07-30)
-------------------
* erp5.util.testnode
- update SlapOSControler cmd calls [Rafael Monnerat]
0.4.38 (2014-04-16)
-------------------
* erp5.util.testnode:
- cleanup after the merge of scalability code [Cedric de Saint Martin]
0.4.37 (2014-01-21)
-------------------
* erp5.util.scalability:
- New module [Benjamin Blanc]
* erp5.util.testnode:
- Minimize writes to storage holding MySQL databases.
0.4.36 (2013-06-30)
-------------------
* erp5.util.testsuite:
- delete git repos if url has changed [Sebastien Robin]
0.4.35 (2013-06-21)
-------------------
* erp5.util.testsuite:
- Fix additional_bt5_repository_id into testnode.py
[Benjamin Blanc]
0.4.34 (2013-04-11)
-------------------
* erp5.util.testsuite:
- allow to define sub results in tests, like we do for selenium
[Sebastien Robin]
0.4.33 (2013-03-14)
-------------------
* erp5.util.zodbanalyze:
- Initial version of an improved version of ZODB's ZODB/scripts/analyze.py
[Kazuhiko Shiozaki]
0.4.32 (2013-03-13)
-------------------
* erp5.util.testnode:
- add handling of httplib.ResponseNotReady error message [Sebastien Robin]
- do not fail when a different test suite repository branch is specified
[Sebastien Robin]
0.4.31 (2013-03-01)
-------------------
* erp5.util.testnode:
- after resetting software, retry_software_count was not reset correctly
[Sebastien Robin]
0.4.30 (2013-02-20)
-------------------
* erp5.util.testnode:
- keep almost no tmp files, sometimes there are many GB in /tmp after
one day [Sebastien Robin]
0.4.29 (2013-02-20)
-------------------
* erp5.util.testnode:
- make it able to resist to problems with slapos proxy when building
software [Sebastien Robin]
0.4.28 (2013-02-19)
-------------------
* erp5.util.testnode:
- make it able to resist to problems with slapos proxy [Sebastien Robin]
0.4.27 (2013-02-15)
-------------------
* erp5.util.testnode:
- testnode was still sometimes logging at several files at a time
[Sebastien Robin]
0.4.26 (2013-02-14)
-------------------
* erp5.util.testnode:
- do not reraise OSError when cleaning temp files
0.4.25 (2013-02-11)
-------------------
* erp5.util.testnode:
- close all timers when quitting, this makes stopping an erp5testnode
much faster [Sebastien Robin]
- remove hack on slapos/testnode after fix of slapos.cookbook [Sebastien Robin]
- remove old tmp files left by buildout (buildout has to be fixed too)
[Sebastien Robin]
- remove logging handlers where the are not needed any more [Sebastien Robin]
- fixed the kill command, it was not able to properly kill child processes [Sebastien Robin]
0.4.24 (2013-02-11)
-------------------
* erp5.util.testnode:
- Fixed wrong location for the construction of test suite software
[Sebastien Robin]
0.4.23 (2013-02-11)
-------------------
* erp5.util.testnode:
- Make erp5testnode allow remote access to test suite logs instead of
uploading them to master [Tatuya Kamada], [Sebastien Robin]
0.4.22 (2013-01-08)
-------------------
* erp5.util.taskdistribution:
- fix regression when used on Python < 2.7
0.4.21 (2013-01-07)
-------------------
* erp5.util.taskdistribution:
- really fix lock to avoid errors with concurrent RPC calls
* erp5.util.testnode:
- do not run test suites on deleted branches
0.4.20 (2012-12-19)
-------------------
* erp5.util.testnode:
- Make sure to kill grandchilds when killing a process [Sebastien Robin]
0.4.19 (2012-12-17)
-------------------
* erp5.util.testnode:
- Fixed undefined variable [Sebastien Robin]
0.4.18 (2012-12-14)
-------------------
* erp5.util.testnode:
- Solve ascii issues when deleting software [Sebastien Robin]
0.4.17 (2012-12-10)
-------------------
* erp5.util.testnode:
- Add thread Timer to terminate locked processes [Sebastien Robin]
- Add more unit tests [Pere Cortes]
0.4.16 (2012-11-14)
-------------------
* erp5.util.testnode:
- Improve handling of Xvfb and firefox [Sebastien Robin]
- check supported parameters of runTestSuite [Pere Cortes]
- add unit for runTestSuite [Pere Cortes]
0.4.15 (2012-11-07)
-------------------
* erp5.util.testnode:
- fixed profile generation when software repos is not defined first
[Sebastien Robin]
- ask wich test has priority to master more often [Sebastien Robin]
0.4.14 (2012-11-05)
-------------------
* erp5.util.testnode:
- force rebuilding software to avoid using old soft/code [Sebastien Robin]
* erp5.util.taskdistribution:
- handle another possible error with master [Sebastien Robin]
0.4.13 (2012-10-31)
-------------------
* erp5.util.testnode:
- Add unit test for erp5testnode (with some hardcoded path that
needs to be fixed ASAP) [Sebastien Robin]
- Split long functions into several more simple ones for code
simplicity and readability [Sebastien Robin]
0.4.12 (2012-10-25)
-------------------
* erp5.util.testnode:
- Fixed several issues introduced by the management of test
suite by the master [Sebastien Robin]
0.4.11 (2012-10-22)
-------------------
* erp5.util.testnode:
- Take test suite parameters from the master, to allow distribution
of the work by the master [Pere Cortes], [Sebastien Robin]
0.4.10 (2012-10-01)
-------------------
* erp5.util.testnode:
- Allow to use a firefox built by testnode for
functional tests [Gabriel Monnerat]
0.4.9 (2012-10-01)
------------------
* erp5.util.testnode:
- remove --now parameter when calling slapgrid-sr since
it is not yet well supported [Sebastien Robin]
0.4.8 (2012-09-27)
------------------
* erp5.util.testnode:
- use taskdistribution module to reduce code
[Vincent Pelletier], [Pere Cortes]
0.4.7 (2012-09-03)
------------------
* erp5.util.taskdistribution:
- work around test lines acquiring values from parent when no value is
provided. [Vincent Pelletier]
- fix a regression introduced in 0.4.6 which allowed parallel XMLRPC calls,
which is not supported. [Rafael Monnerat]
* erp5.util.benchmark:
- check whether at least one result file could be found when generating a
scalability report. [Arnaud Fontaine]
- make sure that diagram bars are properly aligned in scalability test
report. [Arnaud Fontaine]
* erp5.util.testsuite:
- new module [Rafael Monnerat]
0.4.6 (2012-08-10)
------------------
* erp5.util.taskdistribution:
- set socket timeout for RPC calls to prevent a deadlock happens.
[Rafael Monnerat]
0.4.5 (2012-07-04)
------------------
* erp5.util.taskdistribution:
- xmlrpclib does not support named parameters, use positional ones
[Vincent Pelletier]
0.4.4 (2012-07-04)
------------------
* erp5.util.taskdistribution:
- New module [Vincent Pelletier]
0.4.3 (2012-04-24)
------------------
* erp5.util.testnode:
- Improve detection of the cancellation of a test on the master
- better management of SIGTERM signal
- cleanup test instances to make sure nothing stay from a previous
test run
0.4.2 (2012-04-11)
------------------
* erp5.util.testnode:
- Improve testnode logs
- add a thread to upload ongoing logs to the master regularly
- if the software release is not built successfully after a
few time, totally erase software. This help unblocking if
buildout is unable to update software.
- check if the last test result was cancelled in order to
allow relaunching test without restarting testnode
0.4.1 (2012-02-29)
------------------
* erp5.util.testnode:
- Improve testnode's reliability when contacting remote master
- Try to build software releases multiple times before giving up
0.3 (2011-12-23)
----------------
* erp5.util.webchecker:
- Imported from https://svn.erp5.org/repos/public/erp5/trunk/utils/
Utility to check caching policy of websites
* erp5.util.testnode:
- improve logging [Sebastien Robin]
- fix passing bt5_path [Gabriel Monnerat]
- fix profile_path concatenation [Nicolas Delaby]
- fix git updating and parsing repository paths [Julien Muchembled]
* erp5.util.benchmark:
- new utility, work in progress [Arnaud Fontaine]
0.2 (2011-09-20)
----------------
* Imported from https://svn.erp5.org/repos/public/erp5/trunk/utils/
- erp5.util.test_browser:
Programmable browser for functional and performance tests for ERP5
- erp5.util.benchmark:
Performance benchmarks for ERP5 with erp5.utils.test_browser
0.1 (2011-08-08)
----------------
* erp5.util.testnode imported from recipe-like slapos.cookbook
[Łukasz Nowak]
| null | The ERP5 Development Team | null | null | null | GPLv3 | erp5 utilities | [
"Development Status :: 2 - Pre-Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License (GPL)",
"Operating System :: POSIX",
"Programming Language :: Python",
"Topic :: Utilities"
] | [] | https://www.erp5.com | null | null | [] | [] | [] | [
"setuptools",
"psutil>=0.5.0",
"six",
"slapos.core; extra == \"testnode\"",
"xml_marshaller; extra == \"testnode\"",
"psutil>=0.5.0; extra == \"testnode\"",
"netaddr; extra == \"testnode\"",
"zope.testbrowser>=5.0.0; extra == \"testbrowser\"",
"z3c.etestbrowser; extra == \"testbrowser\"",
"erp5.ut... | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.3 | 2026-02-18T12:37:02.863909 | erp5_util-0.4.80.tar.gz | 137,039 | ae/85/ca1b3755ff9cb555879428f544d309f44200f85b285e0a2a47bfd74022f1/erp5_util-0.4.80.tar.gz | source | sdist | null | false | 463f59072131eee626a0eb4715548223 | 7fe66193cdb1a349d9b12e13126fb34f26c25ba37fd934b73405791d6d5ae44b | ae85ca1b3755ff9cb555879428f544d309f44200f85b285e0a2a47bfd74022f1 | null | [] | 0 |
2.4 | conformly | 0.3.6 | Generate valid & invalid test data from your typed schemas | # conformly
[](https://github.com/nashabanov/conformly/actions/workflows/ci.yaml)
[](https://codecov.io/github/nashabanov/conformly)
[](https://pypi.org/project/conformly/)
[](https://pypi.org/project/conformly/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/nashabanov/conformly/releases)
[](http://mypy-lang.org/)
[](https://github.com/astral-sh/ruff)
**Declarative test data generator for Python. Turns data models (dataclasses, Pydantic) and type constraints into valid fixtures and negative test cases.**
Define constraints once in type annotations — generate both valid and minimal invalid test data automatically.
No factories, no hardcoded fixtures, no drift when schema changes.
---
## Table of contents
- [Key Features](#key-features)
- [Install](#install)
- [Quickstart](#quickstart)
- [With dataclasses](#with-dataclasses)
- [With Pydantic](#with-pydantic)
- [API Reference](#api-reference)
- [Invalid Generation Contract](#invalid-generation-contract)
- [Optional Fields and Defaults](#optional-fields-and-defaults)
- [Constraints](#constraints)
- [Supported Constraints](#supported-constraints)
- [Defining Constraints](#defining-constraints)
- [User Cases](#use-cases)
- [Nested Models](#nested-models)
- [Development](#development)
- [Roadmap](#roadmap)
- [Changelog](#changelog)
- [License](#license)
- [Contributing](#contributing)
## Key Features
- **Constraint-driven generation** - type constraints act as executable generation rules
- **Minimal invalid cases** - only the targeted field violates constraints; everything else stays valid
- **Schema as single source of truth** - change a constraint → all test data adapts automatically
- **Unified constraint model** - multiple declaration styles normalized internally
- **Framework adapters** - dataclasses (built-in), Pydantic (optional via `conformly[pydantic]`)
## Install
```bash
# Core functionality (dataclasses support)
pip install conformly
# With Pydantic support
pip install conformly[pydantic]
```
## Quickstart
### With dataclasses
```python
from dataclasses import dataclass
from typing import Annotated
from conformly import case
from conformly.constraints import MinLength, Pattern, GreaterOrEqual, LessOrEqual
@dataclass
class User:
username: Annotated[str, MinLength(3)]
email: Annotated[str, Pattern(r"^[^\s@]+@[^\s@]+\.[^\s@]+$")]
age: Annotated[int, GreaterOrEqual(18), LessOrEqual(120)]
valid = case(User, valid=True)
# -> {"username": "Abc", "email": "x@y.z", "age": 42}
```
### With Pydantic
```python
from pydantic import BaseModel, Field
from conformly import case
class User(BaseModel):
username: str = Field(..., min_length=3, max_length=32)
email: str = Field(..., pattern=r"^[^\s@]+@[^\s@]+\.[^\s@]+$")
age: int = Field(..., ge=18, le=120)
valid = case(User, valid=True)
# -> {"username": "Abc", "email": "x@y.z", "age": 42}
```
## API Reference
```python
case(model, *, valid: bool, strategy: str | None = None, allow_type_mismatch: bool = False) -> dict
cases(model, *, valid: bool, strategy: str = "all", count: int | None = None, allow_type_mismatch: bool = False, allow_structural_violations: bool = False) -> list[dict]
```
`strategy` values:
- `<field_name>` - target specific field for invalidation (for nested fields using dot syntax `"profile.name"`)
- `"random"` - choose a random field/constraint to violate
- `"all"` - (for `cases`) produce all minimal invalid variations for the model
- `"first"` - violate the first constrained field (for `case`) or take the first N constrained fields (for `cases`)
## Invalid Generation Contract
For `case(Model, valid=False, strategy="<field>")`:
- **If `allow_type_mismatch=True`**, the generator may substitute a type mismatch (e.g., string instead of int) in place of a semantic constraint violation for the targeted field.
- **If `allow_structural_violations=True`**, the generator may substitute a missing field in place of any other violation (available only with `strategy="all"`)
- **Exactly one field is targeted** (the one specified by `strategy`).
- **The generator will violate constraints** for that field, making it invalid.
- **If a field has multiple constraints**, the violated constraint may be chosen by generator logic (not necessarily the one you expect).
- **For numeric bounds**, invalid values may violate the lower or upper bound (e.g., `age > 120` or `age < 18`).
- **For float bounds**, invalid generation may produce `inf` when violating the upper boundary.
If you need **deterministic control** over which exact constraint to violate, that is not yet implemented (see Roadmap).
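As a concrete illustration of this contract, reusing the `User` dataclass from the quickstart (which bound is violated is chosen by the generator):
```python
from conformly import case

invalid = case(User, valid=False, strategy="age")
# Only "age" is targeted; the other fields stay valid, e.g.:
# {"username": "Abc", "email": "x@y.z", "age": 17}  # or an age above 120
```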
## Optional Fields and Defaults
- If a field is **optional** (`Optional[T]`), valid generation may produce `None`.
- If a field has a **default value**, valid generation returns the default.
- Invalid generation **requires at least one constraint** on the targeted field (raises `ValueError` otherwise).
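A minimal sketch of these rules (the `Note` model is hypothetical):
```python
from dataclasses import dataclass
from typing import Annotated, Optional

from conformly import case
from conformly.constraints import MinLength

@dataclass
class Note:
    title: Annotated[str, MinLength(3)]
    body: Optional[str] = None  # optional: valid generation may produce None
    pinned: bool = False        # default: valid generation returns False

case(Note, valid=True)                      # e.g. {"title": "Abc", "body": None, "pinned": False}
case(Note, valid=False, strategy="pinned")  # raises ValueError: "pinned" has no constraints
```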
## Constraints
### Supported constraints
| Type | Constraint | Pydantic equivalent |
|------|------------|---------------------|
| String | `MinLength(n)` | `min_length=n` |
| String | `MaxLength(n)` | `max_length=n` |
| String | `Pattern(regex)` | `pattern=regex` |
| Numeric | `GreaterThan(v)` | `gt=v` |
| Numeric | `GreaterOrEqual(v)` | `ge=v` |
| Numeric | `LessThan(v)` | `lt=v` |
| Numeric | `LessOrEqual(v)` | `le=v` |
| Closed-set | `OneOf(values)` | `Literal[...]`, `Enum` |
> Important: Pydantic's constr(), conint(), and functional validators are not interpreted as constraints.
> Use Field() parameters for constraint extraction.
### Defining constraints
#### 1) `Annotated[..., Constraint(...)]` (recommended)
> You can use for both model types (dataclasses, Pydantic)
```python
from typing import Annotated
from conformly.constraints import MinLength, GreaterOrEqual
username: Annotated[str, MinLength(3)]
```
#### 2) `Annotated[..., "k=v"]` (shorthand string syntax)
```python
title: Annotated[str, "min_length=5", "max_length=200"]
```
#### 3) `Field(...)` (Pydantic only)
```python
from pydantic import Field
username: str = Field(..., min_length=3)
```
#### 4) `field(metadata={...})` (dataclasses only)
```python
from dataclasses import field
sku: str = field(metadata={"pattern": r"^[A-Z0-9]{8}$"})
```
> All syntaxes are fully compatible within their respective frameworks.
## Use Cases
### API Testing
```python
# Valid payloads for happy-path tests
for _ in range(100):
payload = case(CreateUserRequest, valid=True)
response = client.post("/users", json=payload)
assert response.status_code == 201
# Invalid payloads for error handling tests
invalid = case(CreateUserRequest, valid=False, strategy="age")
response = client.post("/users", json=invalid)
assert response.status_code == 400
# As option create all possible invalid cases for payload in one only line
invalid_payloads = cases(CreateUserRequest, valid=False, strategy="all")
for payload in invalid_payloads:
response = client.post("/users", json=payload)
assert response.status_code == 400
```
### Database Seeding
```python
# Generate realistic test data respecting schema constraints
products = cases(Product, valid=True, count=1000)
db.insert_many("products", products)
```
### Fuzzing & Property-Based Testing
Conformly is not a replacement for Hypothesis, but a complementary tool
for schema-driven testing and negative case generation.
```python
# Generate random invalid data to stress-test validation
for _ in range(500):
invalid = case(Model, valid=False, strategy="random")
assert validate(invalid) is False # Should always reject
```
## Nested Models
`Conformly` supports nested models represented as tree structures
(e.g. dataclasses containing other dataclasses).
> Cyclic references between models are not supported
Constraints defined on nested fields are discovered recursively and
can be used for both valid and invalid data generation.
### Model Declaration
```python
from dataclasses import dataclass
from typing import Annotated
from conformly.constraints import MinLength, GreaterOrEqual, Pattern
@dataclass
class Profile:
email: Annotated[str, Pattern(r"^[^\s@]+@[^\s@]+\.[^\s@]+$")]
phone: Annotated[str, Pattern(r"^\+[1-9]\d{1,14}$")]
@dataclass
class User:
name: Annotated[str, MinLength(3)]
age: Annotated[int, GreaterOrEqual(18)]
profile: Profile
```
### Generation Example
```python
from conformly import case
valid_data = case(User, valid=True)
print(valid_data)
# {
# "name": "validname",
# "age": 25,
# "profile": {
# "email": "some@email.com",
# "phone": "+12025550123"
# }
# }
invalid_data_by_field = case(User, valid=False, strategy="profile.email")
print(invalid_data_by_field)
# {
# "name": "validname",
# "age": 25,
# "profile": {
# "email": "nonemailstring",
# "phone": "+12025550123"
# }
# }
```
## Development
Install dependencies:
```bash
uv sync
```
Run tests:
```bash
uv run -m pytest -q
```
Run with coverage:
```bash
uv run -m pytest --cov=conformly --cov-report=term-missing
```
Build & check package:
```bash
uv build
uv run -m twine check dist/*
```
## Roadmap
- **Deterministic invalid generation** - explicitly select which constraint to violate
- **Better regex invalidation** - guarantee that invalid strings don't match patterns
- **More adapters** - TypedDict, attrs support
- **More constraints and types** - `multiple_of`, `list[T]`, `dict[T]`, etc.
## Changelog
See [CHANGELOG.md](CHANGELOG.md) for release notes and migration guidance.
## License
MIT — see [LICENSE](LICENSE) file for details
## Contributing
Contributions welcome!
- Fork the repo
- Create a feature branch
- Add tests for new functionality
- Run `uv run -m pytest` and `uv run -m ruff check .`
- Submit a pull request
| text/markdown | null | Nikita Shabanov <nik.shabanov2024@gmail.com> | null | null | MIT | testing, fixtures, test-data, dataclasses, validation | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"rstr>=3.2.2",
"pydantic<3.0.0,>=2.0.0; extra == \"pydantic\""
] | [] | [] | [] | [
"Homepage, https://github.com/nashabanov/conformly",
"Repository, https://github.com/nashabanov/conformly.git",
"Issues, https://github.com/nashabanov/conformly/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T12:36:15.229323 | conformly-0.3.6.tar.gz | 26,627 | 52/78/9965a28aedddac29cf388f86904b6db9fd77ce68a7b01fe44d8c8bd6f63a/conformly-0.3.6.tar.gz | source | sdist | null | false | 5535abdaac47d6eef3aa5fda3548d489 | 45f56b4a45c8bad2f1d4baade11b9da1a8ae9e3d2d69609f8bda81fd8e18746b | 52789965a28aedddac29cf388f86904b6db9fd77ce68a7b01fe44d8c8bd6f63a | null | [
"LICENSE"
] | 263 |
2.4 | eba-xbridge | 2.0.0rc1 | XBRL-XML to XBRL-CSV converter for EBA Taxonomy (version 4.2) | XBridge (eba-xbridge)
#####################
.. image:: https://img.shields.io/pypi/v/eba-xbridge.svg
:target: https://pypi.org/project/eba-xbridge/
:alt: PyPI version
.. image:: https://img.shields.io/pypi/pyversions/eba-xbridge.svg
:target: https://pypi.org/project/eba-xbridge/
:alt: Python versions
.. image:: https://img.shields.io/github/license/Meaningful-Data/xbridge.svg
:target: https://github.com/Meaningful-Data/xbridge/blob/main/LICENSE
:alt: License
.. image:: https://img.shields.io/github/actions/workflow/status/Meaningful-Data/xbridge/testing.yml?branch=main
:target: https://github.com/Meaningful-Data/xbridge/actions
:alt: Build status
Overview
========
XBridge is a Python library for converting XBRL-XML files into XBRL-CSV files using the EBA (European Banking Authority) taxonomy. It provides a simple, reliable way to transform regulatory reporting data from XML format to CSV format.
The library supports **EBA Taxonomy version 4.2**, as published on 14 January 2026, and includes support for DORA (Digital Operational Resilience Act) CSV conversion. The library must be updated with each new EBA taxonomy version release.
Key Features
============
* **XBRL-XML to XBRL-CSV Conversion**: Seamlessly convert XBRL-XML instance files to XBRL-CSV format
* **Command-Line Interface**: Quick conversions without writing code using the ``xbridge`` CLI
* **Python API**: Programmatic conversion for integration with other tools and workflows
* **EBA Taxonomy 4.2 Support**: Built for the latest EBA taxonomy specification
* **DORA CSV Conversion**: Support for Digital Operational Resilience Act reporting
* **Configurable Validation**: Flexible filing indicator validation with strict or warning modes
* **Decimal Handling**: Intelligent decimal precision handling with configurable options
* **Type Safety**: Fully typed codebase with MyPy strict mode compliance
* **Python 3.9+**: Supports Python 3.9 through 3.13
Prerequisites
=============
* **Python**: 3.9 or higher
* **7z Command-Line Tool**: Required for loading compressed taxonomy files (7z or ZIP format)
* On Ubuntu/Debian: ``sudo apt-get install p7zip-full``
* On macOS: ``brew install p7zip``
* On Windows: Download from `7-zip.org <https://www.7-zip.org/>`_
Installation
============
Install XBridge from PyPI using pip:
.. code-block:: bash
pip install eba-xbridge
For development installation, see `CONTRIBUTING.md <CONTRIBUTING.md>`_.
Quick Start
===========
XBridge offers two ways to convert XBRL-XML files to XBRL-CSV: a command-line interface (CLI) for quick conversions, and a Python API for programmatic use.
Command-Line Interface
----------------------
The CLI provides a quick way to convert files without writing code:
.. code-block:: bash
# Basic conversion (output to same directory as input)
xbridge instance.xbrl
# Specify output directory
xbridge instance.xbrl --output-path ./output
# Continue with warnings instead of errors
xbridge instance.xbrl --no-strict-validation
# Include headers as datapoints
xbridge instance.xbrl --headers-as-datapoints
**CLI Options:**
* ``--output-path PATH``: Output directory (default: same as input file)
* ``--headers-as-datapoints``: Treat headers as datapoints (default: False)
* ``--strict-validation``: Raise errors on validation failures (default: True)
* ``--no-strict-validation``: Emit warnings instead of errors
For more CLI options, run ``xbridge --help``.
Python API - Basic Conversion
------------------------------
Convert an XBRL-XML instance file to XBRL-CSV using the Python API:
.. code-block:: python
from xbridge.api import convert_instance
# Basic conversion
input_path = "path/to/instance.xbrl"
output_path = "path/to/output"
convert_instance(input_path, output_path)
The converted XBRL-CSV files will be saved as a ZIP archive in the output directory.
Python API - Advanced Usage
----------------------------
Customize the conversion with additional parameters:
.. code-block:: python
from xbridge.api import convert_instance
# Conversion with custom options
convert_instance(
instance_path="path/to/instance.xbrl",
output_path="path/to/output",
headers_as_datapoints=True, # Treat headers as datapoints
validate_filing_indicators=True, # Validate filing indicators
strict_validation=False, # Emit warnings instead of errors for orphaned facts
)
Python API - Handling Warnings
------------------------------
XBridge emits structured warnings that can be filtered or turned into errors in your own code.
The most common ones are:
* ``IdentifierPrefixWarning``: Unknown entity identifier prefix; XBridge falls back to ``rs``.
* ``FilingIndicatorWarning``: Filing indicator inconsistencies; some facts are excluded.
To capture these warnings when using ``convert_instance``:
.. code-block:: python
import warnings
from xbridge.api import convert_instance
from xbridge.exceptions import XbridgeWarning, FilingIndicatorWarning
input_path = "path/to/instance.xbrl"
output_path = "path/to/output"
with warnings.catch_warnings(record=True) as caught:
# Ensure all xbridge warnings are captured
warnings.simplefilter("always", XbridgeWarning)
zip_path = convert_instance(
instance_path=input_path,
output_path=output_path,
validate_filing_indicators=True,
strict_validation=False, # Warnings instead of errors for orphaned facts
)
filing_warnings = [
w for w in caught if issubclass(w.category, FilingIndicatorWarning)
]
for w in filing_warnings:
print(f"Filing indicator warning: {w.message}")
To treat all XBridge warnings as errors:
.. code-block:: python
import warnings
from xbridge.api import convert_instance
from xbridge.exceptions import XbridgeWarning
with warnings.catch_warnings():
warnings.simplefilter("error", XbridgeWarning)
convert_instance("path/to/instance.xbrl", "path/to/output")
Loading an Instance
-------------------
Load and inspect an XBRL-XML instance without converting:
.. code-block:: python
from xbridge.api import load_instance
instance = load_instance("path/to/instance.xbrl")
# Access instance properties
print(f"Entity: {instance.entity}")
print(f"Period: {instance.period}")
print(f"Facts count: {len(instance.facts)}")
How XBridge Works
=================
XBridge performs the conversion in several steps:
1. **Load the XBRL-XML instance**: Parse and extract facts, contexts, scenarios, and filing indicators
2. **Load the EBA taxonomy**: Access pre-processed taxonomy modules containing tables and variables
3. **Match and validate**: Join instance facts with taxonomy definitions
4. **Generate CSV files**: Create XBRL-CSV files including:
* Data tables with facts and dimensions
* Filing indicators showing reported tables
* Parameters (entity, period, base currency, decimals)
5. **Package output**: Bundle all CSV files into a ZIP archive
Output Structure
----------------
The output ZIP file contains:
* **META-INF/**: JSON report package metadata
* **reports/**: CSV files for each reported table
* **filing-indicators.csv**: Table reporting indicators
* **parameters.csv**: Report-level parameters
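To sanity-check a conversion, the contents of the report package can be listed with Python's standard library (a minimal sketch; the archive name below is illustrative):
.. code-block:: python

    import zipfile

    # List every file in the converted XBRL-CSV report package
    with zipfile.ZipFile("path/to/output/instance.zip") as package:
        for name in package.namelist():
            print(name)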
Documentation
=============
Comprehensive documentation is available at `docs.xbridge.meaningfuldata.eu <https://docs.xbridge.meaningfuldata.eu>`_.
The documentation includes:
* **API Reference**: Complete API documentation
* **Quickstart Guide**: Step-by-step tutorials
* **Technical Notes**: Architecture and design details
* **FAQ**: Frequently asked questions
Taxonomy Loading
================
If you need to work with the EBA taxonomy directly, you can load it using:
.. code-block:: bash
python -m xbridge.taxonomy_loader --input_path path/to/FullTaxonomy.7z
This generates an ``index.json`` file containing module references and pre-processed taxonomy data.
.. warning::
Loading the taxonomy from a 7z package may take several minutes. Ensure the ``7z`` command is available on your system.
Configuration Options
=====================
convert_instance Parameters
----------------------------
* **instance_path** (str | Path): Path to the XBRL-XML instance file
* **output_path** (str | Path | None): Output directory for CSV files (default: current directory)
* **headers_as_datapoints** (bool): Treat table headers as datapoints (default: False)
* **validate_filing_indicators** (bool): Validate that facts belong to reported tables (default: True)
* **strict_validation** (bool): Raise errors on validation failures; if False, emit warnings (default: True)
Troubleshooting
===============
Common Issues
-------------
**7z command not found**
Install the 7z command-line tool using your system's package manager (see Prerequisites).
**Taxonomy version mismatch**
Ensure you're using the correct version of XBridge for your taxonomy version. XBridge 1.5.x supports EBA Taxonomy 4.2.
**Orphaned facts warning/error**
These are facts that don't belong to any reported table. Set ``strict_validation=False`` to continue with warnings instead of errors.
**Decimal precision issues**
XBridge automatically handles decimal precision from the taxonomy. Check the parameters.csv file for applied decimal settings.
For more issues, see our `FAQ <https://docs.xbridge.meaningfuldata.eu/faq.html>`_ or `open an issue <https://github.com/Meaningful-Data/xbridge/issues>`_.
Contributing
============
We welcome contributions! Please see `CONTRIBUTING.md <CONTRIBUTING.md>`_ for:
* Development setup instructions
* Code style guidelines
* Testing requirements
* Pull request process
Before contributing, please read our `Code of Conduct <CODE_OF_CONDUCT.md>`_.
Changelog
=========
See `CHANGELOG.md <CHANGELOG.md>`_ for a detailed history of changes.
Support
=======
* **Documentation**: https://docs.xbridge.meaningfuldata.eu
* **Issue Tracker**: https://github.com/Meaningful-Data/xbridge/issues
* **Email**: info@meaningfuldata.eu
* **Company**: https://www.meaningfuldata.eu/
Security
========
For security issues, please see our `Security Policy <SECURITY.md>`_.
License
=======
This project is licensed under the Apache License 2.0 - see the `LICENSE <LICENSE>`_ file for details.
Authors & Maintainers
=====================
**MeaningfulData** - https://www.meaningfuldata.eu/
Maintainers:
* Antonio Olleros (antonio.olleros@meaningfuldata.eu)
* Jesus Simon (jesus.simon@meaningfuldata.eu)
* Francisco Javier Hernandez del Caño (javier.hernandez@meaningfuldata.eu)
* Guillermo Garcia Martin (guillermo.garcia@meaningfuldata.eu)
Acknowledgments
===============
This project is designed to work with the European Banking Authority (EBA) taxonomy for regulatory reporting.
| text/x-rst | MeaningfulData | info@meaningfuldata.eu | Antonio Olleros | antonio.olleros@meaningfuldata.eu | Apache 2.0 | xbrl, eba, taxonomy, csv, xml | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"lxml<6.0,>=5.2.1",
"numpy<2,>=1.23.2; python_version < \"3.13\"",
"numpy>=2.1.0; python_version >= \"3.13\"",
"pandas<3.0,>=2.1.4"
] | [] | [] | [] | [
"Documentation, https://docs.xbridge.meaningfuldata.eu",
"IssueTracker, https://github.com/Meaningful-Data/xbridge/issues",
"MeaningfulData, https://www.meaningfuldata.eu/",
"Repository, https://github.com/Meaningful-Data/xbridge"
] | poetry/2.3.2 CPython/3.12.3 Linux/6.14.0-1017-azure | 2026-02-18T12:36:06.404273 | eba_xbridge-2.0.0rc1-py3-none-any.whl | 15,263,987 | 70/1a/4aa85cfabd1e6a95d2bf745ce9b0a8b200e2a3cdbf97ba4cf78629ccd9fb/eba_xbridge-2.0.0rc1-py3-none-any.whl | py3 | bdist_wheel | null | false | 2837f1976668b80732ec0f13eb4a1786 | 0b861320dfeb95b12eec5ba1659f96ec5b2532424db2f45658ff47bdd6ef221c | 701a4aa85cfabd1e6a95d2bf745ce9b0a8b200e2a3cdbf97ba4cf78629ccd9fb | null | [
"LICENSE"
] | 232 |
2.4 | intent-cli | 0.1.3 | Safety-first tool to generate tool-owned project files from intent.toml | # Intent
Intent keeps project automation config in sync from a single `intent.toml`.
- Source of truth: `intent.toml`
- Reads: `intent.toml`, `pyproject.toml`
- Generates baseline tool-owned files: `.github/workflows/ci.yml`, `justfile`
Full reference: [`documentation.md`](documentation.md)
## Install
From PyPI:
```bash
python -m pip install intent-cli
```
From source:
```bash
python -m pip install -e .
```
## Quick Start
1. Initialize config:
```bash
intent init
```
2. Generate files:
```bash
intent sync --write
```
This bootstraps a baseline CI workflow and `justfile` from your `intent.toml`.
If you configure `[checks].assertions`, `intent check` evaluates typed JSON assertions on command output.
If you configure `[[ci.jobs]]`, Intent generates workflow jobs from typed job/step definitions instead of the baseline single-job template.
If you configure `[[ci.artifacts]]`, Intent generates upload steps for `actions/upload-artifact`.
If you configure `[ci.summary]`, Intent can publish a built-in markdown summary to `GITHUB_STEP_SUMMARY`.
3. Verify drift in CI/pre-commit:
```bash
intent check --strict
```
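If you want the drift check in a pipeline other than the workflow Intent generates, a standalone GitHub Actions job might look like this (a minimal sketch; the action versions and Python version are illustrative):
```yaml
name: intent-drift
on: [push, pull_request]
jobs:
  drift:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: python -m pip install intent-cli
      - run: intent check --strict
```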
## Minimal `intent.toml`
```toml
[intent]
schema_version = 1
[python]
version = "3.12"
[commands]
test = "pytest -q"
lint = "ruff check ."
eval = "cat metrics.json"
[checks]
assertions = [
{ command = "eval", path = "summary.score", op = "gte", value = 0.9 }
]
[ci]
install = "-e .[dev]"
[policy]
pack = "default"
strict = false
```
## Common Commands
| Command | Purpose |
| --- | --- |
| `intent init` | Create starter config. |
| `intent init --from-existing` | Infer Python version from `pyproject.toml` when possible. |
| `intent init --starter tox` | Generate tool-owned `tox.ini` starter (reuses existing `intent.toml`). |
| `intent init --starter nox` | Generate tool-owned `noxfile.py` starter (reuses existing `intent.toml`). |
| `intent sync` | Show config + version checks. |
| `intent sync --show-json` | Print resolved sync config as JSON. |
| `intent sync --show-json --explain` | Include generated-file mapping details in JSON. |
| `intent sync --explain` | Show text mapping from intent config to generated blocks. |
| `intent sync --dry-run` | Preview file changes without writing. |
| `intent sync --write` | Write generated files. |
| `intent sync --write --adopt` | Adopt matching non-owned generated files. |
| `intent sync --write --force` | Force-overwrite non-owned generated files. |
| `intent check` | Detect drift without writing. |
| `intent check --format json` | Machine-readable drift report. |
| `intent doctor` | Diagnose issues with actionable fixes. |
| `intent reconcile --plan` | Preview Python-version reconciliation. |
| `intent reconcile --apply --allow-existing` | Apply reconciliation including existing-file edits. |
## Typed CI Jobs
```toml
[commands]
lint = "ruff check ."
test = "pytest -q"
[[ci.jobs]]
name = "lint"
steps = [{ uses = "actions/checkout@v4" }, { command = "lint" }]
[[ci.jobs]]
name = "test"
needs = ["lint"]
timeout_minutes = 20
matrix = { python-version = ["3.11", "3.12"] }
steps = [
{ uses = "actions/setup-python@v5", with = { python-version = "${{ matrix.python-version }}" } },
{ command = "test", continue_on_error = false }
]
```
## CI Artifacts
```toml
[ci]
artifacts = [
{ name = "junit", path = "reports/junit.xml", retention_days = 7, when = "on-failure" },
{ name = "coverage", path = "coverage.xml", when = "always" }
]
```
## CI Summary
```toml
[ci.summary]
enabled = true
title = "Quality Report"
include_assertions = true
metrics = [
{ label = "score", command = "eval", path = "metrics.score", baseline_path = "metrics.prev_score", precision = 3 }
]
```
## Safety Model
- Writes only tool-owned files in normal sync flow.
- Refuses unsafe overwrite unless explicitly requested.
- Supports explicit ownership modes: `strict`, `adopt`, `force`.
- Uses stable error codes (`INTENTxxx`) for automation.
- Supports typed quality assertions via `[checks].assertions` in `intent.toml`.
## Pre-commit Hook
```yaml
repos:
- repo: local
hooks:
- id: intent-check
name: intent check
entry: intent check --strict
language: system
pass_filenames: false
```
## License
MIT
| text/markdown | sankarebarri | null | null | null | MIT | ci, github-actions, justfile, just, automation, scaffolding, generator, devtools, tooling, configuration | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Build Tools... | [] | null | null | >=3.12 | [] | [] | [] | [
"packaging>=24.0",
"typer>=0.9",
"pytest>=8; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"build>=1.0; extra == \"dev\"",
"twine>=5.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/sankarebarri/intent",
"Repository, https://github.com/sankarebarri/intent",
"Issues, https://github.com/sankarebarri/intent/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T12:35:40.859811 | intent_cli-0.1.3.tar.gz | 34,765 | eb/0d/d862c0e54f4e294c8b588f7ba87a5da2c1b714aa4b0f813052c129cc266d/intent_cli-0.1.3.tar.gz | source | sdist | null | false | ba47672e76281f1e0d96d8c903935332 | 3365999a65691b43751a28ff8f91ed52f5a96f9f224860712c8bafdd8a4692de | eb0dd862c0e54f4e294c8b588f7ba87a5da2c1b714aa4b0f813052c129cc266d | null | [] | 243 |
2.4 | py-alaska | 0.1.23 | ALASKA - Multiprocess Task Management Framework for Python | # ALASKA
**A**dvanced **L**ightweight **A**synchronous **S**ervice **K**ernel for **A**pplications
[](https://badge.fury.io/py/py-alaska)
[](https://pypi.org/project/py-alaska/)
[](https://opensource.org/licenses/MIT)
A Python framework for building multiprocess task management systems with RMI (Remote Method Invocation), shared memory, and real-time monitoring.
## Features
- **Multiprocess Task Management**: Run tasks as separate processes or threads
- **RMI (Remote Method Invocation)**: Call methods across processes seamlessly
- **Shared Memory (SmBlock)**: Zero-copy image/data sharing between processes
- **Signal/Broker Pattern**: Pub/sub messaging between tasks
- **Web Monitoring Dashboard**: Real-time HTTP-based monitoring UI
- **Performance Metrics**: IPC/FUNC timing statistics with sliding window
- **Auto-restart**: Automatic task recovery on failure
- **JSON Configuration**: Flexible configuration with injection support
## Installation
```bash
# Basic installation
pip install py-alaska
# With monitoring support (psutil)
pip install py-alaska[monitor]
# With camera/GUI support (PySide6)
pip install py-alaska[camera]
# Full installation
pip install py-alaska[all]
```
## Quick Start
### 1. Define a Task
```python
from py_alaska import rmi_class
@rmi_class(name="my_task", mode="process", restart=True)
class MyTask:
def __init__(self):
self.runtime = None # Injected by framework
self.counter = 0
def increment(self, value: int) -> int:
"""RMI method: can be called from other tasks"""
self.counter += value
return self.counter
def get_count(self) -> int:
"""RMI method: query current count"""
return self.counter
def task_loop(self):
"""Main loop: runs continuously"""
while not self.runtime.should_stop():
# Do work here
pass
```
### 2. Create Configuration (config.json)
```json
{
"app_info": {
"name": "MyApp",
"version": "1.0.0",
"id": "myapp_001"
},
"task_config": {
"_monitor": {
"port": 7000,
"exit_hook": true
},
"worker/my_task": {
"counter": 0
}
}
}
```
### 3. Run the Application
```python
from py_alaska import TaskManager, gconfig
import my_task # Import to register rmi_class
def main():
gconfig.load("config.json")
manager = TaskManager(gconfig)
manager.start_all()
# Access via RMI
worker = manager.get_client("worker")
result = worker.increment(10)
print(f"Counter: {result}")
# Web monitor at http://localhost:7000
import time
time.sleep(3600)
manager.stop_all()
if __name__ == "__main__":
main()
```
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ TaskManager │
├─────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Task A │ │ Task B │ │ Task C │ │
│ │ (Process) │ │ (Process) │ │ (Thread) │ │
│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ │
│ │ │ │ │
│ └────────────────┼────────────────┘ │
│ │ │
│ ┌─────┴─────┐ │
│ │ RMI Bus │ │
│ │ (Queue) │ │
│ └─────┬─────┘ │
│ │ │
│ ┌─────┴─────┐ │
│ │ SmBlock │ │
│ │ (Shared) │ │
│ └───────────┘ │
├─────────────────────────────────────────────────────────────┤
│ TaskMonitor │
│ HTTP :7000 │
└─────────────────────────────────────────────────────────────┘
```
## Core Components
| Component | Description |
|-----------|-------------|
| `TaskManager` | Main orchestrator for all tasks |
| `rmi_class` | Decorator to define a task |
| `RmiClient` | Client for calling remote methods |
| `SmBlock` | Shared memory block pool for zero-copy data sharing |
| `Signal/SignalBroker` | Pub/sub messaging system |
| `TaskMonitor` | HTTP-based web monitoring dashboard |
| `GConfig` | Global configuration management |
## API Reference
### rmi_class Decorator
```python
@rmi_class(
name="task_name", # Unique task identifier
mode="process", # "process" or "thread"
restart=True, # Auto-restart on failure
restart_delay=3.0, # Delay before restart (seconds)
)
```
### RMI Methods
Any public method in an `rmi_class`-decorated class becomes an RMI method:
```python
# In Task A
def calculate(self, x: int, y: int) -> int:
return x + y
# From Task B or main process
client = manager.get_client("task_a")
result = client.calculate(10, 20) # Returns 30
```
### SmBlock (Shared Memory)
```python
# Configuration
"_smblock": {
"image_pool": {"shape": [1024, 1024, 3], "maxsize": 100}
}
# In task
index = self.smblock.malloc() # Allocate block
image = self.smblock.get(index) # Get numpy array
image[:] = frame # Write data
self.smblock.mfree(index) # Release block
```
## Monitoring
Access the web dashboard at `http://localhost:7000` (configurable port).
**Features:**
- Real-time task status (alive/stopped)
- RMI call statistics (count, timing)
- CPU/Memory usage per task
- SmBlock pool utilization
- Configuration editor
- Performance metrics (IPC/FUNC time)
## Requirements
- Python >= 3.8
- numpy >= 1.20.0
- psutil >= 5.8.0 (optional, for monitoring)
- PySide6 >= 6.0.0 (optional, for camera/GUI)
## License
MIT License - see [LICENSE](LICENSE) for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
| text/markdown | null | DivisionVision <info@division.co.kr> | null | DivisionVision <info@division.co.kr> | null | multiprocess, task, rmi, ipc, monitoring, shared-memory | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python ... | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.20.0",
"opencv-python>=4.5.0",
"PySide6>=6.0.0",
"loguru>=0.7.0",
"pydantic>=2.0.0",
"pyyaml>=6.0",
"watchdog>=3.0.0",
"pyserial>=3.5",
"Pillow>=9.0.0",
"matplotlib>=3.5.0",
"psutil>=5.8.0; extra == \"monitor\"",
"PySide6>=6.0.0; extra == \"camera\"",
"opencv-python>=4.5.0; extra =... | [] | [] | [] | [
"Homepage, https://github.com/divisionvision/alaska",
"Documentation, https://github.com/divisionvision/alaska#readme",
"Repository, https://github.com/divisionvision/alaska.git",
"Issues, https://github.com/divisionvision/alaska/issues"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-18T12:35:20.614193 | py_alaska-0.1.23.tar.gz | 1,022,923 | 6a/1d/052978aacc677f4bb08e925e7bcd677c1f884dbf2a0ee3e709592bd0fac2/py_alaska-0.1.23.tar.gz | source | sdist | null | false | 1e99f8fa6c91d1438b45dbc226c557c5 | 6e14cacd78249b905ab4bcebe8f07e07a0014978126177a5acdef4ba860872de | 6a1d052978aacc677f4bb08e925e7bcd677c1f884dbf2a0ee3e709592bd0fac2 | MIT | [
"LICENSE"
] | 266 |
2.4 | kotak-dashboard | 1.0.0 | A Production-Ready Portfolio Dashboard for Kotak Neo API | # Kotak Neo Portfolio Dashboard 📈
A professional, production-ready Python dashboard for tracking your Kotak Securities portfolio in real-time. Built with Streamlit, this application connects to the Kotak Neo API to provide live P&L tracking, sector analytics, and visual portfolio insights.
## 🚀 Features
### 📊 Live Portfolio Hydration
- Automatically fetches live market quotes (LTP)
- Updates your portfolio value in real-time
### 🧠 Intelligent Caching
- Smart fallback logic
- Calculates Day's Change using OHLC data if the live feed is incomplete
### 📈 Visual Analytics
- **Day's P&L Tracking**: See exactly how much your portfolio moved today
- **Sector Allocation**: Interactive Donut charts showing exposure by sector
- **Allocation Bar Chart**: Visual breakdown of investment distribution
### 🔐 Secure Authentication
- Uses TOTP (Time-based OTP) for seamless 2FA login
- Credentials stored securely in a local `.env` file (never hardcoded)
### 🧱 Robust Architecture
- Modular backend design
- Separation of:
- Authentication
- Data Fetching
- Analytics
## 📂 Project Structure
```
kotak-dashboard/
├── .devcontainer/ # VS Code Dev Container configuration
├── .github/workflows/ # CI/CD Pipelines
├── kotak_dashboard/ # Main Application Package
│ ├── __init__.py
│ ├── app.py # Streamlit UI Layer
│ ├── backend.py # Core Logic (Auth, API, Math)
│ └── cli.py # Command-line entry point
├── tests/ # Test Suite
│ ├── conftest.py
│ ├── test_backend_analytics.py
│ └── test_backend_data.py
├── .env.example # Template for API credentials
├── Dockerfile # Production-ready Docker image
├── pyproject.toml # Modern Python dependency management
└── README.md # Documentation
```
## 🛠️ Prerequisites
### Python Environment
- **Python 3.11+** (Strict requirement for neo_api_client compatibility)
### Kotak Neo API Credentials
You will need the following from the Kotak Neo API Portal:
- **Consumer Key**
- **Mobile Number**
- **UCC** (User Client Code)
- **TOTP Secret Key** (The alphanumeric key used to generate 2FA codes)
- **MPIN**
## ⚙️ Installation & Setup
### Option 1: Standard Local Install (Recommended)
1. **Clone the repository**
```bash
git clone https://github.com/yourusername/kotak-dashboard.git
cd kotak-dashboard
```
2. **Create a Virtual Environment (Python 3.11)**
```bash
# macOS/Linux
python3.11 -m venv .venv
source .venv/bin/activate
# Windows
py -3.11 -m venv .venv
.venv\Scripts\activate
```
3. **Install Dependencies**
We use pip to install the package in editable mode.
```bash
pip install --upgrade pip
pip install -e .
```
### Option 2: Using Docker 🐳
Run the dashboard in an isolated container without installing Python locally.
1. **Build the image**
```bash
docker build -t kotak-dashboard .
```
2. **Run the container**
```bash
docker run -p 8501:8501 -v $(pwd)/.env:/app/.env kotak-dashboard
```
### Option 3: VS Code Dev Container
1. Open the project folder in VS Code.
2. Install the **Dev Containers** extension.
3. Press **F1** and select **"Dev Containers: Reopen in Container"**.
4. VS Code will build a fully configured environment for you.
## 🧪 Running Tests
This project uses **pytest** for unit testing. The test suite covers:
- Authentication flows
- Data hydration logic
- Analytics calculations
### Install Test Dependencies
```bash
pip install -e .[test]
```
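Then run the suite from the project root:
```bash
pytest
```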
## 🚀 CI/CD Pipeline
The project includes a **GitHub Actions workflow** (`.github/workflows/ci_cd.yml`) that automatically:
- Sets up Python 3.11
- Runs the full test suite with `pytest`
- Builds the Python package (Source Distribution & Wheel)
- Uploads the build artifacts to GitHub
## 🔑 Configuration
### First Run Setup
When you run the app for the first time, a sidebar will appear asking for your credentials.
1. Enter your **Consumer Key**, **Mobile**, **UCC**, **TOTP Secret**, and **MPIN**.
2. Click **"Save & Login"**.
3. This will automatically create a `.env` file in your project root.
### Manual Setup (Optional)
Copy the example file and fill it in manually:
```bash
cp .env.example .env
```
Edit `.env`:
```
KOTAK_CONSUMER_KEY=your_key_here
KOTAK_MOBILE=9876543210
KOTAK_UCC=YA004
KOTAK_TOTP_SECRET=YOURSECRETKEY123
KOTAK_MPIN=123456
```
## 🏃‍♂️ Usage
Once installed, you can start the dashboard using the command line:
```bash
kotak-dashboard
```
Or directly via Streamlit:
```bash
streamlit run kotak_dashboard/app.py
```
Open your browser to **http://localhost:8501** to view your portfolio.
## ⚠️ Disclaimer
This application is an **unofficial** tool built for educational and personal tracking purposes. It interacts with the Kotak Neo API but is **not affiliated** with Kotak Securities.
- **Trading Risk**: Stock market investments are subject to market risks. This tool does not provide financial advice.
- **Security**: Your credentials are stored locally on your machine in the `.env` file. Do not share this file or commit it to public repositories.
Built with ❤️ using Python & Streamlit.
| text/markdown | null | Your Name <your.email@example.com> | null | null | MIT License
Copyright (c) 2026 Jayesh Arun Bafna
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"altair>=5.0.0",
"neo-api-client",
"pandas>=2.0.0",
"pyotp>=2.9.0",
"python-dotenv>=1.0.0",
"streamlit>=1.30.0",
"build>=1.0.0; extra == \"test\"",
"pre-commit>=3.6.0; extra == \"test\"",
"pytest-mock>=3.10.0; extra == \"test\"",
"pytest>=7.0.0; extra == \"test\"",
"ruff>=0.3.0; extra == \"test\... | [] | [] | [] | [
"Homepage, https://github.com/yourusername/kotak-dashboard",
"Bug Tracker, https://github.com/yourusername/kotak-dashboard/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:34:07.637514 | kotak_dashboard-1.0.0.tar.gz | 14,639 | 0d/3e/ca54bdaf57f24bfa84697d79806ac12131bbaca922b4146bebe1d22ec123/kotak_dashboard-1.0.0.tar.gz | source | sdist | null | false | f299515b5b9226a237ac0911be927de8 | fcfba1765ba2ed0bcf1aab6918c8458becfd95737dbf778494e82303d76d41bd | 0d3eca54bdaf57f24bfa84697d79806ac12131bbaca922b4146bebe1d22ec123 | null | [
"LICENSE"
] | 281 |
2.1 | cdk8s-image | 0.2.732 | Build & Push local docker images inside CDK8s applications | # cdk8s-image
An `Image` construct which takes care of building & pushing docker images that
can be used in [CDK8s](https://github.com/awslabs/cdk8s) apps.
The following example will build the docker image from `Dockerfile` under the
`my-app` directory, push it to a local registry and then define a Kubernetes
deployment that deploys containers that run this image.
```typescript
const image = new Image(this, 'image', {
dir: `${__dirname}/my-app`,
registry: 'localhost:5000'
});
new Deployment(this, 'deployment', {
containers: [ new Container({ image: image.url }) ],
});
```
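Since this package is the Python distribution of the construct, the image definition above translates roughly as follows (a sketch based on jsii's usual TypeScript-to-Python naming conventions; verify the parameter names against the generated API docs):
```python
import os

from cdk8s_image import Image

# Inside your chart; the keyword arguments mirror the TypeScript props above.
image = Image(self, "image",
    dir=os.path.join(os.path.dirname(__file__), "my-app"),
    registry="localhost:5000",
)
# image.url then holds the pushed image reference for your containers.
```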
## Contributions
All contributions are celebrated.
## License
Licensed under [Apache 2.0](./LICENSE).
| text/markdown | Amazon Web Services | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
... | [] | https://github.com/cdk8s-team/cdk8s-image.git | null | ~=3.9 | [] | [] | [] | [
"cdk8s<3.0.0,>=2.68.91",
"constructs<11.0.0,>=10.3.0",
"jsii<2.0.0,>=1.126.0",
"publication>=0.0.3",
"typeguard==2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/cdk8s-team/cdk8s-image.git"
] | twine/6.1.0 CPython/3.14.3 | 2026-02-18T12:34:06.171841 | cdk8s_image-0.2.732.tar.gz | 29,554 | 97/2c/28df869102b87a73316d56c5e1bb861c8f6269789ec19f6dda27723abb02/cdk8s_image-0.2.732.tar.gz | source | sdist | null | false | b853178b90dc64e2459f9fda97e591be | b328913057275cc30a70d674add2943e96afe77e29c4c736dac7c1a62efb07c3 | 972c28df869102b87a73316d56c5e1bb861c8f6269789ec19f6dda27723abb02 | null | [] | 289 |
2.4 | ccmm-invenio | 1.1.0a12 | CCMM (Czech Core Metadata Model) components for NRP Invenio | # CCMM-Invenio: CCMM runtime library for NRP Invenio
This library provides:
* Fixtures for vocabularies to support the CCMM model in NRP Invenio
* Schema serializers for the CCMM model
* Import and export modules for the CCMM model
* UI components for working with the CCMM model in NRP Invenio
## Installation
```bash
pip install ccmm-invenio
```
## Usage
To use CCMM in production repository, add the following model:
```python
# models/datasets.py
production_dataset = model(
"production_dataset",
version="1.1.0",
presets=[
ccmm_production_preset,
],
configuration={
# "ui_blueprint": "myui
},
types=[
{
"Metadata": {
"properties": {
# your extensions come here, ccmm_production_preset will add
# all ccmm fields automatically
},
},
}
],
metadata_type="Metadata",
customizations=[],
)
# invenio.cfg
production_dataset.register()
```
## How to generate new NMA and Production CCMM model mappings
### Download and pre-process CCMM XML
Follow the instructions in `ccmm_versions/README.md` to download and pre-process
the CCMM XML schemas for the desired version. This will create:
* Cleaned XSD files in `ccmm_versions/src/ccmm_versions/ccmm-<version>-<date>/out`
* A diff file in `ccmm_versions/diffs/` comparing the new version to the previous one
* A schema overview in `ccmm_versions/summaries/ccmm-<version>-<date>.summary.md`
### Adapt CCMM model yaml files
Copy/paste the model in `src/ccmm_invenio/models/<previous-version>-<date>/` to
`src/ccmm_invenio/models/<new-version>-<date>/`.
Look at the diff file generated in the previous step and adapt the
`ccmm.yaml`, `ccmm-invenio.yaml`, `ccmm-vocabularies.yaml`, and `gml-1.1.0.yaml` files
in `src/ccmm_invenio/models/<version>-<date>/` accordingly.
Then look at the `src/ccmm_invenio/models/__init__.py` file and add the new version
there.
### Generate NMA Parser
```bash
CCMM_VERSION_DIR=1.1.0a1-2025-10-25
CCMM_VERSION=1.1.0
python ./src/ccmm_invenio/parsers/generate_parser.py \
./src/ccmm_invenio/models/$CCMM_VERSION_DIR/ccmm.yaml \
./src/ccmm_invenio/models/$CCMM_VERSION_DIR/ccmm-vocabularies.yaml \
./src/ccmm_invenio/models/$CCMM_VERSION_DIR/gml-1.1.0.yaml \
./src/ccmm_invenio/parsers/nma_$(echo "$CCMM_VERSION" | tr "." "_").py
```
### Update production parser manually based on NMA parser
```python
# file production_<version>.py
from .nma_<version> import CCMMXMLNMAParser
class CCMMXMLProductionParser(CCMMXMLProductionParserBase, CCMMXMLNMAParser):
"""Parser for CCMM XML version 1.1.0 for production repository."""
# tweaks here
```
## TODO: imports, exports
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.15,>=3.13 | [] | [] | [] | [
"langdetect>=1.0.9",
"oarepo-model>=0.1.0dev18",
"oarepo-rdm>=1.0.0dev0",
"oarepo[rdm,tests]<15.0.0,>=14.0.0",
"click; extra == \"compile-vocabularies\"",
"pyyaml; extra == \"compile-vocabularies\"",
"rdflib; extra == \"compile-vocabularies\"",
"tenacity; extra == \"compile-vocabularies\"",
"tqdm; e... | [] | [] | [] | [
"Homepage, https://github.com/NRP-CZ/ccmm-invenio"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:31:27.676598 | ccmm_invenio-1.1.0a12.tar.gz | 440,718 | 3d/a4/a7ee13d060bfc29d7598c6e6be95bc752b5fc450565dfd6e6cdacbe795ce/ccmm_invenio-1.1.0a12.tar.gz | source | sdist | null | false | 514e64e6a9ab796436fcddad49d00507 | 6c3a74c25a7b29a8024de5cc71cc94bf8a58a22843dc7fb9e2ab05baa715277f | 3da4a7ee13d060bfc29d7598c6e6be95bc752b5fc450565dfd6e6cdacbe795ce | MIT | [
"LICENSE"
] | 239 |
2.4 | danger-ff-ban | 1.0.0 | Ultimate Free Fire login & ban module | # danger-ff-ban
Ultimate Free Fire login & ban module – log in with a UID/password, access token, EAT token, or JWT token and connect to the game server.
## Installation
```bash
pip install danger-ff-ban
```
| text/markdown | @danger_ff_like | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"requests>=2.25.0",
"pycryptodome>=3.10.0",
"protobuf>=3.20.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T12:31:18.497924 | danger_ff_ban-1.0.0.tar.gz | 8,634 | 90/d8/e86f68d7cdca8eca248adbbf45de7b0ce923103c7c0b7c1f63d055b4ce72/danger_ff_ban-1.0.0.tar.gz | source | sdist | null | false | e75683030e98f54a3dcd78f20ad47d58 | 3f78b630a328110134e8a28f6e9721d704f8b2de1c7d1cce101ccd44903fd6c8 | 90d8e86f68d7cdca8eca248adbbf45de7b0ce923103c7c0b7c1f63d055b4ce72 | null | [] | 314 |
2.4 | whos-there | 0.6.0 | The spiritual successor to knockknock for PyTorch Lightning, get notified when your training ends | # Who's there?
[](https://github.com/twsl/whos-there/actions/workflows/build.yaml)
[](https://github.com/twsl/whos-there/actions/workflows/docs.yaml)

[](https://pypi.org/project/whos-there/)
[](https://pypi.org/project/whos-there/)
[](https://github.com/twsl/whos-there/pulls?utf8=%E2%9C%93&q=is:pr%20author:app/dependabot)
[](https://anaconda.org/conda-forge/whos-there)
[](https://anaconda.org/conda-forge/whos-there)
[](https://squidfunk.github.io/mkdocs-material/)
[](https://github.com/astral-sh/uv)
[](https://github.com/astral-sh/ruff)
[](https://github.com/astral-sh/ty)
[](https://github.com/j178/prek)
[](https://github.com/PyCQA/bandit)
[](https://github.com/twsl/whos-there/releases)
[](https://github.com/copier-org/copier)
[](LICENSE)
The spiritual successor to [knockknock](https://github.com/huggingface/knockknock) for [PyTorch Lightning](https://github.com/Lightning-AI/pytorch-lightning), to get a notification when your training is complete or when it crashes during the process with a single callback.
## Features
- Supports E-Mail, Discord, Slack, Teams, Telegram
## Installation
With `pip`:
```bash
python -m pip install whos-there
```
With [`uv`](https://docs.astral.sh/uv/):
```bash
uv add whos-there
```
With `conda`:
```bash
conda install conda-forge::whos-there
```
Check [here](https://github.com/conda-forge/whos-there-feedstock) for more information.
## How to use it
```python
import lightning.pytorch as pl
from whos_there.callback import NotificationCallback
from whos_there.senders.debug import DebugSender
trainer = pl.Trainer(
callbacks=[
NotificationCallback(senders=[
# Add your senders here
DebugSender(),
])
]
)
```
### E-Mail
Requires your e-mail provider specific SMTP settings.
```python
from whos_there.senders.email import EmailSender
# ...
EmailSender(
host="smtp.example.de",
port=587,
sender_email="from@example.com",
password="*********",
recipient_emails=[
"to1@example.com",
"to2@example.com",
]
)
```
### Discord
Requires your Discord channel's [webhook URL](https://support.discordapp.com/hc/en-us/articles/228383668-Intro-to-Webhooks).
```python
from whos_there.senders.discord import DiscordSender
# ...
DiscordSender(
webhook_url="https://discord.com/api/webhooks/XXXXXXXXXXXXXX/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
)
```
### Slack
Requires your Slack room [webhook URL](https://api.slack.com/incoming-webhooks#create_a_webhook) and optionally your [user id](https://api.slack.com/methods/users.identity) (if you want to tag yourself or someone else).
```python
from whos_there.senders.slack import SlackSender
# ...
SlackSender(
webhook_url="https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX", # gitleaks:allow
channel="channel_name",
user_mentions=[
"XXXXXXXX"
]
)
```
### Teams
Requires your Team Channel [webhook URL](https://docs.microsoft.com/en-us/microsoftteams/platform/concepts/connectors/connectors-using).
```python
from whos_there.senders.teams import TeamsSender
# ...
TeamsSender(
webhook_url="https://XXXXX.webhook.office.com/",
user_mentions=[
"twsl"
]
)
```
### Telegram
You can also use Telegram Messenger to get notifications. You'll first have to create your own notification bot by following the three steps provided by Telegram [here](https://core.telegram.org/bots#6-botfather) and save your API access `TOKEN`.
Telegram bots are shy and can't send the first message, so you'll have to start the conversation yourself. Once you've sent the first message, visit `https://api.telegram.org/bot<YourBOTToken>/getUpdates` and grab the `int` under the key `message['chat']['id']`; that's the `chat_id` (the identifier of your messaging room) that the sender needs.
```python
from whos_there.senders.telegram import TelegramSender
# ...
TelegramSender(
chat_id=1234567890,
token="XXXXXXX:XXXXXXXXXXXXXXXXXXXXXXXXXXX"
)
```
## Docs
```bash
uv run mkdocs build -f ./mkdocs.yml -d ./_build/
```
## Conda
The conda repository is maintained [here](https://github.com/conda-forge/whos-there-feedstock).
## Update template
```bash
copier update --trust -A --vcs-ref=HEAD
```
## Credits
This project was generated with [](https://github.com/twsl/python-project-template)
Big thanks to [knockknock](https://github.com/huggingface/knockknock) for the idea and code snippets.
| text/markdown | null | twsl <45483159+twsl@users.noreply.github.com> | null | null | MIT | callback, lightning, notification, pytorch, pytorch-lightning, whos-there | [
"Topic :: Software Development"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"lightning>=2.6.1",
"python-telegram-bot>=22.6",
"requests>=2.32.5"
] | [] | [] | [] | [
"homepage, https://twsl.github.io/whos-there/",
"repository, https://github.com/twsl/whos-there",
"documentation, https://twsl.github.io/whos-there/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T12:31:05.169206 | whos_there-0.6.0.tar.gz | 9,730 | 33/d9/6a8dd14040ffa70177bd19a825745dd38d13e35c46d4b11c6d638bc9015a/whos_there-0.6.0.tar.gz | source | sdist | null | false | ef259f2d934d572907ca5004ec292e70 | 7c3ae541abd0e62c6494a221dc1f4645722f2199e78bc527a8312f4539788610 | 33d96a8dd14040ffa70177bd19a825745dd38d13e35c46d4b11c6d638bc9015a | null | [
"LICENSE"
] | 270 |
2.4 | napari-tmidas | 0.5.2 | A plugin for batch processing of confocal and whole-slide microscopy images of biological tissues | # napari-tmidas
[](https://github.com/macromeer/napari-tmidas/raw/main/LICENSE)
[](https://pypi.org/project/napari-tmidas)
[](https://python.org)
[](https://pepy.tech/project/napari-tmidas)
[](https://pepy.tech/project/napari-tmidas)
[](https://doi.org/10.5281/zenodo.17988815)
[](https://github.com/macromeer/napari-tmidas/actions)
**Need fast batch processing for confocal & whole-slide microscopy images of biological cells and tissues?**
This open-source napari plugin integrates state-of-the-art AI and analysis tools in one GUI! Transform, analyze, and quantify microscopy data at scale, with deep learning built in - from file conversion to segmentation, tracking, and analysis.
## ✨ Key Features
🤖 **AI Methods Built-In**
- Virtual staining (VisCy) • Denoising (CAREamics) • Spot detection (Spotiflow) • Segmentation (Cellpose, Convpaint) • Tracking (Trackastra, Ultrack)
- Auto-install in isolated environments • No dependency conflicts • GPU acceleration
🔄 **Universal File Conversion**
- Convert LIF, ND2, CZI, NDPI, Acquifer → TIFF or OME-Zarr
- Preserve spatial metadata automatically
⚡ **Batch Processing**
- Process entire folders with one click • 40+ processing functions • Progress tracking & quality control
📊 **Complete Analysis Pipeline**
- Segmentation → Tracking → Quantification → Colocalization
## 🚀 Quick Start
```bash
# Install napari and the plugin
mamba create -y -n napari-tmidas -c conda-forge python=3.11
mamba activate napari-tmidas
pip install "napari[all]"
pip install napari-tmidas
# Launch napari
napari
```
Then find napari-tmidas in the **Plugins** menu. [Watch video tutorials →](https://www.youtube.com/@macromeer/videos)
> **💡 Tip**: AI methods (SAM2, Cellpose, Spotiflow, etc.) auto-install into isolated environments on first use - no manual setup required!
## 📖 Documentation
### AI-Powered Methods
| Method | Description | Documentation |
|--------|-------------|---------------|
| 🎨 **VisCy** | Virtual staining from phase/DIC | [Guide](docs/viscy_virtual_staining.md) |
| 🔧 **CAREamics** | Noise2Void/CARE denoising | [Guide](docs/careamics_denoising.md) |
| 🎯 **Spotiflow** | Spot/puncta detection | [Guide](docs/spotiflow_detection.md) |
| 🔬 **Cellpose** | Cell/nucleus segmentation | [Guide](docs/cellpose_segmentation.md) |
| 🎨 **Convpaint** | Custom semantic/instance segmentation | [Guide](docs/convpaint_prediction.md) |
| 📈 **Trackastra** | Transformer-based cell tracking | [Guide](docs/trackastra_tracking.md) |
| 🔗 **Ultrack** | Cell tracking based on segmentation ensemble | [Guide](docs/ultrack_tracking.md) |
### Core Workflows
- **[File Conversion](docs/file_conversion.md)** - Multi-format microscopy file conversion (LIF, ND2, CZI, NDPI, Acquifer)
- **[Batch Processing](docs/basic_processing.md)** - Label operations, filters, channel splitting
- **[Frame Removal](docs/frame_removal.md)** - Interactive human-in-the-loop frame removal from time series
- **[Label-Based Cropping](docs/label_based_cropping.md)** - Interactive ROI extraction with label expansion
- **[Quality Control](docs/grid_view_overlay.md)** - Visual QC with grid overlay
- **[Quantification](docs/regionprops_analysis.md)** - Extract measurements from labels
- **[Colocalization](docs/advanced_processing.md#colocalization-analysis)** - Multi-channel ROI analysis
### Advanced Features
- [Batch Crop Anything](docs/crop_anything.md) - Interactive object cropping with SAM2
- [Batch Label Inspection](docs/batch_label_inspection.md) - Manual label verification and editing
- [SciPy Filters](docs/advanced_processing.md#scipy-filters) - Gaussian, median, morphological operations
- [Scikit-Image Filters](docs/advanced_processing.md#scikit-image-filters) - CLAHE, thresholding, edge detection
## 💻 Installation
### Step 1: Install napari
```bash
mamba create -y -n napari-tmidas -c conda-forge python=3.11
mamba activate napari-tmidas
python -m pip install "napari[all]"
```
### Step 2: Install napari-tmidas
| Your Needs | Command |
|----------|---------|
| **Standard installation** | `pip install napari-tmidas` |
| **Want the latest dev features** | `pip install git+https://github.com/MercaderLabAnatomy/napari-tmidas.git` |
## 🖼️ Screenshots
<details>
<summary><b>File Conversion Widget</b></summary>
<img src="https://github.com/user-attachments/assets/e377ca71-2f30-447d-825e-d2feebf7061b" alt="File Conversion" width="600">
Convert proprietary formats to open standards with metadata preservation.
</details>
<details>
<summary><b>Batch Processing Interface</b></summary>
<img src="https://github.com/user-attachments/assets/cfe84828-c1cc-4196-9a53-5dfb82d5bfce" alt="Batch Processing" width="600">
Select files → Choose processing function → Run on entire dataset.
</details>
<details>
<summary><b>Label Inspection</b></summary>
<img src="https://github.com/user-attachments/assets/0bf8c6ae-4212-449d-8183-e91b23ba740e" alt="Label Inspection" width="600">
Inspect and manually correct segmentation results.
</details>
<details>
<summary><b>SAM2 Crop Anything</b></summary>
<img src="https://github.com/user-attachments/assets/6d72c2a2-1064-4a27-b398-a9b86fcbc443" alt="Crop Anything" width="600">
Interactive object selection and cropping with SAM2.
</details>
## 📋 TODO
### Memory-Efficient Zarr Streaming
**Current Limitation**: Processing functions pre-allocate full output arrays in memory before writing to zarr. For large TZYX time series (e.g., 100 timepoints × 1024×1024×20), this requires ~8+ GB peak memory even when using zarr output.
**Planned Enhancement**: Implement incremental zarr writing across all processing functions:
- Process one timepoint at a time
- Write directly to zarr array on disk
- Keep only single timepoint in memory (~80 MB vs 8 GB)
- Maintain OME-Zarr metadata and chunking
**Impact**: Enable processing of arbitrarily large time series limited only by disk space, not RAM. Critical for high-throughput microscopy workflows.
**Affected Functions**: Convpaint prediction, Cellpose segmentation, CAREamics denoising, VisCy virtual staining, Trackastra tracking, and all batch processing operations with zarr output.
## 🤝 Contributing
Contributions are welcome! Please ensure tests pass before submitting PRs:
```bash
pip install tox
tox
```
## 📄 License
BSD-3 License - see [LICENSE](LICENSE) for details.
## 🐛 Issues
Found a bug or have a feature request? [Open an issue](https://github.com/MercaderLabAnatomy/napari-tmidas/issues)
## 🙏 Acknowledgments
Built with [napari](https://github.com/napari/napari) and powered by:
**AI/ML Methods:**
- [Cellpose](https://github.com/MouseLand/cellpose) • [Convpaint](https://github.com/guiwitz/napari-convpaint) • [VisCy](https://github.com/mehta-lab/VisCy) • [CAREamics](https://github.com/CAREamics/careamics) • [Spotiflow](https://github.com/weigertlab/spotiflow) • [Trackastra](https://github.com/weigertlab/trackastra) • [Ultrack](https://github.com/royerlab/ultrack) • [SAM2](https://github.com/facebookresearch/segment-anything-2)
**Core Scientific Stack:**
- [NumPy](https://numpy.org/) • [scikit-image](https://scikit-image.org/) • [PyTorch](https://pytorch.org/)
**File Format Support:**
- [OME-Zarr](https://github.com/ome/ome-zarr-py) • [tifffile](https://github.com/cgohlke/tifffile) • [nd2](https://github.com/tlambert03/nd2) • [pylibCZIrw](https://github.com/ZEISS/pylibczi) • [readlif](https://github.com/nimne/readlif)
---
[PyPI]: https://pypi.org/project/napari-tmidas
[pip]: https://pypi.org/project/pip/
[tox]: https://tox.readthedocs.io/en/latest/
| text/markdown | Marco Meer | marco.meer@pm.me | null | null | Copyright (c) 2025, Marco Meer
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| null | [
"Development Status :: 2 - Pre-Alpha",
"Framework :: napari",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy<2.1,>=1.23.0",
"magicgui",
"tqdm",
"qtpy",
"scikit-image>=0.19.0",
"scikit-learn-extra>=0.3.0",
"pyqt5",
"zarr",
"ome-zarr",
"napari-ome-zarr",
"nd2",
"pylibCZIrw",
"readlif",
"tifffile<2025.5.21,>=2023.7.4",
"tiffslide",
"acquifer-napari",
"psygnal>=0.9.0",
"zarr>=2.16.0",
... | [] | [] | [] | [
"Bug Tracker, https://github.com/macromeer/napari-tmidas/issues",
"Documentation, https://github.com/macromeer/napari-tmidas#README.md",
"Source Code, https://github.com/macromeer/napari-tmidas",
"User Support, https://github.com/macromeer/napari-tmidas/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T12:31:03.266288 | napari_tmidas-0.5.2.tar.gz | 344,048 | a5/1b/fdf47f14be2ce6e3b9b7aa4467b760fa9a95924548d8bbea691801f212a5/napari_tmidas-0.5.2.tar.gz | source | sdist | null | false | df4497260b406866b924dcfcb8e0f85b | 768d483f61ee93d6a65d315d95755b4d8a08bf778da23203ebe2dab8024712ad | a51bfdf47f14be2ce6e3b9b7aa4467b760fa9a95924548d8bbea691801f212a5 | null | [
"LICENSE"
] | 271 |
2.1 | inspire-dojson | 63.2.33 | INSPIRE-specific rules to transform from MARCXML to JSON and back. | .. This file is part of INSPIRE.
Copyright (C) 2014-2017 CERN.
INSPIRE is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
INSPIRE is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with INSPIRE. If not, see <http://www.gnu.org/licenses/>.
In applying this license, CERN does not waive the privileges and immunities
granted to it by virtue of its status as an Intergovernmental Organization
or submit itself to any jurisdiction.
INSPIRE-DoJSON
==============
About
=====
INSPIRE-specific rules to transform from MARCXML to JSON and back.
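A minimal usage sketch (assuming the top-level ``marcxml2record`` helper, which converts a MARCXML string into a JSON record following the INSPIRE schemas):

.. code-block:: python

    from inspire_dojson import marcxml2record

    # A MARCXML snippet with a single title field (tag 245)
    marcxml = '''
    <record>
      <datafield tag="245" ind1=" " ind2=" ">
        <subfield code="a">A record title</subfield>
      </datafield>
    </record>
    '''

    record = marcxml2record(marcxml)
    print(record['titles'])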
Local development (py2)
=======================
.. code-block:: shell
# Build the Docker image for Python 2.7
docker build -t dojson2 -f Dockerfile.py2 .
# Spin up a container with the library installed
docker run -it dojson2
# Run the test suite
./run-tests.sh
| null | CERN | admin@inspirehep.net | null | null | GPLv3 | null | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Languag... | [
"any"
] | https://github.com/inspirehep/inspire-dojson | null | null | [] | [] | [] | [
"Flask>=0.12.3",
"IDUtils>=1.0.1,~=1.0",
"inspire-schemas",
"inspire-utils>=3.0.65,~=3.0",
"langdetect>=1.0.7,~=1.0",
"pycountry>=17.5.4,~=17.0",
"MarkupSafe>=1.1.1",
"urllib3~=1.26.0",
"dojson==1.4.0; python_version == \"2.7\"",
"dojson>=1.3.1,~=1.0; python_version >= \"3\"",
"flake8-future-imp... | [] | [] | [] | [] | twine/6.1.0 CPython/3.8.20 | 2026-02-18T12:30:31.873432 | inspire_dojson-63.2.33-py2.py3-none-any.whl | 100,382 | 27/74/8af95e9ca2d6dd3410beff88d5cbc70364327ad9a6f4aab392ea7dbe9ae3/inspire_dojson-63.2.33-py2.py3-none-any.whl | py2.py3 | bdist_wheel | null | false | ef0f9e381e540619682cc82cac2cc469 | 9dc727fe2ca92cd1943503d8b4dc89f536f4b21fb7013e222361a2937a39facd | 27748af95e9ca2d6dd3410beff88d5cbc70364327ad9a6f4aab392ea7dbe9ae3 | null | [] | 157 |
2.1 | cdk8s-grafana | 0.1.746 | Grafana construct for cdk8s. | ## cdk8s-grafana
[](https://constructs.dev/packages/cdk8s-grafana)
cdk8s-grafana is a library that lets you easily define a Grafana service for
your kubernetes cluster along with associated dashboards and datasources, using
a high level API.
### Usage
To apply the resources generated by this construct, the Grafana operator must be
installed on your cluster. See
[https://operatorhub.io/operator/grafana-operator](https://operatorhub.io/operator/grafana-operator) for full installation
instructions.
The following will define a Grafana cluster connected to a Prometheus
datasource:
```typescript
import { Grafana } from 'cdk8s-grafana';
// inside your chart:
const grafana = new Grafana(this, 'my-grafana', {
defaultDataSource: {
name: 'Prometheus',
type: 'prometheus',
access: 'proxy',
url: 'http://prometheus-service:9090',
}
});
```
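Since this package is the Python distribution of the construct, the same cluster definition translates roughly as follows (a sketch assuming jsii's camelCase-to-snake_case mapping and its acceptance of plain dicts for struct-typed props; verify the names against the generated API docs):
```python
from cdk8s_grafana import Grafana

# Inside your chart; the keyword arguments mirror the TypeScript props above.
grafana = Grafana(self, "my-grafana",
    default_data_source={
        "name": "Prometheus",
        "type": "prometheus",
        "access": "proxy",
        "url": "http://prometheus-service:9090",
    },
)
```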
Basic aspects of a dashboard can be customized:
```typescript
const github = grafana.addDatasource('github', ...);
const dashboard = grafana.addDashboard('my-dashboard', {
title: 'My Dashboard',
refreshRate: Duration.seconds(10),
timeRange: Duration.hours(6), // show metrics from now-6h to now
plugins: [
{
name: 'grafana-piechart-panel',
version: '1.3.6',
}
],
});
```
Note: the kubernetes grafana operator only supports one Grafana instance per
namespace (see https://github.com/grafana-operator/grafana-operator/issues/174).
This may require specifying namespaces explicitly, e.g.:
```typescript
const devGrafana = new Grafana(this, 'my-grafana', {
namespace: 'dev',
});
const prodGrafana = new Grafana(this, 'my-grafana', {
namespace: 'prod',
});
```
The Grafana operator must be installed in each namespace for the resources in
that namespace to be recognized.
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more
information.
## License
This project is licensed under the Apache-2.0 License.
| text/markdown | Amazon Web Services | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
... | [] | https://github.com/cdk8s-team/cdk8s-grafana.git | null | ~=3.9 | [] | [] | [] | [
"cdk8s<3.0.0,>=2.68.91",
"constructs<11.0.0,>=10.3.0",
"jsii<2.0.0,>=1.126.0",
"publication>=0.0.3",
"typeguard==2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/cdk8s-team/cdk8s-grafana.git"
] | twine/6.1.0 CPython/3.14.3 | 2026-02-18T12:28:28.263757 | cdk8s_grafana-0.1.746.tar.gz | 329,362 | 2d/e6/f45ae0af9a52b439ff69f10d3d622d00fef8707236834f27f0ee83b021a6/cdk8s_grafana-0.1.746.tar.gz | source | sdist | null | false | 25c0865381339dc36953601aaf4fbee2 | 3c01396b5287fc2e562fdcdb143e40fb0ea3e9bc727f1b05a52e1eba7265857a | 2de6f45ae0af9a52b439ff69f10d3d622d00fef8707236834f27f0ee83b021a6 | null | [] | 305 |
2.4 | phenotastic | 0.5.1 | 3D plant phenotyping. | ===========
Phenotastic
===========
| `Documentation <https://supersubscript.github.io/phenotastic/>`_
3D plant phenotyping package for segmentation of early flower organs (primordia)
from shoot apical meristems in 3D images.
Features
--------
- **3D Image Contouring**: Morphological active contour methods for extracting surfaces from 3D image stacks
- **Mesh Processing**: Smoothing, remeshing, and repair operations for 3D meshes
- **Domain Segmentation**: Curvature-based segmentation of meshes into regions (domains)
- **Pipeline System**: Configurable recipe-style pipelines for reproducible workflows
Installation
------------
.. code-block:: bash
uv pip install phenotastic
Or install from source:
.. code-block:: bash
git clone https://github.com/supersubscript/phenotastic.git
cd phenotastic
uv pip install -e ".[dev]"
Quick Start
-----------
Using the Python API
~~~~~~~~~~~~~~~~~~~~
.. code-block:: python
from phenotastic import PhenoMesh, Pipeline, load_preset
import pyvista as pv
# Load a mesh
polydata = pv.read("my_mesh.vtk")
mesh = PhenoMesh(polydata)
# Process with the default pipeline
pipeline = load_preset()
result = pipeline.run(mesh)
# Access results
print(f"Mesh has {result.mesh.n_points} points")
print(f"Found {len(result.domains.unique())} domains")
Using the CLI
~~~~~~~~~~~~~
.. code-block:: bash
# Run with default pipeline
phenotastic run image.tif --output results/
# Run with custom config
phenotastic run image.tif --config my_pipeline.yaml
# Generate a config template
phenotastic init-config my_pipeline.yaml
# List available operations
phenotastic list-operations
# List available presets
phenotastic list-presets
# Validate configuration
phenotastic validate my_pipeline.yaml
# View a mesh interactively
phenotastic view mesh.vtk --scalars curvature
Pipeline Configuration
----------------------
Phenotastic uses a recipe-style YAML configuration for defining pipelines.
Each step specifies an operation name and optional parameters.
Example Configuration
~~~~~~~~~~~~~~~~~~~~~
.. code-block:: yaml
steps:
# Create mesh from contour
- name: create_mesh
params:
step_size: 1
# Smoothing
- name: smooth
params:
iterations: 100
relaxation_factor: 0.01
# Remesh to regularize faces
- name: remesh
params:
n_clusters: 10000
# More smoothing
- name: smooth
params:
iterations: 50
# Domain segmentation
- name: compute_curvature
params:
curvature_type: mean
- name: segment_domains
- name: merge_small
params:
threshold: 50
- name: extract_domaindata
Default Pipeline
~~~~~~~~~~~~~~~~
Phenotastic provides a default pipeline that includes the full workflow from
3D image to domain analysis. The default pipeline is automatically used when
calling ``load_preset()`` without arguments or when running the CLI.
Available Operations
--------------------
Image/Contour Operations
~~~~~~~~~~~~~~~~~~~~~~~~
- ``contour``: Generate binary contour from 3D image using morphological active contours
- ``create_mesh``: Create mesh from contour using marching cubes
- ``create_cellular_mesh``: Create mesh from segmented image (one mesh per cell)
Mesh Processing Operations
~~~~~~~~~~~~~~~~~~~~~~~~~~
- ``smooth``: Laplacian smoothing
- ``smooth_taubin``: Taubin smoothing (less shrinkage than Laplacian)
- ``smooth_boundary``: Smooth only boundary edges
- ``remesh``: Regularize faces using ACVD algorithm
- ``decimate``: Reduce mesh complexity by removing faces
- ``subdivide``: Increase mesh resolution by subdividing faces
- ``repair_holes``: Fill small holes in the mesh
- ``repair``: Full mesh repair using MeshFix
- ``make_manifold``: Remove non-manifold edges
- ``filter_curvature``: Remove vertices outside curvature threshold range
- ``remove_normals``: Remove vertices based on normal angle
- ``remove_bridges``: Remove triangles where all vertices are on the boundary
- ``remove_tongues``: Remove tongue-like artifacts
- ``extract_largest``: Keep only the largest connected component
- ``clean``: Remove degenerate cells
- ``triangulate``: Convert all faces to triangles
- ``compute_normals``: Compute surface normals
- ``flip_normals``: Flip all surface normals
- ``correct_normal_orientation``: Correct normal orientation relative to an axis
- ``rotate``: Rotate mesh around an axis
- ``clip``: Clip mesh with a plane
- ``erode``: Erode mesh by removing boundary points
- ``ecft``: ExtractLargest, Clean, FillHoles, Triangulate (combined operation)
Domain Operations
~~~~~~~~~~~~~~~~~
- ``compute_curvature``: Compute mesh curvature (mean, gaussian, minimum, maximum)
- ``filter_scalars``: Apply filter to curvature field (median, mean, minmax, maxmin)
- ``segment_domains``: Create domains via steepest ascent on curvature field
- ``merge_angles``: Merge domains within angular threshold from meristem
- ``merge_distance``: Merge domains within spatial distance threshold
- ``merge_small``: Merge small domains to their largest neighbor
- ``merge_engulfing``: Merge domains mostly encircled by a neighbor
- ``merge_disconnected``: Connect domains isolated from meristem
- ``merge_depth``: Merge domains with similar depth values
- ``define_meristem``: Identify the meristem domain
- ``extract_domaindata``: Extract geometric measurements for each domain
PhenoMesh Class
---------------
``PhenoMesh`` extends PyVista's ``PolyData`` class, adding convenient methods
for 3D plant phenotyping workflows. It can be used anywhere a ``PolyData`` is
expected.
.. code-block:: python
from phenotastic import PhenoMesh
import pyvista as pv
# Create from PyVista mesh
mesh = PhenoMesh(pv.Sphere())
# PhenoMesh is a PolyData
isinstance(mesh, pv.PolyData) # True
# Process
mesh = mesh.smooth(iterations=100)
mesh = mesh.remesh(n_clusters=5000)
curvature = mesh.compute_curvature(curvature_type="mean")
# Visualize
mesh.plot(scalars=curvature, cmap="coolwarm")
# Convert to plain PyVista PolyData if needed
polydata = mesh.to_polydata()
Development
-----------
.. code-block:: bash
# Install development dependencies
uv sync --group dev
# Run tests
uv run pytest
# Type checking
uv run ty check
# Linting
uv run ruff check src/phenotastic/
# Pre-commit hooks
uv run pre-commit run --all-files
Citation
--------
If you use Phenotastic in your research, please cite:
Åhl, H., Zhang, Y., & Jönsson, H. (2022). High-throughput 3D phenotyping of plant shoot
apical meristems from tissue-resolution data. *Frontiers in Plant Science*, 13, 827147.
BibTeX:
.. code-block:: bibtex
@article{aahl2022high,
title={High-throughput 3d phenotyping of plant shoot apical meristems from tissue-resolution data},
author={{\AA}hl, Henrik and Zhang, Yi and J{\"o}nsson, Henrik},
journal={Frontiers in Plant Science},
volume={13},
pages={827147},
year={2022},
publisher={Frontiers Media SA}
}
License
-------
GNU General Public License v3
Author
------
Henrik Ahl (henrikaahl@gmail.com)
| text/x-rst | null | Henrik Åhl <henrikaahl@gmail.com> | null | null | GNU General Public License v3 | 3D, phenotastic, phenotyping, segmentation | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Progr... | [] | null | null | <4.0.0,>=3.12 | [] | [] | [] | [
"clahe>=0.1",
"click>=8.1",
"czifile>=2019.7",
"loguru>=0.7",
"lxml>=5.0",
"mahotas>=1.4",
"networkx>=3.0",
"numpy>=2.0",
"pandas>=2.2",
"pyacvd>=0.3",
"pymeshfix>=0.17",
"pystackreg>=0.2",
"python-dotenv>=1.0",
"pyvista>=0.47",
"pyyaml>=6.0",
"scikit-image>=0.24",
"scipy>=1.14",
"... | [] | [] | [] | [
"homepage, https://github.com/supersubscript/phenotastic",
"documentation, https://supersubscript.github.io/phenotastic",
"repository, https://github.com/supersubscript/phenotastic"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:28:13.831345 | phenotastic-0.5.1.tar.gz | 230,555 | 33/ef/60bb4e18da01980e828dc93a41721980b537da9eebab6171cea4ac364be2/phenotastic-0.5.1.tar.gz | source | sdist | null | false | 7014a18061cc4c9931e3c2d966edf230 | b3020205d47080b51c333c7c0d89b705492f136031b409840abfbad5709ad40a | 33ef60bb4e18da01980e828dc93a41721980b537da9eebab6171cea4ac364be2 | null | [] | 255 |
2.1 | cdk8s-redis | 0.1.878 | Basic implementation of a Redis construct for cdk8s. | # cdk8s-redis
> Redis constructs for cdk8s
Basic implementation of a Redis construct for cdk8s. Contributions are welcome!
## Usage
The following will define a Redis cluster with a primary and 2 replicas:
```typescript
import { Redis } from 'cdk8s-redis';
// inside your chart:
const redis = new Redis(this, 'my-redis');
```
DNS names can be obtained from `redis.primaryHost` and `redis.replicaHost`.
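In the Python distribution the construct is exposed through the jsii bindings; a minimal hedged sketch (snake_case names assume the standard transliteration of the TypeScript API):

```python
# Hedged Python sketch of the example above. Assumes the usual jsii
# conventions: cdk8s_redis module, primaryHost -> primary_host, etc.
from constructs import Construct
from cdk8s import App, Chart
from cdk8s_redis import Redis

class MyChart(Chart):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        redis = Redis(self, "my-redis")
        # DNS names for the primary and the replicas
        print(redis.primary_host, redis.replica_host)

app = App()
MyChart(app, "demo")
app.synth()
```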
You can specify how many replicas to define:
```typescript
new Redis(this, 'my-redis', {
replicas: 4
});
```
Or, you can specify no replicas:
```python
new Redis(this, 'my-redis', {
replicas: 0
});
```
## License
Distributed under the [Apache 2.0](./LICENSE) license.
| text/markdown | Amazon Web Services | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
... | [] | https://github.com/cdk8s-team/cdk8s-redis.git | null | ~=3.9 | [] | [] | [] | [
"cdk8s<3.0.0,>=2.68.91",
"constructs<11.0.0,>=10.3.0",
"jsii<2.0.0,>=1.126.0",
"publication>=0.0.3",
"typeguard==2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/cdk8s-team/cdk8s-redis.git"
] | twine/6.1.0 CPython/3.14.3 | 2026-02-18T12:27:32.974527 | cdk8s_redis-0.1.878.tar.gz | 260,323 | 88/41/29e987dc478278adab5e7099c09224f8ca0f74e72d4acebf522bab42814d/cdk8s_redis-0.1.878.tar.gz | source | sdist | null | false | b1371a04b4a225352dd8506dbc9156fd | 486ee8cc60b3999d10b367cb7de399de78108aa029038516c76f79fcfc335617 | 884129e987dc478278adab5e7099c09224f8ca0f74e72d4acebf522bab42814d | null | [] | 303 |
2.1 | cdk8s-plus-32 | 2.5.26 | cdk8s+ is a software development framework that provides high level abstractions for authoring Kubernetes applications. cdk8s-plus-32 synthesizes Kubernetes manifests for Kubernetes 1.32.0 | # cdk8s+ (cdk8s-plus)
### High level constructs for Kubernetes

| k8s version | npm (JS/TS) | PyPI (Python) | Maven (Java) | Go |
| ----------- | --------------------------------------------------- | ----------------------------------------------- | ----------------------------------------------------------------- | --------------------------------------------------------------- |
| 1.30.0 | [Link](https://www.npmjs.com/package/cdk8s-plus-30) | [Link](https://pypi.org/project/cdk8s-plus-30/) | [Link](https://search.maven.org/artifact/org.cdk8s/cdk8s-plus-30) | [Link](https://github.com/cdk8s-team/cdk8s-plus-go/tree/k8s.30) |
| 1.31.0 | [Link](https://www.npmjs.com/package/cdk8s-plus-31) | [Link](https://pypi.org/project/cdk8s-plus-31/) | [Link](https://search.maven.org/artifact/org.cdk8s/cdk8s-plus-31) | [Link](https://github.com/cdk8s-team/cdk8s-plus-go/tree/k8s.31) |
| 1.32.0 | [Link](https://www.npmjs.com/package/cdk8s-plus-32) | [Link](https://pypi.org/project/cdk8s-plus-32/) | [Link](https://search.maven.org/artifact/org.cdk8s/cdk8s-plus-32) | [Link](https://github.com/cdk8s-team/cdk8s-plus-go/tree/k8s.32) |
**cdk8s+** is a software development framework that provides high-level
abstractions for authoring Kubernetes applications. Built on top of the
auto-generated building blocks provided by [cdk8s](../cdk8s), this library includes a
hand-crafted *construct* for each native Kubernetes object, exposing richer
APIs with reduced complexity.
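As a hedged illustration of those higher-level APIs in Python, the sketch below defines a Deployment; class and property names assume the usual jsii transliteration (`cdk8s_plus_32` module, snake_case keywords), so check the generated reference for the exact signatures.

```python
# Hedged sketch: one hand-crafted Deployment construct instead of the raw
# auto-generated KubeDeployment. Names assume the standard jsii bindings.
from constructs import Construct
from cdk8s import App, Chart
import cdk8s_plus_32 as kplus

class WebChart(Chart):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)
        kplus.Deployment(
            self, "web",
            replicas=2,
            containers=[kplus.ContainerProps(image="nginx:1.27")],
        )

app = App()
WebChart(app, "web")
app.synth()  # writes the synthesized manifest under dist/
```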
## :books: Documentation
See [cdk8s.io](https://cdk8s.io/docs/latest/plus).
## :raised_hand: Contributing
If you'd like to add a new feature or fix a bug, please visit
[CONTRIBUTING.md](CONTRIBUTING.md)!
## :balance_scale: License
This project is distributed under the [Apache License, Version 2.0](./LICENSE).
This module is part of the [cdk8s project](https://github.com/cdk8s-team).
| text/markdown | Amazon Web Services | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
... | [] | https://github.com/cdk8s-team/cdk8s-plus.git | null | ~=3.9 | [] | [] | [] | [
"cdk8s<3.0.0,>=2.68.11",
"constructs<11.0.0,>=10.3.0",
"jsii<2.0.0,>=1.126.0",
"publication>=0.0.3",
"typeguard==2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/cdk8s-team/cdk8s-plus.git"
] | twine/6.1.0 CPython/3.14.3 | 2026-02-18T12:27:27.689224 | cdk8s_plus_32-2.5.26.tar.gz | 3,158,043 | 90/e0/9326b4f962f4264a926236c7c6d1f5034232107423d6308bfa2795e62800/cdk8s_plus_32-2.5.26.tar.gz | source | sdist | null | false | 5818d1bf048f4f208d44c5cdd1a53dde | 05ed1e4787ce84e81a09342eb822b8a33b13c83622a3b11b8bdf3977b46b1c5f | 90e09326b4f962f4264a926236c7c6d1f5034232107423d6308bfa2795e62800 | null | [] | 1,017 |
2.3 | agentstack-cli | 0.6.2rc3 | Agent Stack CLI | # Agent Stack CLI
| text/markdown | IBM Corp. | null | null | null | null | null | [] | [] | null | null | <3.14,>=3.13 | [] | [] | [] | [
"anyio>=4.12.1",
"pydantic>=2.12.5",
"pydantic-settings>=2.12.0",
"requests>=2.32.5",
"jsonschema>=4.26.0",
"jsf>=0.11.2",
"gnureadline>=8.3.3; sys_platform != \"win32\"",
"prompt-toolkit>=3.0.52",
"inquirerpy>=0.3.4",
"psutil>=7.2.2",
"a2a-sdk",
"tenacity>=9.1.2",
"typer>=0.21.1",
"pyyaml... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:27:15.638273 | agentstack_cli-0.6.2rc3.tar.gz | 311,086 | 26/2d/8e31569ab4eb999bcf580fda58755cf0d8d89de91b214c1e19e77190bdef/agentstack_cli-0.6.2rc3.tar.gz | source | sdist | null | false | 2a876b25282f7a265e3af73e4ff9c923 | c3eba6d726a98bde68d2c467cb230b002d550140abeee01f71ab5cbdbf1e2731 | 262d8e31569ab4eb999bcf580fda58755cf0d8d89de91b214c1e19e77190bdef | null | [] | 497 |
2.3 | maked | 0.1.1 | A CLI tool to execute commands inside Markdown files | # Maked: A Command-Line Tool to Automate Markdown Processing
[](https://pypi.org/project/maked/)
[](https://opensource.org/licenses/MIT)
`maked` is a CLI tool that executes shell commands embedded in the YAML front matter of Markdown files. Keep your commands co-located with your documents — no separate `Makefile` or shell script needed.
```bash
pip install maked
```
## Usage
Add a `maked` field to your Markdown file's YAML front matter:
```markdown
---
maked: 'pandoc example.md -o example.pdf'
---
# My Document
Content here.
```
Then run:
```bash
maked example.md
```
You can also pipe content via stdin:
```bash
echo -e "---\nmaked: 'pandoc example.md -o output.pdf'\n---\nSome content here" | maked
```
Preview the command without executing it:
```bash
maked --dry-run example.md
```
## Why Maked?
| | Makefile | Shell script | `maked` |
|---|---|---|---|
| Lives next to your document | ✗ | ✗ | ✓ |
| No extra syntax to learn | ✗ | ✗ | ✓ |
| Works with any shell command | ✓ | ✓ | ✓ |
| Stdin support | ✗ | ✗ | ✓ |
## Installation
```bash
pip install maked
```
Or with Poetry:
```bash
poetry add maked
```
## Contributing
1. Fork the repository and clone it locally.
2. Install dependencies: `poetry install --with dev`
3. Make your changes.
4. Run tests: `poetry run pytest`
5. Open a pull request.
| text/markdown | Luis Cruz | luismirandacruz@gmail.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"pyyaml<7.0.0,>=6.0.2",
"click<9.0.0,>=8.1.8"
] | [] | [] | [] | [] | poetry/2.1.1 CPython/3.13.2 Darwin/25.2.0 | 2026-02-18T12:27:01.275637 | maked-0.1.1.tar.gz | 2,755 | 2d/43/2a0cee6892b6cf67126391d3905b02eb3ff7bc0e435e4e376a0add5049a2/maked-0.1.1.tar.gz | source | sdist | null | false | 44298e70e7c8391839ece0abdff52453 | c4ee651beb5aa1902df6fa04798c566e4b93e7f8486819f124215e2939ffe3ed | 2d432a0cee6892b6cf67126391d3905b02eb3ff7bc0e435e4e376a0add5049a2 | null | [] | 252 |
2.4 | pyonir | 0.0.61 | a python library for building web applications | # Pyonir Web Framework
Pyonir is a static site generator and flat file web framework written in Python. It allows you to create dynamic websites using simple markdown files and a powerful plugin architecture.
## Install Pyonir
Run the following command to install Pyonir via pip:
- Python 3.9 or higher is required.
```bash
> pip install pyonir
```
## Create a new project (manual setup)
Manually create a `main.py` file in an empty directory with the following content.
**Example**
```markdown
your_project/
|─ __init__.py # makes this project a package
└─ main.py # entry point to your application
```
**Example main.py file**
1. Open the `main.py` file and add the following code:
```python
from pyonir import Pyonir
app = Pyonir(__file__)
# Run the web server
app.run()
```
2. Customize your application by adding content files, themes, and plugins as needed.
- Create a `contents/pages` directory to store your markdown files.
- Next, create a sample `index.md` file in the `contents/pages` directory with the following content:
```markdown
title: Home Page
description: Welcome to my Pyonir web application!
===
# Hello, Pyonir!
This is my first page using the Pyonir web framework.
```
- Create a `frontend/templates` directory to store your HTML markup.
- Next, create a sample `pages.html` file in the `frontend/templates` directory with the following content:
```html
<h1>{{ page.title }}</h1>
<p>{{ page.description }}</p>
```
3. Run your application:
```bash
> python main.py
```
## Create a new project (optional auto setup)
**Scaffold a demo web application from the CLI:**
```bash
> pyonir init
```
This will generate the following directory structure
```md
your_project/
├─ backend/
| └─ README.md
| └─ __init__.py
├─ contents/
| ├─ pages/
| └─ index.md
├─ frontend/
| └─ README.md
| └─ pages.html
└─ main.py
└─ __init__.py
```
**Install plugins from the pyonir plugins registry on GitHub**
```bash
> pyonir install plugin:<repo_owner>/<repo_name>#<repo_branch>
```
**Install themes from the pyonir theme registry on GitHub**
```bash
> pyonir install theme:<repo_owner>/<repo_name>#<repo_branch>
```
### Configure Contents
Site content is stored in special markdown files within the contents directory.
Each subdirectory within the `contents` folder represents the `content type` of the markdown files it contains.
### Content Types
Organizes a collection of files by type in a directory. A type directory can be named anything you want.
`pages`, `api`, and `configs` are reserved directory names used by the system, but they can be overridden.
**Config Type: `contents/configs`**
Represents mutable site configurations that can change while the app is running.
Override this directory name by setting `your_app.CONFIGS_DIRNAME`
**Page Type: `contents/pages`**
Represents routes accessible from a URL. A file at `contents/pages/about.md` can be accessed from the URL `https://0.0.0.0/about`.
All page files are served as `text/html` resources. You can configure your pages to be served from a different directory by overriding the `Site.PAGES_DIRNAME` default value.
Override this directory name by setting `your_app.PAGES_DIRNAME`
**API Type: `contents/api`**
Files within this folder represent API endpoints. Files here can define the response of the request and call Python functions.
A file at `contents/api/new_post.md` can be accessed from the URL `https://0.0.0.0/api/new_post`.
You can configure your API pages to be served from a different directory by overriding the `Site.API_DIRNAME` default value.
Override this directory name by setting `your_app.API_DIRNAME`
## Generate static site
```python
from pyonir import Pyonir
app = Pyonir(__file__)
app.generate_static_website()
```
## Configure Route Controllers
Configuration-based routing is defined at startup. All routes live in one place — easier for introspection or auto-generation.
This allows functions to be accessed from virtual routes and registered at startup.
```python
def demo_route(user_id: int = 5):
# perform logic using the typed arguments passed to this function on request
return f"user id is {user_id}"
routes: list['PyonirRoute'] = [
['/user/{user_id:int}', demo_route, ["GET"]],
]
# Define endpoint routers
router: 'PyonirRouters' = [
('/api/demo', routes)
]
```
## Run Web server
Pyonir uses the Starlette web server by default to process web requests. Below is an example of how to install a route
handler.
```python
from pyonir import Pyonir
def demo_route(user_id: int = 5):
# perform logic using the typed arguments passed to this function on request
return f"user id is {user_id}"
routes: list['PyonirRoute'] = [
['/user/{user_id:int}', demo_route, ["GET"]],
]
# Define endpoint routers
router: 'PyonirRouters' = [
('/api/demo', routes)
]
app = Pyonir(__file__)
app.run(routes=router)
```
## Spec-based Routes (Optional)
**Virtual routes `.routes.md`**
A virtual route generates a page from aggregated data sources, giving you greater control over the request and response.
Just add a `.routes.md` file in the `contents/pages` directory.
**JSON response**
Any pattern that begins with the default API name automatically returns JSON.
```md
/api/some_data/{data_id:str}:
GET.response: application/json
data: hello {request.path_param.data_id} world
```
Result of the request `http://0.0.0.0/api/some_data/a3b3c3`:
```json
{
"data": "hello a3b3c3 world"
}
```
**HTML response**
The `page` attribute value will be passed into the page request. The page URL and slug are auto-set from the request.
Any scalar values will be passed as the page contents value. Only GET requests are permitted by default.
```md
/products/{tag:str}:
title: Products grouped by tag.
contents: Listing of all products grouped by tags.
template: product-tags.html
entries: $dir/../products?groupby={request.path_params[tag]}
```
**Server-sent events**
```md
/api/sse/user/notifications:
GET.call: reference.path.to.sse.notifications.controller
GET.headers.accept: text/event-stream
```
**Websockets**
```md
/api/ws/user/chat:
GET.call: path.to.websocket.module
GET.headers.accept: text/event-stream
```
### Configure Frontend
The `frontend` directory organizes your application themes. Each theme uses Jinja template logic to render data into
HTML. Theme templates are stored in the `frontend/themes/{THEME_NAME}/layouts` directory.
### Configure Static Assets
...
### Configure Plugins
...
| text/markdown | null | Derry Spann <pyonir@derryspann.com> | null | null | BSD-3-Clause | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"starlette",
"uvicorn",
"starlette_session",
"starlette_wtf",
"pytz",
"sortedcontainers",
"jinja2",
"webassets",
"argon2-cffi",
"PyJWT",
"requests",
"mistletoe",
"sqlmodel",
"pymediainfo",
"pillow"
] | [] | [] | [] | [
"Homepage, https://pyonir.dev"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:26:44.982914 | pyonir-0.0.61.tar.gz | 1,509,232 | 81/98/1d7b8520fc7ebda3bb0e40716537ee6eb80b863ba58d8fc05c75a5d2e186/pyonir-0.0.61.tar.gz | source | sdist | null | false | 0ba83f430e02fc6e432ec4b37e47273d | ffcb075b846829d2fa9fd5640b004df3a4ae61ff539d2cc504089b66e2b4e287 | 81981d7b8520fc7ebda3bb0e40716537ee6eb80b863ba58d8fc05c75a5d2e186 | null | [
"LICENSE.md"
] | 266 |
2.3 | agentstack-sdk | 0.6.2rc3 | Agent Stack SDK | # Agent Stack Server SDK
Python SDK for packaging agents for deployment to Agent Stack infrastructure.
[](https://pypi.org/project/agentstack-sdk/)
[](https://opensource.org/licenses/Apache-2.0)
[](https://lfaidata.foundation/projects/)
## Overview
The `agentstack-sdk` provides Python utilities for wrapping agents built with any framework (LangChain, CrewAI, BeeAI Framework, etc.) for deployment on Agent Stack. It handles the A2A (Agent-to-Agent) protocol implementation, platform service integration, and runtime requirements so you can focus on agent logic.
## Key Features
- **Framework-Agnostic Deployment** - Wrap agents from any framework for Agent Stack deployment
- **A2A Protocol Support** - Automatic handling of Agent-to-Agent communication
- **Platform Service Integration** - Connect to Agent Stack's managed LLM, embedding, file storage, and vector store services
- **Context Storage** - Manage data associated with conversation contexts
## Installation
```bash
uv add agentstack-sdk
```
## Quickstart
```python
import os
from a2a.types import (
Message,
)
from a2a.utils.message import get_message_text
from agentstack_sdk.server import Server
from agentstack_sdk.server.context import RunContext
from agentstack_sdk.a2a.types import AgentMessage
server = Server()
@server.agent()
async def example_agent(input: Message, context: RunContext):
"""Polite agent that greets the user"""
hello_template: str = os.getenv("HELLO_TEMPLATE", "Ciao %s!")
yield AgentMessage(text=hello_template % get_message_text(input))
def run():
try:
server.run(host=os.getenv("HOST", "127.0.0.1"), port=int(os.getenv("PORT", 8000)))
except KeyboardInterrupt:
pass
if __name__ == "__main__":
run()
```
Run the agent:
```bash
uv run my_agent.py
```
## Available Extensions
The SDK includes extension support for:
- **Citations** - Source attribution (`CitationExtensionServer`, `CitationExtensionSpec`)
- **Trajectory** - Agent decision logging (`TrajectoryExtensionServer`, `TrajectoryExtensionSpec`)
- **Settings** - User-configurable agent parameters (`SettingsExtensionServer`, `SettingsExtensionSpec`)
- **LLM Services** - Platform-managed language models (`LLMServiceExtensionServer`, `LLMServiceExtensionSpec`)
- **Agent Details** - Metadata and UI enhancements (`AgentDetail`)
- **And more** - See [Documentation](https://agentstack.beeai.dev/stable/agent-development/overview)
Each extension provides both server-side handlers and A2A protocol specifications for seamless integration with Agent Stack's UI and infrastructure.
## Resources
- [Agent Stack Documentation](https://agentstack.beeai.dev)
- [GitHub Repository](https://github.com/i-am-bee/agentstack)
- [PyPI Package](https://pypi.org/project/agentstack-sdk/)
## Contributing
Contributions are welcome! Please see the [Contributing Guide](https://github.com/i-am-bee/agentstack/blob/main/CONTRIBUTING.md) for details.
## Support
- [GitHub Issues](https://github.com/i-am-bee/agentstack/issues)
- [GitHub Discussions](https://github.com/i-am-bee/agentstack/discussions)
---
Developed by contributors to the BeeAI project, this initiative is part of the [Linux Foundation AI & Data program](https://lfaidata.foundation/projects/). Its development follows open, collaborative, and community-driven practices.
| text/markdown | IBM Corp. | null | null | null | null | null | [] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"a2a-sdk==0.3.21",
"objprint>=0.3.0",
"uvicorn>=0.35.0",
"asyncclick>=8.1.8",
"sse-starlette>=2.2.1",
"starlette>=0.47.2",
"anyio>=4.9.0",
"opentelemetry-api>=1.35.0",
"opentelemetry-exporter-otlp-proto-http>=1.35.0",
"opentelemetry-instrumentation-fastapi>=0.56b0",
"opentelemetry-sdk>=1.35.0",
... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:26:43.414045 | agentstack_sdk-0.6.2rc3.tar.gz | 50,964 | 7a/36/6a221b138e851692962f3bf46896a33cb34d62b2bf8827a50c837efd43bb/agentstack_sdk-0.6.2rc3.tar.gz | source | sdist | null | false | b6589cf10033bc4d790d62318f2ba867 | 2fe4f9abc15fb405e55c6baad938fdfc4bfc16aed2361d25133a7716415242ab | 7a366a221b138e851692962f3bf46896a33cb34d62b2bf8827a50c837efd43bb | null | [] | 265 |
2.4 | llama-index-readers-whatsapp | 0.4.2 | llama-index readers whatsapp integration | # Whatsapp chat loader
```bash
pip install llama-index-readers-whatsapp
```
## Export a Whatsapp chat
1. Open a chat
2. Tap on the menu > More > Export chat
3. Select **Without media**
4. Save the `.txt` file in your working directory
For more info see [Whatsapp's Help Center](https://faq.whatsapp.com/1180414079177245/)
## Usage
- Messages will get saved in the format: `{timestamp} {author}: {message}`. Useful when you want to ask about specific people in a group chat.
- Metadata automatically included: `source` (file name), `author` and `timestamp`.
```python
from pathlib import Path
from llama_index.readers.whatsapp import WhatsappChatLoader
path = "whatsapp.txt"
loader = WhatsappChatLoader(path=path)
documents = loader.load_data()
# see what's created
documents[0]
# >>> Document(text='2023-02-20 00:00:00 ur mom: Hi 😊', doc_id='e0a7c508-4ba0-48e1-a2ba-9af133225636', embedding=None, extra_info={'source': 'WhatsApp Chat with ur mom', 'author': 'ur mom', 'date': '2023-02-20 00:00:00'})
```
This loader is designed to be used as a way to load data into [LlamaIndex](https://github.com/run-llama/llama_index/).
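For example, the loaded documents can be fed straight into an index (a minimal sketch using llama-index-core's standard `VectorStoreIndex`; it assumes a default embedding model and LLM are available, e.g. via `OPENAI_API_KEY`):

```python
# Minimal sketch: index the exported chat and ask a question about it.
# Assumes OPENAI_API_KEY is set so LlamaIndex's default LLM/embeddings work.
from llama_index.core import VectorStoreIndex
from llama_index.readers.whatsapp import WhatsappChatLoader

documents = WhatsappChatLoader(path="whatsapp.txt").load_data()
index = VectorStoreIndex.from_documents(documents)
response = index.as_query_engine().query("What did ur mom say?")
print(response)
```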
| text/markdown | null | Your Name <you@example.com> | batmanscode | null | null | chat, whatsapp | [] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"chat-miner<0.6,>=0.5.1",
"llama-index-core<0.15,>=0.13.0",
"pandas<3,>=2.2.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T12:26:14.172399 | llama_index_readers_whatsapp-0.4.2.tar.gz | 4,126 | ea/7f/ff479dd8f22f73221f83727f264199a1276993afc63a26f7cf24b50e5c1e/llama_index_readers_whatsapp-0.4.2.tar.gz | source | sdist | null | false | aefb93ea3093b08645acdd0da72f4ac9 | a7a4cb3818382eba92da449cb04a67a5a2189a0340a9178d3ff2bc8ecda6214d | ea7fff479dd8f22f73221f83727f264199a1276993afc63a26f7cf24b50e5c1e | MIT | [
"LICENSE"
] | 247 |
2.4 | py-ctp | 6.7.10.20260218 | Python CTP futures api | # py_ctp
A Python wrapper around the SFIT (上期技术) CTP futures trading API, exposing the interface to Python. Supports Windows (x86/x64) and Linux (x64).
## Updates
v6.7.10.20250422: all API functions are wrapped
The CTP interface bindings are generated by [ctp_generate](https://gitee.com/haifengat/ctp_generate)
## Installation
```sh
pip install py-ctp==6.7.10.20250425
```
#### Example
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
__title__ = 'test py ctp of se'
__author__ = 'HaiFeng'
__mtime__ = '20190506'
from py_ctp.trade import CtpTrade
from py_ctp.quote import CtpQuote
from py_ctp.enums import *
import time
class TestTrade(object):
def __init__(self, addr: str, broker: str, investor: str, pwd: str, appid: str, auth_code: str, proc: str):
self.front = addr
self.broker = broker
self.investor = investor
self.pwd = pwd
self.appid = appid
self.authcode = auth_code
self.proc = proc
self.t = CtpTrade()
self.t.OnConnected = self.on_connect
self.t.OnUserLogin = lambda o, x: print('Trade logon:', x)
self.t.OnDisConnected = lambda o, x: print(x)
self.t.OnRtnNotice = lambda obj, time, msg: print(f'OnNotice: {time}:{msg}')
self.t.OnErrRtnQuote = lambda obj, quote, info: None
self.t.OnErrRtnQuoteInsert = lambda obj, o: None
self.t.OnOrder = lambda obj, o: None
self.t.OnErrOrder = lambda obj, f, info: None
self.t.OnTrade = lambda obj, o: None
self.t.OnInstrumentStatus = lambda obj, inst, stat: None
def on_connect(self, obj):
self.t.ReqUserLogin(self.investor, self.pwd, self.broker, self.proc, self.appid, self.authcode)
def run(self):
self.t.ReqConnect(self.front)
# self.t.ReqConnect('tcp://192.168.52.4:41205')
def release(self):
self.t.ReqUserLogout()
class TestQuote(object):
"""TestQuote"""
def __init__(self, addr: str, broker: str, investor: str, pwd: str):
""""""
self.front = addr
self.broker = broker
self.investor = investor
self.pwd = pwd
self.q = CtpQuote()
self.q.OnConnected = lambda x: self.q.ReqUserLogin(self.investor, self.pwd, self.broker)
self.q.OnUserLogin = lambda o, i: self.q.ReqSubscribeMarketData('rb2409')
def run(self):
self.q.ReqConnect(self.front)
def release(self):
self.q.ReqUserLogout()
if __name__ == "__main__":
front_trade = 'tcp://180.168.146.187:10202'
front_quote = 'tcp://180.168.146.187:10212'
broker = '9999'
investor = ''
pwd = ''
appid = ''
auth_code = ''
proc = ''
if investor == '':
investor = input('investor:')
pwd = input('password:')
appid = input('appid:')
auth_code = input('auth code:')
proc = input('product info:')
tt = TestTrade(front_trade, broker, investor, pwd, appid, auth_code, proc)
tt.run()  # connect and log in to the trade front
tq = TestQuote(front_quote, broker, investor, pwd)
tq.run()  # connect, log in, and subscribe to market data
input('press Enter to exit...')
tq.release()
tt.release()
```
## Publishing to PyPI
Since 2023, PyPI no longer supports username/password authentication; you must use an API token or Trusted Publishers.
### Option 1: Use an API token (recommended)
1. Visit [PyPI](https://pypi.org/manage/account/) and log in
2. Go to the Account Settings page
3. Find the "API tokens" section and click "Add API token"
4. Add a description for the token (e.g., py_ctp release)
5. Choose a scope, usually "Upload packages"
6. Click "Add token" and copy the generated token
Once you have an API token, there are two ways to use it:
#### Method 1: Use a `.pypirc` configuration file
Create or edit the `~/.pypirc` file:
```ini
[pypi]
username = __token__
password = pypi-*******************************
```
Replace `pypi-*******************************` with the actual API token you just generated.
Then run the publish command as usual:
```bash
rm dist -rf && python setup.py sdist && twine upload dist/*
```
#### Method 2: Use command-line arguments
```bash
rm dist -rf && python setup.py sdist && twine upload -u __token__ -p pypi-******************************* dist/*
```
### Option 2: Use Trusted Publishers (for GitHub Actions and other CI/CD systems)
If you use GitHub Actions or another CI/CD system, you can configure Trusted Publishers for automated publishing.
For details, see the official documentation:
- [API token help](https://pypi.org/help/#apitoken)
- [Trusted Publishers help](https://pypi.org/help/#trusted-publishers)
| text/markdown | HaiFengAT | haifengat@vip.qq.com | null | null | MIT License | null | [
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: ... | [
"any"
] | https://github.com/haifengat/pyctp | null | >=3.6.0 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-18T12:26:00.476821 | py_ctp-6.7.10.20260218-py3-none-any.whl | 10,690,481 | a2/fb/d58710f96c449c7a5972a1a65c920bfb68c92a5634bcce24e863d82323ba/py_ctp-6.7.10.20260218-py3-none-any.whl | py3 | bdist_wheel | null | false | c3b54fad504221f7563c569ba82f4dd9 | 1e2256a5452d95c0bb5b21be46327a2c23cf1d24c9648ac1110b2e1eb177da86 | a2fbd58710f96c449c7a5972a1a65c920bfb68c92a5634bcce24e863d82323ba | null | [
"LICENSE"
] | 260 |
2.4 | py-surepetcare | 0.5.11 | Python library for SurePetcare API | # SurePetcare API Client
[![PyPI version][pypi-shield]][pypi]
[![Python Version][python-shield]][pypi]
[![License][license-shield]](LICENSE.md)
[![Documentation Status][wiki-shield]][wiki]
[![PyPI Downloads][pypi-downloads-shield]][pypi]
[![Build Status][build-shield]][build]
[![Code Coverage][codecov-shield]][codecov]
[![Open in Dev Containers][devcontainer-shield]][devcontainer]
## About
This repository provides a Python client for accessing the [SurePetcare API](https://app-api.beta.surehub.io/index.html?urls.primaryName=V1).
It consists of IO support (surepcio) and a CLI (surepccli).
For Home Assistant support, use [hass-surepetcare](https://github.com/FredrikM97/hass-surepetcare)
## Cli support
This repo also supports (to some extent) CLI commands. The CLI is installed with `pip install .[cli]` and is not included by default.
To see available commands use:
```bash
surepccli --help
```
However, most functionality requires login; therefore, first run:
```bash
surepccli account login <email>
```
It is possible to fetch available households with:
```bash
surepccli household
```
There is also support for storing some properties in a `.env` file. Check the available properties of the household and device for more info.
## Supported devices
* Hub
* Pet door
* Feeder Connect
* Dual Scan Connect
* Dual Scan Pet Door
* Poseidon Connect
* No ID Dog Bowl Connect
## Contributing
Before pushing, validate the changes with `pre-commit run --all-files`.
Run `pip install .[dev]` to add dependencies for development. Start the application and enable debug logging. The debug logs contain the request data, which can be attached to an issue and used for snapshot testing.
[build-shield]: https://github.com/FredrikM97/py-surepetcare/actions/workflows/test-and-coverage.yml/badge.svg
[build]: https://github.com/FredrikM97/py-surepetcare/actions
[codecov-shield]: https://codecov.io/gh/FredrikM97/py-surepetcare/branch/dev/graph/badge.svg
[codecov]: https://codecov.io/gh/FredrikM97/py-surepetcare
[license-shield]: https://img.shields.io/github/license/FredrikM97/py-surepetcare.svg
[devcontainer-shield]: https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode
[devcontainer]: https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/FredrikM97/py-surepetcare
[ha-versions-shield]: https://img.shields.io/badge/dynamic/json?url=https://raw.githubusercontent.com/FredrikM97/py-surepetcare/main/hacs.json&label=homeassistant&query=$.homeassistant&color=blue&logo=homeassistant
[releases-shield]: https://img.shields.io/github/release/FredrikM97/py-surepetcare.svg
[releases]: https://github.com/FredrikM97/py-surepetcare/releases
[wiki-shield]: https://img.shields.io/badge/docs-wiki-blue.svg
[wiki]: https://github.com/FredrikM97/py-surepetcare/wiki
[homeassistant]: https://my.home-assistant.io/redirect/hacs_repository/?owner=FredrikM97&repository=py-surepetcare&category=integration
[pypi-shield]: https://img.shields.io/pypi/v/py-surepetcare.svg
[pypi]: https://pypi.org/project/py-surepetcare/
[pypi-downloads-shield]: https://img.shields.io/pypi/dm/py-surepetcare.svg
[python-shield]: https://img.shields.io/pypi/pyversions/py-surepetcare.svg
| text/markdown | FredrikM97 | null | null | null | MIT | null | [
"Intended Audience :: Developers",
"Topic :: Home Automation",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"aiohttp>=3.9.0",
"pydantic>=2.11.7",
"aresponses; extra == \"dev\"",
"pytest>=8.3.5; extra == \"dev\"",
"pytest-asyncio>=0.26.0; extra == \"dev\"",
"python-dotenv>=1.1.0; extra == \"dev\"",
"pre_commit>=4.2.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"syrupy; extra == \"dev\"",
"... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T12:25:21.400989 | py_surepetcare-0.5.11.tar.gz | 24,413 | 41/cf/b8a18c1c6fc5f703ac6591fa4132db4d801ed2482a875013c7e64e55ec49/py_surepetcare-0.5.11.tar.gz | source | sdist | null | false | e8cb1fb54269a28ed3ae148cf4f85552 | 522cc00726599b5d65f562cfe23f6b7734c0889ce675a3120e01d10452dcb86a | 41cfb8a18c1c6fc5f703ac6591fa4132db4d801ed2482a875013c7e64e55ec49 | null | [
"LICENSE"
] | 247 |
2.4 | pulumi-stripe | 0.0.28 | A Pulumi package for creating and managing Stripe resources. | # Stripe Resource Provider
The Stripe Resource Provider lets you manage [Stripe](https://stripe.com) resources.
This is a bridged provider from https://github.com/lukasaron/terraform-provider-stripe
## Installing
This package is available for several languages/platforms:
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install pulumi-stripe
```
or `yarn`:
```bash
yarn add pulumi-stripe
```
or indeed `pnpm`:
```bash
pnpm add pulumi-stripe
```
### Python
To use from Python, install using `pip`:
```bash
pip install pulumi-stripe
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/georgegebbett/pulumi-stripe/sdk/go
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.Stripe
```
## A note on Go and .NET
I have never done any real development in Go or .NET - indeed this was my first foray into Go at all. I cannot warrant
that I have published the packages correctly, or provided good instructions for installing them.
Please feel free to open a PR with any suggestions!
## Configuration
The following configuration points are available for the `stripe` provider:
- `stripe:apiKey` (environment: `STRIPE_API_KEY`) - the API key for `stripe`
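Once the key is configured (e.g. `pulumi config set stripe:apiKey --secret`), the provider is used like any other Pulumi SDK. The sketch below is hedged: the `Product` resource and its `name` property are assumptions mirroring the upstream terraform-provider-stripe's `stripe_product`; consult the generated SDK for the exact names.

```python
# Hedged sketch: create a Stripe product with the bridged provider.
# stripe.Product / name are assumed from the upstream stripe_product
# resource; verify against the generated SDK.
import pulumi
import pulumi_stripe as stripe

product = stripe.Product("demo-product", name="Demo Product")
pulumi.export("product_id", product.id)
```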
| text/markdown | null | null | null | null | Apache-2.0 | pulumi stripe category/cloud | [] | [] | https://www.pulumi.com | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.0.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Repository, https://github.com/georgegebbett/pulumi-stripe"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T12:25:10.412438 | pulumi_stripe-0.0.28.tar.gz | 65,103 | 1c/1c/d56035980280f06968b7cded18964e2fe41e5363eb2818167e12f3eb74f4/pulumi_stripe-0.0.28.tar.gz | source | sdist | null | false | fe9602fc61103ed267794a1ffe26954d | b457c407c0e5b02cdae12692dc5857274aa09d2ba74dc208ec2069d5a36c15d4 | 1c1cd56035980280f06968b7cded18964e2fe41e5363eb2818167e12f3eb74f4 | null | [] | 168 |
2.4 | copyme | 0.2.2 | A simple template for Python development | 




[](https://github.com/psf/black)
[](https://iporepos.github.io/copyme/)
[](https://pypi.org/project/copyme/)
[](
https://pypi.org/project/copyme/)
<img src="https://raw.githubusercontent.com/iporepos/copyme/master/docs/figs/logo.png" height="130" width="130" alt="copyme logo">
---
# copyme
A simple template for Python development. Use this repository as a template for developing a Python library or package.
> [!NOTE]
> Check out the [documentation website](https://iporepos.github.io/copyme/)
---
# Templates
When copying files from this repo, remember that they are _templates_. So:
1) look for `[CHANGE THIS]` for mandatory modifications;
2) look for `[CHECK THIS]` for possible modifications;
3) look for `[EXAMPLE]` for simple examples (comment or uncomment it if needed);
4) look for `[ADD MORE IF NEDDED]` for possible extra features;
5) placeholders are designated by curly braces: `{replace-this}`.
---
# Configuration files
This repository relies on several **configuration files** that are essential for the proper
functioning of the template. Each file has a specific role, and some of them work together,
so they should be edited thoughtfully. Below is an overview of the main configuration files
and what you should know about them.
| File | Purpose | Key Notes |
|------------------------------------|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------|
| **`pyproject.toml`** | Project configuration | Manages dependencies, build system, and project settings. Update when adding dependencies or changing project structure. |
| **`.gitignore`** | Git ignore rules | Specifies files/folders Git should ignore (e.g., temp files, datasets, build outputs). Keeps repo clean. |
| **`.github/workflows/style.yaml`** | Style CI configuration | Runs code style checks using [Black](https://black.readthedocs.io/en/stable/). Depends on `pyproject.toml` dev dependencies. |
| **`docs/conf.py`** | Docs configuration | Configures [Sphinx](https://www.sphinx-doc.org/en/master/index.html) for building documentation. Update if project structure changes. |
| **`.github/workflows/docs.yaml`** | Docs CI configuration | Automates online docs build. Relies on `pyproject.toml` and `docs/conf.py`. Requires extra steps (see file). |
| **`.github/workflows/dist.yaml`** | PyPI CD configuration | Automates package distribution on the official PyPI website. |
| **`tests/conftest.py`** | Testing configuration | Provides shared fixtures and settings for tests. Can be customized to fit project needs. |
| **`.github/workflows/tests.yaml`** | Testing CI configuration | Runs automated unit tests on CI. Ensures code correctness after changes. |
> [!NOTE]
> All config files are commented with recommended actions and extra steps.
> [!WARNING]
> Online documentation build may require additional setup — check `.github/workflows/docs.yaml`.
> [!IMPORTANT]
> The Continuous Integration (CI) setup runs check-ups on commits and prevents bad code
> from being pushed to the main branch. So Style, Docs and Tests must always pass.
---
# Repo layout
A standard Python repo may use the following layout.
This layout is known as `src` layout, since it stores the source code under a `src/{repo}` folder.
> See more on [flat vs src layout](https://packaging.python.org/en/latest/discussions/src-layout-vs-flat-layout/)
```txt
{repository}/
│
├── LICENSE
├── README.md # [CHECK THIS] this file (landing page)
├── .gitignore # [CHECK THIS] configuration of git vcs ignoring system
├── pyproject.toml # [CHECK THIS] configuration of python project
├── MANIFEST.in # [CHECK THIS] configuration of source distribution
|
├── .github/ # github folder
│ └── workflows/ # folder for continuous integration services
│ ├── style.yaml # [CHECK THIS] configuration file for style check workflow
│ ├── tests.yaml # [CHECK THIS] configuration file for tests workflow
│ └── docs.yaml # [CHECK THIS] configuration file for docs build workflow
│
├── dev/ # development folder
│ ├── checkout.py # checkout script
│ ├── docs.py # build docs script
│ ├── style.py # style script
│ ├── tests.py # testing script
│ └── templates-ci/ # templates for CI
│
├── docs/ # documentation folder
│ ├── about.rst # info about the repo
│ ├── api.rst # api reference using sphinx autodoc
│ ├── conf.py # [CHECK THIS] configuration file for sphinx
│ ├── dummy.md # markdown docs also works
│ ├── index.rst # home page for documentation
│ ├── usage.rst # instructions for using this repo
│ ├── make.bat # (optional) [generated] sphinx auxiliar file
│ ├── Makefile # (optional) [generated] sphinx auxiliar file
│ ├── figs/ # figs-only files
│ ├── data/ # docs-only data
│ ├── generated/ # [generated] sphinx created files
│ ├── _templates/ # [ignored] [generated] sphinx created stuff
│ ├── _static/ # [generated] sphinx created stuff
│ └── _build/ # [ignored] [generated] sphinx build
│
├── src/ # source code folder
│ ├── {repository}.egg-info # [ignored] [generated] files for local development
│ └── {repository}/ # [CHANGE THIS] source code root
│ ├── __init__.py # template init file
│ ├── module.py # template module
│ ├── ... # develop your modules
│ ├── mypackage/ # template package
│ │ └── submodule.py
│ └── data/ # run-time data
│
├── examples/ # (optional) learning resources
│ ├── examples_01.ipynb
│ └── examples_02.ipynb
│
└── tests/ # testing code folder
├── conftest.py # [CHECK THIS] configuration file of tests
├── unit/ # unit tests package
│ └── test_module.py # template module for unit tests
├── bcmk/ # benchmarking tests package
│ └── test_bcmk.py # template module for benchmarking tests
├── data/ # test-only data
│ ├── test_data.csv
│ ├── datasets.csv # table of remote datasets
│ └── dataset1/ # [ignored] subfolders in data
└── outputs/ # [ignored] tests outputs
```
| text/markdown | Iporã Possantti | null | Iporã Possantti | null | null | python | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests",
"numpy",
"pandas",
"black==25.12.0; extra == \"dev\"",
"notebook; extra == \"dev\"",
"sphinx; extra == \"docs\"",
"sphinx-autodoc-typehints; extra == \"docs\"",
"sphinx-rtd-theme; extra == \"docs\"",
"pydata-sphinx-theme; extra == \"docs\"",
"sphinx_copybutton; extra == \"docs\"",
"m... | [] | [] | [] | [
"Homepage, https://iporepos.github.io/copyme",
"Repository, https://github.com/iporepos/copyme"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T12:24:01.086412 | copyme-0.2.2.tar.gz | 59,063 | bf/59/53d2147d28742a4df9cbbc9568a9eab4923487aa107de92f9794fe0d68e0/copyme-0.2.2.tar.gz | source | sdist | null | false | 211b0a26f8c68a8446dc096bfd2ecaa8 | b15587638fac57e764cee082578d51cacada739606f983c02125ca1f138300bf | bf5953d2147d28742a4df9cbbc9568a9eab4923487aa107de92f9794fe0d68e0 | GPL-3.0-or-later | [
"LICENSE"
] | 269 |