metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | aviary.labbench | 0.33.0 | LAB-Bench environments implemented with aviary | # aviary.labbench
LAB-Bench environments implemented with aviary,
allowing agents to perform question answering on scientific tasks.
## Installation
To install the LAB-Bench environment, run:
```bash
pip install 'fhaviary[labbench]'
```
## Usage
In [`labbench/env.py`](src/aviary/envs/labbench/env.py), you will find:
- `GradablePaperQAEnvironment`: a PaperQA-backed environment
that can grade answers given an evaluation function.
- `ImageQAEnvironment`: a `GradablePaperQAEnvironment`
subclass for QA where image(s) are pre-added.
And in [`labbench/task.py`](src/aviary/envs/labbench/task.py), you will find:
- `TextQATaskDataset`: a task dataset designed to
pull down FigQA, LitQA2, or TableQA from Hugging Face,
and create one `GradablePaperQAEnvironment` per question.
- `ImageQATaskDataset`: a task dataset that pairs with `ImageQAEnvironment`
for FigQA or TableQA.
Here is an example of how to use them:
```python
import os
from ldp.agent import SimpleAgent
from ldp.alg import Evaluator, EvaluatorConfig, MeanMetricsCallback
from paperqa import Settings
from aviary.env import TaskDataset
async def evaluate(folder_of_litqa_v2_papers: str | os.PathLike) -> None:
settings = Settings(paper_directory=folder_of_litqa_v2_papers)
dataset = TaskDataset.from_name("litqa2", settings=settings)
metrics_callback = MeanMetricsCallback(eval_dataset=dataset)
evaluator = Evaluator(
config=EvaluatorConfig(batch_size=3),
agent=SimpleAgent(),
dataset=dataset,
callbacks=[metrics_callback],
)
await evaluator.evaluate()
print(metrics_callback.eval_means)
```
### Image Question-Answer
This is an environment/dataset for giving PaperQA a `Docs` object with
the image(s) for one LAB-Bench question.
It's designed as a comparison with zero-shotting the question to an LLM:
instead of a single prompt, the image is put through the PaperQA agent loop.
```python
from typing import cast
import litellm
import pytest
from ldp.agent import Agent
from ldp.alg import (
Evaluator,
EvaluatorConfig,
MeanMetricsCallback,
StoreTrajectoriesCallback,
)
from paperqa.settings import AgentSettings, IndexSettings
from aviary.envs.labbench import (
ImageQAEnvironment,
ImageQATaskDataset,
LABBenchDatasets,
)
@pytest.mark.asyncio
async def test_image_qa(tmp_path) -> None:
litellm.num_retries = 8 # Mitigate connection-related failures
settings = ImageQAEnvironment.make_base_settings()
settings.agent = AgentSettings(
agent_type="ldp.agent.SimpleAgent",
index=IndexSettings(paper_directory=tmp_path),
# TODO: add image support for paper_search
tool_names={"gather_evidence", "gen_answer", "complete", "reset"},
agent_evidence_n=3, # Bumped up to collect several perspectives
)
dataset = ImageQATaskDataset(dataset=LABBenchDatasets.TABLE_QA, settings=settings)
t_cb = StoreTrajectoriesCallback()
m_cb = MeanMetricsCallback(eval_dataset=dataset, track_tool_usage=True)
evaluator = Evaluator(
config=EvaluatorConfig(
batch_size=256, # Use batch size greater than FigQA size and TableQA size
max_rollout_steps=18, # Match aviary paper's PaperQA setting
),
agent=cast(Agent, await settings.make_ldp_agent(settings.agent.agent_type)),
dataset=dataset,
callbacks=[t_cb, m_cb],
)
await evaluator.evaluate()
print(m_cb.eval_means)
```
## References
[1] Skarlinski et al.
[Language agents achieve superhuman synthesis of scientific knowledge](https://arxiv.org/abs/2409.13740).
ArXiv:2409.13740, 2024.
[2] Laurent et al.
[LAB-Bench: Measuring Capabilities of Language Models for Biology Research](https://arxiv.org/abs/2407.10362).
ArXiv:2407.10362, 2024.
| text/markdown | null | FutureHouse technical staff <hello@futurehouse.org> | null | null | null | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"P... | [] | null | null | >=3.11 | [] | [] | [] | [
"fhaviary>=0.14",
"fhlmi",
"ldp>=0.25.2",
"paper-qa[pymupdf]>=2025",
"pydantic~=2.0",
"tenacity",
"typing-extensions; python_version <= \"3.12\"",
"datasets>=2.15; extra == \"datasets\"",
"aviary.labbench[datasets,typing]; extra == \"dev\"",
"pandas; extra == \"dev\"",
"paper-qa>=5.29.1; extra =... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:12:49.351649 | aviary_labbench-0.33.0.tar.gz | 1,516,817 | b4/85/b3447562e5853477e8110489ad16754375377bb45d57843a8ccd6a803b51/aviary_labbench-0.33.0.tar.gz | source | sdist | null | false | 01a64c902b7541754633da1d1fbe69bc | 8ac52d76aed818e1ad36b71fda29678297285c9c80d8c939ac42e4cc7c3c9430 | b485b3447562e5853477e8110489ad16754375377bb45d57843a8ccd6a803b51 | null | [] | 0 |
2.4 | aviary.hotpotqa | 0.33.0 | HotPotQA environment implemented with aviary | # aviary.hotpotqa
The HotPotQA environment asks agents to perform multi-hop question answering on the HotPotQA dataset [1].
## References
[1] Yang et al. [HotpotQA: A Dataset for Diverse,
Explainable Multi-Hop Question Answering](https://aclanthology.org/D18-1259/). EMNLP, 2018.
## Installation
To install the HotPotQA environment, run the following command:
```bash
pip install 'fhaviary[hotpotqa]'
```
| text/markdown | null | FutureHouse technical staff <hello@futurehouse.org> | null | null | null | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"P... | [] | null | null | >=3.11 | [] | [] | [] | [
"beautifulsoup4",
"datasets>=2.15",
"fhaviary",
"httpx",
"httpx-aiohttp",
"pydantic~=2.0",
"tenacity"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:12:48.197841 | aviary_hotpotqa-0.33.0.tar.gz | 12,544 | 99/d7/12c65e031577e758f50d36b6c5c3a0c088a3ce9b21a84019506a3969cfd5/aviary_hotpotqa-0.33.0.tar.gz | source | sdist | null | false | 10f0fd2cdc3ecbb0343fc9da368b090f | e6c15f3bfa3bce95091ece2dfcc3bdf069c65a55680f68a8ba33992d8f8b6e01 | 99d712c65e031577e758f50d36b6c5c3a0c088a3ce9b21a84019506a3969cfd5 | null | [] | 0 |
2.4 | aviary.gsm8k | 0.33.0 | GSM8k environment implemented with aviary | # aviary.gsm8k
GSM8k environment where agents solve math word problems
from the GSM8k dataset using a calculator tool.
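As a rough illustration of the kind of calculator tool such an environment can expose to the agent — a minimal safe-arithmetic sketch, not this package's actual implementation:

```python
import ast
import operator

# Supported arithmetic operators (illustrative subset)
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calculator(expression: str) -> float:
    """Safely evaluate an arithmetic expression (no names, no calls)."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return ev(ast.parse(expression, mode="eval"))

print(calculator("(48 - 12) / 4"))  # 9.0
```

Walking the AST instead of calling `eval()` keeps the tool safe against arbitrary code in model-generated input.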
## Citation
The citation for GSM8k is given below:
```bibtex
@article{gsm8k-paper,
title = {Training verifiers to solve math word problems},
author = {
Cobbe, Karl and Kosaraju, Vineet and Bavarian, Mohammad and Chen, Mark and
Jun, Heewoo and Kaiser, Lukasz and Plappert, Matthias and Tworek, Jerry and
Hilton, Jacob and Nakano, Reiichiro and others
},
year = 2021,
journal = {arXiv preprint arXiv:2110.14168}
}
```
## References
[1] Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano,
R. and Hesse, C., 2021.
[Training verifiers to solve math word problems](https://arxiv.org/abs/2110.14168). arXiv preprint arXiv:2110.14168.
## Installation
To install the GSM8k environment, run the following command:
```bash
pip install 'fhaviary[gsm8k]'
```
| text/markdown | null | FutureHouse technical staff <hello@futurehouse.org> | null | null | null | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"P... | [] | null | null | >=3.11 | [] | [] | [] | [
"datasets>=2.15",
"fhaviary",
"pydantic~=2.0",
"pandas-stubs; extra == \"typing\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:12:46.676209 | aviary_gsm8k-0.33.0.tar.gz | 8,948 | 5a/7c/f4a5cd830647cf9bfa2f580966179442fc8264150a1f529e5af1da4643ca/aviary_gsm8k-0.33.0.tar.gz | source | sdist | null | false | bce74c7788dc27770c8a97d100f2b34e | 63fd71595b51c244f5edd077835140b440f257b82c2ca076b3568387200f5aa5 | 5a7cf4a5cd830647cf9bfa2f580966179442fc8264150a1f529e5af1da4643ca | null | [] | 0 |
2.4 | wf-suite | 1.0.2 | APS Wavefront Analysis Tools | # WFSuite 1.0
WFSuite is a graphical and command-line toolkit for coded-mask-based X-ray wavefront sensing and phase reconstruction at synchrotron beamlines.
## Installation
We suggest creating a virtual environment with virtualenv or conda.
The program performs best with Python 3.10+ and has been tested up to Python 3.13.
To install it, run the following in the target Python environment:
```bash
python -m pip install wf-suite
```
## Starting WFSuite
Open the GUI with:
```bash
python -m aps.wf_suite Launcher
```
The launcher provides access to:
- Absolute Phase
- Relative Metrology
- Wavefront Sensor (opens automatically)
Please refer to the [user manual](User-Manual.pdf) in the home directory of this repository for details.
| text/markdown | Luca Rebuffi, Xianbo Shi | lrebuffi@anl.gov | XSD-OPT Group @ APS-ANL | lrebuffi@anl.gov | BSD-3 | dictionary, glossary, synchrotronsimulation | [
"Development Status :: 4 - Beta",
"Natural Language :: English",
"Environment :: Console",
"Environment :: Plugins",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Visualization",
"Intended Audience :: Science/Research"
] | [] | https://github.com/APS-XSD-OPT-Group/WFSuite | https://github.com/APS-XSD-OPT-Group/WFSuite | null | [] | [] | [] | [
"aps-common-libraries>=1.0.28",
"PyQt6",
"PyWavelets",
"wofryImpl",
"wofrysrw",
"srwpy",
"cmasher"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-18T23:12:28.093621 | wf_suite-1.0.2.tar.gz | 2,415,289 | e3/d0/eaac0266aba5a9de3f8ccfc2c5fef1f1182995a5791b3a6a4ea8ee3bfea6/wf_suite-1.0.2.tar.gz | source | sdist | null | false | f6161919fe3e4d4c49bf13b93e159b31 | cec9408de72b5adf8dde8d80fc8a194c785517c2f46600c2a20faffe85803db3 | e3d0eaac0266aba5a9de3f8ccfc2c5fef1f1182995a5791b3a6a4ea8ee3bfea6 | null | [
"LICENSE"
] | 252 |
2.1 | mps-sim | 1.0.0 | MPS-based quantum circuit simulator with multilevel Richardson extrapolation | # mps_sim — MPS Quantum Circuit Simulator with Richardson Extrapolation
A production-ready quantum circuit simulator based on Matrix Product States (MPS), featuring multilevel Richardson extrapolation as a unique accuracy-enhancement technique.
---
## Installation
```bash
pip install numpy # only dependency
```
Copy the `mps_sim/` folder into your project.
---
## Quick Start
```python
from mps_sim import Circuit, MPSSimulator, MultiChiRunner, SweepConfig
# --- Simple simulation ---
c = Circuit(4)
c.h(0).cx(0, 1).cx(1, 2).cx(2, 3) # GHZ state
sim = MPSSimulator(chi=64)
state = sim.run(c)
print(state.expectation_pauli_z(0)) # → 0.0
# --- Extrapolated simulation ---
runner = MultiChiRunner(SweepConfig(base_chi=16, ratio=2, n_levels=3))
result = runner.run(c, {'Z0': ('Z', 0), 'Z1': ('Z', 1)})
print(result.summary())
```
---
## Architecture
```
mps_sim/
├── core/
│ └── mps.py # MPS tensor train, SVD truncation, canonicalization
├── gates/
│ └── __init__.py # Full gate library (H, CNOT, Rx, ZZ, ...)
├── circuits/
│ └── __init__.py # Circuit builder API + MPSSimulator engine
├── extrapolation/
│ └── __init__.py # Richardson extrapolation engine + MultiChiRunner
├── tests/
│ └── test_all.py # Test suite
├── examples/
│ └── examples.py # Runnable examples
└── cli.py # Command-line interface
```
---
## Richardson Extrapolation — The Key Feature
### Theory
MPS truncation error in expectation values follows:
```
<O>(χ) ≈ <O>(∞) + a₁/χ^α + a₂/χ^(2α) + ...
```
By running at bond dimensions `χ, 2χ, 4χ, ...` and applying Richardson extrapolation hierarchically (analogous to Romberg integration), we cancel successive error orders:
```
Level-1: E₁ = (2^α · f(2χ) - f(χ)) / (2^α - 1) ← cancels O(1/χ^α)
Level-2: E₂ = (2^α · E₁(2χ) - E₁(χ)) / (2^α - 1) ← cancels O(1/χ^(2α))
```
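The hierarchical scheme above can be sketched in a few lines of plain Python — an illustrative re-implementation of the formulas, not the package's actual `RichardsonExtrapolator`:

```python
def richardson(values, ratio=2, alpha=1.0):
    """Hierarchical Richardson extrapolation (Romberg-style).

    `values` are f(chi) at bond dimensions chi, ratio*chi, ratio^2*chi, ...
    Each level cancels the next power of the assumed 1/chi^alpha error.
    """
    level, table = 1, list(values)
    while len(table) > 1:
        r = ratio ** (level * alpha)  # cancels the O(1/chi^(level*alpha)) term
        table = [(r * table[i + 1] - table[i]) / (r - 1)
                 for i in range(len(table) - 1)]
        level += 1
    return table[0]

# Synthetic data with exact limit 0.25 and a two-term power-law error
chis = [8, 16, 32, 64]
vals = [0.25 + 1 / c + 0.5 / c**2 for c in chis]
print(abs(richardson(vals) - 0.25) < 1e-12)  # True: both error terms cancel
```

With two error terms and four levels, the extrapolated value matches the exact limit to floating-point precision, mirroring the improvement demonstrated below.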
### Demonstrated improvement
```
chi= 8: 0.44674 (error=1.3e-1)
chi= 16: 0.36103 (error=4.7e-2)
chi= 32: 0.33073 (error=1.7e-2)
chi= 64: 0.32002 (error=5.9e-3)
Extrapolated: 0.31416 (error=5.6e-17) ← 1e14 improvement
```
### When it works best
- Gapped 1D systems (area-law entanglement)
- Circuits in the weakly-entangled regime
- Deep circuits at large enough χ (tail of entanglement spectrum)
### Built-in reliability diagnostics
The extrapolator automatically flags when the power-law assumption breaks down:
- Non-monotone convergence
- Richardson corrections not decreasing
- Large uncertainty relative to signal
- Alpha outside plausible physical range
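Two of these checks can be sketched as follows (illustrative only; the names and thresholds are assumptions, not the package's actual code):

```python
def convergence_flags(values):
    """Flag signs that the 1/chi^alpha power-law assumption is breaking down.

    `values` are expectation values at strictly increasing bond dimensions.
    """
    diffs = [values[i + 1] - values[i] for i in range(len(values) - 1)]
    flags = []
    # Non-monotone convergence: successive corrections change sign
    if any(d > 0 for d in diffs) and any(d < 0 for d in diffs):
        flags.append("non-monotone convergence")
    # Corrections should shrink as chi grows
    mags = [abs(d) for d in diffs]
    if any(mags[i + 1] >= mags[i] for i in range(len(mags) - 1)):
        flags.append("corrections not decreasing")
    return flags

print(convergence_flags([0.447, 0.361, 0.331, 0.320]))  # [] — clean decay
```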
---
## Gate Library
**Single-qubit:** `I, X, Y, Z, H, S, T, Sdg, Tdg, Rx(θ), Ry(θ), Rz(θ), P(φ), U(θ,φ,λ)`
**Two-qubit:** `CNOT/CX, CZ, SWAP, iSWAP, XX(θ), YY(θ), ZZ(θ), CRz(θ), CP(φ)`
---
## Circuit Builder API
```python
from mps_sim import Circuit
import numpy as np

# Leading-dot continuations must be wrapped in parentheses to be valid Python
c = Circuit(6)
(
    c.h(0)             # Hadamard
    .cx(0, 1)          # CNOT
    .rz(np.pi / 4, 2)  # Rz rotation
    .zz(0.5, 3, 4)     # ZZ(θ) interaction
    .swap(4, 5)        # SWAP
)
```
---
## CLI
```bash
# Single simulation
python cli.py simulate --circuit ghz --n 10 --chi 64
# Extrapolated simulation
python cli.py extrapolate --circuit ising --n 8 --chi-base 16 --levels 3
# Benchmark chi convergence
python cli.py benchmark --circuit ising --n 8 --chi-start 8 --chi-levels 4
# Show help
python cli.py info
```
---
## API Reference
### `MPSSimulator(chi, svd_threshold=1e-14)`
Simulates a circuit at a fixed bond dimension.
- `.run(circuit) → MPS`
- `.expectation_value(state, observable, site) → float`
### `MPS`
- `.expectation_pauli_z/x/y(site) → float`
- `.to_statevector() → ndarray` (n ≤ 20 only)
- `.bond_dimensions() → list`
- `.total_truncation_error() → float`
### `MultiChiRunner(config, extrapolator, verbose)`
Runs the full sweep-and-extrapolate pipeline.
- `.run(circuit, observables) → MultiObservableResult`
- `.run_custom(fn) → MultiObservableResult`
### `SweepConfig(base_chi, ratio, n_levels, alpha)`
- `.bond_dims` — list of bond dimensions
- `.effective_chi()` — equivalent direct bond dimension
### `RichardsonExtrapolator(ratio, alpha, min_reliable_improvement)`
- `.extrapolate(bond_dims, values) → ExtrapolationResult`
- `.extrapolate_multi(bond_dims, values_dict) → MultiObservableResult`
- `.estimate_alpha(bond_dims, values) → float`
---
## Limitations
- Two-qubit gates on non-adjacent qubits use SWAP chains (increases depth)
- `to_statevector()` is exponential — only usable for n ≤ 20
- Richardson extrapolation assumes power-law error decay — may not hold for highly entangled circuits
- No GPU acceleration (numpy-based)
---
## Dependencies
- Python 3.8+
- NumPy
| text/markdown | null | null | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.2 | 2026-02-18T23:12:27.523017 | mps_sim-1.0.0.tar.gz | 36,485 | d4/35/1717369285c37aab21ba2947ba61fd45471d481a008e38a3d15271564f1f/mps_sim-1.0.0.tar.gz | source | sdist | null | false | fea84124a2c227d0a9d67dd9e57d025c | 2cb67bc2c2167e8b7247d7b7bfa09598e7c2638534da61e6ac492bb9c9b6e5cb | d4351717369285c37aab21ba2947ba61fd45471d481a008e38a3d15271564f1f | null | [] | 279 |
2.4 | django-bird-colony | 0.14.1 | A simple Django app for managing a bird breeding colony | bird-colony
-----------
|ProjectStatus|_ |Version|_ |BuildStatus|_ |License|_ |PythonVersions|_
.. |ProjectStatus| image:: https://www.repostatus.org/badges/latest/active.svg
.. _ProjectStatus: https://www.repostatus.org/#active
.. |Version| image:: https://img.shields.io/pypi/v/django-bird-colony.svg
.. _Version: https://pypi.python.org/pypi/django-bird-colony/
.. |BuildStatus| image:: https://github.com/melizalab/django-bird-colony/actions/workflows/test.yml/badge.svg
.. _BuildStatus: https://github.com/melizalab/django-bird-colony/actions/workflows/test.yml
.. |License| image:: https://img.shields.io/pypi/l/django-bird-colony.svg
.. _License: https://opensource.org/license/bsd-3-clause/
.. |PythonVersions| image:: https://img.shields.io/pypi/pyversions/django-bird-colony.svg
.. _PythonVersions: https://pypi.python.org/pypi/django-bird-colony/
bird-colony is a Django application the Meliza Lab uses to manage its zebra
finch colony and keep breeding records. You may find it useful, even if you work
with non-avian species.
Features:
* Animals have globally unique identifiers and can optionally have colored and numbered leg bands (one band per animal). This means they keep their identities even if you have to reband them.
* Record events over the lifespan of each animal, from egg to grave. Events can be linked to locations, so you can find where an animal is or was on a certain date. Events can also be linked to measurements like weight so you can track these over time or use them to make breeding decisions.
* Track pairings, pedigrees, and breeding success statistics. You can export a complete pedigree for all living birds in the colony and compute relatedness using external software like the R `pedigree <https://www.rdocumentation.org/packages/pedigree/versions/1.4.2>`_ package.
* Useful forms for entering data, including periodic nest checks. Easy to track when eggs are laid and hatch and associate them with the correct parents.
* Associate biological samples with animals and track their physical location.
You’ll need to have a basic understanding of how to use
`Django <https://www.djangoproject.com/>`__. ``bird-colony`` is licensed
for you to use under the BSD License. See COPYING for details.
Quick start
~~~~~~~~~~~
1. Requires Python 3.10+. Runs on Django 4.2 LTS and 5.2 LTS.
2. Install the package using pip: ``pip install django-bird-colony``.
3. Add ``birds`` and some dependencies to your INSTALLED_APPS setting
like this:
.. code:: python

   INSTALLED_APPS = (
       ...
       'widget_tweaks',  # For form tweaking
       'rest_framework',
       'django_filters',
       'fullurl',
       'birds',
   )
4. Include ``birds`` in ``urlpatterns`` in your project ``urls.py``. Some of
   the views link to the admin interface, so make sure that is included,
   too:

   .. code:: python

      urlpatterns = [
          path("birds/", include("birds.urls")),
          path("admin/", admin.site.urls),
      ]

5. Run ``python manage.py migrate`` to create the database tables. If
   this is a new Django install, run
   ``python manage.py createsuperuser`` to create your admin user.
6. Run ``python manage.py loaddata bird_colony_starter_kit`` to create
   some useful initial records.
7. Start the development server (``python manage.py runserver``) and
   visit http://127.0.0.1:8000/admin/birds/ to set up your colony, as
   described in the next section.
8. Visit http://127.0.0.1:8000/birds/ to use the views.
Make sure to consult the Django documentation on deployment if you are
at all concerned about security.
Initial setup
~~~~~~~~~~~~~
This is a work in progress. Before you start entering birds and events,
you need to set up some tables using the Django admin app.
Required steps:
^^^^^^^^^^^^^^^
1. Edit species records in the ``Species`` table. The
``bird_colony_starter_kit`` fixture will create a record for zebra
finches. The ``code`` field is used to give animals their names, so
if you have zebra finches and use ``zebf`` as your code, your birds
will be named ``zebf_red_1`` and so forth.
2. Edit and add locations to the ``Locations`` table. You need to have
at least one location created. The main use for this field is to
allow you to find where a bird is by looking at the last event.
3. Edit and create new event types in the ``Status codes`` table. Common event
types include ``laid``, ``hatched``, ``added``, ``moved``, ``died``, ``used for
anatomy``, etc. For each status code, indicate whether it adds or removes a
bird from the colony. Addition event types include eggs being laid, births,
and transfers, which are handled differently. Removal event types include
expected and unexpected deaths. Unexpected deaths are important to tabulate
when looking at family history and breeding success. When you create an event
that removes a bird, it will appear as no longer alive. The ``hatched`` event
is special, because if you add a bird to the database using the ``Add new
bird`` view using this code, the system will require you to enter the bird’s
parents. (If you don’t know the bird’s parents, you can always create it
manually in the admin interface)
Optional steps:
^^^^^^^^^^^^^^^
1. If your bands are colored, add your colors to the ``Colors`` table.
This will affect the short name for your animals.
2. If you’re going to be adding samples to the database, add or edit
``Sample locations`` and ``Sample types`` in the admin interface.
3. Add additional users to the database. This is particularly useful if
you want to allow specific users to reserve animals.
4. If you want to change some of the boilerplate text on the entry
forms, you’ll need to install the app from source. The templates are
found under ``birds/templates/birds`` in the source directory.
Development
~~~~~~~~~~~
We recommend using `uv <https://docs.astral.sh/uv/>`__ for development.
Run ``uv sync`` to create a virtual environment and install
dependencies. ``uv sync --no-dev --frozen`` for deployment.
Testing: ``uv run pytest``. This requires a test database and uses settings
from ``inventory/test/settings.py``.
Changelog
~~~~~~~~~
In the 0.4.0 release, the primary key for animal records became the
animal’s uuid. To migrate from a previous version, data must be exported
as JSON under the 0.3.999 release and then imported under 0.4.0.
| text/x-rst | null | C Daniel Meliza <dan@meliza.org> | null | C Daniel Meliza <dan@meliza.org> | BSD 3-Clause License | null | [
"Development Status :: 4 - Beta",
"Framework :: Django",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"django-filter>=22.1",
"django-fullurl",
"django-widget-tweaks>=1.5.0",
"django>=4.2.17",
"djangorestframework",
"djangorestframework-link-header-pagination",
"numpy>=1.26"
] | [] | [] | [] | [
"Homepage, https://github.com/melizalab/django-bird-colony"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:12:22.764353 | django_bird_colony-0.14.1.tar.gz | 80,171 | de/62/0e8a5343e05251a07eec767518759bc382cd0241bc5c1dc9004dc2070373/django_bird_colony-0.14.1.tar.gz | source | sdist | null | false | 23393e03272674d714b6e3db2896a5f8 | bcb03fea84910c1c91a77b9b08877b45eede7dbed54c286b3fd0ab4f3e015925 | de620e8a5343e05251a07eec767518759bc382cd0241bc5c1dc9004dc2070373 | null | [
"COPYING"
] | 277 |
2.4 | gdelt-client | 0.2.1 | A client for the GDELT 2.0 API | # GDELT 2.0 API Client
A Python client to fetch data from the [GDELT 2.0 API](https://gdeltproject.org/).
This client supports both the DOC API for article search and timelines, as well as direct access to GDELT's raw event data files (events, mentions, and GKG). This allows for simpler, small-scale analysis of news coverage and events data without having to deal with the complexities of downloading and managing the raw files from S3, or working with the BigQuery export.
The implementation has been forked from [gdeltdoc](https://github.com/alex9smith/gdelt-doc-api).
## Installation
`gdelt-client` is on [PyPi](https://pypi.org/project/gdelt-client/) and is installed through pip:
```bash
pip install gdelt-client
```
## Use
### DOC API - Article Search & Timelines
Search for news articles and get timeline data via the GDELT DOC API.
```python
from gdelt_client import GdeltClient, Filters
f = Filters(
keyword="climate change",
start_date="2020-05-10",
end_date="2020-05-11"
)
gd = GdeltClient()
# Search for articles matching the filters
articles = gd.article_search(f)
# Get a timeline of coverage volume
timeline = gd.timeline_search("timelinevol", f)
```
**Async example:**
```python
import asyncio
from gdelt_client import GdeltClient, Filters
async def main():
f = Filters(keyword="climate change", start_date="2020-05-10", end_date="2020-05-11")
# Use async context manager to properly cleanup resources
async with GdeltClient() as gd:
# Async article search
articles = await gd.aarticle_search(f)
# Async timeline search
timeline = await gd.atimeline_search("timelinevol", f)
asyncio.run(main())
```
### Raw Data Downloads - Events, Mentions & GKG
Download and parse GDELT's raw data files directly. Returns data with CAMEO code descriptions for events.
```python
from gdelt_client import GdeltClient, GdeltTable, OutputFormat
gd = GdeltClient()
# Download events for a single date
events = gd.search(
date="2020-05-10",
table=GdeltTable.EVENTS,
output=OutputFormat.DATAFRAME
)
# Download mentions for a date range with full 15-min coverage
mentions = gd.search(
date=["2020-05-10", "2020-05-11"],
table=GdeltTable.MENTIONS,
coverage=True # Download all 15-minute intervals
)
# Get GeoDataFrame with geometry for mapping
geo_events = gd.search(
date="2020-05-10",
table=GdeltTable.EVENTS,
output=OutputFormat.GEODATAFRAME
)
# View table schema
schema = gd.schema(GdeltTable.EVENTS)
```
**Async example** (downloads files concurrently for better performance):
```python
import asyncio
from gdelt_client import GdeltClient, GdeltTable
async def main():
# Use async context manager to properly cleanup resources
async with GdeltClient() as gd:
# Async search with concurrent file downloads
events = await gd.asearch(
date=["2020-05-10", "2020-05-11"],
table=GdeltTable.EVENTS,
coverage=True
)
print(events[:5])
print(f"Total records {len(events)}")
asyncio.run(main())
```
**Available tables:** `EVENTS`, `MENTIONS`, `GKG`
**Available output formats:** `DATAFRAME`, `JSON`, `CSV`, `GEODATAFRAME`
### Article List
The `article_search()` method (and async `aarticle_search()`) generates a list of news articles that match the filters. Returns a pandas DataFrame with columns: `url`, `url_mobile`, `title`, `seendate`, `socialimage`, `domain`, `language`, `sourcecountry`.
### Timeline Search
The `timeline_search()` method (and async `atimeline_search()`) supports 5 modes:
- `timelinevol` - Timeline of coverage volume as a percentage of all monitored articles
- `timelinevolraw` - Timeline with actual article counts instead of percentages
- `timelinelang` - Coverage broken down by language (each language as a column)
- `timelinesourcecountry` - Coverage broken down by source country (each country as a column)
- `timelinetone` - Average tone of articles over time (see [GDELT docs](https://blog.gdeltproject.org/gdelt-doc-2-0-api-debuts/) for tone metric details)
All modes return a pandas DataFrame with a `datetime` column and data columns.
### Filters
The search query passed to the API is constructed from a `gdelt_client.Filters` object.
```python
from gdelt_client import Filters, near, repeat
f = Filters(
start_date = "2020-05-01",
end_date = "2020-05-02",
num_records = 250,
keyword = "climate change",
domain = ["bbc.co.uk", "nytimes.com"],
country = ["UK", "US"],
theme = "GENERAL_HEALTH",
near = near(10, "airline", "carbon"),
repeat = repeat(5, "planet")
)
```
Filters for `keyword`, `domain`, `domain_exact`, `country`, `language` and `theme` can be passed either as a single string or as a list of strings. If a list is passed, the values in the list are wrapped in a boolean OR.
You must pass either `start_date` and `end_date`, or `timespan`:
- `start_date` - The start date for the filter in YYYY-MM-DD format or as a datetime object in UTC time.
Passing a datetime allows you to specify a time down to seconds granularity. The API officially only supports the most recent 3 months of articles. Making a request for an earlier date range may still return data, but it's not guaranteed.
- `end_date` - The end date for the filter in YYYY-MM-DD format or as a datetime object in UTC time.
- `timespan` - A timespan to search for, relative to the time of the request. Must match one of the API's timespan formats - https://blog.gdeltproject.org/gdelt-doc-2-0-api-debuts/
- `num_records` - The number of records to return. Only used in article list mode and can be up to 250.
- `keyword` - Return articles containing the exact phrase `keyword` within the article text.
- `domain` - Return articles from the specified domain. Does not require an exact match so passing "cnn.com" will match articles from `cnn.com`, `subdomain.cnn.com` and `notactuallycnn.com`.
- `domain_exact` - Similar to `domain`, but requires an exact match.
- `country` - Return articles published in a country or list of countries, formatted as the FIPS 2 letter country code.
- `language` - Return articles published in the given language, formatted as the ISO 639 language code.
- `theme` - Return articles that cover one of GDELT's GKG Themes. A full list of themes can be found [here](http://data.gdeltproject.org/api/v2/guides/LOOKUP-GKGTHEMES.TXT).
- `near` - Return articles containing words close to each other in the text. Use `near()` to construct, e.g. `near = near(5, "airline", "climate")`, or `multi_near()` to combine multiple restrictions, e.g. `multi_near([(5, "airline", "crisis"), (10, "airline", "climate", "change")], method="AND")` finds "airline" and "crisis" within 5 words, and "airline", "climate", and "change" within 10 words.
- `repeat` - Return articles containing a single word repeated at least a number of times. Use `repeat()` to construct, e.g. `repeat = repeat(3, "environment")`, or `multi_repeat()` to combine multiple restrictions, e.g. `repeat = multi_repeat([(2, "airline"), (3, "airport")], "AND")`.
- `tone` - Return articles above or below a particular tone score (i.e. more positive or more negative than a certain threshold). To use, specify either a greater-than or less-than sign and a positive or negative number (either an integer or a floating point number). To find fairly positive articles, use `tone=">5"`; to search for fairly negative articles, use `tone="<-5"`.
- `tone_absolute` - The same as `tone` but ignores the positive/negative sign, letting you search for high-emotion or low-emotion articles regardless of whether they were happy or sad in tone.
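To illustrate what the `near` and `repeat` helpers encode, here is a hedged re-implementation sketch (not the package's actual source) of the GDELT DOC API proximity and repetition query operators they construct:

```python
def near(distance: int, *words: str) -> str:
    """Build a GDELT DOC API proximity operator, e.g. near10:"airline carbon"."""
    if len(words) < 2:
        raise ValueError("near() requires at least two words")
    return f'near{distance}:"{" ".join(words)}"'

def repeat(count: int, word: str) -> str:
    """Build a GDELT DOC API repetition operator, e.g. repeat5:"planet"."""
    if " " in word:
        raise ValueError("repeat() takes a single word")
    return f'repeat{count}:"{word}"'

print(near(10, "airline", "carbon"))  # near10:"airline carbon"
print(repeat(5, "planet"))            # repeat5:"planet"
```

The real helpers in `gdelt_client` may validate inputs differently, but the operator syntax shown is the one documented for the GDELT DOC 2.0 API.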
## Attribution
The JSON schema data files in this package (`src/gdelt_client/data/schemas/`) are based on schemas from [gdeltPyR](https://github.com/linwoodc3/gdeltPyR), which is licensed under the GNU General Public License v3.0.
## Developing gdelt-client
PRs & issues are very welcome!
### Setup
It's recommended to use a virtual environment for development. Set one up with [uv](https://docs.astral.sh/uv/getting-started/installation/)
```bash
uv sync
```
Tests for this package use `pytest`. Run them with
```bash
uv run pytest tests --cov=src/gdelt_client --cov-report=xml --cov-report=term-missing
```
If your PR adds a new feature or helper, please also add some tests.
### Publishing
There's a bit of automation set up to help publish a new version of the package to PyPI:
1. Make sure the version string has been updated since the last release. This package follows semantic versioning.
2. Create a new release in the Github UI, using the new version as the release name
3. Watch as the `publish.yml` Github action builds the package and pushes it to PyPI
| text/markdown | null | Bob Merkus <bob.merkus@gmail.com> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiohttp>=3.13.3",
"geopandas>=1.1.2",
"pandas>=3.0.0",
"requests>=2.32.5",
"tenacity>=9.1.3"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T23:11:26.401684 | gdelt_client-0.2.1-py3-none-any.whl | 38,417 | b3/ca/8670c8d9f515a022af34f865ff9ab1b7509acb90774b7682370a1901d5ca/gdelt_client-0.2.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 27c4a22e0a71430bc6d3b642d3c90703 | 46153ca77492feb2cc54579e9468d1f5f6ffcb9ae8d25536f20a636dd6ba3f2c | b3ca8670c8d9f515a022af34f865ff9ab1b7509acb90774b7682370a1901d5ca | null | [
"LICENSE"
] | 291 |
2.4 | prosemirror | 0.6.0 | Python implementation of core ProseMirror modules for collaborative editing | # prosemirror-py
[](https://github.com/fellowapp/prosemirror-py/actions/workflows/test.yml)
[](https://codecov.io/gh/fellowapp/prosemirror-py)
[](https://pypi.org/project/prosemirror/)
[](https://github.com/fellowapp/prosemirror-py/blob/master/LICENSE.md)
[](https://fellow.app/careers/)
This package provides Python implementations of the following
[ProseMirror](https://prosemirror.net/) packages:
- [`prosemirror-model`](https://github.com/ProseMirror/prosemirror-model) version 1.25.4
- [`prosemirror-transform`](https://github.com/ProseMirror/prosemirror-transform) version 1.11.0
- [`prosemirror-test-builder`](https://github.com/ProseMirror/prosemirror-test-builder) version 1.1.1
- [`prosemirror-schema-basic`](https://github.com/ProseMirror/prosemirror-schema-basic) version 1.2.4
- [`prosemirror-schema-list`](https://github.com/ProseMirror/prosemirror-schema-list) version 1.5.1 (node specs and `wrapRangeInList` only; command functions that depend on `prosemirror-state` are excluded)
The original implementation has been followed as closely as possible during
translation to simplify keeping this package up-to-date with any upstream
changes.
## Why?
ProseMirror provides a powerful toolkit for building rich-text editors, but it's
JavaScript-only. Until now, the only option for manipulating and working with
ProseMirror documents from Python was to embed a JS runtime. With this
translation, you can now define schemas, parse documents, and apply transforms
directly via a native Python API.
## Status
The full ProseMirror test suite has been translated and passes. This project
only supports Python 3. The code has type annotations to support mypy or other
typechecking tools.
## Usage
Since this library is a direct port, the best place to learn how to use it is
the [official ProseMirror documentation](https://prosemirror.net/docs/guide/).
Here is a simple example using the included "basic" schema:
```python
from prosemirror.transform import Transform
from prosemirror.schema.basic import schema
# Create a document containing a single paragraph with the text "Hello, world!"
doc = schema.node(
"doc", {}, [schema.node("paragraph", {}, [schema.text("Hello, world!")])]
)
# Create a Transform which will be applied to the document.
tr = Transform(doc)
# Delete the text from position 3 to 5. Adds a ReplaceStep to the transform.
tr.delete(3, 5)
# Make the first three characters bold. Adds an AddMarkStep to the transform.
tr.add_mark(1, 4, schema.mark("strong"))
# This transform can be converted to JSON to be sent and applied elsewhere.
assert [step.to_json() for step in tr.steps] == [
{"stepType": "replace", "from": 3, "to": 5},
{"stepType": "addMark", "mark": {"type": "strong"}, "from": 1, "to": 4},
]
# The resulting document can also be converted to JSON.
assert tr.doc.to_json() == {
"type": "doc",
"content": [
{
"type": "paragraph",
"content": [
{"type": "text", "marks": [{"type": "strong"}], "text": "Heo"},
{"type": "text", "text": ", world!"},
],
}
],
}
```
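A quick way to see how the step positions relate to the text: in a single-paragraph document, position 0 opens the paragraph, so document position `n` corresponds to character offset `n - 1` in the paragraph text. A library-free sketch of the delete step above:

```python
# In a one-paragraph document, position 0 opens the paragraph,
# so document position n maps to text offset n - 1.
def apply_replace_delete(text, from_pos, to_pos):
    # Mirrors {"stepType": "replace", "from": 3, "to": 5} with no replacement slice.
    return text[: from_pos - 1] + text[to_pos - 1 :]

# Deleting positions 3..5 removes "ll", matching the document JSON above.
assert apply_replace_delete("Hello, world!", 3, 5) == "Heo, world!"
```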
## AI Disclosure
The initial version of this translation was written manually in 2019. AI is now
used to help keep this translation up-to-date with upstream changes.
| text/markdown | null | Samuel Cormier-Iijima <sam@fellow.co>, Shen Li <dustet@gmail.com> | null | null | BSD-3-Clause | collaborative, editing, prosemirror | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Lang... | [] | null | null | >=3.10 | [] | [] | [] | [
"cssselect>=1.2",
"lxml>=4.9",
"typing-extensions>=4.1"
] | [] | [] | [] | [
"Homepage, https://github.com/fellowapp/prosemirror-py",
"Repository, https://github.com/fellowapp/prosemirror-py",
"Changelog, https://github.com/fellowapp/prosemirror-py/releases"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T23:11:19.499669 | prosemirror-0.6.0.tar.gz | 73,349 | e0/60/3c30e8316b791cb219be38002a3e1012b050ec9312d61e326b5994381fa1/prosemirror-0.6.0.tar.gz | source | sdist | null | false | 75b0ee884399298e823340384ed46fd8 | 368cad1800e15f09ff22b11472c0a1ded5209a4f2576c82900e142c75c2242ca | e0603c30e8316b791cb219be38002a3e1012b050ec9312d61e326b5994381fa1 | null | [
"LICENSE.md"
] | 368 |
2.4 | miruvor | 0.1.1 | Python SDK for Zyra SNN Memory Database - Brain-inspired memory for AI agents | # Miruvor
Python SDK for Zyra SNN Memory Database - Brain-inspired memory for AI agents.
## Features
- **Simple API**: Three core methods - `store()`, `ingest()`, `retrieve()`
- **Pattern Completion**: O(L) retrieval via pointer layer - no training required
- **Sync & Async**: Both `MiruvorClient` and `AsyncMiruvorClient`
- **Type Safe**: Full Pydantic v2 models with validation
- **Production Ready**: Automatic retries, timeouts, logging, error handling
- **Batch Operations**: Efficient batch store with concurrency control
- **Health Checks**: Built-in health monitoring
## Installation
```bash
pip install miruvor
```
## Quick Start
### Sync Usage
```python
from miruvor import MiruvorClient
# Defaults to production. Set MIRUVOR_BASE_URL for dev (e.g. http://localhost:8000).
# Set MIRUVOR_API_KEY in env to skip passing api_key.
client = MiruvorClient(api_key="your-api-key", token="your-jwt-token") # token optional
# Store a memory
response = client.store(
text="The user prefers dark mode and likes Python",
tags=["preferences", "user"],
metadata={"source": "settings"}
)
print(f"Stored: {response.memory_id} in {response.storage_time_ms:.2f}ms")
# Retrieve memories
results = client.retrieve(query="What does the user prefer?", top_k=5)
for memory in results.results:
print(f"[{memory.score:.3f}] {memory.data['text']}")
# Ingest with LLM enhancement
ingest_response = client.ingest(content="Long document...", priority="high")
print(f"Queued: {ingest_response.message_id}")
client.close()
```
### Async Usage
```python
import asyncio
from miruvor import AsyncMiruvorClient
async def main():
async with AsyncMiruvorClient(api_key="your-api-key", token="your-jwt-token") as client:
response = await client.store(text="User loves async/await", tags=["preferences"])
results = await client.retrieve(query="What does user love?")
print(f"Found {results.num_results} memories")
asyncio.run(main())
```
### Batch Operations
```python
import asyncio
from miruvor import AsyncMiruvorClient
async def batch_example():
async with AsyncMiruvorClient(api_key="...", token="...") as client:
memories = [
{"text": "Memory 1", "tags": ["tag1"]},
{"text": "Memory 2", "tags": ["tag2"]},
{"text": "Memory 3", "tags": ["tag3"]},
]
responses = await client.store_batch(memories, max_concurrent=10)
print(f"Stored {len(responses)} memories")
asyncio.run(batch_example())
```
## API Reference
### MiruvorClient / AsyncMiruvorClient
**Constructor:** `MiruvorClient(api_key, base_url=None, token=None, timeout=30.0, max_retries=3)` — `base_url` defaults to production; override with `MIRUVOR_BASE_URL` env.
**Methods:**
- `health()` - Check API health status
- `store(text, tags=None, metadata=None)` - Store a memory
- `store_batch(memories, max_concurrent=10)` - Store multiple memories (async has concurrency)
- `ingest(content, priority="normal", ...)` - Ingest with LLM enhancement
- `retrieve(query, top_k=5, use_sparse=None)` - Retrieve via pattern completion
## Error Handling
```python
from miruvor import MiruvorClient, AuthenticationError, RateLimitError, ValidationError
try:
response = client.store(text="Hello world")
except AuthenticationError as e:
print(f"Auth failed: {e.message}")
except RateLimitError as e:
print(f"Retry after {e.retry_after} seconds")
except ValidationError as e:
print(f"Validation error: {e.message}")
```
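If you need to act on `RateLimitError` yourself (beyond the client's built-in retries), its `retry_after` field supports a simple backoff loop. A stdlib-only sketch with a stand-in exception and store function — the names here are illustrative, not part of the SDK:

```python
import time

class RateLimitError(Exception):
    """Stand-in for miruvor's RateLimitError, which carries retry_after."""
    def __init__(self, retry_after):
        self.retry_after = retry_after

def store_with_retry(store, text, max_attempts=3):
    # Retry on rate limiting, sleeping for the server-suggested interval.
    for attempt in range(max_attempts):
        try:
            return store(text)
        except RateLimitError as e:
            if attempt == max_attempts - 1:
                raise
            time.sleep(e.retry_after)

# Simulate a backend that rate-limits the first call, then succeeds.
calls = []
def flaky_store(text):
    calls.append(text)
    if len(calls) == 1:
        raise RateLimitError(retry_after=0)
    return "mem-123"

assert store_with_retry(flaky_store, "Hello world") == "mem-123"
```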
## Examples
See the `examples/` directory:
- `quickstart.py` - Basic sync usage
- `async_example.py` - Async operations
- `batch_ingestion.py` - Batch operations with concurrency
## Development
```bash
pip install -e ".[dev]"
pytest
black src/ tests/
ruff check src/ tests/
mypy src/
```
## Requirements
- Python 3.9+
- requests >= 2.31.0
- httpx >= 0.27.0
- pydantic >= 2.0.0
- urllib3 >= 2.0.0
## License
MIT License - see LICENSE file for details.
| text/markdown | null | Arvind <arvind@miruvor.ai> | null | null | MIT | agents, ai, brain-inspired, memory, neuromorphic, snn | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Database",
"... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.27.0",
"pydantic>=2.0.0",
"requests>=2.31.0",
"urllib3>=2.0.0",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/miruvor",
"Documentation, https://miruvor.readthedocs.io",
"Repository, https://github.com/yourusername/miruvor"
] | twine/6.2.0 CPython/3.10.11 | 2026-02-18T23:11:07.837986 | miruvor-0.1.1.tar.gz | 8,120,572 | 20/2c/7f68e08f5f0e34c85d425f26989d173e07410e561821921a8cd4d1062f92/miruvor-0.1.1.tar.gz | source | sdist | null | false | c141861ac68a2eea45c761782ff17990 | f02c8d14403b0c2cfd24424717b405c6e905a7ff2024f6f4f88bfe26b094102a | 202c7f68e08f5f0e34c85d425f26989d173e07410e561821921a8cd4d1062f92 | null | [] | 259 |
2.4 | npc-ephys | 0.1.44 | Tools for accessing and processing raw ephys data, compatible with data in the cloud. | # npc_ephys
Tools for accessing and processing raw ephys data, compatible with data in the cloud.
[](https://pypi.org/project/npc_ephys/)
[](https://pypi.org/project/npc_ephys/)
[](https://app.codecov.io/github/AllenInstitute/npc_ephys)
[](https://github.com/AllenInstitute/npc_ephys/actions/workflows/publish.yml)
[](https://github.com/AllenInstitute/npc_ephys/issues)
# Usage
```bash
conda create -n npc_ephys "python>=3.9"
conda activate npc_ephys
pip install npc_ephys
```
## Windows
[`wavpack-numcodecs`](https://github.com/AllenNeuralDynamics/wavpack-numcodecs)
is used to read compressed ephys data from S3 (stored in Zarr format). On Windows, that requires C++
build tools to be installed: if `pip install npc_ephys` fails, you'll likely need to install the build tools [from here](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2022).
## Python
```python
>>> import npc_ephys
# get device timing on sync clock using barcodes:
>>> recording_path = 's3://aind-ephys-data/ecephys_670248_2023-08-03_12-04-15/ecephys_clipped/Record Node 102/experiment1/recording1'
>>> sync_path = 's3://aind-ephys-data/ecephys_670248_2023-08-03_12-04-15/behavior/20230803T120415.h5'
>>> timing_data = next(npc_ephys.get_ephys_timing_on_sync(sync_path, recording_path))
>>> timing_data.device.name, timing_data.sampling_rate, timing_data.start_time
('Neuropix-PXI-100.ProbeA-AP', 30000.070518634246, 20.080209634424037)
# get a dataclass that reads SpikeInterface sorted data from the cloud
# - from a path:
>>> si = npc_ephys.get_spikeinterface_data('s3://codeocean-s3datasetsbucket-1u41qdg42ur9/4797cab2-9ea2-4747-8d15-5ba064837c1c')
# - or from a subject ID + date + session-index-on-date (separators are optional):
>>> si = npc_ephys.get_spikeinterface_data('670248_2023-08-03_0')
>>> si
SpikeInterfaceKS25Data(session='670248_2023-08-03_0', root=S3Path('s3://codeocean-s3datasetsbucket-1u41qdg42ur9/4797cab2-9ea2-4747-8d15-5ba064837c1c'))
# various bits of data are available for use:
>>> si.version
'0.97.1'
>>> ''.join(si.probes)
'ABCEF'
>>> si.quality_metrics_df('probeA').columns
Index(['num_spikes', 'firing_rate', 'presence_ratio', 'snr',
'isi_violations_ratio', 'isi_violations_count', 'rp_contamination',
'rp_violations', 'sliding_rp_violation', 'amplitude_cutoff',
'drift_ptp', 'drift_std', 'drift_mad', 'isolation_distance', 'l_ratio',
'd_prime'],
dtype='object')
>>> si.spike_indexes('probeA')
array([ 491, 738, 835, ..., 143124925, 143125165, 143125201])
>>> si.unit_indexes('probeA')
array([ 56, 61, 161, ..., 151, 72, 59])
```
# Development
See instructions in https://github.com/AllenInstitute/npc_ephys/CONTRIBUTING.md and the original template: https://github.com/AllenInstitute/copier-pdm-npc/blob/main/README.md
| text/markdown | null | Ben Hardcastle <ben.hardcastle@alleninstitue.org> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"License :: OSI Approved :: MIT License",
"Operating System :: Microsoft :: Windows",
"Operating System... | [] | null | null | >=3.9 | [] | [] | [] | [
"npc-sync>=0.1.12",
"zarr",
"pandas>=2.2.0",
"tqdm>=4.66.1",
"npc-lims>=0.1.73",
"wavpack-numcodecs!=0.2.0,>=0.1.5",
"npc-io>=0.1.26",
"polars>=0.20.26",
"pyarrow>=14.0",
"pynwb>=2.8.0; extra == \"nwb\"",
"hdmf-zarr-bjh==0.8.1.post1; extra == \"nwb\"",
"zarr<2.18.1; extra == \"nwb\""
] | [] | [] | [] | [
"Repository, https://github.com/AllenInstitute/npc_ephys",
"Issues, https://github.com/AllenInstitute/npc_ephys/issues"
] | pdm/2.26.6 CPython/3.11.14 Linux/6.14.0-1017-azure | 2026-02-18T23:10:55.225813 | npc_ephys-0.1.44.tar.gz | 33,887 | 1f/be/d91d28c21121869e69d9dc543fc1c26dc6528dcb6e12cc518c72b5be513b/npc_ephys-0.1.44.tar.gz | source | sdist | null | false | 4b67194345a929934e4823923daf455e | dea36ec731e9ef3750c3270f6b9495b4b4dbc3afe54535b6073d906e360595bd | 1fbed91d28c21121869e69d9dc543fc1c26dc6528dcb6e12cc518c72b5be513b | null | [] | 291 |
2.4 | alnPairDist | 1.0.3 | alnPairDist: A tool for calculating pairwise similarity of taxa in a multiple sequence alignment | # alnPairDist: A tool for calculating pairwise similarity of taxa in a multiple sequence alignment
## Getting the alnPairDist source code
```
git clone https://github.com/ChrispinChaguza/alnPairDist.git
```
## Setup alnPairDist software on a local machine
### Installing alnPairDist using Pip
The easiest way to install the latest version of alnPairDist is using Pip
```
pip install alnPairDist
```
Here is a command to install a specific version of alnPairDist using Pip
```
pip install alnPairDist=={INSERT VERSION HERE}
```
### Installing alnPairDist using Conda
Installation using Conda (upcoming!).
```
conda install -c conda-forge alnPairDist
```
```
conda install -c bioconda alnPairDist
```
### Installing alnPairDist directly from GitHub
First, download alnPairDist from GitHub and then manually set up the environment for the package
```
git clone https://github.com/ChrispinChaguza/alnPairDist.git
cd alnPairDist
```
Second, manually install the required package dependencies
```
conda install -c conda-forge python=3.14.2 -y
conda install -c conda-forge biopython=1.86 -y
```
```
pip install build
```
Follow the instructions below to build and install alnPairDist
```
python -m build
pip install --force-reinstall dist/{INSERT THE COMPILED SOFTWARE VERSION}
```
## Basic usage
The simplest way to run alnPairDist is to provide a multiple sequence alignment in FASTA format
```
alnPairDist --aln input.fasta --out report.tsv
```
```
alnPairDist -a input.fasta -o report.tsv
```
Specify the `--threads` or `-t` option to use more threads
```
alnPairDist --aln input.fasta --out report.tsv --threads 10
```
```
alnPairDist -a input.fasta -o report.tsv -t 10
```
To suppress the output on the terminal
```
alnPairDist --aln input.fasta --out report.tsv --threads 10 --quiet
```
```
alnPairDist --aln input.fasta --out report.tsv --threads 10 -q
```
### Example dataset (Rotavirus A)
Calculating pairwise distances between sequences using the *example.aln* alignment file in the *example* directory
```
alnPairDist --aln example.aln --out report.tsv --threads 10
```
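Conceptually, the tool compares aligned columns for each pair of taxa. A minimal stdlib-only illustration of pairwise identity over an alignment — alnPairDist's exact gap handling and distance definition may differ:

```python
from itertools import combinations

def pairwise_identity(seq_a, seq_b, gap="-"):
    # Compare aligned columns, skipping positions where either sequence has a gap.
    compared = matches = 0
    for a, b in zip(seq_a, seq_b):
        if a == gap or b == gap:
            continue
        compared += 1
        matches += a == b
    return matches / compared if compared else 0.0

# Toy alignment with hypothetical taxon names.
alignment = {"taxon1": "ATG-CGT", "taxon2": "ATGACGA", "taxon3": "TTG-CGT"}
for (n1, s1), (n2, s2) in combinations(alignment.items(), 2):
    print(n1, n2, round(pairwise_identity(s1, s2), 3))
```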
### Software version
Run the command below to show the software version
```
alnPairDist --version
```
```
alnPairDist -v
```
## Cite
Chrispin Chaguza, alnPairDist, https://github.com/ChrispinChaguza/alnPairDist.git
| text/markdown | null | Chrispin Chaguza <chrispin.chaguza@gmail.com> | null | Chrispin Chaguza <chrispin.chaguza@gmail.com> | MIT License
Copyright (c) 2026 Chrispin Chaguza
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| Sequence alignment, Pairwise distance, Nucleotides | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta",
"Environment :: Console",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"Intended Audience :: Science/Research"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"setuptools",
"toml",
"biopython"
] | [] | [] | [] | [
"Homepage, https://github.com/ChrispinChaguza/alnPairDist/",
"Repository, https://github.com/ChrispinChaguza/alnPairDist.git",
"Issues, https://github.com/ChrispinChaguza/alnPairDist/issues",
"Documentation, https://github.com/ChrispinChaguza/alnPairDist/",
"Website, https://github.com/ChrispinChaguza/alnPa... | twine/6.2.0 CPython/3.14.2 | 2026-02-18T23:10:05.662536 | alnpairdist-1.0.3.tar.gz | 4,786 | e1/0f/c60f0d95f52c1947c2c72bcc88e4279761645da3f0a38c6d3f701ae6f978/alnpairdist-1.0.3.tar.gz | source | sdist | null | false | de99be6f826b574b8773de6ba795ba4b | f320111f51ff761d2a9a4dcc1847df3dc8892cbf63de2a9d121c888998a24a9d | e10fc60f0d95f52c1947c2c72bcc88e4279761645da3f0a38c6d3f701ae6f978 | null | [
"LICENSE"
] | 0 |
2.2 | rfmux | 1.4.1 | Python library for t0.technology CRS board running rfmux firmware | # rfmux
[rfmux](https://github.com/t0/rfmux) is the Python API for
[t0.technology](https://t0.technology)'s Control and Readout System (CRS), a
hardware platform designed for operating large arrays of Kinetic Inductance
Detectors (KIDs) used in radio astronomy and quantum sensing applications.
The CRS + MKIDs firmware is described in
[this conference proceedings](https://arxiv.org/abs/2406.16266).
## Quick Start
### Installation
**New in 2025:** rfmux is now available on PyPI. We recommend using [uv](https://github.com/astral-sh/uv) for installation:
```bash
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh  # Linux/macOS or WSL
# Or: powershell -c "irm https://astral.sh/uv/install.ps1 | iex"  # Windows

# Create virtual environment
uv venv  # or: uv venv my-env-name
source .venv/bin/activate  # On Windows: .venv/Scripts/activate
# or: source my-env-name/bin/activate

# Install rfmux
uv pip install rfmux
```
**Note:** rfmux now uses a C++ extension for packet processing. PyPI hosts
pre-built binaries (wheels) for common platforms (Linux x86_64, macOS,
Windows). If wheels aren't available for your platform, you'll need a C++
compiler. See [Installation Guide](docs/installation.md) for details.
### Interactive GUI
To launch the Periscope GUI, run:
```bash
uv run periscope  # or simply: periscope
```
https://github.com/user-attachments/assets/581d4ff8-5ea2-493a-9c9c-c93d6ca847e2
### Scripting with Mock Mode
If you do not have a CRS board (or cryogenic detectors) handy, you can use
"mock" mode for a software emulation:
```python
import rfmux

# Emulate CRS hardware for offline development
s = rfmux.load_session("""
!HardwareMap
- !flavour "rfmux.mock"
- !CRS { serial: "MOCK0001" }
""")
```
### Scripting with CRS Hardware
To control a single network-attached CRS from your PC's Python prompt, use:
```python
import rfmux
# Connect to a CRS board
s = rfmux.load_session('!HardwareMap [ !CRS { serial: "0033" } ]')
crs = s.query(rfmux.CRS).one()
await crs.resolve()
# Acquire samples
samples = await crs.get_samples(1000, channel=1, module=1)
```
## Documentation
- **[Installation Guide](docs/installation.md)** - Detailed installation for all platforms, building from source
- **[Getting Started](docs/guides/getting-started.md)** - Usage patterns, hardware hierarchy, common operations
- **[Networking Guide](docs/guides/networking.md)** - UDP tuning, multicast configuration, troubleshooting
- **[Firmware Guide](docs/guides/firmware.md)** - Fetching, managing, and flashing firmware
## Repository Structure
```
rfmux/
├── docs/ # Documentation
├── firmware/ # Firmware binaries (Git LFS)
├── home/ # Jupyter Hub content (demos, docs)
├── rfmux/ # Main Python package
│ ├── algorithms/ # Network analysis, fitting, biasing
│ ├── core/ # Hardware schema, sessions, mock infrastructure
│ ├── packets/ # C++ packet receiver library
│ ├── tools/ # Periscope GUI and other tools
│ └── tuber/ # RPC/remote-object communication
└── test/ # Test suite (unit, integration, QC)
```
## Contributing & Feedback
rfmux is permissively licensed; see LICENSE for details.
We actively encourage contributions and feedback. Understanding operator needs
is how we determine what to add to rfmux.
- **Pull Requests:** Your contributions are welcome
- **Issues:** Please submit tickets for bugs or enhancement suggestions
- **Collaborator Slack:** Join #crs-collaboration - email Joshua@t0.technology with your name, affiliation, and project
## Citation
When citing rfmux or CRS, please reference:
> CRS + MKIDs Conference Proceedings: https://arxiv.org/abs/2406.16266
| text/markdown | null | Graeme Smecher <gsmecher@t0.technology>, Joshua Montgomery <joshua@t0.technology> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"PyQt6",
"aiohttp",
"ipykernel<7",
"ipython",
"jupyterlab",
"matplotlib",
"numba",
"numpy",
"psutil",
"pyqtgraph",
"pytest",
"pytest-asyncio",
"pyyaml",
"qtconsole",
"requests_futures",
"scipy",
"simplejson",
"sqlalchemy",
"tbb; sys_platform != \"darwin\""
] | [] | [] | [] | [
"homepage, https://github.com/t0/rfmux",
"repository, https://github.com/t0/rfmux"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:09:06.371012 | rfmux-1.4.1.tar.gz | 439,947 | 01/c0/baab0b640056eb039e88526a3665e0ee748f7240ae48a621a0fad2c4964a/rfmux-1.4.1.tar.gz | source | sdist | null | false | 04e9c9a0ca21f581b02c557c14d11c1c | 06d6325f3126b05487401f00af3dc031f29a3865feef5f46c51fe3a5bd663f0a | 01c0baab0b640056eb039e88526a3665e0ee748f7240ae48a621a0fad2c4964a | null | [] | 2,246 |
2.4 | blazerpc | 1.0.0 | A lightweight, framework-agnostic gRPC library for machine learning inference | # BlazeRPC
A lightweight, framework-agnostic gRPC library for serving machine learning models in Python. BlazeRPC gives you a FastAPI-like developer experience -- decorate a function, start the server, and you have a production-ready gRPC inference endpoint.
## Why BlazeRPC?
Serving ML models over gRPC typically involves writing `.proto` files by hand, compiling them into Python stubs, and wiring up boilerplate servicers. BlazeRPC removes all of that. You write a plain Python function, add a decorator, and the library generates the protobuf schema, servicer, and server for you.
**Key features:**
- **Decorator-based API** -- Register models with `@app.model("name")`, just like route handlers in a web framework.
- **Automatic proto generation** -- BlazeRPC inspects your function's type annotations and produces a valid `.proto` file. No hand-written schemas.
- **Adaptive batching** -- Individual requests are automatically grouped into batches for GPU-efficient inference. Configurable batch size and timeout.
- **Server-side streaming** -- Return tokens one at a time with `streaming=True`, ideal for LLM inference and real-time pipelines.
- **Health checks and reflection** -- Built-in gRPC health checking protocol and server reflection, compatible with `grpcurl`, `grpcui`, and Kubernetes probes.
- **Framework integrations** -- Optional helpers for PyTorch, TensorFlow, and ONNX Runtime that handle tensor conversion automatically.
- **Prometheus metrics** -- Request counts, latencies, and batch sizes are exported out of the box.
## Installation
```bash
pip install blazerpc
```
With framework-specific extras:
```bash
pip install blazerpc[pytorch] # PyTorch tensor conversion helpers
pip install blazerpc[tensorflow] # TensorFlow tensor conversion helpers
pip install blazerpc[onnx] # ONNX Runtime model wrapper
pip install blazerpc[all] # All optional integrations
```
## Quick start
### 1. Define your models
Create a file called `app.py`:
```python
from blazerpc import BlazeApp
app = BlazeApp()
@app.model("sentiment")
def predict_sentiment(text: list[str]) -> list[float]:
# Replace with your real model inference
return [0.95] * len(text)
```
BlazeRPC reads the type annotations on your function to generate the gRPC request and response messages. Supported types include `str`, `int`, `float`, `bool`, `list[float]`, `list[str]`, and tensor types via `TensorInput` / `TensorOutput`.
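This annotation-driven codegen can be sketched with `typing` introspection. The following is a simplified stand-in for what the codegen layer does — field names and the exact mapping here are illustrative, and the real generator also handles tensor types and message naming:

```python
import typing

# Hypothetical scalar mapping; the real one lives in codegen/proto.py.
SCALAR_MAP = {str: "string", int: "int64", float: "float", bool: "bool"}

def proto_field(annotation, name, number):
    # list[T] becomes a repeated field; bare scalars map directly.
    if typing.get_origin(annotation) is list:
        (inner,) = typing.get_args(annotation)
        return f"repeated {SCALAR_MAP[inner]} {name} = {number};"
    return f"{SCALAR_MAP[annotation]} {name} = {number};"

def predict_sentiment(text: list[str]) -> list[float]: ...

hints = typing.get_type_hints(predict_sentiment)
print(proto_field(hints["text"], "text", 1))      # repeated string text = 1;
print(proto_field(hints["return"], "result", 1))  # repeated float result = 1;
```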
### 2. Start the server
```bash
blaze serve app:app
```
```
⚡ BlazeRPC server starting...
✓ Loaded model: sentiment v1
✓ Server listening on 0.0.0.0:50051
```
The server registers three services automatically:
| Service | Purpose |
| ------------------------------------------ | ---------------------- |
| `blazerpc.InferenceService` | Your model RPCs |
| `grpc.health.v1.Health` | Standard health checks |
| `grpc.reflection.v1alpha.ServerReflection` | Service discovery |
### 3. Export the `.proto` file
```bash
blaze proto app:app --output-dir ./proto_out
```
This writes a `blaze_service.proto` file that you can compile with `protoc` or share with clients in any language. The generated proto looks like this:
```protobuf
syntax = "proto3";
package blazerpc;
message TensorProto {
repeated int64 shape = 1;
string dtype = 2;
bytes data = 3;
}
message SentimentRequest {
repeated string text = 1;
}
message SentimentResponse {
repeated float result = 1;
}
service InferenceService {
rpc PredictSentiment(SentimentRequest) returns (SentimentResponse);
}
```
## Streaming
To build a server-streaming endpoint (for example, returning tokens from an LLM), set `streaming=True`:
```python
@app.model("generate", streaming=True)
async def generate_tokens(prompt: str) -> str:
tokens = run_my_llm(prompt)
for token in tokens:
yield token
```
Each `yield` sends a message to the client over the open gRPC stream. The client receives tokens as they are produced, without waiting for the full response.
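A streaming handler is just an async generator; conceptually, the server drives it with `async for`, sending one stream message per yield. A stdlib-only sketch (not the actual grpclib plumbing):

```python
import asyncio

async def generate_tokens(prompt: str):
    # Stand-in for an LLM: yield one token at a time.
    for token in prompt.split():
        yield token

async def stream_to_client(prompt):
    sent = []
    async for token in generate_tokens(prompt):
        # In BlazeRPC, each yield becomes one message on the open gRPC stream.
        sent.append(token)
    return sent

print(asyncio.run(stream_to_client("hello streaming world")))
# ['hello', 'streaming', 'world']
```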
## Adaptive batching
When `enable_batching=True` (the default), BlazeRPC collects individual requests and groups them into batches before calling your model function. This is essential for GPU workloads where batch inference is significantly faster than processing requests one at a time.
```python
app = BlazeApp(
enable_batching=True,
max_batch_size=32, # Maximum requests per batch
batch_timeout_ms=10.0, # Maximum wait time before dispatching a partial batch
)
```
The batching layer handles:
- **Collecting requests** from concurrent clients into a single batch.
- **Dispatching partial batches** when the timeout expires, ensuring low latency even under light load.
- **Partial failure isolation** -- if one item in a batch fails, only that client receives an error. Other clients in the batch still get their results.
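The collect-until-full-or-timeout loop can be sketched with an `asyncio.Queue`. This is an illustrative miniature, not BlazeRPC's actual batcher:

```python
import asyncio

async def batcher(queue, handle_batch, max_batch_size=32, timeout=0.01):
    # Collect requests until the batch is full or the timeout expires,
    # then dispatch whatever has accumulated (possibly a partial batch).
    while True:
        batch = [await queue.get()]
        try:
            while len(batch) < max_batch_size:
                batch.append(await asyncio.wait_for(queue.get(), timeout))
        except asyncio.TimeoutError:
            pass  # timeout hit: dispatch the partial batch
        handle_batch(batch)

async def demo():
    queue = asyncio.Queue()
    batches = []
    task = asyncio.create_task(batcher(queue, batches.append, max_batch_size=2))
    for item in ("a", "b", "c"):
        await queue.put(item)
    await asyncio.sleep(0.05)  # let the batcher drain the queue
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return batches

print(asyncio.run(demo()))  # [['a', 'b'], ['c']]
```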
## Tensor types
For models that operate on NumPy arrays, use `TensorInput` and `TensorOutput` to declare the expected shape and dtype:
```python
import numpy as np
from blazerpc import BlazeApp, TensorInput, TensorOutput
app = BlazeApp()
@app.model("classify")
def classify(
image: TensorInput[np.float32, "batch", 224, 224, 3],
) -> TensorOutput[np.float32, "batch", 1000]:
# image is serialized as a TensorProto on the wire
return model.predict(image)
```
The generated proto uses a `TensorProto` message with `shape`, `dtype`, and raw `bytes` fields for zero-copy serialization.
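The round trip through that wire format can be illustrated with the stdlib `struct` module — a simplified stand-in for the real serialization in `runtime/serialization.py`, which handles arbitrary dtypes:

```python
import struct

def serialize_tensor(values, shape, dtype="float32"):
    # Mirrors the TensorProto layout: shape + dtype tag + raw little-endian bytes.
    data = struct.pack(f"<{len(values)}f", *values)
    return {"shape": list(shape), "dtype": dtype, "data": data}

def deserialize_tensor(msg):
    count = len(msg["data"]) // 4  # float32 is 4 bytes per element
    values = list(struct.unpack(f"<{count}f", msg["data"]))
    return values, tuple(msg["shape"])

msg = serialize_tensor([1.0, 2.0, 3.0, 4.0], shape=(2, 2))
values, shape = deserialize_tensor(msg)
assert values == [1.0, 2.0, 3.0, 4.0] and shape == (2, 2)
```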
## Framework integrations
### PyTorch
```python
from blazerpc.contrib.pytorch import torch_model
@app.model("classifier")
@torch_model(device="cuda")
def classify(image):
# `image` is automatically converted from np.ndarray to a torch.Tensor
# on the specified device. The return value is converted back to np.ndarray.
return model(image)
```
### TensorFlow
```python
from blazerpc.contrib.tensorflow import tf_model
@app.model("classifier")
@tf_model
def classify(image):
return model(image)
```
### ONNX Runtime
```python
from blazerpc.contrib.onnx import ONNXModel
onnx_model = ONNXModel("model.onnx", providers=["CUDAExecutionProvider"])
@app.model("classifier")
def classify(image: np.ndarray) -> np.ndarray:
return onnx_model.predict(image)[0]
```
## Middleware
BlazeRPC provides a middleware system built on grpclib's event hooks. Attach middleware to the underlying server to add logging, metrics, or custom request processing.
```python
from blazerpc.server.middleware import LoggingMiddleware, MetricsMiddleware
# These are attached inside app.serve() or manually on the grpclib Server:
# LoggingMiddleware().attach(grpclib_server)
# MetricsMiddleware().attach(grpclib_server)
```
**Built-in middleware:**
| Middleware | Description |
| --------------------- | ---------------------------------------------------------------------------------------------- |
| `LoggingMiddleware` | Logs every RPC call with method name, peer address, and response status. |
| `MetricsMiddleware` | Exports Prometheus metrics: `blazerpc_requests_total` and `blazerpc_request_duration_seconds`. |
| `ExceptionMiddleware` | Base class for custom exception-to-gRPC-status mapping. |
To build your own middleware, subclass `Middleware` and implement `on_request` and `on_response`:
```python
from blazerpc.server.middleware import Middleware
class AuthMiddleware(Middleware):
async def on_request(self, event):
token = dict(event.metadata).get("authorization")
if not token:
raise GRPCError(Status.UNAUTHENTICATED, "Missing token")
async def on_response(self, event):
pass
```
## CLI reference
```bash
blaze serve <app_path> [OPTIONS]
Start the BlazeRPC gRPC server.
Arguments:
app_path App import path in module:attribute format (e.g. app:app)
Options:
--host TEXT Host to bind to [default: 0.0.0.0]
--port INTEGER Port to listen on [default: 50051]
--workers INTEGER Number of worker processes [default: 1]
--reload Enable auto-reload [default: False]
```
```bash
blaze proto <app_path> [OPTIONS]
Export the generated .proto file.
Arguments:
app_path App import path in module:attribute format (e.g. app:app)
Options:
--output-dir TEXT Output directory for .proto files [default: .]
```
## Project structure
```bash
src/blazerpc/
__init__.py # Public API: BlazeApp, TensorInput, TensorOutput, exceptions
app.py # BlazeApp class -- model registration and server lifecycle
types.py # TensorInput, TensorOutput, type introspection
exceptions.py # Exception hierarchy (BlazeRPCError and subclasses)
decorators.py # Reserved for future decorator extensions
cli/
main.py # Typer CLI (blaze serve, blaze proto)
serve.py # App loading from import strings
proto.py # Proto file export
codegen/
proto.py # .proto file generation from type annotations
servicer.py # Dynamic grpclib servicer generation
runtime/
registry.py # Model registry (stores registered models and metadata)
executor.py # Model execution with sync/async bridging
batcher.py # Adaptive request batching
serialization.py # Tensor and scalar serialization
server/
grpc.py # GRPCServer wrapper with signal handling and graceful shutdown
health.py # gRPC health checking protocol
reflection.py # gRPC server reflection
middleware.py # Logging, metrics, and extensible middleware base
contrib/
pytorch.py # PyTorch <-> NumPy conversion and @torch_model decorator
tensorflow.py # TensorFlow <-> NumPy conversion and @tf_model decorator
onnx.py # ONNX Runtime session wrapper
```
## Development
```bash
# Clone the repository
git clone https://github.com/Ifihan/blazerpc.git
cd blazerpc
# Install dependencies (requires uv)
uv sync --extra dev
# Run tests
uv run pytest tests/ -v
# Lint
uv run ruff check src/
# Type check
uv run mypy src/blazerpc/
```
## Contributing
We welcome contributions of all kinds -- bug fixes, new features, documentation improvements, and example applications. See the [Contributing Guide](CONTRIBUTING.md) for instructions on setting up a development environment, running tests, and submitting a pull request.
## License
MIT -- see [LICENSE](LICENSE) for details.
| text/markdown | Ifihanagbara Olusheye | Ifihanagbara Olusheye <victoriaolusheye@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"grpclib>=0.4.7",
"protobuf>=4.21.0",
"betterproto>=2.0.0b6",
"numpy>=1.24.0",
"typer>=0.9.0",
"uvloop>=0.19.0; sys_platform != \"win32\"",
"prometheus-client>=0.19.0",
"opentelemetry-api>=1.22.0",
"blazerpc[onnx,pytorch,tensorflow]; extra == \"all\"",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-... | [] | [] | [] | [] | uv/0.8.13 | 2026-02-18T23:08:53.680808 | blazerpc-1.0.0.tar.gz | 20,694 | 2e/57/9ea6fc72c1a1e6f7f575cee8a096bb4e9ac2a9db4d164e45bb5857ea34aa/blazerpc-1.0.0.tar.gz | source | sdist | null | false | 3db5a972ff51cb49fb9fe08bf34c0f11 | 19b3b634e8236968984fb11e5f651a02aa94081e06233cd5a2c0e8c81812ca25 | 2e579ea6fc72c1a1e6f7f575cee8a096bb4e9ac2a9db4d164e45bb5857ea34aa | MIT | [] | 286 |
2.3 | pyinfrincus | 0.2.0 | A pyinfra connector for incus. | # pyinfrincus
A small package linking [pyinfra](https://pyinfra.com) and [incus](https://linuxcontainers.org/incus/)
### The `incus` connector
The most useful part of this package is the connector. It is both an `inventory` and an `executing` connector. Once you have added this package to your pyproject.toml or requirements.txt, you can run commands like `pyinfra @incus/my-instance-name-1 fact server.LinuxName` (omit the instance name to run against ALL instances).
For now, the available connector strings are as follows.
| connector string | meaning |
| ---------------- | ------------------ |
| `@incus` | All instances |
| `@incus/NAME` | Run against `NAME` |
| text/markdown | Alex Wehrli | null | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"pyinfra>3.5"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T23:05:48.055624 | pyinfrincus-0.2.0.tar.gz | 2,379 | 89/3c/be567c9fdef9f7accb7b5942b1cfac0aeea3665870b2ae47771866ed32ad/pyinfrincus-0.2.0.tar.gz | source | sdist | null | false | 20ca1ab3a13834efe4e5066d063ee398 | c2a769b89dfb1152091e2651a70c1e6913c06928f1d8bc1ea0b623e773479c33 | 893cbe567c9fdef9f7accb7b5942b1cfac0aeea3665870b2ae47771866ed32ad | null | [] | 271 |
2.4 | mutato | 1.1.1 | Mutato Synonym Swapping API | # mutato
[](https://www.python.org)
[](https://github.com/Maryville-University-DLX/transcriptiq)
[](LICENSE)
[](https://pypi.org/project/mutato/)
[](https://pepy.tech/project/mutato)
[](tests/)
Ontology-driven synonym swapping for semantic text enrichment. Mutato identifies terms in input text and replaces them with semantically equivalent synonyms sourced from OWL ontologies, enabling consistent, structured analysis of natural language content.
## Use Cases
- Normalize terminology across transcripts before downstream analysis
- Enrich tokens with ontology-backed synonym candidates
- Bridge informal language to structured vocabulary in NLP pipelines
## Quick Start
```python
from mutato.parser import owl_parse
results = owl_parse(tokens=["student", "learned", "math"], ontologies=[...])
```
## Installation
```bash
make all
```
This downloads the spaCy model, installs dependencies, runs tests, builds the package, and freezes requirements.
Or step by step:
```bash
make get_model # download en_core_web_sm
make install # poetry lock + install
make test # run pytest
make build # install + test + poetry build
make freeze # export requirements.txt
```
## CLI
The `parse` command parses input text against an OWL ontology and prints canonical forms:
```bash
poetry run parse --ontology path/to/ontology.owl --input-text "fiscal policy analysis"
```
Three modes are available:
| Mode | Flag | Effect |
|---|---|---|
| Cached (default) | none | Load JSON snapshot; build it on first run |
| Rebuild cache | `--force-cache` | Regenerate snapshot, then parse |
| Live OWL | `--live` | Parse directly from the OWL file; no cache |
See [docs/cli.md](docs/cli.md) for the full reference, including the MIXED-schema caveat for `--live`.
## Architecture
Mutato is organized into four modules:
| Module | Purpose |
|---|---|
| `mutato.parser` | Main API -- synonym swapping and token matching |
| `mutato.finder` | Ontology lookup across single and multiple OWL graphs |
| `mutato.mda` | Metadata and NER enrichment generation |
| `mutato.core` | Shared utilities (file I/O, text, validation, timing) |
See [docs/architecture.md](docs/architecture.md) for design details.
## Matching Strategies
The parser applies multiple matching passes in order:
1. **Exact** -- literal string match against ontology terms
2. **Span** -- multi-token window matching
3. **Hierarchy** -- parent/child concept traversal
4. **spaCy** -- lemma and POS-aware NLP matching
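The ordered-pass idea can be sketched in plain Python (illustrative only; these function names are not mutato's API), showing how an exact pass catches single tokens while a span pass catches multi-token ontology terms:

```python
# Illustrative sketch of ordered matching passes (not mutato's real API).
def exact_pass(tokens, terms):
    # Pass 1: literal single-token matches against ontology terms
    return [t for t in tokens if t in terms]

def span_pass(tokens, terms, max_window=3):
    # Pass 2: slide a multi-token window and keep spans found in the ontology
    hits = []
    for size in range(2, max_window + 1):
        for i in range(len(tokens) - size + 1):
            span = " ".join(tokens[i : i + size])
            if span in terms:
                hits.append(span)
    return hits

terms = {"fiscal policy", "math"}
tokens = ["fiscal", "policy", "math"]
matches = exact_pass(tokens, terms) + span_pass(tokens, terms)
# "math" comes from the exact pass, "fiscal policy" from the span pass
```

The real parser layers hierarchy traversal and spaCy lemma/POS matching on top of these two cheaper passes.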
## Requirements
- Python >= 3.10, < 3.14
- [Poetry](https://python-poetry.org) for dependency management
- spaCy `en_core_web_sm` model (installed via `make get_model`)
## Links
- [Issue Tracker](https://github.com/Maryville-University-DLX/transcriptiq/issues)
- [Source](https://github.com/Maryville-University-DLX/transcriptiq/libs/core/mutato-core)
| text/markdown | Craig Trim | ctrim@maryville.edu | Craig Trim | ctrim@maryville.edu | MIT | transcript schools, text extraction, statistical NLP, semantic NLP, AI, data analytics, natural language processing, machine learning, text analysis | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: ... | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"lingpatlab",
"numpy==2.2.6",
"rdflib",
"spacy==3.8.2"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/Maryville-University-DLX/transcriptiq/issues",
"Repository, https://github.com/Maryville-University-DLX/transcriptiq/libs/core/mutato-core"
] | poetry/2.3.2 CPython/3.11.9 Darwin/24.6.0 | 2026-02-18T23:05:27.512802 | mutato-1.1.1-py3-none-any.whl | 113,494 | c6/53/ac2293d6c453db90799bd967269f77dbe57d3ee9276a026c3871bc9ee634/mutato-1.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | bfcea4718ad29416fce2577d58dc0e16 | abaf50ce2480794ce19bf204b117f3132c395f451930f402e9c3ef74e336751c | c653ac2293d6c453db90799bd967269f77dbe57d3ee9276a026c3871bc9ee634 | null | [
"LICENSE"
] | 265 |
2.1 | synmax-api-python-client | 4.15.0 | Synmax API client | # SynMax API Python Client
[](https://pypi.org/project/synmax-api-python-client/)
[](https://www.python.org/downloads/)
[](https://synmax.com)
The official Python SDK for SynMax APIs—providing unified access to **Hyperion**, **Vulcan**, and **Leviaton** data products.
---
## Introduction
The energy and infrastructure industries rely on timely, accurate data for critical decision-making. The **SynMax API Python Client** solves the challenge of fragmented data access by providing a single SDK to interact with three powerful data platforms:
- **Hyperion**: Near real-time U.S. oil & gas drilling, completion, and production data, as well as forecasts
- **Vulcan**: Power and infrastructure monitoring with satellite-derived construction status intelligence for power plants, data centers, and LNG facilities
- **Leviaton**: Global LNG vessel tracking, cargo flows, and transaction data with forecasting capabilities
---
## Installation
Install the SDK via pip:
```bash
pip install --upgrade synmax-api-python-client
```
**Requirements:**
- Python 3.10 or higher
- pip (Python package installer)
---
## Authentication
All SynMax APIs authenticate requests using an **API access token** tied to your subscription.
**To obtain an access token, contact:** [support@synmax.com](mailto:support@synmax.com)
Pass your token directly when initializing any client:
```python
access_token = "your_access_token_here"
```
---
## Quick Start
### Hyperion (Oil & Gas Data)
```python
import datetime
from synmax.hyperion.v4 import HyperionApiClient
client = HyperionApiClient(access_token="your_access_token_here")
# Fetch short-term production forecast
data = client.short_term_forecast(
aggregate_by=["date_prod", "sub_region_natgas"],
date_prod_min=datetime.date(2025, 5, 1),
date_prod_max=datetime.date(2025, 6, 30),
)
df = data.df() # Returns pandas DataFrame
print(df.head())
```
### Vulcan (Power & Infrastructure)
```python
from synmax.vulcan.v2 import VulcanApiClient
client = VulcanApiClient(access_token="your_access_token_here")
# Fetch datacenter project data
datacenters = client.datacenters()
df = datacenters.df() # Returns pandas DataFrame
print(df.head())
```
### Leviaton (LNG Vessel Tracking)
```python
from synmax.leviaton.v1 import LeviatonApiClient
client = LeviatonApiClient(access_token="your_access_token_here")
# Fetch LNG transactions from US to Europe
transactions = client.transactions(
origin_country_codes=["US"],
destination_country_codes=["DE", "FR", "UK", "NL", "BE"],
from_timestamp="2025-06-01T00:00:00Z",
to_timestamp="2025-06-23T23:59:59Z",
)
df = transactions.df() # Returns pandas DataFrame
print(df.head())
```
---
## Usage Patterns
### Fetch Data as DataFrame
Best for datasets that fit in memory:
```python
df = client.some_endpoint(**params).df()
```
### Stream Large Datasets to File
For large datasets, iterate through chunks:
```python
import json
data_generator = client.some_endpoint(**params)
with open("output.json", "w") as f:
f.write("[\n")
first = True
for record in data_generator:
if not first:
f.write(",\n")
json.dump(record, f, indent=2, default=str)
first = False
f.write("\n]")
```
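If you do not need a single JSON array, newline-delimited JSON is a simpler streaming format (sketch below; `data_generator` is a stand-in for any endpoint generator such as `client.some_endpoint(**params)`):

```python
import json

# Stand-in for an endpoint generator; any iterable of records works.
def data_generator():
    yield {"id": 1, "value": "a"}
    yield {"id": 2, "value": "b"}

with open("output.jsonl", "w") as f:
    for record in data_generator():
        # One JSON object per line: appendable and easy to stream back in
        f.write(json.dumps(record, default=str) + "\n")
```

Each line parses independently, so partial files remain usable if a long download is interrupted.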
---
## Features
- **Unified SDK**: Single package for Hyperion, Vulcan, and Leviaton APIs
- **Pandas Integration**: Native `.df()` method returns data as DataFrames
- **Generator Support**: Memory-efficient streaming for large datasets
- **Type Hints**: Full autocomplete support in modern IDEs
---
## Documentation & Support
| Product | Documentation |
|-----------|---------------|
| Hyperion | [apidocs.synmax.com](https://apidocs.synmax.com/) |
| Vulcan | [docs.vulcan.synmax.com](https://apidocs.vulcan.synmax.com/) |
| Leviaton | [leviaton.apidocs.synmax.com](https://leviaton.apidocs.synmax.com/) |
**Support:** [support@synmax.com](mailto:support@synmax.com)
---
## License
This SDK is proprietary software. See [synmax.com](https://synmax.com) for licensing details.
| text/markdown | SynMax Inc. | support@synmax.com | null | null | null | null | [] | [] | https://github.com/SynMaxDev/synmax-api-python-client.git | null | >=3.7 | [] | [] | [] | [
"aiohttp>=3.11.14",
"aioretry>=6.3.1",
"packaging>=24.2",
"pandas>=2.0.3",
"pydantic>=2.10.6",
"pydantic-core<2.32.0,>=2.27.2",
"requests>=2.32.3",
"tenacity>=9.0.0",
"tqdm>=4.67.1",
"urllib3>=2.3.0",
"prance>=25.4.8.0",
"openapi-spec-validator>=0.7.1",
"httpx>=0.28.1"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.11 | 2026-02-18T23:04:45.983092 | synmax_api_python_client-4.15.0-py3-none-any.whl | 92,064 | 5e/8e/5de6f2636c0d39965e203f6e903b3b3aaa44d586108577a90d40f2e86e70/synmax_api_python_client-4.15.0-py3-none-any.whl | py3 | bdist_wheel | null | false | b045e50a677bd18e115815bda014775e | edc8a293840bfe6a58c0b1ffafa5c5e9a8332e79ed2b4f7d1ce885416ae01048 | 5e8e5de6f2636c0d39965e203f6e903b3b3aaa44d586108577a90d40f2e86e70 | null | [] | 179 |
2.4 | tactus | 0.43.1 | Tactus: Lua-based DSL for agentic workflows | # Tactus
![Continuous integration][continuous-integration-badge]
![Coverage][coverage-badge]
[![Documentation][documentation-badge]][documentation-link]
**A programming language for reliable, tool-using AI agents.**
*Agents that never lose their place.*
Tactus is a Lua-based DSL for building agent programs: you define tools, agents, and procedures that orchestrate their work. It’s designed for **bounded autonomy**—use imperative code for the steps that must be deterministic, and agent turns for the steps that benefit from intelligence. The runtime handles durability, human-in-the-loop, tool/context control, and testing so that workflows can run for hours or days and still be shippable.
> **Status:** Alpha. APIs and syntax may change; not production-ready.
## The Problem: Agent Scripts Don’t Scale
“Give an agent tools and a prompt” works surprisingly well when you’re there to steer. But when you run the same workflow autonomously (or thousands of times), small failure rates turn into real incidents.
Real-world agent programs need to:
- **Wait for humans**: Approval gates, reviews, input requests
- **Survive failures**: Network timeouts, API errors, process crashes
- **Run for hours or days**: Long tasks, retries, handoffs
- **Control capabilities and context**: Change tool access and the information an agent sees as the workflow progresses
- **Be testable**: Verify orchestration logic and measure reliability
Traditional frameworks help you call models, but the rest becomes infrastructure you build yourself: state machines, checkpoint tables, replay logic, HITL plumbing, and bespoke tests.
## The Solution: Imperative Orchestration with Transparent Durability
In Tactus, the deterministic parts are just code—loops, conditionals, function calls. When you want intelligence, you take an agent turn. The runtime transparently checkpoints every agent turn, tool call, and human interaction so execution can suspend and resume safely:
```lua
-- This looks like it runs straight through
repeat
researcher()
until Tool.called("done")
-- But here execution might suspend for days
local approved = Human.approve({message = "Deploy to production?"})
-- When the human responds, execution resumes exactly here
if approved then
deploy()
end
```
Every agent turn, every tool call, every human interaction is automatically checkpointed. No state machines. No manual serialization. No replay logic.
### Compare: Graph-Based vs. Imperative Durability
LangGraph does support persistence—when you compile a graph with a checkpointer, it saves state at every "super-step" (node boundary). But you're still designing a state machine:
```python
# LangGraph: Define state, nodes, and edges explicitly
class State(TypedDict):
messages: list
research_complete: bool
approved: bool | None
graph = StateGraph(State)
graph.add_node("research", research_node)
graph.add_node("wait_approval", wait_approval_node)
graph.add_node("deploy", deploy_node)
graph.add_edge("research", "wait_approval")
graph.add_conditional_edges("wait_approval", route_on_approval, {
"approved": "deploy",
"rejected": END
})
# Add checkpointer for persistence
memory = SqliteSaver.from_conn_string(":memory:")
app = graph.compile(checkpointer=memory)
```
This is powerful, but your workflow must be expressed as a graph. Nodes, edges, conditional routing. The structure is explicit.
**With Tactus**, you write imperative code. Loops, conditionals, function calls—the control flow you already know:
```lua
repeat researcher() until Tool.called("done")
local approved = Human.approve({message = "Deploy?"})
if approved then deploy() end
```
Same workflow. No graph definition. The runtime checkpoints every operation transparently—agent turns, tool calls, human interactions—and resumes exactly where execution left off.
The difference isn't whether checkpointing exists, but how you express your workflow. Graphs vs. imperative code. Explicit structure vs. transparent durability.
---
## Everything as Code
Tactus isn't just durable—it's designed for agents that build and modify other agents.
Most frameworks scatter agent logic across Python classes, decorators, YAML files, and configuration objects. This is opaque to AI. An agent can't easily read, understand, and improve its own definition when it's spread across a codebase.
Tactus takes a different approach: **the entire agent definition is a single, readable file.**
```lua
done = tactus.done
search = mcp.brave_search.search
analyze = mcp.analyze.analyze
researcher = Agent {
model = "gpt-4o",
system_prompt = "Research the topic thoroughly.",
tools = {search, analyze, done}
}
Procedure {
input = {
topic = field.string{required = true}
},
output = {
findings = field.string{required = true}
},
function(input)
repeat
researcher()
until Tool.called("done")
return {findings = Tool.last_result("done")}
end
}
Specification([[
Feature: Research
Scenario: Completes research
When the researcher agent takes turns
Then the search tool should be called at least once
]])
```
Agents, orchestration, contracts, and tests—all in one file. All in a minimal syntax that fits in context windows and produces clean diffs.
This enables:
- **Self-evolution**: An agent reads its own definition, identifies improvements, rewrites itself
- **Agent-building agents**: A meta-agent that designs and iterates on specialized agents
- **Transparent iteration**: When an agent modifies code, you can diff the changes
---
## Safe Embedding
Tactus is designed for platforms that run user-contributed agent definitions—like n8n or Zapier, but where the automations are intelligent agents.
This requires true sandboxing. User A's agent can't escape to affect user B. Can't access the filesystem. Can't make network calls. Unless you explicitly provide tools that grant these capabilities.
Python can't be safely sandboxed. Lua was designed for it—decades of proven use in game modding, nginx plugins, Redis scripts.
Tactus agents run in a restricted Lua VM:
- No filesystem access by default
- No network access by default
- No environment variable access by default
- The tools you provide are the *only* capabilities the agent has
This makes Tactus safe for:
- Multi-tenant platforms running user-contributed agents
- Embedding in applications where untrusted code is a concern
- Letting AI agents write and execute their own orchestration logic
---
## Omnichannel Human-in-the-Loop
When an agent needs human input, *how* that request reaches the human depends on the channel. The agent shouldn't care.
Tactus separates the *what* from the *how*:
```lua
local approved = Human.approve({
message = "Deploy to production?",
context = {version = "2.1.0", environment = "prod"}
})
```
The agent declares what it needs. The platform decides how to render it:
| Channel | Rendering |
|---------|-----------|
| **Web** | Modal with Approve/Reject buttons |
| **Slack** | Interactive message with button actions |
| **SMS** | "Deploy v2.1.0 to prod? Reply YES or NO" |
| **Voice** | "Should I deploy version 2.1.0 to production?" |
| **Email** | Message with approve/reject links |
Because procedures declare typed inputs, platforms can auto-generate UI for any channel:
```lua
main = procedure("main", {
input = {
topic = { type = "string", required = true },
depth = { type = "string", enum = {"shallow", "deep"}, default = "shallow" },
max_results = { type = "number", default = 10 },
include_sources = { type = "boolean", default = true },
tags = { type = "array", default = {} },
config = { type = "object", default = {} }
}
}, function()
-- Access inputs directly in Lua
log("Researching: " .. input.topic)
log("Depth: " .. input.depth)
log("Max results: " .. input.max_results)
-- Arrays and objects work seamlessly
for i, tag in ipairs(input.tags) do
log("Tag " .. i .. ": " .. tag)
end
-- ... rest of procedure
end)
```
**Input Types Supported:**
- `string`: Text values with optional enums for constrained choices
- `number`: Integers and floats
- `boolean`: True/false values
- `array`: Lists of values (converted to 1-indexed Lua tables)
- `object`: Key-value dictionaries (converted to Lua tables)
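The array conversion noted above amounts to re-keying a 0-indexed list as a 1-indexed table; a minimal Python sketch of that mapping (illustrative only, not Tactus internals):

```python
# Illustrative: map a Python list onto the {1: ..., 2: ...} shape of a Lua table.
def to_lua_array(items):
    return {i + 1: v for i, v in enumerate(items)}

tags = to_lua_array(["urgent", "research"])
# tags[1] is "urgent", tags[2] is "research", matching Lua's ipairs order
```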
**Input Sources:**
- **CLI**: Parameters via `--param`, interactive prompting, or automatic prompting for missing required inputs
- **GUI**: Modal dialog before execution with type-appropriate form controls
- **SDK**: Direct passing via `context` parameter to `runtime.execute()`
A web app renders a form. Slack renders a modal. SMS runs a structured conversation. The CLI provides interactive prompts.
One agent definition. Every channel. Type-safe inputs everywhere.
---
## Testing Built In
When agents modify agents, verification is essential. Tactus makes BDD specifications part of the language:
```lua
specifications([[
Feature: Research Task
Scenario: Agent completes research
Given the procedure has started
When the researcher agent takes turns
Then the search tool should be called at least once
And the done tool should be called exactly once
]])
```
Run tests with `tactus test`. Measure consistency with `tactus test --runs 10`. When an agent rewrites itself, the tests verify it still works.
---
## The Broader Context
Tactus serves a paradigm shift in programming: from anticipating every scenario to providing capabilities and goals.
Traditional code requires you to handle every case—every header name, every format, every edge condition. Miss one and your program breaks.
Agent programming inverts this: give an agent tools, describe the goal, let intelligence handle the rest.
But to run this autonomously, you need more than a prompt: you need bounded autonomy (tool + context control), durability, HITL, and tests. Tactus is the language for making “give an agent a tool” workflows reliable.
```lua
done = tactus.done
file_contact = mcp.contacts.file_contact
importer = Agent {
system_prompt = "Extract contacts from the data. File each one you find.",
tools = {file_contact, done}
}
```
When a new format appears—unexpected headers, mixed delimiters, a language you didn't anticipate—the agent adapts. No code changes.
See [Give an Agent a Tool](https://github.com/AnthusAI/Give-an-Agent-a-Tool) for a deep dive on this paradigm shift.
---
## What This Enables
**Agent platforms**: Build your own n8n/Zapier where users define intelligent agents. Tactus handles sandboxing, durability, and multi-tenancy.
**Self-evolving agents**: Agents that read their own definitions, identify improvements, and rewrite themselves.
**Agents building agents**: A meta-agent that designs, tests, and iterates on specialized agents for specific tasks.
**Omnichannel deployment**: Write agent logic once. Deploy across web, mobile, Slack, SMS, voice, email.
**Long-running workflows**: Agents that wait for humans, coordinate with external systems, and run for days without losing progress.
---
## Tools
Tools are the capabilities you give to agents. Tactus supports multiple ways to define and connect tools.
### MCP Server Integration
Connect to [Model Context Protocol](https://modelcontextprotocol.io/) servers to access external tool ecosystems:
```yaml
# .tactus/config.yml
mcp_servers:
plexus:
command: "python"
args: ["-m", "plexus.mcp"]
env:
PLEXUS_API_KEY: "${PLEXUS_API_KEY}"
filesystem:
command: "npx"
args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
```
Tools from MCP servers are accessed via the `mcp` namespace:
```lua
done = tactus.done
score_info = mcp.plexus.score_info
read_file = mcp.filesystem.read_file
worker = Agent {
tools = {score_info, read_file, done}
}
```
### Inline Lua Tools
Define tools directly in your `.tac` file—no external servers required:
**Individual tools:**
```lua
done = tactus.done
calculate_tip = Tool {
description = "Calculate tip amount for a bill",
input = {
amount = field.number{required = true},
percent = field.number{required = true}
},
function(args)
return string.format("$%.2f", args.amount * args.percent / 100)
end
}
assistant = Agent {
tools = {calculate_tip, done}
}
```
**Grouped toolsets:**
```lua
done = tactus.done
math_tools = Toolset {
type = "lua",
tools = {
{name = "add", input = {...}, handler = function(args) ... end},
{name = "multiply", input = {...}, handler = function(args) ... end}
}
}
calculator = Agent {
tools = {math_tools, done}
}
```
**Inline agent tools:**
```lua
done = tactus.done
text_processor = Agent {
inline_tools = {
{name = "uppercase", input = {...}, handler = function(args)
return string.upper(args.text)
end}
},
tools = {done}
}
```
### Direct Tool Invocation
Call tools directly from Lua code for deterministic control:
```lua
-- Tool returns a callable handle - assign it for direct use
calculate_tip = Tool {
description = "Calculate tip",
input = {
amount = field.number{required = true},
percent = field.number{required = true}
},
function(args)
return args.amount * args.percent / 100
end
}
-- Call directly - no LLM involvement
local tip = calculate_tip({amount = 50, percent = 20})
-- Pass results to agent via context
summarizer({
context = {
tip_calculation = tip,
original_amount = "$50.00"
}
})
```
### Tool Tracking
Check which tools were called and access their results:
```lua
if Tool.called("search") then
local result = Tool.last_result("search")
local call = Tool.last_call("search") -- {args = {...}, result = "..."}
end
```
### Per-Turn Tool Control
Control which tools are available on each turn—essential for patterns like tool result summarization:
```lua
repeat
researcher() -- Has all tools
if Tool.called("search") then
-- Summarize with NO tools (prevents recursive calls)
researcher({
message = "Summarize the search results",
tools = {}
})
end
until Tool.called("done")
```
See [docs/TOOLS.md](docs/TOOLS.md) for the complete tools reference.
---
## Quick Start
### Installation
```bash
pip install tactus
```
**Docker required by default:** `tactus run` uses a Docker sandbox for isolation and will error if Docker is not available. Use `--no-sandbox` (or set `sandbox.enabled: false` in config) to opt out when your architecture does not require container isolation.
### Your First Procedure
Create `hello.tac`:
```lua
done = tactus.done
greeter = Agent {
provider = "openai",
model = "gpt-4o-mini",
system_prompt = [[
You are a friendly greeter. Greet the user by name: {input.name}
When done, call the done tool.
]],
tools = {done}
}
Procedure {
input = {
name = field.string{default = "World"}
},
output = {
greeting = field.string{required = true}
},
function(input)
repeat
greeter()
until Tool.called("done")
return { greeting = Tool.last_result("done") }
end
}
Specification([[
Feature: Greeting
Scenario: Agent greets and completes
When the greeter agent takes turns
Then the done tool should be called exactly once
And the procedure should complete successfully
]])
```
**Run it:**
```bash
export OPENAI_API_KEY=your-key
tactus run hello.tac
```
**Test it:**
```bash
tactus test hello.tac
```
**Evaluate consistency:**
```bash
tactus test hello.tac --runs 10
```
---
## Documentation
- **[docs/README.md](docs/README.md)** — Docs index and recommended reading paths
- **[SPECIFICATION.md](SPECIFICATION.md)** — Complete DSL reference
- **[IMPLEMENTATION.md](IMPLEMENTATION.md)** — Implementation status and architecture
- **[docs/model-primitive.md](docs/model-primitive.md)** — Model primitive quick reference (humans + AI assistants)
- **[docs/model-training-walkthrough.md](docs/model-training-walkthrough.md)** — Train/evaluate/run walkthrough (registry-backed models)
- **[docs/agent-primitive.md](docs/agent-primitive.md)** — Agent primitive quick reference (humans + AI assistants)
- **[docs/TOOLS.md](docs/TOOLS.md)** — Tools and MCP integration guide
- **[docs/FILE_IO.md](docs/FILE_IO.md)** — File I/O operations guide (CSV, TSV, Parquet, HDF5, Excel)
- **[examples/](examples/)** — Example procedures
---
## Key Features
### Per-Turn Tool Control
Tactus gives you fine-grained control over what tools an agent has access to on each individual turn. This enables powerful patterns like **tool result summarization**, where you want the agent to explain what a tool returned without having access to call more tools.
**The Pattern:**
```lua
done = tactus.done
search = mcp.brave_search.search
analyze = mcp.analyze.analyze
researcher = Agent {
provider = "openai",
model = "gpt-4o",
system_prompt = "You are a research assistant.",
tools = {search, analyze, done}
}
Procedure {
function(input)
repeat
-- Main call: agent has all tools
researcher()
-- After each tool call, ask agent to summarize with NO tools
if Tool.called("search") or Tool.called("analyze") then
researcher({
message = "Summarize the tool results above in 2-3 sentences",
tools = {} -- No tools for this call!
})
end
until Tool.called("done")
end
}
```
This creates a rhythm: **tool call → summarization → tool call → summarization → done**
**Why this matters:**
Without per-call control, an agent might call another tool when you just want it to explain the previous result. By temporarily restricting toolsets to an empty set (`tools = {}`), you ensure the agent focuses on summarization.
**Other per-call overrides:**
```lua
-- Override model parameters for one call
researcher({
message = "Be creative with this summary",
temperature = 0.9,
max_tokens = 500
})
-- Restrict to specific tools only
researcher({
tools = {search, done} -- No analyze for this call
})
```
See `examples/14-feature-per-turn-tools.tac` for a complete working example.
### Checkpointed Steps (Determinism)
For durable execution, any operation that touches external state (randomness, time, APIs not in tools) must be checkpointed. Tactus provides `Step.checkpoint` for this:
```lua
-- Non-deterministic operation wrapped in checkpoint
local data = Step.checkpoint(function()
return http_get("https://api.example.com/data")
end)
-- On replay, the function is NOT called again.
-- The previously saved 'data' is returned immediately.
```
This ensures that when a procedure resumes after a pause (e.g. waiting for a human), it doesn't re-execute side effects or get different random values.
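Conceptually, `Step.checkpoint` behaves like an append-only replay log: the first execution records each result, and a replay returns the recorded value instead of re-running the side effect. Here is a minimal Python sketch of that idea (an illustration of the semantics, not Tactus internals):

```python
class ReplayLog:
    """Minimal sketch of checkpoint/replay semantics (not Tactus internals)."""

    def __init__(self, saved=None):
        self.saved = list(saved or [])  # results recorded on a previous run
        self.cursor = 0

    def checkpoint(self, fn):
        if self.cursor < len(self.saved):
            # Replaying: return the recorded result; do NOT re-run the side effect.
            result = self.saved[self.cursor]
        else:
            # First execution: run the effect and record its result.
            result = fn()
            self.saved.append(result)
        self.cursor += 1
        return result


calls = []

def fetch():
    calls.append(1)          # side effect we want to run exactly once
    return {"status": "ok"}

log = ReplayLog()
first = log.checkpoint(fetch)        # executes fetch()

replay = ReplayLog(saved=log.saved)  # simulate resuming from saved state
second = replay.checkpoint(fetch)    # does NOT execute fetch() again

print(first == second, len(calls))   # True 1
```

The same mechanics explain why wrapping randomness or time lookups is required: anything outside the log would produce a different value on replay.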
### File I/O Operations
Tactus provides safe file I/O operations for reading and writing data files, with all operations restricted to the current working directory for security.
**Supported Formats:**
- **CSV/TSV** — Tabular data with automatic header handling
- **JSON** — Structured data using File.read/write with Json.encode/decode
- **Parquet** — Columnar storage for analytics (via pyarrow)
- **HDF5** — Scientific data with multiple datasets (via h5py)
- **Excel** — Spreadsheets with sheet support (via openpyxl)
- **Raw text** — Plain text files and configurations
**Example:**
```lua
-- Read CSV data
local data = Csv.read("sales.csv")
-- Process data (0-indexed access)
for i = 0, data:len() - 1 do
local row = data[i]
-- process row...
end
-- Write results
Csv.write("results.csv", processed_data)
```
See [`docs/FILE_IO.md`](docs/FILE_IO.md) for the complete API reference and [`examples/51-file-io-basics.tac`](examples/51-file-io-basics.tac) through [`examples/54-excel-file-io.tac`](examples/54-excel-file-io.tac) for working examples.
### Testing & Evaluation: Two Different Concerns
Tactus provides two complementary approaches for ensuring quality, each targeting a different aspect of your agentic workflow:
#### Behavior Specifications (BDD): Testing Workflow Logic
**What it tests:** The deterministic control flow of your procedure—the Lua code that orchestrates agents, handles conditionals, manages state, and coordinates tools.
**When to use:**
- Complex procedures with branching logic, loops, and state management
- Multi-agent coordination patterns
- Error handling and edge cases
- Procedures where the *orchestration* is more complex than the *intelligence*
**How it works:**
```lua
specifications([[
Feature: Multi-Agent Research Workflow
Scenario: Researcher delegates to summarizer
Given the procedure has started
When the researcher agent takes 3 turns
Then the search tool should be called at least once
And the researcher should call the delegate tool
And the summarizer agent should take at least 1 turn
And the done tool should be called exactly once
]])
```
**Key characteristics:**
- Uses Gherkin syntax (Given/When/Then)
- Runs with `tactus test`
- Can use mocks to isolate logic from LLM behavior
- Deterministic: same input → same execution path
- Fast: tests orchestration without expensive API calls
- Measures: "Did the code execute correctly?"
#### Gherkin Step Reference
Tactus provides a rich library of built-in steps for BDD testing. You can use these immediately in your `specifications` block:
**Tool Steps:**
```gherkin
Then the search tool should be called
Then the search tool should not be called
Then the search tool should be called at least 3 times
Then the search tool should be called exactly 2 times
Then the search tool should be called with query=test
```
**State Steps:**
```gherkin
Given the procedure has started
Then the state count should be 5
Then the state error should exist
```
**Completion & Iteration Steps:**
```gherkin
Then the procedure should complete successfully
Then the procedure should fail
Then the total iterations should be less than 10
Then the agent should take at least 3 turns
```
**Custom Steps:**
Define your own steps in Lua:
```lua
step("the research quality is high", function()
local results = State.get("results")
assert(#results > 5, "Not enough results")
end)
```
See [tactus/testing/README.md](tactus/testing/README.md) for the complete reference.
#### Evaluations: Testing Agent Intelligence
**What it tests:** The probabilistic quality of LLM outputs—whether agents produce correct, helpful, and consistent results.
**When to use:**
- Simple "LLM wrapper" procedures (minimal orchestration logic)
- Measuring output quality (accuracy, tone, format)
- Testing prompt effectiveness
- Consistency across multiple runs
- Procedures where the *intelligence* is more important than the *orchestration*
**How it works:**
```lua
evaluations {
runs = 10, -- Run each test case 10 times
parallel = true,
dataset = {
{
name = "greeting_task",
inputs = {task = "Greet Alice warmly"}
},
{
name = "haiku_task",
inputs = {task = "Write a haiku about AI"}
}
},
evaluators = {
-- Check for required content
{
type = "contains",
field = "output",
value = "TASK_COMPLETE:"
},
-- Use LLM to judge quality
{
type = "llm_judge",
rubric = [[
Score 1.0 if the agent:
- Completed the task successfully
- Produced high-quality output
- Called the done tool appropriately
Score 0.0 otherwise.
]],
model = "openai/gpt-4o-mini"
}
}
}
```
**Key characteristics:**
- Uses Pydantic AI Evals framework
- Runs with `tactus eval`
- Uses real LLM calls (not mocked)
- Probabilistic: same input → potentially different outputs
- Slower: makes actual API calls
- Measures: "Did the AI produce good results?"
- Provides success rates, consistency metrics, and per-task breakdowns
#### When to Use Which?
| Feature | Behavior Specifications (BDD) | Evaluations |
|---------|-------------------------------|-------------|
| **Goal** | Verify deterministic logic | Measure probabilistic quality |
| **Command (Single)** | `tactus test` | `tactus eval` |
| **Command (Repeat)** | `tactus test --runs 10` (consistency check) | `tactus eval --runs 10` |
| **Execution** | Fast, mocked (optional) | Slow, real API calls |
| **Syntax** | Gherkin (`Given`/`When`/`Then`) | Lua configuration table |
| **Example** | "Did the agent call the tool?" | "Did the agent write a good poem?" |
| **Best for** | Complex orchestration, state management | LLM output quality, prompt tuning |
**Use Behavior Specifications when:**
- You have complex orchestration logic to test
- You need fast, deterministic tests
- You want to verify control flow (loops, conditionals, state)
- You're testing multi-agent coordination patterns
- Example: [`examples/20-bdd-complete.tac`](examples/20-bdd-complete.tac)
**Use Evaluations when:**
- Your procedure is mostly an LLM call wrapper
- You need to measure output quality (accuracy, tone)
- You want to test prompt effectiveness
- You need consistency metrics across runs
- Example: [`examples/34-eval-advanced.tac`](examples/34-eval-advanced.tac)
**Use Both when:**
- You have complex orchestration AND care about output quality
- Run BDD tests for fast feedback on logic
- Run evaluations periodically to measure LLM performance
- Example: [`examples/33-eval-trace.tac`](examples/33-eval-trace.tac)
**The key insight:** Behavior specifications test your *code*. Evaluations test your *AI*. Most real-world procedures need both.
#### Advanced Evaluation Features
Tactus evaluations support powerful features for real-world testing:
**External Dataset Loading:**
Load evaluation cases from external files for better scalability:
```lua
evaluations {
-- Load from JSONL file (one case per line)
dataset_file = "data/eval_cases.jsonl",
-- Can also include inline cases (combined with file)
dataset = {
{name = "inline_case", inputs = {...}}
},
evaluators = {...}
}
```
Supported formats: `.jsonl`, `.json` (array), `.csv`
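For example, a `dataset_file` in JSONL format holds one case per line; the field names here mirror the inline `dataset` entries shown above:

```jsonl
{"name": "greeting_task", "inputs": {"task": "Greet Alice warmly"}}
{"name": "haiku_task", "inputs": {"task": "Write a haiku about AI"}}
```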
**Trace Inspection:**
Evaluators can inspect execution internals beyond just inputs/outputs:
```lua
evaluators = {
-- Verify specific tool was called
{
type = "tool_called",
value = "search",
min_value = 1,
max_value = 3
},
-- Check agent turn count
{
type = "agent_turns",
field = "researcher",
min_value = 2,
max_value = 5
},
-- Verify state variable
{
type = "state_check",
field = "research_complete",
value = true
}
}
```
**Advanced Evaluator Types:**
```lua
evaluators = {
-- Regex pattern matching
{
type = "regex",
field = "phone",
value = "\\(\\d{3}\\) \\d{3}-\\d{4}"
},
-- JSON schema validation
{
type = "json_schema",
field = "data",
value = {
type = "object",
properties = {
name = {type = "string"},
age = {type = "number"}
},
required = {"name"}
}
},
-- Numeric range checking
{
type = "range",
field = "score",
value = {min = 0, max = 100}
}
}
```
**CI/CD Thresholds:**
Define quality gates that fail the build if not met:
```lua
evaluations {
dataset = {...},
evaluators = {...},
-- Quality thresholds for CI/CD
thresholds = {
min_success_rate = 0.90, -- Fail if < 90% pass
max_cost_per_run = 0.01, -- Fail if too expensive
max_duration = 10.0, -- Fail if too slow (seconds)
max_tokens_per_run = 500 -- Fail if too many tokens
}
}
```
When thresholds are not met, `tactus eval` exits with code 1, enabling CI/CD integration.
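Because of that exit code, wiring evaluations into CI needs no special tooling — any runner that fails a job on a non-zero exit works. A sketch of a GitHub Actions step (workflow and file names are illustrative):

```yaml
# .github/workflows/eval.yml (illustrative)
jobs:
  quality-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install tactus
      # A non-zero exit (thresholds not met) fails the job automatically
      - run: tactus eval my_procedure.tac
```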
**See examples:**
- [`examples/32-eval-dataset.tac`](examples/32-eval-dataset.tac) - External dataset loading
- [`examples/33-eval-trace.tac`](examples/33-eval-trace.tac) - Trace-based evaluators
- [`examples/34-eval-advanced.tac`](examples/34-eval-advanced.tac) - Regex, JSON schema, range
- [`examples/31-eval-thresholds.tac`](examples/31-eval-thresholds.tac) - CI/CD quality gates
### Multi-Model and Multi-Provider Support
Use different models and providers for different tasks within the same workflow. **Every agent must specify a provider** (either directly via `provider = "..."` or through a `default_provider` setting).
**Supported providers:** `openai`, `bedrock`
**Mix models for different capabilities:**
```lua
done = tactus.done
search = mcp.brave_search.search
researcher = Agent {
provider = "openai",
model = "gpt-4o", -- Use GPT-4o for complex research
system_prompt = "Research the topic thoroughly...",
tools = {search, done}
}
summarizer = Agent {
provider = "openai",
model = "gpt-4o-mini", -- Use GPT-4o-mini for simple summarization
system_prompt = "Summarize the findings concisely...",
tools = {done}
}
```
**Mix providers (OpenAI + Bedrock):**
```lua
done = tactus.done
openai_analyst = Agent {
provider = "openai",
model = "gpt-4o",
system_prompt = "Analyze the data...",
tools = {done}
}
bedrock_reviewer = Agent {
provider = "bedrock",
model = "anthropic.claude-3-5-sonnet-20240620-v1:0",
system_prompt = "Review the analysis...",
tools = {done}
}
```
**Configure model-specific parameters:**
```lua
done = tactus.done
creative_writer = Agent {
provider = "openai",
model = {
name = "gpt-4o",
temperature = 0.9, -- Higher creativity
max_tokens = 2000
},
system_prompt = "Write creatively...",
tools = {done}
}
reasoning_agent = Agent {
provider = "openai",
model = {
name = "gpt-5", -- Reasoning model
openai_reasoning_effort = "high",
max_tokens = 4000
},
system_prompt = "Solve this complex problem...",
tools = {done}
}
```
**Configuration via `.tactus/config.yml`:**
```yaml
# OpenAI credentials
openai_api_key: sk-...
# AWS Bedrock credentials
aws_access_key_id: AKIA...
aws_secret_access_key: ...
aws_default_region: us-east-1
# Optional defaults
default_provider: openai
default_model: gpt-4o
```
### DSPy Integration
Tactus provides first-class support for **DSPy Modules and Signatures**, enabling you to build declarative, self-optimizing AI components directly within your agent workflows.
**Modules & Signatures:**
Instead of hand-tuning prompts, define what you want the model to do using typed signatures:
```lua
-- Configure the Language Model for DSPy
LM("openai/gpt-4o")
-- Define a module with a typed signature
summarizer = Module {
signature = "text -> summary",
strategy = "chain_of_thought" -- Use Chain of Thought reasoning
}
-- Or define complex signatures with specific fields
classifier = Module {
signature = Signature {
input = {
text = field.string{description = "The customer email to classify"}
},
output = {
category = field.string{description = "Support category (Billing, Tech, Sales)"},
priority = field.string{description = "Priority level (Low, High, Critical)"}
}
},
strategy = "predict"
}
```
**Using Modules:**
Modules are callable just like Agents or Tools:
```lua
Procedure {
function(input)
-- Call the module
local result = classifier({text = input.email})
if result.priority == "Critical" then
human_escalation({context = result})
else
auto_responder({category = result.category})
end
end
}
```
This brings the power of DSPy's programmable LLM interfaces into Tactus's durable, orchestrated environment.
### Asynchronous Execution
Tactus is built on **async I/O** from the ground up, making it ideal for LLM-based workflows where you spend most of your time waiting for API responses.
**Why async I/O matters for LLMs:**
- **Not multi-threading**: Async I/O uses a single thread with cooperative multitasking
- **Perfect for I/O-bound tasks**: While waiting for one LLM response, handle other requests
- **Efficient resource usage**: No thread overhead, minimal memory footprint
- **Natural for LLM workflows**: Most time is spent waiting for API calls, not computing
**Spawn async procedures:**
```lua
-- Start multiple research tasks in parallel
local handles = {}
for _, topic in ipairs(topics) do
handles[topic] = Procedure.spawn("researcher", {query = topic})
end
-- Wait for all to complete
Procedure.wait_all(handles)
-- Collect results
local results = {}
for topic, handle in pairs(handles) do
results[topic] = Procedure.result(handle)
end
```
**Check status and wait with timeout:**
```lua
local handle = Procedure.spawn("long_task", params)
-- Check status without blocking
local status = Procedure.status(handle)
if status.waiting_for_human then
notify_channel("Task waiting for approval")
end
-- Wait with timeout
local result = Procedure.wait(handle, {timeout = 300})
if not result then
Log.warn("Task timed out")
end
```
### Context Engineering
Tactus gives you fine-grained control over what each agent sees in the conversation history. This is crucial for multi-agent workflows where different agents need different perspectives.
**Message classification with `humanInteraction`:**
Every message has a classification that determines visibility:
- `INTERNAL`: Agent reasoning, hidden from humans
- `CHAT`: Normal human-AI conversation
- `NOTIFICATION`: Progress updates to humans
- `PENDING_APPROVAL`: Waiting for human approval
- `PENDING_INPUT`: Waiting for human input
- `PENDING_REVIEW`: Waiting for human review
**Filter conversation history per agent:**
```lua
done = tactus.done
search = mcp.brave_search.search
analyze = mcp.analyze.analyze
worker = Agent {
system_prompt = "Process the task...",
tools = {search, analyze, done},
-- Control what this agent sees
filter = {
class = "ComposedFilter",
chain = {
{
class = "TokenBudget",
max_tokens = 120000
},
{
class = "LimitToolResults",
count = 2 -- Only show last 2 tool results
}
}
}
}
```
**Manage session state programmatically:**
```lua
-- Inject context for the next turn
Session.inject_system("Focus on the security implications")
-- Access conversation history
local history = Session.history()
-- Clear history for a fresh start
Session.clear()
-- Save/load conversation state
Session.save_to_node(checkpoint_node)
Session.load_from_node(checkpoint_node)
```
**Why this matters:**
- **Token efficiency**: Keep context within model limits
- **Agent specialization**: Each agent sees only what's relevant to its role
- **Privacy**: Hide sensitive information from certain agents
- **Debugging**: Control visibility for testing and development
### Advanced HITL Patterns
Beyond the omnichannel HITL described earlier, Tactus provides detailed primitives for human oversight and collaboration. You can request approval, input, or review at any point in your workflow.
**Request approval before critical actions:**
```lua
local approved = Human.approve({
message = "Deploy to production?",
context = {environment = "prod", version = "2.1.0"},
timeout = 3600, -- seconds
default = false
})
if approved then
deploy_to_production()
else
Log.info("Deployment cancelled by operator")
end
```
**Request human input:**
```lua
local topic = Human.input({
message = "What topic should I research next?",
placeholder = "Enter a topic...",
timeout = nil -- wait forever
})
if topic then
Procedure.run("researcher", {query = topic})
end
```
**Request review of generated content:**
```lua
local review = Human.review({
message = "Please review this generated document",
artifact = generated_content,
artifact_type = "document",
options = {
{label = "Approve", type = "action"},
{label = "Reject", type = "cancel"},
{label = "Revise", type = "action"}
},
timeout = 86400 -- 24 hours
})
if review.decision == "Approve" then
publish(generated_content)
elseif review.decision == "Revise" then
State.set("human_feedback", review.feedback)
-- retry with feedback
end
```
**Declare HITL points for reusable workflows:**
```lua
hitl("confirm_publish", {
type = "approval",
message = "Publish this document to production?",
timeout = 3600,
default = false
})
```
Then reference them in your procedure:
```lua
local approved = Human.approve("confirm_publish")
```
**System Alerts:**
Send alerts to your monitoring infrastructure (Datadog, PagerDuty) directly from the workflow:
```lua
System.alert({
message = "Failure rate exceeded threshold",
level = "error", -- info, warning, error, critical
context = {
current_rate = 0.15,
threshold = 0.05
}
})
```
### Cost Tracking & Metrics
Tactus provides **comprehensive cost and performance tracking** for all LLM calls. Every agent interaction is monitored with detailed metrics, giving you complete visibility into costs, performance, and behavior.
**Real-time cost reporting:**
```
💰 Cost researcher: $0.000375 (250 tokens, gpt-4o-mini, 1.2s)
💰 Cost summarizer: $0.000750 (500 tokens, gpt-4o, 2.1s)
✓ Procedure completed: 2 iterations, 3 tools used
💰 Cost Summary
Total Cost: $0.001125
Total Tokens: 750
Per-call breakdown:
researcher: $0.000375 (250 tokens, 1.2s)
summarizer: $0.000750 (500 tokens, 2.1s)
```
**Comprehensive metrics tracked:**
- **Cost**: Prompt cost, completion cost, total cost (calculated from model pricing)
- **Tokens**: Prompt tokens, completion tokens, total tokens, cached tokens
- **Performance**: Duration, latency (time to first token)
- **Reliability**: Retry count, validation errors
- **Efficiency**: Cache hits, cache savings
- **Context**: Message count, new messages per turn
- **Metadata**: Request ID, model version, temperature, max tokens
**Visibility everywhere:**
- **CLI**: Real-time cost logging per call + summary at end
- **IDE**: Collapsible cost events with primary metrics visible, detailed metrics expandable
- **Tests**: Cost tracking during test runs
- **Evaluations**: Aggregate costs across multiple runs
**Collapsible IDE display:**
The IDE shows a clean summary by default (agent, cost, tokens, model, duration) with a single click to expand full details including cost breakdown, performance metrics, retry information, cache statistics, and request metadata.
This helps you:
- **Optimize costs**: Identify expensive agents and calls
- **Debug performance**: Track latency and duration issues
- **Monitor reliability**: See retry patterns and validation failures
- **Measure efficiency**: Track cache hit rates and savings
## Philosophy & Research
Tactus is built on the convergence of two critical insights: the necessity of **Self-Evolution** for future intelligence, and the requirement for **Bounded Control** in present-day production.
### 1. The Substrate for Self-Evolution
The path to Artificial Super Intelligence (ASI) lies in **Self-Evolving Agents**—systems that can adapt and improve their own components over time. A major 2025 survey, *[A Survey of Self-Evolving Agents](https://arxiv.org | text/markdown | null | Anthus <info@anthus.ai> | null | null | MIT | agents, ai, dsl, llm, lua, workflows | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: P... | [] | null | null | >=3.9 | [] | [] | [] | [
"antlr4-python3-runtime==4.13.1",
"behave>=1.2.6",
"biblicus>=1.0.0",
"boto3>=1.28.0",
"dotyaml>=0.1.4",
"dspy>=2.5",
"flask-cors>=4.0.0",
"flask>=3.0.0",
"gherkin-official>=28.0.0",
"h5py>=3.10",
"jinja2>=3.0",
"litellm>=1.81.5",
"logfire>=4.20.0",
"lupa>=2.6",
"markdown>=3.0",
"nanoi... | [] | [] | [] | [
"Homepage, https://github.com/AnthusAI/Tactus",
"Documentation, https://github.com/AnthusAI/Tactus/tree/main/docs",
"Repository, https://github.com/AnthusAI/Tactus",
"Issues, https://github.com/AnthusAI/Tactus/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T23:04:28.267519 | tactus-0.43.1.tar.gz | 549,323 | cd/fe/8c731fe70a411966a47c4e35dad91fd9b4e42f9caa3fa4fea8b867415a28/tactus-0.43.1.tar.gz | source | sdist | null | false | 62c65c865796bf758e129f7486686b4c | d2ce6f57826b90c3d0d7e9f4c7c1518969bdbcdb614a7388e7bbe83504731d37 | cdfe8c731fe70a411966a47c4e35dad91fd9b4e42f9caa3fa4fea8b867415a28 | null | [
"LICENSE"
] | 306 |
2.4 | deezer-python-gql | 0.1.0 | Async typed Python client for Deezer's Pipe GraphQL API. | # deezer-python-gql
Async typed Python client for Deezer's Pipe GraphQL API.
Built with [ariadne-codegen](https://github.com/mirumee/ariadne-codegen) — all
client methods and response models are generated from the GraphQL schema and
`.graphql` query files.
## Installation
```bash
uv add deezer-python-gql
```
## Quick Start
```python
import asyncio
from deezer_python_gql import DeezerGQLClient
async def main():
client = DeezerGQLClient(arl="YOUR_ARL_COOKIE")
# Current user
me = await client.get_me()
print(me)
# Track with media URLs, lyrics, and contributors
track = await client.get_track(track_id="3135556")
print(track.title, track.duration)
# Album with paginated track list
album = await client.get_album(album_id="302127")
print(album.display_title, album.tracks_count)
# Artist with top tracks and discography
artist = await client.get_artist(artist_id="27")
print(artist.name, artist.fans_count)
# Playlist with tracks
playlist = await client.get_playlist(playlist_id="53362031")
print(playlist.title, playlist.estimated_tracks_count)
# Unified search across all entity types
results = await client.search(query="Daft Punk")
print(len(results.tracks.edges), "tracks found")
asyncio.run(main())
```
## Available Queries
| Method | Description |
| --------------------------- | ------------------------------------------------------------- |
| `get_me()` | Current authenticated user |
| `get_track(track_id)` | Full track details — ISRC, media tokens, lyrics, contributors |
| `get_album(album_id)` | Album with cover, label, paginated tracks, fallback |
| `get_artist(artist_id)` | Artist with bio, top tracks, albums (ordered by release date) |
| `get_playlist(playlist_id)` | Playlist with owner, picture, paginated tracks |
| `search(query, ...)` | Unified search across tracks, albums, artists, playlists |
All methods return fully-typed Pydantic models generated from the GraphQL schema.
## Development
Requires **Python 3.12+** and [uv](https://docs.astral.sh/uv/).
```bash
# Install all dependencies (including codegen tooling)
make setup
# Re-generate the typed client from schema + queries
make generate
# Run linters and type checks
make lint
# Run tests
make test
```
### Adding a new query
1. Create a `.graphql` file in `queries/`.
2. Run `make generate` to produce the typed client method and response models.
3. Add tests in `tests/`.
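For instance, a minimal query file might look like this (the file name is hypothetical; the field names come from the explore examples below):

```graphql
# queries/get_track_title.graphql (hypothetical)
query GetTrackTitle($id: String!) {
  track(trackId: $id) {
    title
  }
}
```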
## Exploring the API
To run ad-hoc GraphQL queries against the live Pipe API during development:
1. Create a `.env` file (already gitignored) with your ARL cookie:
```bash
echo 'DEEZER_ARL=your_arl_cookie_value' > .env
```
2. Run queries:
```bash
# Run a .graphql file
uv run python scripts/explore.py queries/get_me.graphql
# Run an inline query
uv run python scripts/explore.py -q '{ me { id } }'
# With variables
uv run python scripts/explore.py -q 'query($id: String!) { track(trackId: $id) { title } }' \
-v '{"id": "3135556"}'
# Via make
make explore Q=queries/get_me.graphql
```
The script handles JWT auth automatically — no manual token management needed.
## Authentication
The Pipe API uses short-lived JWTs obtained from an ARL cookie. The base client
handles token acquisition and refresh automatically — you only need to supply a
valid ARL value.
## License
Apache-2.0
| text/markdown | null | Julian Daberkow <jdaberkow@users.noreply.github.com> | null | null | Apache-2.0 | null | [
"Environment :: Console",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [
"any"
] | null | null | >=3.12 | [] | [] | [] | [
"httpx>=0.27.0",
"pydantic>=2.0.0",
"codespell==2.4.1; extra == \"test\"",
"mypy==1.19.1; extra == \"test\"",
"pre-commit==4.5.1; extra == \"test\"",
"pre-commit-hooks==6.0.0; extra == \"test\"",
"pytest==9.0.2; extra == \"test\"",
"pytest-asyncio==1.3.0; extra == \"test\"",
"pytest-cov==7.0.0; extr... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:04:17.965877 | deezer_python_gql-0.1.0.tar.gz | 13,412 | 6b/46/ec3a1a2320d4ec70bc96e099f0a40e99cdae126b572eb807cce5192e6a36/deezer_python_gql-0.1.0.tar.gz | source | sdist | null | false | 66b5125b92e2975e54556b2a3957c5a3 | 6eca18e289da3373a1ce722f9c9797eaee1a75eff10300ee7f17a26406ad7313 | 6b46ec3a1a2320d4ec70bc96e099f0a40e99cdae126b572eb807cce5192e6a36 | null | [
"LICENSE"
] | 277 |
2.4 | langchain-text-splitters | 1.1.1 | LangChain text splitting utilities | # 🦜✂️ LangChain Text Splitters
[](https://pypi.org/project/langchain-text-splitters/#history)
[](https://opensource.org/licenses/MIT)
[](https://pypistats.org/packages/langchain-text-splitters)
[](https://x.com/langchain)
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
## Quick Install
```bash
pip install langchain-text-splitters
```
## 🤔 What is this?
LangChain Text Splitters contains utilities for splitting a wide variety of text documents into chunks.
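The core idea — fixed-size chunks with an overlap so context isn't lost at chunk boundaries — can be sketched in plain Python. This is an illustration of the concept only, not the library's implementation or API:

```python
def split_text(text, chunk_size=100, chunk_overlap=20):
    """Naive fixed-size splitter with overlap (concept sketch only)."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap  # how far the window advances each chunk
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks


text = "a" * 250
chunks = split_text(text, chunk_size=100, chunk_overlap=20)
print(len(chunks), [len(c) for c in chunks])  # 3 [100, 100, 90]
```

The real splitters are considerably smarter — they respect separators, token counts, and document structure — which is what this package provides.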
## 📖 Documentation
For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_text_splitters/).
## 📕 Releases & Versioning
See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.
We encourage pinning to a specific version to avoid breaking your CI when we publish new releases, and upgrading periodically to pick up the latest improvements.
Not pinning your version ensures you always get the latest release, but it may also break your CI if we introduce changes that your integration doesn't handle.
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming ... | [] | null | null | <4.0.0,>=3.10.0 | [] | [] | [] | [
"langchain-core<2.0.0,>=1.2.13"
] | [] | [] | [] | [
"Homepage, https://docs.langchain.com/",
"Documentation, https://docs.langchain.com/",
"Repository, https://github.com/langchain-ai/langchain",
"Issues, https://github.com/langchain-ai/langchain/issues",
"Changelog, https://github.com/langchain-ai/langchain/releases?q=%22langchain-text-splitters%22",
"Twi... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:02:42.798746 | langchain_text_splitters-1.1.1.tar.gz | 279,352 | 85/38/14121ead61e0e75f79c3a35e5148ac7c2fe754a55f76eab3eed573269524/langchain_text_splitters-1.1.1.tar.gz | source | sdist | null | false | 879b6b5a6018fb5e844faebcf82290a8 | 34861abe7c07d9e49d4dc852d0129e26b32738b60a74486853ec9b6d6a8e01d2 | 853814121ead61e0e75f79c3a35e5148ac7c2fe754a55f76eab3eed573269524 | null | [] | 1,140,359 |
2.4 | djust | 0.3.2 | Phoenix LiveView-style reactive components for Django with Rust-powered performance. Real-time UI updates over WebSocket, no JavaScript build step required. | <p align="center">
<img src="branding/logo/djust-wordmark-dark.png" alt="djust" width="300" />
</p>
<p align="center"><strong>Blazing fast reactive server-side rendering for Django, powered by Rust</strong></p>
djust brings Phoenix LiveView-style reactive components to Django, with performance that feels native. Write server-side Python code with automatic, instant client updates—no JavaScript bundling, no build step, no complexity.
🌐 **[djust.org](https://djust.org)** | 🚀 **[Quick Start](https://djust.org/quickstart/)** | 📝 **[Examples](https://djust.org/examples/)**
[](https://pypi.org/project/djust/)
[](https://github.com/djust-org/djust/actions/workflows/test.yml)
[](LICENSE)
[](https://www.python.org/downloads/)
[](https://www.djangoproject.com/)
[](https://pypi.org/project/djust/)
## ✨ Features
- ⚡ **10-100x Faster** - Rust-powered template engine and Virtual DOM diffing
- 🔄 **Reactive Components** - Phoenix LiveView-style server-side reactivity
- 🔌 **Django Compatible** - Works with existing Django templates and components
- 📦 **Zero Build Step** - ~29KB gzipped client JavaScript, no bundling needed
- 🌐 **WebSocket Updates** - Real-time DOM patches over WebSocket (with HTTP fallback)
- 🎯 **Minimal Client Code** - Smart diffing sends only what changed
- 🔒 **Type Safe** - Rust guarantees for core performance-critical code
- 🐞 **Developer Debug Panel** - Interactive debugging with event history and VDOM inspection
- 💤 **Lazy Hydration** - Defer WebSocket connections for below-fold content (20-40% memory savings)
- 🚀 **TurboNav Compatible** - Works seamlessly with Turbo-style client-side navigation
- 📱 **PWA Support** - Offline-first Progressive Web Apps with automatic sync
- 🏢 **Multi-Tenant Ready** - Production SaaS architecture with tenant isolation
- 🔐 **Authentication & Authorization** - View-level and handler-level auth with Django permissions integration
## 🎯 Quick Example
```python
from djust import LiveView, event_handler
class CounterView(LiveView):
template_string = """
<div>
<h1>Count: {{ count }}</h1>
<button dj-click="increment">+</button>
<button dj-click="decrement">-</button>
</div>
"""
def mount(self, request, **kwargs):
self.count = 0
@event_handler
def increment(self):
self.count += 1 # Automatically updates client!
@event_handler
def decrement(self):
self.count -= 1
```
That's it! No JavaScript needed. State changes automatically trigger minimal DOM updates.
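The "minimal DOM updates" idea can be sketched as a state diff: only the keys that changed travel over the wire. This is an illustration of the concept, not djust's actual Virtual DOM patch format:

```python
def diff_state(old, new):
    """Return only the keys whose values changed (concept sketch, not djust's wire format)."""
    return {k: v for k, v in new.items() if old.get(k) != v}


# After `increment` runs on the server, only `count` changed,
# so only `count` needs to be sent to the browser.
old = {"count": 0, "title": "Counter"}
new = {"count": 1, "title": "Counter"}
patch = diff_state(old, new)
print(patch)  # {'count': 1}
```

djust applies the same principle at the rendered-DOM level, diffing the Virtual DOM in Rust and shipping only the resulting patches over WebSocket.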
## 📊 Performance
Benchmarked on M1 MacBook Pro (2021):
| Operation | Django | djust | Speedup |
|-----------|---------|-------|---------|
| Template Rendering (100 items) | 2.5 ms | 0.15 ms | **16.7x** |
| Large List (10k items) | 450 ms | 12 ms | **37.5x** |
| Virtual DOM Diff | N/A | 0.08 ms | **Sub-ms** |
| Round-trip Update | 50 ms | 5 ms | **10x** |
Run benchmarks yourself:
```bash
cd benchmarks
python benchmark.py
```
## 🚀 Installation
### Prerequisites
- Python 3.8+
- Rust 1.70+ (for building from source)
- Django 3.2+
### Install from PyPI
```bash
pip install djust
```
### Build from Source
#### Using Make (Easiest - Recommended for Development)
```bash
# Clone the repository
git clone https://github.com/djust-org/djust.git
cd djust
# Install Rust (if needed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Install everything and build
make install
# Start the development server
make start
# See all available commands
make help
```
**Common Make Commands:**
- `make start` - Start development server with hot reload
- `make stop` - Stop the development server
- `make status` - Check if server is running
- `make test` - Run all tests
- `make clean` - Clean build artifacts
- `make help` - Show all available commands
#### Using uv (Fast)
```bash
# Clone the repository
git clone https://github.com/djust-org/djust.git
cd djust
# Install Rust (if needed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Install uv (if needed)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create virtual environment and install dependencies
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install maturin and build
uv pip install maturin
maturin develop --release
```
#### Using pip
```bash
# Clone the repository
git clone https://github.com/djust-org/djust.git
cd djust
# Install Rust (if needed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install maturin
pip install maturin
# Build and install
maturin develop --release
# Or build wheel
maturin build --release
pip install target/wheels/djust-*.whl
```
## 📖 Documentation
### Setup
1. Add to `INSTALLED_APPS`:
```python
INSTALLED_APPS = [
    # ...
    'channels',  # Required for WebSocket support
    'djust',
    # ...
]
```
2. Configure ASGI application (`asgi.py`):
```python
import os

from django.core.asgi import get_asgi_application
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.auth import AuthMiddlewareStack
from django.urls import path

from djust.websocket import LiveViewConsumer

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

application = ProtocolTypeRouter({
    "http": get_asgi_application(),
    "websocket": AuthMiddlewareStack(
        URLRouter([
            path('ws/live/', LiveViewConsumer.as_asgi()),
        ])
    ),
})
```
3. Add to `settings.py`:
```python
ASGI_APPLICATION = 'myproject.asgi.application'

CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels.layers.InMemoryChannelLayer'
    }
}
```
### Creating LiveViews
#### Class-Based LiveView
```python
from djust import LiveView, event_handler


class TodoListView(LiveView):
    template_name = 'todos.html'  # Or use template_string

    def mount(self, request, **kwargs):
        """Called when view is first loaded"""
        self.todos = []

    @event_handler
    def add_todo(self, text):
        """Event handler - called from client"""
        self.todos.append({'text': text, 'done': False})

    @event_handler
    def toggle_todo(self, index):
        self.todos[index]['done'] = not self.todos[index]['done']
```
#### Function-Based LiveView
```python
from djust import live_view


@live_view(template_name='counter.html')
def counter_view(request):
    count = 0

    def increment():
        nonlocal count
        count += 1

    return locals()  # Returns all local variables as context
```
### Template Syntax
djust supports Django template syntax with event binding:
```html
<!-- Variables -->
<h1>{{ title }}</h1>
<!-- Filters (all 57 Django built-in filters supported) -->
<p>{{ text|upper }}</p>
<p>{{ description|truncatewords:20 }}</p>
<a href="?q={{ query|urlencode }}">Search</a>
{{ body|urlize|safe }}
<!-- Control flow -->
{% if show %}
<div>Visible</div>
{% endif %}
{% if count > 10 %}
<div>Many items!</div>
{% endif %}
{% for item in items %}
<li>{{ item }}</li>
{% endfor %}
<!-- URL resolution -->
<a href="{% url 'myapp:detail' pk=item.id %}">View</a>
<!-- Template includes -->
{% include "partials/header.html" %}
<!-- Event binding -->
<button dj-click="increment">Click me</button>
<input dj-input="on_search" type="text" />
<form dj-submit="submit_form">
<input name="email" />
<button type="submit">Submit</button>
</form>
```
### Supported Events
- `dj-click` - Click events
- `dj-input` - Input events (passes `value`)
- `dj-change` - Change events (passes `value`)
- `dj-submit` - Form submission (passes form data as dict)
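On the server side, each `dj-*` attribute names a handler method on the view. A minimal pure-Python sketch of name-based dispatch (illustrative only, not djust's actual implementation; `View`, `dispatch`, and the payload shape are invented for this example):

```python
class View:
    def __init__(self):
        self.search_query = ""

    def on_search(self, value=""):
        # Receives the payload a dj-input event would carry
        self.search_query = value


def dispatch(view, event_name, payload):
    # dj-input="on_search" arrives as an event name plus a payload dict;
    # the server resolves the handler by name and calls it with the payload.
    handler = getattr(view, event_name, None)
    if handler is None:
        raise ValueError(f"Unknown event handler: {event_name}")
    return handler(**payload)


view = View()
dispatch(view, "on_search", {"value": "rust"})
print(view.search_query)  # rust
```

The real framework additionally checks that the target method is decorated with `@event_handler` before invoking it.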
### Reusable Components
djust provides a powerful component system with automatic state management and stable component IDs.
#### Basic Component Example
```python
from djust.components import AlertComponent


class MyView(LiveView):
    def mount(self, request):
        # Components get automatic IDs based on attribute names
        self.alert_success = AlertComponent(
            message="Operation successful!",
            type="success",
            dismissible=True
        )
        # component_id automatically becomes "alert_success"
```
#### Component ID Management
Components automatically receive a stable `component_id` based on their **attribute name** in your view. This eliminates manual ID management:
```python
# When you write:
self.alert_success = AlertComponent(message="Success!")
# The framework automatically:
# 1. Sets component.component_id = "alert_success"
# 2. Persists this ID across renders and events
# 3. Uses it in HTML: data-component-id="alert_success"
# 4. Routes events back to the correct component
```
**Why it works:**
- The attribute name (`alert_success`) is already unique within your view
- It's stable across re-renders and WebSocket reconnections
- Event handlers can reference components by their attribute names
- No manual ID strings to keep in sync
**Event Routing Example:**
```python
class MyView(LiveView):
    def mount(self, request):
        self.alert_warning = AlertComponent(
            message="Warning message",
            dismissible=True
        )

    @event_handler
    def dismiss(self, component_id: str = None):
        """Handle dismissal - automatically routes to correct component"""
        if component_id and hasattr(self, component_id):
            component = getattr(self, component_id)
            if hasattr(component, 'dismiss'):
                component.dismiss()  # component_id="alert_warning"
```
When the dismiss button is clicked, the client sends `component_id="alert_warning"`, and the handler uses `getattr(self, "alert_warning")` to find the component.
#### Creating Custom Components
```python
from djust import Component, register_component, event_handler


@register_component('my-button')
class Button(Component):
    template = '<button dj-click="on_click">{{ label }}</button>'

    def __init__(self, label="Click"):
        super().__init__()
        self.label = label
        self.clicks = 0

    @event_handler
    def on_click(self):
        self.clicks += 1
        print(f"Clicked {self.clicks} times!")
```
### Decorators
```python
from djust import LiveView, event_handler, reactive


class MyView(LiveView):
    @event_handler
    def handle_click(self):
        """Marks method as event handler"""
        pass

    @reactive
    def count(self):
        """Reactive property - auto-triggers updates"""
        return self._count

    @count.setter
    def count(self, value):
        self._count = value
```
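Under the hood, a reactive property only needs to intercept assignment and notify the framework. A hypothetical pure-Python sketch of that idea using a descriptor (the `Reactive` class and `schedule_render` hook are invented for illustration; djust's actual `@reactive` may work differently):

```python
class Reactive:
    """Descriptor that triggers a re-render whenever the value changes."""

    def __init__(self, name):
        self.name = "_" + name  # backing attribute on the instance

    def __get__(self, obj, objtype=None):
        return getattr(obj, self.name, None)

    def __set__(self, obj, value):
        setattr(obj, self.name, value)
        obj.schedule_render()  # notify the framework that state changed


class View:
    count = Reactive("count")

    def __init__(self):
        self.renders = 0

    def schedule_render(self):
        # Stand-in for diffing and pushing a patch to the client
        self.renders += 1


v = View()
v.count = 1
v.count = 2
print(v.count, v.renders)  # 2 2
```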
### Configuration
Configure djust in your Django `settings.py`:
```python
LIVEVIEW_CONFIG = {
    # Transport mode
    'use_websocket': True,  # Set to False for HTTP-only mode (no WebSocket dependency)

    # Debug settings
    'debug_vdom': False,  # Enable detailed VDOM patch logging (for troubleshooting)

    # CSS Framework
    'css_framework': 'bootstrap5',  # Options: 'bootstrap5', 'tailwind', None
}
```
**Common Configuration Options:**
| Option | Default | Description |
|--------|---------|-------------|
| `use_websocket` | `True` | Use WebSocket transport (requires Django Channels) |
| `debug_vdom` | `False` | Enable detailed VDOM debugging logs |
| `css_framework` | `'bootstrap5'` | CSS framework for components |
**CSS Framework Setup:**
For Tailwind CSS (recommended), use the one-command setup:
```bash
python manage.py djust_setup_css tailwind
```
This auto-detects template directories, creates config files, and builds your CSS. For production:
```bash
python manage.py djust_setup_css tailwind --minify
```
See the [CSS Framework Guide](docs/guides/css-frameworks.md) for detailed setup instructions, Bootstrap configuration, and CI/CD integration.
**Debug Mode:**
When troubleshooting VDOM issues, enable debug logging:
```python
# In settings.py
LIVEVIEW_CONFIG = {
'debug_vdom': True,
}
# Or programmatically
from djust.config import config
config.set('debug_vdom', True)
```
This will log:
- Server-side: Patch generation details (stderr)
- Client-side: Patch application and DOM traversal (browser console)
### State Management
djust provides Python-only state management decorators that eliminate the need for manual JavaScript.
#### 🚀 Quick Start (5 minutes)
Build a debounced search in **a few lines of Python** (no JavaScript):
```python
from djust import LiveView
from djust.decorators import debounce


class ProductSearchView(LiveView):
    template_string = """
    <input dj-input="search" placeholder="Search products..." />
    <div>{% for p in results %}<div>{{ p.name }}</div>{% endfor %}</div>
    """

    def mount(self, request):
        self.results = []

    @debounce(wait=0.5)  # Wait 500ms after typing stops
    def search(self, value: str = "", **kwargs):
        self.results = Product.objects.filter(name__icontains=value)[:10]
```
**That's it!** The server only queries after you stop typing. Add `@optimistic` for instant UI updates, or `@cache(ttl=300)` to cache responses for 5 minutes.
**👉 [Full Quick Start Guide (5 min)](docs/STATE_MANAGEMENT_QUICKSTART.md)**
---
#### Key Features
- ✅ **Zero JavaScript Required** - Common patterns work without writing any JS
- ✅ **87% Code Reduction** - Decorators replace hundreds of lines of manual JavaScript
- ✅ **Lightweight Bundle** - ~29KB gzipped client.js (vs Livewire ~50KB)
- ✅ **Competitive DX** - Matches Phoenix LiveView and Laravel Livewire developer experience
#### Available Decorators
| Decorator | Use When | Example |
|-----------|----------|---------|
| `@debounce(wait)` | User is typing | Search, autosave |
| `@throttle(interval)` | Rapid events | Scroll, resize |
| `@optimistic` | Instant feedback | Counter, toggle |
| `@cache(ttl, key_params)` | Repeated queries | Autocomplete |
| `@client_state(keys)` | Multi-component | Dashboard filters |
| `DraftModeMixin` | Auto-save forms | Contact form |
**Quick Decision Matrix:**
- Typing in input? → `@debounce(0.5)`
- Scrolling/resizing? → `@throttle(0.1)`
- Need instant UI update? → `@optimistic`
- Same query multiple times? → `@cache(ttl)`
- Multiple components? → `@client_state([keys])`
- Auto-save forms? → `DraftModeMixin`
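To make the debounce trade-off concrete, here is a minimal trailing-edge debounce in pure Python: the wrapped function runs only after calls go quiet for the wait period. This is an illustrative sketch, not djust's implementation (which coalesces events before they ever reach your handler):

```python
import threading
import time


def debounce(wait):
    """Trailing-edge debounce: run fn only after `wait` seconds of silence."""
    def decorator(fn):
        timer = [None]  # closure cell holding the pending timer

        def wrapper(*args, **kwargs):
            if timer[0] is not None:
                timer[0].cancel()  # restart the quiet-period countdown
            timer[0] = threading.Timer(wait, fn, args, kwargs)
            timer[0].start()

        return wrapper
    return decorator


results = []


@debounce(wait=0.05)
def search(query):
    results.append(query)


# Three rapid calls: only the last one fires, after 50 ms of silence.
search("a")
search("ab")
search("abc")
time.sleep(0.15)
print(results)  # ['abc']
```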
#### Learn More
- 🚀 **[Quick Start (5 min)](docs/STATE_MANAGEMENT_QUICKSTART.md)** - Get productive fast
- 📚 **[Full Tutorial (20 min)](docs/STATE_MANAGEMENT_TUTORIAL.md)** - Step-by-step Product Search
- 📖 **[API Reference](docs/STATE_MANAGEMENT_API.md)** - Complete decorator docs + cheat sheet
- 🎯 **[Examples](docs/STATE_MANAGEMENT_EXAMPLES.md)** - Copy-paste ready code
- 🔄 **[Migration Guide](docs/STATE_MANAGEMENT_MIGRATION.md)** - Convert JavaScript to Python
- ⚖️ **[Framework Comparison](docs/STATE_MANAGEMENT_COMPARISON.md)** - vs Phoenix LiveView & Laravel Livewire
### 🧭 Navigation Patterns
djust provides three navigation mechanisms for building multi-view applications without full page reloads:
#### When to Use What
| Scenario | Use | Why |
|----------|-----|-----|
| Filter/sort/paginate within same view | `dj-patch` / `live_patch()` | No remount, URL stays bookmarkable |
| Navigate to a different LiveView | `dj-navigate` / `live_redirect()` | Same WebSocket, no page reload |
| Link to non-LiveView page | Standard `<a href>` | Full page load needed |
#### ⚠️ Anti-Pattern: Don't Use `dj-click` for Navigation
This is one of the most common mistakes when building multi-view djust apps:
**❌ Wrong** — using `dj-click` to trigger a handler that calls `live_redirect()`:
```python
# Anti-pattern: Don't use dj-click for navigation
@event_handler()
def go_to_item(self, item_id, **kwargs):
    self.live_redirect(f"/items/{item_id}/")
```
```html
<!-- Wrong: Forces a round-trip through WebSocket for navigation -->
<button dj-click="go_to_item" dj-value-item_id="{{ item.id }}">View</button>
```
**✅ Right** — using `dj-navigate` directly:
```html
<!-- Right: Client navigates directly, no extra server round-trip -->
<a dj-navigate="/items/{{ item.id }}/">View Item</a>
```
**Why it matters:** `dj-click` sends a WebSocket message to the server, the server processes the handler, then sends back a navigate command — an unnecessary round-trip. `dj-navigate` handles navigation client-side immediately, making it faster and more efficient.
**Exception:** Use `live_redirect()` in a handler when navigation is *conditional* (e.g., redirect after form validation):
```python
@event_handler()
def submit_form(self, **kwargs):
    if self.form.is_valid():
        self.form.save()
        self.live_redirect("/success/")  # OK: Conditional navigation
    else:
        # Stay on form to show errors
        pass
```
#### Quick Example: Multi-View App
```python
from djust import LiveView
from djust.mixins.navigation import NavigationMixin
from djust.decorators import event_handler


class ProductListView(NavigationMixin, LiveView):
    template_string = """
    <!-- Filter within same view: use dj-patch -->
    <a dj-patch="?category=electronics">Electronics</a>
    <a dj-patch="?category=books">Books</a>

    <div>
        {% for product in products %}
            <!-- Navigate to different view: use dj-navigate -->
            <a dj-navigate="/products/{{ product.id }}/">{{ product.name }}</a>
        {% endfor %}
    </div>
    """

    def mount(self, request, **kwargs):
        self.category = "all"
        self.products = []

    def handle_params(self, params, uri):
        """Called when URL changes via dj-patch or browser back/forward"""
        self.category = params.get("category", "all")
        self.products = Product.objects.filter(category=self.category)
```
**Learn More:**
- 📖 **[Navigation Guide](docs/guides/navigation.md)** - Complete API reference (`live_patch()`, `live_redirect()`, `handle_params()`)
### Developer Tooling
#### Debug Panel
Interactive debugging tool for LiveView development (DEBUG mode only):
```python
# In settings.py
DEBUG = True # Debug panel automatically enabled
```
**Open**: Press `Ctrl+Shift+D` (Windows/Linux) or `Cmd+Shift+D` (Mac), or click the 🐞 floating button
**Features**:
- 🔍 **Event Handlers** - Discover all handlers with parameters, types, and descriptions
- 📊 **Event History** - Real-time log with timing metrics (e.g., `search • 45.2ms`)
- ⚡ **VDOM Patches** - Monitor DOM updates with sub-millisecond precision
- 🔬 **Variables** - Inspect current view state
**Learn More**:
- 📖 **[Debug Panel Guide](docs/DEBUG_PANEL.md)** - Complete user guide
- 📝 **[Event Handler Best Practices](docs/EVENT_HANDLERS.md)** - Patterns and conventions
#### Event Handlers
Always use `@event_handler` decorator for auto-discovery and validation:
```python
from djust.decorators import event_handler


@event_handler()
def search(self, value: str = "", **kwargs):
    """Search handler - description shown in debug panel"""
    self.search_query = value
```
**Parameter Convention**: Use `value` for form inputs (`dj-input`, `dj-change` events):
```python
# ✅ Correct - matches what form events send
@event_handler()
def search(self, value: str = "", **kwargs):
    self.search_query = value


# ❌ Wrong - won't receive input value
@event_handler()
def search(self, query: str = "", **kwargs):
    self.search_query = query  # Always "" (default)
```
## 🏗️ Architecture
```
┌─────────────────────────────────────────────┐
│ Browser │
│ ├── Client.js (~29KB gz) - Events & DOM │
│ └── WebSocket Connection │
└─────────────────────────────────────────────┘
↕️ WebSocket (Binary/JSON)
┌─────────────────────────────────────────────┐
│ Django + Channels (Python) │
│ ├── LiveView Classes │
│ ├── Event Handlers │
│ └── State Management │
└─────────────────────────────────────────────┘
↕️ Python/Rust FFI (PyO3)
┌─────────────────────────────────────────────┐
│ Rust Core (Native Speed) │
│ ├── Template Engine (<1ms) │
│ ├── Virtual DOM Diffing (<100μs) │
│ ├── HTML Parser │
│ └── Binary Serialization (MessagePack) │
└─────────────────────────────────────────────┘
```
## 🎨 Examples
See the [examples/demo_project](examples/demo_project) directory for complete working examples:
- **Counter** - Simple reactive counter
- **Todo List** - CRUD operations with lists
- **Chat** - Real-time messaging
Run the demo:
```bash
cd examples/demo_project
pip install -r requirements.txt
python manage.py migrate
python manage.py runserver
```
Visit http://localhost:8000
## 🔧 Development
### Project Structure
```
djust/
├── crates/
│ ├── djust_core/ # Core types & utilities
│ ├── djust_templates/ # Template engine
│ ├── djust_vdom/ # Virtual DOM & diffing
│ ├── djust_components/ # Reusable component library
│ └── djust_live/ # Main PyO3 bindings
├── python/
│ └── djust/ # Python package
│ ├── live_view.py # LiveView base class
│ ├── component.py # Component system
│ ├── websocket.py # WebSocket consumer
│ └── static/
│ └── client.js # Client runtime
├── branding/ # Logo and brand assets
├── examples/ # Example projects
├── benchmarks/ # Performance benchmarks
└── tests/ # Tests
```
### Running Tests
```bash
# All tests (Python + Rust + JavaScript)
make test
# Individual test suites
make test-python # Python tests
make test-rust # Rust tests
make test-js # JavaScript tests
# Specific tests
pytest tests/unit/test_live_view.py
cargo test --workspace --exclude djust_live
```
For comprehensive testing documentation, see **[Testing Guide](docs/TESTING.md)**.
### Building Documentation
```bash
cargo doc --open
```
## 💰 Supporting djust
djust is open source (MIT licensed) and free forever. If you're using djust in production or want to support development:
- ⭐ **Star this repo** - Help others discover djust
- 💜 **[GitHub Sponsors](https://github.com/sponsors/djust-org)** - Monthly support from $5/month
Your support helps us maintain and improve djust for everyone!
## 🤝 Contributing
Contributions welcome! Please read [CONTRIBUTING.md](CONTRIBUTING.md) first.
Areas we'd love help with:
- More example applications
- Performance optimizations
- Documentation improvements
- Browser compatibility testing
## 📝 Roadmap
- [x] Template inheritance (`{% extends %}`)
- [x] `{% url %}` and `{% include %}` tags
- [x] Comparison operators in `{% if %}` tags
- [x] All 57 Django built-in template filters
- [x] Security hardening (WebSocket origin validation, HMAC signing, rate limiting)
- [x] Developer debug panel with event history and VDOM inspection
- [x] Reusable component library (`djust_components` crate)
- [x] JIT pipeline improvements and stale-closure fixes
- [x] Authentication & authorization (view-level + handler-level)
- [ ] File upload handling
- [ ] Server-sent events (SSE) fallback
- [ ] React/Vue component compatibility
- [ ] TypeScript definitions
- [ ] Redis-backed session storage
- [ ] Horizontal scaling support
## 🔒 Security
- CSRF protection via Django middleware
- XSS protection via automatic template escaping (Rust engine escapes all variables by default)
- HTML-producing filters (`urlize`, `unordered_list`) handle their own escaping internally
- WebSocket authentication via Django sessions
- WebSocket origin validation and HMAC message signing (v0.2.1)
- Per-view and global rate limiting support
- Configurable allowed origins for WebSocket connections
- View-level auth enforcement (`login_required`, `permission_required`) before `mount()`
- Handler-level `@permission_required` for protecting individual event handlers
- `djust_audit` command and `djust.S005` system check for auth posture visibility
Report security issues to: security@djust.org
## 📄 License
MIT License - see [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Inspired by [Phoenix LiveView](https://hexdocs.pm/phoenix_live_view/)
- Built with [PyO3](https://pyo3.rs/) for Python/Rust interop
- Uses [html5ever](https://github.com/servo/html5ever) for HTML parsing
- Powered by the amazing Rust and Django communities
## 💬 Community & Support
- 🌐 **[djust.org](https://djust.org)** - Official website
- 🚀 **[Quick Start](https://djust.org/quickstart/)** - Get started in minutes
- 📝 **[Examples](https://djust.org/examples/)** - Live code examples
- 🐛 **[Issues](https://github.com/djust-org/djust/issues)** - Bug reports & feature requests
- 📧 **Email**: support@djust.org
---
**[djust.org](https://djust.org)** — Made with ❤️ by the djust community
| text/markdown; charset=UTF-8; variant=GFM | djust contributors | null | null | null | MIT | django, rust, reactive, server-side-rendering, websocket | [
"Development Status :: 3 - Alpha",
"Framework :: Django",
"Framework :: Django :: 3.2",
"Framework :: Django :: 4.0",
"Framework :: Django :: 4.1",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming ... | [] | null | null | >=3.8 | [] | [] | [] | [
"django>=3.2",
"channels[daphne]>=4.0.0",
"msgpack>=1.0.0",
"zstandard>=0.22.0; extra == \"compression\"",
"maturin<2.0,>=1.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-django>=4.5; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest-xdist>=3.0; extra == \"dev\"",
... | [] | [] | [] | [
"Changelog, https://github.com/djust-org/djust/blob/main/CHANGELOG.md",
"Documentation, https://djust.org/quickstart/",
"Homepage, https://djust.org",
"Issues, https://github.com/djust-org/djust/issues",
"Repository, https://github.com/djust-org/djust"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:02:17.404994 | djust-0.3.2.tar.gz | 2,706,022 | c3/b6/3b7c26b480d27320f3f1b9e19754b6f79b48c2d09abaf7ec8fcb59e5644d/djust-0.3.2.tar.gz | source | sdist | null | false | 817950c9527e4e81ac92cbc165816b97 | cce2b12e14db117c6f04d3d95b4ddc7060e6b345504458c006888b92cb1d97c7 | c3b63b7c26b480d27320f3f1b9e19754b6f79b48c2d09abaf7ec8fcb59e5644d | null | [
"LICENSE"
] | 941 |
2.4 | atoms-vscode-runtime | 0.1.0 | Atoms VS Code Python runtime and MCP server for atomic structure analysis | # Atoms VS Code
**Visualize, analyze, and optimize atomic structures directly inside VS Code.**
Atoms VS Code brings a full-featured atomic structure workbench into your editor — interactive 3D visualization with 3Dmol.js and Speck renderers, MACE-powered energy calculations and geometry optimizations, structural analysis (bonds, angles, RDF, coordination), frame-by-frame trajectory navigation, and an MCP server that lets GitHub Copilot and other LLM agents operate on your structures programmatically.
**Developer:** Sandip De
**License:** Apache 2.0
---
## Features
### 3D Structure Visualization
- **Dual rendering engines** — 3Dmol.js (ball-and-stick, stick, sphere styles with adjustable ball scale) and Speck (space-filling with ambient occlusion, depth of field, outline, and custom atom/bond scaling)
- **Unit cell display** — toggle wireframe unit cell overlay
- **Supercell replication** — expand structures up to 8×8×8 along a, b, c axes
- **Fullscreen mode** — immersive viewer with overlay controls for frame navigation, energy plots, and playback
### Trajectory & Frame Navigation
- Frame slider with numeric input and animated playback (configurable 100–5000 ms speed)
- Dual-range slider to select and load specific frame windows
- Configurable default frame window (last N frames)
- Color-coded frame markers for optimization jobs (running, done, unfinished, crashed)
### Structural Analysis (6 tabs)
| Tab | Description |
|-----|-------------|
| **⚡ Energy** | Energy vs. frame plot (Plotly interactive), MACE single-point calculations with progress bar, resume from last computed, chunk size control, cancel, click-to-select frames, lasso/box selection, navigate & copy selections |
| **🧪 Optimize** | Geometry optimization — model selector, algorithm (BFGS / LBFGS / FIRE), fmax, max steps, cell optimization, dispersion D3, CuEq; live progress via JSONL polling; relaxation energy plot; step-by-step replay; multi-model overlay comparison; fullscreen relaxation viewer |
| **📋 Summary** | Formula, atom count, composition, cell lengths & angles, volume, density, PBC |
| **📈 RDF** | Radial distribution function with per-pair curves, configurable r_max and bins |
| **📊 Bond Analysis** | Bond distance distributions per element pair, statistics (count, mean, min, max, std) |
| **📐 Angle Analysis** | Bond angle distributions per triplet, statistics table |
### MACE Energy & Optimization
- Single-point MACE energy calculations across frame ranges
- Geometry optimization with BFGS, LBFGS, or FIRE
- Cell optimization via UnitCellFilter
- D3 dispersion corrections
- CuEq GPU acceleration mode
- Multi-model management — auto-discover `.model` files from configured directories, user-defined aliases
- Persistent sidecar caching — energy and optimization results stored alongside structure files, shared between UI and MCP server
- Resume tighter convergence runs from previous results
- Graceful cancellation via sentinel files
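The sidecar-cache idea from the list above can be sketched in a few lines: computed results live in a small JSON file next to the structure file, so any process (extension UI or MCP server) can read and update them. The file name and JSON layout below are invented for illustration; the extension's actual cache format may differ:

```python
import json
import tempfile
from pathlib import Path


def sidecar_path(structure_file: Path) -> Path:
    # e.g. slab.traj -> slab.traj.energies.json (illustrative naming only)
    return structure_file.with_name(structure_file.name + ".energies.json")


def save_energy(structure_file: Path, frame: int, energy: float) -> None:
    path = sidecar_path(structure_file)
    cache = json.loads(path.read_text()) if path.exists() else {}
    cache[str(frame)] = energy  # JSON object keys must be strings
    path.write_text(json.dumps(cache))


def load_energy(structure_file: Path, frame: int):
    path = sidecar_path(structure_file)
    if not path.exists():
        return None
    return json.loads(path.read_text()).get(str(frame))


with tempfile.TemporaryDirectory() as tmp:
    traj = Path(tmp) / "slab.traj"
    save_energy(traj, 0, -123.45)
    assert load_energy(traj, 0) == -123.45
    assert load_energy(traj, 1) is None  # not computed yet
```

Because the cache is keyed by frame index and stored beside the data, a result computed via the MCP server is immediately visible to the webview on its next poll.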
### MCP Server (11 tools for LLM agents)
An integrated MCP server lets GitHub Copilot and other LLM-based agents interact with your structures:
| Tool | Description |
|------|-------------|
| `load_structure` | Load any ASE-supported structure file |
| `get_structure_summary_tool` | Formula, composition, cell, volume, density |
| `analyze_structure` | Bond / angle / RDF / coordination analysis |
| `calculate_energies` | MACE single-point energies for a frame range |
| `optimize_structure` | Geometry optimization with MACE |
| `cancel_optimization` | Cancel a running optimization |
| `list_models` | List discovered MACE model files |
| `get_frame_data` | Retrieve XYZ + cell vectors for a frame |
| `list_frames` | List loaded frames and energy availability |
| `export_frame_xyz` | Export a frame as XYZ text |
| `set_viewer_mode` | Switch 3D viewer (3dmol / speck) |
The MCP server shares cache files with the extension UI, so energies and optimizations computed via Copilot appear instantly in the webview and vice versa.
### Additional Features
- **Auto-load** — automatically opens the structure viewer when you open a supported file in the explorer or editor
- **Tab management** — open each structure in its own tab, or reuse the existing panel
- **Context menu** — right-click structure files in the Explorer to load them directly
- **Unfinished job resume** — detects optimization jobs from previous sessions and offers to resume them
- **Dependency check** — prompts to install missing Python packages on first activation
- **Smart Python resolution** — checks `ATOMS_VSCODE_PYTHON` env var → `atomsVscode.pythonPath` setting → conda/virtualenv → workspace `.venv` → Python extension interpreter → fallback `python`
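The resolution chain above is a first-match-wins lookup. A simplified sketch (the real extension also probes conda/virtualenv environments and the Python extension's interpreter API, which are omitted here; `resolve_python` and its parameters are invented for this example):

```python
import os
import shutil


def resolve_python(setting: str = "", workspace: str = ".") -> str:
    """First-match-wins interpreter lookup (simplified sketch)."""
    candidates = [
        os.environ.get("ATOMS_VSCODE_PYTHON"),               # 1. env var
        setting or None,                                     # 2. atomsVscode.pythonPath
        os.path.join(workspace, ".venv", "bin", "python"),   # 3. workspace .venv
    ]
    for candidate in candidates:
        # Accept the first candidate that exists or is on PATH
        if candidate and (os.path.exists(candidate) or shutil.which(candidate)):
            return candidate
    return "python"  # 4. last-resort fallback


print(resolve_python(setting="/usr/bin/python3"))
```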
---
## Supported File Formats
`.traj` · `.xyz` · `.extxyz` · `.db` · `.cif` · `.poscar` · `.vasp` · `.pdb` · `.gen` · `.xsd` · `.res` · `.cell` · `.cfg` · `.lammps` · `.data` — plus **any format supported by ASE's `ase.io.read()`**
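Of these, `.xyz` is simple enough to illustrate: an atom-count line, a free-form comment line, then one `symbol x y z` row per atom. A minimal pure-Python reader for single-frame files (for illustration only; the extension delegates all parsing to ASE's `ase.io.read()`):

```python
def read_xyz(text: str):
    """Parse a single-frame XYZ string into (symbols, positions)."""
    lines = text.strip().splitlines()
    natoms = int(lines[0])  # line 1: atom count
    # line 2 is a free-form comment; atom rows follow
    symbols, positions = [], []
    for line in lines[2:2 + natoms]:
        sym, x, y, z = line.split()[:4]
        symbols.append(sym)
        positions.append((float(x), float(y), float(z)))
    return symbols, positions


water = """3
water molecule
O 0.000 0.000 0.000
H 0.757 0.586 0.000
H -0.757 0.586 0.000
"""
symbols, positions = read_xyz(water)
print(symbols)  # ['O', 'H', 'H']
```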
---
## Installation
### Prerequisites
- **VS Code** ≥ 1.109.0
- **Python** ≥ 3.9
- **Node.js** and **npm** (for building from source)
### Python Dependencies
Install into your Python environment:
```bash
pip install ase numpy scipy pandas plotly fastmcp
```
For MACE energy/optimization features:
```bash
pip install mace-torch
```
### Install from Source
```bash
# Clone the repository
git clone <repo-url> atoms-vscode
cd atoms-vscode
# Install the Python runtime package
pip install -e .
# Build the VS Code extension
npm --prefix vscode-extension install
npm --prefix vscode-extension run compile
```
### Install via VSIX
```bash
cd vscode-extension
npm install
npm run package
# Install the generated .vsix via VS Code: Extensions → ⋯ → Install from VSIX
```
---
## Usage
### Quick Start
1. Open VS Code in your project directory.
2. Open any supported structure file (`.traj`, `.xyz`, `.cif`, etc.) — the Atoms viewer opens automatically.
3. Or run **Atoms: Open Structure Analysis** from the Command Palette (`Ctrl+Shift+P`).
4. Right-click any structure file in the Explorer → **Atoms: Load Structure by Path**.
### Commands
| Command | Description |
|---------|-------------|
| `Atoms: Open Structure Analysis` | Open the Atoms webview panel |
| `Atoms: Load Structure by Path` | Load a specific structure file (also in Explorer context menu) |
| `Atoms: Summary Current Frame` | Get a summary of the current frame |
| `Atoms: Analyze Current Frame` | Run structural analysis on the current frame |
### Settings
Configure in VS Code Settings or `.vscode/settings.json`:
| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `atomsVscode.pythonPath` | string | `""` | Python executable for the analysis bridge. Leave empty for auto-detection. |
| `atomsVscode.openInNewTab` | boolean | `true` | Open each structure in its own tab vs. replacing the existing panel |
| `atomsVscode.autoLoadOnFileOpen` | boolean | `true` | Automatically open the viewer when a structure file is opened |
| `atomsVscode.defaultRecentFrames` | number | `50` | Number of most recent frames to load by default |
| `atomsVscode.maceModelPath` | string | `""` | Path to the active MACE model file |
| `atomsVscode.maceModelDirs` | array | `[]` | Directories to scan for `.model` files |
| `atomsVscode.maceModelAliases` | object | `{}` | Short name → model file path mapping |
| `atomsVscode.maceDevice` | string | `"cpu"` | Compute device (`cpu`, `cuda`) |
| `atomsVscode.maceDispersion` | boolean | `false` | Enable D3 dispersion corrections |
| `atomsVscode.maceEnableCueq` | boolean | `false` | Enable CuEq GPU acceleration mode |
### Example Workspace Settings
```json
{
    "atomsVscode.pythonPath": "/path/to/your/python",
    "atomsVscode.maceModelPath": "/path/to/model.model",
    "atomsVscode.maceModelDirs": ["/path/to/models/directory"],
    "atomsVscode.maceDevice": "cpu"
}
```
### Development (Debug Mode)
1. Open the `atoms-vscode` folder in VS Code.
2. Run `npm --prefix vscode-extension install` and `npm --prefix vscode-extension run compile`.
3. Press `F5` to launch the Extension Development Host.
4. Open a structure file in the development host to test.
---
## Architecture
```
┌──────────────────────────────────────────────────────┐
│ VS Code Extension (TypeScript) │
│ ├── extension.ts ─ activation, commands, state │
│ ├── bridge.ts ─ JSON-RPC over stdio to Python │
│ ├── webview.ts ─ HTML/JS template generation │
│ ├── webview-app/main.js ─ browser-side UI │
│ ├── cache.ts ─ energy & optimization sidecar cache │
│ ├── model-discovery.ts ─ MACE model file scanner │
│ ├── optimization-observer.ts ─ JSONL progress poll │
│ ├── mcp-server-provider.ts ─ MCP server definition │
│ └── mcp-display-bridge.ts ─ HTTP bridge for MCP→UI │
├──────────────────────────────────────────────────────┤
│ Python Runtime (atoms_vscode_runtime) │
│ ├── extension_bridge.py ─ JSON-RPC main loop, │
│ │ load files, compute energies, optimize │
│ ├── structural_analysis.py ─ bond, angle, RDF, │
│ │ coordination analysis │
│ ├── services/analysis_api.py ─ high-level API │
│ └── utils/analysis_helpers.py ─ cutoff estimation │
├──────────────────────────────────────────────────────┤
│ MCP Server (mcp_server) ─ FastMCP over stdio │
│ ├── server.py ─ 11 MCP tools │
│ ├── display_client.py ─ HTTP → display bridge │
│ └── cache.py ─ shared cache (TS-compatible) │
└──────────────────────────────────────────────────────┘
```
---
## Troubleshooting
- **`No module named atoms_vscode_runtime`** — run `pip install -e .` from the repository root, or ensure your Python environment has the package installed.
- **Wrong Python interpreter** — set `atomsVscode.pythonPath` explicitly in settings.
- **`npm: command not found`** — ensure Node.js/npm are in your PATH; run `npm --prefix vscode-extension run compile` manually.
- **MACE features unavailable** — install `mace-torch` and set `atomsVscode.maceModelPath` to a valid `.model` file.
- **Missing dependencies prompt** — the extension will detect missing Python packages and offer to install them automatically.
---
## License
Apache 2.0 — see [LICENSE](vscode-extension/LICENSE) for details.
**Developed by Sandip De**
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"streamlit",
"ase",
"numpy",
"pandas",
"plotly",
"scipy",
"fastmcp"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T23:02:02.150927 | atoms_vscode_runtime-0.1.0.tar.gz | 33,951 | dd/6d/aab207e4e6bb64dec65caa5709b12068d45f957496f2683a159f989fd83f/atoms_vscode_runtime-0.1.0.tar.gz | source | sdist | null | false | 81da8c49cb84f8615d82c7ab856633d8 | 6d4b181d45049f22d02b0a1a060f10fb7dadf799a8482c170f872c3e86797519 | dd6daab207e4e6bb64dec65caa5709b12068d45f957496f2683a159f989fd83f | Apache-2.0 | [] | 280 |
2.4 | project-ryland | 2.2.1 | This project contains standardized tools to use LLMs in research studies for improving patient care. | # Project Ryland
## Description
This project enables users to more easily access and use the GPT4DFCI API.
### Features
- **User-friendly interface** for using the GPT4DFCI API
- **Local cost tracking** for live estimates of running costs
- **Automatic logs** to keep track of prompts, model used, and costs
- **A visual progress bar** to estimate time until completion
- **Automatic checkpointing** of operations to enable resuming if interrupted
- **A prompt gallery** to help users keep track of prompts and add metadata
- **Input of user-created prompts** for quick plug-and-play usage
The package is still in development and more features will be added with time.
### History
This project was conceived in fall 2025 when Justin Vinh noticed that no
modular, user-friendly package existed at the Dana-Farber Cancer Institute in
Boston, MA, to allow users to take advantage of the newly offered GPT4DFCI.
GPT4DFCI is the HIPAA-compliant large language model (LLM) interface offered
to researchers, and the associated API can be powerful if utilized. So he
developed this project in collaboration with Thomas Sounack and the support
of the Lindvall Lab to fill this gap.
RYLAND stands for **"Research sYstem for LLM-based Analytics of Novel Data."**
Ryland is the protagonist of Justin's favorite book, *Project Hail Mary* by
Andy Weir.
### Project Organization
```
project_ryland/
├── .github/
│ └── workflows/
│ └── publish.yml
├── .gitignore
├── CHANGELOG.md
├── LICENSE
├── project_ryland/
│ ├── __init__.py
│ ├── cli.py
│ ├── llm_utils/
│ │ ├── __init__.py
│ │ ├── llm_config.py
│ │ └── llm_generation_utils.py
│ └── templates/
│ ├── __init__.py
│ ├── quickstart.py
│ └── standard_quickstart/
│ ├── __init__.py
│ ├── llm_prompt_gallery/
│ │ ├── __init__.py
│ │ ├── config_llm_prompts.yaml
│ │ ├── example_prompt_1.txt
│ │ ├── example_prompt_2_with_variables.txt
│ │ ├── example_prompt_2.txt
│ │ ├── keyword_mappings.py
│ │ └── prompt_structs.py
│ ├── project_ryland_quickstart.ipynb
│ └── synthetic_clinical_notes.csv
├── pyproject.toml
└── README.md
```
---
## Instructions for General Use
### Installing the GPT4DFCI API
1. Ensure that you are on the DFCI network or running the VPN client.
2. Follow the instructions on the [Azure website](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) to install the Azure CLI
tool. This will be necessary to enable the API for GPT4DFCI.
3. Once installed, run this command in Terminal (macOS) or Command Prompt
(Windows):
```
az login --allow-no-subscriptions
```
4. Running the prior command will open a window for you to log in to your
account. Log in.
### Installing Project Ryland
1. You can install Project Ryland using pip:
```bash
pip install project-ryland
```
### Using Project Ryland (Quickstart)
**Note: You must be using the VPN Client or be on the DFCI network to use
GPT4DFCI.**
1. Use the quickstart to get off the ground quickly! To create the
quickstart in your working directory, run this command from a
python script:
```
from project_ryland.templates.quickstart import create_quickstart
create_quickstart(dest="~/quickstart")
```
or use the command line tool:
```bash
project-ryland-init quickstart
```
The quickstart contains a template prompt gallery (`config_llm_prompts.yaml`),
two static prompts (`example_prompt_1.txt` and `example_prompt_2.txt`), one
dynamic prompt (`example_prompt_2_with_variables.txt`), and their associated
prompt structures (`prompt_structs.py`). The `keyword_mappings.py` file
contains example user variables to be used with the dynamic prompt. Finally,
`synthetic_clinical_notes.csv` contains generated clinical data for quick
demonstration use of the prompts. See below for instructions for how to use
the prompt gallery.
The `project_ryland_quickstart.ipynb` file contains the general code to run
Project Ryland.
```
standard_quickstart/
├── __init__.py
├── llm_prompt_gallery/
│ ├── __init__.py
│ ├── config_llm_prompts.yaml
│ ├── example_prompt_1.txt
│ ├── example_prompt_2_with_variables.txt
│ ├── example_prompt_2.txt
│ ├── keyword_mappings.py
│ └── prompt_structs.py
├── project_ryland_quickstart.ipynb
└── synthetic_clinical_notes.csv
```
### Using Project Ryland (Manual)
Note: A copy-paste version of the script is available at the end. Variable
definitions can also be found at the end after the example script.
**Note: You must be using the VPN Client or be on the DFCI network to use
GPT4DFCI.**
1. If this is your first time using Project Ryland, you must install it into
your environment. In Terminal or Command Prompt, run `pip install project-ryland`.
2. Import llm_generation_utils from Project Ryland
```
from project_ryland.llm_utils import llm_generation_utils as llm
```
3. In your Jupyter notebook or python script, define your `endpoint` and
`entra_scope`. The endpoint is user-specific, while the entra_scope
is the same for all users (current default for DFCI shown below). These
values should have been provided when you were granted GPT4DFCI API access.
4. Specify the LLM model that you will be using to run your prompts.
- Model names can be found in the [llm_config.py file](https://github.com/justin-vinh/project_ryland/blob/main/project_ryland/llm_utils/llm_config.py).
```
ENDPOINT = "https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
ENTRA_SCOPE = "https://cognitiveservices.azure.com/.default"
model_name = "gpt-5"
```
5. Run the LLM_wrapper function to initialize the API.
- Note that this only has to be done once per run. You can call the API
multiple times in one run.
```
LLM_wrapper = llm.LLM_wrapper(
    model_name,
    endpoint=ENDPOINT,
    entra_scope=ENTRA_SCOPE,
)
```
6. Declare the path to your input CSV file.
7. Declare the path to your LLM Prompt Gallery if you will be utilizing that
feature. A template prompt gallery is available for download from the
GitHub repository. Add the prompt gallery to the same directory as your main
script. Use of the gallery is highly recommended to track prompt texts,
prompt structures, and associated metadata.
```
input_file = 'pathology_llm_tests.csv'
gallery_path = "llm_prompt_gallery"
```
8. Run the `process_text_data` function to obtain your LLM output.
```
df = LLM_wrapper.process_text_data(
    # Essential to specify
    input_file_path=input_file,
    text_column=text_column,
    format_class=prompt_struct,
    use_prompt_gallery=use_prompt_gallery,
    # Specify if using the prompt gallery, else put None
    prompt_gallery_path=gallery_path,
    prompt_to_get=gallery_prompt,
    user_prompt_vars=user_vars,
    # Specify if NOT using the prompt gallery, else put None
    prompt_text=prompt_text,
    # Optional to specify
    output_dir=output_directory,
    flatten=True,
    sample_mode=sample_mode,
    resume=True,
    keep_checkpoints=False,
    save_every=10,
)
```
9. Optionally use the `summarize_llm_runs` function to give a quick summary
of the generation metrics of this LLM run (and of all known LLM runs).
```
df_log = llm.summarize_llm_runs(
    log_path="llm_tracking.log",
    csv_path="llm_run_summaries.csv",
)
df_log.tail()
```
---
## Instructions for Using the Prompt Gallery
The prompt gallery was designed by Justin as a method of storing prompt
metadata and is made to facilitate iterative prompt design. This metadata is
stored in the YAML file shown in the quickstart. Several prompts are already
detailed in the template and can be a good place to start. Let's look at one
of them:
```
example_1_prompt:
  filename: example_prompt_1.txt
  description: |
    Determine what type of cancer the patient has based on the
    note content.
  author: Sidney Farber
  date: 2025.10.06
```
- The first key `example_1_prompt` is the name of the prompt and is used in
the API call. The prompt name does _not_ need to be the same as the prompt
filename.
- `filename` specifies the path to the prompt txt file, relative to the
gallery directory. In this case, the txt file is in the same directory as
the prompt gallery YAML file and so only the prompt filename is needed.
- The other metadata keys like `description`, `author`, and `date` are
optional and can be changed to any kind of other metadata suiting the
user's needs. A vertical line `|` allows the user to add a multiline
value (as in the case of `description`).
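Since the gallery is plain YAML, it can be read with a standard YAML parser. The helper below is an illustrative sketch, not Project Ryland's actual loader; `load_prompt` and its behavior are assumptions for demonstration.

```python
from pathlib import Path

import yaml  # PyYAML, already a dependency of project-ryland


def load_prompt(gallery_dir: str, prompt_name: str) -> tuple[str, dict]:
    """Return (prompt_text, metadata) for a named prompt in the gallery.

    Illustrative sketch only; the package's own loader may behave differently.
    """
    gallery_path = Path(gallery_dir)
    gallery = yaml.safe_load((gallery_path / "config_llm_prompts.yaml").read_text())
    entry = gallery[prompt_name]
    # `filename` is resolved relative to the gallery directory.
    prompt_text = (gallery_path / entry["filename"]).read_text()
    return prompt_text, entry
```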
---
## Dictionary
### Arguments for process_text_data function
#### Necessary Arguments at All Times
- `input_file_path` specifies the path to your input CSV file (only CSV
files are currently accepted).
- `text_column` specifies the column within the CSV file that serves as the
input to the LLM.
- `format_class` specifies the class structure that enforces the desired
prompt output.
- `use_prompt_gallery` is a boolean (True/False) input that directs the
function to use the prompt gallery if set to True. Note that setting
this argument to True will override anything specified by the
`prompt_text` argument.
#### Necessary Arguments _if_ Using Prompt Gallery
- `prompt_gallery_path` specifies the path to the prompt gallery.
- `prompt_to_get` specifies the prompt name as listed in the prompt gallery.
- `user_prompt_vars` specifies the dictionary that contains the key-value
pairs between the placeholder variables and the desired user-specified
variables to be inputted. See the quickstart example for how this should
be done.
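As an illustration of the idea (the placeholder syntax Project Ryland actually uses may differ), a dynamic prompt can be filled from such a dictionary with standard Python formatting:

```python
# Hypothetical dynamic prompt and variable mapping, mirroring the roles of
# `example_prompt_2_with_variables.txt` and `keyword_mappings.py`.
prompt_template = (
    "Extract all mentions of {target_condition} from the note "
    "and report them as {output_format}."
)

user_prompt_vars = {
    "target_condition": "metastatic disease",
    "output_format": "a bulleted list",
}

# Each placeholder is replaced by its user-specified value.
filled_prompt = prompt_template.format(**user_prompt_vars)
print(filled_prompt)
```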
#### Necessary Arguments _if_ Using a User Prompt
- `prompt_text` specifies a string that serves as a user-inputted prompt.
Use this argument only if the prompt gallery is not being used.
#### Optional Arguments
- `output_dir` specifies the path to the output directory. If the
inputted directory does not exist, it will be generated. If not specified,
the default output location will be the same as the main script.
- `flatten` is a boolean (True/False) that specifies whether to turn the
output dictionary into individual columns. Default: True
- `sample_mode` is a boolean (True/False) that specifies whether to only
process the first 10 rows of the input CSV (sampling the data). It is
recommended to use sample_mode when first running new data, prompts, or
prompt structures to verify that the intended output is correct. Default:
False.
- `resume` is a boolean (True/False) that specifies whether to resume from a
checkpoint if generation is interrupted. Default: True.
- `keep_checkpoints` is a boolean (True/False) that specifies whether
checkpoints are kept after a run. Setting it to True will keep
every generated checkpoint after a generation. Default: False.
- `save_every` is an integer that specifies the interval between checkpoints.
The default is 10 rows.
---
## License
Project Ryland is released under the MIT License. See LICENSE file for more details.
## Support
If you encounter any issues or have questions, please file an issue on the
GitHub issue tracker. We appreciate suggestions for improvement as well!
## Acknowledgements
Project Ryland was developed with the support of **Thomas Sounack** and the
**Lindvall Lab**, led by Dr.
Charlotta Lindvall, MD, PhD, at the Dana-Farber Cancer Institute. We thank
all the contributors for their valuable input and support.
## Citation
If you use **project_ryland** in your research or publications, please cite this repository:
Vinh J, Sounack T. *project_ryland: Research sYstem for LLM-based Analytics of Novel Data*. GitHub. https://github.com/justin-vinh/project_ryland
You can also use the GitHub **“Cite this repository”** button on the right sidebar for
formatted citations (APA, BibTeX, etc.).
### BibTeX
```bibtex
@software{project_ryland,
author = {Vinh, Justin and Sounack, Thomas},
title = {project_ryland: Research sYstem for LLM-based Analytics of Novel Data},
year = {2026},
url = {https://github.com/justin-vinh/project_ryland}
}
```
--------
| text/markdown | null | Justin Vinh <jvinh21@gmail.com>, Thomas Sounack <thomas_sounack@dfci.harvard.edu> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas>=2.0",
"numpy>=1.26",
"matplotlib>=3.9",
"scikit-learn>=1.5",
"lifelines>=0.28",
"tqdm>=4.66",
"numexpr>=2.10.2",
"loguru>=0.7",
"orjson>=3.10",
"pyyaml>=6.0",
"environs>=9.5",
"openai>=1.43",
"azure-identity>=1.17",
"azure-core>=1.30",
"pydantic>=2.6",
"python-dateutil>=2.9",
... | [] | [] | [] | [
"Homepage, https://github.com/justin-vinh/project_ryland"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T23:01:52.822730 | project_ryland-2.2.1.tar.gz | 120,219 | 07/f2/b5a298040e92bd660f7e544bf1f90b765ad6a326925af4b1ec241c1cede5/project_ryland-2.2.1.tar.gz | source | sdist | null | false | 6ff32fd875c9edef96ed8bcf0dd05245 | a149744889f38202105681bfd552e7e046a741f5f2ef037b3591d6a419c3aebc | 07f2b5a298040e92bd660f7e544bf1f90b765ad6a326925af4b1ec241c1cede5 | null | [
"LICENSE"
] | 248 |
2.1 | driftpy | 0.8.89 | A Python client for the Drift DEX | # DriftPy
<div align="center">
<img src="docs/img/drift.png" width="30%" height="30%">
</div>
DriftPy is the Python client for the [Drift](https://www.drift.trade/) protocol.
It allows you to trade and fetch data from Drift using Python.
**[Read the full SDK documentation here!](https://drift-labs.github.io/v2-teacher/)**
## Installation
```
pip install driftpy
```
Note: requires Python >= 3.10.
## SDK Examples
- `examples/` folder includes more examples of how to use the SDK, including how to provide liquidity/become an LP, stake in the insurance fund, etc.
## Note on using QuickNode
If you are using the QuickNode free plan, you *must* use `AccountSubscriptionConfig("demo")`, and you can only subscribe to 1 perp market and 1 spot market at a time.
Non-QuickNode free RPCs (including the public mainnet-beta url) can use `cached` as well.
Example setup for `AccountSubscriptionConfig("demo")`:
```python
# This example will listen to perp markets 0 & 1 and spot market 0
# If you are listening to any perp markets, you must listen to spot market 0 or the SDK will break
perp_markets = [0, 1]
spot_market_oracle_infos, perp_market_oracle_infos, spot_market_indexes = get_markets_and_oracles(perp_markets=perp_markets)
oracle_infos = spot_market_oracle_infos + perp_market_oracle_infos
drift_client = DriftClient(
    connection,
    wallet,
    "mainnet",
    perp_market_indexes=perp_markets,
    spot_market_indexes=spot_market_indexes,
    oracle_infos=oracle_infos,
    account_subscription=AccountSubscriptionConfig("demo"),
)
await drift_client.subscribe()
```
If you intend to use `AccountSubscriptionConfig("demo")`, you *must* call `get_markets_and_oracles` to get the information you need.
`get_markets_and_oracles` will return all the necessary `OracleInfo`s and `market_indexes` in order to use the SDK.
# Development
## Setting Up Dev Env
`bash setup.sh`
Ensure correct python version (using pyenv is recommended):
```bash
pyenv install 3.10.11
pyenv global 3.10.11
poetry env use $(pyenv which python)
```
Install dependencies:
```bash
poetry install
```
To run tests, first ensure you have set up the RPC url, then run `pytest`:
```bash
export MAINNET_RPC_ENDPOINT="<YOUR_RPC_URL>"
export DEVNET_RPC_ENDPOINT="https://api.devnet.solana.com" # or your own RPC
poetry run pytest -v -s -x tests/ci/*.py
poetry run pytest -v -s tests/math/*.py
```
| text/markdown | x19 | https://twitter.com/0xNineteen@gmail.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | https://github.com/drift-labs/driftpy | null | <4.0,>=3.10 | [] | [] | [] | [
"anchorpy==0.21.0",
"solana<0.37,>=0.36",
"requests<3.0.0,>=2.28.1",
"pythclient==0.2.1",
"aiodns==3.0.0",
"aiohttp<4.0.0,>=3.9.1",
"aiosignal==1.3.1",
"anchorpy-core==0.2.0",
"anyio==4.4.0",
"apischema==0.17.5",
"async-timeout<5.0.0,>=4.0.2",
"attrs==22.2.0",
"backoff==2.2.1",
"base58==2.... | [] | [] | [] | [
"Documentation, https://drift-labs.github.io/driftpy/"
] | poetry/1.4.2 CPython/3.10.10 Linux/6.14.0-1017-azure | 2026-02-18T23:00:49.372534 | driftpy-0.8.89.tar.gz | 213,636 | 39/5d/d3987708ea3fce00595635344557b851995e1b45e0f7c93abc276e549e2a/driftpy-0.8.89.tar.gz | source | sdist | null | false | 31afedf32258bd6c674fdab36af15d3e | 54503f8ff855c28f23d7c43404d978ce025fe46813cf9330a8933e832be8563f | 395dd3987708ea3fce00595635344557b851995e1b45e0f7c93abc276e549e2a | null | [] | 1,171 |
2.4 | oidcauthlib | 2.0.18 | oidcauthlib | # oidcauthlib
| text/markdown | Imran Qureshi | imran.qureshi@bwell.com | null | null | null | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | https://github.com/icanbwell/oidc-auth-lib | null | >=3.12 | [] | [] | [] | [
"httpx>=0.27.2",
"authlib>=1.6.4",
"joserfc>=1.4.3",
"pydantic<3.0.0,>=2.0",
"pymongo[snappy]>=4.15.3",
"python-snappy>=0.7.3",
"fastapi>=0.115.8",
"starlette>=0.49.1",
"py-key-value-aio[memory,mongodb,pydantic,redis]>=0.4.4",
"opentelemetry-api>=1.39.1"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-18T23:00:39.283715 | oidcauthlib-2.0.18.tar.gz | 94,899 | 8c/cf/fbd02e388789c7322c7f654057131fb5e7221afdf5e2e34570be3bd0eeac/oidcauthlib-2.0.18.tar.gz | source | sdist | null | false | fd337803db89d216239153625a06a21e | 79a1f6ecc1a283c455ee79ecf7971ddd4062553f976d52332983d3e73cdea2bb | 8ccffbd02e388789c7322c7f654057131fb5e7221afdf5e2e34570be3bd0eeac | null | [
"LICENSE"
] | 337 |
2.4 | fast-mcp-telegram | 0.11.1 | MCP server with Telegram search and messaging capabilities | <img alt="Hero image" src="https://github.com/user-attachments/assets/635236f6-b776-41c7-b6e5-0dd14638ecc1" />
[](https://python.org)
[](https://opensource.org/licenses/MIT)
[](https://github.com/leshchenko1979/fast-mcp-telegram)
**Fast MCP Telegram Server** - Production-ready Telegram integration for AI assistants with comprehensive search, messaging, and direct API access capabilities.
## 🌐 Demo
1. Open https://tg-mcp.redevest.ru/setup to begin the authentication flow.
2. After finishing, you'll receive a ready-to-use `mcp.json` with your Bearer token.
3. Use the config with your MCP client to explore this MCP server's capabilities.
4. Or try the HTTP‑MTProto Bridge right away with curl (replace TOKEN):
```bash
curl -X POST "https://tg-mcp.redevest.ru/mtproto-api/messages.SendMessage" \
  -H "Authorization: Bearer TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"params": {"peer": "me", "message": "Hello from Demo!"}}'
```
## 📖 Table of Contents
- [✨ Features](#-features)
- [🚀 Quick Start](#-quick-start)
- [🏗️ Server Modes](#️-server-modes)
- [🌐 HTTP-MTProto Bridge](#-http-mtproto-bridge)
- [📚 Documentation](#-documentation)
- [🔒 Security](#-security)
- [🤝 Contributing](#-contributing)
- [📄 License](#-license)
## ✨ Features
| Feature | Description |
|---------|-------------|
| 🔐 **Multi-User Authentication** | Production-ready Bearer token auth with session isolation and LRU cache management |
| 🌐 **HTTP-MTProto Bridge** | Direct curl access to any Telegram API method with entity resolution and safety guardrails |
| 🔍 **Intelligent Search** | Global & per-chat message search with multi-query support and intelligent deduplication |
| 🏗️ **Dual Transport** | Seamless development (stdio) and production (HTTP) deployment support |
| 📁 **Secure File Handling** | Rich media sharing with SSRF protection, size limits, and album support |
| 💬 **Advanced Messaging** | Send, edit, reply with formatting, file attachments, and phone number messaging |
| 🎤 **Voice Transcription** | Automatic speech-to-text for Premium accounts with parallel processing and polling |
| 📊 **Unified Session Management** | Single configuration system for setup and server, with multi-account support |
| 👥 **Smart Contact Discovery** | Search users, groups, channels with uniform entity schemas and profile enrichment |
| ⚡ **High Performance** | Async operations, parallel queries, connection pooling, and memory optimization |
| 🛡️ **Production Reliability** | Auto-reconnect, structured logging, comprehensive error handling with clear actionable messages |
| 🎯 **AI-Optimized** | Literal parameter constraints, LLM-friendly API design, and MCP ToolAnnotations |
| 🌍 **Web Setup Interface** | Browser-based authentication flow with immediate config generation |
## 🛠️ Available Tools
| Tool | Purpose | Key Features |
|------|---------|--------------|
| `search_messages_globally` | Search across all chats | Multi-term queries, date filtering, chat type filtering |
| `search_messages_in_chat` | Search within specific chat | Supports "me" for Saved Messages, optional query for latest messages |
| `send_message` | Send new message | File attachments (URLs/local), formatting (markdown/html), replies |
| `edit_message` | Edit existing message | Text formatting, preserves message structure |
| `read_messages` | Read specific messages by ID | Batch reading, full message content, voice transcription for Premium accounts |
| `find_chats` | Find users/groups/channels | Multi-term search, contact discovery, username/phone lookup |
| `get_chat_info` | Get detailed profile info | Member counts, bio/about, online status, enriched data |
| `send_message_to_phone` | Message phone numbers | Auto-contact management, optional cleanup, file support |
| `invoke_mtproto` | Direct Telegram API access | Raw MTProto methods, entity resolution, safety guardrails |
**📖 For detailed tool documentation with examples, see [Tools Reference](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/docs/Tools-Reference.md)**
## 🚀 Quick Start
### 1. Install from PyPI
```bash
pip install fast-mcp-telegram
```
### 2. Authenticate with Telegram
```bash
fast-mcp-telegram-setup --api-id="your_api_id" --api-hash="your_api_hash" --phone-number="+123456789"
```
**🌐 Prefer a browser?** Run the server and open `/setup` to authenticate and download a ready‑to‑use `mcp.json`. You can also reauthorize existing sessions through the same interface.
### 3. Configure Your MCP Client
**STDIO Mode (Development with Cursor IDE):**
```json
{
  "mcpServers": {
    "telegram": {
      "command": "fast-mcp-telegram",
      "env": {
        "API_ID": "your_api_id",
        "API_HASH": "your_api_hash",
        "PHONE_NUMBER": "+123456789"
      }
    }
  }
}
```
**HTTP_AUTH Mode (Production with Bearer Token):**
```json
{
  "mcpServers": {
    "telegram": {
      "url": "https://your-server.com",
      "headers": {
        "Authorization": "Bearer AbCdEfGh123456789KLmnOpQr..."
      }
    }
  }
}
```
### 4. Start Using!
```json
{"tool": "search_messages_globally", "params": {"query": "hello", "limit": 5}}
{"tool": "send_message", "params": {"chat_id": "me", "message": "Hello from AI!"}}
```
**📝 For detailed installation instructions, see [Installation Guide](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/docs/Installation.md)**
## 🏗️ Server Modes
| Mode | Transport | Authentication | Use Case |
|------|----------|----------------|----------|
| **STDIO** | stdio | Disabled | Development with Cursor IDE |
| **HTTP_NO_AUTH** | HTTP | Disabled | Development HTTP server |
| **HTTP_AUTH** | HTTP | Required (Bearer token) | Production deployment |
## 🌐 HTTP-MTProto Bridge
**Direct curl access to any Telegram API method** - Execute any Telegram MTProto method via HTTP requests with automatic entity resolution and safety guardrails.
### Quick Examples
```bash
# Send message with automatic entity resolution
curl -X POST "https://your-domain.com/mtproto-api/messages.SendMessage" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "params": {"peer": "@username", "message": "Hello from curl!"},
    "resolve": true
  }'

# Send message using params_json (works with n8n and other tools)
curl -X POST "https://your-domain.com/mtproto-api/messages.SendMessage" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "params_json": "{\"peer\": \"@username\", \"message\": \"Hello from curl!\"}",
    "resolve": true
  }'

# Get message history with peer resolution
curl -X POST "https://your-domain.com/mtproto-api/messages.getHistory" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "params": {"peer": "me", "limit": 10},
    "resolve": true
  }'
```
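The bridge can also be called from Python. The sketch below uses only the standard library to build the same request; the endpoint, token, and helper name are placeholders, not part of the package's API.

```python
import json
import urllib.request


def build_bridge_request(base_url: str, method: str, token: str,
                         params: dict, resolve: bool = True) -> urllib.request.Request:
    """Build a POST request for the HTTP-MTProto bridge (illustrative sketch)."""
    body = json.dumps({"params": params, "resolve": resolve}).encode()
    return urllib.request.Request(
        f"{base_url}/mtproto-api/{method}",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Sending it requires a running server and a valid token:
# req = build_bridge_request("https://your-domain.com", "messages.SendMessage",
#                            "YOUR_TOKEN", {"peer": "me", "message": "Hello!"})
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```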
**📖 For complete MTProto Bridge documentation, see [MTProto Bridge Guide](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/docs/MTProto-Bridge.md)**
## 📚 Documentation
- **[Installation Guide](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/docs/Installation.md)** - Detailed installation and configuration
- **[Deployment Guide](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/docs/Deployment.md)** - Docker deployment and production setup
- **[Tools Reference](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/docs/Tools-Reference.md)** - Complete tools documentation with examples
- **[Search Guidelines](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/docs/Search-Guidelines.md)** - Search best practices and limitations
- **[Operations Guide](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/docs/Operations.md)** - Health monitoring and troubleshooting
- **[Project Structure](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/docs/Project-Structure.md)** - Code organization and architecture
- **[Contributing Guide](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/CONTRIBUTING.md)** - Development setup and contribution guidelines
## 🔒 Security
**Key Security Features:**
- Bearer token authentication with session isolation
- SSRF protection for file downloads
- Dangerous method blocking with opt-in override
- Session file security and automatic cleanup
**📖 For complete security information, see [SECURITY.md](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/SECURITY.md)**
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/CONTRIBUTING.md) for:
- Development setup instructions
- Testing guidelines
- Code quality standards
- Pull request process
**Quick Start for Contributors:**
1. Fork the repository
2. Read the [Contributing Guide](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/CONTRIBUTING.md)
3. Create a feature branch
4. Make your changes and add tests
5. Submit a pull request
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](https://github.com/leshchenko1979/fast-mcp-telegram/blob/master/LICENSE) file for details.
## 🙏 Acknowledgments
- [FastMCP](https://github.com/jlowin/fastmcp) - MCP server framework
- [Telethon](https://github.com/LonamiWebs/Telethon) - Telegram API library
- [Model Context Protocol](https://modelcontextprotocol.io) - Protocol specification
---
<div align="center">
**Made with ❤️ for the AI automation community**
[⭐ Star us on GitHub](https://github.com/leshchenko1979/fast-mcp-telegram) • [💬 Join our community](https://t.me/mcp_telegram)
</div>
---
mcp-name: io.github.leshchenko1979/fast-mcp-telegram
| text/markdown | null | Alexey Leshchenko <leshchenko@gmail.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Pytho... | [] | null | null | >=3.11 | [] | [] | [] | [
"aiohttp",
"fastmcp",
"telethon",
"jinja2",
"pydantic-settings"
] | [] | [] | [] | [
"Homepage, https://github.com/leshchenko1979/fast-mcp-telegram",
"Repository, https://github.com/leshchenko1979/fast-mcp-telegram",
"Documentation, https://github.com/leshchenko1979/fast-mcp-telegram#readme",
"Issues, https://github.com/leshchenko1979/fast-mcp-telegram/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:55:17.793202 | fast_mcp_telegram-0.11.1.tar.gz | 84,787 | 3f/6d/659ce04a68e6ab24ccd5dc0b0f52257acdb5287ae94a88c8823d54c823a2/fast_mcp_telegram-0.11.1.tar.gz | source | sdist | null | false | 915483a204771c867b5824c5735577ae | f74622da706adca8760ca5f4c2232babf12cbb84b5b5ef11e63e3396a5f2553d | 3f6d659ce04a68e6ab24ccd5dc0b0f52257acdb5287ae94a88c8823d54c823a2 | MIT | [
"LICENSE"
] | 265 |
2.4 | flyteplugins-pytorch | 2.0.0 | pytorch plugin for flyte | # Union PyTorch Plugin
Union can execute **PyTorch distributed training jobs** natively on a Kubernetes Cluster, which manages the lifecycle of worker pods, rendezvous coordination, spin-up, and tear down. It leverages the open-sourced **TorchElastic (torch.distributed.elastic)** launcher and the **Kubeflow PyTorch Operator**, enabling fault-tolerant and elastic training across multiple nodes.
This is like running a transient PyTorch cluster — worker groups are created for the specific job and torn down automatically after completion. Elastic training allows nodes to scale in and out, and failed workers can be restarted without bringing down the entire job.
To install the plugin, run the following command:
```bash
pip install --pre flyteplugins-pytorch
```
| text/markdown | null | Kevin Liao <kevinliao852@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"torch",
"flyte"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:54:59.956598 | flyteplugins_pytorch-2.0.0-py3-none-any.whl | 4,697 | 57/3d/2df8f47305e0fd47f37d956ad8e6c03bb8580bc1bb241029f6229776ae17/flyteplugins_pytorch-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | b9e7f68e0b38590889f95b95a85e232d | a0dd99b1cd73976902b14ce39202b47ba02d14c715af7e92487bbd808076de5d | 573d2df8f47305e0fd47f37d956ad8e6c03bb8580bc1bb241029f6229776ae17 | null | [] | 113 |
2.4 | flyteplugins-databricks | 2.0.0 | Databricks plugin for flyte | # Databricks Plugin for Flyte
This plugin provides Databricks integration for Flyte, enabling you to run Spark jobs on Databricks as Flyte tasks.
## Installation
```bash
pip install flyteplugins-databricks
```
## Usage
```python
from flyteplugins.databricks import Databricks, DatabricksConnector


@task(task_config=Databricks(
    databricks_conf={
        "run_name": "flyte databricks plugin",
        "new_cluster": {
            "spark_version": "13.3.x-scala2.12",
            "autoscale": {
                "min_workers": 1,
                "max_workers": 1,
            },
            "node_type_id": "m6i.large",
            "num_workers": 1,
            "aws_attributes": {
                "availability": "SPOT_WITH_FALLBACK",
                "instance_profile_arn": "arn:aws:iam::339713193121:instance-profile/databricks-demo",
                "ebs_volume_type": "GENERAL_PURPOSE_SSD",
                "ebs_volume_count": 1,
                "ebs_volume_size": 100,
                "first_on_demand": 1,
            },
        },
        # "existing_cluster_id": "1113-204018-tb9vr2fm",  # use an existing cluster id if you want
        "timeout_seconds": 3600,
        "max_retries": 1,
    },
    databricks_instance="mycompany.cloud.databricks.com",
))
def my_spark_task() -> int:
    # Your Spark code here
    return 42
```
| text/markdown | null | Kevin Su <pingsutw@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flyte[connector]",
"aiohttp",
"nest-asyncio",
"flyteplugins-spark"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:54:47.259112 | flyteplugins_databricks-2.0.0-py3-none-any.whl | 5,779 | 4f/01/0f7a0518c1653ee13e1ac4ca1eaf194ff78f978e10b087952393e7f614fc/flyteplugins_databricks-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 094c884eba380aa1e8e5f8cd03d61bf4 | aff93014912fd21218704bedb80e8f8e7f72d6428969532b16541c372932fb9e | 4f010f7a0518c1653ee13e1ac4ca1eaf194ff78f978e10b087952393e7f614fc | null | [] | 114 |
2.4 | flyteplugins-spark | 2.0.0 | Spark plugin for flyte | # Union Spark Plugin
Union can execute Spark jobs natively on a Kubernetes cluster, managing the virtual cluster's lifecycle: spin-up and tear-down. It leverages the open-source Spark on K8s Operator and can be enabled without signing up for any service. This is like running a transient Spark cluster, one spun up for a specific Spark job and torn down after completion.
To install the plugin, run the following command:
```bash
pip install --pre flyteplugins-spark
```
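A rough usage sketch, assuming the plugin exposes a `Spark` task config similar to the flytekit v1 Spark plugin; the import path, `Spark` class, and settings below are assumptions, not confirmed API:

```python
import flyte
# Hypothetical names, modeled on the flytekit v1 Spark plugin; not confirmed API.
from flyteplugins.spark import Spark

env = flyte.TaskEnvironment(
    name="spark_env",
    plugin_config=Spark(
        spark_conf={
            "spark.driver.memory": "1g",
            "spark.executor.memory": "1g",
            "spark.executor.instances": "2",
        }
    ),
)

@env.task
def count_rows() -> int:
    ...  # pyspark code runs on the transient per-job cluster
```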
| text/markdown | null | Kevin Su <pingsutw@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pyspark",
"flyte"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:54:46.820846 | flyteplugins_spark-2.0.0-py3-none-any.whl | 4,481 | 83/2d/40c36ea1b0cb236da9bc9b87e8806ee4375906bad55d5626525b9717f9a4/flyteplugins_spark-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 53cd492509507b100cd5bc67128dbafd | da4aa506efe1afd8188d2d43c9bc55d4d15075a64366df32000337c0a7fdc72b | 832d40c36ea1b0cb236da9bc9b87e8806ee4375906bad55d5626525b9717f9a4 | null | [] | 253 |
2.4 | flyteplugins-openai | 2.0.0 | OpenAI plugin for flyte | # Flyte OpenAI Plugin
This plugin provides drop-in replacements for functionality in the
`openai-agents` package so that it works on Flyte.
To install the plugin, run the following command:
```bash
pip install --pre flyteplugins-openai
```
| text/markdown | null | Niels Bantilan <cosmicbboy@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"openai-agents>=0.2.4",
"flyte"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:54:32.163004 | flyteplugins_openai-2.0.0-py3-none-any.whl | 3,123 | 6c/6e/a28a8b736e85d1da1c6ad6628979d62e7ae9033a15e789fd45a38e69c9f5/flyteplugins_openai-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 8d10c5abdef4dee6841e795f026c3721 | 69a00db316095cd20011157f6ee8bb2b835f7a55939788fadc3d18baf4808f6d | 6c6ea28a8b736e85d1da1c6ad6628979d62e7ae9033a15e789fd45a38e69c9f5 | null | [] | 110 |
2.4 | flyteplugins-sglang | 2.0.0 | SGLang plugin for flyte | # Flyte SGLang Plugin
Serve large language models using SGLang with Flyte Apps.
This plugin provides the `SGLangAppEnvironment` class for deploying and serving LLMs using [SGLang](https://docs.sglang.ai/).
## Installation
```bash
pip install --pre flyteplugins-sglang
```
## Usage
```python
import flyte
import flyte.app
from flyteplugins.sglang import SGLangAppEnvironment
# Define the SGLang app environment
sglang_app = SGLangAppEnvironment(
name="my-llm-app",
model="s3://your-bucket/models/your-model",
model_id="your-model-id",
resources=flyte.Resources(cpu="4", memory="16Gi", gpu="L40s:1"),
stream_model=True, # Stream model directly from blob store to GPU
scaling=flyte.app.Scaling(
replicas=(0, 1),
scaledown_after=300,
),
)
if __name__ == "__main__":
flyte.init_from_config()
app = flyte.serve(sglang_app)
print(f"Deployed SGLang app: {app.url}")
```
## Features
- **Streaming Model Loading**: Stream model weights directly from object storage to GPU memory, reducing startup time and disk requirements.
- **OpenAI-Compatible API**: The deployed app exposes an OpenAI-compatible API for chat completions.
- **Auto-scaling**: Configure scaling policies to scale up/down based on traffic.
- **Tensor Parallelism**: Support for distributed inference across multiple GPUs.
## Extra Arguments
You can pass additional arguments to the SGLang server using the `extra_args` parameter:
```python
sglang_app = SGLangAppEnvironment(
name="my-llm-app",
model="s3://your-bucket/models/your-model",
model_id="your-model-id",
extra_args="--max-model-len 8192 --enforce-eager",
)
```
See the [SGLang server arguments documentation](https://docs.sglang.ai/backend/server_arguments.html) for available options.
| text/markdown | null | Niels Bantilan <cosmicbboy@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flyte"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:54:32.109123 | flyteplugins_sglang-2.0.0-py3-none-any.whl | 8,458 | 2b/18/d556a09c6663953149292c10a9013ee76478c2ab97295c53de50971049c8/flyteplugins_sglang-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | f476fe89a0ecb342ae3750b39dcfddb3 | cb61502179c1a6d318cd0196ec6bd5364720adf3699a646635f2988d4f85157c | 2b18d556a09c6663953149292c10a9013ee76478c2ab97295c53de50971049c8 | null | [] | 108 |
2.4 | flyteplugins-ray | 2.0.0 | Ray plugin for flyte | # Flyte Ray Plugin
Union can execute Ray jobs natively on a Kubernetes cluster,
managing the virtual cluster's lifecycle: spin-up and tear-down.
It leverages the open-source KubeRay operator and can be enabled without signing up for any service.
This is like running a transient Ray cluster,
one spun up for a specific Ray job and torn down after completion.
To install the plugin, run the following command:
```bash
pip install --pre flyteplugins-ray
```
| text/markdown | null | Kevin Su <pingsutw@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"ray[default]",
"flyte"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T22:54:31.811249 | flyteplugins_ray-2.0.0-py3-none-any.whl | 3,344 | 79/f0/2e964d1f24a8c3712b1b77970e21ca7af153d51fac39b09dd8c286ca82fd/flyteplugins_ray-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | cd9540aae5102c6faba6fffc26e0b8e2 | ee15e6e264c967560f323d413c7a587d56aac92da1371133e044cc385e4e24c5 | 79f02e964d1f24a8c3712b1b77970e21ca7af153d51fac39b09dd8c286ca82fd | null | [] | 148 |
2.4 | flyteplugins-dask | 2.0.0 | Dask plugin for flyte | # Union Dask Plugin
Flyte can execute `dask` jobs natively on a Kubernetes cluster, managing the virtual `dask` cluster's lifecycle
(spin-up and tear-down). It leverages the open-source Kubernetes Dask Operator and can be enabled without signing up
for any service. This is like running a transient (ephemeral) `dask` cluster, a type of cluster spun up for a specific
task and torn down after completion. This also ensures that the Python environment is the same on the job runner
(driver), the scheduler, and the workers.
To install the plugin, run the following command:
```bash
pip install --pre flyteplugins-dask
```
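A rough usage sketch, assuming the plugin exposes a `Dask` task config similar to the flytekit v1 Dask plugin; the import path, `Dask`, and `WorkerGroup` below are assumptions, not confirmed API:

```python
import flyte
# Hypothetical names, modeled on the flytekit v1 Dask plugin; not confirmed API.
from flyteplugins.dask import Dask, WorkerGroup

env = flyte.TaskEnvironment(
    name="dask_env",
    # One image for driver, scheduler, and workers keeps the Python
    # environment identical everywhere, as noted above.
    image=flyte.Image.from_debian_base().with_pip_packages("flyteplugins-dask"),
    plugin_config=Dask(workers=WorkerGroup(number_of_workers=2)),
)

@env.task
def total() -> int:
    ...  # dask.distributed code runs on the per-job cluster
```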
| text/markdown | null | Kevin Su <pingsutw@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"dask[distributed]>=2022.10.2",
"flyte",
"bokeh"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:54:31.694098 | flyteplugins_dask-2.0.0-py3-none-any.whl | 3,241 | 5b/e7/29dfbaa6f03841e319860ae6fda5d8b1eaa5c92710ed84d32d908b154c00/flyteplugins_dask-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 6d7f72496c0743ff247ff9d637a080fa | 9a27ddfe2664f9896b75476d74f6fb6a280016ae7be3be0265c5b7255520cbb4 | 5be729dfbaa6f03841e319860ae6fda5d8b1eaa5c92710ed84d32d908b154c00 | null | [] | 143 |
2.4 | flyteplugins-bigquery | 2.0.0 | BigQuery plugin for flyte | # BigQuery Plugin for Flyte
This plugin provides BigQuery integration for Flyte, enabling you to run BigQuery queries as Flyte tasks.
## Installation
```bash
pip install flyteplugins-bigquery
```
## Usage
```python
from flyteplugins.bigquery import BigQueryConfig, BigQueryTask
config = BigQueryConfig(ProjectID="my-project", Location="US")
task = BigQueryTask(
name="my_query",
query_template="SELECT * FROM dataset.table WHERE id = {{ .user_id }}",
plugin_config=config,
inputs={"user_id": int},
)
```
| text/markdown | null | Kevin Su <pingsutw@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flyte[connector]",
"google-cloud-bigquery",
"google-cloud-bigquery-storage"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:54:31.486478 | flyteplugins_bigquery-2.0.0-py3-none-any.whl | 5,214 | 31/d4/13d3a6a12ded05f635ee692122c3ca711dff3de033d7a9ee6f715f7bf05b/flyteplugins_bigquery-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 8fc834284cb9be1637fb6cf1bcc6c5ab | a4586906706b87aa71f7dd8718a1355f1d0fae0cc312438b9ca4829bb95f1b71 | 31d413d3a6a12ded05f635ee692122c3ca711dff3de033d7a9ee6f715f7bf05b | null | [] | 114 |
2.4 | flyteplugins-snowflake | 2.0.0 | Snowflake plugin for flyte | # Snowflake Plugin for Flyte
Run Snowflake SQL queries as Flyte tasks with parameterized inputs, key-pair authentication, batch inserts, and DataFrame support.
## Installation
```bash
pip install flyteplugins-snowflake
```
## Quick start
```python
from flyteplugins.snowflake import Snowflake, SnowflakeConfig
import flyte
config = SnowflakeConfig(
account="myorg-myaccount",
user="flyte_user",
database="ANALYTICS",
schema="PUBLIC",
warehouse="COMPUTE_WH",
)
query = Snowflake(
name="count_users",
query_template="SELECT COUNT(*) FROM users",
plugin_config=config,
snowflake_private_key="snowflake-pk",
)
```
## Authentication
The plugin supports Snowflake [key-pair authentication](https://docs.snowflake.com/en/user-guide/key-pair-auth). Pass secret keys via `snowflake_private_key` (and optionally `snowflake_private_key_passphrase`).
```python
task = Snowflake(
name="my_task",
query_template="SELECT 1",
plugin_config=config,
snowflake_private_key="private-key",
snowflake_private_key_passphrase="passphrase",
# Generates env vars: PRIVATE_KEY, PASSPHRASE
)
```
For other auth methods (password, OAuth, etc.), pass them via `connection_kwargs`:
```python
config = SnowflakeConfig(
account="myorg-myaccount",
user="flyte_user",
database="ANALYTICS",
schema="PUBLIC",
warehouse="COMPUTE_WH",
connection_kwargs={"password": "...", "role": "ADMIN"},
)
```
## Parameterized queries
Use `%(name)s` placeholders and typed `inputs`:
```python
import pandas as pd
lookup = Snowflake(
name="lookup_user",
query_template="SELECT * FROM users WHERE id = %(user_id)s",
plugin_config=config,
inputs={"user_id": int},
output_dataframe_type=pd.DataFrame,
snowflake_private_key="snowflake-pk",
)
```
## Batch inserts
Set `batch=True` to expand list inputs into multi-row `VALUES` clauses:
```python
insert_rows = Snowflake(
name="insert_users",
query_template="INSERT INTO users (id, name, age) VALUES (%(id)s, %(name)s, %(age)s)",
plugin_config=config,
inputs={"id": list[int], "name": list[str], "age": list[int]},
snowflake_private_key="snowflake-pk",
batch=True,
)
# Calling with id=[1,2], name=["Alice","Bob"], age=[30,25] expands to:
# INSERT INTO users (id, name, age) VALUES (%(id_0)s, %(name_0)s, %(age_0)s), (%(id_1)s, %(name_1)s, %(age_1)s)
```
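The placeholder expansion shown in the comment above can be sketched in plain Python; `expand_batch` is an illustrative helper, not the plugin's internal API:

```python
import re

def expand_batch(query: str, inputs: dict[str, list]) -> tuple[str, dict]:
    """Expand a single-row VALUES clause into a multi-row one,
    rewriting %(name)s placeholders to %(name_i)s per row."""
    n_rows = len(next(iter(inputs.values())))
    # Locate the single-row VALUES (...) template at the end of the query
    match = re.search(r"VALUES\s*(\(.*\))\s*$", query, flags=re.IGNORECASE)
    row_template = match.group(1)
    rows, params = [], {}
    for i in range(n_rows):
        # Suffix every placeholder with the row index
        rows.append(re.sub(r"%\((\w+)\)s", rf"%(\g<1>_{i})s", row_template))
        for name, values in inputs.items():
            params[f"{name}_{i}"] = values[i]
    return query[: match.start(1)] + ", ".join(rows), params

query, params = expand_batch(
    "INSERT INTO users (id, name) VALUES (%(id)s, %(name)s)",
    {"id": [1, 2], "name": ["Alice", "Bob"]},
)
```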
## Reading results as DataFrames
Set `output_dataframe_type` to get query results as a pandas DataFrame:
```python
import pandas as pd
select_task = Snowflake(
name="get_users",
query_template="SELECT * FROM users",
plugin_config=config,
output_dataframe_type=pd.DataFrame,
snowflake_private_key="snowflake-pk",
)
```
## Full example
```python
import pandas as pd
from flyteplugins.snowflake import Snowflake, SnowflakeConfig
import flyte
config = SnowflakeConfig(
user="KEVIN",
account="PWGJLTH-XKB21544",
database="FLYTE",
schema="PUBLIC",
warehouse="COMPUTE_WH",
)
insert_task = Snowflake(
name="insert_rows",
inputs={"id": list[int], "name": list[str], "age": list[int]},
plugin_config=config,
query_template="INSERT INTO FLYTE.PUBLIC.TEST (ID, NAME, AGE) VALUES (%(id)s, %(name)s, %(age)s)",
snowflake_private_key="snowflake",
batch=True,
)
select_task = Snowflake(
name="select_all",
output_dataframe_type=pd.DataFrame,
plugin_config=config,
query_template="SELECT * FROM FLYTE.PUBLIC.TEST",
snowflake_private_key="snowflake",
)
snowflake_env = flyte.TaskEnvironment.from_task("snowflake_env", insert_task, select_task)
env = flyte.TaskEnvironment(
name="example_env",
image=flyte.Image.from_debian_base().with_pip_packages("flyteplugins-snowflake"),
secrets=[flyte.Secret(key="snowflake", as_env_var="SNOWFLAKE_PRIVATE_KEY")],
depends_on=[snowflake_env],
)
@env.task
async def main(ids: list[int], names: list[str], ages: list[int]) -> float:
await insert_task(id=ids, name=names, age=ages)
df = await select_task()
return df["AGE"].mean().item()
if __name__ == "__main__":
flyte.init_from_config()
run = flyte.with_runcontext(mode="remote").run(
main, ids=[123, 456], names=["Kevin", "Alice"], ages=[30, 25],
)
print(run.url)
```
| text/markdown | null | Kevin Su <pingsutw@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flyte[connector]",
"snowflake-connector-python[pandas]",
"cryptography"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:54:31.456838 | flyteplugins_snowflake-2.0.0-py3-none-any.whl | 11,248 | 4e/1c/3cba1eeee73f798217ecd4b6361b36f02e30fb81075c79dafb6b1d9effe8/flyteplugins_snowflake-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | a8027326d54488cbc2d99f5c0095bc0c | b4828db2a37c34cadc261e81f120c0a17c2cd3cb0263b94c3e92afdeacf6085c | 4e1c3cba1eeee73f798217ecd4b6361b36f02e30fb81075c79dafb6b1d9effe8 | null | [] | 127 |
2.4 | flyteplugins-wandb | 2.0.0 | Weights & Biases plugin for Flyte | # Weights & Biases Plugin
This plugin provides integration between Flyte and Weights & Biases (W&B) for experiment tracking, including support for distributed training with PyTorch Elastic.
## Quickstart
```python
from flyteplugins.wandb import wandb_init, get_wandb_run
@wandb_init(project="my-project", entity="my-team")
@env.task
def train():
run = get_wandb_run()
run.log({"loss": 0.5, "accuracy": 0.9})
```
## Core concepts
### Decorator order
`@wandb_init` and `@wandb_sweep` must be the **outermost decorators** (applied after `@env.task`):
```python
@wandb_init # Outermost
@env.task # Task decorator
def my_task():
...
```
### Run modes
The `run_mode` parameter controls how W&B runs are created:
- **`"auto"`** (default): Creates a new run if no parent exists, otherwise shares the parent's run
- **`"new"`**: Always creates a new W&B run with a unique ID
- **`"shared"`**: Always shares the parent's run ID (useful for child tasks)
### Accessing the run
Use `get_wandb_run()` to access the current W&B run:
```python
from flyteplugins.wandb import get_wandb_run
run = get_wandb_run()
if run:
run.log({"metric": value})
```
Returns `None` if not within a `@wandb_init` decorated task or if the current rank should not log (in distributed training).
## Distributed training
The plugin automatically detects distributed training environments (PyTorch Elastic) and configures W&B appropriately.
### Environment variables
Distributed training is detected via these environment variables (set by `torchrun`/`torch.distributed.elastic`):
| Variable | Description |
|----------|-------------|
| `RANK` | Global rank of the process |
| `WORLD_SIZE` | Total number of processes |
| `LOCAL_RANK` | Rank within the current node |
| `LOCAL_WORLD_SIZE` | Number of processes per node |
| `GROUP_RANK` | Worker/node index (0, 1, 2, ...) |
### Rank scope
The `rank_scope` parameter controls the granularity of W&B runs in multi-node distributed training:
- **`"global"`** (default): Treat all workers as one unit → **1 run** (or 1 group for `run_mode="new"`)
- **`"worker"`**: Treat each worker/node independently → **N runs** (or N groups for `run_mode="new"`)
The effect of `rank_scope` depends on `run_mode`:
#### run_mode="auto" + rank_scope
```python
# Global scope (default): Only global rank 0 logs → 1 run total
@wandb_init
@multi_node_env.task
def train():
run = get_wandb_run() # Non-None only for global rank 0
...
# Worker scope: Local rank 0 of each worker logs → N runs (1 per worker)
@wandb_init(rank_scope="worker")
@multi_node_env.task
def train():
run = get_wandb_run() # Non-None for local_rank 0 on each worker
...
```
#### run_mode="shared" + rank_scope
```python
# Global scope: All ranks log to 1 shared run
@wandb_init(run_mode="shared")
@multi_node_env.task
def train():
run = get_wandb_run() # All ranks get a run object, all log to same run
...
# Worker scope: All ranks on each worker share a run → N runs total
@wandb_init(run_mode="shared", rank_scope="worker")
@multi_node_env.task
def train():
run = get_wandb_run() # All ranks get a run, grouped by worker
...
```
#### run_mode="new" + rank_scope
```python
# Global scope: Each rank gets own run, all grouped together → N×M runs, 1 group
@wandb_init(run_mode="new")
@multi_node_env.task
def train():
run = get_wandb_run() # Each rank has its own run
# Run IDs: {base}-rank-{global_rank}
...
# Worker scope: Each rank gets own run, grouped per worker → N×M runs, N groups
@wandb_init(run_mode="new", rank_scope="worker")
@multi_node_env.task
def train():
run = get_wandb_run() # Each rank has its own run
# Run IDs: {base}-worker-{idx}-rank-{local_rank}
...
```
### Run modes in distributed context
| run_mode | rank_scope | Who initializes W&B? | W&B Runs | Grouping |
|----------|------------|----------------------|----------|----------|
| `"auto"` | `"global"` | global rank 0 only | 1 | - |
| `"auto"` | `"worker"` | local_rank 0 per worker | N | - |
| `"shared"` | `"global"` | all ranks (shared mode) | 1 | - |
| `"shared"` | `"worker"` | all ranks (shared mode) | N | - |
| `"new"` | `"global"` | all ranks | N×M | 1 group |
| `"new"` | `"worker"` | all ranks | N×M | N groups |
Where N = number of workers/nodes, M = processes per worker.
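The run counts follow mechanically from which ranks initialize W&B; a plain-Python sketch of the table above (illustrative only, not plugin code):

```python
def wandb_run_count(run_mode: str, rank_scope: str,
                    n_workers: int, procs_per_worker: int) -> int:
    """Number of distinct W&B runs created, per the table above."""
    if run_mode == "new":
        return n_workers * procs_per_worker  # every rank gets its own run
    if rank_scope == "worker":
        return n_workers  # one run per worker node
    return 1  # one run for the whole job

# 2 nodes x 4 processes each
counts = {(m, s): wandb_run_count(m, s, 2, 4)
          for m in ("auto", "shared", "new") for s in ("global", "worker")}
```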
### Run ID patterns
| Scenario | Run ID Pattern | Group |
|----------|----------------|-------|
| Single-node auto/shared | `{base}` | - |
| Single-node new | `{base}-rank-{rank}` | `{base}` |
| Multi-node auto (global) | `{base}` | - |
| Multi-node auto (worker) | `{base}-worker-{idx}` | - |
| Multi-node shared (global) | `{base}` | - |
| Multi-node shared (worker) | `{base}-worker-{idx}` | - |
| Multi-node new (global) | `{base}-rank-{global_rank}` | `{base}` |
| Multi-node new (worker) | `{base}-worker-{idx}-rank-{local_rank}` | `{base}-worker-{idx}` |
Where `{base}` = `{run_name}-{action_name}`
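The multi-node patterns can be expressed as a small pure function (an illustration of the table above, not the plugin's implementation):

```python
def wandb_run_id(base: str, run_mode: str, rank_scope: str,
                 global_rank: int, local_rank: int, worker_idx: int) -> str:
    """Build the W&B run ID for one rank, following the patterns above."""
    if run_mode == "new":
        if rank_scope == "worker":
            return f"{base}-worker-{worker_idx}-rank-{local_rank}"
        return f"{base}-rank-{global_rank}"
    # auto / shared: one ID per job, or one per worker
    if rank_scope == "worker":
        return f"{base}-worker-{worker_idx}"
    return base

rid = wandb_run_id("myrun-a0", "new", "worker",
                   global_rank=5, local_rank=1, worker_idx=1)
```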
### Example: Distributed training task
```python
from flyteplugins.wandb import wandb_init, wandb_config, get_wandb_run, get_distributed_info
from flyteplugins.pytorch.task import Elastic
# Multi-node environment (2 nodes, 4 GPUs each)
multi_node_env = flyte.TaskEnvironment(
name="multi_node_env",
resources=flyte.Resources(gpu="V100:4", shm="auto"),
plugin_config=Elastic(nproc_per_node=4, nnodes=2),
secrets=flyte.Secret(key="wandb_api_key", as_env_var="WANDB_API_KEY"),
)
@wandb_init # run_mode="auto", rank_scope="global" by default → 1 run total
@multi_node_env.task
def train_multi_node():
import torch.distributed as dist
dist.init_process_group("nccl")
run = get_wandb_run() # Returns run for global rank 0 only, None for others
dist_info = get_distributed_info()
# Training loop...
if run:
run.log({"loss": loss.item()})
dist.destroy_process_group()
```
### Worker scope for per-worker logging
Use `rank_scope="worker"` when you want each worker/node to have its own W&B run:
```python
@wandb_init(rank_scope="worker") # 1 run per worker
@multi_node_env.task
def train_per_worker():
run = get_wandb_run() # Returns run for local_rank 0 of each worker
if run:
# Each worker logs to its own run
run.log({"loss": loss.item(), "worker": dist_info["worker_index"]})
```
### Shared mode for all-rank logging
Use `run_mode="shared"` when you want all ranks to log to the same W&B run:
```python
@wandb_init(run_mode="shared")
@multi_node_env.task
def train_all_ranks_log():
run = get_wandb_run() # All ranks get a run object
# All ranks can log - W&B handles deduplication
run.log({"loss": loss.item(), "rank": dist.get_rank()})
```
### New mode for per-rank runs
Use `run_mode="new"` when you want each rank to have its own W&B run:
```python
@wandb_init(run_mode="new")
@multi_node_env.task
def train_per_rank():
run = get_wandb_run() # Each rank gets its own run
# Runs are grouped in W&B UI for easy comparison
run.log({"loss": loss.item()})
```
## Configuration
### wandb_config
Use `wandb_config()` to pass configuration that propagates to child tasks:
```python
from flyteplugins.wandb import wandb_config
# With flyte.with_runcontext
run = flyte.with_runcontext(
custom_context=wandb_config(
project="my-project",
entity="my-team",
tags=["experiment-1"],
)
).run(my_task)
# As a context manager
with wandb_config(project="override-project"):
await child_task()
```
### Configuring run_mode and rank_scope
Both `run_mode` and `rank_scope` can be set via decorator or context:
```python
# Via decorator (takes precedence)
@wandb_init(run_mode="shared", rank_scope="worker")
@multi_node_env.task
def train():
...
# Via context (useful for dynamic configuration)
run = flyte.with_runcontext(
custom_context=wandb_config(
project="my-project",
run_mode="shared",
rank_scope="worker",
)
).run(train)
```
When both are specified, **decorator arguments take precedence** over context config.
### Decorator vs context config
| Source | Scope | Use case |
|--------|-------|----------|
| Decorator (`@wandb_init(...)`) | Current task and traces only | Static per-task config |
| Context (`wandb_config(...)`) | Propagates to child tasks | Dynamic/shared config |
Priority order (highest to lowest):
1. Decorator arguments
2. Context config (`wandb_config`)
3. Defaults (`run_mode="auto"`, `rank_scope="global"`)
## W&B links
Tasks decorated with `@wandb_init` or `@wandb_sweep` automatically get W&B links in the Flyte UI:
- With `rank_scope="global"` (default): A single link to the one W&B run
- With `rank_scope="worker"`: Each worker gets its own link
- Links point directly to the corresponding W&B runs or sweeps
- Project/entity are retrieved from decorator parameters or context configuration
## Sweeps
Use `@wandb_sweep` to create W&B sweeps:
```python
from flyteplugins.wandb import wandb_sweep, wandb_sweep_config, get_wandb_sweep_id
@wandb_init
def objective():
# Training logic - this runs for each sweep trial
run = get_wandb_run()
config = run.config # Sweep parameters are passed via run.config
# Train with sweep-suggested hyperparameters
model = train(lr=config.lr, batch_size=config.batch_size)
wandb.log({"loss": loss, "accuracy": accuracy})
@wandb_sweep
@env.task
def run_sweep():
sweep_id = get_wandb_sweep_id()
# Launch sweep agents to run trials
# count=10 means run 10 trials total
wandb.agent(sweep_id, function=objective, count=10)
```
**Note:** A maximum of **20 sweep agents** can be launched at a time.
Configure sweeps with `wandb_sweep_config()`:
```python
run = flyte.with_runcontext(
custom_context=wandb_sweep_config(
method="bayes",
metric={"name": "loss", "goal": "minimize"},
parameters={"lr": {"min": 1e-5, "max": 1e-2}},
project="my-project",
)
).run(run_sweep)
```
## Downloading logs
Set `download_logs=True` to download W&B run/sweep logs after task completion. The download I/O is traced by Flyte's `@flyte.trace`, making the logs visible in the Flyte UI:
```python
@wandb_init(download_logs=True)
@env.task
def train():
...
# Or via context
wandb_config(download_logs=True)
wandb_sweep_config(download_logs=True)
```
The downloaded logs include all files uploaded to W&B during the run (metrics, artifacts, etc.).
## API reference
### Functions
- `get_wandb_run()` - Get the current W&B run object (or `None`)
- `get_wandb_sweep_id()` - Get the current sweep ID (or `None`)
- `get_distributed_info()` - Get distributed training info dict (or `None`)
- `wandb_config(...)` - Create W&B configuration for context
- `wandb_sweep_config(...)` - Create sweep configuration for context
### Decorators
- `@wandb_init` - Initialize W&B for a task or function
- `run_mode`: `"auto"` (default), `"new"`, or `"shared"`
- `rank_scope`: `"global"` (default) or `"worker"` - controls which ranks log in distributed training
- `download_logs`: If `True`, download W&B logs after task completion
- `project`, `entity`: W&B project and entity names
- `@wandb_sweep` - Create a W&B sweep for a task
### Links
- `Wandb` - Link class for W&B runs
- `WandbSweep` - Link class for W&B sweeps
### Types
- `RankScope` - Literal type: `"global"` | `"worker"`
- `RunMode` - Literal type: `"auto"` | `"new"` | `"shared"`
| text/markdown | Flyte Contributors | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"wandb",
"flyte"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:54:29.626960 | flyteplugins_wandb-2.0.0-py3-none-any.whl | 22,004 | 75/b6/ae21ad75c6ce5f18294048d97184aad1363577b1403b5b7f120011cb39d0/flyteplugins_wandb-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 2cb0d5d28b9768fb8101cd7878b7d255 | bb01b2e3d0558c282b1a23e6203402c206de0a4eba2b9e82265d9a0196162624 | 75b6ae21ad75c6ce5f18294048d97184aad1363577b1403b5b7f120011cb39d0 | null | [] | 108 |
2.4 | flyteplugins-polars | 2.0.0 | polars plugin for flyte | # Polars Plugin
This plugin provides native support for **Polars DataFrames and LazyFrames** in Flyte, enabling efficient data processing with Polars' high-performance DataFrame library.
The plugin supports:
- `polars.DataFrame` - Eager evaluation DataFrames
- `polars.LazyFrame` - Lazy evaluation DataFrames for optimized query execution
Both types can be serialized to and deserialized from Parquet format, making them ideal for large-scale data processing workflows.
To install the plugin, run the following command:
```bash
pip install --pre flyteplugins-polars
```
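A short usage sketch, assuming tasks can accept and return `pl.DataFrame` directly as described above; the environment name and image are illustrative:

```python
import flyte
import polars as pl

env = flyte.TaskEnvironment(
    name="polars_env",
    image=flyte.Image.from_debian_base().with_pip_packages("flyteplugins-polars"),
)

@env.task
def make_frame() -> pl.DataFrame:
    # Returned frames travel between tasks as Parquet
    return pl.DataFrame({"x": [1, 2, 3]})

@env.task
def total(df: pl.DataFrame) -> int:
    # LazyFrames are supported too; collect() executes the optimized query
    return int(df.lazy().select(pl.col("x").sum()).collect().item())
```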
| text/markdown | null | Flyte Contributors <admin@flyte.org> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"polars",
"flyte",
"numpy"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:54:29.243211 | flyteplugins_polars-2.0.0-py3-none-any.whl | 4,150 | 42/a8/61fcda6b3bd8b6d58b79bcd99649f93b5e8a6aa17633317588abc1a38b54/flyteplugins_polars-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 1c7c56a7193bd9d3c23d522eea8acdd2 | c534685c1dae3bfff8d8afd1a5262d94afe210cde085ad34a0e18c9b1ebfb789 | 42a861fcda6b3bd8b6d58b79bcd99649f93b5e8a6aa17633317588abc1a38b54 | null | [] | 120 |
2.4 | flyte | 2.0.0 | Add your description here | # Flyte 2 SDK 🚀
**Type-safe, distributed orchestration of agents, ML pipelines, and more — in pure Python with async/await or sync!**
[](https://pypi.org/project/flyte/)
[](https://pypi.org/project/flyte/)
[](LICENSE)
> ⚡ **Pure Python workflows** • 🔄 **Async-first parallelism** • 🛠️ **Zero DSL constraints** • 📊 **Sub-task observability**
## 🌍 Ecosystem & Resources
- **📖 Documentation**: [Docs Link](https://www.union.ai/docs/v2/flyte/user-guide/)
- **▶️ Getting Started**: [Docs Link](https://www.union.ai/docs/v2/flyte/user-guide/getting-started/)
- **💬 Community**: [Slack](https://slack.flyte.org/) | [GitHub Discussions](https://github.com/flyteorg/flyte/discussions)
- **🎓 Examples**: [GitHub Examples](https://github.com/flyteorg/flyte-sdk/tree/main/examples)
- **🐛 Issues**: [Bug Reports](https://github.com/flyteorg/flyte/issues)
## What is Flyte 2?
Flyte 2 represents a fundamental shift from constrained domain-specific languages to **pure Python workflows**. Write data pipelines, ML training jobs, and distributed compute exactly like you write Python—because it *is* Python.
```python
import flyte
env = flyte.TaskEnvironment("hello_world")
@env.task
async def process_data(data: list[str]) -> list[str]:
# Use any Python construct: loops, conditionals, try/except
results = []
for item in data:
if len(item) > 5:
results.append(await transform_item(item))
return results
@env.task
async def transform_item(item: str) -> str:
return f"processed: {item.upper()}"
if __name__ == "__main__":
flyte.init()
result = flyte.run(process_data, data=["hello", "world", "flyte"])
```
## 🌟 Why Flyte 2?
| Feature Highlight | Flyte 1 | Flyte 2 |
|-------------------| ------- | ------- |
| **No More Workflow DSL** | ❌ `@workflow` decorators with Python subset limitations | ✅ **Pure Python**: loops, conditionals, error handling, dynamic structures |
| **Async-First Parallelism** | ❌ Custom `map()` functions and workflow-specific parallel constructs | ✅ **Native `asyncio`**: `await asyncio.gather()` for distributed parallel execution |
| **Fine-Grained Observability** | ❌ Task-level logging only | ✅ **Function-level tracing** with `@flyte.trace` for sub-task checkpoints |
## 🚀 Quick Start
### Installation
```bash
# Install uv package manager
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create virtual environment
uv venv && source .venv/bin/activate
# Install Flyte 2 (beta)
uv pip install --prerelease=allow flyte
```
### Your First Workflow
```python
# hello.py
# /// script
# requires-python = ">=3.10"
# dependencies = ["flyte>=2.0.0b0"]
# ///
import asyncio

import flyte
env = flyte.TaskEnvironment(
name="hello_world",
resources=flyte.Resources(memory="250Mi")
)
@env.task
def calculate(x: int) -> int:
return x * 2 + 5
@env.task
async def main(numbers: list[int]) -> float:
# Parallel execution across distributed containers
results = await asyncio.gather(*[
calculate.aio(num) for num in numbers
])
return sum(results) / len(results)
if __name__ == "__main__":
flyte.init_from_config("config.yaml")
run = flyte.run(main, numbers=list(range(10)))
print(f"Result: {run.result}")
print(f"View at: {run.url}")
```
```bash
# Run locally, execute remotely
uv run --prerelease=allow hello.py
```
## 🏗️ Core Concepts
### **TaskEnvironments**: Container Configuration Made Simple
```python
# Group tasks with shared configuration
env = flyte.TaskEnvironment(
name="ml_pipeline",
image=flyte.Image.from_debian_base().with_pip_packages(
"torch", "pandas", "scikit-learn"
),
resources=flyte.Resources(cpu=4, memory="8Gi", gpu=1),
reusable=flyte.ReusePolicy(replicas=3, idle_ttl=300)
)
@env.task
def train_model(data: flyte.io.File) -> flyte.io.File:
# Runs in configured container with GPU access
pass
@env.task
def evaluate_model(model: flyte.io.File, test_data: flyte.io.File) -> dict:
# Same container configuration, different instance
pass
```
### **Pure Python Workflows**: No More DSL Constraints
```python
@env.task
async def dynamic_pipeline(config: dict) -> list[str]:
results = []
# ✅ Use any Python construct
for dataset in config["datasets"]:
try:
# ✅ Native error handling
if dataset["type"] == "batch":
result = await process_batch(dataset)
else:
result = await process_stream(dataset)
results.append(result)
except ValidationError as e:
# ✅ Custom error recovery
result = await handle_error(dataset, e)
results.append(result)
return results
```
### **Async Parallelism**: Distributed by Default
```python
@env.task
async def parallel_training(hyperparams: list[dict]) -> dict:
# Each model trains on separate infrastructure
models = await asyncio.gather(*[
train_model.aio(params) for params in hyperparams
])
# Evaluate all models in parallel
evaluations = await asyncio.gather(*[
evaluate_model.aio(model) for model in models
])
# Find best model
best_idx = max(range(len(evaluations)),
key=lambda i: evaluations[i]["accuracy"])
return {"best_model": models[best_idx], "accuracy": evaluations[best_idx]}
```
## 🎯 Advanced Features
### **Sub-Task Observability with Tracing**
```python
@flyte.trace
async def expensive_computation(data: str) -> str:
    # Function-level checkpointing - recoverable on failure
    result = await call_external_api(data)
    return process_result(result)

@env.task(cache=flyte.Cache(behavior="auto"))
async def main_task(inputs: list[str]) -> list[str]:
    results = []
    for inp in inputs:
        # If task fails here, it resumes from the last successful trace
        result = await expensive_computation(inp)
        results.append(result)
    return results
```
### **Remote Task Execution**
```python
import flyte.remote

# Remote tasks deployed elsewhere
torch_task = flyte.remote.Task.get("torch_env.train_model", auto_version="latest")
spark_task = flyte.remote.Task.get("spark_env.process_data", auto_version="latest")

@env.task
async def orchestrator(raw_data: flyte.io.File) -> flyte.io.File:
    # Execute Spark job on big data cluster
    processed = await spark_task(raw_data)
    # Execute PyTorch training on GPU cluster
    model = await torch_task(processed)
    return model
```
## 📊 Native Jupyter Integration
Run and monitor workflows directly from notebooks:
```python
# In Jupyter cell
import flyte
flyte.init_from_config()
run = flyte.run(my_workflow, data=large_dataset)
# Stream logs in real-time
run.logs.stream()
# Get outputs when complete
results = run.wait()
```
## 🔧 Configuration & Deployment
### Configuration File
```yaml
# config.yaml
endpoint: https://my-flyte-instance.com
project: ml-team
domain: production
image:
  builder: local
  registry: ghcr.io/my-org
auth:
  type: oauth2
```
### Deploy and Run
```bash
# Deploy tasks to remote cluster
flyte deploy my_workflow.py
# Run deployed workflow
flyte run my_workflow --input-file params.json
# Monitor execution
flyte logs <execution-id>
```
## Migration from Flyte 1
| Flyte 1 | Flyte 2 |
|---------|---------|
| `@workflow` + `@task` | `@env.task` only |
| `flytekit.map()` | `await asyncio.gather()` |
| `@dynamic` workflows | Regular `@env.task` with loops |
| `flytekit.conditional()` | Python `if/else` |
| `LaunchPlan` schedules | `@env.task(on_schedule=...)` |
| Workflow failure handlers | Python `try/except` |
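The `flytekit.map()` row boils down to an ordinary `asyncio` fan-out. A minimal sketch of the pattern with plain coroutines standing in for decorated tasks (function names are hypothetical, and no Flyte imports are needed to see the shape — on a real deployment each awaited task would run on its own container):

```python
import asyncio

async def train(params: dict) -> int:
    # Stand-in for an @env.task; returns a fake score derived from the input.
    return params["epochs"] * 2

async def sweep(grid: list[dict]) -> list[int]:
    # flytekit.map() becomes a plain asyncio.gather fan-out over task calls.
    return await asyncio.gather(*(train(p) for p in grid))

scores = asyncio.run(sweep([{"epochs": 1}, {"epochs": 2}, {"epochs": 3}]))
```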
## 🤝 Contributing
We welcome contributions! Whether it's:
- 🐛 **Bug fixes**
- ✨ **New features**
- 📚 **Documentation improvements**
- 🧪 **Testing enhancements**
### Setup & Iteration Cycle
To get started, create a fresh virtual environment and install this package in editable mode, using any supported Python version (3.10 through 3.13).
```bash
uv venv --python 3.13
uv pip install -e .
```
Besides picking up local code changes, installing the package in editable mode
also changes the definition of the default `Image()` object to use a locally
built wheel. You will need to build that wheel yourself with the `make dist` target.
```bash
make dist
python maint_tools/build_default_image.py
```
You'll need a local Docker daemon running for this. The build script does nothing
more than invoke the local image builder, which will create a buildx builder named `flytex` if one is not present. Note that only members of the `Flyte Maintainers` group have
access to push to the default registry. If you don't have access, specify the
registry and image name when invoking the build script.
```bash
python maint_tools/build_default_image.py --registry ghcr.io/my-org --name my-flyte-image
```
## 📄 License
Flyte 2 is licensed under the [Apache 2.0 License](LICENSE).
| text/markdown | null | Ketan Umare <kumare3@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiofiles>=24.1.0",
"click>=8.2.1",
"cloudpickle>=3.1.1",
"docstring_parser>=0.16",
"fsspec>=2025.3.0",
"grpcio>=1.71.0",
"obstore>=0.7.3",
"protobuf>=6.30.1",
"pydantic>=2.10.6",
"pyyaml>=6.0.2",
"rich-click==1.8.9",
"httpx<1.0.0,>=0.28.1",
"keyring>=25.6.0",
"msgpack>=1.1.0",
"toml>=0.... | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:54:28.673106 | flyte-2.0.0-py3-none-any.whl | 533,292 | 99/5c/4edb784382a83d3953e7762bb67d53c8dda4c8bbc9180d3d088edeec7057/flyte-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 484aefb1fb36a678c012287bc1702565 | 22e06f5df3275218141ae1b81142158a5297154a88c9fd9aed10f2efaa8e674d | 995c4edb784382a83d3953e7762bb67d53c8dda4c8bbc9180d3d088edeec7057 | null | [
"LICENSE"
] | 23,436 |
2.4 | flyteplugins-vllm | 2.0.0 | vLLM plugin for flyte | # Union vLLM Plugin
Serve large language models using vLLM with Flyte Apps.
This plugin provides the `VLLMAppEnvironment` class for deploying and serving LLMs using [vLLM](https://docs.vllm.ai/).
## Installation
```bash
pip install --pre flyteplugins-vllm
```
## Usage
```python
import flyte
import flyte.app
from flyteplugins.vllm import VLLMAppEnvironment

# Define the vLLM app environment
vllm_app = VLLMAppEnvironment(
    name="my-llm-app",
    model="s3://your-bucket/models/your-model",
    model_id="your-model-id",
    resources=flyte.Resources(cpu="4", memory="16Gi", gpu="L40s:1"),
    stream_model=True,  # Stream model directly from blob store to GPU
    scaling=flyte.app.Scaling(
        replicas=(0, 1),
        scaledown_after=300,
    ),
)

if __name__ == "__main__":
    flyte.init_from_config()
    app = flyte.serve(vllm_app)
    print(f"Deployed vLLM app: {app.url}")
```
## Features
- **Streaming Model Loading**: Stream model weights directly from object storage to GPU memory, reducing startup time and disk requirements.
- **OpenAI-Compatible API**: The deployed app exposes an OpenAI-compatible API for chat completions.
- **Auto-scaling**: Configure scaling policies to scale up/down based on traffic.
- **Tensor Parallelism**: Support for distributed inference across multiple GPUs.
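Because the deployed app speaks the OpenAI chat-completions protocol, any OpenAI-compatible client can call it. A sketch of the request you would send — the base URL here is a placeholder for the `app.url` printed at deploy time, and the `/v1/chat/completions` path follows the usual OpenAI convention (an assumption, not verified against this plugin):

```python
import json

# Hypothetical deployment URL; use the real `app.url` from flyte.serve().
base_url = "https://my-llm-app.example.com"
endpoint = f"{base_url}/v1/chat/completions"  # standard OpenAI-compatible path

payload = {
    "model": "your-model-id",  # should match the model_id given to VLLMAppEnvironment
    "messages": [{"role": "user", "content": "Summarize vLLM in one sentence."}],
    "max_tokens": 64,
}
body = json.dumps(payload)  # POST this JSON body to the endpoint
```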
| text/markdown | null | Niels Bantilan <cosmicbboy@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flyte>=2.0.0b43"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:54:28.189656 | flyteplugins_vllm-2.0.0-py3-none-any.whl | 7,902 | 59/8e/35a3fbd86232914824c5657d190cb72d0daa98e103a2a950cff500dff581/flyteplugins_vllm-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 4862d51a9a0a6235126ccda07156b77b | 59ae4988b713f2078c6d3fcf569547e11adbaf20a40328215e51e196b3874baf | 598e35a3fbd86232914824c5657d190cb72d0daa98e103a2a950cff500dff581 | null | [] | 113 |
2.4 | internetarchive | 5.8.0 | A Python interface to archive.org. | A Python and Command-Line Interface to Archive.org
==================================================
|tox|
|versions|
|downloads|
|contributors|
.. |tox| image:: https://github.com/jjjake/internetarchive/actions/workflows/tox.yml/badge.svg
:target: https://github.com/jjjake/internetarchive/actions/workflows/tox.yml
.. |versions| image:: https://img.shields.io/pypi/pyversions/internetarchive.svg
:target: https://pypi.org/project/internetarchive
.. |downloads| image:: https://static.pepy.tech/badge/internetarchive/month
:target: https://pepy.tech/project/internetarchive
.. |contributors| image:: https://img.shields.io/github/contributors/jjjake/internetarchive.svg
:target: https://github.com/jjjake/internetarchive/graphs/contributors
This package installs a command-line tool named ``ia`` for using Archive.org from the command-line.
It also installs the ``internetarchive`` Python module for programmatic access to archive.org.
Please report all bugs and issues on `Github <https://github.com/jjjake/internetarchive/issues>`__.
SECURITY NOTICE
_______________
**Please upgrade to v5.4.2+ immediately.** Versions <=5.4.1 contain a critical directory traversal vulnerability in the ``File.download()`` method. `See the changelog for details <https://github.com/jjjake/internetarchive/blob/master/HISTORY.rst>`_. Thank you to Pengo Wray for their contributions in identifying and resolving this issue.
Installation
------------
You can install this module via `pipx <https://pipx.pypa.io/stable/>`_:
.. code:: bash
$ pipx install internetarchive
Binaries of the command-line tool are also available:
.. code:: bash
$ curl -LO https://archive.org/download/ia-pex/ia
$ chmod +x ia
$ ./ia --help
Unsupported Installation Methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**This library must only be installed via** `one of the supported methods <https://archive.org/developers/internetarchive/installation.html>`_ **(i.e.** ``pip``, ``pipx``, **or from source).**
Installation via third-party package managers like Homebrew, MacPorts, or Linux system packages (apt, yum, etc.) is **not supported**. These versions are often severely outdated, incompatible, and broken.
If you have installed this software via Homebrew, please uninstall it (`brew uninstall internetarchive`) and use a supported method.
Documentation
-------------
Documentation is available at `https://archive.org/services/docs/api/internetarchive <https://archive.org/services/docs/api/internetarchive>`_.
Contributing
------------
All contributions are welcome and appreciated. Please see `https://archive.org/services/docs/api/internetarchive/contributing.html <https://archive.org/services/docs/api/internetarchive/contributing.html>`_ for more details.
| text/x-rst | Jacob M. Johnson | jake@archive.org | null | null | AGPL-3.0 | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"P... | [] | https://github.com/jjjake/internetarchive | null | >=3.9 | [] | [] | [] | [
"jsonpatch>=0.4",
"requests<3.0.0,>=2.25.0",
"tqdm>=4.0.0",
"urllib3>=1.26.0",
"importlib-metadata>=3.6.0; python_version <= \"3.10\"",
"black; extra == \"all\"",
"mypy; extra == \"all\"",
"pre-commit; extra == \"all\"",
"pytest; extra == \"all\"",
"safety; extra == \"all\"",
"setuptools; extra ... | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.0 | 2026-02-18T22:54:17.196527 | internetarchive-5.8.0.tar.gz | 135,263 | 68/e6/594ecd2469fca16f92e9538c40c010c87cc1d5b166dea67d266b405b08e3/internetarchive-5.8.0.tar.gz | source | sdist | null | false | 6a7cadea1635870680e939a1bc9099cf | 0dff14d51bafa3da587f32a8cb6478327e20c2d040a61fbfb4ad30d592062cc2 | 68e6594ecd2469fca16f92e9538c40c010c87cc1d5b166dea67d266b405b08e3 | null | [
"LICENSE"
] | 18,135 |
2.4 | falcon-sbi | 0.3.0 | CLI-driven framework for simulation-based inference with large simulators | # Falcon
[](https://github.com/cweniger/falcon/actions/workflows/tests.yml)
[](https://codecov.io/gh/cweniger/falcon)
[](https://cweniger.github.io/falcon/)
Falcon is a CLI-driven Python framework for **simulation-based inference (SBI)** with large, expensive simulators. Born in astrophysics, built for any domain with complex forward models — break your model into components and Falcon jointly infers their parameters.
- **Composable** — define multi-component models as a graph of simulators in YAML, each wrapped with a thin Python interface, regardless of framework.
- **Adaptive** — steers simulations toward high-posterior regions as training progresses, focusing compute where it matters.
- **Concurrent** — trains neural posterior estimators across heterogeneous parameter blocks in parallel, using Ray for distributed execution.
- **Batteries included** — ships with neural spline flows, data embeddings (including CNN/transformer support), and built-in experiment tracking via WandB.
## Installation
```bash
pip install falcon-sbi
```
For development:
```bash
git clone https://github.com/cweniger/falcon.git
cd falcon
pip install -e ".[monitor]"
```
## Quick Start
Run the minimal example (a 3-parameter Gaussian inference problem):
```bash
cd examples/01_minimal
falcon launch --run-dir outputs/run_01
falcon sample posterior --run-dir outputs/run_01
```
This trains a neural posterior estimator on simulated data, then draws 1000 posterior samples. Results are saved under `outputs/run_01/`.
## How It Works
You define a directed graph of random variables in `config.yml`. Each node has a **simulator** (forward model) and optionally an **estimator** (learned posterior). Falcon iterates between simulating data and training the estimator, automatically managing the sample buffer.
```yaml
graph:
  z: # Latent parameters
    evidence: [x]
    simulator:
      _target_: falcon.priors.Hypercube
      priors:
        - ['uniform', -5.0, 5.0]
    estimator:
      _target_: falcon.estimators.Flow
  x: # Observations
    parents: [z]
    simulator:
      _target_: model.Simulate
    observed: "./data/obs.npz['x']"
```
## CLI
```bash
falcon launch [--run-dir DIR] [--config-name NAME] [key=value ...]
falcon sample prior|posterior|proposal --run-dir DIR
falcon graph # Visualize graph structure
falcon monitor # Real-time TUI dashboard (requires pip install "falcon[monitor]")
```
## Examples
| Example | Description |
|---------|-------------|
| [`01_minimal`](examples/01_minimal) | Basic 3-parameter inference |
| [`02_bimodal`](examples/02_bimodal) | 10D bimodal posterior with training strategies |
| [`03_composite`](examples/03_composite) | Multi-node graph with image embeddings |
| [`04_gaussian`](examples/04_gaussian) | Gaussian inference |
| [`05_linear_regression`](examples/05_linear_regression) | Linear regression |
## Documentation
For tutorials, configuration reference, and API docs, see **[cweniger.github.io/falcon](https://cweniger.github.io/falcon/)**.
## Citation
```bibtex
@software{falcon2024,
title = {Falcon: Distributed Dynamic Simulation-Based Inference},
author = {Weniger, Christoph},
year = {2024},
url = {https://github.com/cweniger/falcon}
}
```
## License
MIT License - see [LICENSE](LICENSE) for details.
| text/markdown | Christoph Weniger | null | null | null | null | simulation-based-inference, sbi, bayesian, pytorch | [
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"torch>=2.0.0",
"numpy>=1.16",
"ray>=2.0",
"omegaconf>=2.1",
"coolname>=1.0",
"rich>=13.0",
"blessed>=1.20",
"sbi>=0.18; extra == \"sbi\"",
"wandb>=0.15.0; extra == \"wandb\"",
"textual>=0.40.0; extra == \"monitor\"",
"sbi>=0.18; extra == \"all\"",
"wandb>=0.15.0; extra == \"all\"",
"textual... | [] | [] | [] | [
"Homepage, https://github.com/cweniger/falcon",
"Documentation, https://cweniger.github.io/falcon/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:54:03.350047 | falcon_sbi-0.3.0.tar.gz | 9,757,132 | 36/14/e950b54de2a19660f723b1e7048896dd935c3a921a94cc2ff78e0faff31f/falcon_sbi-0.3.0.tar.gz | source | sdist | null | false | 827ccaf39d0b3a848b7980620b3481a1 | 5a86096d051eb68267557e81ad671cb14650f65e29ec9946338b30544e2a4b60 | 3614e950b54de2a19660f723b1e7048896dd935c3a921a94cc2ff78e0faff31f | Apache-2.0 | [
"LICENSE"
] | 252 |
2.4 | flake8-type-checking | 3.2.0 | A flake8 plugin for managing type-checking imports & forward references | [](https://pypi.org/project/flake8-type-checking/)
[](https://github.com/snok/flake8-type-checking/actions/workflows/testing.yml)
[](https://pypi.org/project/flake8-type-checking/)
[](http://mypy-lang.org/)
[](https://codecov.io/gh/snok/flake8-type-checking)
# flake8-type-checking
Lets you know which imports to move in or out of
[type-checking](https://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING) blocks.
The plugin assumes that the imports you only use for type hinting
*are not* required at runtime. When imports aren't strictly required at runtime, it means we can guard them.
Guarding imports provides 3 major benefits:
- 🔧 It reduces import circularity issues,
- 🧹 It organizes imports, and
- 🚀 It completely eliminates the overhead of type hint imports at runtime
<br>
Essentially, this code:
```python
import pandas # 15mb library
x: pandas.DataFrame
```
becomes this:
```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import pandas  # <-- no longer imported at runtime

x: "pandas.DataFrame"
```
More examples can be found in the [examples](#examples) section.
<br>
If you're using [pydantic](https://pydantic-docs.helpmanual.io/),
[fastapi](https://fastapi.tiangolo.com/), [cattrs](https://github.com/python-attrs/cattrs),
or [injector](https://github.com/python-injector/injector)
see the [configuration](#configuration) for how to enable support.
## Primary features
The plugin will:
- Tell you when an import should be moved into a type-checking block
- Tell you when an import should be moved out again
And depending on which error code range you've opted into, it will tell you
- Whether you need to add a `from __future__ import annotations` import
- Whether you need to quote an annotation
- Whether you can unquote a quoted annotation
## Error codes
| Code | Description |
|-------|-------------------------------------------------------------------------------------------|
| TC001 | Move application import into a type-checking block |
| TC002 | Move third-party import into a type-checking block |
| TC003 | Move built-in import into a type-checking block |
| TC004 | Move import out of type-checking block. Import is used for more than type hinting. |
| TC005 | Found empty type-checking block |
| TC006 | Annotation in typing.cast() should be a string literal |
| TC007 | Type alias needs to be made into a string literal |
| TC008 | Type alias does not need to be a string literal |
| TC009 | Move declaration out of type-checking block. Variable is used for more than type hinting. |
| TC010 | Operands for \| cannot be a string literal                                               |
## Choosing how to handle forward references
You need to choose whether to opt into using the
`TC100`- or the `TC200`-range of error codes.
They represent two different ways of solving the same problem, so please only choose one.
`TC100` and `TC101` manage forward references by taking advantage of
[postponed evaluation of annotations](https://www.python.org/dev/peps/pep-0563/).
| Code | Description |
|-------|-----------------------------------------------------|
| TC100 | Add 'from \_\_future\_\_ import annotations' import |
| TC101 | Annotation does not need to be a string literal |
`TC200` and `TC201` manage forward references using [string literals](https://www.python.org/dev/peps/pep-0484/#forward-references).
| Code | Description |
|-------|-----------------------------------------------------|
| TC200 | Annotation needs to be made into a string literal |
| TC201 | Annotation does not need to be a string literal |
## Enabling error ranges
Add `TC` and `TC1` or `TC2` to your flake8 config like this:
```ini
[flake8]
max-line-length = 80
max-complexity = 12
...
ignore = E501
# You can use 'extend-select' (new in flake8 v4):
extend-select = TC, TC2
# OR 'select':
select = C,E,F..., TC, TC2 # or TC1
# OR 'enable-extensions':
enable-extensions = TC, TC2 # or TC1
```
If you are unsure which `TC` range to pick, see the [rationale](#rationale) for more info.
## Installation
```shell
pip install flake8-type-checking
```
## Configuration
These options are configurable, and can be set in your flake8 config.
### Python 3.14+
If your code targets Python 3.14+, you no longer need to wrap
annotations in quotes or add a future import. In this case it's
recommended to add `type-checking-py314plus = true` to your flake8
configuration and select the `TC1` rules.
- **setting name**: `type-checking-py314plus`
- **type**: `bool`
```ini
[flake8]
type-checking-py314plus = true # default false
```
### Typing modules
If you re-export `typing` or `typing_extensions` members from a compatibility
module, you will need to specify them here in order for inference to work
correctly for special forms like `Literal` or `Annotated`.
If you use relative imports for the compatibility module in your code-base
you will need to add separate entries for each kind of relative import you
use.
- **setting name**: `type-checking-typing-modules`
- **type**: `list`
```ini
[flake8]
type-checking-typing-modules = mylib.compat, .compat, ..compat # default []
```
### Exempt modules
If you wish to exempt certain modules from
needing to be moved into type-checking blocks, you can specify which
modules to ignore.
- **setting name**: `type-checking-exempt-modules`
- **type**: `list`
```ini
[flake8]
type-checking-exempt-modules = typing_extensions # default []
```
### Strict
The plugin, by default, will report TC00[1-3] errors
for imports if there aren't already other imports from the same module.
When there are other imports from the same module,
the import circularity and performance benefits no longer
apply from guarding an import.
When strict mode is enabled, the plugin will flag all
imports that *can* be moved.
- **setting name**: `type-checking-strict`
- **type**: `bool`
```ini
[flake8]
type-checking-strict = true # default false
```
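To make the difference concrete, here is a hypothetical snippet where the two modes disagree. `OrderedDict` is needed at runtime, so in default mode the typing-only `abc` import from the same `collections` module is not flagged (guarding it alone would bring no circularity or performance benefit); with strict mode enabled, the plugin would still suggest moving `abc` into a type-checking block:

```python
from collections import OrderedDict, abc  # both imported from the same module

def make_config(defaults: abc.Mapping) -> OrderedDict:
    # OrderedDict is used at runtime; abc appears only in the annotation.
    return OrderedDict(defaults)
```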
### Force `from __future__ import annotations` import
The plugin, by default, will only report a TC100 error, if annotations
contain references to typing only symbols. If you want to enforce a more
consistent style and use a future import in every file that makes use
of annotations, you can enable this setting.
When `force-future-annotation` is enabled, the plugin will flag all
files that contain annotations but no future import.
- **setting name**: `type-checking-force-future-annotation`
- **type**: `bool`
```ini
[flake8]
type-checking-force-future-annotation = true # default false
```
### Pydantic support
If you use Pydantic models in your code, you should enable Pydantic support.
This will treat any class variable annotation as being needed during runtime.
- **name**: `type-checking-pydantic-enabled`
- **type**: `bool`
```ini
[flake8]
type-checking-pydantic-enabled = true # default false
```
### Pydantic support base-class passlist
Disabling checks for all class annotations is a little aggressive.
If you feel comfortable that all base classes named, e.g., `NamedTuple` are *not* Pydantic models,
then you can pass the names of the base classes in this setting, to re-enable checking for classes
which inherit from them.
- **name**: `type-checking-pydantic-enabled-baseclass-passlist`
- **type**: `list`
```ini
[flake8]
type-checking-pydantic-enabled-baseclass-passlist = NamedTuple, TypedDict # default []
```
### FastAPI support
If you're using the plugin for a FastAPI project,
you should enable support. This will treat the annotations
of any decorated function as needed at runtime.
Enabling FastAPI support will also enable Pydantic support.
- **name**: `type-checking-fastapi-enabled`
- **type**: `bool`
```ini
[flake8]
type-checking-fastapi-enabled = true # default false
```
One more thing to note for FastAPI users is that dependencies
(functions used in `Depends`) will produce false positives, unless
you enable dependency support as described below.
### FastAPI dependency support
In addition to preventing false positives for decorators, we *can*
prevent false positives for dependencies. We are making a pretty bad
trade-off however: by enabling this option we treat every annotation
in every function definition across your entire project as a possible
dependency annotation. In other words, we stop linting all function
annotations completely, to avoid the possibility of false positives.
If you prefer to be on the safe side, you should enable this - otherwise
it might be enough to be aware that false positives can happen for functions
used as dependencies.
Enabling dependency support will also enable FastAPI and Pydantic support.
- **name**: `type-checking-fastapi-dependency-support-enabled`
- **type**: `bool`
```ini
[flake8]
type-checking-fastapi-dependency-support-enabled = true # default false
```
### SQLAlchemy 2.0+ support
If you're using SQLAlchemy 2.0+, you can enable support.
This will treat any `Mapped[...]` types as needed at runtime.
It will also specially treat the enclosed type, which may or may not
need to be available at runtime, depending on whether the enclosed
type is a model, since models can have circular dependencies.
- **name**: `type-checking-sqlalchemy-enabled`
- **type**: `bool`
```ini
[flake8]
type-checking-sqlalchemy-enabled = true # default false
```
### SQLAlchemy 2.0+ support mapped dotted names
Since it's possible to create subclasses of `sqlalchemy.orm.Mapped` that
define custom behavior for the mapped attribute but otherwise still
behave like any other mapped attribute (i.e. the same runtime restrictions
apply), you can provide additional dotted names that should be treated
like subclasses of `Mapped`. By default we check for `sqlalchemy.orm.Mapped`,
`sqlalchemy.orm.DynamicMapped` and `sqlalchemy.orm.WriteOnlyMapped`.
If there's more than one import path for the same `Mapped` subclass, then you
need to specify each of them as a separate dotted name.
- **name**: `type-checking-sqlalchemy-mapped-dotted-names`
- **type**: `list`
```ini
[flake8]
type-checking-sqlalchemy-mapped-dotted-names = a.MyMapped, a.b.MyMapped # default []
```
### Cattrs support
If you're using the plugin in a project which uses `cattrs`,
you can enable support. This will treat the annotations
of any decorated `attrs` class as needed at runtime, since
`cattrs.unstructure` calls will fail when loading
classes where types are not available at runtime.
Note: the cattrs support setting does not yet detect and
ignore class var annotations on dataclasses or other non-attrs class types.
This can be added in the future if needed.
- **name**: `type-checking-cattrs-enabled`
- **type**: `bool`
```ini
[flake8]
type-checking-cattrs-enabled = true # default false
```
### Injector support
If you're using the injector library, you can enable support.
This will treat any `Inject[Dependency]` types as needed at runtime.
- **name**: `type-checking-injector-enabled`
- **type**: `bool`
```ini
[flake8]
type-checking-injector-enabled = true # default false
```
## Rationale
Why did we create this plugin?
Good type hinting typically requires a lot of project imports, which can increase
the risk of [import cycles](https://mypy.readthedocs.io/en/stable/runtime_troubles.html?#import-cycles)
in a project. The recommended way of preventing this problem is to use `typing.TYPE_CHECKING` blocks
to guard these types of imports. In particular, `TC001` helps protect against this issue.
Once imports are guarded, they will no longer be evaluated/imported during runtime. The
consequence of this is that these imports can no longer be treated as if they
were imported outside the block. Instead we need to use [forward references](https://www.python.org/dev/peps/pep-0484/#forward-references).
For Python version `>= 3.7`, there are actually two ways of solving this issue.
You can either make your annotations string literals, or you can use a `__future__` import to enable [postponed evaluation of annotations](https://www.python.org/dev/peps/pep-0563/).
See [this](https://stackoverflow.com/a/55344418/8083459) excellent stackoverflow answer
for a great explanation of the differences.
## Examples
<details>
<summary><b>Performance example</b></summary>
Imports for type hinting can have a performance impact.
```python
import pandas

def dataframe_length(df: pandas.DataFrame) -> int:
    return len(df)
```
In this example, we import a 15mb library, for a single type hint.
We don't need to perform this operation at runtime, *at all*.
If we know that the import will not otherwise be needed by surrounding code,
we can simply guard it, like this:
```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import pandas  # <-- no longer imported at runtime

def dataframe_length(df: "pandas.DataFrame") -> int:
    return len(df)
```
Now the import is no longer made at runtime. If you're unsure about how this works, see the [mypy docs](https://mypy.readthedocs.io/en/stable/runtime_troubles.html?#typing-type-checking) for a basic introduction.
</details>
<details>
<summary><b>Import circularity example</b></summary>
**Bad code**
`models/a.py`
```python
from models.b import B

class A(Model):
    def foo(self, b: B): ...
```
`models/b.py`
```python
from models.a import A

class B(Model):
    def bar(self, a: A): ...
```
Will result in these errors
```shell
>> a.py: TC002 Move third-party import 'models.b.B' into a type-checking block
>> b.py: TC002 Move third-party import 'models.a.A' into a type-checking block
```
and consequently trigger these errors if imports are purely moved into type-checking block, without proper forward reference handling
```shell
>> a.py: TC100 Add 'from __future__ import annotations' import
>> b.py: TC100 Add 'from __future__ import annotations' import
```
or
```shell
>> a.py: TC200 Annotation 'B' needs to be made into a string literal
>> b.py: TC200 Annotation 'A' needs to be made into a string literal
```
**Good code**
`models/a.py`
```python
# TC1
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from models.b import B

class A(Model):
    def foo(self, b: B): ...
```
or
```python
# TC2
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from models.b import B

class A(Model):
    def foo(self, b: 'B'): ...
```
`models/b.py`
```python
# TC1
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from models.a import A

class B(Model):
    def bar(self, a: A): ...
```
or
```python
# TC2
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from models.a import A

class B(Model):
    def bar(self, a: 'A'): ...
```
</details>
<details>
<summary><b>Examples from the wild</b></summary>
Here are a few examples of public projects that use `flake8-type-checking`:
- [Example from the Poetry codebase](https://github.com/python-poetry/poetry/blob/714c09dd845c58079cff3f3cbedc114dff2194c9/src/poetry/factory.py#L1:L33)
- [Example from the asgi-correlation-id codebase](https://github.com/snok/asgi-correlation-id/blob/main/asgi_correlation_id/middleware.py#L1:L12)
</details>
## Running the plugin as a pre-commit hook
You can run this flake8 plugin as a [pre-commit](https://github.com/pre-commit/pre-commit) hook:
```yaml
- repo: https://github.com/pycqa/flake8
  rev: 4.0.1
  hooks:
    - id: flake8
      additional_dependencies:
        - flake8-type-checking
```
## Contributing
Please feel free to open an issue or a PR 👏
| text/markdown | Sondre Lillebø Gundersen | sondrelg@live.no | null | null | BSD-3-Clause | flake8, plugin, linting, type hint, typing, imports | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",... | [] | null | null | >=3.10 | [] | [] | [] | [
"classify-imports",
"flake8"
] | [] | [] | [] | [
"Homepage, https://github.com/snok",
"Repository, https://github.com/snok/flake8-type-checking",
"Releases, https://github.com/snok/flake8-type-checking/releases"
] | poetry/2.3.2 CPython/3.10.19 Linux/6.14.0-1017-azure | 2026-02-18T22:53:06.605090 | flake8_type_checking-3.2.0-py3-none-any.whl | 32,320 | 5f/ca/012947455b483dd89ccf1bd5f556a2edc4b2351b886b8cf938543fe8e708/flake8_type_checking-3.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | aa50003aa33ec5ad2e7f2e11486f0e05 | 3475f77c03a414725a2d5cb11b8308514fab4d568b06025981742dd36cde9dc7 | 5fca012947455b483dd89ccf1bd5f556a2edc4b2351b886b8cf938543fe8e708 | null | [
"LICENSE"
] | 3,339 |
2.4 | fairyex | 0.2.2 | FairyEx - FEErical Extraction | # FairyEx
FairyEx (for FEE Extraction) is a Python package for extracting data from ZIP solution files.
## Magical extraction
- Fast: focus on speed
- Efficient: low memory usage
- Easy: to install and to use
## Quickstart
```bash
pip install fairyex
```
```python
from fairyex import DarkSol

with DarkSol("Model Open World Solution.zip") as ds:
    df = ds.query(
        phase="STSchedule",
        children_class="Generator",
        children=ds.query_children("Generator"),
        properties=["Generation"],
        samples=["1", "2", "3"],
    )
```
| text/markdown | Harry Ung | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"pandas",
"polars",
"pyarrow"
] | [] | [] | [] | [
"Documentation, https://codacave.gitlab.io/fairyex/",
"Repository, https://gitlab.com/CodaCave/fairyex",
"Changelog, https://gitlab.com/CodaCave/fairyex/-/releases"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T22:52:41.704747 | fairyex-0.2.2.tar.gz | 10,525 | 63/04/1e30e26f5365be3f13aa5d08c2bacc9e830aba1eee98199b75fec3345282/fairyex-0.2.2.tar.gz | source | sdist | null | false | c413e61f3e84f9d3274272b3844ab386 | f301c563d0db18fc9f4db9638b5cbeda90be7c821750e80e684119862768360e | 63041e30e26f5365be3f13aa5d08c2bacc9e830aba1eee98199b75fec3345282 | null | [] | 246 |
2.4 | markmap-anywidget | 0.2.0 | Interactive mindmaps from markdown using markmap | # markmap-anywidget
A simple [anywidget](https://github.com/manzt/anywidget) implementation of [markmap](https://markmap.js.org/) for Jupyter and marimo notebooks.
## Installation
**PyPI:**
```bash
pip install markmap-anywidget
```
**Nix:**
```nix
# your flake.nix
{
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
markmap-anywidget.url = "github:daniel-fahey/markmap-anywidget";
};
outputs = { nixpkgs, markmap-anywidget, ... }:
let
system = "x86_64-linux";
pkgs = nixpkgs.legacyPackages.${system};
in {
devShells.${system}.default = pkgs.mkShell {
packages = [
(pkgs.python3.withPackages (ps: [
markmap-anywidget.packages.${system}.default
]))
];
};
};
}
```
## Usage
See the [`marimo` documentation](https://docs.marimo.io/api/inputs/anywidget/) for more information.
```python
import marimo as mo
from markmap_anywidget import MarkmapWidget
widget = mo.ui.anywidget(
MarkmapWidget(
        markdown_content=r"""---
markmap:
colorFreezeLevel: 2
maxWidth: 300
---
# markmap
## Links
- [Website](https://markmap.js.org/)
- [GitHub](https://github.com/gera2ld/markmap)
## Features
- `inline code`
- **strong** and *italic*
- Katex: $x = {-b \pm \sqrt{b^2-4ac} \over 2a}$
"""
)
)
widget
```
## Development
```bash
git clone git@github.com:daniel-fahey/markmap-anywidget.git
cd markmap-anywidget
nix develop -c bun run build # rebuild static assets
nix develop -c marimo run examples/marimo.py # test widget
```
Tests run automatically in the Nix build.
## Releasing
```bash
# Update version
echo "0.3.0" > VERSION
nix develop -c bun run build
# Commit and tag
git add -A && git commit -m "v0.3.0"
git tag v0.3.0 && git push --tags
# Create release on GitHub (publishes to PyPI automatically)
```
| text/markdown | null | Daniel Fahey <daniel.fahey@proton.me> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"anywidget>=0.9.18"
] | [] | [] | [] | [
"Homepage, https://github.com/daniel-fahey/markmap-anywidget"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:51:42.729014 | markmap_anywidget-0.2.0.tar.gz | 849,834 | 35/aa/2e534ce80bedfae77be82900e15c7657c0a8b592a526c1226cd50a5fa2e6/markmap_anywidget-0.2.0.tar.gz | source | sdist | null | false | 4f529ad49e4ed9297bcc4104edc6fe07 | 162cdfbf0dd9a06a1fcf9252c429322a54b5fae11ae8de23b9439a7e74cd46de | 35aa2e534ce80bedfae77be82900e15c7657c0a8b592a526c1226cd50a5fa2e6 | MIT | [
"LICENSE"
] | 242 |
2.4 | mscompress | 1.0.10 | Python bindings for mscompress, a compression library for mass spectrometry data | # MScompress Python Library
Python bindings for MScompress, a high-performance compression library for mass spectrometry data.
## Overview
The MScompress Python library provides a fast and efficient way to compress and decompress mass spectrometry data files in mzML format. It features:
- 🚀 **High Performance**: Multi-threaded compression/decompression with state-of-the-art speeds
- 📦 **MSZ Format**: Novel compressed format with random-access capabilities
- 🔄 **Lossless & Lossy**: Support for both lossless and lossy compression modes
- 🐍 **Pythonic API**: Clean, intuitive interface with NumPy integration
- 🎯 **Direct Data Access**: Extract spectra, m/z arrays, and intensity data without full decompression
## Installation
### From PyPI
```bash
pip install mscompress
```
### From Source
**Prerequisites:**
- Python ≥ 3.9
- NumPy
- Cython
- C compiler (GCC, Clang, or MSVC)
**Build and install:**
```bash
git clone --recurse-submodules https://github.com/chrisagrams/mscompress.git
cd mscompress/python
pip install -e .
```
## Quick Start
### Basic Usage
```python
import mscompress
# Read an mzML file
mzml = mscompress.read("data.mzML")
# Compress to MSZ format
mzml.compress("data.msz")
# Read compressed MSZ file
msz = mscompress.read("data.msz")
# Decompress back to mzML
msz.decompress("output.mzML")
```
### Working with Spectra
```python
import mscompress
# Open a file (mzML or MSZ)
file = mscompress.read("data.mzML")
# Access file metadata
print(f"File size: {file.filesize} bytes")
print(f"Total spectra: {len(file.spectra)}")
# Iterate through all spectra
for spectrum in file.spectra:
print(f"Scan {spectrum.scan}: MS{spectrum.ms_level}")
print(f" Retention time: {spectrum.retention_time:.2f}s")
print(f" Data points: {spectrum.size}")
# Access specific spectrum by index
spectrum = file.spectra[0]
# Get m/z and intensity arrays as NumPy arrays
mz_array = spectrum.mz
intensity_array = spectrum.intensity
# Get combined peaks as 2D array [m/z, intensity]
peaks = spectrum.peaks
# Access spectrum metadata XML
xml_element = spectrum.xml
```
### Compression Configuration
```python
import mscompress
# Open mzML file
mzml = mscompress.read("data.mzML")
# Configure compression settings
mzml.arguments.threads = 8 # Use 8 threads
mzml.arguments.zstd_compression_level = 3 # Set ZSTD level (1-22)
mzml.arguments.blocksize = 100 # Spectra per block
# Compress with custom settings
mzml.compress("data.msz")
```
### Random Access to Compressed Data
```python
import mscompress
# Open compressed MSZ file
msz = mscompress.read("data.msz")
# Directly access specific spectrum without full decompression
spectrum_100 = msz.spectra[100]
print(f"Scan: {spectrum_100.scan}")
print(f"m/z range: {spectrum_100.mz[0]:.2f} - {spectrum_100.mz[-1]:.2f}")
# Extract binary data for a specific spectrum
mz_binary = msz.get_mz_binary(100)
intensity_binary = msz.get_inten_binary(100)
xml_metadata = msz.get_xml(100)
```
### Context Manager Support
```python
import mscompress
# Use with context manager for automatic resource cleanup
with mscompress.read("data.mzML") as file:
for spectrum in file.spectra:
# Process spectrum
process_spectrum(spectrum)
# File is automatically closed
```
## API Reference
### Functions
#### `read(path: str | bytes) -> MZMLFile | MSZFile`
Opens and parses an mzML or MSZ file.
**Parameters:**
- `path`: Path to the file (string or bytes)
**Returns:**
- `MZMLFile` or `MSZFile` object depending on file type
**Raises:**
- `FileNotFoundError`: If file does not exist
- `IsADirectoryError`: If path points to a directory
- `OSError`: If file type cannot be determined
#### `get_num_threads() -> int`
Returns the number of available CPU threads.
#### `get_filesize(path: str | bytes) -> int`
Returns the size of a file in bytes.
### Classes
#### `MZMLFile`
Handler for mzML format files.
**Properties:**
- `path`: File path (bytes)
- `filesize`: File size in bytes (int)
- `format`: Data format information (DataFormat)
- `spectra`: Collection of spectra (Spectra)
- `positions`: Division/position information (Division)
- `arguments`: Runtime configuration (RuntimeArguments)
**Methods:**
- `compress(output: str | bytes)`: Compress to MSZ format
- `get_mz_binary(index: int) -> np.ndarray`: Extract m/z array for spectrum
- `get_inten_binary(index: int) -> np.ndarray`: Extract intensity array for spectrum
- `get_xml(index: int) -> Element`: Extract XML metadata for spectrum
- `describe() -> dict`: Get file description dictionary
#### `MSZFile`
Handler for MSZ (compressed) format files.
**Properties:**
- Same as `MZMLFile`
**Methods:**
- `decompress(output: str | bytes)`: Decompress to mzML format
- `get_mz_binary(index: int) -> np.ndarray`: Extract m/z array for spectrum
- `get_inten_binary(index: int) -> np.ndarray`: Extract intensity array for spectrum
- `get_xml(index: int) -> Element`: Extract XML metadata for spectrum
- `describe() -> dict`: Get file description dictionary
#### `Spectrum`
Represents a single mass spectrum.
**Properties:**
- `index`: Spectrum index (int)
- `scan`: Scan number (int)
- `ms_level`: MS level (int)
- `retention_time`: Retention time in seconds (float)
- `size`: Number of data points (int)
- `mz`: m/z values (np.ndarray)
- `intensity`: Intensity values (np.ndarray)
- `peaks`: Combined m/z and intensity as 2D array (np.ndarray)
- `xml`: XML metadata element (Element)
#### `Spectra`
Collection of spectra with lazy loading and iteration support.
**Methods:**
- `__len__()`: Get total number of spectra
- `__iter__()`: Iterate over all spectra
- `__getitem__(index: int)`: Access spectrum by index
#### `RuntimeArguments`
Runtime configuration for compression/decompression.
**Properties:**
- `threads`: Number of threads to use (int)
- `blocksize`: Number of spectra per block (int)
- `mz_scale_factor`: m/z scaling factor (int)
- `int_scale_factor`: Intensity scaling factor (int)
- `target_xml_format`: Target XML format (int)
- `target_mz_format`: Target m/z format (int)
- `target_inten_format`: Target intensity format (int)
- `zstd_compression_level`: ZSTD compression level 1-22 (int)
#### `DataFormat`
Data format information.
**Properties:**
- `source_mz_fmt`: Source m/z format (int)
- `source_inten_fmt`: Source intensity format (int)
- `source_compression`: Source compression type (int)
- `source_total_spec`: Total number of spectra (int)
- `target_xml_format`: Target XML format (int)
- `target_mz_format`: Target m/z format (int)
- `target_inten_format`: Target intensity format (int)
**Methods:**
- `to_dict() -> dict`: Convert to dictionary representation
- `__str__() -> str`: String representation
#### `Division`
Division structure containing data positions and scan information.
**Properties:**
- `spectra`: Spectrum data positions (DataPositions)
- `xml`: XML data positions (DataPositions)
- `mz`: m/z data positions (DataPositions)
- `inten`: Intensity data positions (DataPositions)
- `size`: Number of divisions (int)
- `scans`: Scan numbers (np.ndarray)
- `ms_levels`: MS levels (np.ndarray)
- `ret_times`: Retention times (np.ndarray or None)
#### `DataPositions`
Position information for data blocks.
**Properties:**
- `start_positions`: Start positions (np.ndarray[uint64])
- `end_positions`: End positions (np.ndarray[uint64])
- `total_spec`: Total number of spectra (int)
## Examples
### Extract All MS2 Spectra
```python
import mscompress
import numpy as np
file = mscompress.read("data.mzML")
ms2_spectra = []
for spectrum in file.spectra:
if spectrum.ms_level == 2:
ms2_spectra.append({
'scan': spectrum.scan,
'rt': spectrum.retention_time,
'mz': spectrum.mz,
'intensity': spectrum.intensity
})
print(f"Found {len(ms2_spectra)} MS2 spectra")
```
### Compare Compression Ratios
```python
import mscompress
import os
mzml = mscompress.read("data.mzML")
original_size = mzml.filesize
# Compress with different ZSTD levels
for level in [1, 3, 5, 10]:
mzml.arguments.zstd_compression_level = level
output = f"data_level_{level}.msz"
mzml.compress(output)
compressed_size = os.path.getsize(output)
ratio = original_size / compressed_size
print(f"Level {level}: {ratio:.2f}x compression")
```
### Parallel Processing of Spectra
```python
import mscompress
from multiprocessing import Pool
def process_spectrum(args):
file_path, index = args
file = mscompress.read(file_path)
spectrum = file.spectra[index]
# Your processing logic here
return spectrum.scan, len(spectrum.mz)
file = mscompress.read("data.mzML")
indices = range(len(file.spectra))
args = [(file.path.decode(), i) for i in indices]
with Pool() as pool:
results = pool.map(process_spectrum, args)
print(f"Processed {len(results)} spectra")
```
### Filter and Extract Peaks
```python
import mscompress
import numpy as np
file = mscompress.read("data.mzML")
# Extract peaks above intensity threshold
threshold = 1000.0
for spectrum in file.spectra:
mask = spectrum.intensity > threshold
filtered_mz = spectrum.mz[mask]
filtered_intensity = spectrum.intensity[mask]
print(f"Scan {spectrum.scan}: {len(filtered_mz)} peaks above threshold")
```
## Performance Tips
1. **Multi-threading**: Set `arguments.threads` to match your CPU core count for optimal performance
2. **Block size**: Adjust `arguments.blocksize` based on your data - larger blocks may improve compression ratio but reduce random-access granularity
3. **Compression level**: ZSTD levels 1-3 offer good speed/ratio balance; higher levels improve compression at the cost of speed
4. **Memory usage**: When processing large files, iterate through spectra rather than loading all into memory
## Type Hints
The library includes full type hints and stub files (`.pyi`) for improved IDE support and type checking:
```python
import mscompress
from typing import Union
def process_file(path: Union[str, bytes]) -> None:
file = mscompress.read(path) # Type checker knows this is MZMLFile | MSZFile
spectra = file.spectra # Type: Spectra
spectrum = spectra[0] # Type: Spectrum
mz = spectrum.mz # Type: np.ndarray[np.float32 | np.float64]
```
## Contributing
Contributions are welcome! Please see the main repository for guidelines:
https://github.com/chrisagrams/mscompress
## License
See the main repository for license information.
## Support
For bug reports and feature requests, please open an issue:
https://github.com/chrisagrams/mscompress/issues
## Citation
If you use MScompress in your research, please cite our work (citation information coming soon).
| text/markdown | Chris Grams | chrisagrams@gmail.com | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"mlcroissant>=1.0.0",
"numpy<2",
"zstandard>=0.19.0",
"setuptools>=45; extra == \"dev\"",
"cython; extra == \"dev\"",
"numpy; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"tomli; extra == \"dev\"",
"torch>=2.0; extra == \"pytorch\"",
"mlcroissant>=1.0.0; extra =... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:50:41.042091 | mscompress-1.0.10.tar.gz | 2,056,547 | ba/5d/8714b3ca513f1bd087af07bda60cb92ab1feffeb80c151249c8b282414f2/mscompress-1.0.10.tar.gz | source | sdist | null | false | d485db87edc32aa8db7b63e165903d43 | 0b937a0a9adc18ec0e3263ae3d8c38922dcc3bbca38b86833b28c7660b3273bd | ba5d8714b3ca513f1bd087af07bda60cb92ab1feffeb80c151249c8b282414f2 | null | [] | 1,513 |
2.4 | cleancloud | 1.4.0 | Reduce cloud costs through safe, read-only hygiene evaluation. Built for production environments with confidence-scored signals for AWS and Azure. | # CleanCloud



[](https://github.com/cleancloud-io/cleancloud/actions/workflows/security-scan.yml)

**CleanCloud is a safe, read-only cloud hygiene tool for teams that can't afford to break production.**
CleanCloud helps SRE and platform teams **safely identify orphaned, untagged, and inactive cloud resources** — and **enforce cost hygiene as a CI/CD gate** so findings don't sit in a report no one acts on. Built for AWS and Azure, it is safe to run in production accounts, CI/CD pipelines, and regulated environments.
- **Read-only by design** — No deletions, no tag modifications, no resource changes
- **Conservative detection** — Multiple signals with explicit confidence levels (LOW/MEDIUM/HIGH)
- **CI/CD enforcement** — Fail builds on findings with `--fail-on-confidence HIGH` (safe by default, opt-in strict)
- **Zero telemetry** — No phone-home, no data collection, no analytics
### Why CleanCloud exists
Most cost tools require write access, send data to SaaS platforms, and generate reports no one acts on. CleanCloud is different:
- **Read-only** — your cloud provider enforces it, not us
- **Runs in your environment** — no data leaves your account
- **Enforces in CI/CD** — findings become gates, not backlog
```bash
pipx install cleancloud
# AWS
cleancloud scan --provider aws --all-regions
# Azure
cleancloud scan --provider azure
```
### How it works
```
Your Cloud Account CleanCloud (pip install) Your CI/CD Pipeline
(AWS / Azure) (read-only scan) (GitHub Actions, etc.)
IAM Role ──────────► cleancloud scan ──────────► Findings (JSON/CSV/human)
(Reader only) - 20 detection rules │
No write access - Confidence scoring ▼
OIDC temporary tokens - Evidence per finding --fail-on-confidence HIGH
Exit 0 = pass
Exit 2 = policy violation
```
### Evaluating CleanCloud for enterprise use?
CleanCloud is designed for production environments where:
- write access is prohibited,
- InfoSec review is mandatory,
- CI/CD enforcement is required, and
- data cannot leave the cloud account.
If your team is assessing cloud cost governance or hygiene controls, we can support:
- Security reviews and IAM validation
- CI/CD rollout design
- Multi-account architecture discussions
**[Start an evaluation discussion](https://www.getcleancloud.com/#contact)**
---
## Commands at a Glance
### `doctor` — Validate credentials and permissions
Checks authentication, security grade, and read-only permissions before you scan. Run this first.
```bash
# AWS — validates IAM credentials, auth method, and all required permissions
# Defaults to us-east-1 if --region is not specified
cleancloud doctor --provider aws
# AWS — validate against a specific region
cleancloud doctor --provider aws --region us-west-2
# Azure — validates credentials, subscription access, and Reader role permissions
# --region is not applicable for Azure doctor
cleancloud doctor --provider azure
```
Sample output (AWS — CI/CD with OIDC):
```
Authentication Method: OIDC (AssumeRoleWithWebIdentity)
[OK] Security Grade: EXCELLENT
[OK] - Temporary credentials
[OK] - Auto-rotated
Permissions Tested: 14/14 passed
[OK] AWS ENVIRONMENT READY FOR CLEANCLOUD
```
Sample output (Azure — CI/CD with Workload Identity):
```
Authentication Method: OIDC (Workload Identity Federation)
[OK] Security Grade: EXCELLENT
[OK] - No client secrets stored
[OK] - Temporary credentials
Subscriptions: 2 accessible
[OK] AZURE ENVIRONMENT READY FOR CLEANCLOUD
```
> Local development uses AWS CLI profiles (ACCEPTABLE) or service principals (POOR). Doctor will recommend upgrading to OIDC. See [`docs/example-outputs.md`](docs/example-outputs.md) for full output examples.
### `scan` — Find orphaned and idle resources
Scans your cloud account for unattached volumes, idle gateways, untagged resources, and more. Read-only, safe for production. See all [20 rules across AWS and Azure](docs/rules.md).
```bash
# AWS — single region
cleancloud scan --provider aws --region us-east-1
# AWS — all regions with active resources (auto-detected)
cleancloud scan --provider aws --all-regions
# Azure — all accessible subscriptions (default)
cleancloud scan --provider azure
# Azure — specific subscription
cleancloud scan --provider azure --subscription <subscription-id>
# Azure — filter by location
cleancloud scan --provider azure --region eastus
# Output formats — feed into dashboards (Grafana, Datadog, etc.) or automation
cleancloud scan --provider aws --all-regions --output json --output-file results.json
cleancloud scan --provider azure --output csv --output-file results.csv
# Exclude resources by tag
cleancloud scan --provider aws --all-regions --ignore-tag env:production
cleancloud scan --provider aws --all-regions --config cleancloud.yaml
```
### Enforce policies in CI/CD
By default, scans exit `0` even with findings (safe for any pipeline). Opt in to enforcement with these flags:
| Flag | Behavior | Exit code |
|------|----------|-----------|
| *(none)* | Report findings, never fail | `0` |
| `--fail-on-confidence HIGH` | Fail only on HIGH confidence findings | `2` |
| `--fail-on-confidence MEDIUM` | Fail on MEDIUM or higher | `2` |
| `--fail-on-confidence LOW` | Fail on any confidence level | `2` |
| `--fail-on-findings` | Fail on any finding (strict mode) | `2` |
```bash
# AWS — fail CI on HIGH confidence findings only (recommended starting point)
cleancloud scan --provider aws --all-regions --fail-on-confidence HIGH
# Azure — fail CI on MEDIUM or higher
cleancloud scan --provider azure --fail-on-confidence MEDIUM
# Strict mode — fail on any finding
cleancloud scan --provider aws --all-regions --fail-on-findings
# Full CI/CD example: OIDC auth, JSON output, enforce HIGH confidence
cleancloud scan --provider aws --all-regions \
--output json --output-file scan.json \
--fail-on-confidence HIGH
```
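The flag semantics above can be modeled as a small helper. This is an illustrative sketch of the documented exit-code behavior, not CleanCloud's actual implementation; `CONFIDENCE_ORDER` and `exit_code` are hypothetical names:

```python
# Illustrative model of the documented --fail-on-* exit codes.
# Names here are hypothetical, not part of the CleanCloud API.
CONFIDENCE_ORDER = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}


def exit_code(finding_confidences, fail_on_confidence=None, fail_on_findings=False):
    """Return 0 (pass) or 2 (policy violation), per the flag table above."""
    if fail_on_findings:
        # Strict mode: any finding at all is a violation.
        return 2 if finding_confidences else 0
    if fail_on_confidence is None:
        # No flags: report-only, never fail.
        return 0
    threshold = CONFIDENCE_ORDER[fail_on_confidence]
    violation = any(CONFIDENCE_ORDER[c] >= threshold for c in finding_confidences)
    return 2 if violation else 0
```

For example, a scan with only MEDIUM findings passes under `--fail-on-confidence HIGH` but fails under `--fail-on-confidence MEDIUM`.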
---
## Table of Contents
- [Commands at a Glance](#commands-at-a-glance)
- [See It In Action](#see-it-in-action)
- [What CleanCloud Detects](#what-cleancloud-detects)
- [Installation](#installation)
- [Try It Locally](#try-it-locally-2-minutes)
- [CI/CD Pipelines](#running-in-cicd-pipelines)
- [Security & Trust](#security--trust)
- [Tag-Based Filtering](#tag-based-filtering)
- [Enterprise & Production Use](#built-for-production--enterprise-use)
- [Who This Is For](#who-cleancloud-is-and-is-not-for)
- [Why Teams Choose CleanCloud](#why-teams-choose-cleancloud)
- [Design Philosophy](#design-philosophy)
- [Documentation](#documentation)
---
## See It In Action
### `cleancloud scan --provider aws --all-regions` — Find what's costing you money
```
Found 6 hygiene issues:
1. [AWS] Unattached EBS Volume
Risk : Low
Confidence : High
Resource : aws.ebs.volume → vol-0a1b2c3d4e5f67890
Region : us-east-1
Rule : aws.ebs.volume.unattached
Reason : Volume has been unattached for 47 days
Detected : 2026-02-08T14:32:01+00:00
Details:
- size_gb: 500
- availability_zone: us-east-1a
- state: available
- tags: {"Project": "legacy-api", "Owner": "platform"}
2. [AWS] Idle NAT Gateway
Risk : Medium
Confidence : Medium
Resource : aws.ec2.nat_gateway → nat-0abcdef1234567890
Region : us-west-2
Rule : aws.ec2.nat_gateway.idle
Reason : No traffic detected for 21 days
Detected : 2026-02-08T14:32:04+00:00
Details:
- name: staging-nat
- state: available
- vpc_id: vpc-0abc123
- total_bytes_out: 0
- total_bytes_in: 0
- estimated_monthly_cost_usd: 32.40
- idle_threshold_days: 14
3. [AWS] Unattached Elastic IP
Risk : Low
Confidence : High
Resource : aws.ec2.elastic_ip → eipalloc-0a1b2c3d4e5f6
Region : eu-west-1
Rule : aws.ec2.elastic_ip.unattached
Reason : Elastic IP not associated with any instance or ENI (age: 92 days)
Detected : 2026-02-08T14:32:06+00:00
Details:
- public_ip: 52.18.xxx.xxx
- domain: vpc
- age_days: 92
--- Scan Summary ---
Total findings: 6
By risk: low: 5 medium: 1
By confidence: high: 2 medium: 4
Regions scanned: us-east-1, us-west-2, eu-west-1 (auto-detected)
```
> **Coming soon:** Cost impact summaries with estimated monthly waste per finding. **[Get notified](https://getcleancloud.com)**
### `cleancloud scan --provider azure` — Azure scan output
```
Found 5 hygiene issues:
1. [AZURE] Unattached Managed Disk
Risk : Low
Confidence : Medium
Resource : azure.compute.disk → data-disk-legacy-api
Region : eastus
Rule : azure.unattached_disk
Reason : Managed disk not attached to any VM (age: 34 days)
Detected : 2026-02-08T14:45:12+00:00
Details:
- size_gb: 256
- disk_state: Unattached
- subscription: Production
2. [AZURE] Unused Public IP
Risk : Low
Confidence : High
Resource : azure.network.public_ip → pip-old-gateway
Region : westeurope
Rule : azure.public_ip_unused
Reason : Public IP not associated with any resource
Detected : 2026-02-08T14:45:13+00:00
Details:
- ip_address: 20.82.xxx.xxx
- allocation_method: Static
- subscription: Staging
3. [AZURE] Load Balancer with No Backends
Risk : Medium
Confidence : High
Resource : azure.network.load_balancer → lb-deprecated-service
Region : eastus
Rule : azure.lb_no_backends
Reason : Load balancer has no backend pools configured
Detected : 2026-02-08T14:45:14+00:00
Details:
- sku: Standard
- subscription: Production
--- Scan Summary ---
Total findings: 5
By risk: low: 4 medium: 1
By confidence: high: 3 medium: 2
Subscriptions scanned: Production, Staging (all accessible)
```
> Every finding includes confidence levels and evidence so your team reviews with context — not guesswork.
For full output examples including doctor validation, JSON, and CSV formats, see [`docs/example-outputs.md`](docs/example-outputs.md).
---
## What CleanCloud Detects
20 high-signal rules across AWS and Azure — each read-only, conservative, and designed to avoid false positives in IaC environments.
**Understanding confidence levels:**
- **HIGH** — Single definitive signal with very low false-positive risk (e.g., a volume is unattached, an IP has no association)
- **MEDIUM** — Time-based heuristics or multiple signals required (e.g., no traffic for 14+ days, snapshot age exceeds threshold)
> **Disagree with a confidence level?** You control the enforcement threshold with `--fail-on-confidence`. Start with `HIGH` to only catch the most obvious waste, then tighten to `MEDIUM` as your team validates. You can also exclude specific resources using [tag-based filtering](#tag-based-filtering).
| Provider | Rule | What It Finds | Confidence |
|----------|------|---------------|------------|
| AWS | Unattached EBS Volumes | Volumes not attached to any instance | HIGH |
| AWS | Old EBS Snapshots | Snapshots older than 90 days | MEDIUM |
| AWS | Infinite Retention Logs | CloudWatch log groups that never expire | MEDIUM |
| AWS | Unattached Elastic IPs | EIPs not associated with any resource (30+ days) | HIGH |
| AWS | Detached ENIs | Network interfaces detached for 60+ days | MEDIUM |
| AWS | Untagged Resources | EBS volumes, S3 buckets, log groups with no tags | MEDIUM |
| AWS | Old AMIs | AMIs older than 180 days with snapshot storage costs | MEDIUM |
| AWS | Idle NAT Gateways | NAT Gateways with zero traffic for 14+ days (~$32/mo each) | MEDIUM |
| AWS | Idle RDS Instances | RDS instances with zero connections for 14+ days | HIGH |
| AWS | Idle Elastic Load Balancers | ALB/NLB/CLB with zero traffic for 14+ days | HIGH |
| Azure | Unattached Managed Disks | Disks not attached to any VM | MEDIUM |
| Azure | Old Snapshots | Snapshots exceeding age threshold | MEDIUM |
| Azure | Unused Public IPs | Public IPs with no configuration attached | HIGH |
| Azure | Empty Load Balancers | Load balancers with no backend pools | HIGH |
| Azure | Empty App Gateways | Application gateways with no backends | HIGH |
| Azure | Empty App Service Plans | App Service Plans with no web apps | HIGH |
| Azure | Idle VNet Gateways | Virtual Network Gateways with no active connections | MEDIUM |
| Azure | Stopped (Not Deallocated) VMs | VMs stopped but still incurring full compute charges | HIGH |
| Azure | Idle SQL Databases | SQL databases with zero connections for 14+ days | HIGH |
| Azure | Untagged Resources | Resources with no tags attached | MEDIUM |
**See [`docs/rules.md`](docs/rules.md) for full details, signals used, and evidence documentation.**
### Enforce these findings in CI/CD
Every rule above produces findings with a confidence level (LOW/MEDIUM/HIGH). Use that to set your enforcement threshold:
```bash
# Fail only on HIGH confidence findings (recommended starting point)
cleancloud scan --provider aws --all-regions --fail-on-confidence HIGH
# Tighten over time — fail on MEDIUM or higher
cleancloud scan --provider azure --fail-on-confidence MEDIUM
# Strictest — fail on any finding regardless of confidence
cleancloud scan --provider aws --all-regions --fail-on-findings
```
Start with `HIGH` to catch the obvious waste (unattached EBS volumes, unused public IPs, empty load balancers), then tighten to `MEDIUM` as your team cleans up. See [all enforcement options](#enforce-policies-in-cicd) for the full flag reference.
---
## Installation
### Quick Install
```bash
pipx install cleancloud
```
### Don't have pipx?
**macOS:**
```bash
brew install pipx
pipx install cleancloud
```
**Linux:**
```bash
sudo apt install pipx # Ubuntu/Debian
pipx install cleancloud
```
**Windows:**
```powershell
python3 -m pip install --user pipx
python3 -m pipx ensurepath
pipx install cleancloud
```
### CI/CD Environments
```bash
pip install cleancloud
```
### Troubleshooting
<details>
<summary>Command not found: pip</summary>
```bash
pip3 install cleancloud
# or
python3 -m pip install cleancloud
```
</details>
<details>
<summary>externally-managed-environment error</summary>
Use `pipx` (see above) — this is the modern way to install Python CLI tools.
</details>
<details>
<summary>Command not found: cleancloud (after pipx install)</summary>
pipx installs to `~/.local/bin` which may not be on your PATH:
```bash
pipx ensurepath
source ~/.zshrc # macOS (zsh)
source ~/.bashrc # Linux (bash)
```
Then verify:
```bash
cleancloud --version
```
</details>
<details>
<summary>'cleancloud' already seems to be installed</summary>
If you previously installed an older version, reinstall with `--force`:
```bash
pipx install cleancloud --force
```
</details>
### Verify Installation
```bash
cleancloud --version
```
---
## Try It Locally (2 minutes)
Start here. No OIDC or CI/CD setup needed — just your existing cloud credentials.
### AWS
**Option A: AWS CLI** (if you already have `aws configure` set up)
```bash
# Your existing AWS CLI credentials work — no extra setup
cleancloud doctor --provider aws
cleancloud scan --provider aws --region us-east-1
```
**Option B: Environment variables**
```bash
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>
export AWS_DEFAULT_REGION=us-east-1
cleancloud doctor --provider aws
cleancloud scan --provider aws --region us-east-1
```
**Permissions required:** Your IAM user/role needs 14 read-only permissions (`ec2:Describe*`, `rds:Describe*`, `s3:List*`, etc.). The `doctor` command will tell you exactly which permissions are missing. Full IAM policy: [`docs/aws.md`](docs/aws.md)
### Azure
**Option A: Azure CLI** (if you already have `az login` set up)
```bash
# Your existing Azure CLI session works — no extra setup
cleancloud doctor --provider azure
cleancloud scan --provider azure
```
**Option B: Service principal**
```bash
export AZURE_CLIENT_ID=<your-client-id>
export AZURE_TENANT_ID=<your-tenant-id>
export AZURE_CLIENT_SECRET=<your-client-secret>
export AZURE_SUBSCRIPTION_ID=<your-subscription-id>
cleancloud doctor --provider azure
cleancloud scan --provider azure
```
**Permissions required:** `Reader` role at subscription scope (built-in, no custom definition needed). Full RBAC setup: [`docs/azure.md`](docs/azure.md)
### View Results
```bash
# Human-readable (default)
cleancloud scan --provider aws --all-regions
# JSON — feed into dashboards or automation
cleancloud scan --provider aws --all-regions --output json --output-file results.json
# CSV — for spreadsheet review
cleancloud scan --provider azure --output csv --output-file results.csv
```
**JSON Output Schema:** Versioned (`1.0.0`) with backward compatibility. See [`docs/example-outputs.md`](docs/example-outputs.md) for complete examples.
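Downstream tooling can consume the JSON output directly. A minimal sketch, assuming findings carry keys like `rule` and `confidence`; the key names and sample data below are guesses based on the human-readable output, not the documented schema, so check the schema docs before relying on them:

```python
import json
from collections import Counter

# Hypothetical sample mirroring the human-readable scan output;
# real key names may differ (see the versioned schema documentation).
report = json.loads("""
{
  "schema_version": "1.0.0",
  "findings": [
    {"rule": "aws.ebs.volume.unattached", "confidence": "HIGH"},
    {"rule": "aws.ec2.nat_gateway.idle", "confidence": "MEDIUM"}
  ]
}
""")

# Tally findings by confidence and pull out the HIGH-confidence rules.
by_confidence = Counter(f["confidence"] for f in report["findings"])
high = [f["rule"] for f in report["findings"] if f["confidence"] == "HIGH"]
print(by_confidence)  # e.g. Counter({'HIGH': 1, 'MEDIUM': 1})
print(high)
```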
> **Ready to enforce in CI/CD?** Once local scans look right, set up OIDC and add enforcement flags. See the next section.
---
## Running in CI/CD Pipelines
Graduate from local scans to automated enforcement. This requires a one-time OIDC setup (~5 minutes) so your pipeline can authenticate without long-lived secrets.
### Prerequisites
Follow the step-by-step setup guide for your provider:
- **AWS**: [`docs/aws.md`](docs/aws.md) — Create IAM role with OIDC trust, attach read-only policy, add GitHub variable
- **Azure**: [`docs/azure.md`](docs/azure.md) — Create app registration with Workload Identity Federation, assign Reader role, add GitHub secrets
> No long-lived secrets needed — OIDC provides temporary credentials that auto-rotate every run.
### AWS — GitHub Actions with OIDC
```yaml
# .github/workflows/cleancloud.yml
- name: Configure AWS credentials (OIDC)
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: arn:aws:iam::${{ vars.AWS_ACCOUNT_ID }}:role/CleanCloudCIReadOnly
aws-region: us-east-1
- name: Install CleanCloud
run: pip install cleancloud
- name: Validate AWS permissions
run: cleancloud doctor --provider aws --region us-east-1
- name: Scan and enforce
run: |
cleancloud scan --provider aws --region us-east-1 \
--output json --output-file scan.json \
--fail-on-confidence HIGH
```
> Use `--all-regions` instead of `--region us-east-1` to scan all regions with active resources.
### Azure — GitHub Actions with Workload Identity
```yaml
# .github/workflows/cleancloud.yml
- name: Azure Login (OIDC)
  uses: azure/login@v2
  with:
    client-id: ${{ secrets.AZURE_CLIENT_ID }}
    tenant-id: ${{ secrets.AZURE_TENANT_ID }}
    subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

- name: Install CleanCloud
  run: pip install cleancloud

- name: Validate Azure permissions
  run: cleancloud doctor --provider azure

- name: Scan and enforce
  run: |
    cleancloud scan --provider azure \
      --output json --output-file scan.json \
      --fail-on-confidence MEDIUM
```
### What enforcement looks like
- **No findings or below threshold** — exit `0`, pipeline continues
- **Findings at or above threshold** — exit `2`, pipeline fails with a summary of violations
- **No flags** — exit `0` always (safe for scheduled scans and exploratory runs)
**Complete CI/CD guide:** [`docs/ci.md`](docs/ci.md) — GitHub Actions workflow examples (AWS, Azure, multi-cloud), OIDC setup, enforcement patterns, output formats, tag filtering, and troubleshooting.
---
## Security & Trust
CleanCloud is designed for enterprise environments where security review and approval are required.
### Why InfoSec Teams Trust CleanCloud
**Verifiable Read-Only Design:**
- **IAM Proof Pack**: Audit a concise JSON policy, not our code
- **OIDC-First**: Temporary credentials, no secrets stored
- **Cloud-Enforced**: AWS/Azure guarantees read-only, not us
- **Conservative Detection**: MEDIUM confidence by default, age thresholds, explicit evidence
**How It Works:**
1. You create a read-only IAM role (we provide the JSON policy)
2. Run our verification script to prove it's safe
3. CleanCloud scans using temporary OIDC tokens
4. Results are yours - we never see your data
**The Trust Model:**
> "By requiring a separate, verifiable Read-Only IAM role, CleanCloud shifts trust from our code to your Cloud Provider's enforcement. InfoSec teams don't need to audit our Python code line-by-line—they audit a concise JSON policy and verify it's read-only."
### Read-Only by Design
**No destructive permissions required:**
- Only `List*`, `Describe*`, `Get*` operations
- No `Delete*`, `Modify*`, or `Tag*` permissions
- No resource mutations or state changes
- Safe for production accounts and regulated environments
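As an illustration of that permission shape — a hand-written sketch for orientation, not the actual Proof Pack policy shipped in [`security/`](security/) — a read-only IAM statement looks like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CleanCloudReadOnlySketch",
      "Effect": "Allow",
      "Action": ["ec2:Describe*", "s3:List*", "s3:Get*"],
      "Resource": "*"
    }
  ]
}
```

Because the statement grants only `Describe`/`List`/`Get` actions, the cloud provider itself rejects any mutation attempt, regardless of what the scanning code does.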
**IAM Proof Pack:** [Ready-to-use policies and verification scripts](security/) with automated safety tests
### OIDC-First Authentication
**No long-lived credentials:**
- AWS IAM Roles with GitHub Actions OIDC (recommended)
- Azure Workload Identity Federation (recommended)
- Short-lived tokens only
- No stored credentials in CI/CD
### Privacy Guarantees
**Zero telemetry, zero outbound calls:**
- No analytics or usage tracking
- No phone-home or update checks
- No data collection of any kind
- Only AWS/Azure API calls (read-only)
### Safety Regression Tests
**Multi-layer verification:**
- Static AST analysis blocks forbidden SDK calls
- Runtime SDK guards prevent mutations in tests
- IAM policy validation ensures read-only access
- Runs automatically in CI for all PRs
**For InfoSec Teams:**
- [Security Policy & Threat Model](SECURITY.md) - **Enterprise security documentation**
- [Information Security Readiness Guide](docs/infosec-readiness.md)
- [IAM Proof Pack Documentation](docs/infosec-readiness.md#iam-proof-pack)
- [Threat Model & Mitigations](docs/infosec-readiness.md#threat-model)
- [Safety Test Documentation](docs/safety.md)
---
## Tag-Based Filtering
CleanCloud supports tag-based filtering to reduce noise by ignoring findings for resources you explicitly mark. Useful when certain environments, teams, or services should be out of scope for hygiene review.
> **Note:** Tag filtering is **ignore-only** — it does not disable rules, modify resources, or protect them from deletion. CleanCloud remains read-only.
### Config file (`cleancloud.yaml`)
Create a `cleancloud.yaml` file in your project root (or specify a custom path with `--config`):
```yaml
version: 1
tag_filtering:
  enabled: true
  ignore:
    - key: env
      value: production
    - key: team
      value: platform
    - key: keep # key-only match (any value)
```
```bash
cleancloud scan --provider aws --region us-east-1 --config cleancloud.yaml
```
### CLI overrides (highest priority)
```bash
cleancloud scan --provider aws --region us-east-1 \
--ignore-tag env:production \
--ignore-tag team:platform
```
**Important:** CLI `--ignore-tag` replaces YAML configuration — they are not merged. This ensures CI/CD runs are explicit and predictable.
### Behavior
- If a resource has any matching tag, its finding is ignored
- Matching is exact (no regex, no partial matches)
- Multiple ignore rules are OR'ed (any match ignores)
- Ignored findings are counted and reported in the summary for auditability
```
Ignored by tag policy: 7 findings
```
Tag filtering works best with broad ownership or scope tags (`env`, `team`, `service`) — not per-resource exceptions.
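The matching semantics above (exact match, key-only rules, OR'ed results) can be sketched in a few lines of Python — a hypothetical helper for illustration, not CleanCloud's implementation:

```python
def is_ignored(resource_tags: dict, ignore_rules: list) -> bool:
    """Return True if any ignore rule matches (rules are OR'ed, matching is exact)."""
    for rule in ignore_rules:
        key, value = rule["key"], rule.get("value")
        # A rule with no value matches any value for that key.
        if key in resource_tags and (value is None or resource_tags[key] == value):
            return True
    return False


rules = [
    {"key": "env", "value": "production"},
    {"key": "keep"},  # key-only match (any value)
]

print(is_ignored({"env": "production"}, rules))  # True: exact key:value match
print(is_ignored({"env": "prod"}, rules))        # False: no partial matches
print(is_ignored({"keep": "anything"}, rules))   # True: key-only rule
```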
---
## Built for Production & Enterprise Use
CleanCloud is designed to be approved by security teams, not bypassed.
> **Most FinOps tools generate reports that no one acts on.** CleanCloud closes the loop with `--fail-on-confidence HIGH` — turning findings into CI/CD gates that block deployment until the waste is resolved. Detection *and* enforcement, without touching your infrastructure.
### Enterprise Features
- **Read-only by design** - No Delete*, Modify*, or Tag* permissions required
- **OIDC-first authentication** - AWS IAM Roles & Azure Workload Identity
- **Parallel, multi-region scanning** - Fast execution across all regions
- **CI/CD native** - Stable exit codes, JSON/CSV output, policy enforcement
- **Audit-friendly** - Deterministic output, no side effects, versioned schemas
### Stability Guarantees
- **CLI backward compatibility** within major versions
- **Exit codes are stable and intentional** - Never fails builds by accident
- **JSON schemas are versioned** - Safe to parse programmatically
- **Read-only always** - Safety regression tests in CI
### Exit Codes
**Safe by Default:** CleanCloud reports findings but exits with code `0` (success) unless you explicitly configure failure conditions.
| Code | Meaning |
|------|---------|
| `0` | Scan completed successfully (default: findings reported but don't fail) |
| `1` | Configuration error, invalid region/location, or unexpected error |
| `2` | Policy violation (only when using `--fail-on-findings` or `--fail-on-confidence`) |
| `3` | Missing permissions or invalid credentials |
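For pipelines that post-process results, the exit codes map to a simple branch. The wrapper below is an illustrative sketch (the actual CLI invocation is shown commented out), not part of CleanCloud:

```python
# Documented CleanCloud exit codes mapped to human-readable outcomes.
EXIT_MEANINGS = {
    0: "success: findings reported but build not failed",
    1: "configuration error, invalid region/location, or unexpected error",
    2: "policy violation (--fail-on-findings / --fail-on-confidence)",
    3: "missing permissions or invalid credentials",
}


def describe_exit(code: int) -> str:
    return EXIT_MEANINGS.get(code, f"unexpected exit code: {code}")


# In CI you would run something like:
#   import subprocess
#   proc = subprocess.run(["cleancloud", "scan", "--provider", "aws",
#                          "--region", "us-east-1", "--fail-on-confidence", "HIGH"])
#   print(describe_exit(proc.returncode))
print(describe_exit(2))
```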
See [Enforce policies in CI/CD](#enforce-policies-in-cicd) for usage examples.
---
## Who CleanCloud Is (and Is Not) For
**Built for teams who operate at scale:**
- **Cloud Architects** designing cost-governance frameworks across accounts and subscriptions
- **SRE / Platform teams** who need safe, scheduled hygiene evaluation in production
- **FinOps teams** building resource accountability without tooling risk
- **Security-reviewed environments** where mutations are prohibited and tooling must pass InfoSec review
- **CI/CD pipelines** enforcing cost hygiene as a gate — without infrastructure changes
**CleanCloud is NOT:**
- An automated cleanup or deletion service
- A replacement for Trusted Advisor, Azure Advisor, or Config
- A cost dashboard with rightsizing recommendations
- A tool that modifies, tags, or deletes resources
---
## Why Teams Choose CleanCloud
### Cost Optimization Without Compromising Safety
Most cost tools require write access, agent installation, or SaaS data sharing. CleanCloud takes a different approach: **read-only evaluation that your InfoSec team can approve in an afternoon.**
| Need | Cost Dashboards | Cleanup Automation | CleanCloud |
|------|-----------------|-------------------|------------|
| **Spending trends** | Excellent | Not a goal | Not a goal |
| **Orphaned resource detection** | Limited or noisy | Aggressive | Conservative, high-signal |
| **Safe for production** | Varies | Risk of deletion | Read-only always |
| **CI/CD cost enforcement** | Not designed for it | Risky | Purpose-built |
| **Confidence scoring** | Binary yes/no | Binary yes/no | LOW/MEDIUM/HIGH |
| **InfoSec approval** | Varies | Difficult | Designed for it |
| **Telemetry / data sharing** | Usually required | Usually required | Zero — fully private |
### CleanCloud Complements Your Existing Stack
- Use **AWS Cost Explorer / Azure Cost Management** to track spending trends
- Use **Trusted Advisor / Azure Advisor** for rightsizing recommendations
- Use **CleanCloud** to find orphaned resources your other tools miss — safely — and enforce it as a CI/CD gate
> **Cost dashboards show you what you're spending.**
> **CleanCloud shows you what you can safely stop spending — and blocks deploys until you do.**
**Learn more:** [Where CleanCloud Fits (design diagram)](docs/design.md#where-cleancloud-fits)
---
## Design Philosophy
CleanCloud is built on four core principles:
**1. Conservative by Default** - Multiple signals with explicit confidence levels (LOW/MEDIUM/HIGH) reduce false positives
**2. Read-Only Always** - No Delete*, Tag*, or Modify* permissions; safe for production
**3. Enforce, Don't Just Report** - CI/CD gates with `--fail-on-confidence` turn findings into action, not backlog
**4. Review-Ready Findings** - Every finding includes evidence and confidence so teams can act with context, not guesswork
**Learn more:** [Confidence logic documentation](docs/confidence.md)
---
## Roadmap
> Roadmap items are added only after conservative signal design and safety review.
### Coming Soon
- GCP support (read-only, parity with existing trust guarantees)
- Additional AWS rules (empty security groups)
- Additional Azure rules (unused NICs, old images)
- Rule filtering (`--rules` flag)
- Multi-account scanning (AWS Organizations support)
### Not Planned
These are intentional non-goals to preserve safety and trust.
- Automated cleanup or deletion
- Rightsizing or instance optimization suggestions
- Billing data access or spending analysis
- Resource tagging or mutations
CleanCloud will remain focused on **safe cost optimization through hygiene detection and CI/CD enforcement**, not automation or infrastructure changes.
---
## Documentation
- [`SECURITY.md`](SECURITY.md) - **Security policy and threat model for enterprise evaluation**
- [`docs/infosec-readiness.md`](docs/infosec-readiness.md) - Information security readiness guide for enterprise teams
- [`security/`](security/) - IAM Proof Pack (ready-to-use policies and verification scripts)
- [`docs/rules.md`](docs/rules.md) - Detailed rule behavior and signals
- [`docs/aws.md`](docs/aws.md) - AWS setup and IAM policy
- [`docs/azure.md`](docs/azure.md) - Azure setup and RBAC configuration
- [`docs/ci.md`](docs/ci.md) - CI/CD integration examples
- [`docs/example-outputs.md`](docs/example-outputs.md) - Full output examples (doctor, scan, JSON) for AWS and Azure
---
## Early Adopters
CleanCloud is currently being evaluated by:
- Platform teams in regulated environments
- Financial services companies
- Government cloud workloads
**Want to evaluate CleanCloud in your environment?**
Start a conversation: https://www.getcleancloud.com/#contact
---
## Questions or Feedback?
We'd love to hear from you:
- **Found a bug?** [Open an issue](https://github.com/cleancloud-io/cleancloud/issues)
- **Have a feature request?** [Start a discussion](https://github.com/cleancloud-io/cleancloud/discussions)
- **Want to chat?** Email us at suresh@getcleancloud.com
- **Like CleanCloud?** [Star us on GitHub](https://github.com/cleancloud-io/cleancloud)
**Using CleanCloud in production?** We'd love to feature your story!
## Contributing
Contributions are welcome! Please ensure all PRs:
- Include tests for new rules
- Follow the conservative design philosophy
- Maintain read-only operation
- Include documentation updates
See [`CONTRIBUTING.md`](CONTRIBUTING.md) for details.
---
## License
[MIT License](LICENSE)
---
| text/markdown | null | CleanCloud <suresh@getcleancloud.com> | null | null | MIT | aws, azure, cloud, hygiene, devops, sre, infrastructure, security, compliance, enterprise, ci-cd, read-only | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Program... | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.0.0",
"PyYAML>=6.0",
"urllib3>=2.6.3",
"jaraco-context>=6.1.0",
"pyasn1>=0.6.2",
"boto3>=1.26.0",
"botocore>=1.29.0",
"azure-identity>=1.15.0",
"azure-mgmt-resource>=23.0.0",
"azure-mgmt-subscription>=3.0.0",
"azure-mgmt-compute>=30.0.0",
"azure-mgmt-network>=25.0.0",
"azure-mgmt-w... | [] | [] | [] | [
"Homepage, https://github.com/cleancloud-io/cleancloud",
"Documentation, https://github.com/cleancloud-io/cleancloud#readme",
"Repository, https://github.com/cleancloud-io/cleancloud",
"Issues, https://github.com/cleancloud-io/cleancloud/issues",
"Discussions, https://github.com/cleancloud-io/cleancloud/dis... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:50:29.144108 | cleancloud-1.4.0.tar.gz | 130,471 | 5d/d7/b538b35f856a5dc0a9076c6d8d90c7918a16616aa8e5e522273245b81f2f/cleancloud-1.4.0.tar.gz | source | sdist | null | false | 57d0b5e4e35829934633653e2b3556fd | 4461c97705310c642bce7bfc3c3362d066e5b73de27f2270d557e5b78fd41d92 | 5dd7b538b35f856a5dc0a9076c6d8d90c7918a16616aa8e5e522273245b81f2f | null | [
"LICENSE"
] | 264 |
2.4 | nepse-cli | 3.1.15 | Modern CLI tool for Meroshare IPO automation and NEPSE market data with interactive TUI | # Nepse CLI - Meroshare IPO Automation & Market Data
[](https://badge.fury.io/py/nepse-cli)
[](https://pypi.org/project/nepse-cli/)

A professional command-line tool for NEPSE market analysis and automated Meroshare IPO applications.
**Key Features:**
- Comprehensive market data and technical analysis
- Automated IPO/FPO/Rights application (headless or GUI mode)
- Multi-member portfolio tracking and management
- Real-time stock information, signals, and announcements
- Interactive shell with command palette and autocompletion
- Fast API-based IPO application with concurrent processing
## Requirements
- Python 3.8 or higher
- Windows, Linux, or macOS
- Internet connection
- Playwright (auto-installed)
- Meroshare account for IPO features
## Installation
### PyPI (Recommended)
**Important:** Avoid Microsoft Store Python. Use [python.org](https://www.python.org/downloads/) for proper PATH configuration.
```bash
pip install nepse-cli
```
**Update:**
```bash
pip install --upgrade nepse-cli
```
**Run:**
```bash
nepse
```
If the command is not found, use `python -m nepse_cli` or see [Troubleshooting](#troubleshooting).
---
### From Source
**Development Installation:**
```bash
cd "Nepse CLI"
pip install -e .
```
**Windows Quick Start:**
Double-click `start_nepse.bat` to auto-configure and launch.
**Browser Setup:**
Playwright browsers install automatically on first run. Manual installation:
```bash
playwright install chromium
```
## Usage
### Interactive Shell (Recommended)
```bash
nepse
```
Shell features include command palette (`/`), autocompletion, command history, and inline help.
### Command Categories
#### IPO Automation
```bash
nepse apply # ⚡ Fast API-based application (single member)
nepse apply-all # ⚡ Fast API-based apply for all members
nepse result # Check IPO application results
# Legacy browser-based commands (deprecated)
nepse apply-legacy # Browser-based application (will be removed)
nepse apply-all-legacy # Browser-based apply all (will be removed)
```
#### Member Management
```bash
nepse add-member # Add/update family member
nepse list-members # View all members
nepse edit-member # Edit member details
nepse delete-member # Remove a member
nepse manage-members # Interactive member management
```
#### Portfolio & Authentication
```bash
nepse get-portfolio # View member portfolio
nepse test-login # Test Meroshare login
nepse dplist # List available depository participants
```
#### Market Data & Indices
```bash
nepse ipo # Open IPOs/FPOs/Rights
nepse nepse # NEPSE indices
nepse subidx BANKING # Sector sub-indices
nepse mktsum # Market summary
nepse topgl # Top gainers/losers
nepse sectors # All sector performance
```
#### Stock Analysis
```bash
nepse stonk NABIL # Stock details and live price
nepse profile NABIL # Company profile
nepse fundamental NABIL # Fundamental analysis
nepse depth NABIL # Market depth (order book)
nepse 52week # 52-week high/low performers
nepse near52 # Stocks near 52-week marks
```
#### Trading Information
```bash
nepse floor # Floor sheet (live trades)
nepse floor NABIL # Filtered by symbol
nepse floor --buyer 58 # Filtered by buyer broker
nepse floor --seller 59 # Filtered by seller broker
nepse brokers # List all NEPSE brokers (S.N., Broker No., Name)
nepse signals # Trading signals (strong buy/sell)
nepse announce # Latest market announcements
nepse holidays # Upcoming market holidays
```
## Features
### IPO Automation Features
- Automated application for IPO, FPO, and Rights offerings
- Multi-member support for family-wide applications
- Browser-based (Playwright) and API-based (fast) modes
- Headless operation with optional GUI for debugging
- Automatic share quantity calculation and validation
- Result checking and status tracking
### Market Analysis
- Real-time indices (NEPSE, Sensitive, Float, Sector)
- Stock fundamentals, profiles, and technical indicators
- Market depth and order book analysis
- Trading signals and price alerts
- Floor sheet with live trade data
- 52-week performance tracking
- Broker rankings and analysis
### Portfolio Management
- Multi-member portfolio tracking
- Real-time P&L calculation with WACC
- Secure credential storage
- Interactive member management
- Login verification and session handling
### User Interface
- Modern TUI with Rich tables and panels
- Interactive shell with autocompletion
- Command palette for quick navigation
- Progress indicators for long operations
- Formatted output with color coding
## Configuration
Credential data is stored in: `C:\Users\%USERNAME%\Documents\merosharedata\` (on Windows; on Linux/macOS, the `merosharedata` folder in your Documents directory)
**Files:**
- `family_members.json` - Member credentials
- `ipo_config.json` - Application settings
- `nepse_cli_history.txt` - Command history
**Member Data Structure:**
```json
{
  "members": [
    {
      "name": "identifier",
      "dp_value": "139",
      "username": "meroshare_username",
      "password": "meroshare_password",
      "transaction_pin": "1234",
      "applied_kitta": 10,
      "crn_number": "CRN_NUMBER"
    }
  ]
}
```
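Reading the file programmatically follows directly from this schema. The snippet below parses an inline sample (a sketch with made-up member values, not part of nepse-cli):

```python
import json

# Inline sample matching the documented family_members.json schema.
sample = """
{
  "members": [
    {"name": "Dad", "dp_value": "139", "username": "u1",
     "password": "p1", "transaction_pin": "1234",
     "applied_kitta": 10, "crn_number": "CRN1"}
  ]
}
"""

data = json.loads(sample)
for member in data["members"]:
    # Never log passwords or PINs; show only non-sensitive fields.
    print(member["name"], member["dp_value"], member["applied_kitta"])
```

In practice you would `json.load()` the real file from the `merosharedata` directory instead of an inline string.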
## Multi-Member Management
Manage credentials for multiple family members to streamline IPO applications.
### Setup
```bash
nepse add-member
```
Required information:
- Name identifier (e.g., "Dad", "Mom", "Self")
- DP number and account credentials
- Meroshare username and password
- Transaction PIN (4-digit)
- Default kitta amount
- CRN number
### Operations
```bash
nepse list-members # View all members
nepse edit-member # Modify member details
nepse delete-member # Remove a member
nepse manage-members # Interactive management menu
```
### Batch Operations
```bash
nepse apply-all # Apply IPO for all members
nepse fast-apply-all # Fast apply for all members
```
## Command Reference
### Most Used Commands
```bash
nepse apply # Apply for IPO
nepse fast-apply # Fast API-based IPO apply
nepse result # Check IPO results
nepse stonk <SYMBOL> # Stock information
nepse mktsum # Market overview
nepse ipo # Open offerings
nepse get-portfolio # View portfolio
```
### Available Sector Indices
`BANKING`, `DEVBANK`, `FINANCE`, `HOTELS AND TOURISM`, `HYDROPOWER`, `INVESTMENT`, `LIFE INSURANCE`, `MANUFACTURING AND PROCESSING`, `MICROFINANCE`, `MUTUAL FUND`, `NONLIFE INSURANCE`, `OTHERS`, `TRADING`
### Command Flags
- `--gui`: Show browser window (for browser-based commands)
- `--verbose`: Show detailed output
- Type `help <command>` in shell for command-specific help
## Security
- Credentials stored locally in JSON format
- User-level file permissions (600 on Unix)
- Data stored in user's Documents directory
- Never commit credential files to version control
- Use environment variables for CI/CD if needed
## Troubleshooting
### Windows: 'nepse' Command Not Found
**Most Common Issue:** Microsoft Store Python has PATH configuration problems.
**Recommended Solution:**
1. Uninstall Python from Microsoft Store
2. Install from [python.org](https://www.python.org/downloads/)
3. Check "Add Python to PATH" during installation
4. Reinstall: `pip install nepse-cli`
**Alternative Solutions:**
1. **Use module syntax:**
```bash
python -m nepse_cli
```
2. **Add Scripts to PATH manually:**
```bash
# Find Scripts path:
python -c "import sys; import os; print(os.path.join(sys.prefix, 'Scripts'))"
# Add output path to System Environment Variables (Win+R → sysdm.cpl → Advanced → Environment Variables)
```
3. **Reinstall with --user flag:**
```bash
pip uninstall nepse-cli
pip install --user nepse-cli
```
### Linux/Mac: Command Not Found
- Ensure `~/.local/bin` is in PATH
- Use `pip install --user nepse-cli`
- Restart terminal after installation
### Browser Installation
```bash
playwright install chromium
```
### Login Issues
```bash
nepse test-login # Verify credentials
nepse list-members # Check stored data
nepse edit-member # Update credentials
```
### Common Errors
- **Timeout errors**: Check internet connection or use `--gui` to see what's happening
- **Element not found**: Update playwright browsers or report issue
- **API errors**: Service may be temporarily down, retry later
## Contributing
Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.
## License
This project is licensed under the MIT License. See [LICENSE](LICENSE) file for details.
## Disclaimer
This tool is for educational and personal use only. Users are responsible for complying with Meroshare's terms of service and applicable regulations. The developers are not liable for any misuse or issues arising from the use of this tool.
| text/markdown | MenaceXnadin | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language ... | [] | https://github.com/menaceXnadin/nepse-cli | null | >=3.7 | [] | [] | [] | [
"playwright>=1.40.0",
"prompt_toolkit>=3.0.48",
"rich>=13.0.0",
"colorama>=0.4.6",
"requests>=2.25.0",
"beautifulsoup4>=4.9.0",
"cloudscraper>=1.2.0",
"tenacity>=9.0.0",
"lxml>=4.9.0",
"curl-cffi>=0.5.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:50:15.002650 | nepse_cli-3.1.15.tar.gz | 67,892 | 3f/af/e27b44ac6d4489a6c21cd05fb969abef2a9f6116f9c770e15b5e5af3f20d/nepse_cli-3.1.15.tar.gz | source | sdist | null | false | 31e294ba1cf5047141205d6c519edf57 | 23dc2bf0eb7998caef4503f47e81f775e2834931f883541c68bf6cc698b9542a | 3fafe27b44ac6d4489a6c21cd05fb969abef2a9f6116f9c770e15b5e5af3f20d | null | [
"LICENSE"
] | 241 |
2.3 | torch-sim-atomistic | 0.5.2 | A pytorch toolkit for calculating material properties using MLIPs | # TorchSim
[](https://github.com/torchsim/torch-sim/actions/workflows/test.yml)
[](https://codecov.io/gh/torchsim/torch-sim)
[](https://python.org/downloads)
[](https://pypi.org/project/torch-sim-atomistic)
[][zenodo]
[zenodo]: https://zenodo.org/records/15127004
<!-- help docs find start of prose in readme, DO NOT REMOVE -->
TorchSim is a next-generation open-source atomistic simulation engine for the MLIP
era. By rewriting the core primitives of atomistic simulation in PyTorch, it allows
orders of magnitude acceleration of popular machine learning potentials.
* Automatic batching and GPU memory management allowing significant simulation speedup
* Support for MACE, Fairchem, SevenNet, ORB, MatterSim, graph-pes, and metatomic MLIP models
* Support for classical Lennard-Jones, Morse, and soft-sphere potentials
* Molecular dynamics integration schemes like NVE, NVT Langevin, and NPT Langevin
* Relaxation of atomic positions and cell with gradient descent and FIRE
* Swap Monte Carlo and hybrid swap Monte Carlo algorithms
* An extensible binary trajectory writing format with support for arbitrary properties
* A simple and intuitive high-level API for new users
* Integration with ASE, Pymatgen, and Phonopy
* and more: differentiable simulation, elastic properties, custom workflows...
## Quick Start
Here is a quick demonstration of many of the core features of TorchSim:
native support for GPUs, MLIP models, ASE integration, simple API,
autobatching, and trajectory reporting, all in under 40 lines of code.
### Running batched MD
<!-- tested in tests/test_runners::test_readme_example, update as needed -->
```py
import torch
import torch_sim as ts
# run natively on gpus
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# easily load the model from mace-mp
from mace.calculators.foundations_models import mace_mp
from torch_sim.models.mace import MaceModel
mace = mace_mp(model="small", return_raw_model=True)
mace_model = MaceModel(model=mace, device=device)
from ase.build import bulk
cu_atoms = bulk("Cu", "fcc", a=3.58, cubic=True).repeat((2, 2, 2))
many_cu_atoms = [cu_atoms] * 50
trajectory_files = [f"Cu_traj_{i}.h5md" for i in range(len(many_cu_atoms))]
# run them all simultaneously with batching
final_state = ts.integrate(
system=many_cu_atoms,
model=mace_model,
n_steps=50,
timestep=0.002,
temperature=1000,
integrator=ts.Integrator.nvt_langevin,
trajectory_reporter=dict(filenames=trajectory_files, state_frequency=10),
)
final_atoms_list = final_state.to_atoms()
# extract the final energy from the trajectory file
final_energies = []
for filename in trajectory_files:
with ts.TorchSimTrajectory(filename) as traj:
final_energies.append(traj.get_array("potential_energy")[-1])
print(final_energies)
```
### Running batched relaxation
To then relax those structures with FIRE is just a few more lines.
```py
# relax all of the high temperature states
relaxed_state = ts.optimize(
system=final_state,
model=mace_model,
optimizer=ts.Optimizer.fire,
autobatcher=True,
init_kwargs=dict(cell_filter=ts.CellFilter.frechet),
)
print(relaxed_state.energy)
```
## Speedup
TorchSim achieves up to 100x speedup compared to ASE with popular MLIPs.
<img src="https://raw.githubusercontent.com/TorchSim/torch-sim/main/docs/_static/speedup_plot.svg" alt="Speedup comparison" width="100%">
This figure compares the time per atom of ASE and `torch_sim`. Time per atom is defined
as total time / the number of atoms simulated. While ASE can only run a single system of `n_atoms`
(on the $x$ axis), `torch_sim` can run as many systems as will fit in memory. On an H100 80 GB card,
the max atoms that could fit in memory was ~8,000 for [EGIP](https://github.com/FAIR-Chem/fairchem),
~10,000 for [MACE-MPA-0](https://github.com/ACEsuit/mace), ~22,000 for [MatterSim v1 1M](https://github.com/microsoft/mattersim),
~2,500 for [SevenNet](https://github.com/MDIL-SNU/SevenNet), and ~9,000 for [PET-MAD](https://github.com/lab-cosmo/pet-mad).
This metric describes model performance by capturing speed and memory usage simultaneously.
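As a rough sketch with hypothetical numbers, total wall time divided by the total atoms simulated (summed over all batched systems) gives the per-atom cost:

```python
def time_per_atom(total_seconds: float, n_atoms: int) -> float:
    # Total wall time divided by the total number of atoms simulated,
    # summed across all batched systems.
    return total_seconds / n_atoms


# Hypothetical: 50 batched copies of a 256-atom cell simulated in 10 s.
print(time_per_atom(10.0, 50 * 256))  # 0.00078125 s/atom
```

Batching lowers this number without changing the per-system wall time, which is why it captures speed and memory usage at once.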
## Installation
### PyPI Installation
```sh
pip install torch-sim-atomistic
```
### Installing from source
```sh
git clone https://github.com/TorchSim/torch-sim
cd torch-sim
pip install .
```
## Examples
To understand how TorchSim works, start with the [comprehensive tutorials](https://torchsim.github.io/torch-sim/user/overview.html) in the documentation.
## Core Modules
TorchSim's package structure is summarized in the [API reference](https://torchsim.github.io/torch-sim/reference/index.html) documentation and drawn as a treemap below.

## Contributing
If you are interested in contributing, please join our [slack](https://join.slack.com/t/torchsim/shared_invite/zt-3fkiju9ip-XhUH7TYp_ejJT6QqEPKMJQ) and check out the [contributing.md](CONTRIBUTING.md).
## License
TorchSim is released under an [MIT license](LICENSE).
## Citation
If you use TorchSim in your research, please cite our [publication](https://iopscience.iop.org/article/10.1088/3050-287X/ae1799).
| text/markdown | Abhijeet Gangan, Janosh Riebesell, Orion Cohen | Abhijeet Gangan <abhijeetgangan@g.ucla.edu>, Janosh Riebesell <janosh.riebesell@gmail.com>, Orion Cohen <orioncohen@berkeley.edu> | null | null | The MIT License (MIT) Copyright 2025 Radical AI Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | chemistry, interatomic-potentials, machine-learning, materials-science | [
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: C... | [] | null | null | >=3.12 | [] | [] | [] | [
"h5py>=3.12.1",
"nvalchemi-toolkit-ops>=0.2.0",
"numpy<3,>=1.26",
"tables>=3.10.2",
"torch>=2",
"tqdm>=4.67",
"autodoc-pydantic==2.2.0; extra == \"docs\"",
"furo==2024.8.6; extra == \"docs\"",
"ipython==8.34.0; extra == \"docs\"",
"ipykernel==6.30.1; extra == \"docs\"",
"jsonschema[format]; extr... | [] | [] | [] | [
"Repo, https://github.com/torchsim/torch-sim"
] | uv/0.8.14 | 2026-02-18T22:49:54.999320 | torch_sim_atomistic-0.5.2.tar.gz | 206,878 | 33/f2/55fe36788afa2def89ec08b56729d69cbb386eac966212468668eaa9bca8/torch_sim_atomistic-0.5.2.tar.gz | source | sdist | null | false | 55ed51a46eda27f7ec5da664f3faa2cb | 631577b0e73add513ebe9e4fa204f80f8c2f83570c2d1a285404082004941d7b | 33f255fe36788afa2def89ec08b56729d69cbb386eac966212468668eaa9bca8 | null | [] | 452 |
2.4 | fastapi-auth-utils | 2.0.2 | Authentication/Authorization utilities for 🚀FastAPI | # fastapi-auth-utils
A minimal authentication/authorization utility for FastAPI.
## How to install
using pip:
```bash
pip install fastapi-auth-utils
```
using poetry:
```bash
poetry add fastapi-auth-utils
```
## How to use
See [examples](https://github.com/alirezaja1384/fastapi-auth-utils/tree/main/examples)
## License
MIT
| text/markdown | Alireza Jafari | alirezaja1384@gmail.com | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"fastapi>=0.101.0",
"PyJWT>=2.8.0"
] | [] | [] | [] | [
"Source, https://github.com/alirezaja1384/fastapi-auth-utils",
"Tracker, https://github.com/alirezaja1384/fastapi-auth-utils/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T22:48:34.242183 | fastapi_auth_utils-2.0.2.tar.gz | 8,894 | 2f/17/53f74231c1e3502b294210028d4146d2c2b9f562adf614ca95f1d0ddf5a5/fastapi_auth_utils-2.0.2.tar.gz | source | sdist | null | false | 0d261c199df50cf9cdf1268f910728b5 | 3b6ce8d77e166c5f09b2af5d5a33a52f2b747e7a44ca96a3055dd2a5c31bf8a2 | 2f1753f74231c1e3502b294210028d4146d2c2b9f562adf614ca95f1d0ddf5a5 | null | [
"LICENSE"
] | 251 |
2.3 | tzafon | 2.20.2 | The official Python library for the computer API | # Tzafon Python SDK
<!-- prettier-ignore -->
[)](https://pypi.org/project/tzafon/)
The Tzafon Python SDK enables programmatic control of Chromium browsers and Linux desktop environments. Build automation workflows with full stealth capabilities, multi-tab management, and an OpenAI-compatible Chat Completions API.
## Features
- **Browser Automation**: Control Chromium browsers with navigation, clicking, typing, scrolling, and screenshots
- **Desktop Automation**: Automate Linux desktop environments with GUI interactions
- **Multi-Tab Support**: Control multiple browser tabs within a single session
- **Page Context API**: Get detailed page state including viewport, scroll position, URL, and title
- **Full Stealth**: Built-in stealth mode for web automation
- **Sync & Async**: Both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx)
## Documentation
Full documentation available at [docs.tzafon.ai](https://docs.tzafon.ai). API reference in [api.md](https://github.com/tzafon/computer-python/tree/main/api.md).
## Installation
```sh
pip install tzafon
```
## Quick Start
Get your API key from [tzafon.ai](https://tzafon.ai) and set it as an environment variable:
```sh
export TZAFON_API_KEY=sk_your_api_key_here
```
### Browser Automation
```python
from tzafon import Computer
client = Computer()
with client.create(kind="browser") as computer:
    # Navigate to a webpage
    computer.navigate("https://wikipedia.org")
    computer.wait(2)

    # Take a screenshot
    result = computer.screenshot()
    url = computer.get_screenshot_url(result)
    print(f"Screenshot: {url}")

    # Interact with the page
    computer.click(400, 300)
    computer.type("Ada Lovelace")
    computer.hotkey("Return")
```
### Desktop Automation
```python
from tzafon import Computer
client = Computer()
with client.create(kind="desktop") as computer:
    computer.click(500, 300)
    computer.type("Hello Desktop")
    computer.hotkey("ctrl", "s")
    computer.screenshot()
```
### Session Configuration
```python
computer = client.create(
    kind="browser",  # "browser" or "desktop"
    timeout_seconds=3600,  # Maximum session lifetime
    inactivity_timeout_seconds=120,  # Auto-terminate after idle
    display={"width": 1280, "height": 720, "scale": 1.0},
    context_id="my-session",  # Optional identifier
    auto_kill=True,  # End session on inactivity
)
```
## Available Actions
### Navigation
- `navigate(url)` - Navigate to a URL (browser only)
### Mouse
- `click(x, y)` - Click at coordinates
- `double_click(x, y)` - Double-click at coordinates
- `right_click(x, y)` - Right-click for context menus
- `drag(from_x, from_y, to_x, to_y)` - Drag between positions
- `mouse_down(x, y)` / `mouse_up(x, y)` - Press/release mouse button
### Keyboard
- `type(text)` - Type text at cursor
- `hotkey(*keys)` - Send keyboard shortcuts (e.g., `hotkey("ctrl", "c")`)
- `key_down(key)` / `key_up(key)` - Press/release keys
### Other
- `scroll(dx, dy)` - Scroll viewport
- `screenshot()` - Capture screen
- `html()` - Get page HTML content
- `wait(seconds)` - Pause execution
- `set_viewport(width, height)` - Change viewport size
- `batch(actions)` - Execute multiple actions in one API call
## Page Context API
Get detailed page state with your actions:
```python
result = computer.execute_action("screenshot", include_context=True)
context = result.page_context
print(f"URL: {context.url}")
print(f"Title: {context.title}")
print(f"Viewport: {context.viewport_width}x{context.viewport_height}")
print(f"Scroll position: ({context.scroll_x}, {context.scroll_y})")
```
## Chat Completions API
OpenAI-compatible API for AI-powered workflows:
```python
from openai import OpenAI
client = OpenAI(
    api_key="your_tzafon_api_key",
    base_url="https://api.tzafon.ai/v1"
)
response = client.chat.completions.create(
    model="tzafon.northstar.cua.sft",  # Optimized for computer-use automation
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What should I click to search?"}
    ]
)
```
Available models:
- `tzafon.sm-1` - Fast, lightweight model for general tasks
- `tzafon.northstar.cua.sft` - Optimized for browser/desktop automation
## MCP Server
Enable AI assistants to interact with the API:
[Install in Cursor](https://cursor.com/en-US/install-mcp?name=tzafon-mcp&config=eyJuYW1lIjoidHphZm9uLW1jcCIsInRyYW5zcG9ydCI6Imh0dHAiLCJ1cmwiOiJodHRwczovL2NvbXB1dGVyLnN0bG1jcC5jb20iLCJoZWFkZXJzIjp7IngtdHphZm9uLWFwaS1rZXkiOiJNeSBBUEkgS2V5In19)
[Install in VS Code](https://vscode.stainless.com/mcp/%7B%22name%22%3A%22tzafon-mcp%22%2C%22type%22%3A%22http%22%2C%22url%22%3A%22https%3A%2F%2Fcomputer.stlmcp.com%22%2C%22headers%22%3A%7B%22x-tzafon-api-key%22%3A%22My%20API%20Key%22%7D%7D)
## Basic Usage
```python
import os
from tzafon import Computer
client = Computer(
    api_key=os.environ.get("TZAFON_API_KEY"),  # This is the default and can be omitted
)
# List existing sessions
computer_responses = client.computers.list()
```
We recommend using [python-dotenv](https://pypi.org/project/python-dotenv/) to add `TZAFON_API_KEY="My API Key"` to your `.env` file so that your API key is not stored in source control.
## Async usage
Simply import `AsyncComputer` instead of `Computer` and use `await` with each API call:
```python
import os
import asyncio
from tzafon import AsyncComputer
client = AsyncComputer(
    api_key=os.environ.get("TZAFON_API_KEY"),  # This is the default and can be omitted
)
async def main() -> None:
    computer_responses = await client.computers.list()

asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install tzafon[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from tzafon import DefaultAioHttpClient
from tzafon import AsyncComputer
async def main() -> None:
    async with AsyncComputer(
        api_key=os.environ.get("TZAFON_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        computer_responses = await client.computers.list()

asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Nested params
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from tzafon import Computer
client = Computer()
computer_response = client.computers.create(
    display={},
)
print(computer_response.display)
```
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `tzafon.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `tzafon.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `tzafon.APIError`.
```python
import tzafon
from tzafon import Computer
client = Computer()
try:
    client.computers.list()
except tzafon.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except tzafon.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except tzafon.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from tzafon import Computer
# Configure the default for all requests:
client = Computer(
    # default is 2
    max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).computers.list()
```
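The retry policy above (connection errors, 408, 409, 429, and >=500 status codes, with exponential backoff) can be sketched as follows; the base delay, cap, and jitter values here are illustrative, not the SDK's actual internals:

```python
import random

RETRYABLE = {408, 409, 429}  # plus any >=500 server error


def should_retry(status_code: int) -> bool:
    """True for the status codes the SDK retries by default."""
    return status_code in RETRYABLE or status_code >= 500


def backoff_delay(attempt: int, base: float = 0.5, cap: float = 8.0) -> float:
    """Exponential backoff with jitter: base * 2^attempt, capped, then scaled into [0.5x, 1x]."""
    delay = min(cap, base * (2 ** attempt))
    return delay * (0.5 + random.random() / 2)
```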
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from tzafon import Computer

# Configure the default for all requests:
client = Computer(
    # 20 seconds (default is 1 minute)
    timeout=20.0,
)

# More granular control:
client = Computer(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).computers.list()
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/tzafon/computer-python/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `COMPUTER_LOG` to `info`.
```shell
$ export COMPUTER_LOG=info
```
Or to `debug` for more verbose logging.
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
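The distinction mirrors plain JSON decoding, which you can verify with the standard library alone:

```python
import json

present_null = json.loads('{"my_field": null}')
missing = json.loads('{}')

# Both look the same through .get() ...
assert present_null.get("my_field") is None
assert missing.get("my_field") is None

# ... but membership testing tells them apart, which is exactly
# what .model_fields_set exposes on the response models.
assert "my_field" in present_null
assert "my_field" not in missing
```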
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from tzafon import Computer
client = Computer()
response = client.computers.with_raw_response.list()
print(response.headers.get('X-My-Header'))
computer = response.parse() # get the object that `computers.list()` would have returned
print(computer)
```
These methods return an [`APIResponse`](https://github.com/tzafon/computer-python/tree/main/src/tzafon/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/tzafon/computer-python/tree/main/src/tzafon/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.computers.with_streaming_response.list() as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx
response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from tzafon import Computer, DefaultHttpxClient
client = Computer(
    # Or use the `COMPUTER_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from tzafon import Computer
with Computer() as client:
    # make requests here
    ...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/tzafon/computer-python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import tzafon
print(tzafon.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/tzafon/computer-python/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Computer <simon@tzafon.ai> | null | null | MIT | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.9",
"Pro... | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/tzafon/computer-python",
"Repository, https://github.com/tzafon/computer-python"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-18T22:48:16.781304 | tzafon-2.20.2.tar.gz | 134,007 | 08/65/e8900d4fa9a182d3757da26183cb31b99c44a1e9d1e2550cc99e3a415d63/tzafon-2.20.2.tar.gz | source | sdist | null | false | c66b7b05e19940f1d2a3fc5f4ea8253a | aea6a1888161b79622c233ea4e4d7601b0ab9a8d573053d42f52046136e41467 | 0865e8900d4fa9a182d3757da26183cb31b99c44a1e9d1e2550cc99e3a415d63 | null | [] | 249 |
2.4 | apax | 0.13.0 | Atomistic Learned Potential Package in JAX | # `apax`: Atomistic learned Potentials in JAX!
[Documentation](https://apax.readthedocs.io/en/latest/)
[DOI](https://doi.org/10.5281/zenodo.10040710)
[Paper](https://doi.org/10.1021/acs.jcim.5c01221)
[License: MIT](https://opensource.org/licenses/MIT)
[Discord](https://discord.gg/7ncfwhsnm4)
`apax`[^1][^2] is a high-performance, extendable package for training of and inference with atomistic neural networks.
It implements the Gaussian Moment Neural Network model [^3][^4].
It is based on [JAX](https://jax.readthedocs.io/en/latest/) and uses [JaxMD](https://github.com/jax-md/jax-md) as a molecular dynamics engine.
## Installation
Apax is available on PyPI with a CPU version of JAX.
```bash
pip install apax
```
If you want to enable GPU support (only on Linux), please run
```bash
pip install "apax[cuda]"
```
For more detailed instructions, please refer to the [documentation](https://apax.readthedocs.io/en/latest/).
## Usage
### Your first apax Model
In order to train a model, you need to run
```bash
apax train config.yaml
```
We offer some input file templates to get new users started as quickly as possible.
Simply run the following command and add the appropriate entries in the marked fields:
```bash
apax template train # use --full for a template with all input options
```
Please refer to the [documentation](https://apax.readthedocs.io/en/latest/) for a detailed explanation of all parameters.
The documentation can conveniently be accessed by running `apax docs`.
## Molecular Dynamics
There are two ways in which `apax` models can be used for molecular dynamics out of the box.
High performance NVT simulations using JaxMD can be started with the CLI by running
```bash
apax md config.yaml md_config.yaml
```
A template command for MD input files is provided as well.
The second way is to use the ASE calculator provided in `apax.md`.
## Input File Auto-Completion
Use the following command to generate JSON schemata for training and MD configuration files:
```bash
apax schema
```
If you are using VSCode, you can utilize them to lint and autocomplete your input files.
The command creates the two schemata and adds them to the project's `.vscode/settings.json`.
## Authors
- Moritz René Schäfer
- Nico Segreto
Under the supervision of Johannes Kästner
## Contributing
We are happy to receive your issues and pull requests!
Do not hesitate to contact any of the authors above if you have any further questions.
## Acknowledgements
The creation of Apax was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in the framework of the priority program SPP 2363, “Utilization and Development of Machine Learning for Molecular Applications - Molecular Machine Learning” Project No. 497249646 and the Ministry of Science, Research and the Arts Baden-Württemberg in the Artificial Intelligence Software Academy (AISA).
Further funding was provided through the DFG under Germany's Excellence Strategy - EXC 2075 - 390740016 and by the Stuttgart Center for Simulation Science (SimTech).
## References
[^1]: Moritz René Schäfer, Nico Segreto, Fabian Zills, Christian Holm, Johannes Kästner, [Apax: A Flexible and Performant Framework For The Development of Machine-Learned Interatomic Potentials](https://doi.org/10.1021/acs.jcim.5c01221), J. Chem. Inf. Model. **65**, 8066-8078 (2025)
[^2]: [10.5281/zenodo.10040711](https://doi.org/10.5281/zenodo.10040711)
[^3]: V. Zaverkin and J. Kästner, [“Gaussian Moments as Physically Inspired Molecular Descriptors for Accurate and Scalable Machine Learning Potentials,”](https://doi.org/10.1021/acs.jctc.0c00347) J. Chem. Theory Comput. **16**, 5410–5421 (2020).
[^4]: V. Zaverkin, D. Holzmüller, I. Steinwart, and J. Kästner, [“Fast and Sample-Efficient Interatomic Neural Network Potentials for Molecules and Materials Based on Gaussian Moments,”](https://pubs.acs.org/doi/10.1021/acs.jctc.1c00527) J. Chem. Theory Comput. **17**, 6658–6670 (2021).
| text/markdown | null | Moritz René Schäfer <schaefer@theochem.uni-stuttgart.de>, Nico Segreto <segreto@theochem.uni-stuttgart.de> | null | null | null | interatomic potentials, machine-learning, molecular-dynamics | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"ase>=3.24.0",
"clu>=0.0.12",
"e3x>=1.0.2",
"einops>=0.8.0",
"flax>=0.10.6",
"jax>=0.7.0",
"lazy-loader>=0.4",
"numpy>=2.0.0",
"optax>=0.2.4",
"orbax-checkpoint>=0.11.0",
"pydantic>=2.10.5",
"tensorflow>=2.18.0",
"typer>=0.13.0",
"uncertainty-toolbox>=0.1.1",
"vesin>=0.3.0",
"znh5md>=0... | [] | [] | [] | [
"Repository, https://github.com/apax-hub/apax",
"Releases, https://github.com/apax-hub/apax/releases",
"Discord, https://discord.gg/7ncfwhsnm4",
"Documentation, https://apax.readthedocs.io/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T22:47:55.471748 | apax-0.13.0-py3-none-any.whl | 173,321 | 7f/b1/29dee8f9df3496fefecbd9d9a8d49225d38da8120c72978b08b70a8e934b/apax-0.13.0-py3-none-any.whl | py3 | bdist_wheel | null | false | d29704d0bf9a7d282ac5b72e8b21d1c8 | 1845803ee61d01f5aa078ef145d182d8fd5eba0a2e03e2f2f476540ed5891a23 | 7fb129dee8f9df3496fefecbd9d9a8d49225d38da8120c72978b08b70a8e934b | MIT | [
"LICENSE"
] | 251 |
2.4 | mcp-server-unifi | 0.1.0 | Read-only MCP server for UniFi network analysis — gives LLMs safe access to controller data for diagnostics, monitoring, and capacity planning | # mcp-server-unifi
[CI](https://gitlab.com/gitlab_joel/mcp-server-unifi/-/pipelines)
[PyPI](https://pypi.org/project/mcp-server-unifi/)
[License](LICENSE)
Read-only MCP server for UniFi network analysis. Gives Claude (and other MCP
clients) safe access to controller data for diagnostics, monitoring, and
capacity planning.
Python 3.11+ | 19 tools | 4 resources | 6 prompts
## Architecture
```
+-----------------+
|  Claude / MCP   |
|     Client      |
+--------+--------+
         | stdio (JSON-RPC)
+--------+--------+
| mcp-server-unifi|
|   MCP Server    |
|                 |
| - Tools (19)    |
| - Resources (4) |
| - Prompts (6)   |
+--------+--------+
         |
+--------+--------+
|  Session Pool   |
| Circuit Breaker |
+--------+--------+
         | httpx (HTTPS)
+--------+--------+
| UniFi Controller|
| (Network API)   |
+-----------------+
```
Mermaid source: [docs/architecture.mmd](docs/architecture.mmd)
## Install
### uvx (recommended)
No install needed — runs directly from PyPI:
```json
{
  "mcpServers": {
    "unifi": {
      "command": "uvx",
      "args": ["mcp-server-unifi"],
      "env": {
        "UNIFI_HOST": "https://192.168.1.1",
        "UNIFI_USER": "mcp-user",
        "UNIFI_PASS": "your-password-here",
        "UNIFI_SITE": "default",
        "UNIFI_VERIFY_SSL": "false"
      }
    }
  }
}
```
Add this to `.mcp.json` (Claude Code) or your client's MCP config. Restart the client.
### pip
```bash
pip install mcp-server-unifi
mcp-server-unifi # start server on stdio
```
### From source
```bash
git clone https://gitlab.com/gitlab_joel/mcp-server-unifi.git
cd mcp-server-unifi
make venv
cp .env.example .env # fill in your credentials
make run
```
### MCP Inspector
```bash
npx @modelcontextprotocol/inspector # interactive testing
```
## Configuration
All config is via environment variables — set in `.env` or in the `env` block of `.mcp.json`.
| Variable | Default | Description |
|----------|---------|-------------|
| `UNIFI_HOST` | `https://localhost:8443` | Controller URL |
| `UNIFI_USER` | `admin` | API username |
| `UNIFI_PASS` | *(required)* | API password |
| `UNIFI_SITE` | `default` | UniFi site name |
| `UNIFI_VERIFY_SSL` | `false` | Verify TLS certificate |
| `UNIFI_SESSION_TTL` | `1800` | Session lifetime (seconds) before re-auth |
| `UNIFI_MIN_LOGIN_INTERVAL` | `10` | Minimum seconds between login attempts |
| `UNIFI_POLL_INTERVAL` | `60` | Resource polling interval (seconds, min 15) |
We recommend creating a dedicated read-only local user on the UniFi controller for the MCP server rather than using an admin account.
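Reading this table into a config dict is straightforward; a sketch of the documented variables and defaults (illustrative only, not the server's actual loader):

```python
import os


def load_config() -> dict:
    """Read UniFi settings from the environment, falling back to the documented defaults."""
    return {
        "host": os.environ.get("UNIFI_HOST", "https://localhost:8443"),
        "user": os.environ.get("UNIFI_USER", "admin"),
        "password": os.environ["UNIFI_PASS"],  # required: raises KeyError if unset
        "site": os.environ.get("UNIFI_SITE", "default"),
        "verify_ssl": os.environ.get("UNIFI_VERIFY_SSL", "false").lower() == "true",
        "session_ttl": int(os.environ.get("UNIFI_SESSION_TTL", "1800")),
    }
```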
## Tools
### Devices & Topology
| Tool | Description | Required Args |
|------|-------------|---------------|
| `unifi_list_devices` | List all APs, switches, gateways with status | -- |
| `unifi_ap_details` | Detailed AP info (radios, VAPs, load) | `mac` |
| `unifi_switch_details` | Switch info + per-port status | `mac` |
| `unifi_switch_port_stats` | Per-port error/throughput stats | `mac` (opt: `port_idx`) |
| `unifi_topology` | Device uplink chain for backbone paths | -- |
| `unifi_device_stats_history` | Historical throughput, client count, error stats | `mac` (opt: `hours`, `granularity`) |
### Clients
| Tool | Description | Required Args |
|------|-------------|---------------|
| `unifi_list_clients` | Connected clients | (opt: `ap_mac`, `limit`, `offset`) |
| `unifi_client_details` | Client info by MAC | `mac` |
| `unifi_search_client` | Search by hostname pattern | `hostname` (opt: `limit`) |
### Wireless / RF
| Tool | Description | Required Args |
|------|-------------|---------------|
| `unifi_channel_utilization` | Channel stats across all APs | -- |
| `unifi_wlan_config` | SSID config (DTIM, roaming, band steering) | (opt: `ssid`) |
| `unifi_ap_radio_config` | Radio config (channel, power, RSSI) | `mac` |
| `unifi_rogue_aps` | Neighboring/rogue APs for interference analysis | (opt: `ap_mac`, `limit`) |
| `unifi_radio_ai_settings` | Radio AI auto-channel/power settings | -- |
### Events & Health
| Tool | Description | Required Args |
|------|-------------|---------------|
| `unifi_events` | Recent network events | (opt: `limit`, `client_mac`, `event_type`) |
| `unifi_alarms` | Active alarms | -- |
| `unifi_site_health` | Site health metrics for all subsystems | -- |
| `unifi_system_logs` | System log entries (richer than events) | (opt: `limit`, `category`, `severity`) |
| `unifi_recommend_roaming` | Best AP for roaming via AI sampling | `mac` |
All tools return JSON. Output exceeding ~25k tokens is automatically truncated with pagination metadata.
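One way to implement such truncation is to binary-search the largest prefix of items whose JSON serialization stays under a character budget. A sketch (illustrative; the server's actual cutoff and metadata are richer):

```python
import json


def truncate_items(items: list, max_chars: int) -> dict:
    """Binary-search the largest prefix of `items` whose JSON fits in max_chars."""
    lo, hi = 0, len(items)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if len(json.dumps(items[:mid])) <= max_chars:
            lo = mid  # prefix fits; try a larger one
        else:
            hi = mid - 1  # too big; shrink
    result = {"items": items[:lo]}
    if lo < len(items):
        result["_truncated"] = {"returned": lo, "total": len(items)}
    return result
```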
## Resources
| URI | Description | Update |
|-----|-------------|--------|
| `unifi://devices` | All network devices with status | Polled, SHA-256 change detect |
| `unifi://clients/active` | Connected wireless + wired clients | Polled, SHA-256 change detect |
| `unifi://health` | Site health summary | Polled, SHA-256 change detect |
| `unifi://events/infrastructure` | Non-roam events (device offline, switch connect, channel change) | Polled, SHA-256 change detect |
Clients receive `notifications/resources/updated` when data changes. Polling interval controlled by `UNIFI_POLL_INTERVAL`.
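Change detection of this kind boils down to hashing a canonical serialization of each poll and comparing digests; a sketch (not the server's exact code):

```python
import hashlib
import json


def digest(payload) -> str:
    """Stable SHA-256 over a JSON payload; compare digests between polls."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()


previous = digest({"devices": [{"mac": "aa:bb", "state": "online"}]})
current = digest({"devices": [{"mac": "aa:bb", "state": "offline"}]})
changed = previous != current  # if True, emit notifications/resources/updated
```

Sorting keys before hashing keeps the digest stable even if the controller returns fields in a different order.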
## Prompts
### Diagnostics
**`troubleshoot_wifi`** — Diagnose WiFi issues for a specific client.
- Args: `client_identifier` (required), `timeframe` (optional, default "last 24 hours")
- Searches for client, uses elicitation if multiple match, gathers diagnostics (details, events, channel utilization), returns timeline + root cause + recommendations
**`incident_investigation`** — Correlate events with device state for incident analysis.
- Args: `timeframe` (optional), `device_mac` (optional)
- Gathers system logs, events, topology, and switch port stats. Builds chronological timeline, traces blast radius through uplink chain, identifies root cause
### Assessments
**`deployment_assessment`** — Full network health assessment with structured findings.
- Args: `focus` (optional: "switches", "wireless", "events", or "all")
- Scans all devices, switch ports, wireless, topology, and infrastructure events. Produces a structured report with comparison anchors for delta tracking
**`wifi_audit`** — Full wireless environment audit.
- Args: none
- Scans all APs, channel utilization, rogues, client distribution, WLAN config. Returns structured findings with recommendations
### Capacity Planning
**`capacity_report`** — AP load and throughput trending analysis.
- Args: `hours` (optional, default 24)
- Per-AP client distribution by band, hourly throughput history, channel utilization, band steering effectiveness, capacity headroom estimation
**`switch_health`** — Deep dive on switch port errors and drop rates.
- Args: `switch_mac` (optional — analyzes all switches if omitted)
- Per-switch port error table, drop rate computation, STP state audit, uplink chain health
## Resilience
- **Circuit breaker** — Trips after 3 consecutive auth failures. Exponential backoff (30s → 60s → 120s → 300s max). Prevents hammering a down controller.
- **Session pool** — Single shared httpx session with configurable TTL (default 30 min). Auto-refreshes on expiry or 401. Avoids repeated logins.
- **Login throttle** — Minimum 10s between login attempts (configurable). Stays under the UniFi OS ~5 attempts/window rate limit.
- **Output truncation** — Binary-search truncation at ~80k chars (~20k tokens). Adds `_truncated` metadata with item count and guidance.
## Development
```bash
git clone <repo-url> && cd mcp-server-unifi
make venv # create venv + install dev deps
make test # run test suite (424 tests, 89% coverage)
make lint # ruff linter
make format # ruff auto-format
```
### Make Targets
| Target | Description |
|--------|-------------|
| `make test` | Run test suite with coverage |
| `make coverage` | Tests + HTML coverage report |
| `make lint` | Run ruff linter |
| `make format` | Auto-format with ruff |
| `make run` | Start MCP server |
| `make clean` | Remove build artifacts |
| `make docker-build` | Build Docker image |
| `make docker-run` | Run in Docker (uses `.env`) |
### Testing
424 tests across unit, protocol, and behavioral categories:
- **Unit** — tools, resources, prompts, circuit breaker, session pool, output truncation, progress, logging
- **Protocol** — MCP JSON-RPC validation through real client/server stack (initialization, capabilities, tool invocation, error handling)
- **Behavioral** — Tool naming conventions, JSON Schema correctness, keyword-overlap tool selection scoring, response format validation
Config: `asyncio_mode = "auto"` in pyproject.toml. Do **not** use `pytestmark = pytest.mark.anyio` — it conflicts with pytest-asyncio.
## Project Structure
```
src/mcp_server_unifi/
  server.py                MCP server, tool dispatch, create_server()
  client.py                UniFi API client (httpx, auth, retry)
  tools.py                 Tool implementations (formatting, logic)
  resources.py             Resource handlers + ResourcePoller
  prompts.py               Prompt templates (troubleshoot, audit)
  session_pool.py          Shared session with TTL + login throttle
  circuit_breaker.py       CircuitBreaker (closed/open/half-open)
  output.py                Output truncation (binary search)
  progress.py              Progress notification helper
  logging_handler.py       MCP logging channel
tests/
  conftest.py              Fixtures, mock UniFi client, MCP session
  test_tools.py            Tool logic
  test_resources.py        Resource handlers
  test_prompts.py          Prompt workflows
  test_mcp_protocol.py     In-process protocol validation
  test_behavioral.py       Naming, schema, selection scoring
  test_circuit_breaker.py
  test_session_pool.py
  test_client.py
  test_output.py
  test_progress.py
  test_logging_handler.py
  test_advanced.py
```
## Known Gotchas
- **Claude Code does not render progress or logging notifications** — The server emits them correctly, but Claude Code's UI does not display them yet. Tracked: [claude-code#4157](https://github.com/anthropics/claude-code/issues/4157) (progress), [claude-code#3174](https://github.com/anthropics/claude-code/issues/3174) (logging). MCP Inspector displays both.
- **UniFi OS login rate limiting** — `/api/auth/login` allows ~5 attempts per window with a 5-10 min cooldown (HTTP 429). Cumulative across all clients. The session pool and login throttle mitigate this.
- **No switch historical stats** — The UniFi API has no `stat/report` endpoint for switches. Use `unifi_switch_port_stats` for real-time counters.
---
See [IMPROVEMENTS.md](IMPROVEMENTS.md) for development history.
| text/markdown | Joel Robison | null | null | null | null | analysis, mcp, model-context-protocol, monitoring, network, unifi | [
"Development Status :: 4 - Beta",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Syste... | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.27.0",
"mcp>=1.0.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"respx>=0.21.0; extra == \"dev\"",
"ruff>=0.3.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://gitlab.com/gitlab_joel/mcp-server-unifi",
"Repository, https://gitlab.com/gitlab_joel/mcp-server-unifi",
"Issues, https://gitlab.com/gitlab_joel/mcp-server-unifi/-/issues",
"Changelog, https://gitlab.com/gitlab_joel/mcp-server-unifi/-/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T22:47:32.514509 | mcp_server_unifi-0.1.0.tar.gz | 494,847 | f4/4e/12b7fac8510197ee298d68b6b6723d8024534a66bc4b9dd7396669e54da9/mcp_server_unifi-0.1.0.tar.gz | source | sdist | null | false | 7a6754343a165d950e75ba3efc6a23e6 | 96cc33b8e40b79eb7fdda41eb14ecec5516e24e1b8202da22587246985b9de7a | f44e12b7fac8510197ee298d68b6b6723d8024534a66bc4b9dd7396669e54da9 | MIT | [
"LICENSE"
] | 244 |
2.4 | JET3 | 3.0.1 | Jet Propulsion Laboratory Evapotranspiration Ensemble ECOSTRESS Collection 3 | # JET3: JPL Evapotranspiration Ensemble Algorithm
[](https://github.com/JPL-Evapotranspiration-Algorithms/JET3/actions/workflows/ci.yml)
JET3 is a Python package for computing evapotranspiration (ET), evaporative stress index (ESI), and water use efficiency (WUE) using an ensemble of four scientifically-validated models. This package provides the core algorithms used in the ECOSTRESS and Surface Biology and Geology (SBG) missions, now available as a standalone, mission-independent tool.
JET3 is part of the [JPL-Evapotranspiration-Algorithms](https://github.com/JPL-Evapotranspiration-Algorithms) organization and serves as the foundation for:
- [ECOSTRESS Collection 3 L3T/L4T JET Products](https://github.com/ECOSTRESS-Collection-3/ECOv003-L3T-L4T-JET)
- [SBG Collection 1 L4 JET Products](https://github.com/sbg-tir/SBG-TIR-L4-JET)
## Authors
[Gregory H. Halverson](https://github.com/gregory-halverson-jpl) (they/them)<br>
[gregory.h.halverson@jpl.nasa.gov](mailto:gregory.h.halverson@jpl.nasa.gov)<br>
NASA Jet Propulsion Laboratory 329G
Kerry Cawse-Nicholson (she/her)<br>
NASA Jet Propulsion Laboratory 329G
Madeleine Pascolini-Campbell (she/her)<br>
NASA Jet Propulsion Laboratory 329F
This package has been developed using open-science practices with New Technology Report (NTR) and open-source license from NASA Jet Propulsion Laboratory. The code is based on algorithms originally developed for the ECOSTRESS mission and refined for broader applications.
## 1. Introduction
JET3 produces estimates of:
- **Evapotranspiration (ET)**: The combined process of water evaporation from soil and transpiration from plants
- **Evaporative Stress Index (ESI)**: A measure of vegetation water stress (ratio of actual to potential ET)
- **Water Use Efficiency (WUE)**: The ratio of carbon uptake (GPP) to water loss (ET)
Accurate modeling of ET requires consideration of many environmental and biological controls including: solar radiation, atmospheric water vapor deficit, soil water availability, vegetation physiology and phenology (Brutsaert, 1982; Monteith, 1965; Penman, 1948).
JET3 implements an ensemble approach combining four scientifically-validated models:
- **PT-JPL-SM**: Priestley-Taylor model with soil moisture sensitivity
- **STIC-JPL**: Surface Temperature Initiated Closure model
- **PM-JPL**: Penman-Monteith model (MOD16 derivative)
- **BESS-JPL**: Breathing Earth System Simulator model
The ensemble median provides robust ET estimates with reduced uncertainty compared to individual models.
### Required Inputs
JET3 requires the following input variables:
- **Surface temperature (ST)**: Land surface temperature in Celsius
- **NDVI**: Normalized Difference Vegetation Index
- **Albedo**: Surface albedo
- **Air temperature (Ta)**: Near-surface air temperature in Celsius
- **Relative humidity (RH)**: Near-surface relative humidity (0-1)
- **Soil moisture (SM)**: Volumetric soil moisture (m³/m³)
- **Net radiation (Rn)**: Net radiation in W/m²
- **Additional meteorological variables** for individual models
These inputs can be derived from any appropriate remote sensing or ground-based data sources.
## 2. Installation
Install from PyPI:
```bash
pip install JET3
```
Or install from source:
```bash
git clone https://github.com/JPL-Evapotranspiration-Algorithms/JET3.git
cd JET3
pip install -e .
```
## 3. Usage
```python
from JET3 import JET
# Initialize JET ensemble processor
jet = JET()
# Compute ET ensemble with your input data
results = jet.process(
ST=surface_temperature, # Surface temperature (°C)
NDVI=ndvi, # Normalized Difference Vegetation Index
albedo=albedo, # Surface albedo
Ta=air_temperature, # Air temperature (°C)
RH=relative_humidity, # Relative humidity (0-1)
SM=soil_moisture, # Soil moisture (m³/m³)
Rn=net_radiation, # Net radiation (W/m²)
# ... other required inputs
)
# Access individual model outputs
et_ptjplsm = results['PTJPLSMinst']
et_stic = results['STICJPLdaily']
et_pmjpl = results['PMJPLdaily']
et_bess = results['BESSJPLdaily']
# Access ensemble estimate
et_ensemble = results['ETdaily']
et_uncertainty = results['ETinstUncertainty']
# Access derived products
esi = results['ESI'] # Evaporative Stress Index
wue = results['WUE'] # Water Use Efficiency
```
## 4. Evapotranspiration Models
JET3 implements an ensemble of four evapotranspiration models, each with different strengths and theoretical foundations. The ensemble approach combines outputs to reduce uncertainty and improve overall accuracy.
### 4.1. Priestley-Taylor (PT-JPL-SM) Evapotranspiration Model
The Priestley-Taylor Jet Propulsion Laboratory model with Soil Moisture (PT-JPL-SM), developed by Dr. Adam Purdy and Dr. Joshua Fisher, was designed as a soil moisture-sensitive evapotranspiration product for the Soil Moisture Active-Passive (SMAP) mission. The model estimates instantaneous canopy transpiration, leaf surface evaporation, and soil moisture evaporation using the Priestley-Taylor formula with a set of constraints. These three partitions are combined into total latent heat flux in watts per square meter for the ensemble estimate.
**Reference**: Purdy, A.J., Fisher, J.B., Goulden, M.L., Colliander, A., Halverson, G., Tu, K., Famiglietti, J.S. (2018). SMAP soil moisture improves global evapotranspiration. *Remote Sensing of Environment*, 219, 1-14. https://doi.org/10.1016/j.rse.2018.09.023
**Repository**: [PT-JPL-SM](https://github.com/JPL-Evapotranspiration-Algorithms/PT-JPL-SM)
### 4.2. Surface Temperature Initiated Closure (STIC-JPL) Evapotranspiration Model
The Surface Temperature Initiated Closure-Jet Propulsion Laboratory (STIC-JPL) model, contributed by Dr. Kaniska Mallick, was designed as a surface temperature-sensitive ET model, adopted by ECOSTRESS and SBG for improved estimates of ET reflecting mid-day heat stress. The STIC-JPL model estimates total latent heat flux directly using thermal remote sensing observations. This instantaneous estimate of latent heat flux is included in the ensemble estimate.
**Reference**: Mallick, K., Trebs, I., Boegh, E., Giustarini, L., Schlerf, M., Drewry, D.T., Hoffmann, L., von Randow, C., Kruijt, B., Araùjo, A., Saleska, S., Ehleringer, J.R., Domingues, T.F., Ometto, J.P.H.B., Nobre, A.D., de Moraes, O.L.L., Hayek, M., Munger, J.W., Wofsy, S.C. (2016). Canopy-scale biophysical controls of transpiration and evaporation in the Amazon Basin. *Hydrology and Earth System Sciences*, 20, 4237-4264. https://doi.org/10.5194/hess-20-4237-2016
**Repository**: [STIC-JPL](https://github.com/JPL-Evapotranspiration-Algorithms/STIC-JPL)
### 4.3. Penman-Monteith (PM-JPL) Evapotranspiration Model
The Penman-Monteith-Jet Propulsion Laboratory (PM-JPL) algorithm is a derivative of the MOD16 algorithm, which was originally designed as the ET product for the Moderate Resolution Imaging Spectroradiometer (MODIS) and continued as a Visible Infrared Imaging Radiometer Suite (VIIRS) product. PM-JPL uses a similar approach to PT-JPL and PT-JPL-SM to independently estimate vegetation and soil components of instantaneous ET, but uses the Penman-Monteith formula instead of the Priestley-Taylor formula. The PM-JPL latent heat flux partitions are summed to total latent heat flux for the ensemble estimate.
**Reference**: Running, S., Mu, Q., Zhao, M., Moreno, A. (2019). MODIS Global Terrestrial Evapotranspiration (ET) Product (MOD16A2/A3 and MOD16A2GF/A3GF). NASA Earth Observing System Data and Information System (EOSDIS) Land Processes Distributed Active Archive Center (LP DAAC). https://doi.org/10.5067/MODIS/MOD16A2.061
**Repository**: [PM-JPL](https://github.com/JPL-Evapotranspiration-Algorithms/PM-JPL)
### 4.4. Breathing Earth System Simulator (BESS-JPL) Gross Primary Production (GPP) Model
The Breathing Earth System Simulator Jet Propulsion Laboratory (BESS-JPL) model is a coupled surface energy balance and photosynthesis model contributed by Dr. Youngryel Ryu. The model iteratively calculates net radiation, ET, and Gross Primary Production (GPP) estimates. The latent heat flux component of BESS-JPL is included in the ensemble estimate, while the BESS-JPL net radiation is used as input to the other ET models.
**Reference**: Ryu, Y., Baldocchi, D.D., Kobayashi, H., van Ingen, C., Li, J., Black, T.A., Beringer, J., van Gorsel, E., Knohl, A., Law, B.E., Roupsard, O. (2011). Integration of MODIS land and atmosphere products with a coupled-process model to estimate gross primary productivity and evapotranspiration from 1 km to global scales. *Global Biogeochemical Cycles*, 25, GB4017. https://doi.org/10.1029/2011GB004053
**Repository**: [BESS-JPL](https://github.com/JPL-Evapotranspiration-Algorithms/BESS-JPL)
### 4.5. Ensemble Processing
The ensemble ET estimate is computed as the median of total latent heat flux (in watts per square meter) from the PT-JPL-SM, STIC-JPL, PM-JPL, and BESS-JPL models. This median is then upscaled to a daily ET estimate in millimeters per day. The standard deviation between these multiple estimates represents the ensemble uncertainty.
### 4.6. AquaSEBS Water Surface Evaporation
For water surface pixels, JET3 implements the AquaSEBS (Aquatic Surface Energy Balance System) model developed by Abdelrady et al. (2016) and validated by Fisher et al. (2023). Water surface evaporation is calculated using a physics-based approach that combines the equilibrium temperature model for water heat flux with the Priestley-Taylor equation for evaporation estimation.
**References**:
- Abdelrady, A., Timmermans, J., Vekerdy, Z., Salama, M.S. (2016). Surface Energy Balance of Fresh and Saline Waters: AquaSEBS. *Remote Sensing*, 8, 583. https://doi.org/10.3390/rs8070583
- Fisher, J.B., Dohlen, M.B., Halverson, G.H., Collison, J.W., Hook, S.J., Hulley, G.C. (2023). Remotely sensed terrestrial open water evaporation. *Scientific Reports*, 13, 8217. https://doi.org/10.1038/s41598-023-34921-2
**Repository**: [AquaSEBS](https://github.com/JPL-Evapotranspiration-Algorithms/AquaSEBS)
#### Methodology
The AquaSEBS model implements the surface energy balance equation specifically adapted for water bodies:
$$R_n = LE + H + W$$
Where the water heat flux (W) is calculated using the equilibrium temperature model:
$$W = \beta \times (T_e - WST)$$
The key parameters include:
- **Temperature difference**: $T_n = 0.5 \times (WST - T_d)$ where WST is water surface temperature and $T_d$ is dew point temperature
- **Evaporation efficiency**: $\eta = 0.35 + 0.015 \times WST + 0.0012 \times T_n^2$
- **Thermal exchange coefficient**: $\beta = 4.5 + 0.05 \times WST + (\eta + 0.47) \times S$
- **Equilibrium temperature**: $T_e = T_d + \frac{SW_{net}}{\beta}$
Latent heat flux is then calculated using the Priestley-Taylor equation with α = 1.26 for water surfaces:
$$LE = \alpha \times \frac{\Delta}{\Delta + \gamma} \times (R_n - W)$$
#### Validation and Accuracy
The AquaSEBS methodology has been extensively validated against 19 in situ open water evaporation sites worldwide spanning multiple climate zones. Performance metrics include:
**Daily evaporation estimates:**
- **MODIS-based**: r² = 0.47, RMSE = 1.5 mm/day (41% of mean), Bias = 0.19 mm/day
- **Landsat-based**: r² = 0.56, RMSE = 1.2 mm/day (38% of mean), Bias = -0.8 mm/day
**Instantaneous estimates (controlled for high wind events >7.5 m/s):**
- **Correlation**: r² = 0.71
- **RMSE**: 53.7 W/m² (38% of mean)
- **Bias**: -19.1 W/m² (13% of mean)
The model demonstrates particular strength in water-limited environments and performs well across spatial scales from 30m (Landsat) to 1km (MODIS) resolution.
Water surface evaporation estimates are included in the `ETdaily` layer in mm per day, integrated over the daylight period from sunrise to sunset.
### 4.7. Evaporative Stress Index (ESI) and Water Use Efficiency (WUE)
The PT-JPL-SM model generates estimates of both actual and potential instantaneous ET. The potential evapotranspiration (PET) estimate represents the maximum expected ET if there were no water stress to plants on the ground. The ratio of the actual ET estimate to the PET estimate forms an index representing the water stress of plants, with zero being fully stressed with no observable ET and one being non-stressed with ET reaching PET.
**Evaporative Stress Index (ESI)**:
$$\text{ESI} = \frac{\text{ET}_{\text{actual}}}{\text{PET}}$$
Water Use Efficiency (WUE) relates the amount of carbon that plants are taking in (GPP from BESS-JPL) to the amount of water that plants are releasing (transpiration from PT-JPL-SM):
**Water Use Efficiency (WUE)**:
$$\text{WUE} = \frac{\text{GPP}}{\text{Transpiration}}$$
WUE is expressed as the ratio of grams of carbon that plants take in to kilograms of water that plants release ($\text{g C kg}^{-1} \text{H}_2\text{O}$).
## 5. Theory
The JPL evapotranspiration (JET) ensemble provides a robust estimation of ET from multiple ET models. The ET ensemble incorporates ET data from four algorithms: Priestley Taylor-Jet Propulsion Laboratory model with soil moisture (PT-JPL-SM), the Penman Monteith-Jet Propulsion Laboratory model (PM-JPL), Surface Temperature Initiated Closure-Jet Propulsion Laboratory model (STIC-JPL), and the Breathing Earth System Simulator-Jet Propulsion Laboratory model (BESS-JPL).
Each model brings complementary strengths:
- **PT-JPL-SM**: Incorporates soil moisture constraints and partitions ET into canopy transpiration, interception, and soil evaporation
- **STIC-JPL**: Leverages surface temperature to detect heat stress and reduced ET capacity
- **PM-JPL**: Implements the widely-used Penman-Monteith formulation with aerodynamic considerations
- **BESS-JPL**: Couples photosynthesis with ET through a comprehensive surface energy balance
The ensemble median approach reduces model-specific biases and provides more robust estimates than any individual model.
## 6. Validation
The JET ensemble approach has been validated against flux tower measurements from the FLUXNET network as documented in Pierrat et al. (2025) and through the ECOSTRESS mission. The validation demonstrated that the ensemble evapotranspiration estimates:
- Show strong correlation with flux tower measurements (R² > 0.7) across most biomes
- Capture the diurnal and seasonal patterns of evapotranspiration effectively
- Perform well in water-limited ecosystems where thermal stress indicators are most valuable
- Benefit from the ensemble approach, with the median estimate generally outperforming individual models
- Maintain accuracy across a range of spatial scales
### Performance by Biome
The validation results indicate varying performance across different ecosystem types:
- **Croplands**: Excellent agreement during growing season, capturing irrigation and phenological patterns
- **Forests**: Good performance in temperate and boreal forests, with some challenges in dense tropical canopies
- **Grasslands**: Strong performance in both natural and managed grassland systems
- **Shrublands**: Reliable estimates in semi-arid regions where thermal stress is prevalent
The JET3 ensemble provides reliable ET estimates suitable for water resource management, agricultural monitoring, and ecosystem research applications.
## 7. Acknowledgements
We would like to thank Joshua Fisher as the initial science lead of the ECOSTRESS mission and PI of the ROSES project to develop the JET ensemble approach.
We would like to thank Adam Purdy for developing the PT-JPL-SM model.
We would like to thank Kaniska Mallick for contributing the STIC model.
We would like to thank Youngryel Ryu for contributing the BESS-JPL model.
## 8. References
- Abdelrady, A., Timmermans, J., Vekerdy, Z., Salama, M.S. (2016). Surface Energy Balance of Fresh and Saline Waters: AquaSEBS. *Remote Sensing*, 8, 583. https://doi.org/10.3390/rs8070583
- Allen, R.G., Tasumi, M., & Trezza, R. (2007). "Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC)—Model." *Journal of Irrigation and Drainage Engineering*, 133(4), 380-394. https://doi.org/10.1061/(ASCE)0733-9437(2007)133:4(380)
- Brutsaert, W. (1982). *Evaporation into the Atmosphere: Theory, History, and Applications*. Springer Netherlands. https://doi.org/10.1007/978-94-017-1497-6
- Fisher, J.B., Dohlen, M.B., Halverson, G.H., Collison, J.W., Hook, S.J., Hulley, G.C. (2023). Remotely sensed terrestrial open water evaporation. *Scientific Reports*, 13, 8217. https://doi.org/10.1038/s41598-023-34921-2
- Fisher, J.B., Tu, K.P., Baldocchi, D.D. (2008). Global estimates of the land–atmosphere water flux based on monthly AVHRR and ISLSCP-II data, validated at 16 FLUXNET sites. *Remote Sensing of Environment*, 112(3), 901-919. https://doi.org/10.1016/j.rse.2007.06.025
- Mallick, K., Trebs, I., Boegh, E., Giustarini, L., Schlerf, M., Drewry, D.T., Hoffmann, L., von Randow, C., Kruijt, B., Araùjo, A., Saleska, S., Ehleringer, J.R., Domingues, T.F., Ometto, J.P.H.B., Nobre, A.D., de Moraes, O.L.L., Hayek, M., Munger, J.W., Wofsy, S.C. (2016). Canopy-scale biophysical controls of transpiration and evaporation in the Amazon Basin. *Hydrology and Earth System Sciences*, 20, 4237-4264. https://doi.org/10.5194/hess-20-4237-2016
- Monteith, J.L. (1965). "Evaporation and Environment." *Symposia of the Society for Experimental Biology*, 19, 205-234.
- Penman, H.L. (1948). "Natural Evaporation from Open Water, Bare Soil and Grass." *Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences*, 193(1032), 120-145. https://doi.org/10.1098/rspa.1948.0037
- Pierrat, Z., et al. (2025). "Validation of ECOSTRESS Collection 3 Evapotranspiration Products Using FLUXNET Measurements." *Remote Sensing of Environment* (in press).
- Purdy, A.J., Fisher, J.B., Goulden, M.L., Colliander, A., Halverson, G., Tu, K., Famiglietti, J.S. (2018). SMAP soil moisture improves global evapotranspiration. *Remote Sensing of Environment*, 219, 1-14. https://doi.org/10.1016/j.rse.2018.09.023
- Running, S., Mu, Q., Zhao, M., Moreno, A. (2019). MODIS Global Terrestrial Evapotranspiration (ET) Product (MOD16A2/A3 and MOD16A2GF/A3GF). NASA Earth Observing System Data and Information System (EOSDIS) Land Processes Distributed Active Archive Center (LP DAAC). https://doi.org/10.5067/MODIS/MOD16A2.061
- Ryu, Y., Baldocchi, D.D., Kobayashi, H., van Ingen, C., Li, J., Black, T.A., Beringer, J., van Gorsel, E., Knohl, A., Law, B.E., Roupsard, O. (2011). Integration of MODIS land and atmosphere products with a coupled-process model to estimate gross primary productivity and evapotranspiration from 1 km to global scales. *Global Biogeochemical Cycles*, 25, GB4017. https://doi.org/10.1029/2011GB004053
| text/markdown | null | "Gregory H. Halverson" <gregory.h.halverson@jpl.nasa.gov> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"AquaSEBS",
"BESS-JPL",
"check-distribution",
"FLiESANN",
"GEOS5FP",
"MODISCI",
"numpy",
"onnxruntime",
"pandas",
"PM-JPL",
"PTJPL",
"PTJPLSM",
"python-dateutil",
"pytictoc",
"rasters",
"SEBAL-soil-heat-flux",
"shapely",
"STIC-JPL",
"verma-net-radiation",
"build; extra == \"dev... | [] | [] | [] | [
"Homepage, https://github.com/ECOSTRESS-Collection-2/ECOv003-L3T-L4T-JET"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T22:47:00.691769 | jet3-3.0.1.tar.gz | 5,646,294 | 00/c3/37f52b3d35b0cf4ec120fa3330ca0734dca72237c1531f617729f177a64f/jet3-3.0.1.tar.gz | source | sdist | null | false | f313d54b52d7cd33bb38a9cd59667d64 | d26a9be578b1546daa7bcc7ccd30231831aeb6da9c7c85a905ae34f835d3908b | 00c337f52b3d35b0cf4ec120fa3330ca0734dca72237c1531f617729f177a64f | null | [
"LICENSE"
] | 0 |
2.4 | comicbox | 2.2.0 | Comic book archive multi format metadata read/write/transform tool and image extractor. | # Comicbox
A comic book archive metadata reader and writer.
## ✨ Features
### 📚 Comic Formats
Comicbox reads CBZ, CBR, and CBT archives, and optionally PDFs. It writes
CBZ archives and PDF metadata.
### 🏷️ Metadata Formats
Comicbox reads and writes:
- [ComicRack ComicInfo.xml v2.1 (draft) schema](https://anansi-project.github.io/docs/comicinfo/schemas/v2.1),
- [Metron MetronInfo.xml v1.0](https://metron-project.github.io/docs/category/metroninfo)
- [Comic Book Lover ComicBookInfo schema](https://code.google.com/archive/p/comicbookinfo/)
- [CoMet schema](https://github.com/wdhongtw/comet-utils).
- [PDF Metadata](https://pymupdf.readthedocs.io/en/latest/tutorial.html#accessing-meta-data).
- Embedding ComicInfo.xml or MetronInfo.xml inside PDFs.
- A variety of filename schemes that encode metadata.
### Usefulness
Comicbox's primary purpose is to serve as a library for the
[Codex comic reader](https://github.com/ajslater/codex/). The API isn't well
documented, but you can infer what it does fairly easily from
[comicbox.comic_archive](https://github.com/ajslater/comicbox/blob/main/comicbox/comic_archive.py),
the primary interface.
The command line can perform most of comicbox's functions including reading and
writing metadata recursively, converting between metadata formats and extracting
pages.
### Limitations and Alternatives
Comicbox does _not_ use popular metadata database APIs or have a GUI!
[Comictagger](https://github.com/comictagger/comictagger) is probably the most
useful comic book tagger. It does most of what Comicbox does, but also
automatically tags comics with the ComicVine API and has a desktop UI.
## 📜 News
Comicbox has a [NEWS file](NEWS.md) to summarize changes that affect users.
## 🕸️ HTML Docs
[HTML formatted docs are available here](https://comicbox.readthedocs.io)
## 📦 Installation
<!-- eslint-skip -->
```sh
pip install comicbox
```
Comicbox supports PDFs as an extra when installed like:
<!-- eslint-skip -->
```sh
pip install comicbox[pdf]
```
### Dependencies
#### Base
Comicbox generally works without any binary dependencies but requires `unrar` be
on the path to convert CBR into CBZ or extract files from CBRs.
#### PDF
The pymupdf dependency has wheels that install a local version of libmupdf. But
for some platforms (e.g. Linux on ARM, Windows) it may require libstdc++ and
c/c++ build tools installed to compile a libmupdf. More detail on this is
available in the
[pymupdf docs](https://pymupdf.readthedocs.io/en/latest/installation.html#installation-when-a-suitable-wheel-is-not-available).
##### Installing Comicbox on ARM (AARCH64) with Python 3.13
Pymupdf has no pre-built wheels for AARCH64 so pip must build it and the build
fails on Python 3.13 without this environment variable set:
```sh
PYMUPDF_SETUP_PY_LIMITED_API=0 pip install comicbox
```
You will also need the `build-essential` and `python3-dev` or equivalent
packages installed on your Linux system.
## ⌨️ Use
##### Related Projects
Comicbox makes use of two of my other small projects:
[comicfn2dict](https://github.com/ajslater/comicfn2dict) which parses metadata
in comic filenames into python dicts. This library is also used by Comictagger.
[pdffile](https://github.com/ajslater/pdffile) which presents a ZipFile like
interface for PDF files.
### Console
Type
<!-- eslint-skip -->
```sh
comicbox -h
```
to see the CLI help.
#### Examples
<!-- eslint-skip -->
```sh
comicbox test.cbz -m "{Tags: a,b,c, story_arcs: {d: 1, e: '', f: 3}}" -m "Publisher: SmallComics" -w cr
```
Will write those tags to comicinfo.xml in the archive.
Be sure to add spaces after colons so they parse as valid YAML key value pairs.
This is easy to forget.
But it's probably better to use the --print action to see what it's going to do
before you actually write to the archive:
<!-- eslint-skip -->
```sh
comicbox test.cbz -m "{Tags: a,b,c, story_arcs: {d: 1, e: '', f: 3}}" -m "Publisher: SmallComics" -p
```
A recursive example:
<!-- eslint-skip -->
```sh
comicbox --recurse -m "publisher: 'SC Comics'" -w cr ./SmallComicsComics/
```
Will recursively change the publisher to "SC Comics" for every comic found
under the SmallComicsComics directory.
#### Escaping YAML
The `-m` command line argument accepts YAML for tags. Certain
characters like `\,:;_()$%^@` are part of the YAML language. To successfully
include them as data in your tags, consult
["Escaping YAML" documentation online](https://www.w3schools.io/file/yaml-escape-characters/).
##### Deleting Metadata
To delete metadata from the cli you're best off exporting the current metadata,
editing the file and then re-importing it with the delete previous metadata
option:
<!-- eslint-skip -->
```sh
# export the current metadata
comicbox --export cix "My Overtagged Comic.cbz"
# Adjust the metadata in an editor.
nvim comicinfo.xml
# Check that importing the metadata will look how you like
comicbox --import comicinfo.xml -p "My Overtagged Comic.cbz"
# Delete all previous metadata from the comic (careful!)
comicbox --delete-all-tags "My Overtagged Comic.cbz"
# Import the metadata into the file and write it.
comicbox --import comicinfo.xml --write cix "My Overtagged Comic.cbz"
```
#### Quirks
##### --metadata parses all formats.
The comicbox.yaml format represents the ComicInfo.xml Web tag as a sub-tag,
`identifiers.<NID>.url`. But fear not, you don't have to remember this. The
CLI accepts heterogeneous tag types with the `-m` option, so you can type:
<!-- eslint-skip -->
```sh
comicbox -p -m "Web: https://foo.com" mycomic.cbz
```
and the identifier tag should appear in comicbox.yaml as:
```yaml
identifiers:
foo.com:
id_key: ""
url: https://foo.com
```
You don't even need the root tag.
##### Setting Title when Stories are present.
If the metadata contains Stories (MetronInfo.xml only), the title is computed
from the Stories. If you wish to set the title regardless, use the `--replace`
option, e.g.
```sh
comicbox -m "{series: 'G.I. Robot', title: 'Foreign and Domestic'}" -Rp
```
But be aware it will also create a story with the title's new name.
##### Identifiers
Comicbox aggregates IDs, GTINs, and URLs from other formats into a common
Identifiers structure.
##### Reprints
Comicbox aggregates Alternate Names, Aliases, and IsVersionOf tags from other
formats into a common Reprints list.
##### URNs
Because the Notes field in ComicInfo.xml is commonly abused to represent fields
ComicInfo does not (yet?) support, comicbox parses the Notes field heavily,
looking for embedded data. Comicbox also writes identifiers into the Notes
field using a
[Uniform Resource Name](https://en.wikipedia.org/wiki/Uniform_Resource_Name)
format.
Comicbox also looks for identifiers in Tag fields of formats that don't have
their own Identifiers field.
##### Prettified Fields
Comicbox liberally accepts all kinds of values that may be enums in other
formats, like AgeRating, Formats, and Credit Roles. In a weak attempt to
standardize these values, comicbox will title-case values submitted to these
fields. When writing to standard formats, comicbox attempts to transform these
values into enums supported by the output format.
#### Packages
Comicbox actually installs three different packages:
- `comicbox` The main API and CLI script.
- `comicfn2dict` A separate library for parsing comic filenames into dicts;
  it also includes a CLI script.
- `pdffile` A utility library for reading and writing PDF files with an API
  like Python's ZipFile.
### ⚙️ Config
Comicbox accepts command line arguments, as well as an optional config file and
environment variables.
The variables have defaults specified in
[a default yaml](https://github.com/ajslater/comicbox/blob/main/comicbox/config_default.yaml)
The environment variables are the variable name prefixed with `COMICBOX_`. (e.g.
COMICBOX_COMICINFOXML=0)
#### Log Level
To change the logging level:
<!-- eslint-skip -->
```sh
LOGLEVEL=ERROR comicbox -p <path>
```
## 🛠 API
Comicbox is mostly used by me in [Codex](https://github.com/ajslater/codex/) as
a metadata extractor. Here's a brief example, but the API remains undocumented.
```python
from comicbox.box import Comicbox

with Comicbox(path_to_comic) as cb:
    metadata = cb.to_dict()
    page_count = cb.page_count()
    file_type = cb.get_file_type()
    mtime = cb.get_metadata_mtime()
    image_data = cb.get_cover_page(to_pixmap=True)
```
Attached to these docs in the navigation header there are some auto generated
API docs that might be better than nothing.
### API Example
I don't have many examples yet. But here's one someone asked about on GitHub.
#### Adding a ComicInfo.xml formatted dict to the metadata
```python
from argparse import Namespace
from pathlib import Path

from comicbox.box import Comicbox
from comicbox.transforms.comicinfo import ComicInfoTransform
CBZ_PATH = Path("/Users/GullyFoyle/Comics/DC Comics/Star Spangled War Stories/Star Spangled War Stories (1962) #101.cbz")
CONFIG = Namespace(
# This config writes comicinfo.xml and also reads comicinfo.xml from the source file.
# If you don't want to read old data, do not include the read argument.
comicbox=Namespace(write=["cix"], read=["cix"], compute_pages=False)
)
# You can use any comic metadata format as long as it matches its transform class.
CIX_DICT = { .... } # A ComicInfo.xml style dict.
# xml dicts are those parsed and emitted by xmltodict https://github.com/martinblech/xmltodict
# read about complex elements with attributes on that page.
SOURCE_TRANSFORM_CLASS = ComicInfoTransform
with Comicbox(CBZ_PATH, config=CONFIG) as car:
car.add_source(CIX_DICT, SOURCE_TRANSFORM_CLASS)
car.write() # this will write using the config to the cbz_path.
```
This code would be similar to these command line arguments:
```sh
comicbox --import my-own-comicbox.json --import my-own-comicinfo.xml --write cr "Star Spangled War Stories (1962) #101.cbz"
```
## 📋 Schemas
Comicbox supports most popular comic book metadata schema definitions. These
are defined on the [SCHEMAS page](SCHEMAS.md).
## 🔀 Tag Translations
A rough [table](TAGS.md) of how Comicbox handles tag translations between
popular comic book metadata formats.
## 🛠 Development
Comicbox code is hosted at [Github](https://github.com/ajslater/comicbox)
You may access most development tasks from the makefile. Run `make` to see
documentation.
### Environment variables
There is a special environment variable, `DEBUG_TRANSFORM`, that will print
verbose schema transform information.
| text/markdown | AJ Slater | AJ Slater <aj@slater.net> | null | null | null | cb7, cbr, cbt, cbz, comet, comic, comicbookinfo, comicinfo, metroninfo, pdf | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"ansicolors~=1.1.8",
"bidict~=0.23",
"case-converter~=1.2",
"comicfn2dict<1.0.0,>=0.2.4",
"confuse<2.2.0,~=2.1.0",
"deepdiff~=8.2",
"glom~=25.12",
"loguru~=0.7",
"jsonschema[format]~=4.20",
"marshmallow-jsonschema-2>=0.1.0",
"marshmallow-union>=0.1.15.post1",
"marshmallow~=4.2.0",
"py7zr~=1.... | [] | [] | [] | [
"News, https://comicbox.readthedocs.io/NEWS/",
"Documentation, https://comicbox.readthedocs.io",
"Source, https://github.com/ajslater/comicbox"
] | uv/0.9.30 {"installer":{"name":"uv","version":"0.9.30","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T22:46:59.377255 | comicbox-2.2.0.tar.gz | 71,195,818 | 15/ab/3b5662b9a12061800e2f229b1b5da2441a1e6deb63581e6edb2f2d9d82b4/comicbox-2.2.0.tar.gz | source | sdist | null | false | a345c7dab45ef95d42d7cc0c68c1011f | 85c9dc50ed8e549a28f6f87f6ffff2e89251987dbff8cbc469004f2387f4989d | 15ab3b5662b9a12061800e2f229b1b5da2441a1e6deb63581e6edb2f2d9d82b4 | LGPL-3.0-only | [] | 400 |
2.4 | pylearn-reader | 1.0.1 | Interactive Python learning desktop app with PDF book reader and code editor | # PyLearn
[](https://github.com/fritz99-lang/pylearn/actions/workflows/ci.yml)
[](LICENSE)
[](https://www.python.org/)
[](https://mypy-lang.org/)
An interactive desktop app for learning programming from PDF books. Split-pane interface with a book reader on the left and a code editor + console on the right — read the book, write code, and run it all in one place.
## Screenshots
**Light mode** — reading with Book menu open:

**Dark mode** — reading with code execution:

**Sepia mode** — reading view:

**Sepia mode** — running code (Zen of Python):

## Features
- **PDF Book Reader** — Parses PDF books into structured, styled HTML with headings, body text, and syntax-highlighted code blocks
- **Code Editor** — QScintilla-powered editor with syntax highlighting, line numbers, auto-indent, and configurable font/tab settings
- **Code Execution** — Run Python code directly from the editor with output displayed in an integrated console (30s timeout, sandboxed subprocess)
- **Table of Contents** — Auto-generated chapter navigation from PDF structure
- **Progress Tracking** — SQLite database tracks chapter completion status, bookmarks, and notes per book
- **Bookmarks & Notes** — Save bookmarks and attach notes to any page
- **Multiple Book Profiles** — Supports Python, C++, and HTML/CSS books with per-book font classification profiles
- **Themes** — Light, dark, and sepia themes for the reader panel
- **External Editor** — Launch code in Notepad++ or your preferred external editor
- **Parsed Content Caching** — PDF parsing results cached as JSON for fast subsequent loads
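The execution model described above (a sandboxed subprocess with a 30-second timeout) can be sketched with the standard library. This is an illustrative approximation, not PyLearn's actual executor; the function name and defaults are assumptions:

```python
import subprocess
import sys

def run_snippet(code: str, timeout: float = 30.0) -> str:
    """Run code in a separate, isolated interpreter with a hard timeout."""
    try:
        result = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return result.stdout + result.stderr
    except subprocess.TimeoutExpired:
        return f"[timed out after {timeout:.0f}s]"

print(run_snippet("print(2 + 2)"))  # 4
```

Running in a child process means a runaway snippet can be killed without affecting the app itself.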
## Requirements
- Python 3.12 or newer
- Windows, macOS, or Linux
### Platform Notes
| Platform | Notes |
|----------|-------|
| **Windows** | Works out of the box with Python 3.12+ |
| **Linux** | Install system deps first: `sudo apt-get install libegl1 libxkbcommon0` |
| **macOS** | May need Xcode command line tools: `xcode-select --install` |
## Installation
Pick **one** of the methods below.
### Option A: Install from PyPI (simplest)
```bash
pip install pylearn-reader
```
### Option B: Install from GitHub (latest code)
```bash
pip install git+https://github.com/fritz99-lang/pylearn.git
```
### Option C: Clone the repo (for development)
```bash
git clone https://github.com/fritz99-lang/pylearn.git
cd pylearn
# Create a virtual environment (recommended)
python -m venv .venv
source .venv/bin/activate # Linux/macOS
.venv\Scripts\activate # Windows
pip install -e . # editable install
pip install -e ".[dev]" # or include pytest + mypy
```
## Registering a Book
PyLearn reads PDF books, but it needs to know where your PDFs are. You do this by editing a single config file called `books.json`.
### Step 1 — Find your config directory
Where `books.json` lives depends on how you installed:
| Install method | Config location |
|----------------|-----------------|
| **Git clone** (Option C) | `config/` folder inside the repo — configs are created automatically from the `.json.example` files on first launch |
| **pip install** (Option A or B) | A `config/` folder inside your system's app-data directory (see table below) |
**App-data directory by platform** (pip installs only):
| Platform | Path |
|----------|------|
| Windows | `%LOCALAPPDATA%\pylearn\config\` (typically `C:\Users\<you>\AppData\Local\pylearn\config\`) |
| macOS | `~/Library/Application Support/pylearn/config/` |
| Linux | `~/.local/share/pylearn/config/` |
> **Tip:** Launch the app once (`python -m pylearn`) and it will create the config directory for you. Then you just need to add your `books.json` file inside it.
### Step 2 — Create (or edit) `books.json`
If the file doesn't exist yet, create it. Add one entry per book:
```json
{
"books": [
{
"book_id": "learning_python",
"title": "Learning Python, 5th Edition",
"pdf_path": "C:/Users/you/Documents/Learning_Python.pdf",
"language": "python",
"profile_name": "learning_python"
}
]
}
```
**Field reference:**
| Field | Required | Description |
|-------|----------|-------------|
| `book_id` | Yes | A short unique ID you make up — no spaces, lowercase with underscores (e.g. `"my_python_book"`) |
| `title` | Yes | Display name shown in the app |
| `pdf_path` | Yes | **Absolute path** to the PDF file on your computer. Use forward slashes even on Windows (`C:/Users/...`) |
| `language` | No | `"python"` (default), `"cpp"`, or `"html"` — controls syntax highlighting in the editor |
| `profile_name` | No | A named parsing profile that fine-tunes how PDF fonts are classified. Leave it as `""` (empty string) for auto-detection, which works well for most books |
**Available named profiles** (only needed if auto-detection doesn't work well for your book):
| `profile_name` | Best for |
|-----------------|----------|
| `learning_python` | *Learning Python* by Mark Lutz |
| `python_cookbook` | *Python Cookbook* by Beazley & Jones |
| `programming_python` | *Programming Python* by Mark Lutz |
| `cpp_generic` | General C++ textbooks |
| `cpp_primer` | *C++ Primer* by Lippman et al. |
| `effective_cpp` | *Effective C++* by Scott Meyers |
| *(empty string)* | Auto-detect from the PDF — **try this first** |
### Step 3 — Launch and parse
```bash
python -m pylearn
```
On first launch for each book, the app will ask if you want to parse the PDF. Click **Yes** — this takes a minute or two depending on the book size. After parsing, the book content is cached so subsequent launches are instant.
## Usage
| Area | What it does |
|------|-------------|
| **Left panel** | Book reader — navigate chapters via the table of contents sidebar |
| **Right panel (top)** | Code editor — write or paste code from the book |
| **Right panel (bottom)** | Console — see output from running your code |
| **Toolbar** | Theme switching, bookmarks, notes, progress tracking |
## Keyboard Shortcuts
| Category | Shortcut | Action |
|----------|----------|--------|
| **Navigation** | `Alt+Left` / `Alt+Right` | Previous / next chapter |
| | `Ctrl+M` | Mark chapter complete |
| | `Ctrl+T` | Toggle TOC panel |
| **Search** | `Ctrl+F` | Find in current chapter |
| | `Ctrl+Shift+F` | Search all books |
| **Code** | `F5` | Run code |
| | `Shift+F5` | Stop execution |
| | `Ctrl+S` | Save code to file |
| | `Ctrl+O` | Load code from file |
| | `Ctrl+E` | Open in external editor |
| **View** | `Ctrl+=` / `Ctrl+-` | Increase / decrease font size |
| | `Ctrl+1` / `2` / `3` | Focus TOC / reader / editor |
| **Notes** | `Ctrl+B` | Add bookmark |
| | `Ctrl+N` | Add note |
| **Help** | `Ctrl+/` | Show shortcuts dialog |
## Configuration
Config files are JSON. For git-clone installs they live in `config/` inside the repo. For pip installs they live in your app-data directory (see [Registering a Book](#registering-a-book) for the exact path).
- **`app_config.json`** — Window size, theme, splitter positions, last opened book
- **`books.json`** — Registered books with PDF paths and profile names
- **`editor_config.json`** — Editor font size, tab width, line numbers, execution timeout
## Development
```bash
# Run all tests (702 tests)
pytest tests/ -v
# Skip slow tests
pytest tests/ -v -m "not slow"
# Type checking
mypy src/pylearn/
# Pre-parse books to cache
python scripts/parse_books.py
# Analyze PDF font metadata (useful for creating new book profiles)
python scripts/analyze_pdf_fonts.py path/to/book.pdf
```
## Project Structure
```
src/pylearn/
parser/ PDF parsing, font analysis, content classification, caching
renderer/ HTML rendering, syntax highlighting, themes
executor/ Subprocess-based code execution with sandboxing
ui/ PyQt6 widgets (main window, reader, editor, console, dialogs)
core/ Config, database, models, constants
utils/ Text utilities, error handling
config/ User-specific JSON config (not committed; see *.json.example)
data/ SQLite database + parsed PDF cache (not committed)
tests/ 702 tests (500+ unit + 150+ integration)
scripts/ Utility scripts for PDF analysis and book parsing
```
## Troubleshooting
**App won't start**
- Make sure PyQt6 is installed: `pip install PyQt6 PyQt6-QScintilla`
- Verify Python 3.12+: `python --version`
- On Linux, install system deps: `sudo apt-get install libegl1 libxkbcommon0`
**"No books registered" or no books appear**
- Make sure `books.json` exists in the correct config directory (see [Registering a Book](#registering-a-book) above)
- Open the file and check for JSON syntax errors (missing commas, unclosed braces)
- Make sure each entry has at least `book_id`, `title`, and `pdf_path`
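A quick way to check a `books.json` for syntax errors and missing required fields is a few lines of stdlib Python; the sample entry below is illustrative, so point the check at the contents of your own file:

```python
import json

# Replace this sample string with the contents of your books.json
sample = '''
{
  "books": [
    {"book_id": "learning_python",
     "title": "Learning Python, 5th Edition",
     "pdf_path": "C:/Users/you/Documents/Learning_Python.pdf"}
  ]
}
'''
data = json.loads(sample)  # raises json.JSONDecodeError on syntax errors
required = {"book_id", "title", "pdf_path"}
problems = [b for b in data["books"] if required - b.keys()]
print("OK" if not problems else f"Entries missing required fields: {problems}")
```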
**"PDF not found" error during parsing**
- The `pdf_path` in `books.json` must be an **absolute path** — relative paths won't work
- Use forward slashes, even on Windows: `"C:/Users/you/Documents/book.pdf"`
- Double-check the file actually exists at that path
**Book not parsing correctly**
- Try auto-detection first: set `"profile_name": ""` in your book entry
- Use **Book > Re-parse (clear cache)** from the menu bar to force a fresh parse
- If auto-detection gives poor results, try a named profile (see the profile table above)
- For dev installs: run `python scripts/analyze_pdf_fonts.py path/to/book.pdf` to inspect font metadata
**Code execution timeout**
- Default timeout is 30 seconds
- Increase it in `editor_config.json` by setting `"execution_timeout"` to a higher value (in seconds)
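For example, to raise the timeout to 120 seconds, `editor_config.json` could contain the following (other keys omitted; only the `execution_timeout` key comes from the text above, and the value is an example):

```json
{
  "execution_timeout": 120
}
```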
## Acknowledgments
Built in partnership with [Claude Code](https://claude.ai/claude-code) (Anthropic) — architecture, implementation, testing, and code review.
## License
[MIT](LICENSE) - Copyright (c) 2026 Nathan Tritle
| text/markdown | Nate Tritle | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"PyQt6>=6.6.0",
"PyQt6-QScintilla>=2.14.0",
"PyMuPDF>=1.23.0",
"Pygments>=2.17.0",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-qt>=4.2.0; extra == \"dev\"",
"mypy>=1.8.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T22:46:58.120620 | pylearn_reader-1.0.1.tar.gz | 77,624 | f9/93/c433bbca5061a9ff7c635003a7ce29ee465357209651e4dc22a6a9bfab0b/pylearn_reader-1.0.1.tar.gz | source | sdist | null | false | 315943307ffc56522df2d1a341ca63c3 | 2bafce2fdc6f82782e17e2c1b7a4a35f776aa6f4964f4538ef2207fc90b98812 | f993c433bbca5061a9ff7c635003a7ce29ee465357209651e4dc22a6a9bfab0b | MIT | [
"LICENSE"
] | 236 |
2.3 | nauyaca | 0.11.0 | Modern Gemini protocol server and client implementation using asyncio | # Nauyaca - Gemini Protocol Server & Client
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/astral-sh/ruff)
[](http://mypy-lang.org/)
[](https://nauyaca.readthedocs.io)
A modern, high-performance implementation of the [Gemini protocol](https://geminiprotocol.net) in Python. Nauyaca (pronounced "now-YAH-kah", meaning "serpent" in Nahuatl) uses asyncio's low-level Protocol/Transport pattern for efficient, non-blocking network I/O with both server and client capabilities.
## Key Features
- **High Performance** - asyncio Protocol/Transport pattern for maximum efficiency
- **Security First** - Mandatory TLS 1.2+, TOFU certificate validation, rate limiting, and access control
- **Production Ready** - TOML configuration, middleware system, and systemd integration
- **Developer Friendly** - Full type hints, comprehensive tests, and modern tooling with `uv`
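The TOFU (trust-on-first-use) certificate validation mentioned above can be sketched in a few lines of standard-library Python. This illustrates the idea only and is not nauyaca's implementation:

```python
import hashlib

# Pin the first certificate fingerprint seen per host; flag later changes.
known_hosts: dict[str, str] = {}

def tofu_check(host: str, cert_der: bytes) -> bool:
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    pinned = known_hosts.setdefault(host, fingerprint)
    return pinned == fingerprint

assert tofu_check("example.org", b"cert-A")      # first sight: pin and trust
assert tofu_check("example.org", b"cert-A")      # unchanged cert: ok
assert not tofu_check("example.org", b"cert-B")  # changed cert: reject
```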
> **[Read the full documentation](https://nauyaca.readthedocs.io)**
## Quick Start
### Installation
```bash
# As a CLI tool (recommended)
uv tool install nauyaca
# As a library
uv add nauyaca
# From source (for development)
git clone https://github.com/alanbato/nauyaca.git
cd nauyaca && uv sync
```
### Run a Server
```bash
# Serve content from a directory
nauyaca serve ./capsule
# With configuration file
nauyaca serve --config config.toml
# With hot-reload for development
nauyaca serve ./capsule --reload
```
### Fetch Content
```bash
# Get a Gemini resource
nauyaca get gemini://geminiprotocol.net/
# With verbose output
nauyaca get gemini://geminiprotocol.net/ --verbose
```
### Use as a Library
```python
import asyncio
from nauyaca.client import GeminiClient
async def main():
    async with GeminiClient() as client:
        response = await client.get("gemini://geminiprotocol.net/")
        if response.is_success():
            print(response.body)
        elif response.is_redirect():
            print(f"Redirect to: {response.redirect_url}")
        else:
            print(f"Error {response.status}: {response.meta}")

asyncio.run(main())
```
## Advanced Features
### Location-Based Routing
Nauyaca supports flexible routing with multiple handlers per server. Configure different URL paths to use different handlers (static files, proxy, etc.):
```toml
# Proxy API requests to backend server
[[locations]]
prefix = "/api/"
handler = "proxy"
upstream = "gemini://backend.example.com:1965"
strip_prefix = true # /api/resource → /resource on backend
timeout = 30
# Serve static content for everything else
[[locations]]
prefix = "/"
handler = "static"
document_root = "./capsule"
```
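The `strip_prefix` rewrite shown in the comment above can be expressed as a tiny path-mapping function (a plain-Python illustration of the rule, not nauyaca's router):

```python
def upstream_path(path: str, prefix: str, strip_prefix: bool) -> str:
    """Map an incoming request path to the path sent to the upstream server."""
    if strip_prefix and path.startswith(prefix):
        return "/" + path[len(prefix):]
    return path

print(upstream_path("/api/resource", "/api/", True))   # /resource
print(upstream_path("/api/resource", "/api/", False))  # /api/resource
```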
### Reverse Proxy
Use Nauyaca as a reverse proxy to aggregate multiple Gemini capsules:
```toml
# Forward blog requests to blog server
[[locations]]
prefix = "/blog/"
handler = "proxy"
upstream = "gemini://blog.example.com"
strip_prefix = true
# Forward wiki requests to wiki server
[[locations]]
prefix = "/wiki/"
handler = "proxy"
upstream = "gemini://wiki.example.com"
strip_prefix = true
# Serve homepage from local files
[[locations]]
prefix = "/"
handler = "static"
document_root = "./homepage"
```
See `config.example.toml` for more configuration examples.
## Documentation
**[nauyaca.readthedocs.io](https://nauyaca.readthedocs.io)**
| Section | Description |
|---------|-------------|
| [Installation](https://nauyaca.readthedocs.io/installation/) | Setup and requirements |
| [Quick Start](https://nauyaca.readthedocs.io/quickstart/) | Get running in 5 minutes |
| [Tutorials](https://nauyaca.readthedocs.io/tutorials/) | Step-by-step guides |
| [How-to Guides](https://nauyaca.readthedocs.io/how-to/) | Task-oriented recipes |
| [Reference](https://nauyaca.readthedocs.io/reference/) | CLI, configuration, API |
| [Explanation](https://nauyaca.readthedocs.io/explanation/) | Architecture and concepts |
## Contributing
```bash
# Setup
git clone https://github.com/alanbato/nauyaca.git
cd nauyaca && uv sync
# Test
uv run pytest
# Lint & Type Check
uv run ruff check src/ tests/
uv run mypy src/
```
See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
MIT License - see [LICENSE](LICENSE) for details.
## Resources
- [Gemini Protocol](https://geminiprotocol.net) - Official protocol website
- [SECURITY.md](SECURITY.md) - Security documentation and vulnerability reporting
- [GitHub Issues](https://github.com/alanbato/nauyaca/issues) - Bug reports
- [GitHub Discussions](https://github.com/alanbato/nauyaca/discussions) - Questions and ideas
## Acknowledgments
- **Solderpunk** for creating the Gemini protocol
- The **Gemini community** for feedback and inspiration
---
**Status**: Active development (pre-1.0). Core protocol and security features are stable.
| text/markdown | Alan Velasco | Alan Velasco <alan@alanbato.com> | null | null | MIT | gemini, protocol, server, client, asyncio, networking | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"tlacacoca>=0.2.1",
"rich>=14.2.0",
"tomli>=2.0.0; python_full_version < \"3.11\"",
"typer>=0.20.0",
"watchfiles>=0.20.0; extra == \"reload\""
] | [] | [] | [] | [
"Repository, https://github.com/alanbato/nauyaca",
"Bug Tracker, https://github.com/alanbato/nauyaca/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:46:49.744244 | nauyaca-0.11.0.tar.gz | 49,655 | 8c/be/c672ddfab03a1deeb92594fb0c429d87a0f0dbc8c0d591fe2ac902ebab28/nauyaca-0.11.0.tar.gz | source | sdist | null | false | 57335c6fa255e6545a266e7cf3124630 | 3d844bf2631dc9b5f2b959bf594262025a29b2ce1f42b8f7f9b05ac15210d719 | 8cbec672ddfab03a1deeb92594fb0c429d87a0f0dbc8c0d591fe2ac902ebab28 | null | [] | 262 |
2.4 | flamapy | 2.5.0.dev0 | Flamapy feature model is a distribution of the flama framework containing all plugins required to analyze feature models. It also offers a richer API and a complete command line interface and documentation. | <div align="center">
<a href="">[](https://github.com/flamapy/flamapy/actions/workflows/tests.yml)</a>
<a href="">[](https://github.com/flamapy/flamapy/actions/workflows/commits.yml)</a>
</div>
#
<div id="top"></div>
<br />
<div align="center">
<h3 align="center">FLAMAPY</h3>
<p align="center">
A new and easy way to use FLAMA
<br />
<a href="https://github.com/flamapy/flamapy/issues">Report Bug</a>
·
<a href="https://github.com/flamapy/flamapy/issues">Request Feature</a>
</p>
</div>
<!-- ABOUT THE PROJECT -->
# About The Project
FLAMAPY Feature model distribution provides an easier way of using FLAMAPY when analysing feature models. It packs the most used plugins for analysis of feature models, adding a layer of convenience to use the framework or integrate it. If some other operations are required, please check the [documentation](https://flamapy.github.io/docs) to see whether they are supported in the ecosystem through the Python interface.
Feature Model Analysis has a crucial role in software product line engineering, enabling us to understand, design, and validate the complex relationships among features in a software product line. These feature models can often be complex and challenging to analyze due to their variability, making it difficult to identify conflicts, dead features, and potential optimizations. This is where this distribution comes in.
# Using the CMD
```bash
flamapy --help  # This will show the commands available
flamapy satisfiable <path to file>  # Check whether a model is satisfiable (valid)
...
```
# Using the Python interface
This is simple: the distribution is hosted on PyPI, so simply add the `flamapy` package to your requirements file and call the API as follows:
```python
from flamapy.interfaces.python.flama_feature_model import FLAMAFeatureModel
fm = FLAMAFeatureModel("path/to/feature/model")
print(fm.valid())
```
# Operations available:
Currently this distribution offers a subset of the operations available in the ecosystem; in the case of the feature model distribution, we provide those operations that are well tested and used by the community. Nonetheless, if other more complex operations are required, you can rely on the Python interface of the framework to execute them all.
* atomic_sets: This operation is used to find the atomic sets in a model: It returns the atomic sets if they are found in the model. If the model does not follow the UVL specification, an exception is raised and the operation returns False.
* average_branching_factor: This refers to the average number of child features that a parent feature has in a feature model. It's calculated by dividing the total number of child features by the total number of parent features. A high average branching factor indicates a complex feature model with many options, while a low average branching factor indicates a simpler model.
* commonality: This is a measure of how often a feature appears in the products of a product line. It's usually expressed as a percentage. A feature with 100 per cent commonality is a core feature, as it appears in all products.
* configurations: These are the individual outcomes that can be produced from a feature model. Each product is a combination of features that satisfies all the constraints and dependencies in the feature model.
* configurations_number: This is the total number of different full configurations that can be produced from a feature model. It's calculated by considering all possible combinations of features, taking into account the constraints and dependencies between features.
* core_features: These are the features that are present in all products of a product line. In a feature model, they are the features that are mandatory and not optional. Core features define the commonality among all products in a product line. This call requires SAT; however, there is an implementation within flamapy that does not require SAT, so please use the framework directly if you need it.
* count_leafs: This operation counts the number of leaf features in a feature model. Leaf features are those that do not have any child features. They represent the most specific options in a product line.
* dead_features: These are features that, due to the constraints and dependencies in the feature model, cannot be included in any valid product. Dead features are usually a sign of an error in the feature model.
* estimated_number_of_configurations: This is an estimate of the total number of different products that can be produced from a feature model. It's calculated by considering all possible combinations of features. This can be a simple multiplication if all
features are independent, but in most cases, constraints and dependencies between features need to be taken into account.
* false_optional_features: These are features that appear to be optional in the feature model, but due to the constraints and dependencies, must be included in every valid product. Like dead features, false optional features are usually a sign of an error in the feature model.
* feature_ancestors: These are the features that are directly or indirectly the parent of a given feature in a feature model. Ancestors of a feature are found by traversing up the feature hierarchy. This information can be useful to understand the context and dependencies of a feature.
* filter: This operation selects a subset of the products of a product line based on certain criteria. For example, you might filter the products to only include those that contain a certain feature.
* leaf_features: This operation is used to find leaf features in a model: It returns the leaf features if they are found in the model. If the model does not follow the UVL specification, an exception is raised and the operation returns False.
* max_depth: This operation is used to find the max depth of the tree in a model: It returns the max depth of the tree. If the model does not follow the UVL specification, an exception is raised and the operation returns False.
* satisfiable: In the context of feature models, this usually refers to whether the feature model itself satisfies all the
constraints and dependencies. A valid feature model is one that encodes at least one valid product.
* satisfiable_configuration: This is a product that is produced from a valid configuration of features. A valid product satisfies all the constraints and dependencies in the feature model.
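As a concrete illustration of two of the metrics above (average branching factor and leaf count), here is a stdlib-only toy computation over a hand-made feature tree; it is not flamapy's implementation:

```python
# Hand-made feature tree: feature -> list of child features.
tree = {
    "Pizza": ["Topping", "Size", "Dough"],
    "Topping": ["Cheese", "Ham"],
    "Size": ["Big", "Small"],
    "Dough": [], "Cheese": [], "Ham": [], "Big": [], "Small": [],
}
parents = [f for f, children in tree.items() if children]
# average_branching_factor: total child features / total parent features
avg_branching = sum(len(tree[p]) for p in parents) / len(parents)
# count_leafs: features with no children
leafs = [f for f, children in tree.items() if not children]
print(round(avg_branching, 2))  # 2.33  (7 children / 3 parents)
print(len(leafs))               # 5
```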
| text/markdown | null | Flamapy <flamapy@us.es> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"flamapy-fw~=2.5.0.dev0",
"flamapy-fm~=2.5.0.dev0",
"flamapy-sat~=2.5.0.dev0",
"flamapy-bdd~=2.5.0.dev0",
"pytest; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"prospector; extra == \"dev\"",
"mypy; extra == \"dev\"",
"coverage; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/flamapy/flamapy-feature-model"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T22:46:45.971720 | flamapy-2.5.0.dev0.tar.gz | 16,920 | 04/3a/f138126371fdb95f1e62fd4fd342894a68b247bf58f585fa0469aaba56d0/flamapy-2.5.0.dev0.tar.gz | source | sdist | null | false | 8d64e3db76991f681ce788bbd90256bf | 7252198ed31b3f306140ebe8c02dc081ecfe9779d7795f9d6d1c0a64a4e4fcbb | 043af138126371fdb95f1e62fd4fd342894a68b247bf58f585fa0469aaba56d0 | GPL-3.0-or-later | [
"LICENSE"
] | 201 |
2.4 | flamapy-bdd | 2.5.0.dev0 | bdd-plugin for the automated analysis of feature models | # BDD plugin for flamapy
- [BDD plugin for flamapy](#bdd-plugin-for-flamapy)
- [Description](#description)
- [Requirements and Installation](#requirements-and-installation)
- [Functionality and usage](#functionality-and-usage)
- [Load a feature model in UVL and create the BDD](#load-a-feature-model-in-uvl-and-create-the-bdd)
- [Save the BDD in a file](#save-the-bdd-in-a-file)
- [Load the BDD from a file](#load-the-bdd-from-a-file)
- [Analysis operations](#analysis-operations)
- [Contributing to the BDD plugin](#contributing-to-the-bdd-plugin)
## Description
This plugin supports Binary Decision Diagrams (BDDs) representations for feature models.
The plugin is based on [flamapy](https://github.com/flamapy/core) and thus, it follows the same architecture:
<p align="center">
<img width="750" src="doc/bdd_plugin.png">
</p>
The BDD plugin relies on the [dd](https://github.com/tulip-control/dd) library to manipulate BDDs.
The complete documentation of such library is available [here](https://github.com/tulip-control/dd/blob/main/doc.md).
The following is an example of feature model and its BDD using complemented arcs.
<p align="center">
<img width="750" src="doc/fm_example.png">
</p>
<p align="center">
<img width="750" src="doc/bdd_example.svg">
</p>
## Requirements and Installation
- Python 3.9+
- This plugin depends on the [flamapy core](https://github.com/flamapy/core) and on the [Feature Model plugin](https://github.com/flamapy/fm_metamodel).
```
pip install flamapy flamapy-fm flamapy-bdd
```
We have tested the plugin on Linux, but Windows is also supported.
## Functionality and usage
The executable script [test_bdd_metamodel.py](https://github.com/flamapy/bdd_metamodel/blob/master/tests/test_bdd_metamodel.py) serves as an entry point to show the plugin in action.
The following functionality is provided:
### Load a feature model in UVL and create the BDD
```python
from flamapy.metamodels.fm_metamodel.transformations import UVLReader
from flamapy.metamodels.bdd_metamodel.transformations import FmToBDD
# Load the feature model from UVL
feature_model = UVLReader('models/uvl_models/pizzas.uvl').transform()
# Create the BDD from the feature model
bdd_model = FmToBDD(feature_model).transform()
```
### Save the BDD in a file
```python
from flamapy.metamodels.bdd_metamodel.transformations import PNGWriter, DDDMPv3Writer
# Save the BDD as an image in PNG
PNGWriter('my_bdd.png', bdd_model).transform()
# Save the BDD in a .dddmp file
DDDMPv3Writer('my_bdd.dddmp', bdd_model).transform()
```
Writers available: DDDMPv3 ('dddmp'), DDDMPv2 ('dddmp'), JSON ('json'), Pickle ('p'), PDF ('pdf'), PNG ('png'), SVG ('svg').
### Load the BDD from a file
```python
from flamapy.metamodels.bdd_metamodel.transformations import JSONReader
# Load the BDD from a .json file
bdd_model = JSONReader(path='path/to/my_bdd.json').transform()
```
Readers available: JSON ('json'), DDDMP ('dddmp'), Pickle ('p').
*NOTE:* DDDMP and Pickle readers are not fully supported yet.
### Analysis operations
- Satisfiable
Return whether the model is satisfiable (valid):
```python
from flamapy.metamodels.bdd_metamodel.operations import BDDSatisfiable
satisfiable = BDDSatisfiable().execute(bdd_model).get_result()
print(f'Satisfiable? (valid?): {satisfiable}')
```
- Configurations number
Return the number of configurations:
```python
from flamapy.metamodels.bdd_metamodel.operations import BDDConfigurationsNumber
n_configs = BDDConfigurationsNumber().execute(bdd_model).get_result()
print(f'#Configurations: {n_configs}')
```
- Configurations
Enumerate the configurations of the model:
```python
from flamapy.metamodels.bdd_metamodel.operations import BDDConfigurations
configurations = BDDConfigurations().execute(bdd_model).get_result()
for i, config in enumerate(configurations, 1):
    print(f'Config {i}: {[feat for feat in config.elements if config.elements[feat]]}')
```
- Sampling
Return a sample of the given size of uniform random configurations with or without replacement:
```python
from flamapy.metamodels.bdd_metamodel.operations import BDDSampling
sampling_op = BDDSampling()
sampling_op.set_sample_size(5)
sampling_op.set_with_replacement(False) # Default False
sample = sampling_op.execute(bdd_model).get_result()
for i, config in enumerate(sample, 1):
    print(f'Config {i}: {[feat for feat in config.elements if config.elements[feat]]}')
```
- Product Distribution
Return the number of products (configurations) having a given number of features:
```python
from flamapy.metamodels.bdd_metamodel.operations import BDDProductDistribution
dist = BDDProductDistribution().execute(bdd_model).get_result()
print(f'Product Distribution: {dist}')
```
- Feature Inclusion Probability
Return the probability for a feature to be included in a valid configuration:
```python
from flamapy.metamodels.bdd_metamodel.operations import BDDFeatureInclusionProbability
prob = BDDFeatureInclusionProbability().execute(bdd_model).get_result()
for feat in prob.keys():
    print(f'{feat}: {prob[feat]}')
```
- Core features
Return the core features (those features that are present in all the configurations):
```python
from flamapy.metamodels.bdd_metamodel.operations import BDDCoreFeatures
core_features = BDDCoreFeatures().execute(bdd_model).get_result()
print(f'Core features: {core_features}')
```
- Dead features
Return the dead features (those features that are not present in any configuration):
```python
from flamapy.metamodels.bdd_metamodel.operations import BDDDeadFeatures
dead_features = BDDDeadFeatures().execute(bdd_model).get_result()
print(f'Dead features: {dead_features}')
```
Most analysis operations support also a partial configuration as an additional argument, so the operation will return the result taking into account the given partial configuration. For example:
```python
from flamapy.core.models import Configuration
# Create a partial configuration
elements = {'Pizza': True, 'Big': True}
partial_config = Configuration(elements)
# Calculate the number of configuration from the partial configuration
configs_number_op = BDDConfigurationsNumber()
configs_number_op.set_partial_configuration(partial_config)
n_configs = configs_number_op.execute(bdd_model).get_result()
print(f'#Configurations: {n_configs}')
```
## Contributing to the BDD plugin
To contribute in the development of this plugin:
1. Fork the repository into your GitHub account.
2. Clone the repository: `git clone git@github.com:<<username>>/bdd_metamodel.git`
3. Create a virtual environment: `python -m venv env`
4. Activate the virtual environment: `source env/bin/activate`
5. Install the plugin dependencies: `pip install flamapy flamapy-fm`
6. Install the BDD plugin from the source code: `pip install -e bdd_metamodel`
Please try to follow the code quality standards to contribute to this plugin before creating a Pull Request:
- To analyze your Python code and output information about errors, potential problems, convention violations and complexity, pass the prospector with:
`make lint`
- To analyze the static type checker for Python and find bugs, pass the Mypy:
`make mypy`
| text/markdown | null | Flamapy <flamapy@us.es> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"flamapy-fw~=2.5.0.dev0",
"flamapy-fm~=2.5.0.dev0",
"dd~=0.6.0",
"graphviz~=0.20",
"pytest; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"prospector; extra == \"dev\"",
"mypy; extra == \"dev\"",
"coverage; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/flamapy/bdd_metamodel"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T22:46:10.236714 | flamapy_bdd-2.5.0.dev0.tar.gz | 35,597 | 58/24/a73376ed803bff7f9dc64cb83eb0a3135a20a5fe11eabd189c97d2d0542b/flamapy_bdd-2.5.0.dev0.tar.gz | source | sdist | null | false | 8b77a5bdd75a8be72d0a921f296ed391 | 6f78c74ff0cc6df92c38107a927fa810dc49590449e8b0d6e32dfda3b807cce5 | 5824a73376ed803bff7f9dc64cb83eb0a3135a20a5fe11eabd189c97d2d0542b | GPL-3.0-or-later | [
"LICENSE"
] | 213 |
2.4 | flamapy-z3 | 2.5.0.dev0 | z3-plugin for the automated analysis of feature models | # Automated Analysis of UVL using Satisfiability Modulo Theories
## Description
This repository contains the plugin that supports z3 representations for feature models.
The plugin is based on [flamapy](https://flamapy.github.io/), and relies on the [Z3 solver](https://github.com/Z3Prover/z3?tab=readme-ov-file) library. The architecture is as follows:
<p align="center">
<img width="750" src="resources/images/z3metamodel.png">
</p>
## Requirements and Installation
- [Python 3.11+](https://www.python.org/)
- [Flamapy](https://www.flamapy.org/)
The framework has been tested on Linux and Windows 11 with Python 3.12. Python 3.13+ may not yet be supported.
### Download and installation
1. Install [Python 3.11+](https://www.python.org/).
2. Download/Clone this repository and enter into the main directory.
3. Create a virtual environment: `python -m venv env`
4. Activate the environment:
In Linux: `source env/bin/activate`
In Windows: `.\env\Scripts\Activate`
5. Install dependencies (flamapy): `pip install -r requirements.txt`
**Note:** If you are running Ubuntu and get an error installing flamapy, install the `python3-dev` package with `sudo apt update && sudo apt install python3-dev`, and update wheel and setuptools with `pip install --upgrade pip wheel setuptools`, before step 5.
## Functionality and usage
The executable script [test.py](/test.py) serves as an entry point to demonstrate the plugin.
Simply run `python test.py` to see it in action on the running feature model presented in the paper.
The following functionality is provided:
### Load a feature model in UVL and translate to SMT
```python
from flamapy.metamodels.fm_metamodel.transformations import UVLReader
from flamapy.metamodels.z3_metamodel.transformations import FmToZ3
# Load the feature model from UVL
fm_model = UVLReader('resources/models/uvl_models/Pizza_z3.uvl').transform()
# Transform the feature model to SMT
z3_model = FmToZ3(fm_model).transform()
```
### Analysis operations
The following operations are available:
```python
from flamapy.metamodels.z3_metamodel.operations import (
    Z3Satisfiable,
    Z3Configurations,
    Z3ConfigurationsNumber,
    Z3CoreFeatures,
    Z3DeadFeatures,
    Z3FalseOptionalFeatures,
    Z3AttributeOptimization,
    Z3SatisfiableConfiguration,
    Z3FeatureBounds,
    Z3AllFeatureBounds,
)
```
- **Satisfiable**
Return whether the model is satisfiable (valid):
```python
satisfiable = Z3Satisfiable().execute(z3_model).get_result()
print(f'Satisfiable? (valid?): {satisfiable}')
```
- **Core features**
Return the core features of the model:
```python
core_features = Z3CoreFeatures().execute(z3_model).get_result()
print(f'Core features: {core_features}')
```
- **Dead features**
Return the dead features of the model:
```python
dead_features = Z3DeadFeatures().execute(z3_model).get_result()
print(f'Dead features: {dead_features}')
```
- **False-Optional features**
Return the false-optional features of the model:
```python
false_optional_features = Z3FalseOptionalFeatures().execute(z3_model).get_result()
print(f'False-optional features: {false_optional_features}')
```
- **Configurations**
Enumerate the configurations of the model:
```python
configurations = Z3Configurations().execute(z3_model).get_result()
print(f'Configurations: {len(configurations)}')
for i, config in enumerate(configurations, 1):
    config_str = ', '.join(
        f'{f}={v}' if not isinstance(v, bool) else f'{f}'
        for f, v in config.elements.items()
        if config.is_selected(f)
    )
    print(f'Config. {i}: {config_str}')
```
- **Configurations number**
Return the number of configurations:
```python
n_configs = Z3ConfigurationsNumber().execute(z3_model).get_result()
print(f'Configurations number: {n_configs}')
```
- **Boundaries analysis of typed features**
Return the boundaries of the typed features (Integer, Real, String) of the model:
```python
attributes = fm_model.get_attributes()
print('Attributes in the model')
for attr in attributes:
    print(f' - {attr.name} ({attr.attribute_type})')
variable_bounds = Z3AllFeatureBounds().execute(z3_model).get_result()
print('Variable bounds for all typed variables:')
for var_name, bounds in variable_bounds.items():
    print(f' - {var_name}: {bounds}')
```
- **Configuration optimization based on feature attributes:**
Return the set of configurations that optimize the given goals (i.e., the pareto front):
```python
attribute_optimization_op = Z3AttributeOptimization()
# Note: OptimizationGoal must also be imported from flamapy's operations module.
attributes = {'Price': OptimizationGoal.MAXIMIZE,
              'Kcal': OptimizationGoal.MINIMIZE}
attribute_optimization_op.set_attributes(attributes)
configurations_with_values = attribute_optimization_op.execute(z3_model).get_result()
print(f'Optimum configurations: {len(configurations_with_values)} configs.')
for i, config_value in enumerate(configurations_with_values, 1):
    config, values = config_value
    config_str = ', '.join(
        f'{f}={v}' if not isinstance(v, bool) else f'{f}'
        for f, v in config.elements.items()
        if config.is_selected(f)
    )
    values_str = ', '.join(f'{k}={v}' for k, v in values.items())
    print(f'Config. {i}: {config_str} | Values: {values_str}')
```
- **Configuration validation**
Return whether a given partial or full configuration is valid:
```python
from flamapy.metamodels.configuration_metamodel.transformations import ConfigurationJSONReader
configuration = ConfigurationJSONReader('resources/configs/pizza_z3_config1.json').transform()
configuration.set_full(False)
print(f'Configuration: {configuration.elements}')
satisfiable_configuration_op = Z3SatisfiableConfiguration()
satisfiable_configuration_op.set_configuration(configuration)
is_satisfiable = satisfiable_configuration_op.execute(z3_model).get_result()
print(f'Is the configuration satisfiable? {is_satisfiable}')
```
**Note:** The Z3Configurations and Z3ConfigurationsNumber operations may take longer if the number of configurations is huge, and may not finish at all if the model is unbounded.
**Note:** The Z3Configurations and Z3ConfigurationsNumber operations also accept a partial configuration as an additional argument; the operation then returns its result taking the given partial configuration into account.
For example:
```python
from flamapy.core.models import Configuration
from flamapy.metamodels.z3_metamodel.operations import Z3ConfigurationsNumber

# Create a partial configuration
elements = {'Pizza': True, 'SpicyLvl': 5}
partial_config = Configuration(elements)

# Calculate the number of configurations from the partial configuration
configs_number_op = Z3ConfigurationsNumber()
configs_number_op.set_partial_configuration(partial_config)
n_configs = configs_number_op.execute(z3_model).get_result()
print(f'#Configurations: {n_configs}')
```
| text/markdown | null | Flamapy <flamapy@us.es> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"flamapy-fw~=2.5.0.dev0",
"flamapy-fm~=2.5.0.dev0",
"z3-solver~=4.14.1.0",
"pytest; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"prospector; extra == \"dev\"",
"mypy; extra == \"dev\"",
"coverage; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/flamapy/z3_metamodel"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T22:45:49.553891 | flamapy_z3-2.5.0.dev0.tar.gz | 22,996 | 27/29/8bd22330c610e5991779785a3051e0c49b0d10ac11038cf11922275ab7dd/flamapy_z3-2.5.0.dev0.tar.gz | source | sdist | null | false | 367a0f67d0ba8a345f49941458f51028 | beea9b7e0c9c8adb733daac23fa8cd325359afee7e4d241136daf5cee39b0fc9 | 27298bd22330c610e5991779785a3051e0c49b0d10ac11038cf11922275ab7dd | GPL-3.0-or-later | [] | 216 |
2.4 | flamapy-sat | 2.5.0.dev0 | flamapy-sat is a plugin to flamapy module | # pysat_metamodel
This repository hosts the PySAT metamodel and its operation implementations.
## Install for development
``` bash
pip install -e .
```
## Make sure that you have installed python-dev
``` bash
sudo apt install python-dev #python3-dev in Ubuntu derivatives
```
| text/markdown | null | Flamapy <flamapy@us.es> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"flamapy-fw~=2.5.0.dev0",
"flamapy-fm~=2.5.0.dev0",
"python-sat~=0.1.7.dev1",
"pytest; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"prospector; extra == \"dev\"",
"mypy; extra == \"dev\"",
"coverage; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/flamapy/pysat_metamodel"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T22:45:47.648547 | flamapy_sat-2.5.0.dev0.tar.gz | 27,071 | 95/b3/338b5cd30080c28a541e4717baa29f812a482dfbfddf9a09fe2478372c87/flamapy_sat-2.5.0.dev0.tar.gz | source | sdist | null | false | bd5d8bfc5b84e25cb001e6ed637ab251 | e1f5b7598fe01aea62280ea5b2c0d11573214c57780f8147a392bbe8203e3574 | 95b3338b5cd30080c28a541e4717baa29f812a482dfbfddf9a09fe2478372c87 | GPL-3.0-or-later | [
"LICENSE"
] | 212 |
2.4 | sdv-installer | 0.3.0 | Package to install SDV Enterprise packages. | _This package is part of [The Synthetic Data Vault Project](https://sdv.dev/),
a project from [DataCebo](https://datacebo.com/)._
# Overview
The [**Synthetic Data Vault**](https://docs.sdv.dev/sdv) (SDV) is a Python
library designed to be your one-stop shop for creating tabular synthetic data.
You can get started with the publicly-available SDV Community for exploring the
benefits of synthetic data. When you're ready to take synthetic data to the
next level, upgrade to [SDV Enterprise](https://docs.sdv.dev/sdv/explore/sdv-enterprise)
and purchase [Bundles](https://docs.sdv.dev/sdv/explore/sdv-bundles) to access additional features.
This package provides a CLI for SDV Enterprise customers to easily access and
manage all SDV-related software. It allows you to:
* Input your SDV username and license key
* List out all the SDV-related packages that you have access to
* Download and install those packages
```
% sdv-installer install
Username: <email>
License Key: ********************************
Installing SDV Enterprise:
sdv-enterprise (version 0.30.0) - Installed!
Installing Bundles:
bundle-cag - Installed!
bundle-xsynthesizers - Installed!
Success! All packages have been installed. You are ready to use SDV Enterprise.
```
# How it works
The SDV Installer is a convenience wrapper around
[pip](https://pypi.org/project/pip/), the package installer for Python.
Under the hood, the SDV Installer takes your input, determines which
packages you can access, and calls pip to install those packages with the
appropriate flags and options.
To see the exact pip commands, use the `--debug` flag:
```
% sdv-installer install --debug
Username: <email>
License Key: ********************************
Installing SDV Enterprise:
pip install sdv-enterprise==0.30.0 --index-url <URL>@pypi.datacebo.com/ --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org --trusted-host pypi.datacebo.com
Installing Bundles:
...
Success! All packages have been installed. You are ready to use SDV Enterprise.
```
# Get the SDV Installer
Get the latest version of the SDV Installer.
```
pip install sdv-installer --upgrade
```
The SDV Installer is set up to access and manage all SDV-related packages:
SDV Enterprise as well as Bundles. You can manage installation for a variety of
scenarios, including offline installation, partial installation, upgrading, and
more.
**For more details about installing SDV Enterprise, please see our
[Installation
Guide](https://docs.sdv.dev/sdv-enterprise/installation/instructions).**
| text/markdown | null | "DataCebo, Inc." <info@datacebo.com> | null | null | null | sdv, synthetic-data, synthetic-data-generation, timeseries, single-table, multi-table | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python ::... | [] | null | null | <3.15,>=3.9 | [] | [] | [] | [
"requests",
"platformdirs",
"pip>=22.3",
"packaging",
"pytest; extra == \"test\"",
"invoke; extra == \"test\"",
"packaging; extra == \"test\"",
"pip>=9.0.1; extra == \"dev\"",
"pytest; extra == \"dev\"",
"invoke; extra == \"dev\"",
"bump-my-version>=0.18.3; extra == \"dev\"",
"ruff>=0.7.1; ext... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:45:45.022485 | sdv_installer-0.3.0.tar.gz | 23,377 | 01/69/151388774ed1e6d8c48150e7a364f9991752d8ab4a4e6ceecb432456f5ef/sdv_installer-0.3.0.tar.gz | source | sdist | null | false | 0e3de092af173df7989929932be0cee8 | 3c6b77dfac745a70fdc746701dc1fdace4de83daea748794c803413542c94841 | 0169151388774ed1e6d8c48150e7a364f9991752d8ab4a4e6ceecb432456f5ef | MIT | [
"LICENSE"
] | 248 |
2.4 | flamapy-fm | 2.5.0.dev0 | flamapy-fm is a plugin to Flamapy module | # fm_metamodel
This repository hosts the feature model concrete classes.
## Install for development
```
pip install -e .
```
| text/markdown | null | Flamapy <flamapy@us.es> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"flamapy-fw~=2.5.0.dev0",
"uvlparser~=2.5.0.dev63",
"afmparser~=1.0.3",
"pytest; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"prospector; extra == \"dev\"",
"mypy; extra == \"dev\"",
"coverage; extra == \"dev\"",
"antlr4-tools; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/flamapy/fm_metamodel"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T22:45:04.146962 | flamapy_fm-2.5.0.dev0.tar.gz | 51,421 | e1/2c/3ab911929338f316c49304299d14eaceeb52ef6fd603e4ea33ed6756032a/flamapy_fm-2.5.0.dev0.tar.gz | source | sdist | null | false | d14506cea0109ee74c84b3df7b72f480 | d442d878356f25225faeced4a1eef6dfb58f7dbb0354ad0b3568c31e336d3738 | e12c3ab911929338f316c49304299d14eaceeb52ef6fd603e4ea33ed6756032a | GPL-3.0-or-later | [
"LICENSE"
] | 262 |
2.4 | flamapy-fw | 2.5.0.dev0 | Flamapy is a Python-based AAFM framework that takes into consideration previous AAFM tool designs and enables multi-solver and multi-metamodel support for the integration of AAFM tooling on the Python ecosystem. | # Flamapy
Flamapy is a Python-based AAFM framework that takes into consideration previous AAFM tool designs and enables multi-solver and multi-metamodel support for the integration of AAFM tooling on the Python ecosystem.
The main features of the framework are:
* Easy to extend by enabling the creation of new plugins following a semi-automatic generator approach.
* Support multiple variability models. Currently, it provides support for cardinality-based feature models; however, it is easy to integrate others, such as attributed feature models.
* Support multiple solvers. Currently, it provides support for the PySAT metasolver, which enables more than ten different solvers.
* Support multiple operations. It is developed, having in mind multi-model operations such as those depicted by Familiar and single-model operations.
## Available plugins
[flamapy](https://github.com/flamapy/flamapy). This is a meta-package that installs all plugins for feature modelling analysis and the CLI and Python interfaces.
[flamapy-fm](https://github.com/flamapy/fm_metamodel) This is a plugin that provides support for feature modelling. Include several readers/writers from different formats.
[flamapy-sat](https://github.com/flamapy/pysat_metamodel) This plugin enables different analysis operations that require SAT as a backend.
[flamapy-bdd](https://github.com/flamapy/bdd_metamodel) This plugin enables different analysis operations that require BDD as a backend.
## Documentation
All the project-related documentation can be found in wiki format at [the tool website](https://flamapy.github.io/)
## Changelog
Detailed changes for each release are documented in the [release notes](https://github.com/diverso-lab/core/releases)
## Contributing
See [CONTRIBUTING.md](https://github.com/diverso-lab/core/blob/master/CONTRIBUTING.md)
| text/markdown | null | Flamapy <flamapy@us.es> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"pytest; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"prospector; extra == \"dev\"",
"mypy; extra == \"dev\"",
"coverage; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/flamapy/core"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T22:44:21.848215 | flamapy_fw-2.5.0.dev0.tar.gz | 20,332 | fb/7f/0c95196679a7b303afa8922b3e4527ccbb21b4920912facb51fd27412ebf/flamapy_fw-2.5.0.dev0.tar.gz | source | sdist | null | false | 3c37809adb2d0910f26b3b466cf9fc9a | 72b65ced5d857e89dbd36124cc990d9b970d2d4ec8f56ac9725bbb27964d85a1 | fb7f0c95196679a7b303afa8922b3e4527ccbb21b4920912facb51fd27412ebf | GPL-3.0-or-later | [
"LICENSE"
] | 277 |
2.4 | chatunify | 0.1.0 | Convert any sentence or chat messages array to the official chat template for OpenAI, Mistral, and Qwen models. | # chatunify
Convert any sentence or chat messages array into the official chat template for OpenAI, Mistral, and Qwen models.
## Installation
```bash
pip install chatunify
```
## Quick Start
```python
from chatunify import apply_template
# Plain string — auto-wrapped as a user message
result = apply_template("Hello!", model="gpt-4o")
result = apply_template("Hello!", model="mistral-large-latest")
result = apply_template("Hello!", model="qwen2.5-72b-instruct")
# Full messages array
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]
result = apply_template(messages, model="mistral-large-latest")
```
## Supported Models
| Provider | Model name patterns | Template |
|---|---|---|
| **OpenAI** | `gpt-*`, `o1`, `o3`, `o4`, `chatgpt-*` | JSON messages array (API format) |
| **Mistral V1** | `mistral-7b-v0.1` | `<s> [INST] user [/INST] assistant</s>` |
| **Mistral V2** | `mistral-7b-v0.2`, `mixtral-*`, `mistral-small/medium` | `<s>[INST] user[/INST] assistant</s>` |
| **Mistral V3** | `mistral-large`, `mistral-nemo`, `codestral`, `*-2407+` | `<s>[INST]user[/INST]assistant</s>` |
| **Qwen** | `qwen*`, `qwq*` | ChatML `<\|im_start\|>role\ncontent<\|im_end\|>` |
## Template Outputs
### OpenAI
Returns a pretty-printed JSON string of the messages array — the canonical OpenAI API input format.
```json
[
  {"role": "system", "content": "You are helpful."},
  {"role": "user", "content": "Hello!"}
]
```
### Mistral (V3 / Tekken)
```
<s>[INST]You are helpful.
Hello![/INST]
```
### Qwen (ChatML)
```
<|im_start|>system
You are helpful.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```
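As a rough illustration of how a messages array maps to the ChatML output above, here is a hypothetical sketch (`to_chatml` is a made-up helper for illustration only, not part of the chatunify package; the real work is done by `apply_template`):

```python
# Hypothetical sketch of the ChatML mapping shown above;
# this is NOT chatunify's implementation, just an illustration.
def to_chatml(messages):
    """Render a messages array in ChatML and open an assistant turn."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant")  # generation prompt for the model
    return "\n".join(parts)


print(to_chatml([
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hello!"},
]))
```

Running this prints the Qwen example above; in practice you would call `apply_template(messages, model="qwen2.5-72b-instruct")` instead.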
## API
### `apply_template(input, model)`
| Parameter | Type | Description |
|---|---|---|
| `input` | `str` or `list[dict]` | Plain string or list of `{"role": ..., "content": ...}` dicts |
| `model` | `str` | Model name (e.g. `"gpt-4o"`, `"mistral-large-latest"`, `"qwen2.5-72b-instruct"`) |
**Returns:** `str` — the formatted prompt string.
### `detect_provider(model)`
Returns `(provider, variant)` tuple for a given model name. Useful for debugging routing.
```python
from chatunify import detect_provider
detect_provider("gpt-4o") # ("openai", None)
detect_provider("mistral-large-latest") # ("mistral", "v3")
detect_provider("qwen2.5-72b-instruct") # ("qwen", None)
```
## Running Tests
```bash
python tests/test_templates.py
```
| text/markdown | null | null | null | null | MIT License
Copyright (c) 2026
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/you/chatunify"
] | twine/6.2.0 CPython/3.11.12 | 2026-02-18T22:44:16.656262 | chatunify-0.1.0.tar.gz | 6,616 | a2/4c/51a7cdda3f3077dc84cea81302d59f7b6b45b18c5c477af3594aaa6da301/chatunify-0.1.0.tar.gz | source | sdist | null | false | 861b32bf58d5bdb28ba78724a1429475 | 199c7506888fd593d598bbe63cd38054f9128361a8dc8430a47815a7e4871ad0 | a24c51a7cdda3f3077dc84cea81302d59f7b6b45b18c5c477af3594aaa6da301 | null | [
"LICENSE"
] | 264 |
2.4 | dtale | 3.20.0 | Web Client for Visualizing Pandas Objects | |image0|
- `Live Demo <http://alphatechadmin.pythonanywhere.com>`__
--------------
|CircleCI| |PyPI Python Versions| |PyPI| |Conda| |ReadTheDocs| |codecov|
|Downloads| |Open in VS Code|
What is it?
-----------
D-Tale is the combination of a Flask back-end and a React front-end to
bring you an easy way to view & analyze Pandas data structures. It
integrates seamlessly with ipython notebooks & python/ipython terminals.
Currently this tool supports such Pandas objects as DataFrame, Series,
MultiIndex, DatetimeIndex & RangeIndex.
Origins
-------
D-Tale was the product of a SAS to Python conversion. What was
originally a perl script wrapper on top of SAS’s ``insight`` function is
now a lightweight web client on top of Pandas data structures.
In The News
-----------
- `4 Libraries that can perform EDA in one line of python
code <https://towardsdatascience.com/4-libraries-that-can-perform-eda-in-one-line-of-python-code-b13938a06ae>`__
- `React Status <https://react.statuscode.com/issues/204>`__
- `KDNuggets <https://www.kdnuggets.com/2020/08/bring-pandas-dataframes-life-d-tale.html>`__
- `Man Institute <https://www.man.com/maninstitute/d-tale>`__ (warning:
contains deprecated functionality)
- `Python
Bytes <https://pythonbytes.fm/episodes/show/169/jupyter-notebooks-natively-on-your-ipad>`__
- `FlaskCon 2020 <https://www.youtube.com/watch?v=BNgolmUWBp4&t=33s>`__
- `San Diego
Python <https://www.youtube.com/watch?v=fLsGur5YqeE&t=29s>`__
- `Medium: towards data
science <https://towardsdatascience.com/introduction-to-d-tale-5eddd81abe3f>`__
- `Medium: Exploratory Data Analysis – Using
D-Tale <https://medium.com/da-tum/exploratory-data-analysis-1-4-using-d-tale-99a2c267db79>`__
- `EOD Notes: Using python and dtale to analyze
correlations <https://www.google.com/amp/s/eod-notes.com/2020/05/07/using-python-and-dtale-to-analyze-correlations/amp/>`__
- `Data Exploration is Now Super Easy w/
D-Tale <https://dibyendudeb.com/d-tale-data-exploration-tool/>`__
- `Practical Business
Python <https://pbpython.com/dataframe-gui-overview.html>`__
Tutorials
---------
- `Pip Install Python YouTube
Channel <https://m.youtube.com/watch?v=0RihZNdQc7k&feature=youtu.be>`__
- `machine_learning_2019 <https://www.youtube.com/watch?v=-egtEUVBy9c>`__
- `D-Tale The Best Library To Perform Exploratory Data Analysis Using
Single Line Of
Code🔥🔥🔥🔥 <https://www.youtube.com/watch?v=xSXGcuiEzUc>`__
- `Explore and Analyze Pandas Data Structures w/
D-Tale <https://m.youtube.com/watch?v=JUu5IYVGqCg>`__
- `Data Preprocessing simplest method
🔥 <https://www.youtube.com/watch?v=Q2kMNPKgN4g>`__
Related Resources
-----------------
- `Adventures In Flask While Developing
D-Tale <https://github.com/man-group/dtale/blob/master/docs/FlaskCon/FlaskAdventures.md>`__
- `Adding Range Selection to
react-virtualized <https://github.com/man-group/dtale/blob/master/docs/RANGE_SELECTION.md>`__
- `Building Draggable/Resizable
Modals <https://github.com/man-group/dtale/blob/master/docs/DRAGGABLE_RESIZABLE_MODALS.md>`__
- `Embedding Flask Apps within
Streamlit <https://github.com/man-group/dtale/blob/master/docs/EMBEDDED_STREAMLIT.md>`__
Where To get It
---------------
The source code is currently hosted on GitHub at:
https://github.com/man-group/dtale
Binary installers for the latest released version are available at the
`Python package index <https://pypi.org/project/dtale>`__ and on conda
using `conda-forge <https://github.com/conda-forge/dtale-feedstock>`__.
.. code:: sh

   # conda
   conda install dtale -c conda-forge

   # if you want to also use "Export to PNG" for charts
   conda install -c plotly python-kaleido
.. code:: sh

   # or PyPI
   pip install dtale
Getting Started
---------------
======== =========
PyCharm jupyter
======== =========
|image9| |image10|
======== =========
Python Terminal
~~~~~~~~~~~~~~~
This comes courtesy of PyCharm |image11| Feel free to invoke ``python``
or ``ipython`` directly and use the commands in the screenshot above and
it should work
Issues With Windows Firewall
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you run into issues with viewing D-Tale in your browser on Windows
please try making Python public under “Allowed Apps” in your Firewall
configuration. Here is a nice article: `How to Allow Apps to Communicate
Through the Windows
Firewall <https://www.howtogeek.com/howto/uncategorized/how-to-create-exceptions-in-windows-vista-firewall/>`__
Additional functions available programmatically
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code:: python

   import dtale
   import pandas as pd

   df = pd.DataFrame([dict(a=1, b=2, c=3)])

   # Assigning a reference to a running D-Tale process
   d = dtale.show(df)

   # Accessing data associated with D-Tale process
   tmp = d.data.copy()
   tmp['d'] = 4

   # Altering data associated with D-Tale process
   # FYI: this will clear any front-end settings you have at the time for this process (filter, sorts, formatting)
   d.data = tmp

   # Shutting down D-Tale process
   d.kill()

   # using Python's `webbrowser` package it will try and open your server's default browser to this process
   d.open_browser()

   # There is also some helpful metadata about the process
   d._data_id  # the process's data identifier
   d._url  # the url to access the process

   d2 = dtale.get_instance(d._data_id)  # returns a new reference to the instance running at that data_id
   dtale.instances()  # prints a list of all ids & urls of running D-Tale sessions
License
-------
D-Tale is licensed under the GNU LGPL v2.1. A copy of which is included
in
`LICENSE <https://github.com/man-group/dtale/blob/master/LICENSE.md>`__
Additional Documentation
------------------------
Located on the main `github repo <https://github.com/man-group/dtale>`__
.. |image0| image:: https://raw.githubusercontent.com/aschonfeld/dtale-media/master/images/Title.png
:target: https://github.com/man-group/dtale
.. |CircleCI| image:: https://circleci.com/gh/man-group/dtale.svg?style=shield&circle-token=4b67588a87157cc03b484fb96be438f70b5cd151
:target: https://circleci.com/gh/man-group/dtale
.. |PyPI Python Versions| image:: https://img.shields.io/pypi/pyversions/dtale.svg
:target: https://pypi.python.org/pypi/dtale/
.. |PyPI| image:: https://img.shields.io/pypi/v/dtale
:target: https://pypi.org/project/dtale/
.. |Conda| image:: https://img.shields.io/conda/v/conda-forge/dtale
:target: https://anaconda.org/conda-forge/dtale
.. |ReadTheDocs| image:: https://readthedocs.org/projects/dtale/badge
:target: https://dtale.readthedocs.io
.. |codecov| image:: https://codecov.io/gh/man-group/dtale/branch/master/graph/badge.svg
:target: https://codecov.io/gh/man-group/dtale
.. |Downloads| image:: https://pepy.tech/badge/dtale
:target: https://pepy.tech/project/dtale
.. |Open in VS Code| image:: https://img.shields.io/badge/Visual_Studio_Code-0078D4?style=for-the-badge&logo=visual%20studio%20code&logoColor=white
:target: https://open.vscode.dev/man-group/dtale
.. |image9| image:: https://raw.githubusercontent.com/aschonfeld/dtale-media/master/gifs/dtale_demo_mini.gif
.. |image10| image:: https://raw.githubusercontent.com/aschonfeld/dtale-media/master/gifs/dtale_ipython.gif
.. |image11| image:: https://raw.githubusercontent.com/aschonfeld/dtale-media/master/images/Python_Terminal.png
| null | MAN Alpha Technology | ManAlphaTech@man.com | null | null | LGPL | numeric, pandas, visualization, flask | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)",
"Operating System :: OS Independent",
"Intended Audience :: Science/Research",
"Programming Language :: Python",
"Topic :: Scientific/Engineering",
"Programming Language :: Python :: 2.7",... | [] | https://github.com/man-group/dtale | null | null | [] | [] | [] | [
"lz4<=2.2.1; python_version == \"2.7\"",
"lz4<=3.1.10; python_version == \"3.6\"",
"lz4; python_version > \"3.6\"",
"beautifulsoup4!=4.13.0b2; python_version > \"3.0\"",
"beautifulsoup4<=4.9.3; python_version == \"2.7\"",
"Brotli<=1.0.9; python_version == \"2.7\"",
"certifi<=2021.10.8; python_version ==... | [] | [] | [] | [] | twine/4.0.1 CPython/3.8.0 | 2026-02-18T22:43:49.467914 | dtale-3.20.0.tar.gz | 16,444,012 | dd/b3/83cd4eb0ec47c354f31e8252338f52ea6f43f98dd631f5a4a23c8e33feed/dtale-3.20.0.tar.gz | source | sdist | null | false | 4c12b609eae8159a9eb80cc4d72b5c10 | 4a60f8b5e11e59056a8c7b05dcf746835b48bae2626c69fb2a77459d657128c8 | ddb383cd4eb0ec47c354f31e8252338f52ea6f43f98dd631f5a4a23c8e33feed | null | [] | 1,879 |
2.4 | FinTrack | 1.2.0 | A robust portfolio tracker with multi-currency support and short selling | # FinTrack - Advanced Portfolio Tracker
[](./tests)
[](https://www.python.org/)
[](LICENSE)
A Python package for tracking and analyzing stock portfolios with multi-currency support, automatic dividend tracking, and short selling.
## Important note
Tests, instructions, and docstrings were written using Claude. I tried to find any incorrect information, but some may have slipped through the cracks.
## Features
- **Portfolio Management**: Track multiple stock holdings with buy/sell transactions
- **Short Selling**: Open and close short positions with mark-to-market daily valuation
- **Dynamic Price Tracking**: Automatically fetch and store historical stock prices using yfinance
- **Multi-Currency Support**: Handle stocks traded in different currencies with automatic conversion
- **Cash Management**: Maintain accurate cash balances accounting for all transaction types and dividend payments
- **Dividend Tracking**: Automatically capture and account for dividend payments (long positions only)
- **Historical Analysis**: Query portfolio composition and value at any point in time
- **Stock Returns Analysis**: Calculate individual stock performance including short positions
- **Index Comparison**: Compare your portfolio returns against benchmark indices
- **Comprehensive Logging**: Track all operations with detailed logging
- **Input Validation**: Validate all transaction data before processing
## Installation
Install the package using pip:
```bash
pip install FinTrack
```
For development:
```bash
git clone https://github.com/arofredriksson/FinTrack.git
cd FinTrack
pip install -e ".[dev]"
pytest tests/
```
## Quick Start
### 1. Create a Transaction CSV
Create `transactions.csv`:
```csv
Date;Ticker;Type;Amount;Price
2023-01-15;AAPL;Buy;10;150.00
2023-02-20;MSFT;Buy;5;250.00
2023-03-10;AAPL;Sell;5;165.00
2023-04-05;TSLA;Buy;2;800.00
2023-06-01;TSLA;Short;3;290.00
2023-09-15;TSLA;Cover;3;240.00
```
### 2. Initialize and Query
```python
from FinTrack import FinTrack
from datetime import date
# Create portfolio
portfolio = FinTrack(
initial_cash=150000,
currency="USD",
csv_file="transactions.csv"
)
# Update with latest data
portfolio.update_portfolio()
# Get current holdings (short positions prefixed with "Short: ")
holdings = portfolio.get_current_holdings()
print(f"Holdings: {holdings}")
# Get portfolio value over time (shorts valued mark-to-market)
values = portfolio.get_portfolio_value(
date(2023, 1, 1),
date(2023, 12, 31)
)
# Get portfolio summary
summary = portfolio.get_portfolio_summary()
print(f"Total Value: {summary['total_value']:,.2f} {summary['currency']}")
```
## CSV Format
**Delimiter:** Semicolon (`;`)
**Required columns:**
| Column | Type | Description |
|--------|---------|---------------------------|
| Date | YYYY-MM-DD | Transaction date |
| Ticker | String | Stock ticker symbol |
| Type | Buy/Sell/Short/Cover | Transaction type |
| Amount | Integer | Number of shares |
| Price | Number | Price per share |
**Transaction type effects:**
| Type | Shares | Cash |
|-------|-------------|-------------------------|
| Buy | +shares | −(shares × price) |
| Sell | −shares | +(shares × price) |
| Short | −shares | +(shares × price) |
| Cover | +shares | −(shares × price) |
**Example:**
```csv
Date;Ticker;Type;Amount;Price
2023-01-15;AAPL;Buy;10;150.50
2023-02-20;MSFT;Buy;5;250.75
2023-03-10;AAPL;Sell;5;165.25
2023-04-05;TSLA;Buy;2;800.00
2023-06-01;NVDA;Short;4;420.00
2023-11-01;NVDA;Cover;4;460.00
```
## Documentation
### Core Methods
#### `FinTrack.__init__(initial_cash, currency, csv_file, user_id=None)`
Initialize a portfolio tracker.
**Parameters:**
- `initial_cash`: Starting cash amount (must be non-negative)
- `currency`: Base currency code (3-letter code, e.g., 'USD', 'EUR')
- `csv_file`: Path to transactions CSV file
- `user_id`: Optional identifier for multi-user setups (default: 'default')
**Raises:**
- `FileNotFoundError`: If CSV file doesn't exist
- `ValidationError`: If parameters are invalid
#### `get_current_holdings() -> List[str]`
Get list of current stock holdings with company names. Short positions are prefixed with `"Short: "`.
#### `get_portfolio_value(from_date, to_date) -> Dict[date, float]`
Get portfolio value for each day in date range.
For short positions, value = cash (including short proceeds) + (negative_shares × current_price), so the unrealized P&L on the short is captured automatically.
#### `get_portfolio_cash(date) -> Optional[float]`
Get cash balance on specific date. Cash includes proceeds received from short sales.
#### `get_portfolio_summary() -> Dict`
Get comprehensive portfolio summary. Holdings include an `is_short` boolean field. Short positions show negative `shares` and `value`.
#### `get_stock_returns(from_date, to_date) -> Dict[str, float]`
Calculate returns for each stock held during the period, including short positions.
- **Long positions**: standard return on invested capital
- **Short positions**: return = (proceeds − cover cost) / cover cost
- **Mixed activity**: Modified Dietz-style approach
**Returns:** Dictionary mapping ticker symbols to returns (e.g., 0.062 = 6.2%, −0.05 = −5%)
#### `print_stock_returns(from_date, to_date, sort_by='return')`
Print a formatted table of stock returns. Open short positions are labelled with `(Short)`.
#### `get_index_returns(ticker, start_date, end_date) -> List[float]`
Get daily returns for a benchmark index relative to start price.
#### `update_portfolio()`
Refresh portfolio with latest data from Yahoo Finance.
### Short Selling — How It Works
#### Opening a short (`Type=Short`)
- Holdings for that ticker decrease by the shorted amount (goes negative)
- Cash increases by `shares × price` (simplified — proceeds credited immediately)
- Prices are fetched for the ticker throughout the short period
#### Closing a short (`Type=Cover`)
- Holdings for that ticker increase by the covered amount (back toward zero)
- Cash decreases by `shares × price` (cost to buy back)
#### Daily valuation of open shorts
Because short proceeds were already credited to cash, the portfolio value calculation is:
```
value = cash + Σ(shares × price)
```
Since shorted shares are negative, their contribution is negative — i.e., the current cost-to-cover is subtracted from cash. The net result is the unrealized P&L:
```
unrealized P&L = proceeds_received − current_cost_to_cover
```
**Example:** Short 10 shares of TSLA at $300 → cash +$3,000. If today's price is $260:
```
contribution = -10 × $260 = -$2,600
net = $3,000 (cash) - $2,600 = +$400 unrealized gain ✓
```
### Configuration
Data is stored in user's home directory:
```
~/.fintrack/
├── default/ # Default user
│ └── data/
│ └── portfolio.db # Portfolio database
├── user123/ # Custom user
│ └── data/
│ └── portfolio.db
└── logs/
└── fintrack.log # Activity log
```
### Logging
```python
from FinTrack import setup_logger, get_logger
import logging
logger = setup_logger("my_app", level=logging.DEBUG)
logger.info("Portfolio initialized")
```
Logs are written to `~/.fintrack/logs/fintrack.log` by default.
### Error Handling
```python
from FinTrack import (
FinTrackError,
ValidationError,
DataFetchError,
PriceError,
DatabaseError,
ConfigError,
)
try:
portfolio = FinTrack(150000, "USD", "transactions.csv")
except ValidationError as e:
print(f"Invalid input: {e}")
except FinTrackError as e:
print(f"FinTrack error: {e}")
```
### Input Validation
```python
from FinTrack import TransactionValidator
import pandas as pd

df = pd.read_csv("transactions.csv", sep=";")
is_valid, errors = TransactionValidator.validate_dataframe(df)
if not is_valid:
for error in errors:
print(f" - {error}")
```
Validation checks:
- ✓ Date format (YYYY-MM-DD)
- ✓ Ticker symbols (non-empty)
- ✓ Transaction type (Buy, Sell, Short, or Cover)
- ✓ Amount (positive integer)
- ✓ Price (positive number)
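The checks above can be mirrored with plain-stdlib row validation. This is an illustrative sketch, not `TransactionValidator`'s actual code; `validate_row` and the sample data are hypothetical.

```python
# Illustrative per-row checks mirroring the validation rules above
# (not the actual TransactionValidator implementation).
import csv, io
from datetime import datetime

VALID_TYPES = {"Buy", "Sell", "Short", "Cover"}

def validate_row(row):
    errors = []
    try:
        datetime.strptime(row["Date"], "%Y-%m-%d")
    except ValueError:
        errors.append(f"bad date: {row['Date']}")
    if not row["Ticker"].strip():
        errors.append("empty ticker")
    if row["Type"] not in VALID_TYPES:
        errors.append(f"unknown type: {row['Type']}")
    if not (row["Amount"].isdigit() and int(row["Amount"]) > 0):
        errors.append(f"amount must be a positive integer: {row['Amount']}")
    try:
        if float(row["Price"]) <= 0:
            errors.append(f"price must be positive: {row['Price']}")
    except ValueError:
        errors.append(f"bad price: {row['Price']}")
    return errors

sample = "Date;Ticker;Type;Amount;Price\n2023-01-15;AAPL;Buy;10;150.00\n2023-13-01;;Hold;-2;0\n"
for row in csv.DictReader(io.StringIO(sample), delimiter=";"):
    print(row["Date"], validate_row(row))
```

The second sample row deliberately fails every check (invalid month, empty ticker, unknown type, negative amount, zero price).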
## How It Works
### Database Structure
FinTrack uses SQLite with three main tables:
1. **portfolio**: Holdings per date (positive = long, negative = short)
2. **cash**: Cash balance tracking (includes short sale proceeds)
3. **prices**: Daily stock prices in base currency (fetched for all open positions)
### Price Management
- Prices automatically fetched from Yahoo Finance for all non-zero positions (long and short)
- Multi-currency portfolios: prices converted to base currency
- Forward-filling for missing trading days
- Custom prices from CSV supported
### Cash Flow Tracking
| Event | Cash effect |
|----------------|-----------------|
| Buy stock | Decreases |
| Sell stock | Increases |
| Short stock | Increases |
| Cover short | Decreases |
| Dividend | Increases (long positions only) |
### Stock Returns Calculation
Returns use a Modified Dietz approach treating all cash outflows (Buy, Cover) and inflows (Sell, Short) consistently, giving accurate performance metrics regardless of position type or how many times shares changed hands during the period.
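A generic Modified Dietz return looks like the sketch below. It illustrates the approach the text describes; FinTrack's exact per-ticker flow weighting may differ, and the function name and example figures are hypothetical.

```python
# Modified Dietz: (gain - net flows) / (start value + time-weighted flows).
from datetime import date

def modified_dietz(begin_value, end_value, flows, start, end):
    """flows: list of (date, amount); positive = money added to the position."""
    total_days = (end - start).days
    net_flow = sum(amount for _, amount in flows)
    weighted = sum(amount * (total_days - (d - start).days) / total_days
                   for d, amount in flows)
    return (end_value - begin_value - net_flow) / (begin_value + weighted)

# Hold $10,000, add $2,000 exactly halfway through, end at $13,000.
r = modified_dietz(10_000, 13_000, [(date(2023, 7, 2), 2_000)],
                   date(2023, 1, 1), date(2023, 12, 31))
print(f"{r:.4f}")  # 0.0909 — the mid-period inflow is half-weighted
```

Treating Short as an inflow and Cover as an outflow lets the same formula cover short positions without a special case.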
## Supported Currencies
```python
portfolio = FinTrack(100000, "USD") # US Dollar
portfolio = FinTrack(100000, "EUR") # Euro
portfolio = FinTrack(100000, "GBP") # British Pound
portfolio = FinTrack(100000, "JPY") # Japanese Yen
portfolio = FinTrack(100000, "SEK") # Swedish Krona
```
## Requirements
- Python >= 3.8
- pandas >= 1.3.0
- yfinance >= 0.2.0
## Development
### Running Tests
```bash
pytest tests/
pytest tests/ --cov=src/FinTrack --cov-report=html
pytest tests/test_validation.py
```
### Code Quality
```bash
black src/
flake8 src/
mypy src/
```
## Limitations
- Prices fetched from Yahoo Finance — verify data quality
- Daily resolution only (intra-day trading not supported)
- Short selling uses a simplified cash model (proceeds credited immediately, no margin requirements)
- Corporate actions (stock splits, mergers) must be manually adjusted
- Past dividend data depends on Yahoo Finance records
## Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
## License
MIT License — see [LICENSE](LICENSE) for details.
## Disclaimer
This software is provided as-is for educational and informational purposes. Always verify your portfolio calculations independently. The short selling implementation uses a simplified cash model and does not account for margin requirements, borrowing costs, or broker-specific rules. The author is not responsible for any financial losses resulting from use of this software.
## Changelog
See [CHANGELOG.md](CHANGELOG.md) for detailed release notes.
## Support
- [GitHub Issues](https://github.com/arofredriksson/FinTrack/issues)
- Email: arofre903@gmail.com
## Version History
- **v1.2.0** (2026-02-18): Short selling support (Short/Cover transaction types, mark-to-market valuation)
- **v1.1.1** (2026-02-15): Added stock returns analysis methods and improved index returns handling
- **v1.1.0** (2026-02-14): Major refactoring with full test suite, proper error handling, logging, and pandas 2.0 compatibility
- **v1.0.0** (2026-02-13): Initial release
---
**Built by:** Aron Fredriksson
**License:** MIT
**Last Updated:** February 2026
| text/markdown | Aron Fredriksson | Aron Fredriksson <arofre903@gmail.com> | null | null | MIT | portfolio, stocks, finance, tracking, investment, multi-currency, short selling | [
"Development Status :: 4 - Beta",
"Intended Audience :: Financial and Insurance Industry",
"Topic :: Office/Business :: Financial :: Investment",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3... | [] | https://github.com/arofre/FinTrack | null | >=3.8 | [] | [] | [] | [
"pandas>=1.3.0",
"yfinance>=0.2.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"flake8>=4.0.0; extra == \"dev\"",
"mypy>=0.950; extra == \"dev\"",
"twine>=4.0.0; extra == \"dev\"",
"build>=0.8.0; extra == \"dev\"",
"sphinx>=4.0.0; e... | [] | [] | [] | [
"Homepage, https://github.com/arofredriksson/FinTrack",
"Bug Reports, https://github.com/arofredriksson/FinTrack/issues",
"Source, https://github.com/arofredriksson/FinTrack",
"Documentation, https://github.com/arofredriksson/FinTrack/wiki"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-18T22:42:54.942157 | fintrack-1.2.0.tar.gz | 39,029 | 0c/00/a2cfbe50f85e414231289251e24df30bbfa2b340aef6a150c852ad1b35ce/fintrack-1.2.0.tar.gz | source | sdist | null | false | b2188507e5bade1ceccef919fd3e479e | 029b2ddf262c378ac698c06e5bd5cd2bc8f51e623538fb826ea9a66845ee268b | 0c00a2cfbe50f85e414231289251e24df30bbfa2b340aef6a150c852ad1b35ce | null | [
"LICENSE"
] | 0 |
2.4 | querycraft | 1.5.5 | Provide usefully SQL classes and functions to execute SQL queries step by step | # QueryCraft
**Current version: 0.2.1** (February 2025)
[TOC]
## The name?
An English name that evokes the idea of "shaping" or "crafting" SQL queries intuitively, well suited to a pedagogical approach. (GPT-4o ;-) )
## Goals
The goal of this library is to provide Python classes for manipulating SQL queries.
It also provides applications that break down the execution of an SQL query, step by step, on a PostgreSQL, MySQL, or SQLite database.
## Features
- **SQL query analysis**: Analyze and understand the structure of your SQL queries.
- **Query decomposition**: Break your SQL queries into simple steps for better understanding.
- **Multi-DBMS support**: Compatible with PostgreSQL, MySQL, and SQLite.
- **Command-line interface**: Use the command-line application to analyze and decompose your SQL queries.
- **AI assistance**: Understand your SQL errors with help from an AI.
- **Multiple display formats**: Export results as text, HTML, or Markdown (new in v0.2.1).
## Limitations
### SQL and DBMS limitations
- **Uncovered SQL operators**: Some advanced SQL operators may not be fully supported, in particular the set operators:
  `INTERSECT`, `EXCEPT`, and `UNION` are not supported.
  Subqueries in the `FROM` clause are supported, but subqueries in the `WHERE` and `HAVING` clauses are not (no step-by-step execution possible).
- **Limited SQL function support**: Some advanced SQL functions may not be fully supported.
- **DBMS version compatibility**: Compatibility with specific versions of PostgreSQL, MySQL, and SQLite may vary.
### Python version issue
QueryCraft works with Python 3.11.2. As of this writing (March 13, 2025), one library (psycopg2) has problems with Python 3.12, so it is preferable to stay on 3.11 for now.
The tool has been tested on an Apple MacBook Pro / macOS Tahoe 26.2 (Python 3.11.4, 3.12.10, and 3.14.2) and on a Raspberry Pi 4 / Raspberry Pi OS 12 Bookworm (Python 3.11.2).
## Installation
### From a Gitlab checkout:
```shell
git clone https://gitlab.univ-nantes.fr/ls2n-didactique/querycraft.git
cd querycraft
pip install -e .
```
### Without cloning from Gitlab:
```shell
pip install querycraft
```
## Updating
```shell
pip install --upgrade querycraft
```
## Usage
**For the commands and usage examples, see [HOW_TO_USE](HOW_TO_USE.md).**
The query must be written between double quotes (" ").
### Web interface (new!)
Since version 0.2.1, QueryCraft offers a modern web interface for running your SQL queries directly from a browser.
```bash
# Launch the web interface
web-sbs
# Open it in your browser
http://127.0.0.1:5000
```
**See the full documentation: [WEB_INTERFACE.md](WEB_INTERFACE.md)**
Features:
- 🌐 Intuitive, responsive web interface
- 📊 Step-by-step query visualization
- 🎨 Formatted, color-coded HTML tables
- 🤖 AI assistance in verbose mode
- 📋 Database schema display
- ⌨️ Ctrl+Enter shortcut to execute
### Output formats
Since version 0.2.1, QueryCraft supports three display formats for results:
- **txt** (default): formatted terminal output using Unicode characters
- **html**: HTML export with color support via inline CSS
- **md**: Markdown export compatible with documentation tools
The table-formatting functions (`format_table_1`, `format_table_2`, `format_table_3`) now accept an `output` parameter:
```python
from querycraft.tools import format_table_1, format_table_2, format_table_3
# Text format (default)
print(format_table_1(headers, rows))
print(format_table_1(headers, rows, output='txt'))
# HTML format
html_table = format_table_1(headers, rows, output='html')
# Markdown format
md_table = format_table_1(headers, rows, output='md')
```
**Format characteristics:**
- **txt**: uses `colorama` for coloring (red for differences, green for matches)
- **html**: generates tables with `<table>`, `<thead>`, `<tbody>` and CSS styles for coloring
- **md**: generates standard Markdown tables with visual indicators (❌/✓) for differences
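As a sketch of what the `output='md'` mode produces, a minimal Markdown table emitter looks like this (the function name and layout here are illustrative, not querycraft's code):

```python
# Emit a standard Markdown table: header row, separator row, data rows.
def to_markdown(headers, rows):
    lines = ["| " + " | ".join(headers) + " |",
             "|" + "|".join("---" for _ in headers) + "|"]
    lines += ["| " + " | ".join(str(c) for c in row) + " |" for row in rows]
    return "\n".join(lines)

print(to_markdown(["noetu", "nom"], [(1, "Alice"), (2, "Bob")]))
```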
### PostgreSQL
```shell
usage: pgsql-sbs [-h] [-u USER] [-p PASSWORD] [--host HOST] [--port PORT] -d DB [-v] [-nsbs] (-b | -f FILE | -s SQL)
Effectue l'exécution pas à pas d'une requête sur PostgreSQL (c) E. Desmontils, Nantes Université, 2024
options:
-h, --help show this help message and exit
-u USER, --user USER database user (by default desmontils-e)
-p PASSWORD, --password PASSWORD
database password
--host HOST database host (by default localhost)
--port PORT database port (by default 5432)
-d DB, --db DB database name
-v, --verbose verbose mode
-nsbs, --step_by_step
step by step mode
-b, --describe DB Schema
-f FILE, --file FILE sql file
-s SQL, --sql SQL sql string
```
### SQLite
```shell
usage: sqlite-sbs [-h] [-d DB] [-v] [-nsbs] (-b | -f FILE | -s SQL)
Effectue l'exécution pas à pas d'une requête sur SQLite (c) E. Desmontils, Nantes Université, 2024
options:
-h, --help show this help message and exit
-d DB, --db DB database name (by default cours.db)
-v, --verbose verbose mode
-nsbs, --step_by_step
step by step mode
-b, --describe DB Schema
-f FILE, --file FILE sql file
-s SQL, --sql SQL sql string
```
### MySQL
```shell
usage: mysql-sbs [-h] [-u USER] [-p PASSWORD] [--host HOST] [--port PORT] -d DB [-v] [-nsbs] (-b | -f FILE | -s SQL)
Effectue l'exécution pas à pas d'une requête sur MySQL (c) E. Desmontils, Nantes Université, 2024
options:
-h, --help show this help message and exit
-u USER, --user USER database user (by default desmontils-e)
-p PASSWORD, --password PASSWORD
database password
--host HOST database host (by default localhost)
--port PORT database port (by default 3306)
-d DB, --db DB database name
-v, --verbose verbose mode
-nsbs, --step_by_step
step by step mode
-b, --describe DB Schema
-f FILE, --file FILE sql file
-s SQL, --sql SQL sql string
```
## Configuration
Some of the tool's settings can be changed through the "admin-sbs" application.
```shell
usage: admin-sbs [-h] [--set SET [SET ...]]
Met à jour des paramètres du fichier de configuration.
options:
-h, --help show this help message and exit
--set SET [SET ...] Assignments au format Section.clef=valeur.
```
Running it without arguments displays the current settings:
```shell
% admin-sbs
Aucune assignation à traiter.
[Database]
┌───────────┬───────────┐
│ Clé ┆ Valeur │
╞═══════════╪═══════════╡
│ type ┆ sqlite │
│ database ┆ cours.db │
│ username ┆ None │
│ password ┆ None │
│ host ┆ None │
│ port ┆ None │
└───────────┴───────────┘
[LRS]
┌───────────┬────────────────────────────────────┐
│ Clé ┆ Valeur │
╞═══════════╪════════════════════════════════════╡
│ endpoint ┆ http://local.veracity.it/querycra… │
│ username ┆ user │
│ password ┆ **** │
│ mode ┆ off │
└───────────┴────────────────────────────────────┘
[IA]
┌──────────┬─────────────────┐
│ Clé ┆ Valeur │
╞══════════╪═════════════════╡
│ modele ┆ gemini-2.0-flash│
│ service ┆ google │
│ api-key ┆ None │
│ url ┆ None │
│ mode ┆ on │
└──────────┴─────────────────┘
Services reconnus : ollama, ollama_cloud, lms, poe, openai, google et generic
[Autre]
┌──────────────┬─────────┐
│ Clé ┆ Valeur │
╞══════════════╪═════════╡
│ debug ┆ False │
│ verbose ┆ False │
│ cache ┆ on │
│ duree-cache ┆ 60 │
│ aide ┆ off │
└──────────────┴─────────┘
```
This makes it possible, for example, to set the default database:
```shell
admin-sbs --set Database.type=sqlite Database.database=em.db
```
The "-d" option of "sqlite-sbs" then becomes optional.
For cloud LLM services, for security reasons, it is preferable to put API keys in environment variables rather than in the configuration file; 'api-key' should then be left at 'None'.
For example:
```shell
export POE_API_KEY=...
export OPENAI_API_KEY=...
export OLLAMA_API_KEY=...
```
Ces clés API peuvent aussi être utilisées pour gérer une défaillance des services de LLM locaux.
## LRS
L'outil peut être interfacé avec un LRS compatible XAPI (testé avec Veracity ; https://lrs.io/home ; https://lrs.io/home/download).
L'activation et les paramètres du service sont à renseigner dans la section LRS des paramètres.
## Aide de l'IA
Il est possible d'activer ou désactiver l'aide par IA, ainsi que de choisir le service d'IA générative à utiliser. L'appel au service d'IA générative n'est fait que dans le mode "verbose" (option "-v").
Pour bénéficier de l'aide de l'IA, il faut, par exemple, installer Ollama (https://ollama.com/), récupérer le modèle de langage "codellama:7b" puis lancer le serveur Ollama.
Soit :
```shell
ollama pull codellama:7b
ollama serve
```
Then enable it in the tool:
```shell
admin-sbs --set IA.service=ollama IA.modele=codellama:7b IA.mode=on
```
To disable the AI:
```shell
admin-sbs --set IA.mode=off
```
The AI service has a cache so the same request is not sent to the generative AI more than once. Cached entries have a limited lifetime; the "duree-cache" setting sets this duration in days.
The available generative AI services are:
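The day-granularity expiry described above can be sketched as a small TTL cache (illustrative only; not querycraft's implementation — the class and method names are hypothetical):

```python
# A minimal TTL cache keyed by prompt; entries older than
# `duree_cache_days` days are treated as missing.
import time

class TTLCache:
    def __init__(self, duree_cache_days=60):
        self.ttl = duree_cache_days * 86_400  # days -> seconds
        self.store = {}  # prompt -> (timestamp, answer)

    def get(self, prompt):
        entry = self.store.get(prompt)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]
        return None  # missing or expired

    def put(self, prompt, answer):
        self.store[prompt] = (time.time(), answer)

cache = TTLCache(duree_cache_days=60)
cache.put("explain error 1064", "Syntax error near 'FORM'")
print(cache.get("explain error 1064"))
```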
- **ollama**: local Ollama service
  - Configuration: `IA.service=ollama IA.modele=codellama:7b IA.url=None IA.api-key=None`
  - The API targets http://localhost:11434
- **ollama_cloud**: Ollama cloud service
  - Configuration: `IA.service=ollama_cloud IA.modele=llama3 IA.url=None IA.api-key=xxxxxxx`
- **lms**: LM Studio (local)
  - Configuration: `IA.service=lms IA.modele=mistralai/mistral-nemo-instruct-2407 IA.url=None IA.api-key=None`
  - The local API is discovered automatically by the application
- **poe**: Poe.com service
  - Configuration: `IA.service=poe IA.modele=gpt-4.1-nano IA.url=None IA.api-key=xxxxxxx`
  - The API targets https://api.poe.com/v1
- **openai**: OpenAI API
  - Configuration: `IA.service=openai IA.modele=gpt-3.5-turbo IA.url=None IA.api-key=xxxxxxx`
  - The API targets https://api.openai.com/v1/chat/completions
- **google**: Google Gemini API
  - Configuration: `IA.service=google IA.modele=gemini-2.0-flash-exp IA.url=None IA.api-key=xxxxxxx`
  - The API targets https://generativelanguage.googleapis.com/v1beta/openai/
- **generic**: generic OpenAI-compatible service
  - Configuration: `IA.service=generic IA.modele=gpt-3.5-turbo IA.url=https://xxxxx IA.api-key=xxxxxxx`
  - For any service compatible with the OpenAI API
The generative AI service is called in three situations:
- when the DBMS raises an error, to help the pupil or student understand the mistake;
- when describing the database (option "--describe"), to explain the database;
- when executing a query, to explain the structure of the query.
Note that the generative AI service does not guarantee the validity of its help. Pupils and students should check the answer and consult their teachers if necessary.
## Exercise management
Since version 0.0.65, QueryCraft provides exercise management through the command:
```shell
exos-sbs -h
usage: exos-sbs [-h] {create-ex,delete-ex,add-q,delete-q,show-ex} ...
Gestion d'exercices et questions.
positional arguments:
{create-ex,delete-ex,add-q,delete-q,show-ex}
create-ex Créer un exercice
delete-ex Supprimer un exercice
add-q Ajouter une question de type I->R à un exercice
delete-q Supprimer une question
show-ex Afficher un exercice
options:
-h, --help show this help message and exit
```
### Creating an exercise
```shell
% exos-sbs create-ex -h
usage: exos-sbs create-ex [-h] code
positional arguments:
code Code de l'exercice
options:
-h, --help show this help message and exit
```
### Adding a question to an exercise
```shell
exos-sbs add-q -h
usage: exos-sbs add-q [-h] code numero requete intention
positional arguments:
code Code de l'exercice
numero Numéro de la question
requete Requête SQL
intention Intention de la requête
options:
-h, --help show this help message and exit
```
NB: for an explanation of the "intention" field, see article [1].
### Removing a question from an exercise
```shell
% exos-sbs delete-q -h
usage: exos-sbs delete-q [-h] code numero
positional arguments:
code Code de l'exercice
numero Numéro de la question
options:
-h, --help show this help message and exit
```
### Deleting an exercise
```shell
% exos-sbs delete-ex -h
usage: exos-sbs delete-ex [-h] code
positional arguments:
code Code de l'exercice
options:
-h, --help show this help message and exit
```
### Displaying an exercise
```shell
exos-sbs show-ex -h
usage: exos-sbs show-ex [-h] code
positional arguments:
code Code de l'exercice
options:
-h, --help show this help message and exit
```
### Running an exercise
```shell
% sqlite-sbs -d cours.db -s 'SELECT m.codemat, titre FROM matieres m left join notes n on m.codemat = n.codemat inner join etudiants using (noetu) group by m.codemat, titre having count(*) > 1 ;' -v -e exos1 -q q1 -nsbs
```
The teacher's help is used if 'Autre.aide = on' is set in the configuration.
## Usage examples
### Example 1: simple query with SQLite
```shell
% sqlite-sbs -d cours.db -s "SELECT * FROM etudiants WHERE age > 20"
```
### Example 2: PostgreSQL query in verbose mode
```shell
% pgsql-sbs -d mydb -u postgres -s "SELECT nom, prenom FROM users WHERE active = true" -v
```
### Example 3: display the database schema
```shell
% sqlite-sbs -d cours.db -b
```
### Example 4: run a query from a file
```shell
% mysql-sbs -d test -u root -f requetes.sql
```
### Example 5: using the AI to explain errors
```shell
# Enable the AI
% admin-sbs --set IA.mode=on IA.service=ollama IA.modele=codellama:7b
# Run a query with AI assistance
% sqlite-sbs -d cours.db -s "SELECT * FROM table_inexistante" -v
```
## Research articles and talks
1- Emmanuel Desmontils, Laura Monceaux. **Enseigner SQL en NSI**. Workshop "Apprendre la Pensée Informatique de la Maternelle à l'Université", at the Environnements Informatiques pour l'Apprentissage Humain (EIAH) conference, June 2023, Brest, France. pp. 17-24.
https://hal.science/hal-04144210
https://apimu.gitlabpages.inria.fr/site/ateliers/pdf-apimu23/APIMUEIAH_2023_paper_3.pdf
2- Emmanuel Desmontils. Enseigner SQL en NSI : typologie et cas de la jointure. Journée des enseignants de SNT et de NSI 2024, Académie de la Réunion and IREMI de La Réunion, Dec 2024, Saint-Denis (La Réunion), France.
https://hal.science/hal-05030037v1
## Generating the documentation
```shell
pdoc3 --html --force -o doc querycraft
```
## Acknowledgements
- Wiktoria SLIWINSKA, ERASMUS undergraduate computer science student at Université de Nantes in 2023-2024, for her help designing the initial POC.
- Baptiste GIRARD, undergraduate computer science student at Université de Nantes in 2024-2025, for his help making the tool more reliable.
## Other sites
On PyPI: https://pypi.org/project/querycraft/
HAL (for citing in a publication): https://hal.science/hal-04964895
## License
© E. Desmontils, Nantes Université, 2024-2025
This software is distributed under the GPLv3 license.
| text/markdown | Emmanuel Desmontils | emmanuel.desmontils@univ-nantes.fr | Emmanuel Desmontils | emmanuel.desmontils@univ-nantes.fr | GPL V3 | SQL Step-By-Step Query Database LLM IA | [
"Topic :: Education",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [
"ALL"
] | https://gitlab.univ-nantes.fr/ls2n-didactique/querycraft | null | >=3.11.0 | [] | [] | [] | [
"argparse>=1.4.0",
"openai>=2.9.0",
"tincan>=1.0.0",
"SQLAlchemy>=2.0.40",
"jinja2>=3.1.6",
"mysql-connector-python>=9.0.0",
"lmstudio>=1.5.0",
"psycopg2-binary==2.9.10",
"ollama>=0.4.7",
"keyboard>=0.13.5",
"google-genai>=1.61.0",
"markdown>=3.5.0",
"sqlglot==25.33.0",
"colorama>=0.4.6",
... | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.4 | 2026-02-18T22:42:35.458866 | querycraft-1.5.5.tar.gz | 121,870 | 8b/04/5081c41f1f0e9bd98bb9578ada9a7f12aa7744e505531ae4c5617d70315b/querycraft-1.5.5.tar.gz | source | sdist | null | false | 7500ff6c35fd27d4c2d2c149998d2c19 | 8a56e70bf226ef2b53ec76e2f0f192770854cabc8ba21666374c941a9a1eab5c | 8b045081c41f1f0e9bd98bb9578ada9a7f12aa7744e505531ae4c5617d70315b | null | [
"LICENSE",
"AUTHORS"
] | 247 |
2.4 | open-dread-rando-exlaunch | 1.2.0 | Exlaunch binary files for open-dread-rando. | # Dread Remote Lua
A further modification of Dread depackager that makes the game listen on a socket and run any Lua code sent to it.
# Original readme
# dread depackager
A modification for Metroid: Dread allowing redirection of files from within pkg files to loose files in RomFS.
# usage
Dread depackager expects a JSON file called "replacements.json" to be placed in the root of your mod's RomFS directory, and the subsdk9 and main.npdm files in the exefs directory.
replacements.json can have one of two structures, which is detected automatically.
format 1 example:
```json
{
"replacements" :
[
"file1/path/within/pkg",
"file2/path/within/pkg"
]
}
```
With format 1, when the game tries to open `file1/path/within/pkg` from within a pkg, the request is redirected to `rom:/file1/path/within/pkg`, i.e. the same path within RomFS.
format 2 example:
```json
{
"replacements" :
[
{ "file1/path/within/pkg" : "rom:/mymod/file1" },
{ "file2/path/within/pkg" : "rom:/mymod/file2" }
]
}
```
With format 2, an arbitrary RomFS path can be assigned to any pkg file path, allowing more flexible organization of the replaced files in the finished mod.
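For scripted mod builds, a format-2 replacements.json can be emitted from Python. This is just a sketch using the placeholder paths from the examples above:

```python
# Build the format-2 structure and serialize it; write the result to
# replacements.json at the root of your mod's RomFS directory.
import json

replacements = {
    "replacements": [
        {"file1/path/within/pkg": "rom:/mymod/file1"},
        {"file2/path/within/pkg": "rom:/mymod/file2"},
    ]
}

text = json.dumps(replacements, indent=4)
print(text)
```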
# How it works
Dread depackager uses the file paths listed in replacements.json to selectively replace paths in the game's path-to-CRC conversion code.
All file paths first pass through this function; by hooking it and replacing the string, the depackager can selectively redirect file paths into RomFS.
dread depackager uses a few libraries:
- exlaunch, a code injection framework for switch executables. Its original readme can be found below
- cJSON, a json parsing library written in C. Dread depackager uses a slightly modified version of this library.
# Original exlaunch readme
## exlaunch
A framework for injecting C/C++ code into Nintendo Switch applications/applet/sysmodules.
## Note
This project is a work in progress. If you have issues, reach out to Shadów#1337 on Discord.
## Credit
- Atmosphère: A great reference and guide.
- oss-rtld: Included for (pending) interop with rtld in applications (License [here](https://github.com/shadowninja108/exlaunch/blob/main/source/lib/reloc/rtld/LICENSE.txt)).
| text/markdown | null | null | null | null | null | null | [
"License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/randovania/open-dread-rando-exlaunch"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:42:08.961428 | open_dread_rando_exlaunch-1.2.0.tar.gz | 390,758 | 43/d7/beca22eb7f03dbc7176b56af6fefdf60fa4faf64d9dfe4d41be8f5f62bdb/open_dread_rando_exlaunch-1.2.0.tar.gz | source | sdist | null | false | cebe891cbb43f8d41c55cbe0f2e51281 | b924d67c99a5614367229822d20865b8c883640c8c8e8cbd2a1a4d6c9a57cdc2 | 43d7beca22eb7f03dbc7176b56af6fefdf60fa4faf64d9dfe4d41be8f5f62bdb | null | [
"LICENSE"
] | 1,187 |
2.4 | cryptoserve | 1.4.1 | CryptoServe SDK - Zero-config cryptographic operations with auto-registration and local key caching | # CryptoServe SDK

Zero-config cryptographic operations with managed keys and auto-registration.
## Installation
```bash
pip install cryptoserve
```
## Quick Start (Recommended)
```bash
# One-time login (stores credentials locally)
cryptoserve login
```
```python
from cryptoserve import CryptoServe
# Initialize - auto-registers your app on first use
crypto = CryptoServe(
app_name="my-service",
team="platform",
environment="development"
)
# Encrypt/Decrypt
encrypted = crypto.encrypt(b"sensitive data", context="user-pii")
decrypted = crypto.decrypt(encrypted, context="user-pii")
# Sign/Verify
signature = crypto.sign(b"document", key_id="signing-key")
is_valid = crypto.verify_signature(b"document", signature, key_id="signing-key")
# Hash and MAC
hash_hex = crypto.hash(b"data", algorithm="sha256")
mac_hex = crypto.mac(b"message", key=secret_key, algorithm="hmac-sha256")
```
## Local Mode (No Server)
Run the full SDK API without a server. All operations happen locally using a password or master key.
```python
from cryptoserve import CryptoServe
# Initialize with a password (deterministic key derivation)
crypto = CryptoServe.local(password="my-secret-password")
# Same API as server mode
encrypted = crypto.encrypt(b"sensitive data", context="user-pii")
decrypted = crypto.decrypt(encrypted, context="user-pii")
# String helpers
encoded = crypto.encrypt_string("PII data", context="user-pii")
text = crypto.decrypt_string(encoded, context="user-pii")
# JSON
crypto.encrypt_json({"email": "user@example.com"}, context="user-pii")
# Hash and MAC work locally too
hash_hex = crypto.hash(b"data")
```
Two instances with the same password can decrypt each other's data. Different contexts derive different keys, providing isolation.
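This per-context derivation can be sketched with the standard library alone. The helper below is illustrative only, not the SDK's actual derivation (the fixed salt exists purely to make the demo deterministic): a password-derived master key plus an HMAC per context gives the same two properties described above.

```python
import hashlib
import hmac

def derive_context_key(password: str, context: str) -> bytes:
    """Illustrative sketch (not the SDK's internals): derive a
    deterministic per-context subkey from a password."""
    # Master key from the password (fixed demo salt for determinism)
    master = hashlib.pbkdf2_hmac("sha256", password.encode(), b"demo-salt", 100_000)
    # Each context gets its own subkey via HMAC over the master key
    return hmac.new(master, context.encode(), hashlib.sha256).digest()

# Same password + context -> same key (cross-instance decryption works)
assert derive_context_key("pw", "user-pii") == derive_context_key("pw", "user-pii")
# Different contexts -> different keys (isolation)
assert derive_context_key("pw", "user-pii") != derive_context_key("pw", "billing")
```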
## CryptoServe Class
The `CryptoServe` class provides:
| Method | Description |
|--------|-------------|
| `encrypt(plaintext, context)` | Encrypt binary data |
| `decrypt(ciphertext, context)` | Decrypt binary data |
| `encrypt_string(text, context)` | Encrypt string (returns base64) |
| `decrypt_string(ciphertext, context)` | Decrypt to string |
| `encrypt_json(obj, context)` | Encrypt JSON object |
| `decrypt_json(ciphertext, context)` | Decrypt to JSON |
| `sign(data, key_id)` | Create digital signature |
| `verify_signature(data, signature, key_id)` | Verify signature |
| `hash(data, algorithm)` | Compute cryptographic hash |
| `mac(data, key, algorithm)` | Compute MAC |
| `health_check()` | Verify connection |
| `cache_stats()` | Get cache performance stats |
| `invalidate_cache(context)` | Clear cached keys |
| `local(password=..., master_key=...)` | Create local-mode instance (class method) |
| `migrate_from_easy(ciphertext, password, target, context)` | Migrate easy-blob data (static method) |
## Performance Features
CryptoServe SDK includes built-in performance optimizations:
### Local Key Caching
Keys are cached locally to reduce network round-trips:
| Metric | Value |
|--------|-------|
| Server round-trip | ~90ms |
| Cached operation | ~0.3ms avg |
| Min latency | 0.009ms |
| **Speedup** | **~250x** |
| Cache hit rate | 90%+ (after warmup) |
```python
from cryptoserve import CryptoServe
# Enable caching (default: enabled)
crypto = CryptoServe(
    app_name="my-service",
    team="platform",
    enable_cache=True,  # Default: True
    cache_ttl=300.0,    # 5 minutes (default)
    cache_size=100,     # Max cached keys (default)
)
# First call fetches key from server and caches it
encrypted = crypto.encrypt(b"data", context="user-pii") # ~90ms
# Subsequent calls use cached key (local AES-256-GCM)
encrypted = crypto.encrypt(b"more data", context="user-pii") # ~0.3ms
```
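The caching behavior above can be pictured as a small TTL- and size-bounded map keyed by context. This is an illustrative stand-in, not the SDK's actual cache implementation:

```python
import time

class TTLKeyCache:
    """Minimal sketch of a TTL + size bounded key cache (illustrative only)."""

    def __init__(self, ttl: float = 300.0, max_size: int = 100):
        self.ttl, self.max_size = ttl, max_size
        self._store: dict[str, tuple[bytes, float]] = {}

    def get(self, context: str):
        entry = self._store.get(context)
        if entry is None:
            return None  # miss: caller fetches the key from the server
        key, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[context]  # expired entry
            return None
        return key

    def put(self, context: str, key: bytes):
        if len(self._store) >= self.max_size:
            # Evict the oldest entry to stay within max_size
            oldest = min(self._store, key=lambda c: self._store[c][1])
            del self._store[oldest]
        self._store[context] = (key, time.monotonic())

cache = TTLKeyCache(ttl=0.05, max_size=2)
cache.put("user-pii", b"k1")
assert cache.get("user-pii") == b"k1"   # hit: no server round-trip
time.sleep(0.06)
assert cache.get("user-pii") is None    # expired: would refetch
```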
### Cache Statistics
Monitor cache performance:
```python
stats = crypto.cache_stats()
print(f"Hit rate: {stats['hit_rate']:.1%}")
print(f"Hits: {stats['hits']}, Misses: {stats['misses']}")
print(f"Cache size: {stats['size']}")
```
### Cache Invalidation
Invalidate cached keys (e.g., after key rotation):
```python
# Invalidate specific context
crypto.invalidate_cache("user-pii")
# Invalidate all cached keys
crypto.invalidate_cache()
```
## Health Check
```python
from cryptoserve import CryptoServe
crypto = CryptoServe(app_name="my-service")
if crypto.health_check():
    print("Connected!")
else:
    print("Connection failed")
```
## FastAPI Integration
```python
from cryptoserve import CryptoServe
from cryptoserve.fastapi import configure, EncryptedStr
from pydantic import BaseModel
# Configure once at startup
crypto = CryptoServe(app_name="my-api", team="platform")
configure(crypto)
class User(BaseModel):
    name: str
    email: EncryptedStr["user-pii"]  # Automatically encrypted
```
## SQLAlchemy Integration
```python
from cryptoserve import CryptoServe
from cryptoserve.fastapi import configure, EncryptedString
from sqlalchemy import Column, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# Configure once at startup
crypto = CryptoServe(app_name="my-api", team="platform")
configure(crypto)

class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    email = Column(EncryptedString(context="user-pii"))
```
## Auto-Protect (Third-Party Libraries)
```python
from cryptoserve import auto_protect

key = b"your-encryption-key-material"  # placeholder: supply your own key
auto_protect(encryption_key=key)
# Now all outbound requests are automatically protected
import requests
requests.post(url, json={"email": "user@example.com"}) # Auto-encrypted
```
## CLI
```bash
# Interactive context wizard
cryptoserve wizard
# Verify SDK health
cryptoserve verify
# Show identity info
cryptoserve info
# List encryption contexts
cryptoserve contexts
```
### Offline Tools (No Server Required)
```bash
# Encrypt/decrypt strings
cryptoserve encrypt "sensitive data" --password my-secret
cryptoserve decrypt "<base64>" --password my-secret
# Encrypt/decrypt files
cryptoserve encrypt --file report.pdf --output report.enc --password my-secret
cryptoserve decrypt --file report.enc --output report.pdf --password my-secret
# Hash a password (prompts for input if no argument)
cryptoserve hash-password
cryptoserve hash-password "my-password" --algo pbkdf2
# Create a JWT token
cryptoserve token --key my-secret-key-1234 --payload '{"sub":"user-1"}' --expires 3600
```
## Package Architecture
CryptoServe uses a modular architecture for flexibility:
| Package | Purpose | Install |
|---------|---------|---------|
| `cryptoserve` | Full SDK with managed keys | `pip install cryptoserve` |
| `cryptoserve-core` | Pure crypto primitives | `pip install cryptoserve-core` |
| `cryptoserve-client` | API client only | `pip install cryptoserve-client` |
| `cryptoserve-auto` | Auto-protect libraries | `pip install cryptoserve-auto` |
**Use cases:**
- **Most users**: Install `cryptoserve` for the full experience
- **Bring your own keys**: Install `cryptoserve-core` only
- **Custom integration**: Install `cryptoserve-client` for API access
- **Dependency protection**: Add `cryptoserve-auto` for automatic protection
## Error Handling
```python
from cryptoserve import (
    CryptoServe,
    AuthenticationError,
    AuthorizationError,
    ContextNotFoundError,
)

crypto = CryptoServe(app_name="my-app", team="platform")
try:
    ciphertext = crypto.encrypt(b"sensitive data", context="user-pii")
except AuthenticationError:
    # Token expired or invalid
    pass
except AuthorizationError:
    # Not allowed to use this context
    pass
except ContextNotFoundError:
    # Context doesn't exist
    pass
```
## License
Apache 2.0
| text/markdown | null | CryptoServe <hello@cryptoserve.dev> | null | null | Apache-2.0 | cryptography, encryption, security, sdk, performance, caching | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming La... | [] | null | null | >=3.9 | [] | [] | [] | [
"cryptoserve-core>=0.4.0",
"cryptoserve-client>=0.3.0",
"cryptography>=41.0.0",
"pyyaml>=6.0",
"requests>=2.28.0",
"pydantic>=2.0.0; extra == \"fastapi\"",
"sqlalchemy>=2.0.0; extra == \"sqlalchemy\"",
"cryptoserve-auto>=0.3.0; extra == \"auto\"",
"pydantic>=2.0.0; extra == \"all\"",
"sqlalchemy>=... | [] | [] | [] | [
"Homepage, https://github.com/ecolibria/cryptoserve",
"Documentation, https://docs.cryptoserve.dev",
"Repository, https://github.com/ecolibria/cryptoserve"
] | twine/6.1.0 CPython/3.12.9 | 2026-02-18T22:41:56.935924 | cryptoserve-1.4.1.tar.gz | 99,633 | 39/fb/67877aeea53bc1673b9fb09f310dd160e482f5c54e5f12adf65b7972a71b/cryptoserve-1.4.1.tar.gz | source | sdist | null | false | 811d44a63b5f1dc4948d8db1e7e6bf15 | 929dadf07ef2e9f3e863ae48febb6ce90e31ae086e5d3f78b6bc6a142385b10a | 39fb67877aeea53bc1673b9fb09f310dd160e482f5c54e5f12adf65b7972a71b | null | [
"LICENSE"
] | 269 |
2.4 | cryptoserve-core | 0.4.1 | CryptoServe Core - Pure cryptographic primitives | # cryptoserve-core

Pure cryptographic primitives for Python. Zero network dependencies, one `pip install`, production-ready defaults.
## Installation
```bash
pip install cryptoserve-core
```
## Quick Start
```python
import cryptoserve_core as crypto
# Encrypt a string with a password
encrypted = crypto.encrypt_string("sensitive data", password="my-secret")
decrypted = crypto.decrypt_string(encrypted, password="my-secret")
# Hash a password (scrypt, PHC format)
hashed = crypto.hash_password("user-password")
assert crypto.verify_password("user-password", hashed)
# Create a JWT token
token = crypto.create_token({"sub": "user-123"}, key=b"my-secret-key-1234567890")
claims = crypto.verify_token(token, key=b"my-secret-key-1234567890")
```
## Easy Encryption
Password-based encryption using PBKDF2 (600K iterations) + AES-256-GCM. Each call generates a fresh random salt and nonce.
```python
from cryptoserve_core import encrypt, decrypt, encrypt_string, decrypt_string
# Bytes
ciphertext = encrypt(b"secret bytes", password="my-password")
plaintext = decrypt(ciphertext, password="my-password")
# Strings (returns URL-safe base64)
encoded = encrypt_string("secret text", password="my-password")
text = decrypt_string(encoded, password="my-password")
```
### File Encryption
Files under 64KB use a single encrypted blob. Larger files use chunked encryption for memory efficiency.
```python
from cryptoserve_core import encrypt_file, decrypt_file
encrypt_file("report.pdf", "report.pdf.enc", password="file-password")
decrypt_file("report.pdf.enc", "report.pdf", password="file-password")
```
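The size-based dispatch described above (single blob below 64 KB, chunked above) can be pictured as a simple read loop that only ever holds one chunk in memory. This is an illustrative sketch, not the package's code; `process_chunks` and its `transform` callback are hypothetical names:

```python
import io

CHUNK_SIZE = 64 * 1024  # 64 KiB, matching the threshold described above

def process_chunks(stream: io.BufferedIOBase, transform) -> bytes:
    """Apply `transform` (e.g. an encrypt step) to each 64 KiB chunk,
    so only one chunk is resident in memory at a time."""
    out = bytearray()
    while chunk := stream.read(CHUNK_SIZE):
        out += transform(chunk)
    return bytes(out)

data = b"x" * (CHUNK_SIZE * 2 + 10)                     # larger than one chunk
result = process_chunks(io.BytesIO(data), lambda c: c)  # identity "cipher"
assert result == data
```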
## Password Hashing
Secure password hashing with scrypt or PBKDF2. Output follows the PHC (Password Hashing Competition) string format for safe database storage.
```python
from cryptoserve_core import hash_password, verify_password, check_strength
# Hash (default: scrypt)
hashed = hash_password("user-password")
# Output: $scrypt$n=16384,r=8,p=1$<salt>$<hash>
# Hash with PBKDF2
hashed = hash_password("user-password", algorithm="pbkdf2")
# Output: $pbkdf2-sha256$i=600000$<salt>$<hash>
# Verify (constant-time comparison)
assert verify_password("user-password", hashed)
# Strength check (0-4 score)
result = check_strength("P@ssw0rd!2026")
print(f"Score: {result.score}/4 ({result.label})")
print(f"Feedback: {result.feedback}")
```
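The PBKDF2 variant of the PHC format shown above can be reproduced with the standard library alone. This sketch uses hypothetical function names (not this package's API) to show the hash/verify round-trip:

```python
import base64
import hashlib
import hmac
import os

def phc_pbkdf2_hash(password: str, iterations: int = 600_000) -> str:
    """Produce a PHC-style string: $pbkdf2-sha256$i=<n>$<salt>$<hash>."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    b64 = lambda b: base64.b64encode(b).decode().rstrip("=")
    return f"$pbkdf2-sha256$i={iterations}${b64(salt)}${b64(digest)}"

def phc_pbkdf2_verify(password: str, phc: str) -> bool:
    _, _, params, salt_b64, hash_b64 = phc.split("$")
    iterations = int(params.split("=")[1])
    pad = lambda s: s + "=" * (-len(s) % 4)  # restore stripped padding
    salt = base64.b64decode(pad(salt_b64))
    expected = base64.b64decode(pad(hash_b64))
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

h = phc_pbkdf2_hash("user-password", iterations=10_000)  # low count for the demo
assert phc_pbkdf2_verify("user-password", h)
assert not phc_pbkdf2_verify("wrong-password", h)
```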
## JWT Tokens
Minimal JWT implementation using HS256 (HMAC-SHA256). No pyjwt dependency.
```python
from cryptoserve_core import create_token, verify_token, decode_token
key = b"my-secret-key-minimum-16-bytes"
# Create with automatic iat/exp claims
token = create_token({"sub": "user-123", "role": "admin"}, key=key, expires_in=3600)
# Verify signature and expiry
claims = verify_token(token, key=key)
# Decode without verification (inspect claims)
claims = decode_token(token)
```
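HS256 signing as described needs nothing beyond the standard library. A minimal sketch of creating and verifying such a token (illustrative helper names, not this package's implementation; no `iat`/`exp` handling):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def hs256_sign(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def hs256_verify(token: str, key: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    pad = "=" * (-len(payload) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload + pad))

key = b"my-secret-key-minimum-16-bytes"
token = hs256_sign({"sub": "user-123"}, key)
assert hs256_verify(token, key) == {"sub": "user-123"}
```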
## Low-Level API
For custom key management and direct cipher access:
```python
from cryptoserve_core import AESGCMCipher, ChaCha20Cipher, KeyDerivation
# Generate a key
key = KeyDerivation.generate_key(256)
# AES-256-GCM encryption
cipher = AESGCMCipher(key)
ciphertext, nonce = cipher.encrypt(b"sensitive data")
plaintext = cipher.decrypt(ciphertext, nonce)
# ChaCha20-Poly1305 encryption
cipher = ChaCha20Cipher(key)
ciphertext, nonce = cipher.encrypt(b"sensitive data")
plaintext = cipher.decrypt(ciphertext, nonce)
# Key derivation from password
key, salt = KeyDerivation.from_password("my-password", bits=256, iterations=600_000)
```
## Supported Algorithms
| Algorithm | Security | Use Case |
|-----------|----------|----------|
| AES-256-GCM | 256-bit | General purpose, hardware accelerated |
| ChaCha20-Poly1305 | 256-bit | Mobile, real-time applications |
| scrypt | N=16384, r=8, p=1 | Password hashing (interactive) |
| PBKDF2-SHA256 | 600K iterations | Password hashing (compatibility) |
| HS256 | HMAC-SHA256 | JWT token signing |
## Why cryptoserve-core?
- **Zero config** - Production-safe defaults for every algorithm
- **No server required** - Works entirely offline
- **One dependency** - Only `cryptography` (no pyjwt, bcrypt, argon2-cffi)
- **Auditable** - Small, focused codebase with continuous security validation
- **Standards compliant** - NIST-approved algorithms, PHC hash format
## License
Apache 2.0
| text/markdown | null | CryptoServe <hello@cryptoserve.dev> | null | null | Apache-2.0 | cryptography, encryption, security, aes, chacha20, password-hashing, jwt, file-encryption | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming La... | [] | null | null | >=3.9 | [] | [] | [] | [
"cryptography>=41.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ecolibria/cryptoserve",
"Documentation, https://docs.cryptoserve.dev",
"Repository, https://github.com/ecolibria/cryptoserve"
] | twine/6.1.0 CPython/3.12.9 | 2026-02-18T22:41:23.249493 | cryptoserve_core-0.4.1.tar.gz | 17,715 | 53/48/9b310d40015c8577940c5562c80c257a923cff04c78abddc096382749039/cryptoserve_core-0.4.1.tar.gz | source | sdist | null | false | b9b58611023b1b1a447cd9d1899713e0 | 9e2432f59af777ecdc7352a63ac11bf0fdd3f93f3ac82fc02c98a25c41bc5c69 | 53489b310d40015c8577940c5562c80c257a923cff04c78abddc096382749039 | null | [
"LICENSE"
] | 281 |
2.4 | tristero | 0.3.7 | Library for trading on Tristero | # Tristero
[](https://badge.fury.io/py/tristero)
[](https://pypi.org/project/tristero/)
This repository is home to Tristero's trading library.
### How it works
Tristero supports two primary swap mechanisms:
#### Permit2 Swaps (EVM-to-EVM)
- **Quote & Approve** - Request a quote and approve tokens via Permit2 (gasless approval)
- **Sign & Submit** - Sign an EIP-712 order and submit for execution
- **Monitor** - Track swap progress via WebSocket updates
#### Feather Swaps (UTXO-based)
- **Quote & Deposit** - Request a quote to receive a deposit address
- **Manual Transfer** - Send funds to the provided deposit address
- **Monitor** - Track swap completion via WebSocket updates
This library provides both high-level convenience functions and lower-level components for precise control.
### Installation
```bash
pip install tristero
```
### Environment Configuration
Tristero supports three environments: **PRODUCTION** (default), **STAGING**, and **LOCAL**.
Set the environment globally at startup:
```py
from tristero import set_config
set_config("STAGING") # all subsequent calls use the staging API
```
Or override per call:
```py
quote = await get_swap_quote(..., env="LOCAL")
```
Every user-facing function in the SDK accepts an optional `env` keyword argument.
### Quick Start
#### Spot Swap (quote, sign, submit)
```py
import asyncio
import json
import os
from eth_account import Account
from tristero import get_swap_quote, sign_and_submit, make_async_w3
async def main() -> None:
    private_key = os.getenv("TEST_ACCOUNT_PRIVKEY")
    if not private_key:
        raise RuntimeError("Set TEST_ACCOUNT_PRIVKEY")
    wallet = Account.from_key(private_key).address
    w3 = make_async_w3(os.getenv("ARB_RPC_URL", "https://arbitrum-one-rpc.publicnode.com"))
    # 1. Get a quote (USDC -> WETH on Arbitrum)
    quote = await get_swap_quote(
        wallet=wallet,
        src_chain=42161,
        src_token="0xaf88d065e77c8cC2239327C5EDb3A432268e5831",  # USDC
        dst_chain=42161,
        dst_token="0x82aF49447D8a07e3bd95BD0d56f35241523fBab1",  # WETH
        amount=1_000_000,  # 1 USDC (6 decimals)
    )
    print(json.dumps(quote, indent=2))
    # 2. Sign and submit (w3 required for Permit2 approval on source chain)
    result = await sign_and_submit(quote, private_key, w3=w3, wait=True, timeout=300)
    print(result)

asyncio.run(main())
```
#### Margin Position (quote, sign, submit)
```py
import asyncio
import json
import os
from eth_account import Account
from tristero import get_swap_quote, sign_and_submit
async def main() -> None:
    private_key = os.getenv("TEST_ACCOUNT_PRIVKEY")
    if not private_key:
        raise RuntimeError("Set TEST_ACCOUNT_PRIVKEY")
    wallet = Account.from_key(private_key).address
    # 1. Get a margin quote (2x leveraged USDC/WETH on Arbitrum)
    quote = await get_swap_quote(
        wallet=wallet,
        src_chain=42161,
        src_token="0xaf88d065e77c8cC2239327C5EDb3A432268e5831",  # USDC (collateral)
        dst_chain=42161,
        dst_token="0x82aF49447D8a07e3bd95BD0d56f35241523fBab1",  # WETH (base)
        amount=1_000_000,  # 1 USDC collateral (6 decimals)
        leverage=2,
    )
    print(json.dumps(quote, indent=2))
    # 2. Sign and submit (no w3 needed for margin)
    result = await sign_and_submit(quote, private_key, wait=True, timeout=120)
    print(result)

asyncio.run(main())
```
### More Examples
#### Spot Swap (direct execution)
```py
import os
import asyncio
from eth_account import Account
from tristero import ChainID, TokenSpec, execute_permit2_swap, make_async_w3
async def main() -> None:
    private_key = os.getenv("TEST_ACCOUNT_PRIVKEY")
    if not private_key:
        raise RuntimeError("Set TEST_ACCOUNT_PRIVKEY")
    account = Account.from_key(private_key)
    arbitrum_rpc = os.getenv("ARB_RPC_URL", "https://arbitrum-one-rpc.publicnode.com")
    w3 = make_async_w3(arbitrum_rpc)
    result = await execute_permit2_swap(
        w3=w3,
        account=account,
        src_t=TokenSpec(chain_id=ChainID(42161), token_address="0xaf88d065e77c8cC2239327C5EDb3A432268e5831"),  # USDC (Arbitrum)
        dst_t=TokenSpec(chain_id=ChainID(8453), token_address="0xfde4C96c8593536E31F229EA8f37b2ADa2699bb2"),  # USDT (Base)
        raw_amount=1_000_000,  # 1 USDC (6 decimals)
        timeout=300,
    )
    print(result)

asyncio.run(main())
```
#### Margin: Direct Open
```py
import asyncio
import os
from eth_account import Account
from tristero import open_margin_position
async def main() -> None:
    private_key = os.getenv("TEST_ACCOUNT_PRIVKEY", "")
    if not private_key:
        raise RuntimeError("Set TEST_ACCOUNT_PRIVKEY")
    wallet = Account.from_key(private_key).address
    result = await open_margin_position(
        private_key=private_key,
        chain_id="42161",
        wallet_address=wallet,
        quote_currency="0xaf88d065e77c8cC2239327C5EDb3A432268e5831",  # USDC
        base_currency="0x82aF49447D8a07e3bd95BD0d56f35241523fBab1",  # WETH
        leverage_ratio=2,
        collateral_amount="1000000",  # 1 USDC (6 decimals)
        wait_for_result=True,
        timeout=120,
    )
    print(result)

asyncio.run(main())
```
#### Margin: List Positions / Close Position
```py
import asyncio
import os
from eth_account import Account
from tristero import close_margin_position, list_margin_positions
async def main() -> None:
    private_key = os.getenv("TEST_ACCOUNT_PRIVKEY", "")
    if not private_key:
        raise RuntimeError("Set TEST_ACCOUNT_PRIVKEY")
    wallet = Account.from_key(private_key).address
    positions = await list_margin_positions(wallet)
    open_pos = next((p for p in positions if p.status == "open"), None)
    if not open_pos:
        raise RuntimeError("no open positions")
    result = await close_margin_position(
        private_key=private_key,
        chain_id="42161",
        position_id=open_pos.taker_token_id,
        escrow_contract=open_pos.escrow_address,
        authorized=open_pos.filler_address,
        cash_settle=False,
        fraction_bps=10_000,
        deadline_seconds=3600,
        wait_for_result=True,
        timeout=120,
    )
    print(result)

asyncio.run(main())
```
#### WebSocket Quote Streaming
`subscribe_quotes` opens a persistent WebSocket connection and delivers live quotes via an async callback (~500 ms updates).
If the callback is still running when a newer quote arrives, intermediate updates are **dropped** and only the latest quote is delivered once the callback finishes (latest-only pattern). This makes it safe to do slow work (e.g. sign + submit) inside the callback without worrying about duplicate executions.
**Simple: print every quote**
```py
import asyncio
from tristero import subscribe_quotes
async def main() -> None:
    async def on_quote(quote):
        print(f"dst_qty={quote['dst_token_quantity']} order_id={quote['order_id'][:16]}...")

    async def on_error(exc):
        print(f"Error: {exc}")

    async with await subscribe_quotes(
        wallet="0xYOUR_WALLET",
        src_chain=42161,
        src_token="0xaf88d065e77c8cC2239327C5EDb3A432268e5831",  # USDC
        dst_chain=1,
        dst_token="0xdAC17F958D2ee523a2206206994597C13D831ec7",  # USDT
        amount=1_000_000,
        on_quote=on_quote,
        on_error=on_error,
    ) as sub:
        await asyncio.sleep(10)  # stream for 10 seconds

asyncio.run(main())
```
**Advanced: sign and submit the first quote, then stop**
```py
import asyncio
import os
from eth_account import Account
from tristero import subscribe_quotes, sign_and_submit, make_async_w3
async def main() -> None:
    private_key = os.getenv("TEST_ACCOUNT_PRIVKEY", "")
    wallet = Account.from_key(private_key).address
    w3 = make_async_w3(os.getenv("ARB_RPC_URL", "https://arbitrum-one-rpc.publicnode.com"))
    done = asyncio.Event()

    async def on_quote(quote):
        if done.is_set():
            return
        quote["_type"] = "swap"
        result = await sign_and_submit(quote, private_key, w3=w3, wait=False)
        print(result)
        done.set()

    sub = await subscribe_quotes(
        wallet=wallet,
        src_chain=42161,
        src_token="0xaf88d065e77c8cC2239327C5EDb3A432268e5831",
        dst_chain=42161,
        dst_token="0x82aF49447D8a07e3bd95BD0d56f35241523fBab1",
        amount=1_000_000,
        on_quote=on_quote,
    )
    await done.wait()
    await sub.close()

asyncio.run(main())
```
**Limit order: wait for a target price, then submit**
```py
import asyncio
import os
from eth_account import Account
from tristero import subscribe_quotes, sign_and_submit, make_async_w3
async def main() -> None:
    private_key = os.getenv("TEST_ACCOUNT_PRIVKEY", "")
    wallet = Account.from_key(private_key).address
    w3 = make_async_w3(os.getenv("ARB_RPC_URL", "https://arbitrum-one-rpc.publicnode.com"))
    done = asyncio.Event()
    baseline: list[int] = []
    improvement_bps = 10  # submit when price is 10 bps better than first quote

    async def on_quote(quote):
        if done.is_set():
            return
        dst_qty = int(quote.get("dst_token_quantity", 0))
        if not baseline:
            baseline.append(dst_qty)
            print(f"Baseline: {dst_qty}")
            return
        threshold = baseline[0] * (10_000 + improvement_bps) / 10_000
        if dst_qty < threshold:
            print(f"dst_qty={dst_qty} (waiting for >= {threshold:.0f})")
            return
        # Threshold met: sign and submit THIS specific quote
        print(f"Target reached: {dst_qty} >= {threshold:.0f}, submitting!")
        quote["_type"] = "swap"
        result = await sign_and_submit(quote, private_key, w3=w3, wait=False)
        print(result)
        done.set()

    sub = await subscribe_quotes(
        wallet=wallet,
        src_chain=42161,
        src_token="0xaf88d065e77c8cC2239327C5EDb3A432268e5831",
        dst_chain=42161,
        dst_token="0x82aF49447D8a07e3bd95BD0d56f35241523fBab1",
        amount=1_000_000,
        on_quote=on_quote,
    )
    await done.wait()
    await sub.close()

asyncio.run(main())
```
The callback receives the exact quote that triggered the condition. Because of the
latest-only pattern, even if signing takes longer than 500 ms, that specific quote
is what gets signed; newer arrivals are coalesced into a single pending update and
don't cause duplicate executions.
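The latest-only delivery can be modeled with plain asyncio primitives: a one-slot "mailbox" where newer quotes overwrite any unconsumed one, and a slow consumer still only ever sees the newest value. This is an illustrative model (hypothetical `LatestOnly` class), not the SDK's implementation:

```py
import asyncio

class LatestOnly:
    """One-slot mailbox: writers overwrite, the reader gets the newest value."""

    def __init__(self):
        self._value = None
        self._event = asyncio.Event()

    def publish(self, value):
        self._value = value  # overwrite any unconsumed quote
        self._event.set()

    async def take(self):
        await self._event.wait()
        self._event.clear()
        return self._value

async def demo():
    box = LatestOnly()
    seen = []

    async def consumer():
        for _ in range(2):
            seen.append(await box.take())
            await asyncio.sleep(0.05)  # slow callback (e.g. sign + submit)

    task = asyncio.create_task(consumer())
    for i in range(5):  # quotes arrive faster than the callback finishes
        box.publish(i)
        await asyncio.sleep(0.02)
    await task
    return seen

seen = asyncio.run(demo())
# Intermediate quotes were dropped; each wakeup delivered only the latest one
assert len(seen) == 2 and seen[0] == 0 and seen[1] >= 1
```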
#### Feather: Start (get deposit address)
Feather swaps are deposit-based: you start an order to receive a `deposit_address`, send funds to it manually, then optionally wait for completion.
Submit only:
```py
import asyncio
from tristero import ChainID, TokenSpec, start_feather_swap
async def main() -> None:
    # Example: ETH (native) -> XMR (native)
    src_t = TokenSpec(chain_id=ChainID.ethereum, token_address="native")
    dst_t = TokenSpec(chain_id=ChainID.monero, token_address="native")
    # Replace with your own destination address on the destination chain.
    dst_addr = "YOUR_XMR_ADDRESS"
    swap = await start_feather_swap(
        src_t=src_t,
        dst_t=dst_t,
        dst_addr=dst_addr,
        raw_amount=100_000_000_000_000_000,  # 0.1 ETH in wei
    )
    order_id = (
        (swap.data or {}).get("id")
        or (swap.data or {}).get("order_id")
        or (swap.data or {}).get("orderId")
        or ""
    )
    print("order_id:", order_id)
    print("deposit_address:", swap.deposit_address)

asyncio.run(main())
```
Submit + wait (WebSocket):
```py
import asyncio
from tristero import ChainID, OrderType, TokenSpec, start_feather_swap, wait_for_completion
async def main() -> None:
    src_t = TokenSpec(chain_id=ChainID.ethereum, token_address="native")
    dst_t = TokenSpec(chain_id=ChainID.monero, token_address="native")
    dst_addr = "YOUR_XMR_ADDRESS"
    swap = await start_feather_swap(
        src_t=src_t,
        dst_t=dst_t,
        dst_addr=dst_addr,
        raw_amount=100_000_000_000_000_000,
    )
    order_id = (
        (swap.data or {}).get("id")
        or (swap.data or {}).get("order_id")
        or (swap.data or {}).get("orderId")
        or ""
    )
    if not order_id:
        raise RuntimeError(f"Feather swap response missing order id: {swap.data}")
    print("deposit_address:", swap.deposit_address)
    print("Waiting for completion...")
    completion = await wait_for_completion(order_id, order_type=OrderType.FEATHER)
    print(completion)

asyncio.run(main())
```
| text/markdown | null | pty1 <pty11@proton.me> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"certifi>=2023.7.22",
"eth-account>=0.8.0",
"glom>=25.12.0",
"httpx>=0.23.0",
"pydantic>=2.0.0",
"tenacity>=8.0.0",
"web3>=6.0.0",
"websockets>=10.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T22:40:35.909534 | tristero-0.3.7.tar.gz | 53,973 | 8e/38/9e0d2a1168192052d568c9a1eb55cd04c3b6539b07dbb11ea9d58dea7008/tristero-0.3.7.tar.gz | source | sdist | null | false | c266cdcea78bde6c665be7bb210cf2f4 | be7a1418760637876fe39457b486a867d426817304849c78e5639da225c9450d | 8e389e0d2a1168192052d568c9a1eb55cd04c3b6539b07dbb11ea9d58dea7008 | null | [
"LICENSE"
] | 241 |
2.1 | odoo-addon-l10n-br-cnab-structure | 16.0.3.1.1 | This module allows defining the structure for generating the CNAB file. Used to exchange information with Brazilian banks. | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
==============
CNAB Structure
==============
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:ab63535c4c881754038c67850d8330a06b643959af4d5d6f7f4b19036651ec1e
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fl10n--brazil-lightgray.png?logo=github
:target: https://github.com/OCA/l10n-brazil/tree/16.0/l10n_br_cnab_structure
:alt: OCA/l10n-brazil
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/l10n-brazil-16-0/l10n-brazil-16-0-l10n_br_cnab_structure
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/l10n-brazil&target_branch=16.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module adds functionality for implementing Brazilian banking
automation via CNAB file exchange.
**Table of contents**
.. contents::
:local:
Installation
============
Configuration
=============
Usage
=====
Configuration and usability: https://youtu.be/ljommELunlA
Known issues / Roadmap
======================
Changelog
=========
14.0.0.0.1 (2022)
-----------------
Module creation.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/l10n-brazil/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/l10n-brazil/issues/new?body=module:%20l10n_br_cnab_structure%0Aversion:%2016.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Engenere
* Escodoo
Contributors
------------
- `Engenere <https://engenere.one>`__:
- Antônio S. Pereira Neto <neto@engenere.one>
- Felipe Motter Pereira <felipe@engenere.one>
Other credits
-------------
The development of this module has been financially supported by:
- Escodoo - https://www.escodoo.com.br
- Engenere - https://engenere.one/
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-antoniospneto| image:: https://github.com/antoniospneto.png?size=40px
:target: https://github.com/antoniospneto
:alt: antoniospneto
.. |maintainer-felipemotter| image:: https://github.com/felipemotter.png?size=40px
:target: https://github.com/felipemotter
:alt: felipemotter
.. |maintainer-marcelsavegnago| image:: https://github.com/marcelsavegnago.png?size=40px
:target: https://github.com/marcelsavegnago
:alt: marcelsavegnago
.. |maintainer-kaynnan| image:: https://github.com/kaynnan.png?size=40px
:target: https://github.com/kaynnan
:alt: kaynnan
Current `maintainers <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-antoniospneto| |maintainer-felipemotter| |maintainer-marcelsavegnago| |maintainer-kaynnan|
This module is part of the `OCA/l10n-brazil <https://github.com/OCA/l10n-brazil/tree/16.0/l10n_br_cnab_structure>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Engenere, Escodoo, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 16.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/l10n-brazil | null | >=3.10 | [] | [] | [] | [
"odoo-addon-l10n_br_account_payment_order<16.1dev,>=16.0dev",
"odoo-addon-l10n_br_coa_generic<16.1dev,>=16.0dev",
"odoo<16.1dev,>=16.0a",
"pyyaml",
"unidecode"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T22:39:26.588528 | odoo_addon_l10n_br_cnab_structure-16.0.3.1.1-py3-none-any.whl | 191,729 | 95/52/3a82422252f640a7a1c45ae10ccac6ebd726ba3583ac05d12e71b7dc35aa/odoo_addon_l10n_br_cnab_structure-16.0.3.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 8e77086b4b0bfb95a40b0eb8ede036df | 7ad647e79a1445cd8063215a02c8726b65edef284254a519987ea825c53c77ca | 95523a82422252f640a7a1c45ae10ccac6ebd726ba3583ac05d12e71b7dc35aa | null | [] | 100 |
2.4 | isoring | 0.1.8 | A structure for data security and a cracking environment. | # isoring
A component from the terminated project, puissec.
A structure called the `IsoRingedChain` is meant to guard a big secret, composed of a sequence
of arbitrarily lengthed vectors (secrets).
Current version on pypi.org:
----------------------------
0.1.6
This project has peaked in development.
# What is a Secret?
Who knows, really?
In this computer program, a secret is represented by a finitely lengthed vector. The `Sec` structure contains this vector of
length `n`, as well as `k` additional vectors of the same dimension `n`. These `k+1` vectors are local optima in real-valued
`n`-space. Each of these vectors has an associated probability value, and the probability values add up to one. The
probability values can be arbitrary, meaning the actual secret (vector) of `Sec` may have any probability value in `[0, 1]`.
The design of `Sec` is based on the common machine-learning problem of choosing local optima over the best solution.
The `IsoRing` structure contains the secret, a vector in finite space, held in one `Sec` instance.
`IsoRing` also holds an additional `j` `Sec` instances, each in its own unique finite vector
dimension. These `j` instances serve as buffers against third-party acquisition of the actual `Sec` instance. In effect,
`IsoRing` has two primary layers of defense: the `j` `Sec` instances and the `r_j >= 1` alternative local optima to
the actual vector of each `Sec`.
At any point during a program run, the outward representative of an `IsoRing` is exactly one `Sec` instance, its
isomorphic representation (iso-repr).
For a third party to extract information from an `IsoRing`, it has to interact with
the `IsoRing`'s feedback function. For an `IsoRing` whose iso-repr has vector dimension `q`, the feedback function provides a
`q`-vector of distance scores. Distance scores are conventionally Euclidean point distances. However, there are
alternative feedback functions that provide distorted distance scores via a pseudo-random number generator.
NOTE:
In this open implementation of cracking simulations involving `IsoRing`, the `Cracker` does not consider the
feedback function vectors. Doing so adds a layer of complication that is better placed in programs that rely
on this project's code. Instead of interpreting the feedback vectors it receives every time it makes a guess on an
`IsoRing`, the `Cracker` uses background information during its cracking attempts.
An `IsoRingedChain` covers a sequence of arbitrary-length vectors (secrets) and is, in turn,
composed of one `IsoRing` per vector (secret).
# What is an Isomorphic Ringed Chain?
An `IsoRingedChain` guards a sequence of vectors (secrets). Any of the `IsoRing`s in an `IsoRingedChain` may be in
an isomorphic representation whose dimension differs from that of the actual secret. Additionally, every `IsoRing` in an
`IsoRingedChain` has dependencies and co-dependencies that govern third-party access to it. Dependencies are the
`IsoRing`s that must have been "cracked" by the third party before getting to it, and co-dependencies are the
`IsoRing`s that must be "cracked" alongside it. Cracking cannot proceed by an ordering of the `IsoRing`s that
violates the dependencies and co-dependencies linking these structures together in the `IsoRingedChain`.
# What is Cracking?
In this program, the `Cracker` structure is responsible for determining all the secrets of an `IsoRingedChain`. This
process of determination is "cracking". The `Cracker` must attempt cracking in the order specified by all of the
contained `IsoRing`s' dependencies and co-dependencies. Otherwise, the program halts the `Cracker` midway, resulting in
its failure.
`Cracker` is given background information, `BackgroundInfo`, on the target `IsoRingedChain`.
NOTE: There are deficits to this map design of background information. However, it was chosen because it avoids the
Curse of Dimensionality, one of a few major problems that prevented the complete development of the
program `puissec`, found at `github.com/changissnz/puissec`. Program `puissec` was the predecessor of this
program `isoring`. Program `isoring` is, in fact, a simpler treatment of only part of the problems in the
conceptualization of `puissec`.
`BackgroundInfo` has three main components.
1. Hypothesis map,
`<Isoring> identifier -> <Sec> index -> <HypStruct>`.
2. Suspected `IsoRing`-to-`Sec` map,
`<Isoring> identifier -> <Sec> index`.
3. Order of cracking, a sequence with each element
`{set of co-dependent IsoRing identifiers}`.
One deficit of this `BackgroundInfo` design is the hypothesis map: every `Sec` instance can have at
most one hypothesis on it, and each `HypStruct` is focused on exactly one local optimum.
`HypStruct` represents a hypothesis on a `Sec`, of vector dimension `k`, and has these attributes.
1. Suspected optima index `i` of the `Sec`.
2. Bounds (a `k x 2` matrix) suspected to contain optima `i`.
3. Hop size `h`, an integer, uniformly partitioning the bounds into `k^h` points.
4. Probability marker `P`, used to cross-reference with probability output value `P'` from a
cracked `Sec`.
The probability values in this program are meant to be reference values for a `Cracker`. If a `Cracker`
uses a `HypStruct` to crack a `Sec`, and the output probability from the `Sec` differs from that of the
`HypStruct` used, then the `Cracker` does not accept the cracked vector as the actual secret.
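As a sketch, a `HypStruct` with its probability cross-check might look like the following; the field names and the tolerance are my assumptions, not the library's API:

```python
from dataclasses import dataclass


@dataclass
class HypStruct:
    """Hypothesis on a Sec of vector dimension k (illustrative field names)."""
    optimum_index: int      # 1. suspected optima index i
    bounds: list            # 2. k x 2 bounds, as k (lo, hi) pairs
    hop_size: int           # 3. hop size h partitioning the bounds
    prob_marker: float      # 4. probability marker P

    def accepts(self, output_prob, tol=1e-6):
        # Cross-reference the cracked Sec's output probability P'
        # against the marker P; on mismatch, the cracked vector is
        # rejected as the actual secret.
        return abs(output_prob - self.prob_marker) <= tol


hyp = HypStruct(optimum_index=1, bounds=[(0.0, 1.0)], hop_size=3, prob_marker=0.5)
hyp.accepts(0.5)    # matching probability: vector accepted
hyp.accepts(0.37)   # mismatch: vector rejected
```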
NOTE: these are information games, broadly speaking.
For every `IsoRing` being targeted by a `Cracker`, the `Cracker` uses one `Crackling` at a time to
attempt to crack the `IsoRing` for the vector (secret) of the `IsoRing`'s suspected `Sec`. A
`Cracker` will use as many `Crackling`s, over re-cracking sessions, on an `IsoRing` as the program
permits, until it cracks the wanted local optimum from the suspected `Sec` of said `IsoRing`.
A structure called a `SearchSpaceIterator` is employed by every `Crackling` to execute brute-force
cracking attempts on an `IsoRing`'s isomorphic representation. `SearchSpaceIterator` outputs `k^h`
points that uniformly cover the input bounds of matrix `k x 2`.
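A brute-force iterator of that kind can be sketched as follows; here I assume the hop size means a fixed number of evenly spaced values per dimension, enumerated as a Cartesian product (the actual `SearchSpaceIterator` may partition the bounds differently):

```python
import itertools


def search_space_points(bounds, hop):
    """Yield grid points uniformly covering a k x 2 bounds matrix.

    bounds: k pairs (lo, hi), one per dimension.
    hop: number of evenly spaced values taken per dimension.
    """
    axes = []
    for lo, hi in bounds:
        step = (hi - lo) / max(hop - 1, 1)
        axes.append([lo + i * step for i in range(hop)])
    # Enumerate every combination of per-dimension values.
    yield from itertools.product(*axes)


points = list(search_space_points([(0.0, 1.0), (0.0, 2.0)], hop=3))
# 3 values per dimension over 2 dimensions: 9 candidate guesses
```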
If the `Cracker` does not have a `HypStruct` for the `IsoRing`'s isomorphic representation (the second
layer in the three-layer hypothesis map), the program halts the `Cracker` midway. If the `Crackling` is
successful in cracking the isomorphic representation, the `IsoRing` has to switch its isomorphic representation
to an uncracked `Sec`. If none remain uncracked, the `IsoRing` stops switching its
isomorphic representation, since the `Cracker` is no longer interested in cracking it.
If a `Crackling` does not produce any (vector, associated probability value) pair on a `Sec`, the `Cracker` cannot
proceed to attempt cracking any `IsoRing`s that depend on the `IsoRing` holding said `Sec` being
cracked. In most cases, the `Cracker` is then halted.
# The Brute-Force Environment
There Are Rules To The Game. The rules are enforced in the environment `BruteForceEnv`, where a `Cracker`
attempts to crack an `IsoRingedChain`. A `Cracker` is granted some arbitrary amount of energy, a real number.
If the energy falls to zero or below, the program halts the `Cracker`. A `Cracker` can use only `t`
`Crackling`s at once. If the `IsoRingedChain` contains sets of co-dependent `IsoRing`s larger
than `t`, successfully cracking the `IsoRingedChain` is impossible for the `Cracker`.
# What is Successful Cracking? Complete Execution or Complete Acquisition of Actual?
By the time the program halts a `Cracker`, the `Cracker` can know whether it completely executed the cracking of all
the `IsoRing`s in the `IsoRingedChain`. However, it is the `Cracker`'s `BackgroundInfo` that allows it to verify
what the actual vectors of the `IsoRingedChain` are.
# Additional Features
Generative methods, found in this program, can be used to produce the relevant data structures. These
methods come in handy, since there are a lot of variables to type up by hand otherwise.
# An Example On the User Interface

The project is also up on `pypi.org`. Install with
`pip install isoring`.
Here is an example of use.

| text/markdown | Richard Pham | Richard Pham <phamrichard45@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/Changissnz/isoring | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/changissnz/isoring",
"Issues, https://github.com/changissnz/isoring/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T22:37:06.015205 | isoring-0.1.8.tar.gz | 30,955 | 25/cf/8c81c49bc6630c84a91030dec3204d17a9d80fcad3e3a4327ab98e477184/isoring-0.1.8.tar.gz | source | sdist | null | false | 4ae3ee207cfaede2bbdf05e14ce91fe3 | d877d3acc426d397f398df81f7dcc0cecbdcb0b7b8d68e3788900eaf7ae63bf0 | 25cf8c81c49bc6630c84a91030dec3204d17a9d80fcad3e3a4327ab98e477184 | MIT | [
"LICENSE"
] | 243 |
2.4 | rowquery | 0.1.0 | A SQL-first Python library for querying and mapping data across multiple database backends. | # RowQuery
[](https://pypi.org/project/rowquery/)
[](https://pypi.org/project/rowquery/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/maksim-shevtsov/RowQuery/actions)
[](https://codecov.io/gh/maksim-shevtsov/RowQuery)
[](https://github.com/astral-sh/ruff)
A SQL-first Python library for querying and mapping data across multiple database backends.
## Features
- **Multi-Database Support**: SQLite, PostgreSQL, MySQL, Oracle with unified interface
- **SQL-First Design**: Load queries from `.sql` files organized in namespaces, or pass inline SQL directly
- **Inline SQL**: Execute raw SQL strings alongside registry keys — no registry required for ad-hoc queries
- **SQL Sanitization**: Configurable sanitizer strips comments, blocks statement stacking, and restricts SQL verbs
- **Flexible Mapping**: Map results to dataclasses, Pydantic models, or plain classes
- **Aggregate Mapping**: Reconstruct complex object graphs from joined queries (single-pass O(n))
- **Transaction Management**: Context manager support with automatic rollback
- **Migration Management**: Version-controlled database migrations
- **Repository Pattern**: Optional DDD-style repository base classes
- **Async Support**: Full async/await support for all operations
- **Type Safe**: Fully typed with mypy strict mode
## Installation
### Core (SQLite support included)
```bash
pip install rowquery
```
### With Database Drivers
```bash
pip install rowquery[postgres] # PostgreSQL
pip install rowquery[mysql] # MySQL
pip install rowquery[oracle] # Oracle
pip install rowquery[all] # All drivers
```
## Quick Start
### 1. Organize Your SQL Files
```
sql/
user/
get_by_id.sql
list_active.sql
order/
create.sql
```
### 2. Execute Queries
```python
from row_query import Engine, ConnectionConfig, SQLRegistry
config = ConnectionConfig(driver="sqlite", database="app.db")
registry = SQLRegistry("sql/")
engine = Engine.from_config(config, registry)
# Registry key lookup (dot-separated namespace)
user = engine.fetch_one("user.get_by_id", {"id": 1})
users = engine.fetch_all("user.list_active")
count = engine.fetch_scalar("user.count")
# Inline SQL — pass a raw SQL string directly
user = engine.fetch_one("SELECT * FROM users WHERE id = ?", 1)
users = engine.fetch_all("SELECT * FROM users WHERE active = ?", True)
count = engine.fetch_scalar("SELECT COUNT(*) FROM users")
rows = engine.fetch_all("SELECT * FROM users WHERE id IN (?, ?)", [1, 2])
```
### 3. Map to Models
```python
from dataclasses import dataclass
from row_query.mapping import ModelMapper
@dataclass
class User:
id: int
name: str
email: str
mapper = ModelMapper(User)
user = engine.fetch_one("user.get_by_id", {"id": 1}, mapper=mapper)
# Returns: User(id=1, name="Alice", email="alice@example.com")
```
### 4. Aggregate Mapping (Reconstruct Object Graphs)
```python
from row_query.mapping import aggregate, AggregateMapper
from dataclasses import dataclass
@dataclass
class Order:
id: int
total: float
@dataclass
class UserWithOrders:
id: int
name: str
email: str
orders: list[Order]
# Build mapping plan for complex object graph
plan = (
aggregate(UserWithOrders, prefix="user__")
.key("id")
.auto_fields()
.collection("orders", Order, prefix="order__", key="id")
.build()
)
# Execute joined query and map in single pass
users = engine.fetch_all("user.with_orders", mapper=AggregateMapper(plan))
```
### 5. Transactions
```python
# Use context manager for automatic rollback on error
with engine.transaction() as tx:
tx.execute("user.create", {"name": "Alice", "email": "alice@example.com"})
tx.execute("audit.log", {"action": "user_created"})
# Commits on exit, rolls back on exception
```
### 6. SQL Sanitization
```python
from row_query import Engine, SQLSanitizer
# Configure what inline SQL is permitted
sanitizer = SQLSanitizer(
strip_comments=True, # Remove -- and /* */ comments (default: True)
block_multiple_statements=True, # Reject "SELECT 1; DROP TABLE t" (default: True)
allowed_verbs=frozenset({"SELECT"}), # Only allow SELECT statements (default: None = any)
)
engine = Engine.from_config(config, registry, sanitizer=sanitizer)
# Inline SQL is sanitized before execution
users = engine.fetch_all("SELECT * FROM users -- get all") # comment stripped
engine.execute("DROP TABLE users") # raises SQLSanitizationError (verb not allowed)
engine.execute("SELECT 1; DROP TABLE t") # raises SQLSanitizationError (multiple statements)
# Registry queries are always trusted and never sanitized
users = engine.fetch_all("user.list_active") # no sanitization applied
```
### 7. Async Support
```python
from row_query import AsyncEngine, ConnectionConfig
config = ConnectionConfig(driver="sqlite", database="app.db")
engine = AsyncEngine.from_config(config, registry)
async def fetch_users():
# Registry key or inline SQL — both work
users = await engine.fetch_all("user.list_active")
users = await engine.fetch_all("SELECT * FROM users WHERE active = ?", True)
return users
# Async transactions
async with engine.transaction() as tx:
await tx.execute("user.create", {"name": "Bob"})
await tx.execute("INSERT INTO audit (action) VALUES (?)", "user_created")
```
## Documentation
- [Examples](./examples/) - Runnable code examples
- [CONTRIBUTING.md](./CONTRIBUTING.md) - Development guide
- [CHANGELOG.md](./CHANGELOG.md) - Version history
## Development
This project uses [uv](https://github.com/astral-sh/uv) for package management.
```bash
# Install dependencies
uv sync --extra all --extra dev
# Run tests
uv run pytest
# Run tests with coverage
uv run pytest --cov=row_query --cov-report=html
# Lint and format
uv run ruff check row_query/ tests/
uv run ruff format row_query/ tests/
# Type check
uv run mypy row_query/
```
## License
MIT License - see [LICENSE](LICENSE) for details.
## Contributing
Contributions welcome! Please read [CONTRIBUTING.md](CONTRIBUTING.md) first.
| text/markdown | null | Maksim Shevtsov <maksim.shautsou.dev@gmail.com> | null | null | MIT | database, ddd, mapping, query, repository, sql | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pyth... | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0.0",
"aiomysql>=0.2.0; extra == \"all\"",
"aiosqlite>=0.20.0; extra == \"all\"",
"mysql-connector-python>=8.0.0; extra == \"all\"",
"oracledb>=2.0.0; extra == \"all\"",
"psycopg[binary]>=3.1.0; extra == \"all\"",
"mypy>=1.10.0; extra == \"dev\"",
"pre-commit>=3.7.0; extra == \"dev\"",
... | [] | [] | [] | [
"Homepage, https://github.com/maksim-shevtsov/RowQuery",
"Documentation, https://github.com/maksim-shevtsov/RowQuery#readme",
"Repository, https://github.com/maksim-shevtsov/RowQuery.git",
"Issues, https://github.com/maksim-shevtsov/RowQuery/issues",
"Changelog, https://github.com/maksim-shevtsov/RowQuery/b... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:37:04.436239 | rowquery-0.1.0.tar.gz | 48,509 | a4/cc/c5673f461520022337449dc64f7c333f8b65032be0c42013caa58e2befe4/rowquery-0.1.0.tar.gz | source | sdist | null | false | df5b6b8c4043fd4c21f7edb3075fc074 | 7132b3e4b64ab03b1c63430fa14ae6bdae2d3000b53814e3bfb3b7c54ab5abe3 | a4ccc5673f461520022337449dc64f7c333f8b65032be0c42013caa58e2befe4 | null | [
"LICENSE"
] | 228 |
2.4 | datahugger-ng | 0.3.0 | python binding of datahugger -- rust tool for fetching data and metadata from DOI or URL. | # Datahugger API doc

This module provides a unified interface to **resolve**, **crawl**, and **download** datasets exposed over HTTP-like endpoints.
A key design goal is that dataset crawling can be consumed **both synchronously and asynchronously** using the same API.
## Overview
* Resolve a dataset from a URL
* Crawl its contents as a stream of entries (files or directories)
* Download and validate dataset contents using a blocking API backed by an async runtime
## Core Concepts
### `DirEntry`
Represents a directory in the dataset.
```python
@dataclass
class DirEntry(Entry):
path_crawl_rel: pathlib.Path
root_url: str
api_url: str
```
#### Fields
- `path_crawl_rel`
Path of the directory relative to the dataset root.
- `root_url`
Root URL of the dataset this directory belongs to.
- `api_url`
API endpoint used to query the directory contents.
### `FileEntry`
Represents a file in the dataset.
```python
@dataclass
class FileEntry(Entry):
path_crawl_rel: pathlib.Path
download_url: str
size: int | None
checksum: list[tuple[str, str]]
```
#### Fields
- `path_crawl_rel`
Path of the file relative to the dataset root.
- `download_url`
URL from which the file can be downloaded.
- `size`
File size in bytes, if known.
- `checksum`
List of checksum pairs `(algorithm, value)`
(e.g. `("sha256", "...")`).
## Iteration Model
### `SyncAsyncIterator[T]`
A protocol that allows a single object to be used as **both a synchronous and an asynchronous iterator**.
```python
class SyncAsyncIterator(Protocol[T]):
def __aiter__(self) -> AsyncIterator[T]: ...
async def __anext__(self) -> T: ...
def __iter__(self) -> Iterator[T]: ...
def __next__(self) -> T: ...
```
This enables APIs that can be consumed in either context without duplication.
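As an illustration of how one object can satisfy both halves of the protocol, here is a minimal wrapper over a list (a sketch of the pattern, not how the Rust-backed crawler is actually implemented):

```python
import asyncio


class ListEntries:
    """Satisfies SyncAsyncIterator: usable with both `for` and `async for`."""

    def __init__(self, items):
        self._it = iter(items)

    # Synchronous half
    def __iter__(self):
        return self

    def __next__(self):
        return next(self._it)

    # Asynchronous half
    def __aiter__(self):
        return self

    async def __anext__(self):
        try:
            return next(self._it)
        except StopIteration:
            # StopIteration must not escape a coroutine; translate it.
            raise StopAsyncIteration


async def collect(stream):
    return [entry async for entry in stream]


sync_result = list(ListEntries([1, 2, 3]))
async_result = asyncio.run(collect(ListEntries([1, 2, 3])))
```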
## Dataset
The central abstraction representing a remote dataset.
```python
class Dataset:
def crawl(self) -> SyncAsyncIterator[FileEntry | DirEntry]: ...
def crawl_file(self) -> SyncAsyncIterator[FileEntry]: ...
def download_with_validation(
self, dst_dir: pathlib.Path, limit: int = 0
) -> None: ...
def id(self) -> str: ...
def root_url(self) -> str: ...
```
### `Dataset.crawl()`
```python
def crawl(self) -> SyncAsyncIterator[FileEntry | DirEntry]
```
Returns a stream of dataset entries; each entry is either a `DirEntry` or a `FileEntry`.
The returned object supports **both**:
#### Synchronous iteration
```python
for entry in dataset.crawl():
print(entry)
```
#### Asynchronous iteration
```python
async for entry in dataset.crawl():
print(entry)
```
Entries are yielded as either `DirEntry` or `FileEntry`.
### `Dataset.download_with_validation()`
```python
def download_with_validation(
self, dst_dir: pathlib.Path, limit: int = 0
) -> None
```
Downloads files in the dataset into the given directory and validates them using the provided checksums.
* This is a **blocking** call.
* Internally backed by a Rust async runtime.
* Intended for use from synchronous Python code.
#### Parameters
* **`dst_dir`**
Destination directory for downloaded files.
* **`limit`**
Maximum number of files to download.
`0` means no limit.
### `Dataset.root_url()`
```python
def root_url(self) -> str
```
Returns the dataset’s root URL.
## Resolving a Dataset
### `resolve`
```python
def resolve(url: str, /) -> Dataset
```
Resolves a dataset from a given URL.
#### Example
```python
dataset = resolve("https://example.com/dataset")
```
The returned `Dataset` can then be crawled or downloaded.
## Example Usage
### Crawl a dataset synchronously
```python
dataset = resolve("https://example.com/dataset")
for entry in dataset.crawl():
if isinstance(entry, FileEntry):
print("File:", entry.path_crawl_rel)
elif isinstance(entry, DirEntry):
print("Dir:", entry.path_crawl_rel)
```
### Crawl a dataset asynchronously
```python
dataset = resolve("https://example.com/dataset")
async for entry in dataset.crawl():
print(entry)
```
### Download a dataset
```python
dataset = resolve("https://example.com/dataset")
dataset.download_with_validation(dst_dir=pathlib.Path("./data"))
```
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | https://github.com/EOSC-Data-Commons/datahugger-ng | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | maturin/1.12.2 | 2026-02-18T22:36:57.286497 | datahugger_ng-0.3.0-cp310-abi3-win_amd64.whl | 2,880,580 | c8/83/92f583b96e81438b33b913a9791fa84f4a7119f2ddca7ed0b1e127aaf540/datahugger_ng-0.3.0-cp310-abi3-win_amd64.whl | cp310 | bdist_wheel | null | false | 5a9303bc97951ceef40ac97ff909ecd0 | c94a8e87fbb70715d647917fbdfa4029fb921e6d3a117b7573d998f3cbfa4a36 | c88392f583b96e81438b33b913a9791fa84f4a7119f2ddca7ed0b1e127aaf540 | null | [] | 696 |
2.4 | biasclear | 1.1.0 | Structural bias detection engine built on Persistent Influence Theory (PIT). Detect framing, anchoring, false consensus, and 30+ rhetorical distortion patterns. | # BiasClear
**Structural bias detection and correction engine built on Persistent Influence Theory (PIT).**
BiasClear identifies and corrects structural bias in text — not surface-level sentiment, but the deeper patterns that shape how information is framed, weighted, and presented. It uses a frozen ethics core (immutable code, not tunable weights) to evaluate text against epistemological principles.
## Features
- **Scan** — Detect structural bias patterns (framing, anchoring, authority, omission, false equivalence)
- **Correct** — Generate debiased alternatives with inline diffs
- **Score** — Calculate truth alignment scores based on PIT principles
- **Audit** — SHA-256 blockchain audit chain for every evaluation
- **Frozen Ethics Core** — Immutable governance engine that cannot be prompt-injected
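The audit-chain idea can be illustrated with a few lines of SHA-256 hash chaining; this is a generic sketch of the concept, not BiasClear's actual entry format:

```python
import hashlib
import json


def append_audit(chain, payload):
    """Append an entry whose SHA-256 hash covers the payload plus the
    previous entry's hash, making earlier tampering detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({
        "payload": payload,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return chain


chain = []
append_audit(chain, {"action": "scan", "flags": 2})
append_audit(chain, {"action": "correct", "flags": 0})
# Each entry commits to its predecessor, so editing an earlier
# entry invalidates every hash after it.
```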
## Quick Start
```bash
# Install
pip install -e .
# CLI usage
biasclear scan "The data clearly proves that this approach is the only viable option."
biasclear correct "Studies overwhelmingly show..."
# API server
pip install -e ".[api]"
uvicorn biasclear.api:app --port 8100
```
## Architecture
```
biasclear/
├── frozen_core.py # Immutable ethics engine (PIT-based)
├── detector.py # Bias pattern detection
├── corrector.py # Bias correction with diffs
├── scorer.py # Truth alignment scoring
├── audit.py # SHA-256 audit chain
├── api.py # FastAPI server
├── cli.py # Command-line interface
└── demo.html # Interactive demo page
```
## Client SDK
A Python client SDK for consuming the BiasClear API is included in `biasclear-client/`:
```bash
pip install -e ./biasclear-client
```
```python
from biasclear_client import BiasClearClient
client = BiasClearClient(base_url="http://localhost:8100", api_key="your-key")
result = client.scan("The data clearly shows...")
print(result.flags)
```
See [biasclear-client/README.md](biasclear-client/README.md) for full SDK documentation.
## License
AGPL-3.0 — See [LICENSE](LICENSE) for details.
The client SDK (`biasclear-client/`) is licensed under MIT for unrestricted integration.
| text/markdown | Brad Slimp | null | null | null | AGPL-3.0-only | bias, nlp, ai-safety, fairness, ethics, persuasion, fact-checking | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artific... | [] | null | null | >=3.11 | [] | [] | [] | [
"python-dotenv",
"google-genai",
"diff-match-patch",
"fastapi>=0.100.0; extra == \"api\"",
"uvicorn[standard]>=0.20.0; extra == \"api\"",
"sqlcipher3>=0.5.0; extra == \"encrypt\"",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/bws82/biasclear",
"Repository, https://github.com/bws82/biasclear",
"Issues, https://github.com/bws82/biasclear/issues"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-18T22:34:54.789259 | biasclear-1.1.0.tar.gz | 82,825 | e6/28/6120553e8528deee3377dfbfdccec8c48f475a43b5fc870b9364b0098a62/biasclear-1.1.0.tar.gz | source | sdist | null | false | 8c7d49119c0471798fcdc495624f32c5 | d4a1746450abcfeff21fc1767682d3217a39cae373f6197ca61333f1db48bbca | e6286120553e8528deee3377dfbfdccec8c48f475a43b5fc870b9364b0098a62 | null | [
"LICENSE"
] | 252 |
2.4 | truss | 0.13.5 | A seamless bridge from model development to model delivery | # Truss
**The simplest way to serve AI/ML models in production**
[](https://badge.fury.io/py/truss)
[](https://github.com/basetenlabs/truss/actions/workflows/release.yml)
Truss is the CLI for deploying and serving ML models on Baseten. Package your model's serving logic in Python, launch training jobs, and deploy to production—Truss handles containerization, dependency management, and GPU configuration.
Deploy models from any framework: `transformers`, `diffusers`, PyTorch, TensorFlow, vLLM, SGLang, TensorRT-LLM, and more:
* 🦙 [Llama 3](https://github.com/basetenlabs/truss-examples/tree/main/llama/llama-3-8b-instruct) ([70B](https://github.com/basetenlabs/truss-examples/tree/main/llama/llama-3-70b-instruct)) · [Qwen3](https://github.com/basetenlabs/truss-examples/tree/main/qwen/qwen-3)
* 🧠 [DeepSeek R1](https://github.com/basetenlabs/truss-examples/tree/main/deepseek) — reasoning models
* 🎨 [FLUX.1](https://github.com/basetenlabs/truss-examples/tree/main/flux) — image generation
* 🗣 [Whisper v3](https://github.com/basetenlabs/truss-examples/tree/main/whisper/whisper-v3-truss) — speech recognition
**[Get started](https://docs.baseten.co/examples/deploy-your-first-model)** | [100+ examples](https://github.com/basetenlabs/truss-examples/) | [Documentation](https://docs.baseten.co)
## Why Truss?
* **Write once, run anywhere:** Package model code, weights, and dependencies with a model server that behaves the same in development and production.
* **Fast developer loop:** Iterate with live reload, skip Docker and Kubernetes configuration, and use a batteries-included serving environment.
* **Support for all Python frameworks:** From `transformers` and `diffusers` to PyTorch and TensorFlow to vLLM, SGLang, and TensorRT-LLM, Truss supports models created and served with any framework.
* **Production-ready:** Built-in support for GPUs, secrets, caching, and autoscaling when deployed to [Baseten](https://baseten.co) or your own infrastructure.
## Installation
Install Truss with:
```
pip install --upgrade truss
```
## Quickstart
As a quick example, we'll package a [text classification pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) from the open-source [`transformers` package](https://github.com/huggingface/transformers).
### Create a Truss
To get started, create a Truss with the following terminal command:
```sh
truss init text-classification
```
When prompted, give your Truss a name like `Text classification`.
Then, navigate to the newly created directory:
```sh
cd text-classification
```
### Implement the model
One of the two essential files in a Truss is `model/model.py`. In this file, you write a `Model` class: an interface between the ML model that you're packaging and the model server that you're running it on.
There are two member functions that you must implement in the `Model` class:
* `load()` loads the model onto the model server. It runs exactly once when the model server is spun up or patched.
* `predict()` handles model inference. It runs every time the model server is called.
Here's the complete `model/model.py` for the text classification model:
```python
from transformers import pipeline
class Model:
def __init__(self, **kwargs):
self._model = None
def load(self):
self._model = pipeline("text-classification")
def predict(self, model_input):
return self._model(model_input)
```
### Add model dependencies
The other essential file in a Truss is `config.yaml`, which configures the model serving environment. For a complete list of the config options, see [the config reference](https://truss.baseten.co/reference/config).
The pipeline model relies on [Transformers](https://huggingface.co/docs/transformers/index) and [PyTorch](https://pytorch.org/). These dependencies must be specified in the Truss config.
In `config.yaml`, find the line `requirements`. Replace the empty list with:
```yaml
requirements:
- torch==2.0.1
- transformers==4.30.0
```
No other configuration is needed.
## Deployment
Truss is maintained by [Baseten](https://baseten.co) and deploys to the [Baseten Inference Stack](https://www.baseten.co/resources/guide/the-baseten-inference-stack/), which combines optimized inference runtimes with production infrastructure for autoscaling, multi-cloud reliability, and fast cold starts.
### Get an API key
To set up the Baseten remote, you'll need a
[Baseten API key](https://app.baseten.co/settings/account/api_keys). If you
don't have a Baseten account, no worries, just
[sign up for an account](https://app.baseten.co/signup/) and you'll be issued
plenty of free credits to get you started.
### Run `truss push`
With your Baseten API key ready to paste when prompted, you can deploy your
model:
```sh
truss push
```
You can monitor your model deployment from [your model dashboard on Baseten](https://app.baseten.co/models/).
### Invoke the model
After the model has finished deploying, you can invoke it from the terminal.
**Invocation**
```sh
truss predict -d '"Truss is awesome!"'
```
**Response**
```json
[
{
"label": "POSITIVE",
"score": 0.999873161315918
}
]
```
## Truss contributors
We enthusiastically welcome contributions in accordance with our
[contributors' guide](CONTRIBUTING.md) and
[code of conduct](CODE_OF_CONDUCT.md).
| text/markdown | null | Pankaj Gupta <no-reply@baseten.co>, Phil Howes <no-reply@baseten.co> | null | null | null | AI, MLOps, Machine Learning, Model Deployment, Model Serving | [] | [] | null | null | <3.15,>=3.9 | [] | [] | [] | [
"aiofiles<25,>=24.1.0",
"blake3<2,>=1.0.4",
"boto3<2,>=1.34.85",
"click<9,>=8.0.3",
"google-cloud-storage>=2.10.0",
"httpx-ws<0.8,>=0.7.1",
"httpx>=0.24.1",
"huggingface-hub>=0.25.0",
"inquirerpy<0.4,>=0.3.4",
"jinja2<4,>=3.1.2",
"libcst>=1.1.2",
"loguru>=0.7.2",
"packaging>=20.9",
"pathsp... | [] | [] | [] | [
"Repository, https://github.com/basetenlabs/truss",
"Homepage, https://truss.baseten.co",
"Bug Reports, https://github.com/basetenlabs/truss/issues",
"Documentation, https://truss.baseten.co",
"Baseten, https://baseten.co"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:34:30.813205 | truss-0.13.5.tar.gz | 459,038 | 23/15/21746a20c81a0c72249cc1dfd07c97ef3b5203e4a77fdb1fc02deed51788/truss-0.13.5.tar.gz | source | sdist | null | false | d0e27a01cd01fb25256791b5c9ed6ebc | 83a3744439e11a0365b807706dfdc54ca88e584cba493fd03b986e6fc02f4e4d | 231521746a20c81a0c72249cc1dfd07c97ef3b5203e4a77fdb1fc02deed51788 | MIT | [
"LICENSE"
] | 15,687 |
2.4 | tingbok | 0.0.4 | Product and category lookup service for domestic inventory systems | # tingbok
Product and category lookup service for domestic inventory systems - this service provides a centralized API for:
- **Global tingbok category vocabulary** — a curated ~258-concept taxonomy for household inventory categorization
- **SKOS category lookups** — hierarchy paths from AGROVOC, DBpedia, Wikidata
- **EAN/barcode product lookups** — product data from Open Food Facts, Open Library, shopping receipts and various other lookup services
In the future, I'm also considering organizing other information into the vocabulary, information currently stored in free-text "tags": whether a product is "broken", "worn" or "brand new", whether a product is meant for ladies, gents or children, etc.
## Background
I'm working on a domestic inventory service [inventory-md](https://github.com/tobixen/inventory-md), and I already have two database instances - one for my boat and one for my home.
What I (or you) actually have in stock is *local* information. Information on the things one *may* have in the inventory belongs to a global database. As there is overlap between the two inventories, I already want those two to share information. Since the free queryable services out there often have restricted capacity and may be rate-limited, caching is important, so the caching system was created early on. But how to share the caches? Not only between the instances; I also have the data duplicated on my laptop and on a server.
We have sort of a hierarchy here: at the very top are the "root categories". Clothes and food are usually two very different things and fit well as root categories in a domestic inventory system. (Of course, personal opinions as well as local needs may vary; it should be possible to override those things). Intermediate categories exist, like "food/fruits", as do very specific categories like "food/dairy/milk/fresh full fat milk". This is important when generating shopping lists: I do want to always have fresh full fat milk in the fridge, as well as some fruits and nuts.
Near the bottom there may be very specific information about brand/producer, package size, etc; this is often linked with a European/International Article Number (EAN). Perhaps it had a price tag when it was purchased as well.
All this information belongs to a global database.
(At the very bottom, there may also be information about a specific item. A teddy bear may have an EAN, but your daughter's teddy bear should be considered unique and may also have a name. This does not belong in a global database.)
### SKOS
I wanted to slap some standard hierarchical category system on the inventories. According to Wikipedia, "Simple Knowledge Organization System (SKOS) is a W3C recommendation designed for representation of (...) classification schemes", so it seemed a perfect fit. Unfortunately, this standard only describes the schema of a classification scheme. I found three public databases: AGROVOC, DBpedia and WikiData. All three of them have very slow query APIs, so a local cache was paramount - but even that seemed insufficient, as the API calls were timing out frequently - when managing to get hold of data from the source, it seemed important to keep it and share it with all instances. I've found better ways of accessing the information, but the public databases are still slow, so it's nice to have a public cache available. It's also possible to download the complete database from upstream and serve it, but even the smallest (AGROVOC) is big and takes a long time to load, so better to do this from a separate service than to do it every time the inventory is changed.
**Sources:** Agrovoc/DBpedia/Wikidata
**Data flow:** Caching
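The caching described above boils down to memoizing slow upstream lookups with an expiry. A minimal in-memory sketch of the idea — the function and parameter names here are mine, not tingbok's actual implementation:

```python
import time

def make_ttl_cache(lookup, ttl=86400):
    """Wrap a slow lookup function with a simple in-memory TTL cache."""
    cache = {}

    def cached(uri):
        now = time.time()
        hit = cache.get(uri)
        if hit is not None and now - hit[0] < ttl:
            return hit[1]  # fresh enough: skip the slow upstream call
        value = lookup(uri)  # e.g. a query against AGROVOC/DBpedia/Wikidata
        cache[uri] = (now, value)
        return value

    return cached
```

In practice the cache is shared between instances, so a persistent store (or the tingbok service itself) would replace the in-process dict.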
### EANs, ISBNs, price information etc
To make it easier to populate the database, it's an important feature to look up EANs and find both product information and category information.
There exists a public database of food-related EANs, the OpenFoodFacts database. It contains a category system (not SKOS-based) and is the one that seems closest to the hierarchical categorization system we'd like to use in the inventory database. However, as its name suggests, it's mostly about food products. The database seems to work well, but it's still nice to have a caching service; it makes things more robust (inventory-md will by default try the tingbok service first and then go directly to the official sources if tingbok doesn't work). There also exist various open and free services for looking up book ISBNs. Unfortunately, most non-food EAN databases are commercial. Fetching things from commercial databases, caching them and exposing them to the public may have nasty legal side effects, so I may need to look into that.
Another source (which in some cases seems essential, as some shop chains may have their own article numbers and bar code systems) is to simply compare bar codes with shopping receipts. This way one gets price information as well. It may involve a lot of work, but now with AI tools it's possible to get it done relatively quickly.
**Sources:** OpenFoodFacts and various other sources, including user contributions through the API (in the beginning we'll try without any kind of authentication, perhaps we should require signed data in the future).
**Data flow:** This is more than just caching: since user contributions are allowed, we're actually building up a database here that needs to be backed up as well. The database should be considered free, and it should be possible not only to look up things but also to download the full database.
### The global tingbok category vocabulary
As none of the sources have a category hierarchy suitable for easy navigation, it was sadly necessary to build yet another vocabulary. The tingbok category vocabulary is mostly meant to link up the concepts from the other category sources into a neat category tree. It may also be the "official source" of what's true when different databases show different things - in AGROVOC, for instance, "bedding" means things optimized for absorbing animal pee, while in most domestic inventory lists this category is for things that help humans get a good night's sleep.
**Sources:** Curated database with user contributions, but we can take it as pull requests in GitHub as for now.
**Data flow:** The service will serve data from a local static database.
## Name
*Tingbok* is Norwegian, and it was an official registry from 1633-03-05 to 1931. The word "ting" was in this context meant to refer to a court or other group of people making decisions. In the beginning it contained court decisions, but gradually it contained mostly ownership information on properties and special clauses on property. Admittedly it's not much related to a category and EAN database.
The word "ting" has multiple meanings, and today the strongest connection is to "thing". An inventory listing is basically a list of things. Both as a "book containing my things" and as a "registry of property", the word "Tingbok" seems to fit my inventory-md service quite well. As for now, "Tingbok" is the name of the "official" product and category lookup service, but I'm considering renaming my inventory-md as well.
To me, a Norwegian born 40 years after the last tingbok entry was written, "Tingbok" does not sound like a very official thing, it has a bit of a funny sound to it. Claude suggested it, and I decided to stick with it.
## Quick start
You are supposed to use tingbok.plann.no - you are not supposed to set up your own server, as for now. I'd happily accept pull requests if you want to contribute to making this a *federated service*.
TODO: fix a Makefile
TODO: fix "claude skills" to always fix a Makefile and never suggest "pip install" in any documentation
Disclaimer: All documentation below is AI-generated.
```bash
pip install tingbok
uvicorn tingbok.app:app --host 127.0.0.1 --port 5100
```
## API endpoints
| Method | Path | Description |
|--------|------|-------------|
| `GET` | `/health` | Liveness check |
| `GET` | `/api/skos/lookup` | Single concept lookup |
| `GET` | `/api/skos/hierarchy` | Full hierarchy paths |
| `GET` | `/api/skos/labels` | Translations for a URI |
| `GET` | `/api/ean/{ean}` | EAN/barcode product lookup |
| `GET` | `/api/vocabulary` | Full package vocabulary |
| `GET` | `/api/vocabulary/{concept_id}` | Single concept |
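Before hitting `/api/ean/{ean}`, a client can validate the barcode's check digit locally and skip obviously mistyped codes. A hedged sketch — the helper name is mine, only the endpoint comes from the table above:

```python
def ean13_is_valid(ean: str) -> bool:
    """Validate an EAN-13 check digit before querying /api/ean/{ean}."""
    if len(ean) != 13 or not ean.isdigit():
        return False
    digits = [int(d) for d in ean]
    # Weights alternate 1, 3, 1, 3, ... over the first 12 digits
    checksum = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - checksum % 10) % 10 == digits[12]
```

Note that receipts from shop chains with their own article-number schemes may fail this check even though the tingbok database knows them, so treat a failed check as a warning rather than a hard error.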
## Development
```bash
pip install -e ".[dev]"
pytest tests/ -v
ruff check src/
```
## License
AGPL 3
| text/markdown | null | Tobias Brox <tobias@tobixen.no> | null | null | AGPL-3.0-or-later | null | [
"Development Status :: 3 - Alpha",
"Framework :: FastAPI",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
... | [] | null | null | >=3.10 | [] | [] | [] | [
"fastapi>=0.100",
"pyyaml>=6.0",
"uvicorn[standard]>=0.20",
"anyio[trio]>=4.0; extra == \"dev\"",
"httpx>=0.25; extra == \"dev\"",
"pre-commit>=4.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"niquests>=3.0; extra == \"ean\"",
"niquests>=3.0; extra == \"skos... | [] | [] | [] | [
"Repository, https://github.com/tobixen/tingbok",
"Issues, https://github.com/tobixen/tingbok/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:34:14.576854 | tingbok-0.0.4.tar.gz | 19,426 | 99/6a/17dc80ba7ef1560089ed3ee66f76ab556728bf6c4a0f669f46410e177c36/tingbok-0.0.4.tar.gz | source | sdist | null | false | 6cddd8ecd161ff1decb4020dd9dea934 | af91f20cb387b051565a0dcce7cfac930379f371cfe8f3ff0a01f5133d18dc5b | 996a17dc80ba7ef1560089ed3ee66f76ab556728bf6c4a0f669f46410e177c36 | null | [
"LICENSE"
] | 248 |
2.4 | flexmem | 0.1.0 | Permanent memory for Claude Code sessions. Your intellectual capital, preserved. | # flexmem
Permanent memory for Claude Code sessions. Your intellectual capital, preserved.
Coming soon. Follow [@axp_systems](https://axp.systems).
| text/markdown | null | AXP Systems <hello@axp.systems> | null | null | MIT | ai, claude, knowledge, mcp, memory | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://axp.systems",
"Repository, https://github.com/axp-systems/flexmem"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T22:34:12.774266 | flexmem-0.1.0.tar.gz | 850 | fa/ba/a0ce0de7330a917f1bc5ba6505a7b62b2abc34c22f56ea73d7fd72db91f0/flexmem-0.1.0.tar.gz | source | sdist | null | false | 785e3c17c5f488fd8dceba7bcbb3becf | 03a903c726cf46a850b7d7402f03c1e8abfb771c7e3e42cf554c2b88896ccc09 | fabaa0ce0de7330a917f1bc5ba6505a7b62b2abc34c22f56ea73d7fd72db91f0 | null | [] | 261 |
2.4 | convergentAI | 1.1.0 | Multi-agent coherence and coordination for AI systems | # Convergent
Coordination library for multi-agent AI systems. Agents share an intent graph, detect overlaps before building, and converge on compatible outputs — eliminating rework cycles from parallel code generation.
[](https://github.com/AreteDriver/convergent/actions/workflows/ci.yml)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](https://pypi.org/project/convergentAI/)
[]()
[]()
## Why This Exists
- **Problem:** Parallel AI agents generating code independently produce incompatible outputs. Agent A builds `User` with `int` IDs while Agent B uses `UUID`. Code fails to merge. 2-3 rework cycles before anything integrates.
- **Audience:** Multi-agent orchestration frameworks, distributed systems with autonomous agents, anyone running parallel AI code generation.
- **Outcome:** Agents publish what they're building to a shared intent graph. Before starting work, they check for overlaps and adopt existing decisions. Compatible output on first try. Zero rework.
## What It Does
- **Intent graph** — Shared, append-only graph of architectural decisions. Agents publish intents (what they build, what they need) and query for overlaps
- **Structural matching** — Detect when two agents plan to build the same interface based on name, kind, and tag similarity
- **Stability scoring** — Evidence-weighted confidence (test passes, code commits, downstream consumers) determines which intent wins conflicts
- **Constraint enforcement** — Hard requirements that must hold (type checks pass, no circular deps) validated by subprocess gates
- **Triumvirate voting** — Phi-weighted consensus engine with configurable quorum (ANY, MAJORITY, UNANIMOUS)
- **Stigmergy** — Trail markers that agents leave for future agents, with exponential decay (inspired by ant pheromone trails)
- **Flocking** — Emergent group behavior from local rules: alignment (adopt patterns), cohesion (detect drift), separation (avoid file conflicts)
- **Zero dependencies** — Pure Python, stdlib only. Optional Rust acceleration via PyO3
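The stigmergy decay mentioned above is plain exponential decay: a marker's weight halves every half-life, as in ant pheromone models. A sketch under that assumption — this is not convergentAI's actual API; `StigmergyField` encapsulates the real logic:

```python
import math

def marker_weight(initial, age_seconds, half_life=3600.0):
    """Weight of a stigmergy trail marker after it has aged, pheromone-style."""
    # Choose lambda so the weight halves once per half-life
    decay_rate = math.log(2) / half_life
    return initial * math.exp(-decay_rate * age_seconds)
```

Markers whose weight falls below some threshold can simply be pruned, so stale trails fade away without explicit cleanup coordination between agents.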
## Quickstart
### Prerequisites
- Python 3.10+
### Install
```bash
pip install convergentAI
# or from source:
git clone https://github.com/AreteDriver/convergent.git
cd convergent
pip install -e .
```
### Run
```python
from convergent import IntentResolver, PythonGraphBackend, Intent, InterfaceSpec
resolver = IntentResolver(backend=PythonGraphBackend())
# Agent A publishes what it's building
resolver.publish(Intent(
intent_id="auth-service",
agent_id="agent-a",
description="JWT authentication service",
interfaces=[
InterfaceSpec(name="User", kind="class", tags=["auth", "model"]),
],
))
# Agent B checks for overlapping work before starting
overlaps = resolver.find_overlapping(Intent(
intent_id="user-module",
agent_id="agent-b",
description="User management",
interfaces=[
InterfaceSpec(name="User", kind="class", tags=["auth", "model"]),
],
))
# → overlaps shows agent-a already owns the User class
# → agent-b adopts agent-a's schema instead of building its own
```
## Usage Examples
### Example 1: Persistent intent graph with SQLite
```python
from convergent import IntentResolver, SQLiteBackend
# WAL mode, concurrent reads, persistent across restarts
resolver = IntentResolver(backend=SQLiteBackend("./intents.db"))
resolver.publish(intent)
# Inspect from CLI
# python -m convergent inspect ./intents.db --format table
```
### Example 2: Consensus voting
```python
from convergent import GorgonBridge, CoordinationConfig
bridge = GorgonBridge(CoordinationConfig(db_path="./coordination.db"))
# Request a vote
request_id = bridge.request_consensus(
task_id="pr-42",
question="Should we merge this PR?",
context="All tests pass, adds new auth endpoint",
)
# Agents vote (phi-weighted by historical trust)
bridge.submit_agent_vote(
request_id, "agent-1", "reviewer", "claude:sonnet",
"approve", 0.9, "LGTM"
)
decision = bridge.evaluate(request_id)
# → DecisionOutcome.APPROVED
```
### Example 3: Enrich agent prompts with coordination context
```python
context = bridge.enrich_prompt(
agent_id="agent-1",
task_description="implement auth",
file_paths=["src/auth.py"],
)
# → Returns stigmergy markers + flocking constraints + phi score context
# → Inject into agent's system prompt for coordination-aware generation
```
## Architecture
```text
Gorgon (orchestrator)
│
▼
┌── Convergent ───────────────────────────────────────┐
│ │
│ Coordination Protocol (Phase 3) │
│ ┌────────────┐ ┌───────────┐ ┌─────────┐ │
│ │ Triumvirate │ │ Stigmergy │ │Flocking │ │
│ │ (voting) │ │ (trails) │ │ (swarm) │ │
│ └──────┬──────┘ └─────┬─────┘ └────┬────┘ │
│ └───────┬───────┴─────────────┘ │
│ ▼ │
│ Intent Graph + Intelligence (Phase 1-2) │
│ ┌──────────────────────────────────────────┐ │
│ │ Resolver │ Contracts │ Governor │ Gates │ │
│ └──────────────────────────────────────────┘ │
│ ▼ │
│ ┌──────────────────────────────────────────┐ │
│ │ Python (memory) │ SQLite │ Rust (opt) │ │
│ └──────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────┘
```
**Key components:**
| Component | Purpose |
|-----------|---------|
| `IntentResolver` | Query the intent graph, detect overlaps, resolve conflicts |
| `MergeGovernor` | Three-layer decision authority: constraints → intents → economics |
| `Triumvirate` | Phi-weighted voting with configurable quorum levels |
| `StigmergyField` | Trail markers with exponential decay for indirect agent communication |
| `FlockingCoordinator` | Alignment, cohesion, separation rules for emergent coordination |
| `GorgonBridge` | Single entry point for orchestrator integration |
## Testing
```bash
# Python-only (no Rust needed)
PYTHONPATH=python pytest tests/ -v
# With optional Rust acceleration
maturin develop --release && pytest tests/ -v
# Lint
ruff check python/ tests/ && ruff format --check python/ tests/
```
800+ tests, 99% coverage, CI green.
## Roadmap
- **v1.0.0** (current): Stable API contract, published to PyPI, PEP 561 py.typed
- **v0.6.0**: Pluggable signal bus (SQLite cross-process + filesystem), decision history query API
- **v0.5.0**: Coordination protocol (triumvirate voting, stigmergy, flocking, signal bus)
- **v0.4.0**: CLI inspector, async backend, Rust backend parity
## License
[MIT](LICENSE)
| text/markdown | AreteDriver | null | null | null | MIT | multi-agent, ai, orchestration, convergence, intent-graph | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming ... | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest-benchmark>=4.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"anthropic>=0.39.0; extra == \"llm\"",
"maturin<2.0,>=1.0; extra == \"rust\""
] | [] | [] | [] | [
"Homepage, https://github.com/AreteDriver/convergent",
"Repository, https://github.com/AreteDriver/convergent",
"Issues, https://github.com/AreteDriver/convergent/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:33:56.722389 | convergentai-1.1.0.tar.gz | 161,103 | 21/eb/d96af240548521eaa9524693ca1aec80a97e711ecbe9ee767a3a2cc2ee3a/convergentai-1.1.0.tar.gz | source | sdist | null | false | d59be98980ef0c50cfd1f990b78923e9 | df6ad1a596d557fec591bc76337271baddec142e20dd9ee4c96f763bd792609a | 21ebd96af240548521eaa9524693ca1aec80a97e711ecbe9ee767a3a2cc2ee3a | null | [
"LICENSE"
] | 0 |
2.4 | crunch-convert | 0.8.0 | crunch-convert - Conversion module for the CrunchDAO Platform | # Crunch Convert Tool
[](https://github.com/crunchdao/crunch-convert/actions/workflows/pytest.yml)
This Python library is designed for the [CrunchDAO Platform](https://hub.crunchdao.com/), exposing the conversion tools in a very small CLI.
- [Crunch Convert Tool](#crunch-convert-tool)
- [Installation](#installation)
- [Usage](#usage)
- [Convert a Notebook](#convert-a-notebook)
- [Freeze Requirements](#freeze-requirements)
- [Features](#features)
- [Automatic line commenting](#automatic-line-commenting)
- [Ignore everything](#ignore-everything)
- [Specifying package versions](#specifying-package-versions)
- [Inconsistent versions](#inconsistent-versions)
- [Standard libraries](#standard-libraries)
- [Optional dependencies](#optional-dependencies)
- [Name conflicts](#name-conflicts)
- [Ignore an import](#ignore-an-import)
- [R imports via rpy2](#r-imports-via-rpy2)
- [Embedded Files](#embedded-files)
- [Contributing](#contributing)
- [License](#license)
# Installation
Use [pip](https://pypi.org/project/crunch-convert/) to install the `crunch-convert`.
```bash
pip install --upgrade crunch-convert
```
# Usage
## Convert a Notebook
```bash
crunch-convert notebook ./my-notebook.ipynb --write-requirements --write-embedded-files
```
<details>
<summary>Show a programmatic way</summary>
```python
from crunch_convert.notebook import extract_from_file
from crunch_convert.requirements_txt import CrunchHubWhitelist, format_files_from_imported
flatten = extract_from_file("notebook.ipynb")
# Write the main.py
with open("main.py", "w") as fd:
fd.write(flatten.source_code)
# Map the imported requirements using the Crunch Hub's whitelist
whitelist = CrunchHubWhitelist()
requirements_files = format_files_from_imported(
flatten.requirements,
header="extracted from a notebook",
whitelist=whitelist,
)
# Write the requirements.txt files (Python and/or R)
for requirement_language, content in requirements_files.items():
with open(requirement_language.txt_file_name, "w") as fd:
fd.write(content)
# Write the embedded files
for embedded_file in flatten.embedded_files:
with open(embedded_file.normalized_path, "w") as fd:
fd.write(embedded_file.content)
```
</details>
## Freeze Requirements
```bash
crunch-convert requirements-txt freeze requirements.user.txt
```
<details>
<summary>Show a programmatic way</summary>
```python
from crunch_convert import RequirementLanguage
from crunch_convert.requirements_txt import CrunchHubVersionFinder, CrunchHubWhitelist, format_files_from_named, freeze, parse_from_file
whitelist = CrunchHubWhitelist()
version_finder = CrunchHubVersionFinder()
# Open the requirements.txt to freeze
with open("requirements.txt", "r") as fd:
content = fd.read()
# Parse it into NamedRequirement
requirements = parse_from_file(
language=RequirementLanguage.PYTHON,
file_content=content
)
# Freeze them
frozen_requirements = freeze(
requirements=requirements,
# Only freeze if required by the whitelist
freeze_only_if_required=True,
whitelist=whitelist,
version_finder=version_finder,
)
# Format the new requirements.txt using now frozen requirements
frozen_requirements_files = format_files_from_named(
frozen_requirements,
header="frozen from registry",
whitelist=whitelist,
)
# Write to the new file
with open("requirements.frozen.txt", "w") as fd:
content = frozen_requirements_files[RequirementLanguage.PYTHON]
fd.write(content)
```
> [!TIP]
> The output of `format_files_from_imported()` can be re-parsed right after, no need to first store it in a file.
</details>
# Features
## Automatic line commenting
Only functions, imports, and classes will be kept.
Everything else is commented out to prevent side effects when your code is loaded into the cloud environment (e.g. when you're exploring the data, debugging your algorithm, or doing visualization using Matplotlib, etc.).
You can prevent this behavior by using special comments to tell the system to keep part of your code:
- To start a section that you want to keep, write: `@crunch/keep:on`
- To end the section, write: `@crunch/keep:off`
```python
# @crunch/keep:on
# keep global initialization
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# keep constants
TRAIN_DEPTH = 42
IMPORTANT_FEATURES = [ "a", "b", "c" ]
# @crunch/keep:off
# this will be ignored
x, y = crunch.load_data()
def train(...):
...
```
The result will be:
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
TRAIN_DEPTH = 42
IMPORTANT_FEATURES = [ "a", "b", "c" ]
#x, y = crunch.load_data()
def train(...):
...
```
> [!TIP]
> You can put a `@crunch/keep:on` at the top of the cell and never close it to keep everything.
### Ignore everything
To ignore everything when submitting, use the `@crunch/keep:none` command to exclude even imports, functions, and classes.
```python
# @crunch/keep:none
from google.colab import files
files.download("test.joblib")
def score_local():
...
```
The result will be:
```python
# from google.colab import files
# files.download("test.joblib")
# def score_local():
# ...
```
> [!TIP]
> You can put a `@crunch/keep:none` at the top of the cell and never close it to keep absolutely nothing. <br />
> You can put a `@crunch/keep:off` to restore the [default commenting behavior](#automatic-line-commenting).
## Specifying package versions
Since submitting a notebook does not include a `requirements.txt`, users can instead specify the version of a package using import-level [requirement specifiers](https://pip.pypa.io/en/stable/reference/requirement-specifiers/#examples) in a comment on the same line.
```python
# Valid statements
import pandas # == 1.3
import sklearn # >= 1.2, < 2.0
import tqdm # [foo, bar]
import sklearn # ~= 1.4.2
from requests import Session # == 1.5
```
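Conceptually, the extractor pairs each import with the specifier in its trailing comment. A deliberately simplified regex sketch of that idea (crunch-convert actually parses the source with libcst, not regexes):

```python
import re

# Matches "import <name>  # <specifier>" — a simplified stand-in
# for the real libcst-based parser.
_IMPORT_SPEC = re.compile(r"^\s*import\s+(\w+)\s*#\s*(.+)$")

def extract_specifier(line):
    """Return (module, specifier) for lines like 'import pandas  # == 1.3'."""
    match = _IMPORT_SPEC.match(line)
    if match is None:
        return None
    name, spec = match.groups()
    return name, spec.strip()
```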
### Inconsistent versions
Specifying a version multiple times will cause the submission to be rejected if the versions differ.
```python
# Inconsistent versions will be rejected
import pandas # == 1.3
import pandas # == 1.5
```
### Standard libraries
Specifying versions on standard libraries does nothing (but they will still be rejected if there is an inconsistent version).
```python
# Will be ignored
import os # == 1.3
import sys # == 1.5
```
### Optional dependencies
If an optional dependency is required for the code to work properly, an import statement must be added, even if the code does not use it directly.
```python
import castle.algorithms
# Keep me, I am needed by castle
import torch
```
### Name conflicts
It is possible for a single import name to resolve to multiple libraries on PyPI. If this happens, you must specify which one you want. If you do not want a specific version, you can use `@latest`; without this, we cannot distinguish between commented code and version specifiers.
```python
# Prefer https://pypi.org/project/EMD-signal/
import pyemd # EMD-signal @latest
# Prefer https://pypi.org/project/pyemd/
import pyemd # pyemd @latest
```
### Ignore an import
If you do not want the process to add the package to the `requirements.txt` file, you can use `@ignore` as a version specifier.
```python
# Ignore pandas, use already installed (if any; else, import error is expected!)
import pandas # @ignore
```
## R imports via rpy2
For notebook users, the packages are automatically extracted from the `importr("<name>")` calls, which is provided by [rpy2](https://rpy2.github.io/).
```python
# Import the `importr` function
from rpy2.robjects.packages import importr
# Import the "base" R package
base = importr("base")
```
The following format must be followed:
- The import must be declared at the root level.
- The result must be assigned to a variable; the variable's name will not matter.
- The function name must be `importr`, and it must be imported as shown in the example above.
- The first argument must be a string constant; variables or other expressions will be ignored.
- The other arguments are ignored; this allows for [custom import mapping](https://rpy2.github.io/doc/latest/html/robjects_rpackages.html#importing-r-packages) if necessary.
The line will not be commented, [read more about line commenting here](#automatic-line-commenting).
## Embedded Files
Additional files can be embedded in cells to be submitted with the Notebook. In order for the system to recognize a cell as an Embed File, the following syntax must be followed:
```
---
file: <file_name>.md
---
<!-- File content goes here -->
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Aenean rutrum condimentum ornare.
```
Submitting multiple cells with the same file name will be rejected.
While the focus is on Markdown files, any text file will be accepted. Including but not limited to: `.txt`, `.yaml`, `.json`, ...
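The recognition rule above amounts to a few lines of parsing: a cell is an embedded file when it opens with a `---` / `file: …` / `---` header. This is an illustrative sketch, not crunch-convert's actual implementation:

```python
def parse_embedded_file(cell_source):
    """Split an embedded-file cell into (file_name, content), or None."""
    lines = cell_source.splitlines()
    # The cell must open with a three-line front-matter block: ---, file: ..., ---
    if len(lines) < 3 or lines[0].strip() != "---" or lines[2].strip() != "---":
        return None
    if not lines[1].strip().startswith("file:"):
        return None
    file_name = lines[1].strip()[len("file:"):].strip()
    content = "\n".join(lines[3:])
    return file_name, content
```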
# Contributing
Pull requests are always welcome! If you find any issues or have suggestions for improvements, please feel free to submit a pull request or open an issue in the GitHub repository.
# License
[MIT](https://choosealicense.com/licenses/mit/)
| text/markdown | Enzo CACERES | enzo.caceres@crunchdao.com | null | null | null | package development template | [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.7"
] | [] | https://github.com/crunchdao/crunch-convert | null | >=3 | [] | [] | [] | [
"click",
"libcst",
"requests",
"requirements-parser>=0.11.0",
"pandas; extra == \"test\"",
"parameterized; extra == \"test\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:32:08.570810 | crunch_convert-0.8.0.tar.gz | 21,713 | 9c/cd/155b128b95aacd24f5c35c354c72cd8f2985157bcb275812edfc05c3aa6b/crunch_convert-0.8.0.tar.gz | source | sdist | null | false | f9977a0012e33be17a2e012d3885e206 | 8766ebbdfde9ed757de6bc5746873e34ace3d58ebe6c31501a3761efbf6b0e17 | 9ccd155b128b95aacd24f5c35c354c72cd8f2985157bcb275812edfc05c3aa6b | null | [] | 565 |
2.4 | obris-mcp | 0.1.1 | MCP server for Obris — bring your saved references into AI conversations | # Obris MCP Server
An [MCP](https://modelcontextprotocol.io/) server that brings your saved [Obris](https://obris.ai) references into AI conversations on Claude, ChatGPT, Gemini, and any MCP-compatible client.
## Features
- **List projects** — Browse all your Obris projects to find the one you need
- **Pull in references** — Retrieve saved bookmarks, highlights, and notes as context for any conversation
- **Read-only** — The server only reads your data; it never modifies anything in your Obris account
## Setup
### 1. Get your API key
Generate an API key from your [Obris dashboard](https://app.obris.ai/settings).
### 2. Install and configure
#### Claude Desktop
Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
"mcpServers": {
"obris": {
"command": "uvx",
"args": ["obris-mcp"],
"env": {
"OBRIS_API_KEY": "your_api_key_here"
}
}
}
}
```
#### Other MCP clients
```bash
# Install
pip install obris-mcp
# Set your API key
export OBRIS_API_KEY=your_api_key_here
# Run
obris-mcp
```
## Tools
| Tool | Description |
|------|-------------|
| `list_projects` | List all your Obris projects |
| `get_project_references` | Get saved references for a specific project |
## Examples
### Example 1: Listing your projects
**Prompt:** "What Obris projects do I have?"
The `list_projects` tool is called and returns:
```
Projects:
- Marketing Strategy (id: proj_abc123)
- Product Roadmap (id: proj_def456)
- Competitive Analysis (id: proj_ghi789)
```
### Example 2: Getting references for a project
**Prompt:** "Pull in my saved references for the Marketing Strategy project."
The `list_projects` tool is called first to find the project ID, then `get_project_references` is called with `project_id: "proj_abc123"`:
```
### How to Build a Content Strategy
Content strategy starts with understanding your audience...
(source: https://example.com/content-strategy)
---
### SEO Best Practices 2025
Focus on search intent rather than keyword density...
(source: https://example.com/seo-guide)
```
### Example 3: Using references as context for a task
**Prompt:** "Using my Competitive Analysis references, summarize the key differentiators of our top 3 competitors."
The `list_projects` tool finds the Competitive Analysis project, then `get_project_references` retrieves all saved references. The AI uses those references as context to generate a structured summary of competitor differentiators based on your saved research.
## Development
```bash
# Clone and install
git clone https://github.com/obris-dev/obris-mcp.git
cd obris-mcp
cp .env.example .env # Add your API key
# Install with uv
uv sync
# Run locally
uv run obris-mcp
```
## Privacy Policy
This server sends your Obris API key to the Obris API (`api.obris.ai`) to authenticate requests. It retrieves project and reference data from your account. No data is stored locally or sent to any third party.
For the full privacy policy, see [obris.ai/privacy](https://obris.ai/privacy).
## Support
For issues or questions, contact [support@obris.ai](mailto:support@obris.ai) or open an issue on [GitHub](https://github.com/obris-dev/obris-mcp/issues).
## License
MIT | text/markdown | null | null | null | null | null | ai, context, mcp, obris, references | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"mcp[cli]>=1.0.0",
"python-dotenv>=1.0.0"
] | [] | [] | [] | [] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T22:31:22.952965 | obris_mcp-0.1.1-py3-none-any.whl | 5,623 | 57/b2/07fc1e21458d751c8034af5859cda85ce423dda2f798e5d62d9b03d05d75/obris_mcp-0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | db8a025a013f9e2a95ebf8ec24ed1124 | e5160307eec4b0383d8899e6f2cba4c225f66f946725dda49fcb0323a73cc3ed | 57b207fc1e21458d751c8034af5859cda85ce423dda2f798e5d62d9b03d05d75 | MIT | [
"LICENSE"
] | 243 |
2.4 | nrtk | 0.27.1 | Natural Robustness Toolkit (NRTK) is a platform for generating validated, sensor-specific perturbations and transformations used to evaluate the robustness of computer vision models. | 
<hr/>
<!-- :auto badges: -->
[](https://pypi.org/project/nrtk/)

[](https://nrtk.readthedocs.io/en/latest/?badge=latest)
<!-- :auto badges: -->
# Natural Robustness Toolkit (NRTK)
> The Natural Robustness Toolkit (NRTK) is an open source toolkit for generating
> operationally realistic perturbations to evaluate the natural robustness of
> computer vision algorithms.
The `nrtk` package evaluates the natural robustness of computer vision
algorithms to various perturbations, including sensor-specific changes to camera
focal length, aperture diameter, etc.
We have also created the `nrtk.interop` module to support AI T&E use cases and
workflows, through interoperability with
[MAITE](https://github.com/mit-ll-ai-technology/maite) and integration with
other [JATIC](https://cdao.pages.jatic.net/public/) tools. Users seeking to use
NRTK to perturb MAITE-wrapped datasets or evaluate MAITE-wrapped models should
utilize this module. Explore our
[T&E guides](https://nrtk.readthedocs.io/en/latest/tutorials/testing_and_evaluation_notebooks.html)
which demonstrate how `nrtk` perturbations and `maite` can be applied to assess
operational risks.
## Why NRTK?
NRTK addresses the critical gap in evaluating computer vision model resilience
to real-world operational conditions beyond what traditional image augmentation
libraries cover. T&E engineers need precise methods to assess how models respond
to sensor-specific variables (focal length, aperture diameter, pixel pitch) and
environmental factors without the prohibitive costs of exhaustive data
collection. NRTK leverages pyBSM's physics-based models to rigorously simulate
how imaging sensors capture and process light, enabling systematic robustness
testing across parameter sweeps, identification of performance boundaries, and
visualization of model degradation. This capability is particularly valuable for
satellite and aerial imaging applications, where engineers can simulate
hypothetical sensor configurations to support cost-performance trade-off
analysis during system design—ensuring AI models maintain reliability when
deployed on actual hardware facing natural perturbations in the field.
## Target Audience
This toolkit is intended to help data scientists, developers, and T&E engineers
who want to rigorously evaluate and enhance the robustness of their computer
vision models. For users of the JATIC product suite, this toolkit is used to
assess model robustness against natural perturbations.
<!-- :auto installation: -->
## Installation
`nrtk` installation has been tested on Unix and Linux systems.
To install the current version via `pip`:
```bash
pip install nrtk
```
To install the current version via `conda-forge`:
```bash
conda install -c conda-forge nrtk
```
This installs core functionality, but many specific perturbers require
additional dependencies.
### Installation with Optional Features (Extras)
NRTK uses optional "extras" to avoid installing unnecessary dependencies. You can
install extras with square brackets:
```bash
# Install with extras (note: no spaces after commas)
pip install nrtk[<extra1>,<extra2>]
```
#### Common Installation Patterns
```bash
# For basic OpenCV image perturbations
pip install nrtk[graphics]
# For basic Pillow image perturbations
pip install nrtk[Pillow]
# For pybsm's sensor-based perturbations
pip install nrtk[pybsm]
```
**Note**: Choose either `graphics` or `headless` for OpenCV, not both.
More information on extras and related perturbers, including a complete list of
extras, can be found
[here](https://nrtk.readthedocs.io/en/latest/getting_started/installation.html#extras).
Details on the perturbers and their dependencies can be found
[here](https://nrtk.readthedocs.io/en/latest/reference/api/implementations.html).
For more detailed installation instructions, visit the
[installation documentation](https://nrtk.readthedocs.io/en/latest/getting_started/installation.html).
<!-- :auto installation: -->
<!-- :auto getting-started: -->
## Getting Started
Explore usage examples of the `nrtk` package in various contexts using the
Jupyter notebooks provided in the `./docs/examples/` directory.
<!-- :auto getting-started: -->
## Example: A First Look at NRTK Perturbations
Via the pyBSM package, NRTK exposes a large set of Optical Transfer Functions
(OTFs). These OTFs can simulate different environmental and sensor-based
effects. For example, the `JitterPerturber` simulates
different levels of sensor jitter. By modifying its input parameters, you can
observe how sensor jitter affects image quality.
#### Input Image
Below is an example of an input image that will undergo a Jitter OTF
perturbation. This image represents the initial state before any transformation.

#### Code Sample
Below is example code that applies a Jitter OTF transformation:
```python
from nrtk.impls.perturb_image.optical.otf import JitterPerturber
import numpy as np
from PIL import Image
INPUT_IMG_FILE = 'docs/images/input.jpg'
image = np.array(Image.open(INPUT_IMG_FILE))
otf = JitterPerturber(sx=8e-6, sy=8e-6, name="test_name")
out_image = otf(image)
```
This code uses default values and provides a sample input image. However, you
can adjust the parameters and use your own image to visualize the perturbation.
The `sx` and `sy` parameters (the root-mean-squared jitter amplitudes in radians,
in the x and y directions) are the primary way to customize a jitter perturber.
Larger jitter amplitudes generate a larger Gaussian blur kernel.
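As a rough illustration of that relationship (this is not NRTK's actual OTF computation; the mapping from jitter in radians to a pixel-space sigma depends on the sensor geometry), a normalized Gaussian kernel widens as its sigma grows:

```python
import math

def gaussian_kernel(sigma_px: float) -> list[float]:
    """Build a normalized 1-D Gaussian kernel; a wider sigma yields a wider kernel."""
    radius = max(1, math.ceil(3 * sigma_px))  # cover roughly +/- 3 sigma
    weights = [math.exp(-(x * x) / (2.0 * sigma_px * sigma_px))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# A larger jitter amplitude corresponds to a larger sigma, hence more taps.
small = gaussian_kernel(1.0)
large = gaussian_kernel(3.0)
```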
#### Resulting Image
The output image below shows the effects of the Jitter OTF on the original
input. This result illustrates the Gaussian blur introduced due to simulated
sensor jitter.

<!-- :auto documentation: -->
## Documentation
Documentation for both release snapshots and the latest main branch is available
on [ReadTheDocs](https://nrtk.readthedocs.io).
To build the Sphinx-based documentation locally for the latest reference:
```bash
# Install dependencies
poetry sync --with main,linting,tests,docs
# Navigate to the documentation root
cd docs
# Build the documentation
poetry run make html
# Open the generated documentation in your browser
firefox _build/html/index.html
```
<!-- :auto documentation: -->
<!-- :auto contributing: -->
## Contributing
Contributions are encouraged!
The following points help ensure contributions follow development practices.
- Follow the
[JATIC Design Principles](https://cdao.pages.jatic.net/public/program/design-principles/).
- Adopt the Git Flow branching strategy.
- See the
[release process documentation](https://nrtk.readthedocs.io/en/latest/development/release_process.html)
for detailed release information.
- Additional contribution guidelines and issue reporting steps can be found in
[CONTRIBUTING.md](https://github.com/Kitware/nrtk/blob/main/CONTRIBUTING.md).
<!-- :auto contributing: -->
<!-- :auto developer-tools: -->
### Developer Tools
Ensure the source tree is acquired locally before proceeding.
#### Poetry Install
You can install using [Poetry](https://python-poetry.org/):
> [!IMPORTANT]
> NRTK requires Poetry 2.2 or higher.

> [!WARNING]
> Users unfamiliar with Poetry should use caution. See the
> [installation documentation](https://nrtk.readthedocs.io/en/latest/getting_started/installation.html#from-source)
> for more information.
```bash
poetry install --with main,linting,tests,docs --extras "<extra1> <extra2> ..."
```
#### Pre-commit Hooks
Pre-commit hooks ensure that code complies with required linting and formatting
guidelines. These hooks run automatically before commits but can also be
executed manually. To bypass checks during a commit, use the `--no-verify` flag.
To install and use pre-commit hooks:
```bash
# Install required dependencies
poetry sync --with main,linting,tests,docs
# Initialize pre-commit hooks for the repository
poetry run pre-commit install
# Run pre-commit checks on all files
poetry run pre-commit run --all-files
```
<!-- :auto developer-tools: -->
## NRTK Demonstration Tool
This [associated project](https://github.com/Kitware/nrtk-explorer) is a
local web application that demonstrates visual saliency generation in a user
interface. It shows how image perturbations generated by this package can be
used in a user interface to facilitate dataset exploration. This tool uses the
[trame framework](https://kitware.github.io/trame/).

<!-- :auto license: -->
## License
[Apache 2.0](https://github.com/Kitware/nrtk/blob/main/LICENSE)
<!-- :auto license: -->
<!-- :auto contacts: -->
## Contacts
**Current Maintainers**: Brandon RichardWebster (@bjrichardwebster), Emily
Veenhuis (@eveenhuis)
We welcome contributions to NRTK! Please start discussions by opening an issue
or pull request on GitHub. This keeps the conversation visible and helps the
whole community benefit. Our preferred channels are public, but if you'd like to
reach out privately first, feel free to contact us at nrtk@kitware.com.
<!-- :auto contacts: -->
<!-- :auto acknowledgment: -->
## Acknowledgment
This material is based upon work supported by the Chief Digital and Artificial
Intelligence Office under Contract No. 519TC-23-9-2032. The views and
conclusions contained herein are those of the author(s) and should not be
interpreted as necessarily representing the official policies or endorsements,
either expressed or implied, of the U.S. Government.
<!-- :auto acknowledgment: -->
| text/markdown | Kitware, Inc. | nrtk@kitware.com | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Unix",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
... | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"Pillow>=12.1.1; extra == \"diffusion\"",
"Pillow>=12.1.1; extra == \"pillow\"",
"Pillow>=12.1.1; extra == \"tools\"",
"accelerate>=0.20.0; extra == \"diffusion\"",
"click>=8.0.0; extra == \"tools\"",
"diffusers>=0.21.0; extra == \"diffusion\"",
"fastapi>=0.110.0; extra == \"tools\"",
"kwcoco>=0.2.18;... | [] | [] | [] | [
"Documentation, https://nrtk.readthedocs.io/"
] | poetry/2.3.2 CPython/3.10.19 Linux/6.14.0-1011-aws | 2026-02-18T22:30:26.757320 | nrtk-0.27.1-py3-none-any.whl | 141,000 | eb/e5/b9e2ffc4a20c3702ec789a45a41690527f78917e955fafe0d3193d2d2fa8/nrtk-0.27.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 51b432bce3d9ac076cdfe84b8ef5f356 | 87e229b28a029a407f3b46849af2145446b5494b71d94e4ffef7bd0cf101415e | ebe5b9e2ffc4a20c3702ec789a45a41690527f78917e955fafe0d3193d2d2fa8 | Apache-2.0 | [
"LICENSE"
] | 281 |
2.4 | sqlite-muninn | 0.2.0 | HNSW vector search + graph traversal + Node2Vec for SQLite | # sqlite-muninn
<div align="center">
<img src="https://joshpeak.net/sqlite-muninn/assets/muninn_logo_transparent.png" alt="Muninn Raven Logo" width=480px/>
<p><i>Odin's mythic <a href="https://en.wikipedia.org/wiki/Huginn_and_Muninn">raven of Memory</a>.</i></p>
</div>
A zero-dependency C extension for SQLite that adds a collection of knowledge-graph primitives: vector similarity search with HNSW indexes, graph traversal, community detection, and Node2Vec embeddings.
**[Documentation](https://neozenith.github.io/sqlite-muninn/)** | **[GitHub](https://github.com/neozenith/sqlite-muninn)**
## Features
- **HNSW Vector Index** — O(log N) approximate nearest neighbor search with incremental insert/delete
- **Graph Traversal** — BFS, DFS, shortest path, connected components, PageRank on any edge table
- **Centrality Measures** — Degree, betweenness (Brandes), and closeness centrality with weighted/temporal support
- **Community Detection** — Leiden algorithm for discovering graph communities with modularity scoring
- **Node2Vec** — Learn structural node embeddings from graph topology, store in HNSW for similarity search
- **Zero dependencies** — Pure C11, compiles to a single `.dylib`/`.so`/`.dll`
- **SIMD accelerated** — ARM NEON and x86 SSE distance functions
## Build
Requires SQLite development headers and a C11 compiler.
```bash
# macOS (Homebrew SQLite recommended)
brew install sqlite
make all
# Linux
sudo apt-get install libsqlite3-dev
make all
# Run tests
make test # C unit tests
make test-python # Python integration tests
make test-all # Both
```
## Quick Start
```sql
.load ./muninn
-- Create an HNSW vector index
CREATE VIRTUAL TABLE my_vectors USING hnsw_index(
dimensions=384, metric='cosine', m=16, ef_construction=200
);
-- Insert vectors
INSERT INTO my_vectors (rowid, vector) VALUES (1, ?); -- 384-dim float32 blob
-- KNN search
SELECT rowid, distance FROM my_vectors
WHERE vector MATCH ?query AND k = 10 AND ef_search = 64;
-- Graph traversal on any edge table
SELECT node, depth FROM graph_bfs
WHERE edge_table = 'friendships' AND src_col = 'user_a'
AND dst_col = 'user_b' AND start_node = 'alice' AND max_depth = 3
AND direction = 'both';
-- Connected components
SELECT node, component_id, component_size FROM graph_components
WHERE edge_table = 'friendships' AND src_col = 'user_a' AND dst_col = 'user_b';
-- PageRank
SELECT node, rank FROM graph_pagerank
WHERE edge_table = 'citations' AND src_col = 'citing' AND dst_col = 'cited'
AND damping = 0.85 AND iterations = 20;
-- Betweenness centrality (find bridge nodes)
SELECT node, centrality FROM graph_betweenness
WHERE edge_table = 'friendships' AND src_col = 'user_a' AND dst_col = 'user_b'
AND direction = 'both'
ORDER BY centrality DESC LIMIT 10;
-- Community detection (Leiden algorithm)
SELECT node, community_id, modularity FROM graph_leiden
WHERE edge_table = 'friendships' AND src_col = 'user_a' AND dst_col = 'user_b';
-- Learn structural embeddings from graph topology
SELECT node2vec_train(
'friendships', 'user_a', 'user_b', 'my_vectors',
64, 1.0, 1.0, 10, 80, 5, 5, 0.025, 5
);
```
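The `?` placeholders above expect raw `float32[dim]` blobs. From Python's built-in `sqlite3` module, such a blob can be packed with the standard `array` module (a minimal sketch; `384` matches the `dimensions=384` table above, and the bytes are native-endian float32):

```python
import array

DIM = 384  # must match the dimensions= argument of the virtual table

def to_blob(vector: list[float]) -> bytes:
    """Pack a list of floats into the float32[dim] blob the HNSW table expects."""
    if len(vector) != DIM:
        raise ValueError(f"expected {DIM} floats, got {len(vector)}")
    return array.array("f", vector).tobytes()  # 4 bytes per component

blob = to_blob([0.0] * DIM)
```

The resulting `blob` can be bound directly as a parameter, e.g. `conn.execute("INSERT INTO my_vectors (rowid, vector) VALUES (?, ?)", (1, blob))`.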
## Examples
Self-contained examples in the [`examples/`](examples/) directory:
| Example | Demonstrates |
|---------|-------------|
| [Semantic Search](examples/semantic_search/) | HNSW index, KNN queries, point lookup, delete |
| [Movie Recommendations](examples/movie_recommendations/) | Vector similarity for content-based recommendations |
| [Social Network](examples/social_network/) | Graph TVFs on a social graph (BFS, components, PageRank) |
| [Research Papers](examples/research_papers/) | Citation graph analysis with Node2Vec embeddings |
| [Transit Routes](examples/transit_routes/) | Shortest path and graph traversal on route networks |
```bash
make all
python examples/semantic_search/example.py
```
## API Reference
### HNSW Virtual Table (`hnsw_index`)
```sql
CREATE VIRTUAL TABLE name USING hnsw_index(
dimensions=N, -- vector dimensionality (required)
metric='l2', -- 'l2' | 'cosine' | 'inner_product'
m=16, -- max connections per node per layer
ef_construction=200 -- beam width during index construction
);
```
**Columns:**
| Column | Type | Hidden | Description |
|--------|------|--------|-------------|
| `rowid` | INTEGER | Yes | User-assigned ID for joining with application tables |
| `vector` | BLOB | No | `float32[dim]` — input for INSERT, MATCH constraint for search |
| `distance` | REAL | No | Computed distance (output only, during search) |
| `k` | INTEGER | Yes | Top-k parameter (search constraint) |
| `ef_search` | INTEGER | Yes | Search beam width (search constraint) |
**Operations:**
```sql
-- Insert
INSERT INTO t (rowid, vector) VALUES (42, ?blob);
-- KNN search
SELECT rowid, distance FROM t WHERE vector MATCH ?query AND k = 10;
-- Point lookup
SELECT vector FROM t WHERE rowid = 42;
-- Delete (with automatic neighbor reconnection)
DELETE FROM t WHERE rowid = 42;
-- Drop (removes index and all shadow tables)
DROP TABLE t;
```
**Shadow tables** (auto-managed):
- `{name}_config` — HNSW parameters
- `{name}_nodes` — stored vectors and level assignments
- `{name}_edges` — the proximity graph (usable by graph TVFs)
### Graph Table-Valued Functions
All graph TVFs work on **any** existing SQLite table with source/target columns. Table and column names are validated against SQL injection.
#### `graph_bfs` / `graph_dfs`
Breadth-first or depth-first traversal from a start node.
```sql
SELECT node, depth, parent FROM graph_bfs
WHERE edge_table = 'edges'
AND src_col = 'src'
AND dst_col = 'dst'
AND start_node = 'node-42'
AND max_depth = 5
AND direction = 'forward'; -- 'forward' | 'reverse' | 'both'
```
| Output Column | Type | Description |
|---------------|------|-------------|
| `node` | TEXT | Node identifier |
| `depth` | INTEGER | Hop distance from start |
| `parent` | TEXT | Parent node in traversal tree (NULL for start) |
#### `graph_shortest_path`
Unweighted (BFS) or weighted (Dijkstra) shortest path.
```sql
-- Unweighted
SELECT node, distance, path_order FROM graph_shortest_path
WHERE edge_table = 'edges' AND src_col = 'src' AND dst_col = 'dst'
AND start_node = 'A' AND end_node = 'Z' AND weight_col IS NULL;
-- Weighted (Dijkstra)
SELECT node, distance, path_order FROM graph_shortest_path
WHERE edge_table = 'edges' AND src_col = 'src' AND dst_col = 'dst'
AND start_node = 'A' AND end_node = 'Z' AND weight_col = 'weight';
```
| Output Column | Type | Description |
|---------------|------|-------------|
| `node` | TEXT | Node on the path |
| `distance` | REAL | Cumulative distance from start |
| `path_order` | INTEGER | Position in path (0-indexed) |
#### `graph_components`
Connected components via Union-Find with path compression.
```sql
SELECT node, component_id, component_size FROM graph_components
WHERE edge_table = 'edges' AND src_col = 'src' AND dst_col = 'dst';
```
| Output Column | Type | Description |
|---------------|------|-------------|
| `node` | TEXT | Node identifier |
| `component_id` | INTEGER | Component index (0-based) |
| `component_size` | INTEGER | Number of nodes in this component |
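The Union-Find structure named above can be sketched in a few lines of Python (illustrative only, not the extension's C implementation):

```python
def find(parent: list[int], x: int) -> int:
    # Path compression: point visited nodes at their grandparent on the way up.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent: list[int], a: int, b: int) -> None:
    # Merge the components containing a and b.
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

# Edges 0-1 and 2-3 give two components; adding 1-2 merges them.
parent = list(range(4))
union(parent, 0, 1)
union(parent, 2, 3)
assert find(parent, 0) != find(parent, 2)
union(parent, 1, 2)
assert find(parent, 0) == find(parent, 3)
```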
#### `graph_pagerank`
Iterative power method PageRank with configurable damping and iterations.
```sql
SELECT node, rank FROM graph_pagerank
WHERE edge_table = 'edges' AND src_col = 'src' AND dst_col = 'dst'
AND damping = 0.85 -- optional, default 0.85
AND iterations = 20; -- optional, default 20
```
| Output Column | Type | Description |
|---------------|------|-------------|
| `node` | TEXT | Node identifier |
| `rank` | REAL | PageRank score (sums to ~1.0) |
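The power method behind `graph_pagerank` can be sketched in pure Python (illustrative only; details such as the dangling-node handling below are assumptions, not a description of the extension's C code):

```python
def pagerank(edges, damping=0.85, iterations=20):
    """Plain power-method PageRank over a list of (src, dst) edges."""
    nodes = sorted({n for e in edges for n in e})
    out = {n: [d for s, d in edges if s == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # Every node keeps the (1 - d)/N teleport mass...
        nxt = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            # ...and spreads d * rank over its out-neighbors
            # (dangling nodes spread evenly over all nodes).
            targets = out[n] or nodes
            share = damping * rank[n] / len(targets)
            for t in targets:
                nxt[t] += share
        rank = nxt
    return rank

r = pagerank([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")])
```

As with the TVF, the scores sum to ~1.0; here `c` outranks `b` because it receives both of `b`'s mass and half of `a`'s.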
#### `graph_degree`
Degree centrality for all nodes.
```sql
SELECT node, in_degree, out_degree, degree, centrality FROM graph_degree
WHERE edge_table = 'edges' AND src_col = 'src' AND dst_col = 'dst';
```
| Output Column | Type | Description |
|---------------|------|-------------|
| `node` | TEXT | Node identifier |
| `in_degree` | REAL | Count (or weighted sum) of incoming edges |
| `out_degree` | REAL | Count (or weighted sum) of outgoing edges |
| `degree` | REAL | Total degree (in + out) |
| `centrality` | REAL | Normalized degree centrality |
Optional constraints: `weight_col`, `direction`, `normalized`, `timestamp_col`, `time_start`, `time_end`.
#### `graph_betweenness`
Betweenness centrality via Brandes' O(VE) algorithm.
```sql
SELECT node, centrality FROM graph_betweenness
WHERE edge_table = 'edges' AND src_col = 'src' AND dst_col = 'dst'
AND direction = 'both';
```
| Output Column | Type | Description |
|---------------|------|-------------|
| `node` | TEXT | Node identifier |
| `centrality` | REAL | Betweenness centrality score |
Optional constraints: `weight_col`, `direction`, `normalized`, `timestamp_col`, `time_start`, `time_end`.
#### `graph_closeness`
Closeness centrality with Wasserman-Faust normalization for disconnected graphs.
```sql
SELECT node, centrality FROM graph_closeness
WHERE edge_table = 'edges' AND src_col = 'src' AND dst_col = 'dst'
AND direction = 'both';
```
| Output Column | Type | Description |
|---------------|------|-------------|
| `node` | TEXT | Node identifier |
| `centrality` | REAL | Closeness centrality score |
Optional constraints: `weight_col`, `direction`, `timestamp_col`, `time_start`, `time_end`.
#### `graph_leiden`
Community detection via the Leiden algorithm (Traag et al., 2019).
```sql
SELECT node, community_id, modularity FROM graph_leiden
WHERE edge_table = 'edges' AND src_col = 'src' AND dst_col = 'dst';
```
| Output Column | Type | Description |
|---------------|------|-------------|
| `node` | TEXT | Node identifier |
| `community_id` | INTEGER | Community assignment (0-based) |
| `modularity` | REAL | Global modularity score of the partition |
Optional constraints: `weight_col`, `resolution` (default 1.0), `timestamp_col`, `time_start`, `time_end`.
### `node2vec_train()`
Learn vector embeddings from graph structure using biased random walks (Node2Vec) and Skip-gram with Negative Sampling (SGNS).
```sql
SELECT node2vec_train(
edge_table, -- name of edge table
src_col, -- source column name
dst_col, -- destination column name
output_table, -- HNSW table to store embeddings (must exist)
dimensions, -- embedding size (must match HNSW table)
p, -- return parameter (1.0 = uniform/DeepWalk)
q, -- in-out parameter (1.0 = uniform/DeepWalk)
num_walks, -- walks per node
walk_length, -- max steps per walk
window_size, -- SGNS context window
negative_samples, -- negative samples per positive
learning_rate, -- initial learning rate (decays linearly)
epochs -- training epochs
);
-- Returns: number of nodes embedded
```
**p, q parameter guide:**
| Setting | Walk Behavior | Best For |
|---------|--------------|----------|
| p=1, q=1 | Uniform (DeepWalk) | General structural similarity |
| Low p (0.25) | BFS-like, stays local | Community/cluster detection |
| Low q (0.5) | DFS-like, explores far | Structural role similarity |
## Benchmarks
The project includes a comprehensive benchmark suite comparing muninn against other SQLite extensions across real-world workloads.
**Vector search** benchmarks compare against [sqlite-vector](https://github.com/nicepkg/sqlite-vector), [sqlite-vec](https://github.com/asg017/sqlite-vec), and [vectorlite](https://github.com/nicepkg/vectorlite) using 3 embedding models (MiniLM, MPNet, BGE-Large) and 2 text datasets (AG News, Wealth of Nations) at scales up to 250K vectors.
**Graph traversal** benchmarks compare muninn TVFs against recursive CTEs and [GraphQLite](https://github.com/nicepkg/graphqlite) on synthetic graphs (Erdos-Renyi, Barabasi-Albert) at scales up to 100K nodes.
Results include interactive Plotly charts for insert throughput, search latency, recall, database size, and tipping-point analysis. See the [full benchmark results](https://neozenith.github.io/sqlite-muninn/benchmarks/) on the documentation site.
```bash
make -C benchmarks help # List all benchmark targets
make -C benchmarks analyze # Generate charts and reports from existing results
```
## Project Structure
```
src/ C11 source (extension entry point, HNSW, graph TVFs, Node2Vec)
test/ C unit tests (custom minimal framework)
pytests/ Python integration tests (pytest)
examples/ Self-contained usage examples
benchmarks/
scripts/ Benchmark runners and analysis scripts
charts/ Plotly JSON chart specs (committed for docs site)
results/ JSONL benchmark data (generated, not committed)
docs/ MkDocs documentation source
```
## Documentation
Full documentation is published at **[neozenith.github.io/sqlite-muninn](https://neozenith.github.io/sqlite-muninn/)** via MkDocs Material with interactive Plotly charts.
```bash
make docs-serve # Local dev server with live reload
make docs-build # Build static site
```
## Research References
| Feature | Paper |
|---------|-------|
| HNSW | Malkov & Yashunin, TPAMI 2020 |
| MN-RU insert repair | arXiv:2407.07871, 2024 |
| Patience early termination | SISAP 2025 |
| Betweenness centrality | Brandes, J. Math. Sociol. 2001 |
| Leiden community detection | Traag, Waltman & van Eck, Sci. Rep. 2019 |
| Node2Vec | Grover & Leskovec, KDD 2016 |
| SGNS | Mikolov et al., 2013 |
## License
MIT. See [LICENSE](LICENSE).
| text/markdown | null | null | null | null | null | sqlite, vector, hnsw, graph, node2vec, search | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: C",
"Programming Language :: Python :: 3",
"Topic :: Database",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | <3.14,>=3.12 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/user/sqlite-muninn",
"Repository, https://github.com/user/sqlite-muninn"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T22:30:21.012982 | sqlite_muninn-0.2.0-py3-none-macosx_11_0_universal2.whl | 119,638 | f8/30/a9210b578463e2b6f36e4848870fdd52a78a102e4b6eefd6d30dc311c593/sqlite_muninn-0.2.0-py3-none-macosx_11_0_universal2.whl | py3 | bdist_wheel | null | false | b76be1cbd66ec27fc1a5258b7f29f01a | 179af7d1549b71d7042e99561cd0f4a2fc74f96ac66a933d13452e7b748883fa | f830a9210b578463e2b6f36e4848870fdd52a78a102e4b6eefd6d30dc311c593 | MIT | [
"LICENSE"
] | 306 |
2.4 | erbsland-sphinx-ansi | 1.1.0 | A Sphinx extension to format ANSI colored/formated terminal output |
# ANSI-Colored Terminal Output for Sphinx
`erbsland-sphinx-ansi` is a lightweight Sphinx extension that renders ANSI-colored and formatted terminal output directly in your documentation.
It is useful for command-line tools, build logs, and interactive sessions where terminal colors improve readability.
## Installation
```shell
pip install erbsland-sphinx-ansi
```
## Quick Start
Enable the extension in `conf.py`:
```python
extensions = [
# ...
"erbsland.sphinx.ansi",
]
```
Use the `erbsland-ansi` directive:
```rst
.. erbsland-ansi::
:escape-char: ␛
␛[32m[sphinx-autobuild] ␛[36mStarting initial build␛[0m
␛[32m[sphinx-autobuild] ␛[34m> python -m sphinx build doc _build␛[0m
␛[32m[sphinx-autobuild] ␛[36mServing on http://127.0.0.1:9000␛[0m
␛[32m[sphinx-autobuild] ␛[36mWaiting to detect changes...␛[0m
```
`escape-char` is optional. If set, this character is replaced with the ANSI escape character (`\x1b`) when parsing the directive content. If omitted, provide real ANSI escape sequences directly.
## Output Behavior
- HTML output: ANSI sequences are converted into styled output.
- Non-HTML output: ANSI formatting is stripped, leaving plain text.
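For HTML output, the conversion amounts to mapping SGR escape sequences onto styled spans. A simplified sketch of the idea (the class names here are hypothetical, not the extension's actual CSS classes, and only a tiny subset of SGR codes is handled):

```python
import re

# Hypothetical mapping of a few SGR foreground codes to CSS classes.
SGR_CLASSES = {"31": "ansi-red", "32": "ansi-green", "34": "ansi-blue", "36": "ansi-cyan"}

def ansi_to_html(text: str) -> str:
    """Turn ESC[<n>m ... ESC[0m runs into <span class="..."> markup."""
    parts = re.split(r"\x1b\[([0-9;]*)m", text)
    out, open_span = [], False
    for i, part in enumerate(parts):
        if i % 2 == 0:                      # literal text between escapes
            out.append(part)
        else:                               # SGR parameter list
            if open_span:
                out.append("</span>")
                open_span = False
            cls = SGR_CLASSES.get(part)
            if cls:
                out.append(f'<span class="{cls}">')
                open_span = True
    if open_span:
        out.append("</span>")
    return "".join(out)

html = ansi_to_html("\x1b[32mok\x1b[0m done")
```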
## Requirements
- Python 3.10+
- Sphinx 8.0+ (required by the extension at runtime)
## Development
Install development dependencies:
```shell
pip install -r requirements-dev.txt
```
Run tests:
```shell
pytest
```
## Documentation
Project documentation is here: [https://sphinx-ansi.erbsland.dev](https://sphinx-ansi.erbsland.dev).
## License
Copyright (c) 2026 Tobias Erbsland / Erbsland DEV (<https://erbsland.dev>)
Licensed under the Apache License, Version 2.0.
See `LICENSE` for details.
| text/markdown | null | Tobias Erbsland / Erbsland DEV <info@erbsland.dev> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/erbsland-dev/erbsland-sphinx-ansi",
"Issues, https://github.com/erbsland-dev/erbsland-sphinx-ansi/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:30:19.924388 | erbsland_sphinx_ansi-1.1.0.tar.gz | 24,484 | c6/6b/b73db78f535ac2893f65f81c7709f53d1238f7129f07538f2d8557a531bc/erbsland_sphinx_ansi-1.1.0.tar.gz | source | sdist | null | false | 8067db3b3727d26c462d81d3f86cdcd9 | b7a357ba9222f811423ed3cf596d699f1546e9cc37a7ecab06ca2001348b4ab3 | c66bb73db78f535ac2893f65f81c7709f53d1238f7129f07538f2d8557a531bc | null | [
"LICENSE"
] | 501 |
2.4 | gxhash | 0.4.1 | Python bindings for GxHash | # gxhash-py
[](https://github.com/winstxnhdw/gxhash)
[](https://pypi.python.org/pypi/gxhash)

[](https://www.python.org/)
[](https://codecov.io/github/winstxnhdw/gxhash)
[](https://github.com/winstxnhdw/gxhash/actions/workflows/main.yml)
<p align="center">
<picture align="center">
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/wiki/winstxnhdw/gxhash/resources/throughput-cropped-dark.png" width=50%>
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/wiki/winstxnhdw/gxhash/resources/throughput-cropped-light.png" width=50%>
<img alt="Shows a bar chart with benchmark results." src="https://raw.githubusercontent.com/wiki/winstxnhdw/gxhash/resources/throughput-cropped-light.png">
</picture>
</p>
<p align="center">
<i>128-bit hash throughput (MiB/s)</i>
</p>
Python bindings for [GxHash](https://github.com/ogxd/gxhash), a blazingly fast and robust non-cryptographic hashing algorithm.
## Highlights
- [Fastest non-cryptographic hash algorithm](https://github.com/winstxnhdw/gxhash/blob/main/bench/README.md) of its class.
- Pure Rust backend with zero additional Python runtime overhead.
- Zero-copy data access across the FFI boundary via the [buffer protocol](https://docs.python.org/3/c-api/buffer.html).
- Support for [async hashing](https://github.com/winstxnhdw/gxhash/tree/main/bench#asynchronous-hashing) with multithreaded parallelism for non-blocking applications.
- Passes all [SMHasher](https://github.com/rurban/smhasher) tests and produces high-quality, hardware-accelerated 32/64/128-bit hashes.
- Guaranteed [stable hashes](https://github.com/ogxd/gxhash?tab=readme-ov-file#hashes-stability) across all supported platforms.
- Provides a [performant](https://github.com/winstxnhdw/gxhash/tree/main/bench#128-bit), drop-in replacement for the built-in [hashlib](https://docs.python.org/3/library/hashlib.html) module.
- SIMD-accelerated [hexdigest](https://docs.python.org/3/library/hashlib.html#hashlib.hash.hexdigest) encoding with SSSE3/NEON intrinsics.
- Fully-typed, clean API with uncompromising [strict-mode](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#diagnostic-settings-defaults) conformance across all major type checkers.
- Zero-dependency installations on all platforms supported by [maturin](https://github.com/PyO3/maturin) and [puccinialin](https://github.com/konstin/puccinialin).
## Installation
`gxhash` is available on [PyPI](https://pypi.python.org/pypi/gxhash) and can be installed via `pip`.
```bash
pip install gxhash
```
For the best throughput, you can allow `gxhash` to use wider registers by installing with the `MATURIN_PEP517_ARGS` environment variable.
> [!WARNING]\
> This is only possible on systems that support `VAES` and `AVX2` instruction sets. Running on unsupported hardware will result in an illegal instruction error at **runtime**.
```bash
MATURIN_PEP517_ARGS="--features hybrid" pip install gxhash
```
By default, `gxhash` attempts to detect and use your system's vectorisation features. You can manually control this by setting the specific `RUSTFLAGS` for your machine. For x64 systems, the minimum required features are `aes` and `ssse3`.
```bash
RUSTFLAGS="-C target-feature=+aes,+ssse3" pip install gxhash
```
For ARM64 systems, the minimum required features are `aes` and `neon`.
```bash
RUSTFLAGS="-C target-feature=+aes,+neon" pip install gxhash
```
## Supported Platforms
`gxhash` is well supported across a wide range of platforms, thanks in part to [maturin](https://github.com/PyO3/maturin), and more specifically [puccinialin](https://github.com/konstin/puccinialin). Therefore, `gxhash` supports [all platforms](https://www.maturin.rs/platform_support.html) that `maturin` and `puccinialin` support. `gxhash` is also actively tested on the following platforms:
- Ubuntu 24.04 x64
- Ubuntu 24.04 ARM64
- macOS 15 x64
- macOS 15 ARM64
- Windows Server 2025 x64
- Windows 11 ARM64
## Usage
Hashing bytes.
```python
from gxhash import GxHash32
def main() -> None:
gxhash = GxHash32(seed=0)
result = gxhash.hash(b"Hello, world!")
if __name__ == "__main__":
main()
```
Hashing bytes asynchronously.
```python
from asyncio import run
from gxhash import GxHash128
async def main() -> None:
gxhash = GxHash128(seed=0)
result = await gxhash.hash_async(b"Hello, world!")
if __name__ == "__main__":
run(main())
```
As a drop-in replacement for `hashlib`.
> [!WARNING]\
> [GxHash](https://github.com/ogxd/gxhash) is not an incremental hasher, and all inputs provided to the `update` method will be accumulated internally. This can lead to an unexpected increase in memory usage if you are expecting streaming behaviour.
> Also note that hash computation in `gxhash.hashlib` functions is deferred and only performed when `digest` or `hexdigest` is called.
```python
from gxhash.hashlib import gxhash128
def main() -> None:
hasher = gxhash128(data=b"Hello, world!", seed=0)
result = hasher.hexdigest()
if __name__ == "__main__":
main()
```
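The accumulating, deferred behaviour described in the warning above can be sketched in pure Python. This is an illustrative stand-in, not gxhash's implementation: stdlib `sha256` plays the role of the hash function, and the class name is hypothetical.

```python
import hashlib


class DeferredHasher:
    """Sketch of the semantics described above: update() only buffers
    input in memory, and the hash is computed once at digest time."""

    def __init__(self, data: bytes = b"") -> None:
        self._buffer = bytearray(data)

    def update(self, data: bytes) -> None:
        # Accumulates input internally -- no streaming computation here,
        # which is why memory usage grows with total input size.
        self._buffer.extend(data)

    def hexdigest(self) -> str:
        # Computation deferred until now (stdlib sha256 as a stand-in).
        return hashlib.sha256(bytes(self._buffer)).hexdigest()


# Two updates are equivalent to passing the concatenated bytes up front.
streamed = DeferredHasher()
streamed.update(b"Hello, ")
streamed.update(b"world!")
whole = DeferredHasher(b"Hello, world!")
assert streamed.hexdigest() == whole.hexdigest()
```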
## Contribute
Read the [CONTRIBUTING.md](https://github.com/winstxnhdw/gxhash/blob/main/CONTRIBUTING.md) docs for development setup and guidelines.
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | hash | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9"... | [] | https://github.com/winstxnhdw/gxhash | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/winstxnhdw/gxhash",
"Homepage, https://github.com/winstxnhdw/gxhash",
"Issues, https://github.com/winstxnhdw/gxhash/issues",
"Repository, https://github.com/winstxnhdw/gxhash"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T22:28:52.596988 | gxhash-0.4.1.tar.gz | 19,874 | bf/91/776b465da6563d8f510950399fb66a389530d4bb617f827357a75aa820a3/gxhash-0.4.1.tar.gz | source | sdist | null | false | 2588cc2143952891f5e5851b8a2af680 | 7d14b35d5180ce0466c67eb869674d4d7150ad2b557d64c3e885b8cd022a9e63 | bf91776b465da6563d8f510950399fb66a389530d4bb617f827357a75aa820a3 | MIT | [] | 170 |
2.4 | pydantic-gitlab-webhooks | 0.3.52 | Pydantic models for GitLab webhook payloads | # Pydantic models for GitLab Webhooks
[](https://pypi.org/p/pydantic-gitlab-webhooks/)

[](https://github.com/rjw57/pydantic-gitlab-webhooks/releases)
[](https://github.com/rjw57/pydantic-gitlab-webhooks/actions/workflows/main.yml?query=branch%3Amain)
Module containing Pydantic models for validating bodies from [GitLab webhook
requests](https://docs.gitlab.com/ee/user/project/integrations/webhook_events.html).
## Documentation
The project documentation including an API reference can be found at
[https://rjw57.github.io/pydantic-gitlab-webhooks/](https://rjw57.github.io/pydantic-gitlab-webhooks/).
## Usage example
Intended usage is via a single `validate_event_header_and_body_dict` function which
validates an incoming webhook's `X-Gitlab-Event` header and the request body after it
has been parsed into a Python dict.
```py
from pydantic import ValidationError
from pydantic_gitlab_webhooks import (
validate_event_body_dict,
validate_event_header_and_body_dict,
)
event_body = {
"object_kind": "access_token",
"group": {
"group_name": "Twitter",
"group_path": "twitter",
"group_id": 35,
"full_path": "twitter"
},
"object_attributes": {
"user_id": 90,
"created_at": "2024-01-24 16:27:40 UTC",
"id": 25,
"name": "acd",
"expires_at": "2024-01-26"
},
"event_name": "expiring_access_token"
}
# Use the value of the "X-Gitlab-Event" header and event body to validate
# the incoming event.
parsed_event = validate_event_header_and_body_dict(
"Resource Access Token Hook",
event_body
)
assert parsed_event.group.full_path == "twitter"
# Invalid event bodies or hook headers raise Pydantic validation errors
try:
validate_event_header_and_body_dict("invalid hook", event_body)
except ValidationError:
pass # ok - expected error raised
else:
assert False, "ValidationError was not raised"
# Event bodies can be parsed without the header hook if necessary although using
# the hook header is more efficient.
parsed_event = validate_event_body_dict(event_body)
assert parsed_event.group.full_path == "twitter"
# Event models may be imported individually. For example:
from pydantic_gitlab_webhooks.events import GroupAccessTokenEvent
parsed_event = GroupAccessTokenEvent.model_validate(event_body)
```
| text/markdown | Rich Wareham | rich.pydantic-gitlab-webhooks@richwareham.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"pydantic[email]<3.0.0,>=2.9.2",
"python-dateutil<3.0.0,>=2.9.0.post0"
] | [] | [] | [] | [
"Changelog, https://github.com/rjw57/pydantic-gitlab-webhooks/blob/main/CHANGELOG.md",
"Documentation, https://rjw57.github.io/pydantic-gitlab-webhooks",
"Homepage, https://github.com/rjw57/pydantic-gitlab-webhooks",
"Issues, https://github.com/rjw57/pydantic-gitlab-webhooks/issues",
"Repository, https://gi... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:28:29.387244 | pydantic_gitlab_webhooks-0.3.52.tar.gz | 8,993 | ba/88/7a9a3745b9675d77f0235717720457184ccc4b420b721c5bf8e4cd7870b9/pydantic_gitlab_webhooks-0.3.52.tar.gz | source | sdist | null | false | e85432de5d29a1233d1e2c1782bf5869 | 551762bdfaa8d715ca5029e6c5d24477ba03e45dff1d1b216febe2a53f24811d | ba887a9a3745b9675d77f0235717720457184ccc4b420b721c5bf8e4cd7870b9 | null | [
"LICENSE.txt"
] | 243 |
2.4 | crowdstrike-aidr-openai | 0.2.0 | A wrapper around the OpenAI Python library that wraps the Responses API with CrowdStrike AIDR | # CrowdStrike AIDR + OpenAI Python API library
A wrapper around the OpenAI Python library that wraps the [Responses API](https://platform.openai.com/docs/api-reference/responses)
with CrowdStrike AIDR. Supports Python v3.12 and greater.
## Installation
```bash
pip install -U crowdstrike-aidr-openai
```
## Usage
```python
import os
from crowdstrike_aidr_openai import CrowdStrikeOpenAI
client = CrowdStrikeOpenAI(
api_key=os.environ.get("OPENAI_API_KEY"),
# CrowdStrike AIDR options
crowdstrike_aidr_api_key=os.environ.get("CS_AIDR_API_TOKEN"),
crowdstrike_aidr_base_url_template=os.environ.get("CS_AIDR_BASE_URL_TEMPLATE"),
)
response = client.responses.create(
model="gpt-4o",
instructions="You are a coding assistant that talks like a pirate.",
input="How do I check if a Python object is an instance of a class?",
)
print(response.output_text)
```
## Microsoft Azure OpenAI
To use this library with [Azure OpenAI](https://learn.microsoft.com/azure/ai-services/openai/overview),
use the `CrowdStrikeAzureOpenAI` class instead of the `CrowdStrikeOpenAI` class.
```python
import os

from crowdstrike_aidr_openai import CrowdStrikeAzureOpenAI
client = CrowdStrikeAzureOpenAI(
# https://learn.microsoft.com/azure/ai-services/openai/reference#rest-api-versioning
api_version="2023-07-01-preview",
# https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource
azure_endpoint="https://example-endpoint.openai.azure.com",
# CrowdStrike AIDR options
crowdstrike_aidr_api_key=os.environ.get("CS_AIDR_API_TOKEN"),
crowdstrike_aidr_base_url_template=os.environ.get("CS_AIDR_BASE_URL_TEMPLATE"),
)
completion = client.chat.completions.create(
model="deployment-name", # e.g. gpt-35-instant
messages=[
{
"role": "user",
"content": "How do I output all files in a directory using Python?",
},
],
)
print(completion.to_json())
```
| text/markdown | CrowdStrike | CrowdStrike <support@crowdstrike.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"anyio~=4.12.0",
"crowdstrike-aidr~=0.5.0",
"httpx~=0.28.1",
"openai~=2.8.1",
"pydantic-core~=2.41.5",
"typing-extensions~=4.15.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:27:10.462031 | crowdstrike_aidr_openai-0.2.0.tar.gz | 12,345 | b5/cb/dc1dfe954636990c72eedb17b4028eccdcdc6ed3e355ab48805c9c899029/crowdstrike_aidr_openai-0.2.0.tar.gz | source | sdist | null | false | d56bb3c4d3d95b0ab1b22bce15b93104 | 01170607e6a08f2a07c9c88ca712ccff6ebe8a34be89d2e6157a24854cff6856 | b5cbdc1dfe954636990c72eedb17b4028eccdcdc6ed3e355ab48805c9c899029 | MIT | [] | 241 |
2.4 | tei-loop | 0.3.1 | Target, Evaluate, Improve: A self-improving loop for agentic systems | # TEI Loop
**Target, Evaluate, Improve** — a self-improving loop for agentic systems.
## Get Started
```bash
pip install tei-loop
```
```bash
python3 -m tei_loop your_agent.py
```
TEI does the rest:
- Finds your agent function automatically
- Generates a test query automatically
- Evaluates across 4 dimensions
- Applies targeted improvements
- Saves results to `tei-results/` folder (created next to your agent file)
> If `pip` is not found, use `pip3` instead.
## What TEI Creates
```
your_project/
your_agent.py
tei-results/ <-- created automatically
run_20260218_071500.json
latest.json
```
## Options
```bash
python3 -m tei_loop agent.py # Auto everything
python3 -m tei_loop agent.py --function my_func # Pick specific function
python3 -m tei_loop agent.py --query "custom input" # Custom test query
python3 -m tei_loop agent.py --retries 5 # More improvement cycles
python3 -m tei_loop agent.py --verbose # Detailed output
```
## Python API
```python
import asyncio
from tei_loop import TEILoop
def my_agent(query: str) -> str:
return f"Processed: {query}"  # replace with your agent logic
async def main():
loop = TEILoop(agent=my_agent)
result = await loop.run("test query")
print(result.summary())
asyncio.run(main())
```
## 4 Evaluation Dimensions
| Dimension | What it checks |
|---|---|
| **Target Alignment** | Did the agent pursue the correct objective? |
| **Reasoning Soundness** | Was the reasoning logical? |
| **Execution Accuracy** | Were tools called correctly? |
| **Output Integrity** | Is the output complete and accurate? |
## Two Modes
**Runtime** (default): Per-query improvement in seconds.
**Development**: Across many queries, permanent prompt improvements.
```python
results = await loop.develop(queries=[...], max_iterations=50)
```
## Works With Any Agent
Any Python callable. No framework lock-in: LangGraph, CrewAI, custom Python, FastAPI, anything.
## License
MIT
| text/markdown | null | Orkhan Javadli <ojavadli@gmail.com> | null | null | MIT | agents, evaluation, improvement, llm, agentic-systems, self-improving | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pydantic>=2.0",
"openai>=1.40; extra == \"openai\"",
"anthropic>=0.34; extra == \"anthropic\"",
"google-generativeai>=0.8; extra == \"google\"",
"openai>=1.40; extra == \"all\"",
"anthropic>=0.34; extra == \"all\"",
"google-generativeai>=0.8; extra == \"all\"",
"pytest>=8.0; extra == \"dev\"",
"pyt... | [] | [] | [] | [
"Homepage, https://github.com/ojavadli/tei-loop",
"Repository, https://github.com/ojavadli/tei-loop"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-18T22:26:51.509355 | tei_loop-0.3.1.tar.gz | 40,663 | 03/e7/c18e95a26dfd4aac6c698d82fffae47dc98ce185402e37adbaf28c027ce4/tei_loop-0.3.1.tar.gz | source | sdist | null | false | cf2dbef88da4c4523ba2bc9afda7fcae | 0a9cc647b6abd1b6a966b66104261ebab452b46a62366e0016b1f5772794052f | 03e7c18e95a26dfd4aac6c698d82fffae47dc98ce185402e37adbaf28c027ce4 | null | [
"LICENSE"
] | 233 |
2.4 | hypermedia | 6.0.0 | An opinionated way to work with html in pure python with htmx support. | # Hypermedia
Hypermedia is a pure python library for working with `HTML`. Hypermedia's killer feature is that it is composable through a `slot` concept. Because of that, it works great with `</> htmx` where you need to respond with both __partials__ and __full page__ html.
Hypermedia is made to work with `FastAPI` and `</> htmx`, but can be used by anything to create HTML.
___

___
**Documentation
[Stable](https://thomasborgen.github.io/hypermedia/) |
[Source Code](https://github.com/thomasborgen/hypermedia) |
[Task Tracker](https://github.com/thomasborgen/hypermedia/issues)**
## Features
* Build __HTML__ with python classes
* __Composable__ templates through a __slot__ system
* Seamless integration with __</> htmx__
* Fully typed and __Autocompletion__ for html/htmx attributes and styles
* Opinionated simple decorator for __FastAPI__
* Full typing throughout, unlike template engines such as Jinja2, since you never leave Python land.
## The Basics
All html tags can be imported directly like:
```python
from hypermedia import Html, Body, Div, A
```
Tags are nested by adding children in the constructor:
```python
from hypermedia import Html, Body, Div
Html(Body(Div(), Div()))
```
Add text to your tag:
```python
from hypermedia import Div
Div("Hello world!")
```
Use `.dump()` to render your elements to HTML:
```python
from hypermedia import Bold, Div
Div("Hello ", Bold("world!")).dump()
```
output
```html
<div>Hello <b>world!</b></div>
```
## Composability with slots
```python
from hypermedia import Html, Body, Div, Menu, H1, Div, Ul, Li
base = Html(
Body(
Menu(slot="menu"),
Div(slot="content"),
),
)
menu = Ul(Li("home"))
content = Div("Some content")
base.extend("menu", menu)
base.extend("content", content)
base.dump()
```
output
```html
<html>
<body>
<menu>
<ul><li>home</li></ul>
</menu>
<div>
<div>Some content</div>
</div>
</body>
</html>
```
# Documentation
Head over to our [documentation](https://thomasborgen.github.io/hypermedia)
If you are using this for HTMX, then please read the [HTMX section](https://thomasborgen.github.io/hypermedia/htmx/) of the documentation
| text/markdown | null | Thomas Borgen <thomas@borgenit.no> | null | null | MIT | Extendable, FastAPI, HTML, HTML Templating, HTMX, Partial HTML | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"mkdocs-material>=9.7.1",
"typing-extensions>=4.12.2",
"fastapi>=0.128.5; extra == \"fastapi\""
] | [] | [] | [] | [
"Homepage, https://github.com/thomasborgen/hypermedia",
"Documentation, https://github.com/thomasborgen/hypermedia",
"Repository, https://github.com/thomasborgen/hypermedia",
"Changelog, https://github.com/thomasborgen/hypermedia/blob/main/CHANGELOG.md"
] | uv/0.7.8 | 2026-02-18T22:26:49.798755 | hypermedia-6.0.0.tar.gz | 133,281 | 3c/39/0a531e4bca5897900376987f7b04671b05a08665d2ba39d2cb65a05e6785/hypermedia-6.0.0.tar.gz | source | sdist | null | false | 0e1256bbe4a58008f0c538a0ebad1cc5 | 2301cfa33f2c952da80ed4d61c85b84a959fa16c0e1f959c0765f83515b7f473 | 3c390a531e4bca5897900376987f7b04671b05a08665d2ba39d2cb65a05e6785 | null | [
"LICENSE"
] | 236 |
2.4 | srdedupe | 0.1.1 | Safe, FPR-controlled, reproducible deduplication pipeline for bibliographic reference files, designed for systematic review workflows. | # srdedupe — Safe Bibliographic Deduplication
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://github.com/enniolopes/srdedupe/actions)
[](https://codecov.io/github/enniolopes/srdedupe)
Safe, reproducible deduplication for systematic reviews and bibliographic databases.
Parses and deduplicates bibliographic reference files (RIS, NBIB, BibTeX, WoS, EndNote) with FPR-controlled decision making, full audit trails, and deterministic outputs.
## Installation
```bash
pip install srdedupe
```
## Quick Start
### Parse and export
```python
from srdedupe import parse_file, parse_folder, write_jsonl
# Single file (format auto-detected)
records = parse_file("references.ris")
# Entire folder
records = parse_folder("data/", recursive=True)
# Export to JSONL
write_jsonl(records, "output.jsonl")
```
### Deduplicate
```python
from srdedupe import dedupe
result = dedupe("references.ris", output_dir="out", fpr_alpha=0.01)
print(f"Records: {result.total_records}")
print(f"Auto-merged clusters: {result.total_duplicates_auto}")
print(f"Review records: {result.total_review_records}")
print(f"Unique records: {result.total_unique_records}")
print(f"Dedup rate: {result.dedup_rate:.1%}")
print(f"Output: {result.output_files['deduplicated_ris']}")
```
### CLI
```bash
# Parse to JSONL
srdedupe parse references.ris -o output.jsonl
srdedupe parse data/ -o records.jsonl --recursive
# Full deduplication pipeline
srdedupe deduplicate references.ris
srdedupe deduplicate data/ -o results --fpr-alpha 0.005 --verbose
```
## How It Works
A 6-stage pipeline controlled by false positive rate (FPR):
1. **Parse & Normalize** — Multi-format ingestion, field normalization
2. **Candidate Generation** — High-recall blocking (DOI, PMID, year+title, LSH)
3. **Probabilistic Scoring** — Fellegi-Sunter model with field-level comparisons
4. **Three-Way Decision** — AUTO_DUP / REVIEW / AUTO_KEEP with Neyman-Pearson FPR control
5. **Global Clustering** — Connected components with anti-transitivity checks
6. **Canonical Merge** — Deterministic survivor selection and field merging
Pairs classified as REVIEW are preserved in output artifacts for manual inspection.
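The scoring and three-way decision in stages 3–4 can be sketched in a few lines. The m/u probabilities and thresholds below are illustrative values for exposition, not srdedupe's fitted parameters or API:

```python
import math

# Hypothetical per-field m/u probabilities: m = P(agree | duplicate),
# u = P(agree | non-duplicate). Real values are estimated from data.
FIELDS = {
    "doi":   {"m": 0.95, "u": 0.001},
    "title": {"m": 0.90, "u": 0.01},
}


def fs_log_weight(field: str, agrees: bool) -> float:
    """Fellegi-Sunter log2 agreement/disagreement weight for one field."""
    m, u = FIELDS[field]["m"], FIELDS[field]["u"]
    return math.log2(m / u) if agrees else math.log2((1 - m) / (1 - u))


def classify(agreements: dict, t_low: float, t_high: float) -> str:
    """Three-way decision: sum the field weights, compare to thresholds."""
    score = sum(fs_log_weight(f, a) for f, a in agreements.items())
    if score >= t_high:
        return "AUTO_DUP"
    if score <= t_low:
        return "AUTO_KEEP"
    return "REVIEW"


# Both fields agree: strong evidence, merged automatically.
print(classify({"doi": True, "title": True}, t_low=0.0, t_high=10.0))   # AUTO_DUP
# Mixed evidence lands in the manual-review band.
print(classify({"doi": False, "title": True}, t_low=0.0, t_high=10.0))  # REVIEW
```

In the real pipeline, `t_high` is derived from the target false positive rate (`fpr_alpha`) rather than chosen by hand.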
## API Reference
`parse_file(path, *, strict=True) -> list[CanonicalRecord]`
- Parse a single bibliographic file. Format is auto-detected from file content.
`parse_folder(path, *, pattern=None, recursive=False, strict=False) -> list[CanonicalRecord]`
- Parse all supported files in a folder. Optional glob `pattern` (e.g. `"*.ris"`).
`write_jsonl(records, path, *, sort_keys=True) -> None`
- Write records to JSONL file with deterministic field ordering.
`dedupe(input_path, *, output_dir="out", fpr_alpha=0.01, t_low=0.3, t_high=None) -> PipelineResult`
Run the full deduplication pipeline. Returns a `PipelineResult` with:
- `success`, `total_records`, `total_candidates`, `total_duplicates_auto`, `total_review_records`, `total_unique_records`, `dedup_rate`
- `output_files` — dict mapping artifact names to file paths
- `error_message` — error details if `success` is False
### Advanced: `PipelineConfig` + `run_pipeline`
For full control (custom blockers, FS model path, audit logger):
```python
from pathlib import Path
from srdedupe.engine import PipelineConfig, run_pipeline
config = PipelineConfig(
fpr_alpha=0.01,
t_low=0.3,
t_high=None,
candidate_blockers=["doi", "pmid", "year_title"],
output_dir=Path("out"),
)
result = run_pipeline(input_path=Path("references.ris"), config=config)
```
## Supported Formats
| Format | Extensions |
|--------|-----------|
| RIS | `.ris` |
| PubMed/NBIB | `.nbib`, `.txt` |
| BibTeX | `.bib` |
| Web of Science | `.ciw` |
| EndNote Tagged | `.enw` |
## Pipeline Output Structure
```
out/
├── stage1/canonical_records.jsonl
├── stage2/candidate_pairs.jsonl
├── stage3/scored_pairs.jsonl
├── stage4/pair_decisions.jsonl
├── stage5/clusters.jsonl
├── artifacts/
│ ├── deduped_auto.ris
│ ├── merged_records.jsonl
│ ├── clusters_enriched.jsonl
│ ├── review_pending.ris (if review pairs exist)
│ └── singletons.ris (if singletons exist)
└── reports/
├── ingestion_report.json (folder input only)
└── merge_summary.json
```
## Development
```bash
make dev # Install dependencies + pre-commit hooks
make test-fast # Quick validation while coding
make check # Lint + type check + format (before committing)
make test # Full test suite (417 tests, ≥80% coverage)
```
## Documentation
- [CONTRIBUTING.md](CONTRIBUTING.md) — Code style, testing, contribution guidelines
## License
MIT — see [LICENSE](LICENSE).
## Citation
```bibtex
@software{srdedupe2026,
author = {Lopes, Ennio Politi},
title = {srdedupe: Safe Bibliographic Deduplication},
year = {2026},
url = {https://github.com/enniolopes/srdedupe}
}
```
| text/markdown | Ennio Politi Lopes | enniolopes@gmail.com | Ennio Politi Lopes | enniolopes@gmail.com | MIT | deduplication, systematic-review, bibliographic-references, research, scientific, reproducible-research | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Information Analysis",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"P... | [] | null | null | >=3.11 | [] | [] | [] | [
"build>=1.0; extra == \"dev\"",
"click>=8.1",
"datasketch>=1.6.0",
"jsonschema>=4.26; extra == \"dev\"",
"mypy>=1.8; extra == \"dev\"",
"pre-commit>=3.0; extra == \"dev\"",
"pyright>=1.1; extra == \"type-check\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"ruff>=0.2; ex... | [] | [] | [] | [
"Changelog, https://github.com/enniolopes/srdedupe/blob/main/CHANGELOG.md",
"Documentation, https://github.com/enniolopes/srdedupe#readme",
"Homepage, https://github.com/enniolopes/srdedupe",
"Issues, https://github.com/enniolopes/srdedupe/issues",
"Repository, https://github.com/enniolopes/srdedupe"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:26:15.940436 | srdedupe-0.1.1.tar.gz | 86,303 | 18/ab/ebe2c274f8943473b141e53dbba0915ce2e307868fced317e2b1ea040ca8/srdedupe-0.1.1.tar.gz | source | sdist | null | false | 68fc5b0bab39faa235180e71d2d039cb | e209890e5246c5843271f07e97e2128ee0121075b55311836fe5aa5e58e9c81d | 18abebe2c274f8943473b141e53dbba0915ce2e307868fced317e2b1ea040ca8 | null | [
"LICENSE"
] | 227 |
2.4 | google-parallax | 0.0.1 | A library for autoscaling JAX models. | # Parallax
**Parallax** is a library for automatically scaling JAX models. It simplifies
the process of training large models by offering automatic parallelism
strategies and memory optimizations, allowing you to focus on your model
architecture rather than sharding configurations.
**Parallax helps you:**
* **Automatically shard** your JAX models and functions without manually
defining `PartitionSpec`s.
* **Apply advanced parallelism** strategies like Fully Sharded Data Parallel
(FSDP) and Distributed Data Parallel (DDP) with ease.
* **Optimize memory usage** through model offloading (keeping weights in CPU
RAM) and rematerialization.
With Parallax, you can scale off-the-shelf JAX models to run on larger hardware
configurations or fit larger models on existing hardware without extensive code
modifications.
> This is not an officially supported Google product. This project is not
> eligible for the
> [Google Open Source Software Vulnerability Rewards Program](https://bughunters.google.com/open-source-security).
## Installation
You can install Parallax using pip:
```bash
pip install google-parallax
```
## Usage
Parallax integrates seamlessly with [Flax
NNX](https://flax.readthedocs.io/en/latest/index.html) models. Here is a simple
example of how to use auto-sharding:
```python
import parallax
from flax import nnx
import jax
import jax.numpy as jnp
model = parallax.create_sharded_model(
model_or_fn=lambda: Model(...),
sample_inputs=(jnp.ones((1, 32)),),
strategy=parallax.ShardingStrategy.AUTO,
data_axis_name='fsdp',
model_axis_name='tp',
)
```
## Features
* **AutoSharding**: Automatically discover optimal sharding strategies.
* **FSDP & DDP**: Ready-to-use implementations of common parallel training
strategies.
* **Model Offloading**: Stream model weights from CPU to device memory to
train larger models.
* **Rematerialization**: Automatic activation recomputation to save memory.
## Contributing
We welcome contributions! Please check
[`docs/contributing.md`](docs/contributing.md) for details on how to submit
pull requests and report bugs.
## Support
If you encounter any issues, please report them on our [GitHub
Issues](https://github.com/google/parallax/issues) page.
| text/markdown | null | Jeff Carpenter <code@jeffcarp.com>, Stella Yan <sueyan@gmail.com> | null | null |
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"flax",
"jax",
"absl-py; extra == \"dev\"",
"numpy; extra == \"dev\"",
"optax; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/google/parallax",
"Issues, https://github.com/google/parallax/issues"
] | twine/6.1.0 CPython/3.11.6 | 2026-02-18T22:24:46.977472 | google_parallax-0.0.1.tar.gz | 6,804 | 0d/cc/f63a97966fc493fb9d8ba8c4276e269a7d2d9f2869abff13a611c8561136/google_parallax-0.0.1.tar.gz | source | sdist | null | false | 7cd8bda2464df0a4bfce3ad6525057ca | 4830fb9a132421a162637cc1ee29f99767292b5d0037fbfd7c3aabc84c911ef6 | 0dccf63a97966fc493fb9d8ba8c4276e269a7d2d9f2869abff13a611c8561136 | null | [
"LICENSE"
] | 246 |
2.4 | marksync | 0.2.17 | Multi-agent collaborative editing and deployment of Markpact projects via CRDT delta sync | # marksync
Multi-agent collaborative editing and deployment of [Markpact](https://github.com/wronai/markpact) projects via CRDT delta sync.
```
pip install marksync[all]
```
## What it does
Multiple AI agents work simultaneously on a single Markpact `README.md` — editing code, reviewing quality, deploying changes — all synchronized in real-time through delta patches (only changed code blocks are transmitted, not the entire file).
A built-in **DSL** (Domain-Specific Language) lets you orchestrate agents, define pipelines, and control the entire architecture from an interactive shell or via REST/WebSocket API.
```
┌─────────────────────────────────────────────────────────────┐
│ marksync Runtime │
│ │
│ ┌────────────┐ ┌──────────────────────────────────────┐ │
│ │ DSL Shell │──►│ DSL Executor │ │
│ │ (CLI REPL) │ │ agents · pipelines · routes · config│ │
│ └────────────┘ └──────────┬───────────────────────────┘ │
│ ┌────────────┐ │ spawns / controls │
│ │ REST API │──► ▼ │
│ │ port 8080 │ ┌──────────────────────────────────────┐ │
│ └────────────┘ │ Agent Workers │ │
│ ┌────────────┐ │ editor · reviewer · deployer · mon │ │
│ │ WS API │──►└──────────┬───────────────────────────┘ │
│ │ /ws/dsl │ │ │
│ └────────────┘ ▼ │
│ ┌──────────────────────────────────────┐ │
│ │ SyncServer (WebSocket:8765) │ │
│ │ CRDT doc · delta patches · persist │ │
│ └──────────────┬───────────────────────┘ │
│ ▼ │
│ README.md (disk) │
└─────────────────────────────────────────────────────────────┘
```
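Delta sync operates at the level of fenced code blocks rather than whole files. As a rough illustration (hypothetical — the real parsing lives in `marksync.sync.BlockParser`), splitting a Markdown document into fingerprinted blocks can be sketched like this:

```python
import hashlib
import re

# Hypothetical sketch: extract fenced code blocks from Markdown and
# fingerprint each one, so only changed blocks need to be transmitted.
FENCE_RE = re.compile(r"```(\w*)\n(.*?)```", re.DOTALL)


def extract_blocks(markdown: str) -> dict[str, str]:
    """Map a stable block ID (language + index) to the block's content."""
    blocks = {}
    for i, match in enumerate(FENCE_RE.finditer(markdown)):
        lang, body = match.group(1) or "text", match.group(2)
        blocks[f"{lang}-{i}"] = body
    return blocks


def manifest(blocks: dict[str, str]) -> dict[str, str]:
    """SHA-256 per block, like the manifest the server sends on connect."""
    return {bid: hashlib.sha256(body.encode()).hexdigest()
            for bid, body in blocks.items()}


doc = "# demo\n```python\nprint('hi')\n```\n"
print(manifest(extract_blocks(doc)))
```

With per-block hashes on both sides, a client only needs to send patches for blocks whose hash differs from the server's manifest.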
## Prerequisites
**Ollama** running locally on your host (not inside Docker):
```bash
# Install ollama: https://ollama.ai
ollama pull qwen2.5-coder:7b
ollama serve # keep running
```
## Quick Start — Docker Compose
```bash
git clone https://github.com/wronai/marksync.git
cd marksync
docker compose up --build
```
This starts 4 services (not 1-per-agent — all agents run in a single orchestrator):
| Container | Role | What it does |
|-----------|------|-------------|
| `sync-server` | Hub | WebSocket server, persists README.md, broadcasts changes |
| `orchestrator` | Agents | Reads `agents.yml`, spawns all agents in 1 process |
| `api-server` | API | REST/WS API for remote DSL control |
| `init-project` | Seed | Copies `examples/1/README.md` into shared volume |
Agent definitions live in `agents.yml` — define once, use everywhere:
```yaml
# agents.yml
agents:
editor-1: { role: editor, auto_edit: true }
reviewer-1: { role: reviewer }
deployer-1: { role: deployer }
monitor-1: { role: monitor }
pipelines:
review-flow: { stages: [editor-1, reviewer-1] }
```
Then push changes from your host:
```bash
pip install -e .
marksync push README.md
marksync blocks examples/1/README.md
```
## Quick Start — Without Docker
```bash
pip install -e .
# Terminal 1: Start sync server
marksync server examples/1/README.md
# Terminal 2: Start all agents from agents.yml
marksync orchestrate -c agents.yml
# Terminal 3: Web sandbox (edit & test in browser)
marksync sandbox
# Open http://localhost:8888
```
Or start agents individually:
```bash
marksync agent --role editor --name editor-1
```
## DSL — Agent Orchestration
The marksync DSL lets you control agents, pipelines, and architecture from a shell or API.
### Interactive Shell
```bash
marksync shell
```
```
marksync> AGENT coder editor --model qwen2.5-coder:7b --auto-edit
marksync> AGENT reviewer-1 reviewer
marksync> AGENT watcher monitor
marksync> PIPE review-flow coder -> reviewer-1 -> deployer-1
marksync> ROUTE markpact:run -> deployer-1
marksync> LIST agents
marksync> STATUS
```
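A `PIPE` command chains agents with `->`. As a hypothetical sketch of how such a line might be parsed (the real grammar lives in `marksync/dsl/parser.py`):

```python
# Hypothetical sketch of parsing a DSL PIPE command such as:
#   PIPE review-flow coder -> reviewer-1 -> deployer-1
def parse_pipe(line: str) -> tuple[str, list[str]]:
    keyword, name, rest = line.split(maxsplit=2)
    if keyword != "PIPE":
        raise ValueError(f"not a PIPE command: {line!r}")
    # Stage names are separated by "->", with optional surrounding spaces.
    stages = [s.strip() for s in rest.split("->")]
    return name, stages


name, stages = parse_pipe("PIPE review-flow coder -> reviewer-1 -> deployer-1")
print(name, stages)  # review-flow ['coder', 'reviewer-1', 'deployer-1']
```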
### Orchestrate from agents.yml
```bash
# Dry-run — preview the plan
marksync orchestrate --dry-run
# Run all agents
marksync orchestrate -c agents.yml
# Run only editors
marksync orchestrate --role editor
```
### REST / WebSocket API
```bash
# Start the API server
marksync api --port 8080
```
```bash
# Execute DSL commands via REST
curl -X POST http://localhost:8080/api/v1/execute \
-H "Content-Type: application/json" \
-d '{"command": "AGENT coder editor --auto-edit"}'
# List agents
curl http://localhost:8080/api/v1/agents
# WebSocket: ws://localhost:8080/ws/dsl
# Swagger docs: http://localhost:8080/docs
```
See [docs/dsl-reference.md](docs/dsl-reference.md) and [docs/api.md](docs/api.md) for full reference.
## CLI Reference
```
marksync server README.md [--host 0.0.0.0] [--port 8765]
marksync agent --role {editor|reviewer|deployer|monitor} --name NAME
marksync orchestrate [-c agents.yml] [--role ROLE] [--dry-run] [--export-dsl FILE]
marksync push README.md [--server-uri ws://...]
marksync blocks README.md
marksync shell [--script FILE]
marksync api [--host 0.0.0.0] [--port 8080]
marksync sandbox [--port 8888]
```
All commands read defaults from `.env` — no need to pass `--server-uri`, `--ollama-url` etc.
## Agent Roles
### Editor (`--role editor`)
Receives block updates, sends code to Ollama for improvement (error handling, type hints, docstrings). Use `--auto-edit` to automatically push improvements back to server.
### Reviewer (`--role reviewer`)
Analyzes every changed block for bugs, security issues, and best practices. Results are logged; the reviewer does not modify code.
### Deployer (`--role deployer`)
Watches for changes to `markpact:run` and `markpact:deps` blocks. Triggers `markpact README.md --run` to rebuild and redeploy the application.
### Monitor (`--role monitor`)
Logs every block change with block ID, content size, and SHA-256 hash. Useful for audit trails and debugging.
## Python API
```python
from marksync import SyncServer, SyncClient, AgentWorker
from marksync.agents import AgentConfig
# Start server programmatically
server = SyncServer(readme="README.md", port=8765)
await server.run()
# Push changes
client = SyncClient(readme="README.md", uri="ws://localhost:8765")
patches, saved = await client.push_changes()
# Start an agent
config = AgentConfig(
name="my-agent", role="reviewer",
server_uri="ws://localhost:8765",
ollama_url="http://localhost:11434",
ollama_model="qwen2.5-coder:7b",
)
agent = AgentWorker(config)
await agent.run()
# DSL — programmatic orchestration
from marksync.dsl import DSLExecutor
executor = DSLExecutor(server_uri="ws://localhost:8765")
await executor.execute("AGENT coder editor --auto-edit")
await executor.execute("PIPE review coder -> reviewer")
print(executor.snapshot())
```
## Sync Protocol
Communication uses JSON messages over WebSocket:
| Direction | Type | Description |
|-----------|------|-------------|
| S→C | `manifest` | `{block_id: sha256}` map on connect |
| C→S | `patch` | diff-match-patch delta for one block |
| C→S | `full` | Full block content (fallback) |
| S→C | `ack` | Confirmation with sequence number |
| S→C | `nack` | Patch failed, hash mismatch |
| S→C | `patch`/`full` | Broadcast to other clients |
| C→S | `get_snapshot` | Request full README markdown |
| S→C | `snapshot` | Full README content |
Delta strategy: if patch < 80% of full content → send patch. Otherwise send full block. SHA-256 hash verification on every apply.
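The patch-vs-full decision can be sketched in a few lines (a simplified illustration with a hypothetical wire format; the real client produces deltas with diff-match-patch):

```python
import hashlib


def make_message(block_id: str, new_content: str, patch_text: str) -> dict:
    """Mirror the delta strategy above: send the patch only when it is
    smaller than 80% of the full block, otherwise fall back to `full`.
    `patch_text` would come from diff-match-patch in the real client."""
    msg_type = "patch" if len(patch_text) < 0.8 * len(new_content) else "full"
    return {
        "type": msg_type,
        "block_id": block_id,
        "payload": patch_text if msg_type == "patch" else new_content,
        # SHA-256 of the expected result, verified on every apply.
        "sha256": hashlib.sha256(new_content.encode()).hexdigest(),
    }


big_block = "print('hello')\n" * 50
msg = make_message("python-0", big_block, patch_text="@@ tiny delta @@")
print(msg["type"])  # patch
```

For small blocks a patch can easily exceed 80% of the content, in which case the full block is cheaper to send and simpler to apply.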
## Project Structure
```
marksync/
├── .env # Centralized config (ports, hosts, model)
├── agents.yml # Agent definitions (single source of truth)
├── pyproject.toml # Package config (pip install .)
├── Dockerfile # Single image for server + agents
├── docker-compose.yml # 4 services (server, orchestrator, api, init)
├── docs/
│ ├── architecture.md # System design & data flow
│ ├── dsl-reference.md # DSL command reference
│ ├── api.md # REST & WebSocket API docs
│ └── 1/README.md # Example 1 usage guide
├── examples/
│ ├── 1/ # Task Manager API
│ ├── 2/ # Chat WebSocket App
│ └── 3/ # Data Pipeline CLI
├── marksync/
│ ├── __init__.py # Package exports
│ ├── cli.py # Click CLI (server, agent, orchestrate, sandbox, ...)
│ ├── settings.py # Centralized config from .env
│ ├── orchestrator.py # Reads agents.yml, spawns agents
│ ├── dsl/
│ │ ├── parser.py # DSLParser, DSLCommand, CommandType
│ │ ├── executor.py # DSLExecutor, AgentHandle, Pipeline, Route
│ │ ├── shell.py # Interactive REPL (DSLShell)
│ │ └── api.py # FastAPI REST + WebSocket endpoints
│ ├── sync/
│ │ ├── __init__.py # BlockParser, MarkpactBlock
│ │ ├── crdt.py # CRDTDocument (pycrdt/Yjs)
│ │ └── engine.py # SyncServer, SyncClient
│ ├── agents/
│ │ └── __init__.py # AgentWorker, AgentConfig, OllamaClient
│ ├── sandbox/
│ │ └── app.py # Web sandbox UI (edit, test, orchestrate)
│ └── transport/
│ └── __init__.py # MQTT/gRPC extension point
└── tests/
├── test_dsl.py # DSL parser & executor (36 tests)
├── test_examples.py # Example block parsing (21 tests)
├── test_orchestrator.py # Orchestrator & agents.yml (24 tests)
└── test_settings.py # Settings & .env loading (13 tests)
```
## Configuration
All config lives in two files:
| File | Purpose |
|------|----------|
| `.env` | Ports, hosts, model, log level |
| `agents.yml` | Agent definitions, pipelines, routes |
### Environment Variables (`.env`)
| Variable | Default | Description |
|----------|---------|-------------|
| `MARKSYNC_PORT` | `8765` | Sync server port |
| `MARKSYNC_SERVER` | `ws://localhost:8765` | Sync server URI |
| `MARKSYNC_API_PORT` | `8080` | DSL API port |
| `OLLAMA_URL` | `http://localhost:11434` | Ollama API endpoint |
| `OLLAMA_MODEL` | `qwen2.5-coder:7b` | LLM model for agents |
| `MARKPACT_PORT` | `8088` | Markpact app port |
| `LOG_LEVEL` | `INFO` | Logging level |
## Integration with Markpact
marksync is designed to work with [markpact](https://github.com/wronai/markpact):
```bash
# Install both
pip install markpact marksync[all]
# Edit collaboratively with marksync
marksync server README.md
# Deploy with markpact
markpact README.md --run
```
The deployer agent can automatically trigger `markpact` rebuilds when code blocks change.
## Documentation
- [Architecture](docs/architecture.md) — system design, layers, data flow
- [DSL Reference](docs/dsl-reference.md) — full command reference for the orchestration DSL
- [API Reference](docs/api.md) — REST & WebSocket API documentation
- [Example 1 Guide](docs/1/README.md) — Task Manager API walkthrough
- [Changelog](CHANGELOG.md) — version history
- [TODO](TODO.md) — roadmap and planned features
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
## Author
Created by **Tom Sapletta** - [tom@sapletta.com](mailto:tom@sapletta.com)
| text/markdown | null | Tom Sapletta <tom@sapletta.com> | null | null | null | agents, collaborative, crdt, markdown, markpact, sync | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.1",
"diff-match-patch>=20241021",
"fastapi>=0.115",
"httpx>=0.27",
"markdown-it-py>=3.0",
"markpact>=0.1.15",
"nfo>=0.2.13",
"pycrdt-websocket>=0.16",
"pycrdt>=0.12",
"pyyaml>=6.0",
"rich>=13.0",
"uvicorn>=0.30",
"watchdog>=6.0",
"websockets>=16.0",
"litellm>=1.40; extra == \"a... | [] | [] | [] | [
"Homepage, https://github.com/wronai/marksync",
"Repository, https://github.com/wronai/marksync"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-18T22:24:34.960716 | marksync-0.2.17.tar.gz | 258,569 | ca/ec/3e37ce3a8b5919b8e6b353434da5455202be9c176192a2d7fdafc6a3797a/marksync-0.2.17.tar.gz | source | sdist | null | false | a94eed630e6b54e4f5fdb5b9a4efbbc2 | b3ec7a9aa853b999410a582c7cb3cd8cef2050cf4180ec26da0150a16983f831 | caec3e37ce3a8b5919b8e6b353434da5455202be9c176192a2d7fdafc6a3797a | Apache-2.0 | [
"LICENSE"
] | 235 |
2.4 | posthoganalytics | 7.9.3 | Integrate PostHog into any python application. | # posthoganalytics
> **Do not use this package.** Use [`posthog`](https://pypi.org/project/posthog/) instead.
```bash
pip install posthog
```
This package exists solely for internal use by [posthog/posthog](https://github.com/posthog/posthog) to avoid import conflicts with the local `posthog` package in that repository. It is an automatically generated mirror of `posthog` — same code, same versions, just published under a different name.
If you are not working on the PostHog main repository, you should never need this package. All documentation, issues, and development happen in [`posthog-python`](https://github.com/posthog/posthog-python).
| text/markdown | Posthog | PostHog <hey@posthog.com> | PostHog | PostHog <hey@posthog.com> | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language ... | [] | https://github.com/posthog/posthog-python | null | >=3.10 | [] | [] | [] | [
"requests<3.0,>=2.7",
"six>=1.5",
"python-dateutil>=2.2",
"backoff>=1.10.0",
"distro>=1.5.0",
"typing-extensions>=4.2.0",
"langchain>=0.2.0; extra == \"langchain\"",
"django-stubs; extra == \"dev\"",
"lxml; extra == \"dev\"",
"mypy; extra == \"dev\"",
"mypy-baseline; extra == \"dev\"",
"types-... | [] | [] | [] | [
"Homepage, https://github.com/posthog/posthog-python",
"Repository, https://github.com/posthog/posthog-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:20:35.002599 | posthoganalytics-7.9.3.tar.gz | 171,917 | 58/ca/bc26aae02014cf0204d5ab2c9054887414100883f1e708e27d40407e9702/posthoganalytics-7.9.3.tar.gz | source | sdist | null | false | 281c322b6e131b6e7aeb50abe513db2c | b651d9ae6078f4690fc80631043034c58c5550ac494e9bfd406089005f9f4b96 | 58cabc26aae02014cf0204d5ab2c9054887414100883f1e708e27d40407e9702 | null | [
"LICENSE"
] | 368 |
2.4 | posthog | 7.9.3 | Integrate PostHog into any python application. | # PostHog Python
<p align="center">
<img alt="posthoglogo" src="https://user-images.githubusercontent.com/65415371/205059737-c8a4f836-4889-4654-902e-f302b187b6a0.png">
</p>
<p align="center">
<a href="https://pypi.org/project/posthog/"><img alt="pypi installs" src="https://img.shields.io/pypi/v/posthog"/></a>
<img alt="GitHub contributors" src="https://img.shields.io/github/contributors/posthog/posthog-python">
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/posthog/posthog-python"/>
<img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/posthog/posthog-python"/>
</p>
Please see the [Python integration docs](https://posthog.com/docs/integrations/python-integration) for details.
## Python Version Support
| SDK Version | Python Versions Supported | Notes |
| ------------- | ---------------------------- | -------------------------- |
| 7.3.1+ | 3.10, 3.11, 3.12, 3.13, 3.14 | Added Python 3.14 support |
| 7.0.0 - 7.0.1 | 3.10, 3.11, 3.12, 3.13 | Dropped Python 3.9 support |
| 4.0.1 - 6.x | 3.9, 3.10, 3.11, 3.12, 3.13 | Python 3.9+ required |
## Development
### Testing Locally
We recommend using [uv](https://docs.astral.sh/uv/). It's super fast.
1. Run `uv venv env` (creates virtual environment called "env")
- or `python3 -m venv env`
2. Run `source env/bin/activate` (activates the virtual environment)
3. Run `uv sync --extra dev --extra test` (installs the package in develop mode, along with test dependencies)
- or `pip install -e ".[dev,test]"`
4. Run `pre-commit install` to set up the pre-commit linting hooks
5. Run `make test`
6. To run a specific test do `pytest -k test_no_api_key`
## PostHog recommends `uv` so...
```bash
uv python install 3.12
uv python pin 3.12
uv venv
source .venv/bin/activate
uv sync --extra dev --extra test
pre-commit install
make test
```
### Running Locally
Assuming you have a [local version of PostHog](https://posthog.com/docs/developing-locally) running, you can run `python3 example.py` to see the library in action.
### Testing changes locally with the PostHog app
You can run `make prep_local`, which creates a new folder named `posthog-python-local` alongside the SDK repo. You can then import it into the posthog project by changing `pyproject.toml` to look like this:
```toml
dependencies = [
...
"posthoganalytics" #NOTE: no version number
...
]
...
[tool.uv.sources]
posthoganalytics = { path = "../posthog-python-local" }
```
This'll let you build and test SDK changes fully locally, incorporating them into your local posthog app stack. It mainly takes care of the `posthog -> posthoganalytics` module renaming. You'll need to re-run `make prep_local` each time you make a change, and re-run `uv sync --active` in the posthog app project.
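The `posthog -> posthoganalytics` module renaming can be pictured roughly like this (a hypothetical sketch, not the actual `make prep_local` logic):

```python
import re

# Hypothetical sketch of the posthog -> posthoganalytics rename that
# prep_local performs on the copied source tree.
def rename_module(source: str) -> str:
    """Rewrite imports of the `posthog` package to `posthoganalytics`."""
    return re.sub(
        r"\b(from|import)\s+posthog\b",
        lambda m: f"{m.group(1)} posthoganalytics",
        source,
    )


print(rename_module("from posthog import Posthog\nimport posthog.ai"))
```

The word boundary (`\b`) keeps already-renamed imports like `posthoganalytics` from being rewritten twice.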
## Releasing
This repository uses [Sampo](https://github.com/bruits/sampo) for versioning, changelogs, and publishing to PyPI.
1. When making changes, include a changeset: `sampo add`
2. Create a PR with your changes and the changeset file
3. Add the `release` label and merge to `main`
4. Approve the release in Slack when prompted — this triggers the version bump, PyPI publish, git tag, and GitHub Release
You can also trigger a release manually via the workflow's `workflow_dispatch` trigger (still requires pending changesets).
| text/markdown | Posthog | PostHog <hey@posthog.com> | PostHog | PostHog <hey@posthog.com> | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language ... | [] | https://github.com/posthog/posthog-python | null | >=3.10 | [] | [] | [] | [
"requests<3.0,>=2.7",
"six>=1.5",
"python-dateutil>=2.2",
"backoff>=1.10.0",
"distro>=1.5.0",
"typing-extensions>=4.2.0",
"langchain>=0.2.0; extra == \"langchain\"",
"django-stubs; extra == \"dev\"",
"lxml; extra == \"dev\"",
"mypy; extra == \"dev\"",
"mypy-baseline; extra == \"dev\"",
"types-... | [] | [] | [] | [
"Homepage, https://github.com/posthog/posthog-python",
"Repository, https://github.com/posthog/posthog-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:20:24.085371 | posthog-7.9.3.tar.gz | 172,336 | 7e/06/bcffcd262c861695fbaa74490b872e37d6fc41d3dcc1a43207d20525522f/posthog-7.9.3.tar.gz | source | sdist | null | false | e0ec8568ef6bcc3c9e906ad62ac882b9 | 55f7580265d290936ac4c112a4e2031a41743be4f90d4183ac9f85b721ff13ae | 7e06bcffcd262c861695fbaa74490b872e37d6fc41d3dcc1a43207d20525522f | null | [
"LICENSE"
] | 506,922 |
2.4 | trainkeeper | 0.3.0 | Production-grade ML training toolkit with distributed training, GPU profiling, smart checkpointing, and interactive dashboards | <div align="center">
<img src="https://raw.githubusercontent.com/mosh3eb/TrainKeeper/main/assets/branding/trainkeeper-logo.png" alt="TrainKeeper Logo" width="100%">
<br>
[](https://pypi.org/project/trainkeeper/)
[](https://pypi.org/project/trainkeeper/)
[](LICENSE)
<h3>Production-Grade Training Guardrails for PyTorch</h3>
<p>Reproducible • Debuggable • Distributed • Efficient</p>
</div>
---
**TrainKeeper** is a minimal-decision, high-signal toolkit for building robust ML training systems. It adds guardrails **inside** your training loops without replacing your existing stack (PyTorch, Lightning, Accelerate).
## ⚡️ Why TrainKeeper?
Most failures happen **silently** inside execution loops: non-determinism, data drift, unstable gradients, and inconsistent environments. TrainKeeper solves this with zero-config composable modules.
- **🔒 Zero-Surprise Reproducibility**: Automatic seed setting, environment capture, and git state locking.
- **🛡️ Data Integrity**: Schema inference and drift detection caught *before* training wastes GPU hours.
- **🚅 Distributed Made Easy**: Auto-configured DDP and FSDP with a single line of code.
- **📉 Resource Efficiency**: GPU memory profiling and smart checkpointing that respects disk limits.
## 📦 Installation
```bash
pip install trainkeeper
```
## 🚀 Quick Start
Wrap your entry point to effectively "freeze" the experimental conditions:
```python
from trainkeeper.experiment import run_reproducible
@run_reproducible(auto_capture_git=True)
def train():
print("TrainKeeper is running: Experiment is now reproducible.")
if __name__ == "__main__":
train()
```
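Conceptually, such a decorator pins the sources of randomness and records the run context before training starts. A minimal sketch (hypothetical — not TrainKeeper's actual implementation, which also seeds NumPy/PyTorch and locks git state):

```python
import functools
import platform
import random


def run_reproducible_sketch(seed: int = 42):
    """Toy reproducibility decorator: pin the RNG seed and capture a
    snapshot of the run context before the wrapped function starts."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            random.seed(seed)  # every run starts from the same RNG state
            context = {"python": platform.python_version(), "seed": seed}
            print(f"captured context: {context}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator


@run_reproducible_sketch(seed=7)
def toy_train():
    return [random.random() for _ in range(3)]


assert toy_train() == toy_train()  # same seed -> identical draws
```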
## ✨ Features at a Glance
### 1. Distributed Training (DDP & FSDP)
Stop fighting with `torchrun`.
```python
from trainkeeper.distributed import distributed_training, wrap_model_fsdp
with distributed_training() as dist_config:
    model = MyModel()
    model = wrap_model_fsdp(model, dist_config)  # FSDP with auto-wrapping!
```
### 2. GPU Memory Profiler
Find leaks and optimize batch sizes automatically.
```python
from trainkeeper.gpu_profiler import GPUProfiler
profiler = GPUProfiler()
profiler.start()
# ... training loop ...
print(profiler.stop().summary())
# Output: "Fragmentation detected (35%). Suggestion: Empty cache at epoch end."
```
### 3. Interactive Dashboard
Explore experiments, compare metrics, and analyze drift.
```bash
pip install trainkeeper[dashboard]
tk dashboard
```
## 🔗 Links
- **GitHub Repository**: [mosh3eb/TrainKeeper](https://github.com/mosh3eb/TrainKeeper)
- **Full Documentation**: [Read the Docs](https://github.com/mosh3eb/TrainKeeper/tree/main/docs)
| text/markdown | Mohamed Salem | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"pyyaml>=6.0",
"numpy>=1.23",
"pandas>=1.5",
"requests>=2.28",
"torch>=2.0; extra == \"torch\"",
"torchvision; extra == \"vision\"",
"datasets; extra == \"nlp\"",
"transformers; extra == \"nlp\"",
"torch; extra == \"nlp\"",
"scikit-learn; extra == \"tabular\"",
"openml; extra == \"tabular\"",
... | [] | [] | [] | [] | twine/6.1.0 CPython/3.10.0 | 2026-02-18T22:19:40.821989 | trainkeeper-0.3.0.tar.gz | 74,751 | d0/79/1d585ac7251bb43524c7fc6956838f6e38b478a9b8f0ff35f04deaa6ac7c/trainkeeper-0.3.0.tar.gz | source | sdist | null | false | e7074dc87632c738cbdaca302f60a46e | 7171b2fadea78ce22e81687eb50e71d0d2db7c7020caf4b0b7fb383d19e2efbd | d0791d585ac7251bb43524c7fc6956838f6e38b478a9b8f0ff35f04deaa6ac7c | Apache-2.0 | [
"LICENSE"
] | 244 |
2.4 | panpipelines | 1.1.7 | MRI Processing Pipelines for PAN Healthy Minds for Life Study | # PANpipelines
---
This repository contains all the necessary scripts for reproducing the steps taken to preprocess and analyze MRI data collected during the Precision Aging Network (PAN) project.
# The panpipelines package
The PAN Pipelines use a set of python modules packaged under the main `panpipelines` package to run all the preprocessing and analysis workflows which are based on NiPype.
# Installation
It is recommended that a python environment manager like `conda` or `virtualenv` is used to install the **panpipelines** package. Assuming you have created a conda environment called `panpython` then the package can be installed as follows:
```
conda activate panpython
pip install panpipelines
```
# Current Limitations
The pipeline is currently optimised for **SLURM** environments in which singularity containers are automatically bound by the system administrator to the disk locations on which users manage their data. There is, however, support for the singularity `-B` parameter to map host output locations to their respective locations within the container. This functionality attempts to automatically translate all host location parameters in a command call to their container locations. It has not been extensively tested and should be used with caution.
By changing the `PROCESSING_ENVIRONMENT` parameter in the config file to `local`, pipelines will be run without being submitted to SLURM, using python's `multiprocessing` library instead. Docker containers can also be invoked instead of singularity images by using `docker run` instead of `singularity run` in the `*_CONTAINER_RUN_OPTIONS` parameter.
Several pipelines rely on the image `aacazxnat/panproc-minimal:0.2` which is defined here https://github.com/MRIresearch/panproc-minimal. See the section below **Building singularity images from Docker Images** for information on how to convert your docker images into singularity images.
# Building singularity images from Docker Images
The script below can be used to build a singularity image from a docker image. The script defines a location `$SINGULARITY_CACHEDIR` which is used to download the image layers. This can be set up in a location where there is a reasonable amount of disk space as the layers can be quite large in size. The docker image location is defined by `$DOCKERURI`. Once the singularity image `$SINGNAME` is built it can be moved to another location if desired.
```
#!/bin/bash
export SINGULARITY_CACHEDIR=$PWD/singularitycache
mkdir -p $SINGULARITY_CACHEDIR
SINGNAME=panprocminimal-v0.2.sif
DOCKERURI=docker://aacazxnat/panproc-minimal:0.2
singularity build $SINGNAME $DOCKERURI
```
# Deployment
For an example of using the package to process MRI data please refer to the `./deployment` folder. All the necessary parameters for running the pipelines are described in a **config** file in the `./config` subdirectory which is passed as a parameter to the main module `pan_processing.py`. In the example provided this file is named `panpipeconfig_slurm.config`.
The scripts used to process data for the first 250 participants of the PAN project are available in `PAN250_Deployment`.
The scripts used to process data for the April 2025 deployment of the PAN project are available in `april2025_PAN_Deploymnent`.
# Running pan_processing.py
The pan processing pipelines are run by simply calling `pan_processing.py`, as described in the script `run_pan250.sh` in the `PAN250_Deployment` directory.
The following parameters are available:
`config_file` : The configuration file
`--pipeline_outdir` : The output directory. This overrides `PIPELINE_DIR` in the configuration file.
`--participants_file` : The list of participants. This overrides `PARTICIPANTS_FILE` in the configuration file.
`--sessions_file` : The list of sessions. This overrides `SESSIONS_FILE` in the configuration file.
`--pipelines` : List of pipelines to run. This overrides `PIPELINES` in the configuration file. If left undefined then all pipelines are run.
`--pipeline_match` : Pattern used to select the pipelines you want from the full list of pipelines.
`--projects` : List of projects to use for processing. If this is undefined then the PAN projects `"001_HML"`, `"002_HML"`, `"003_HML"`, and `"004_HML"` are used.
`--participant_label` : Specify participants to process. This overrides `PARTICIPANT_LABEL`. Pass in `ALL_SUBJECTS` to process all subjects defined in the participants list.
`--participant_exclusions` : Specify participants to exclude from processing.
`--session_label` : Specify sessions to process. This overrides `SESSION_LABEL`. Pass in `ALL_SESSIONS` to process all sessions available to subjects defined in the participants list.
# Config file structure
The configuration file `pan250.config` drives how the processing occurs. It uses `json` format. The first entry is always called `"all_pipelines"` and this contains configuration details that are common to all pipelines. Individual pipelines can then be configured in the file. Any configuration details specified for an individual pipeline will override the common entry defined in the "all_pipelines" section.
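This override behaviour amounts to a dictionary merge, sketched below. This is illustrative only, not the package's actual implementation, and the pipeline name and keys are invented:

```python
import json

config_text = """
{
  "all_pipelines": {"PROCESSING_ENVIRONMENT": "slurm", "BIDS_SOURCE": "XNAT"},
  "fmriprep": {"PROCESSING_ENVIRONMENT": "local"}
}
"""


def pipeline_config(config: dict, pipeline: str) -> dict:
    # Start from the common section, then let the pipeline's own
    # entries override matching keys.
    merged = dict(config.get("all_pipelines", {}))
    merged.update(config.get(pipeline, {}))
    return merged


config = json.loads(config_text)
print(pipeline_config(config, "fmriprep"))
# {'PROCESSING_ENVIRONMENT': 'local', 'BIDS_SOURCE': 'XNAT'}
```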
## Lookup and direct parameters in the config file
Parameter values surrounded by opening and closing angle brackets, e.g. `<PROC_DIR>`, are lookup variables that are populated from the direct definitions of those variables. For example:
```
"PROC_DIR" : "/xdisk/nkchen/chidigonna",
"DATA_DIR" : "<PROC_DIR>/data
```
would mean that `DATA_DIR` evaluates to `/xdisk/nkchen/chidigonna/data`. Without the surrounding angle brackets it would evaluate to `PROC_DIR/data`.
While there is some logic to sort parameters so that lookup variables are evaluated correctly regardless of order of definition, this has not been completely tested and may fail in complex scenarios. It is advised that any definitions required by downstream lookup variables are defined early, and in the `all_pipelines` section where possible.
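The substitution behaviour can be sketched as an iterative find-and-replace — illustrative only, not the actual panpipelines implementation:

```python
import re

params = {
    "PROC_DIR": "/xdisk/nkchen/chidigonna",
    "DATA_DIR": "<PROC_DIR>/data",
    "OUT_DIR": "<DATA_DIR>/out",
}


def resolve(params: dict) -> dict:
    resolved = dict(params)
    pattern = re.compile(r"<([A-Z_]+)>")
    # Repeatedly substitute until no <VAR> references remain
    # (a real implementation would also detect cycles).
    changed = True
    while changed:
        changed = False
        for key, value in resolved.items():
            new = pattern.sub(lambda m: resolved.get(m.group(1), m.group(0)), value)
            if new != value:
                resolved[key] = new
                changed = True
    return resolved


print(resolve(params)["OUT_DIR"])  # /xdisk/nkchen/chidigonna/data/out
```

The repeated passes are why definition order mostly does not matter: each pass resolves one more level of indirection.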
## Configuration entry examples for "all_pipelines"
| Key | Description | Example | Default Value if undefined |
| -------- | --------------- | ----------------| ---------- |
| BIDS_SOURCE | Data is to be downloaded from XNAT HOST, FTP or already exists locally | "BIDS_SOURCE" : "XNAT" | "LOCAL" |
| FORCE_BIDS_DOWNLOAD | Always download subject data from source even if the data already exists locally | "FORCE_BIDS_DOWNLOAD" : "Y" | "N" |
## Configuration entry examples at pipeline level
| Key | Description | Example | Default Value if undefined |
| -------- | --------------- | ----------------| ---------- |
| PIPELINE_CLASS | The pipeline type that the defined pipeline belongs to | "PIPELINE_CLASS" : "fmriprep_panpipeline" | N/A |
| PIPELINE_DIR | The parent directory for pipeline outputs. This is overwritten by the `--pipeline_outdir` parameter of `pan_processing.py` | "PIPELINE_DIR" : "/path/to/pipeline_output_directory" | N/A |
## Implicit Configuration entries
There are a number of configuration entries that are implicitly set by the software. In general these are better left alone, though there may be fringe use cases where overwriting them is helpful.
| Key | Description | Example | Default Value if undefined |
| -------- | --------------- | ----------------| ---------- |
| PWD | The working directory from which the shell script that invokes the python package is called. This can be overwritten to rerun processing located in another directory different from the startup script. | "CWD" : "path/to/new/working/directory" | N/A |
| PKG_DIR | The python package directory that is the parent directory of the panpipelines source. This can be overwritten to use a different panpipelines package installed separately from the `pan_processing.py` module, though it is hard to see a reason for this. | "PKG_DIR" : "/path/to/package" | N/A |
| text/markdown | null | Chidi Ugonna <chidiugonna@arizona.edu> | null | null | MIT License
Copyright (c) 2023 MRIresearch
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"beautifulsoup4==4.12.3",
"dipy==1.10.0",
"dominate==2.9.1",
"mne[hdf]==1.1.0",
"nibabel==5.2.0",
"nilearn==0.10.2",
"nipype==1.8.6",
"nitransforms==23.0.1",
"numpy==1.26.3",
"pandas==2.1.4",
"paramiko==3.5.1",
"psycopg2-binary==2.9.10",
"pybids==0.16.4",
"pydicom==2.4.4",
"pysftp==0.2.9... | [] | [] | [] | [
"Homepage, https://github.com/MRIresearch/PANpipelines",
"Bug Tracker, https://github.com/MRIresearch/PANpipelines/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T22:18:48.706621 | panpipelines-1.1.7.tar.gz | 3,499,030 | a2/3f/5bf85d004a9684ff2ef0aa1202b5a15852c55b225cdd409cca38d79b10ae/panpipelines-1.1.7.tar.gz | source | sdist | null | false | a5f788da3f575d72873929d7ebab015d | bcbba7ccca66a66c2af1596704849288a8ca48e7076e0dffdb9c15b5d64ced70 | a23f5bf85d004a9684ff2ef0aa1202b5a15852c55b225cdd409cca38d79b10ae | null | [
"LICENSE"
] | 239 |
2.4 | pydelfini | 1.18.1 | an easy-to-use Python client for Delfini | # PyDelfini
PyDelfini is an easy-to-use Python client for the Delfini data commons
platform. It's great for scripts, notebooks, or as a foundation for
other clients to interact with Delfini's public API.
# Quickstart
```
$ pip install pydelfini
$ python
>>> from pydelfini import login
>>> client = login('delfini.bioteam.net')
To activate your session, visit the URL below:
https://delfini.bioteam.net/login/activate/........
Waiting for session activation...
>>> collection = client.get_collection_by_name('MHSVI')
>>> collection
<DelfiniCollection: name=MHSVI version=LIVE id=...>
```
# Features
* Interact with collections, folders, and items
* Read and write data streams (raw files)
* Read and write data tables via Pandas DataFrames
Coming soon:
* Work with data elements
* Persist data elements through DataFrames
* Work with dataviews (create, edit using simple construction tools)
| text/markdown | null | BioTeam <contact@bioteam.net> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pyyaml==6.0.3",
"httpx==0.25.2",
"attrs==25.4.0",
"python-dateutil==2.9.0.post0",
"pyarrow==21.0.0",
"pandas==2.3.3",
"tqdm==4.67.3",
"tabulate==0.9.0",
"sphinx==8.2.3; extra == \"dev\"",
"pytest==7.4.4; extra == \"dev\"",
"pytest-httpx==0.27.0; extra == \"dev\"",
"syrupy==4.9.1; extra == \"d... | [] | [] | [] | [
"Documentation, https://bioteam.github.io/delfini/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:18:14.345789 | pydelfini-1.18.1.tar.gz | 387,440 | 31/b8/826a9c4cfca8ccca57ba0ed90aa591b8bf7c51e365c576bee598359672ce/pydelfini-1.18.1.tar.gz | source | sdist | null | false | 7900b77668134464fd5255b30e58b07d | 0cc4a071c1e53532780c214fad8a8ed4392ab56160bd7b0c24dba7d254475783 | 31b8826a9c4cfca8ccca57ba0ed90aa591b8bf7c51e365c576bee598359672ce | null | [] | 237 |
2.4 | wandb | 0.25.1rc20260218 | A CLI and library for interacting with the Weights & Biases API. | <div align="center">
<img src="https://i.imgur.com/dQLeGCc.png" width="600" /><br><br>
</div>
<p align="center">
<a href="https://pypi.python.org/pypi/wandb"><img src="https://img.shields.io/pypi/v/wandb" /></a>
<a href="https://anaconda.org/conda-forge/wandb"><img src="https://img.shields.io/conda/vn/conda-forge/wandb" /></a>
<a href="https://pypi.python.org/pypi/wandb"><img src="https://img.shields.io/pypi/pyversions/wandb" /></a>
<a href="https://circleci.com/gh/wandb/wandb"><img src="https://img.shields.io/circleci/build/github/wandb/wandb/main" /></a>
<a href="https://codecov.io/gh/wandb/wandb"><img src="https://img.shields.io/codecov/c/gh/wandb/wandb" /></a>
</p>
<p align='center'>
<a href="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Intro_to_Weights_%26_Biases.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
</p>
Use W&B to build better models faster. Track and visualize all the pieces of your machine learning pipeline, from datasets to production machine learning models. Get started with W&B today, [sign up for a W&B account](https://wandb.com?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme)!
<br>
Building an LLM app? Track, debug, evaluate, and monitor LLM apps with [Weave](https://wandb.github.io/weave?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=readme), our new suite of tools for GenAI.
# Documentation
See the [W&B Developer Guide](https://docs.wandb.ai/?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=documentation) and [API Reference Guide](https://docs.wandb.ai/training/api-reference#api-overview?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=documentation) for a full technical description of the W&B platform.
# Quickstart
Install W&B to track, visualize, and manage machine learning experiments of any size.
## Install the wandb library
```shell
pip install wandb
```
## Sign up and create an API key
Sign up for a [W&B account](https://wandb.ai/login?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=quickstart). Optionally, use the `wandb login` CLI to configure an API key on your machine. You can skip this step -- W&B will prompt you for an API key the first time you use it.
## Create a machine learning training experiment
In your Python script or notebook, initialize a W&B run with `wandb.init()`.
Specify hyperparameters and log metrics and other information to W&B.
```python
import wandb
# Project that the run is recorded to
project = "my-awesome-project"
# Dictionary with hyperparameters
config = {"epochs" : 1337, "lr" : 3e-4}
# The `with` syntax marks the run as finished upon exiting the `with` block,
# and it marks the run "failed" if there's an exception.
#
# In a notebook, it may be more convenient to write `run = wandb.init()`
# and manually call `run.finish()` instead of using a `with` block.
with wandb.init(project=project, config=config) as run:
    # Training code here

    # Log values to W&B with run.log()
    run.log({"accuracy": 0.9, "loss": 0.1})
```
Visit [wandb.ai/home](https://wandb.ai/home) to view recorded metrics such as accuracy and loss and how they changed during each training step. Each run object appears in the Runs column with generated names.
# Integrations
W&B [integrates](https://docs.wandb.ai/models/integrations) with popular ML frameworks and libraries making it fast and easy to set up experiment tracking and data versioning inside existing projects.
For developers adding W&B to a new framework, follow the [W&B Developer Guide](https://docs.wandb.ai/models/integrations/add-wandb-to-any-library).
# W&B Hosting Options
Weights & Biases is available in the cloud or installed on your private infrastructure. Set up a W&B Server in a production environment in one of three ways:
1. [Multi-tenant Cloud](https://docs.wandb.ai/platform/hosting/hosting-options/multi_tenant_cloud?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=hosting): Fully managed platform deployed in W&B’s Google Cloud Platform (GCP) account in GCP’s North America regions.
2. [Dedicated Cloud](https://docs.wandb.ai/platform/hosting/hosting-options/dedicated_cloud?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=hosting): Single-tenant, fully managed platform deployed in W&B’s AWS, GCP, or Azure cloud accounts. Each Dedicated Cloud instance has its own isolated network, compute and storage from other W&B Dedicated Cloud instances.
3. [Self-Managed](https://docs.wandb.ai/platform/hosting/hosting-options/self-managed?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=hosting): Deploy W&B Server on your AWS, GCP, or Azure cloud account or within your on-premises infrastructure.
See the [Hosting documentation](https://docs.wandb.ai/guides/hosting?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=hosting) in the W&B Developer Guide for more information.
# Python Version Support
We are committed to supporting our minimum required Python version for _at least_ six months after its official end-of-life (EOL) date, as defined by the Python Software Foundation. You can find a list of Python EOL dates [here](https://devguide.python.org/versions/).
When we discontinue support for a Python version, we will increment the library’s minor version number to reflect this change.
# Contribution guidelines
Weights & Biases ❤️ open source, and we welcome contributions from the community! See the [Contribution guide](https://github.com/wandb/wandb/blob/main/CONTRIBUTING.md) for more information on the development workflow and the internals of the wandb library. For wandb bugs and feature requests, visit [GitHub Issues](https://github.com/wandb/wandb/issues) or contact support@wandb.com.
# W&B Community
Be a part of the growing W&B Community and interact with the W&B team in our [Discord](https://wandb.me/discord). Stay connected with the latest ML updates and tutorials with [W&B Fully Connected](https://wandb.ai/fully-connected).
# License
[MIT License](https://github.com/wandb/wandb/blob/main/LICENSE)
| text/markdown | null | Weights & Biases <support@wandb.com> | null | null | MIT License
Copyright (c) 2021 Weights and Biases, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Go",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 ... | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.0.1",
"eval-type-backport; python_version < \"3.10\"",
"gitpython!=3.1.29,>=1.0.0",
"packaging",
"platformdirs",
"protobuf!=4.21.0,!=5.28.0,<7,>=3.15.0; python_version == \"3.9\" and sys_platform == \"linux\"",
"protobuf!=4.21.0,!=5.28.0,<7,>=3.19.0; python_version > \"3.9\" and sys_platform =... | [] | [] | [] | [
"Source, https://github.com/wandb/wandb",
"Bug Reports, https://github.com/wandb/wandb/issues",
"Documentation, https://docs.wandb.ai/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:16:49.328646 | wandb-0.25.1rc20260218.tar.gz | 43,988,857 | 2a/e1/13046d75f66b4dabed68cc21a11340d26e5295e575a97fced6b527769198/wandb-0.25.1rc20260218.tar.gz | source | sdist | null | false | 6902365948885e9bad140382feb639b3 | 891096e1149c127b3d1d4880473827381e9121ac21e0bd87aae696efb6f25515 | 2ae113046d75f66b4dabed68cc21a11340d26e5295e575a97fced6b527769198 | null | [
"LICENSE"
] | 4,606 |
2.4 | pyinaturalist-convert | 0.8.0 | Data conversion tools for iNaturalist observations and taxonomy | # pyinaturalist-convert
[](https://github.com/pyinat/pyinaturalist-convert/actions)
[](https://codecov.io/gh/pyinat/pyinaturalist-convert)
[](https://pyinaturalist-convert.readthedocs.io)
[](https://pypi.org/project/pyinaturalist-convert)
[](https://anaconda.org/conda-forge/pyinaturalist-convert)
[](https://pypi.org/project/pyinaturalist-convert)
This package provides tools to convert iNaturalist observation data to and from a wide variety of
useful formats. This is mainly intended for use with the iNaturalist API
via [pyinaturalist](https://github.com/niconoe/pyinaturalist), but also works with other data sources.
Complete project documentation can be found at [pyinaturalist-convert.readthedocs.io](https://pyinaturalist-convert.readthedocs.io).
# Formats
## Import
* CSV (From either [API results](https://www.inaturalist.org/pages/api+reference#get-observations)
or the [iNaturalist export tool](https://www.inaturalist.org/observations/export))
* JSON (from API results)
* [`pyinaturalist.Observation`](https://pyinaturalist.readthedocs.io/en/stable/modules/pyinaturalist.models.Observation.html) objects
* Dataframes, Feather, Parquet, and anything else supported by [pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html)
* [iNaturalist GBIF Archive](https://www.inaturalist.org/pages/developers)
* [iNaturalist Taxonomy Archive](https://www.inaturalist.org/pages/developers)
* [iNaturalist Open Data on Amazon](https://github.com/inaturalist/inaturalist-open-data)
* Note: see [API Recommended Practices](https://www.inaturalist.org/pages/api+recommended+practices)
for details on which data sources are best suited to different use cases
## Export
* CSV, Excel, and anything else supported by [tablib](https://tablib.readthedocs.io/en/stable/formats/)
* Dataframes, Feather, Parquet, and anything else supported by [pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html)
* Darwin Core
* GeoJSON
* GPX
* SQLite
* SQLite + FTS5 text search for taxonomy
# Installation
Install with pip:
```bash
pip install pyinaturalist-convert
```
Or with conda:
```bash
conda install -c conda-forge pyinaturalist-convert
```
To keep things modular, many format-specific dependencies are not installed by default, so you may
need to install some more packages depending on which features you want. Each module's docs lists
any extra dependencies needed, and a full list can be found in
[pyproject.toml](https://github.com/pyinat/pyinaturalist-convert/blob/main/pyproject.toml#L27).
For getting started, it's recommended to install all optional dependencies:
```bash
pip install pyinaturalist-convert[all]
```
# Usage
## Export
Get your own observations and save to CSV:
```python
from pyinaturalist import get_observations
from pyinaturalist_convert import *
observations = get_observations(user_id='my_username')
to_csv(observations, 'my_observations.csv')
```
Or any other supported format:
```python
to_dwc(observations, 'my_observations.dwc')
to_excel(observations, 'my_observations.xlsx')
to_feather(observations, 'my_observations.feather')
to_geojson(observations, 'my_observations.geojson')
to_gpx(observations, 'my_observations.gpx')
to_hdf(observations, 'my_observations.hdf')
to_json(observations, 'my_observations.json')
to_parquet(observations, 'my_observations.parquet')
df = to_dataframe(observations)
```
## Import
Most file formats can be loaded via `pyinaturalist_convert.read()`:
```python
observations = read('my_observations.csv')
observations = read('my_observations.xlsx')
observations = read('my_observations.feather')
observations = read('my_observations.hdf')
observations = read('my_observations.json')
observations = read('my_observations.parquet')
```
## Download
Download the complete research-grade observations dataset:
```python
download_dwca_observations()
```
And load it into a SQLite database:
```python
load_dwca_observations()
```
And do the same with the complete taxonomy dataset:
```python
download_dwca_taxa()
load_dwca_taxa()
```
Load taxonomy data into a full text search database:
```python
load_taxon_fts_table(languages=['english', 'german'])
```
And get lightning-fast autocomplete results from it:
```python
ta = TaxonAutocompleter()
ta.search('aves')
ta.search('flughund', language='german')
```
# Feedback
If you have any problems, suggestions, or questions about pyinaturalist-convert, you are welcome to [create an issue](https://github.com/pyinat/pyinaturalist-convert/issues/new/choose) or [discussion](https://github.com/orgs/pyinat/discussions). Also, **PRs are welcome!**
| text/markdown | Jordan Cook | null | null | null | null | biodiversity, convert, csv, darwin-core, dataframe, export, gpx, inaturalist | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"flatten-dict>=0.4",
"pyinaturalist>=0.21",
"requests-ratelimiter<0.9,>=0.6",
"tablib>=3.0",
"polars>=1.30; extra == \"aggregate\"",
"sqlalchemy>=2.0; extra == \"aggregate\"",
"alembic>=1.13; extra == \"db\"",
"sqlalchemy>=2.0; extra == \"db\"",
"xmltodict>=0.12; extra == \"dwc\"",
"pandas>=1.2; e... | [] | [] | [] | [
"homepage, https://github.com/pyinat/pyinaturalist-convert",
"repository, https://github.com/pyinat/pyinaturalist-convert",
"documentation, https://pyinaturalist-convert.readthedocs.io"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:16:25.846413 | pyinaturalist_convert-0.8.0.tar.gz | 46,429 | 02/a7/13545dd8bad10381a64ce1da0275d91f52a70bd79f7899e293efbf9b66f2/pyinaturalist_convert-0.8.0.tar.gz | source | sdist | null | false | 35e353a0501687ebb4575481438f04f8 | e6d2668940fe236f4708b4a4f5e6b4ee68a4552d7bfc69799dc1d0b559d3d54a | 02a713545dd8bad10381a64ce1da0275d91f52a70bd79f7899e293efbf9b66f2 | null | [
"LICENSE"
] | 245 |
2.4 | gitta | 0.1.6 | AI-powered Git commit message generator from your terminal | # gitta
AI-powered Git commit messages and PR descriptions from your terminal.
<p align="center">
<img src="https://raw.githubusercontent.com/alexziao05/gitta/main/demo.gif" alt="Gitta Demo" width="600">
</p>
## Installation
```bash
pip install gitta
```
## Quick Start
```bash
gitta init # Set up your AI provider
gitta add . # Stage files + generate commit message
gitta ship # Stage, commit, and push in one step
gitta pr --create # Generate and create a PR on GitHub
```
## Usage
### Setup
Run the interactive setup wizard to configure your AI provider:
```bash
gitta init
```
You'll be prompted for your provider name, API base URL, model, commit style, and API key. Your configuration is stored in `~/.gitta/config.toml`.
<p align="center">
<img src="https://raw.githubusercontent.com/alexziao05/gitta/main/configuration.png" alt="Recommended Configuration" width="500">
</p>
### Committing
Generate an AI-powered commit message from your staged changes:
```bash
gitta add . # Stage files + generate message
gitta commit # Generate message for already-staged files
gitta commit --dry-run # Preview without committing
```
You'll see the generated message and can **confirm** (y), **edit** (e), or **cancel** (n).
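The `conventional` commit style presumably follows the Conventional Commits format (`type(scope): description`). As a rough illustration — not gitta's actual validation code — a header checker might look like:

```python
import re

# Conventional Commits header: type, optional (scope), optional !, then ": description"
CONVENTIONAL = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)"
    r"(\([a-z0-9_-]+\))?!?: .+"
)


def is_conventional(message: str) -> bool:
    """Check only the first line (the header) of a commit message."""
    return CONVENTIONAL.match(message.splitlines()[0]) is not None


assert is_conventional("feat(cli): add --split flag")
assert not is_conventional("updated stuff")
```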
### Split Commits
When changes span multiple modules, use `--split` to create scoped commits:
```bash
gitta commit --split
gitta add . --split
gitta ship --split
```
Gitta groups changes by module (e.g. `cli`, `ai`, `core`) and generates a message for each. You can then commit **all** separately, **merge** into one, or **cancel**.
To enable by default:
```bash
gitta config set multi_file true
```
### Ship
Stage everything, generate a commit message, and push — all in one step:
```bash
gitta ship
```
### Branch Names
Generate a branch name from a natural language description:
```bash
gitta branch "fix login timeout" # → fix/login-timeout
gitta branch "add user avatar upload" -c # Create and checkout the branch
```
### Pull Requests
Generate a PR title and description from all commits on your branch:
```bash
gitta pr # Preview the generated title + body
gitta pr --create # Push branch and create PR on GitHub
gitta pr --create --draft # Create as draft
gitta pr --base develop # Compare against a specific base branch
```
When creating, you can **confirm** (y), **edit** (e), or **cancel** (n) before the PR is submitted. Requires the [GitHub CLI](https://cli.github.com/) (`gh`).
### Explain
Explain what a commit or file change does in plain English:
```bash
gitta explain abc1234 # Explain a commit
gitta explain src/main.py # Explain uncommitted changes to a file
```
### Merging
Merge the PR for your current branch directly from the terminal:
```bash
gitta merge # Merge commit (default)
gitta merge --squash # Squash and merge
gitta merge --rebase # Rebase and merge
gitta merge -D # Merge without deleting the remote branch
```
Requires the [GitHub CLI](https://cli.github.com/) (`gh`).
### Utilities
```bash
gitta log # Show recent commits in a table
gitta log -n 20 # Show last 20 commits
gitta config list # Show all config values
gitta config get <key> # Get a config value
gitta config set <key> <value> # Update a config value
gitta doctor # Diagnose setup issues
```
## Configuration
Config is stored in `~/.gitta/config.toml`. Available settings:
| Key | Description | Default |
|---|---|---|
| `provider` | AI provider name (e.g. `openai`) | — |
| `base_url` | API endpoint | — |
| `model` | Model identifier (e.g. `gpt-4o`) | — |
| `style` | Commit format: `conventional`, `simple`, `detailed` | — |
| `max_diff_chars` | Max diff size sent to AI | `32000` |
| `multi_file` | Enable split commits by default | `false` |
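Putting the table together, a `~/.gitta/config.toml` might look like the following. The values are illustrative — `gitta init` writes this file for you:

```toml
provider = "openai"
base_url = "https://api.openai.com/v1"
model = "gpt-4o"
style = "conventional"
max_diff_chars = 32000
multi_file = false
```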
| text/markdown | null | null | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"openai>=2.21.0",
"rich>=14.3.2",
"tomli-w>=1.2.0",
"typer>=0.23.1"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.0 | 2026-02-18T22:15:25.634554 | gitta-0.1.6.tar.gz | 1,291,967 | 28/1e/14437542750fb94b66c0ee0d81f40648aa9b1ca53bacd86a5bf9a9412e45/gitta-0.1.6.tar.gz | source | sdist | null | false | 7a0abd4d726ad7ed6510f6ea9e765534 | 7cbd8d09cdb7c92c46c3e0bef7d4417a2b5f48e2c0b9b79b23f338c98e9104d5 | 281e14437542750fb94b66c0ee0d81f40648aa9b1ca53bacd86a5bf9a9412e45 | null | [
"LICENSE"
] | 230 |