metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | pymc-marketing | 0.18.1 | Marketing Statistical Models in PyMC | <div align="center">

</div>
----


[](https://codecov.io/gh/pymc-labs/pymc-marketing)
[](https://github.com/astral-sh/ruff)
[](https://www.pymc-marketing.io/en/latest/)
[](https://pypi.python.org/pypi/pymc-marketing)
[](https://opensource.org/licenses/Apache-2.0)
[](https://pepy.tech/project/pymc-marketing)
# <span style="color:limegreen">PyMC-Marketing</span>: Bayesian Marketing Mix Modeling (MMM) & Customer Lifetime Value (CLV)
## Marketing Analytics Tools from [PyMC Labs](https://www.pymc-labs.com)
Unlock the power of **Marketing Mix Modeling (MMM)**, **Customer Lifetime Value (CLV)**, and **Customer Choice Analysis (CCA)** analytics with PyMC-Marketing. This open-source marketing analytics toolkit empowers businesses to make smarter, data-driven decisions and maximize the ROI of their marketing campaigns.
----
This repository is supported by [PyMC Labs](https://www.pymc-labs.com).
<center>
<img src="https://raw.githubusercontent.com/pymc-labs/pymc-marketing/main/docs/source/_static/labs-logo-light.png" width="50%" />
</center>
For businesses looking to integrate PyMC-Marketing into their operational framework, [PyMC Labs](https://www.pymc-labs.com) offers expert consulting and training. Our team is proficient in state-of-the-art Bayesian modeling techniques, with a focus on Marketing Mix Models (MMMs) and Customer Lifetime Value (CLV). For more information see [here](https://github.com/pymc-labs/pymc-marketing/tree/main/README.md#-schedule-a-free-consultation-for-mmm--clv-strategy).
Explore these topics further by watching our video on [Bayesian Marketing Mix Models: State of the Art](https://www.youtube.com/watch?v=xVx91prC81g).
### Community Resources
- [PyMC-Marketing Discussions](https://github.com/pymc-labs/pymc-marketing/discussions)
- [PyMC Discourse](https://discourse.pymc.io/)
- [Bayesian Discord server](https://discord.gg/swztKRaVKe)
- [MMM Hub Slack](https://www.mmmhub.org/slack)
## Quick Installation Guide
To dive into PyMC-Marketing, set up a specialized Python environment, `marketing_env`, via conda-forge:
```bash
conda create -c conda-forge -n marketing_env pymc-marketing
conda activate marketing_env
```
For a comprehensive installation guide, refer to the [official PyMC installation documentation](https://www.pymc.io/projects/docs/en/latest/installation.html).
### Docker
We provide a `Dockerfile` to build a Docker image for PyMC-Marketing so that it is accessible from a Jupyter notebook. See [here](https://github.com/pymc-labs/pymc-marketing/tree/main/scripts/docker/README.md) for more details.
## In-depth Bayesian Marketing Mix Modeling (MMM) in PyMC
Leverage our Bayesian MMM API to tailor your marketing strategies effectively. Building on the research article [Jin, Yuxue, et al. “Bayesian methods for media mix modeling with carryover and shape effects.” (2017)](https://research.google/pubs/pub46001/), and extending it with the expertise of core PyMC developers, our API provides:
| Feature | Benefit |
| ------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Custom Priors and Likelihoods | Tailor your model to your specific business needs by including domain knowledge via prior distributions. |
| Adstock Transformation | Optimize the carry-over effects in your marketing channels. |
| Saturation Effects | Understand the diminishing returns in media investments. |
| Customize adstock and saturation functions | You can select from a variety of adstock and saturation functions. You can even implement your own custom functions. See [documentation guide](https://www.pymc-marketing.io/en/stable/notebooks/mmm/mmm_components.html). |
| Time-varying Intercept | Capture time-varying baseline contributions in your model (using modern and efficient Gaussian processes approximation methods). See [guide notebook](https://www.pymc-marketing.io/en/stable/notebooks/mmm/mmm_time_varying_media_example.html). |
| Time-varying Media Contribution | Capture time-varying media efficiency in your model (using modern and efficient Gaussian processes approximation methods). See the [guide notebook](https://www.pymc-marketing.io/en/stable/notebooks/mmm/mmm_tvp_example.html). |
| Visualization and Model Diagnostics | Get a comprehensive view of your model's performance and insights. |
| Causal Identification | Input a business-driven directed acyclic graph (DAG) to identify which variables to include in the model so that you can draw causal conclusions. For a concrete example see the [guide notebook](https://www.pymc-marketing.io/en/stable/notebooks/mmm/mmm_causal_identification.html). |
| Choose among many inference algorithms | We provide the option to choose between various NUTS samplers (e.g. BlackJax, NumPyro and Nutpie). See the [example notebook](https://www.pymc-marketing.io/en/stable/notebooks/general/other_nuts_samplers.html) for more details. |
| GPU Support | PyMC's multiple backends allow for GPU acceleration. |
| Out-of-sample Predictions | Forecast future marketing performance with credible intervals. Use this for simulations and scenario planning. |
| Budget Optimization | Allocate your marketing spend efficiently across various channels for maximum ROI. See the [budget optimization example notebook](https://www.pymc-marketing.io/en/stable/notebooks/mmm/mmm_budget_allocation_example.html) |
| Experiment Calibration | Fine-tune your model based on empirical experiments for a more unified view of marketing. See the [lift test integration explanation](https://www.pymc-marketing.io/en/stable/notebooks/mmm/mmm_lift_test.html) for more details. [Here](https://www.pymc-marketing.io/en/stable/notebooks/mmm/mmm_roas.html) you can find a *Case Study: Unobserved Confounders, ROAS and Lift Tests*. |
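The adstock and saturation transformations in the table above can be sketched in plain NumPy for intuition. This is an illustrative reimplementation, not the library's API: `GeometricAdstock` and `LogisticSaturation` in PyMC-Marketing are the supported versions, and the parameter names here (`alpha`, `lam`) follow common MMM conventions.

```python
import numpy as np

def geometric_adstock(x, alpha, l_max=8):
    """Carry-over: each period retains a fraction alpha of the previous period's effect."""
    w = alpha ** np.arange(l_max)       # decaying weights 1, alpha, alpha^2, ...
    return np.convolve(x, w)[: len(x)]  # truncate back to the original length

def logistic_saturation(x, lam):
    """Diminishing returns: maps spend into (0, 1), with lam controlling the slope."""
    return (1 - np.exp(-lam * x)) / (1 + np.exp(-lam * x))

spend = np.array([100.0, 0.0, 0.0, 0.0])
carried = geometric_adstock(spend, alpha=0.5, l_max=4)
# carried → [100., 50., 25., 12.5]: spend keeps paying off after the week it occurs
effect = logistic_saturation(carried / 100.0, lam=2.0)
```

Chaining the two — adstock first, then saturation — is the standard media transformation pipeline the MMM applies per channel.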
### MMM Quickstart
The following snippet of code shows how to initiate and fit a `MMM` model.
```python
import pandas as pd

from pymc_marketing.mmm import (
    GeometricAdstock,
    LogisticSaturation,
)
from pymc_marketing.mmm.multidimensional import MMM
from pymc_marketing.paths import data_dir

file_path = data_dir / "mmm_example.csv"
data = pd.read_csv(file_path, parse_dates=["date_week"])

mmm = MMM(
    adstock=GeometricAdstock(l_max=8),
    saturation=LogisticSaturation(),
    date_column="date_week",
    channel_columns=["x1", "x2"],
    control_columns=[
        "event_1",
        "event_2",
        "t",
    ],
    yearly_seasonality=2,
)

X = data.drop("y", axis=1)
y = data["y"]
mmm.fit(X, y)
```
After the model is fitted, we can explore the results and insights. For example, we can plot the component contributions:

You can compute channel efficiency and compare it with the estimated return on ad spend (ROAS).
<center>
<img src="https://raw.githubusercontent.com/pymc-labs/pymc-marketing/main/docs/source/_static/roas_efficiency.png" width="70%" />
</center>
Once the model is fitted, we can further optimize our budget allocation, since the model captures diminishing returns and carry-over effects.
<center>
<img src="https://raw.githubusercontent.com/pymc-labs/pymc-marketing/main/docs/source/_static/mmm_plot_plot_channel_contributions_grid.png" width="80%" />
</center>
- Explore our hands-on [quickstart guide](https://pymc-marketing.readthedocs.io/en/stable/notebooks/mmm/mmm_quickstart.html) and the more complete [simulated example](https://pymc-marketing.readthedocs.io/en/stable/notebooks/mmm/mmm_example.html) for more insights into MMM with PyMC-Marketing.
- Get started with a complete end-to-end analysis: from model specification to budget allocation. See the [guide notebook](https://www.pymc-marketing.io/en/stable/notebooks/mmm/mmm_case_study.html).
### Essential Reading for Marketing Mix Modeling (MMM)
- [Bayesian Media Mix Modeling for Marketing Optimization](https://www.pymc-labs.com/blog-posts/bayesian-media-mix-modeling-for-marketing-optimization/)
- [Improving the Speed and Accuracy of Bayesian Marketing Mix Models](https://www.pymc-labs.com/blog-posts/reducing-customer-acquisition-costs-how-we-helped-optimizing-hellofreshs-marketing-budget/)
- [Johns, Michael and Wang, Zhenyu. "A Bayesian Approach to Media Mix Modeling"](https://www.youtube.com/watch?v=UznM_-_760Y)
- [Orduz, Juan. "Media Effect Estimation with PyMC: Adstock, Saturation & Diminishing Returns"](https://juanitorduz.github.io/pymc_mmm/)
- [A Comprehensive Guide to Bayesian Marketing Mix Modeling](https://1749.io/learn/f/a-comprehensive-guide-to-bayesian-marketing-mix-modeling)
### Explainer App: Streamlit App of MMM Concepts
This app provides dynamic, interactive visualizations of key Marketing Mix Modeling (MMM) concepts, including adstock, saturation, and the use of Bayesian priors. It aims to help marketers, data scientists, and anyone interested in understanding MMM more deeply.
**[Check out the app here](https://pymc-marketing-app.streamlit.app/)**
## Unlock Customer Lifetime Value (CLV) with PyMC
Understand and optimize your customers' value with our **CLV models**. Our API supports various types of CLV models, catering to both contractual and non-contractual settings, as well as continuous and discrete transaction modes.
- [CLV Quickstart](https://www.pymc-marketing.io/en/stable/notebooks/clv/clv_quickstart.html)
- [BG/NBD model](https://www.pymc-marketing.io/en/stable/notebooks/clv/bg_nbd.html)
- [Pareto/NBD model](https://www.pymc-marketing.io/en/stable/notebooks/clv/pareto_nbd.html)
- [Gamma-Gamma model](https://www.pymc-marketing.io/en/stable/notebooks/clv/gamma_gamma.html)
- [Shifted BG model](https://www.pymc-marketing.io/en/stable/notebooks/clv/sbg.html)
- [Modified BG/NBD model](https://www.pymc-marketing.io/en/stable/notebooks/clv/mbg_nbd.html)
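For intuition, the classic back-of-envelope CLV calculation for a contractual setting is a discounted geometric series over expected margins. The Bayesian models listed above go far beyond this, but the arithmetic is a useful sanity check; all numbers below are invented for illustration.

```python
def simple_clv(margin, retention, discount, horizon=1000):
    """Discounted sum of expected per-period margins, where the customer
    survives each period with probability `retention`."""
    return sum(margin * retention**t / (1 + discount) ** t for t in range(horizon))

# Closed form of the same geometric series: margin * (1 + d) / (1 + d - r)
clv = simple_clv(margin=100, retention=0.8, discount=0.1)
```

With a $100 margin, 80% retention, and a 10% discount rate, both the sum and the closed form give roughly $367 per customer.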
### Examples
| | **Non-contractual** | **Contractual** |
| -------------- | ------------------------ | ----------------------- |
| **Continuous** | online purchases | ad conversion time |
| **Discrete** | concerts & sports events | recurring subscriptions |
### CLV Quickstart
```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from pymc_marketing import clv
from pymc_marketing.paths import data_dir
file_path = data_dir / "clv_quickstart.csv"
data = pd.read_csv(file_path)
data["customer_id"] = data.index
beta_geo_model = clv.BetaGeoModel(data=data)
beta_geo_model.fit()
```
Once fitted, we can use the model to predict the number of future purchases for known customers, the probability that they are still alive, and get various visualizations plotted.

See the Examples section for more on this.
## Customer Choice Analysis with PyMC-Marketing
Analyze the impact of new product launches and understand customer choice behavior with our **Multivariate Interrupted Time Series (MVITS)** models. Our API supports analysis in both saturated and unsaturated markets to help you:
| Feature | Benefit |
| --------------------------- | ----------------------------------------------------------------- |
| Market Share Analysis | Understand how new products affect existing product market shares |
| Causal Impact Assessment | Measure the true causal effect of product launches on sales |
| Saturated Market Analysis | Model scenarios where total market size remains constant |
| Unsaturated Market Analysis | Handle cases where new products grow the total market size |
| Visualization Tools | Plot market shares, causal impacts, and counterfactuals |
| Bayesian Inference | Get uncertainty estimates around all predictions |
### Customer Choice Quickstart
```python
import pandas as pd

from pymc_marketing.customer_choice import MVITS, plot_product

# Define existing products
existing_products = ["competitor", "own"]

# Create MVITS model
mvits = MVITS(
    existing_sales=existing_products,
    saturated_market=True,  # Set False for unsaturated markets
)

# Fit model to your observed sales data X and y
mvits.fit(X, y)

# Plot causal impact on market share
mvits.plot_causal_impact_market_share()

# Plot counterfactuals
mvits.plot_counterfactual()
```
<center>
<img src="https://raw.githubusercontent.com/pymc-labs/pymc-marketing/main/docs/source/_static/conterfactual.png" width="100%" />
</center>
See our example notebooks for [saturated markets](https://www.pymc-marketing.io/en/stable/notebooks/customer_choice/mv_its_saturated.html) and [unsaturated markets](https://www.pymc-marketing.io/en/stable/notebooks/customer_choice/mv_its_unsaturated.html) to learn more about customer choice modeling with PyMC-Marketing.
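The causal-impact idea behind MVITS can be sketched with plain NumPy: compare observed sales to a counterfactual of what would have happened without the launch. The flat pre-launch-mean projection below is a deliberately naive stand-in for the model's inferred counterfactual (which comes with full posterior uncertainty), and all numbers are invented.

```python
import numpy as np

# Hypothetical weekly sales for an incumbent product; a new product launches at week 5.
actual = np.array([100, 102, 98, 101, 100, 90, 88, 85, 87, 86], dtype=float)
launch = 5

# Counterfactual: project the pre-launch mean forward.
counterfactual = np.full_like(actual, actual[:launch].mean())

causal_impact = actual - counterfactual           # per-week effect of the launch
cumulative_impact = causal_impact[launch:].sum()  # total sales lost to the new product
```

Here the incumbent loses 65 units of sales after the launch relative to the counterfactual; MVITS estimates the same quantity jointly across products, with credible intervals.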
## Bass Diffusion Model
The Bass Diffusion Model is a popular model for predicting the adoption of new products. It is a type of product life cycle model that describes the market penetration of a new product as a function of time. PyMC-Marketing provides a flexible implementation of the Bass Diffusion Model, allowing you to customize the model parameters and fit the model to your specific data, including many products at once.
<center>
<img src="https://raw.githubusercontent.com/pymc-labs/pymc-marketing/main/docs/source/_static/bass.png" width="100%" />
</center>
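The underlying Bass model has a closed-form adoption curve driven by an innovation coefficient `p` and an imitation coefficient `q`. A minimal NumPy sketch of the cumulative adoption share (the values 0.03 and 0.38 are commonly cited textbook averages; the market size is made up):

```python
import numpy as np

def bass_cdf(t, p, q):
    """Cumulative adoption share F(t) under the Bass model:
    F(t) = (1 - exp(-(p+q)t)) / (1 + (q/p) exp(-(p+q)t))."""
    e = np.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

t = np.linspace(0, 25, 100)
m = 50_000                                  # assumed total market size
adopters = m * bass_cdf(t, p=0.03, q=0.38)  # expected cumulative adopters over time
```

The curve starts at zero, rises in the familiar S-shape as imitation takes over from innovation, and asymptotes at the total market size.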
## Discrete Choice Models
Discrete choice models come in various forms, but each aims to show how choosing between a set of alternatives can be understood as a function of the observable attributes of the alternatives at hand. This type of modelling drives insight into the "must-have" features of a product, and can be used to assess the success or failure of product launches or re-launches. The PyMC-Marketing implementation offers a formula-based model specification for estimating the relative utility of each good in a market and identifying its most important features.
<center>
<img src="https://raw.githubusercontent.com/pymc-labs/pymc-marketing/main/docs/source/_static/discrete_choice_before_after.png" width="100%" />
</center>
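The workhorse of discrete choice is the multinomial logit, where choice probabilities are a softmax over the alternatives' utilities. A minimal sketch with invented attributes and taste weights (PyMC-Marketing's formula-based interface estimates the weights from data rather than assuming them):

```python
import numpy as np

def choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(u_i) / sum_j exp(u_j)."""
    u = np.asarray(utilities, dtype=float)
    e = np.exp(u - u.max())  # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical utilities for three alternatives, built from observable
# attributes (price, quality) and assumed taste weights.
beta_price, beta_quality = -0.5, 1.2
price = np.array([3.0, 4.0, 2.5])
quality = np.array([2.0, 3.5, 1.0])
probs = choice_probabilities(beta_price * price + beta_quality * quality)
```

Here the high-quality alternative wins the largest predicted share despite its higher price, illustrating how attribute trade-offs drive market shares.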
## Why PyMC-Marketing vs other solutions?
PyMC-Marketing is and will always be free for commercial use, licensed under [Apache 2.0](https://github.com/pymc-labs/pymc-marketing/tree/main/LICENSE). Developed by core developers behind the popular PyMC package and marketing experts, it provides state-of-the-art measurements and analytics for marketing teams.
Due to its open-source nature and active contributor base, new features are constantly added. Are you missing a feature or want to contribute? Fork our repository and submit a pull request. If you have any questions, feel free to [open an issue](https://github.com/pymc-labs/pymc-marketing/issues).
### Thanks to our contributors!
[](https://github.com/pymc-labs/pymc-marketing/graphs/contributors)
## Marketing AI Assistant: MMM-GPT with PyMC-Marketing
Not sure how to start or have questions? MMM-GPT is an AI that answers questions and provides expert advice on marketing analytics using PyMC-Marketing.
**[Try MMM-GPT here.](https://mmm-gpt.com/)**
## 📞 Schedule a Free Consultation for MMM & CLV Strategy
Maximize your marketing ROI with a [free 30-minute strategy session](https://calendly.com/niall-oulton) with our PyMC-Marketing experts. Learn how Bayesian Marketing Mix Modeling and Customer Lifetime Value analytics can boost your organization by making smarter, data-driven decisions.
We provide the following professional services:
- **Custom Models**: We tailor niche marketing analytics models to fit your organization's unique needs.
- **Build Within PyMC-Marketing**: Our team members are experts leveraging the capabilities of PyMC-Marketing to create robust marketing models for precise insights.
- **SLA & Coaching**: Get guaranteed support levels and personalized coaching to ensure your team is well-equipped and confident in using our tools and approaches.
- **SaaS Solutions**: Harness the power of our state-of-the-art software solutions to streamline your data-driven marketing initiatives.
| text/markdown | null | null | null | PyMC Labs <info@pymc-labs.com> | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Mathematics",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"arviz>=0.13.0",
"matplotlib>=3.5.1",
"narwhals",
"numpy>=2.0",
"pandas",
"preliz>=0.20.0",
"pydantic>=2.1.0",
"pymc-extras<0.9,>=0.4.0",
"pymc>=5.27.1",
"pyprojroot",
"pytensor>=2.36.3",
"pyyaml",
"scikit-learn>=1.1.1",
"seaborn>=0.12.2",
"tqdm",
"xarray-einstats>=0.5.1",
"xarray>=2024.1.0",
"dowhy; extra == \"dag\"",
"networkx; extra == \"dag\"",
"osqp<1.0.0,>=0.6.2; extra == \"dag\"",
"pygraphviz; extra == \"dag\"",
"blackjax; extra == \"docs\"",
"dowhy; extra == \"docs\"",
"fastprogress; extra == \"docs\"",
"graphviz; extra == \"docs\"",
"ipython!=8.7.0; extra == \"docs\"",
"ipywidgets; extra == \"docs\"",
"labs-sphinx-theme; extra == \"docs\"",
"lifetimes; extra == \"docs\"",
"mlflow>=2.0.0; extra == \"docs\"",
"myst-nb>=1.1.2; extra == \"docs\"",
"myst-parser; extra == \"docs\"",
"networkx; extra == \"docs\"",
"numba; extra == \"docs\"",
"numpydoc; extra == \"docs\"",
"numpyro; extra == \"docs\"",
"nutpie; extra == \"docs\"",
"osqp<1.0.0,>=0.6.2; extra == \"docs\"",
"preliz>=0.20.0; extra == \"docs\"",
"pylint; extra == \"docs\"",
"setuptools<81; extra == \"docs\"",
"sphinx; extra == \"docs\"",
"sphinx-autodoc-typehints; extra == \"docs\"",
"sphinx-copybutton; extra == \"docs\"",
"sphinx-design; extra == \"docs\"",
"sphinx-notfound-page; extra == \"docs\"",
"sphinx-remove-toctrees; extra == \"docs\"",
"sphinx-sitemap; extra == \"docs\"",
"sphinxext-opengraph; extra == \"docs\"",
"watermark; extra == \"docs\"",
"mypy; extra == \"lint\"",
"pandas-stubs; extra == \"lint\"",
"pre-commit>=2.19.0; extra == \"lint\"",
"ruff>=0.1.4; extra == \"lint\"",
"blackjax; extra == \"test\"",
"dowhy; extra == \"test\"",
"graphviz>=0.20.1; extra == \"test\"",
"ipykernel; extra == \"test\"",
"jax; extra == \"test\"",
"lifetimes==0.11.3; extra == \"test\"",
"mlflow>=2.0.0; extra == \"test\"",
"networkx; extra == \"test\"",
"numpyro; extra == \"test\"",
"nutpie; extra == \"test\"",
"osqp<1.0.0,>=0.6.2; extra == \"test\"",
"papermill; extra == \"test\"",
"plotly>=6.3.0; extra == \"test\"",
"polars>=1.0.0; extra == \"test\"",
"preliz>=0.20.0; extra == \"test\"",
"pygraphviz; extra == \"test\"",
"pyprojroot; extra == \"test\"",
"pytest-cov>=3.0.0; extra == \"test\"",
"pytest-mock>=3.14.0; extra == \"test\"",
"pytest-split; extra == \"test\"",
"pytest>=7.0.1; extra == \"test\"",
"setuptools<81; extra == \"test\""
] | [] | [] | [] | [
"repository, https://github.com/pymc-labs/pymc-marketing",
"homepage, https://www.pymc-marketing.io"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:56:51.727892 | pymc_marketing-0.18.1.tar.gz | 3,058,459 | 0b/a3/42a4fde0a8c39869142696463c217a4ff0867cd98083c9a1daca787881f2/pymc_marketing-0.18.1.tar.gz | source | sdist | null | false | 98c44d8017026276deadb6ec6e8ec2ef | f8064f968ef1e6e68a736f67ea3fff63e2bdb01fadcb8098f946da2ecf45c06c | 0ba342a4fde0a8c39869142696463c217a4ff0867cd98083c9a1daca787881f2 | null | [
"LICENSE"
] | 370 |
2.4 | surveyeval | 0.1.32 | A toolkit for survey evaluation | ==========
surveyeval
==========
The ``surveyeval`` package is a toolkit for AI-powered survey instrument evaluation. It's still in early development,
but is ready to support piloting and experimentation. To learn more about the overall project, see
`this blog post <https://www.linkedin.com/pulse/under-the-hood-ai-beyond-chatbots-christopher-robert-dquue>`_.
Installation
------------
Install the full version with pip::
pip install surveyeval[parser]
If you don't need anything in the ``survey_parser`` module (relating to reading, parsing, and converting
survey files), you can install a slimmed-down version with::
pip install surveyeval
Additional document-parsing dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you installed the full version with survey-parsing capabilities (``surveyeval[parser]``), you'll also need
to install several other dependencies, which you can do by running the
`initial-setup.ipynb <https://github.com/higherbar-ai/survey-eval/blob/main/src/initial-setup.ipynb>`_ Jupyter
notebook — or by installing them manually as follows.
First, download NLTK data for natural language text processing::
# download NLTK data
import nltk
nltk.download('punkt', force=True)
Then install ``libreoffice`` for converting Office documents to PDF.
On Linux::

    # install LibreOffice for document processing
    !apt-get install -y libreoffice
On macOS::

    # install LibreOffice for document processing
    brew install libreoffice
On Windows::

    # install LibreOffice for document processing
    choco install -y libreoffice
AWS Bedrock support
^^^^^^^^^^^^^^^^^^^
Finally, if you're accessing models via AWS Bedrock, the AWS CLI needs to be installed and configured for AWS access.
Jupyter notebooks with Google Colab support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can use `the colab-or-not package <https://github.com/higherbar-ai/colab-or-not>`_ to initialize a Jupyter notebook
for Google Colab or other environments::
    %pip install colab-or-not surveyeval

    # download NLTK data
    import nltk
    nltk.download('punkt', force=True)

    # set up our notebook environment (including LibreOffice)
    from colab_or_not import NotebookBridge
    notebook_env = NotebookBridge(
        system_packages=["libreoffice"],
        config_path="~/.hbai/survey-eval.env",
        config_template={
            "openai_api_key": "",
            "openai_model": "",
            "azure_api_key": "",
            "azure_api_base": "",
            "azure_api_engine": "",
            "azure_api_version": "",
            "anthropic_api_key": "",
            "anthropic_model": "",
            "langsmith_api_key": "",
        },
    )
    notebook_env.setup_environment()
See `file-evaluation-example.ipynb <https://github.com/higherbar-ai/survey-eval/blob/main/src/file-evaluation-example.ipynb>`_
for an example.
Overview
---------
Here are the basics:
#. This toolkit includes code to read, parse, and evaluate survey instruments.
#. `The file-evaluation-example.ipynb Jupyter notebook <https://github.com/higherbar-ai/survey-eval/blob/main/src/file-evaluation-example.ipynb>`_
   provides a working example for evaluating a single survey instrument file. It includes details on how to install,
   configure, and run.
#. The evaluation engine itself lives in the ``evaluation_engine`` module. It provides a pretty basic framework for
   applying different evaluation lenses to a survey instrument.
#. The ``core_evaluation_lenses`` module contains an initial set of evaluation lenses that can be applied to survey
   instruments. These are the ones applied in the example notebook. They are:

   a. ``PhrasingEvaluationLens``: Cases where phrasing might be adjusted to improve respondent understanding and reduce
      measurement error (i.e., the kinds of phrasing issues that would be identified through rigorous cognitive
      interviewing or other forms of validation)
   b. ``TranslationEvaluationLens``: Cases where translations are inaccurate or phrased such that they might lead to
      differing response patterns
   c. ``BiasEvaluationLens``: Cases where phrasing might be improved to remove implicit bias or stigmatizing language
      (inspired by `this very helpful post <https://www.linkedin.com/pulse/using-chatgpt-counter-bias-prejudice-discrimination-johannes-schunter/>`_
      on the subject of using ChatGPT to identify bias)
   d. ``ValidatedInstrumentEvaluationLens``: Cases where a validated instrument might be adapted to better measure an
      inferred construct of interest
#. The code for reading and parsing files is in the ``survey_parser`` module. Aside from
   `XLSForm <https://xlsform.org/en/>`_ files and REDCap data dictionaries — which are parsed directly — the module
   relies heavily on
   `the ai_workflows package <https://github.com/higherbar-ai/ai-workflows>`_ for reading files and using an LLM to
   assist with parsing.
You can run the
`file-evaluation-example.ipynb <https://github.com/higherbar-ai/survey-eval/blob/main/src/file-evaluation-example.ipynb>`_
notebook as-is, but you might also consider customizing the core evaluation lenses to better meet your needs and/or
adding your own evaluation lenses to the notebook. When adding new lenses, you can just use any of the initial lenses
as a template.
If you make use of this toolkit, we'd love to hear from you — and help to share your results with the community. Please
email us at ``info@higherbar.ai``.
Technical notes
---------------
Reading and parsing input files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``survey_parser`` module contains code for reading input files. It directly supports two popular formats for
digital instruments (`XLSForm <https://xlsform.org/en/>`_ files and REDCap data dictionaries), which are read straight
into a structured format that is ready for evaluation. A wide variety of other document formats are supported via the
`ai_workflows <https://github.com/higherbar-ai/ai-workflows>`_ package, in two stages:
1. In the first stage, raw text is extracted from the document in a basic Markdown format. The techniques used depend
   on the file format, but when possible an LLM is used to transform each page into Markdown text, and then all of the
   text is merged together. LLM-based extraction can be slow and expensive (roughly $0.015/page), so you can disable it
   by setting the ``use_llm`` parameter to ``False`` when calling the ``read_survey_contents()`` function. For example::

       import os

       from surveyeval.survey_parser import SurveyInterface

       survey_interface = SurveyInterface(
           openai_api_key=openai_api_key,
           openai_model=openai_model,
           langsmith_api_key=langsmith_api_key,
       )
       survey_contents = survey_interface.read_survey_contents(os.path.expanduser(input_path), use_llm=False)
2. In the second stage, the Markdown text is parsed into a structured format including modules, questions, response
   options, and so on. This is done by the ``parse_survey_contents()`` function, which uses an LLM to assist with
   parsing. For example::

       data = survey_interface.parse_survey_contents(survey_contents=survey_contents, survey_context=evaluation_context)
See the `ai_workflows <https://github.com/higherbar-ai/ai-workflows>`_ documentation for more details on how particular
file formats are read.
When parsing unstructured files into a structured survey format, a lot can go wrong. If your survey file is not being
read or parsed well, you might want to simplify the file to make it easier to read. For example:
1. Make sure that separate modules are in separate sections with clear headings.
2. Make sure that questions are clearly separated from one another, each with a unique identifier of some kind.
3. Make sure that response options are clearly separated from questions, and that they are clearly associated with the
   questions they belong to.
4. Label each translation with the same unique question identifier to help link them together. When possible, keep
   translations together.
After you've parsed a file, you can use the ``output_parsed_data_to_xlsform()`` method if you'd like to output it as an
XLSForm file formatted for SurveyCTO.
Known issues
^^^^^^^^^^^^
These known issues are inherited from `the ai_workflows package <https://github.com/higherbar-ai/ai-workflows>`_:
#. OpenAI's ``o1-mini`` model is not currently supported.
#. The example Google Colab notebooks pop up a message during installation that offers to restart the runtime. You have
   to click cancel so as not to interrupt execution.
#. The automatic generation and caching of JSON schemas (for response validation) can work poorly when batches of
   similar requests are all launched in parallel (as each request will generate and cache the schema).
#. When reading REDCap data dictionaries, translations aren't supported.
#. LangSmith tracing support is imperfect in a few ways:

   a. For OpenAI models, the top-level token usage counts are roughly doubled. You have to look to the inner LLM call
      for an accurate count of input and output tokens.
   b. For Anthropic models, the token usage doesn't show up at all, but you can find it by clicking into the metadata
      for the inner LLM call.
   c. For Anthropic models, the system prompt is only visible if you click into the inner LLM call and then switch the
      *Input* display to *Raw input*.
   d. For Anthropic models, images in prompts don't show properly.
Roadmap
-------
There's much that can be improved here. For example:
* We should track and report LLM costs.
* We should add an LLM cache that avoids calling out to the LLM for responses that it already has from prior requests.
  After all, it's common to evaluate the same instrument multiple times, and it's incredibly wasteful to
  keep going back to the LLM for the same responses every time (for requests that haven't changed in any way).
* We should improve how findings are scored and filtered, to avoid giving overwhelming numbers of minor
  recommendations.
* We should improve the output format to be more user-friendly. (For example, a direct Word output with comments and
  tracked changes would be very nice.)
* We should add more evaluation lenses. For example:

  * Double-barreled questions: Does any question ask about two things at once?
  * Leading questions: Are questions neutral, or do they lead the respondent towards a particular answer?
  * Response options: Are the response options exhaustive and mutually exclusive?
  * Question order effects: The order in which questions appear can influence how respondents interpret and answer
    subsequent items. Evaluate whether any questions might be priming respondents in a way that could bias their
    subsequent answers.
  * Consistency: Are scales used consistently throughout the survey?
  * Reliability and validity: If established scales are used, have they been validated for the target population?
  * Length and respondent burden: Is the survey too long? Long surveys can lead to respondent fatigue, which in turn
    might lead to decreased accuracy or increased drop-out rates.
* Ideally, we would parse modules into logical sub-modules that appear to measure a single construct, so that we can
  better evaluate whether to recommend adaptation of validated instruments. Right now, an entire module is evaluated
  at once, but modules often contain measurement of multiple constructs.
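As one illustration of the LLM-cache idea above, requests can be keyed by a hash of their full content, so that repeat evaluations of an unchanged instrument skip the LLM entirely. This is only a sketch, and the ``call_llm`` callable is a hypothetical stand-in for whatever LLM client is actually in use:

```python
import hashlib
import json

_cache: dict = {}

def cached_llm_call(call_llm, prompt, **params):
    """Return a cached response when an identical request was made before."""
    # Key on every input that could change the response.
    key = hashlib.sha256(
        json.dumps({"prompt": prompt, "params": params}, sort_keys=True).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = call_llm(prompt, **params)
    return _cache[key]

# Demonstration with a stand-in "LLM" that records how often it is invoked:
invocations = []
def fake_llm(prompt, **params):
    invocations.append(prompt)
    return f"response to: {prompt}"

first = cached_llm_call(fake_llm, "Evaluate question phrasing", model="some-model")
second = cached_llm_call(fake_llm, "Evaluate question phrasing", model="some-model")
# first == second, and the fake LLM was only invoked once
```

A production version would also persist the cache to disk and include the model version in the key, so that upgrading the model invalidates stale responses.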
Credits
-------
This toolkit was originally developed by `Higher Bar AI <https://higherbar.ai>`_, a public benefit corporation, with
generous support from `Dobility, the makers of SurveyCTO <https://surveycto.com>`_.
Full documentation
------------------
See the full reference documentation here:
https://surveyeval.readthedocs.io/
Local development
-----------------
To develop locally:
#. ``git clone https://github.com/higherbar-ai/survey-eval``
#. ``cd survey-eval``
#. ``python -m venv venv``
#. ``source venv/bin/activate``
#. ``pip install -r requirements.txt``
#. Run the `initial-setup.ipynb <https://github.com/higherbar-ai/survey-eval/blob/main/src/initial-setup.ipynb>`_
   Jupyter notebook
For convenience, the repo includes ``.idea`` project files for PyCharm.
To rebuild the documentation:
#. Update version number in ``/docs/source/conf.py``
#. Update layout or options as needed in ``/docs/source/index.rst``
#. In a terminal window, from the project directory:

   a. ``cd docs``
   b. ``SPHINX_APIDOC_OPTIONS=members,show-inheritance sphinx-apidoc -o source ../src/surveyeval --separate --force``
   c. ``make clean html``
To rebuild the distribution packages:
#. For the PyPI package:

   a. Update version number (and any build options) in ``/setup.py``
   b. Confirm credentials and settings in ``~/.pypirc``
   c. Run ``python -m build``
   d. Delete old builds from ``/dist``
   e. In a terminal window:

      i. ``twine upload dist/* --verbose``

#. For GitHub:

   a. Commit everything to GitHub and merge to the ``main`` branch
   b. Add a new release, linking to a new tag like ``v#.#.#`` in the main branch

#. For readthedocs.io:

   a. Go to https://readthedocs.org/projects/surveyeval/, log in, and click to rebuild from GitHub (only if it
      doesn't automatically trigger)
| null | Christopher Robert | crobert@higherbar.ai | null | null | Apache 2.0 | null | [] | [] | https://github.com/higherbar-ai/survey-eval | null | >=3.10 | [] | [] | [] | [
"pydantic",
"overrides<8.0.0,>=7.3.1",
"py-ai-workflows<1.0.0,>=0.32.0",
"openpyxl<4.0.0,>=3.0.9; extra == \"parser\"",
"py-ai-workflows[docs]<1.0.0,>=0.32.0; extra == \"parser\""
] | [] | [] | [] | [
"Documentation, https://surveyeval.readthedocs.io/"
] | twine/6.2.0 CPython/3.10.9 | 2026-02-20T18:56:47.181426 | surveyeval-0.1.32.tar.gz | 80,115 | d4/a6/b6685eb3d415106aac4ae54e3f5c7dcbf2fc0e48dedc5ec016f1eefe5c08/surveyeval-0.1.32.tar.gz | source | sdist | null | false | 7111ed587294cde9eb6bab3602d1f27d | c5aafe62d4aa2af0642311eb6864a95d6271b98c346b42967d23ab73a3331f37 | d4a6b6685eb3d415106aac4ae54e3f5c7dcbf2fc0e48dedc5ec016f1eefe5c08 | null | [
"LICENSE"
] | 178 |
2.4 | tiled | 0.2.5 | Structured Scientific Data Access Service | # Tiled
Tiled is a **data access** service for data-aware portals and data science tools.
Tiled has a Python client and integrates naturally with Python data science
libraries, but nothing about the service is Python-specific; it also works from
a web browser or any Internet-connected program.
Tiled’s service can sit atop databases, filesystems, and/or remote
services to enable **search** and **structured, chunkwise access to data** in an
extensible variety of appropriate formats, providing data in a consistent
structure regardless of the format the data happens to be stored in at rest. The
natively-supported formats span slow but widespread interchange formats (e.g.
CSV, JSON) and fast, efficient ones (e.g. C buffers, Apache Arrow and Parquet).
Tiled enables slicing and sub-selection to read and transfer only the data of
interest, and it enables parallelized download of many chunks at once. Users can
access data with very light software dependencies and fast partial downloads.
Tiled puts an emphasis on **structures** rather than formats, including:
* N-dimensional strided arrays (i.e. numpy-like arrays)
* Sparse arrays
* Tabular data (e.g. pandas-like "dataframes")
* Nested, variable-sized data (as implemented by [AwkwardArray](https://awkward-array.org/))
* Hierarchical structures thereof (e.g. xarrays, HDF5-compatible structures like NeXus)
Tiled implements extensible **access control enforcement** based on web security
standards, similar to JupyterHub. Like Jupyter, Tiled can be used by a single
user or deployed as a shared public or private resource. Tiled can be configured
to use third party services for login, such as Google, ORCID, or any OIDC
or SAML authentication providers.
Tiled facilitates **client-side caching** in a standard web browser or in
Tiled's Python client, making efficient use of bandwidth. It uses
**service-side caching** of "hot" datasets and resources to expedite both
repeat requests (e.g. when several users are requesting the same chunks of
data) and distinct requests for different parts of the same dataset (e.g. when
the user is requesting various slices or columns from a dataset).
| Distribution | Where to get it |
| -------------- | ------------------------------------------------------------ |
| PyPI | `pip install tiled` |
| Conda | `conda install -c conda-forge tiled-client tiled-server` |
| Source code | [github.com/bluesky/tiled](https://github.com/bluesky/tiled) |
| Documentation | [blueskyproject.io/tiled](https://blueskyproject.io/tiled) |
## Example
In this example, we'll serve a collection of data that is generated in
memory. Alternatively, it could be read on demand from a directory of files, a
network resource, a database, or some combination of these.
```shell
tiled serve demo
# equivalent to:
# tiled serve pyobject --public tiled.examples.generated:tree
```
And then access the data efficiently via the Python client, a web browser, or
any HTTP client.
```python
>>> from tiled.client import from_uri
>>> client = from_uri("http://localhost:8000")
>>> client
<Container {'scalars', 'nested', 'tables', 'structured_data', ...} ~8 entries>
>>> list(client)
['scalars',
'nested',
'tables',
'structured_data',
'flat_array',
'low_entropy',
'high_entropy',
'dynamic']
>>> client['nested/images/medium_image']
<ArrayClient>
>>> client['nested/images/medium_image'][:]
array([[0.49675483, 0.37832119, 0.59431287, ..., 0.16990737, 0.5396537 ,
0.61913812],
[0.97062498, 0.93776709, 0.81797714, ..., 0.96508877, 0.25208564,
0.72982507],
[0.87173234, 0.83127946, 0.91758202, ..., 0.50487542, 0.03052536,
0.9625512 ],
...,
[0.01884645, 0.33107071, 0.60018523, ..., 0.02268164, 0.46955907,
0.37842628],
[0.03405101, 0.77886243, 0.14856727, ..., 0.02484926, 0.03850398,
0.39086524],
[0.16567224, 0.1347261 , 0.48809697, ..., 0.55021249, 0.42324589,
0.31440635]])
>>> client['tables/long_table']
<DataFrameClient ['A', 'B', 'C']>
>>> client['tables/long_table'].read()
A B C
index
0 0.246920 0.493840 0.740759
1 0.326005 0.652009 0.978014
2 0.715418 1.430837 2.146255
3 0.425147 0.850294 1.275441
4 0.781036 1.562073 2.343109
... ... ... ...
99995 0.515248 1.030495 1.545743
99996 0.639188 1.278376 1.917564
99997 0.269851 0.539702 0.809553
99998 0.566848 1.133695 1.700543
99999 0.101446 0.202892 0.304338
[100000 rows x 3 columns]
>>> client['tables/long_table'].read(['A', 'B'])
A B
index
0 0.246920 0.493840
1 0.326005 0.652009
2 0.715418 1.430837
3 0.425147 0.850294
4 0.781036 1.562073
... ... ...
99995 0.515248 1.030495
99996 0.639188 1.278376
99997 0.269851 0.539702
99998 0.566848 1.133695
99999 0.101446 0.202892
```
Using an Internet browser or a command-line HTTP client like
[curl](https://curl.se/) or [httpie](https://httpie.io/) you can download the
data in whole or in efficiently-chunked parts in the format of your choice:
```
# Download tabular data as CSV
http://localhost:8000/api/v1/table/full/tables/long_table?format=csv
# or XLSX (Excel)
http://localhost:8000/api/v1/table/full/tables/long_table?format=xlsx
# and subselect columns.
http://localhost:8000/api/v1/table/full/tables/long_table?format=xlsx&field=A&field=B
# View or download (2D) array data as PNG
http://localhost:8000/api/v1/array/full/nested/images/medium_image?format=png
# and slice regions of interest.
http://localhost:8000/api/v1/array/full/nested/images/medium_image?format=png&slice=:50,100:200
```
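These endpoint URLs can also be constructed programmatically. Below is a minimal sketch using only the Python standard library; the paths and query parameters mirror the examples above, and the host assumes the default `tiled serve` address of `localhost:8000`:

```python
from urllib.parse import urlencode

BASE = "http://localhost:8000/api/v1"

def table_url(path, fmt, fields=()):
    """Build a /table/full URL with a format and optional column sub-selection."""
    params = [("format", fmt)] + [("field", f) for f in fields]
    return f"{BASE}/table/full/{path}?{urlencode(params)}"

def array_url(path, fmt, slice_=None):
    """Build an /array/full URL with an optional slice of the array."""
    params = [("format", fmt)]
    if slice_ is not None:
        params.append(("slice", slice_))
    return f"{BASE}/array/full/{path}?{urlencode(params)}"

# Tabular data as CSV, sub-selecting columns A and B:
print(table_url("tables/long_table", "csv", fields=["A", "B"]))
# A region of interest from a 2D array, rendered as PNG:
print(array_url("nested/images/medium_image", "png", slice_=":50,100:200"))
```

Any HTTP client can then fetch the bytes, e.g. `urllib.request.urlopen(...)` on the resulting URL in a script.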
Web-based data access usually involves downloading complete files, in the
manner of [Globus](https://www.globus.org/); or using modern chunk-based
storage formats, such as [TileDB](https://tiledb.com/) and
[Zarr](https://zarr.readthedocs.io/en/stable/) in local or cloud storage; or
using custom solutions tailored to a particular large dataset. Waiting for an
entire file to download when only the first frame of an image stack or a
certain column of a table is of interest is wasteful and can be prohibitive
for large longitudinal analyses. Yet, it is not always practical to transcode
the data into a chunk-friendly format or build a custom tile-based-access
solution. (Though if you can do either of those things, you should consider
them instead!)
<!-- README only content. Anything below this line won't be included in index.md -->
See https://blueskyproject.io/tiled for more detailed documentation.
| text/markdown | null | Bluesky Project Contributors <dallan@bnl.gov> | null | Brookhaven National Laboratory <dallan@bnl.gov> | null | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx!=0.23.1,>=0.20.0",
"json-merge-patch",
"jsonpatch",
"jsonschema",
"msgpack>=1.0.0",
"orjson",
"platformdirs",
"pydantic-settings<2.12.0,>=2",
"pydantic<3,>=2",
"pyyaml",
"typer",
"adbc-driver-manager; extra == \"all\"",
"adbc-driver-postgresql; extra == \"all\"",
"adbc-driver-sqlite; extra == \"all\"",
"aiofiles; extra == \"all\"",
"aiosqlite; extra == \"all\"",
"alembic; extra == \"all\"",
"anyio; extra == \"all\"",
"asgi-correlation-id; extra == \"all\"",
"asyncpg; extra == \"all\"",
"awkward>=2.4.3; extra == \"all\"",
"blosc2; extra == \"all\"",
"cachetools; extra == \"all\"",
"canonicaljson; extra == \"all\"",
"dask; extra == \"all\"",
"dask[array]; extra == \"all\"",
"dask[dataframe]; extra == \"all\"",
"duckdb<1.4.0; extra == \"all\"",
"entrypoints; extra == \"all\"",
"fastapi>=0.122.0; extra == \"all\"",
"h5netcdf; extra == \"all\"",
"h5py; extra == \"all\"",
"hdf5plugin; extra == \"all\"",
"jinja2; extra == \"all\"",
"jmespath; extra == \"all\"",
"lz4; extra == \"all\"",
"minio; extra == \"all\"",
"ndindex; extra == \"all\"",
"numba>=0.59.0; extra == \"all\"",
"numcodecs; extra == \"all\"",
"numpy; extra == \"all\"",
"obstore; extra == \"all\"",
"openpyxl; extra == \"all\"",
"packaging; extra == \"all\"",
"pandas<3; extra == \"all\"",
"pillow; extra == \"all\"",
"prometheus-client; extra == \"all\"",
"pyarrow>=14.0.1; extra == \"all\"",
"python-dateutil; extra == \"all\"",
"python-jose[cryptography]; extra == \"all\"",
"python-multipart; extra == \"all\"",
"redis; extra == \"all\"",
"rich; extra == \"all\"",
"sparse>=0.15.5; extra == \"all\"",
"sqlalchemy[asyncio]>=2; extra == \"all\"",
"stamina; extra == \"all\"",
"starlette>=0.48.0; extra == \"all\"",
"tifffile; extra == \"all\"",
"uvicorn[standard]; extra == \"all\"",
"watchfiles; extra == \"all\"",
"xarray; extra == \"all\"",
"zarr; extra == \"all\"",
"zstandard; extra == \"all\"",
"dask[array]; extra == \"array\"",
"numpy; extra == \"array\"",
"awkward>=2.4.3; extra == \"client\"",
"blosc2; python_version >= \"3.10\" and extra == \"client\"",
"dask[array]; extra == \"client\"",
"dask[dataframe]; extra == \"client\"",
"entrypoints; extra == \"client\"",
"lz4; extra == \"client\"",
"ndindex; extra == \"client\"",
"numba>=0.59.0; extra == \"client\"",
"numpy; extra == \"client\"",
"pandas; extra == \"client\"",
"pyarrow>=14.0.1; extra == \"client\"",
"rich; extra == \"client\"",
"sparse>=0.15.5; extra == \"client\"",
"stamina; extra == \"client\"",
"watchfiles; extra == \"client\"",
"websockets; extra == \"client\"",
"xarray; extra == \"client\"",
"zstandard; extra == \"client\"",
"blosc2; python_version >= \"3.10\" and extra == \"compression\"",
"lz4; extra == \"compression\"",
"zstandard; extra == \"compression\"",
"dask[dataframe]; extra == \"dataframe\"",
"pandas; extra == \"dataframe\"",
"pyarrow>=14.0.1; extra == \"dataframe\"",
"h5netcdf; extra == \"formats\"",
"h5py; extra == \"formats\"",
"hdf5plugin; extra == \"formats\"",
"openpyxl; extra == \"formats\"",
"pillow; extra == \"formats\"",
"tifffile; extra == \"formats\"",
"entrypoints; extra == \"minimal-client\"",
"rich; extra == \"minimal-client\"",
"stamina; extra == \"minimal-client\"",
"watchfiles; extra == \"minimal-client\"",
"websockets; extra == \"minimal-client\"",
"aiofiles; extra == \"minimal-server\"",
"aiosqlite; extra == \"minimal-server\"",
"alembic; extra == \"minimal-server\"",
"anyio; extra == \"minimal-server\"",
"asgi-correlation-id; extra == \"minimal-server\"",
"cachetools; extra == \"minimal-server\"",
"canonicaljson; extra == \"minimal-server\"",
"dask; extra == \"minimal-server\"",
"fastapi>=0.122.0; extra == \"minimal-server\"",
"jinja2; extra == \"minimal-server\"",
"jmespath; extra == \"minimal-server\"",
"numcodecs; extra == \"minimal-server\"",
"packaging; extra == \"minimal-server\"",
"prometheus-client; extra == \"minimal-server\"",
"python-dateutil; extra == \"minimal-server\"",
"python-jose[cryptography]; extra == \"minimal-server\"",
"python-multipart; extra == \"minimal-server\"",
"redis; extra == \"minimal-server\"",
"sqlalchemy[asyncio]>=2; extra == \"minimal-server\"",
"starlette>=0.48.0; extra == \"minimal-server\"",
"uvicorn[standard]; extra == \"minimal-server\"",
"zarr; extra == \"minimal-server\"",
"adbc-driver-manager; extra == \"server\"",
"adbc-driver-postgresql; extra == \"server\"",
"adbc-driver-sqlite; extra == \"server\"",
"aiofiles; extra == \"server\"",
"aiosqlite; extra == \"server\"",
"alembic; extra == \"server\"",
"anyio; extra == \"server\"",
"asgi-correlation-id; extra == \"server\"",
"asyncpg; extra == \"server\"",
"awkward>=2.4.3; extra == \"server\"",
"blosc2; python_version >= \"3.10\" and extra == \"server\"",
"cachetools; extra == \"server\"",
"canonicaljson; extra == \"server\"",
"dask; extra == \"server\"",
"dask[array]; extra == \"server\"",
"dask[dataframe]; extra == \"server\"",
"duckdb<1.4.0; extra == \"server\"",
"entrypoints; extra == \"server\"",
"fastapi>=0.122.0; extra == \"server\"",
"h5netcdf; extra == \"server\"",
"h5py; extra == \"server\"",
"hdf5plugin; extra == \"server\"",
"jinja2; extra == \"server\"",
"jmespath; extra == \"server\"",
"lz4; extra == \"server\"",
"minio; extra == \"server\"",
"ndindex; extra == \"server\"",
"numba>=0.59.0; extra == \"server\"",
"numcodecs; extra == \"server\"",
"numpy; extra == \"server\"",
"obstore; extra == \"server\"",
"openpyxl; extra == \"server\"",
"packaging; extra == \"server\"",
"pandas; extra == \"server\"",
"pillow; extra == \"server\"",
"prometheus-client; extra == \"server\"",
"pyarrow>=14.0.1; extra == \"server\"",
"python-dateutil; extra == \"server\"",
"python-jose[cryptography]; extra == \"server\"",
"python-multipart; extra == \"server\"",
"redis; extra == \"server\"",
"sparse>=0.15.5; extra == \"server\"",
"sqlalchemy[asyncio]>=2; extra == \"server\"",
"stamina; extra == \"server\"",
"starlette>=0.48.0; extra == \"server\"",
"tifffile; extra == \"server\"",
"uvicorn[standard]; extra == \"server\"",
"websockets; extra == \"server\"",
"xarray; extra == \"server\"",
"zarr; extra == \"server\"",
"zstandard; extra == \"server\"",
"ndindex; extra == \"sparse\"",
"numba>=0.59.0; extra == \"sparse\"",
"pyarrow>=14.0.1; extra == \"sparse\"",
"sparse>=0.15.5; extra == \"sparse\"",
"dask[array]; extra == \"xarray\"",
"pandas; extra == \"xarray\"",
"pyarrow; extra == \"xarray\"",
"xarray; extra == \"xarray\""
] | [] | [] | [] | [
"Homepage, https://github.com/bluesky/tiled",
"Documentation, https://blueskyproject.io/tiled",
"Bug Tracker, https://github.com/bluesky/tiled/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:55:58.772588 | tiled-0.2.5.tar.gz | 2,503,027 | 97/5c/75d97f6ea98a15317bde383fc0013f2a6d7947f79318e9cc2e20f1483f20/tiled-0.2.5.tar.gz | source | sdist | null | false | 8cbe885be9c8a01fa91b05af07e686ab | 56f3807b3595b08a4bf27556b97b3d45589087b52c47a3f049ad7c5a35700163 | 975c75d97f6ea98a15317bde383fc0013f2a6d7947f79318e9cc2e20f1483f20 | null | [
"LICENSE"
] | 1,438 |
2.4 | lmnr-claude-code-proxy | 0.1.14 | Thin proxy server for Claude Code and Laminar tracing | # Laminar Claude Code proxy
This library contains a tiny Rust proxy for Claude Code. It accepts requests from
the claude-agent-sdk that carry a trace id and span id, so that spans produced by
the proxy can be associated with the parent trace context.
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0"
] | [] | [] | [] | [] | maturin/1.12.3 | 2026-02-20T18:55:51.055340 | lmnr_claude_code_proxy-0.1.14-cp314-cp314t-win32.whl | 927,163 | 2a/99/bfad2632d029c70db1f7b05eb35c6d698b1033f5f3326c3484264f74f2dd/lmnr_claude_code_proxy-0.1.14-cp314-cp314t-win32.whl | cp314 | bdist_wheel | null | false | 9620c8bbb88efdc8f663b28dbfae9ef3 | 48075c62210df486ce87d65f74fc69148e6515e319505401e1d94fbff33c0206 | 2a99bfad2632d029c70db1f7b05eb35c6d698b1033f5f3326c3484264f74f2dd | null | [] | 5,798 |
2.4 | biomcp-python | 0.7.3 | Biomedical Model Context Protocol Server | # BioMCP: Biomedical Model Context Protocol
> _Version 0.7.3 is the final release of the Python-based BioMCP server.
> The project has been re-architected in Rust to be more agent-friendly —
> using fewer tokens, consuming less context window, running faster, and
> adding new data sources. The Python source code is preserved here under
> the `v0.7.3` tag._
BioMCP is an open source (MIT License) toolkit that empowers AI assistants and
agents with specialized biomedical knowledge. Built following the Model Context
Protocol (MCP), it connects AI systems to authoritative biomedical data
sources, enabling them to answer questions about clinical trials, scientific
literature, and genomic variants with precision and depth.
[](https://www.youtube.com/watch?v=bKxOWrWUUhM)
## MCPHub Certification
BioMCP is certified by [MCPHub](https://mcphub.com/mcp-servers/genomoncology/biomcp). This certification ensures that BioMCP follows best practices for Model Context Protocol implementation and provides reliable biomedical data access.
## Why BioMCP?
While Large Language Models have broad general knowledge, they often lack
specialized domain-specific information or access to up-to-date resources.
BioMCP bridges this gap for biomedicine by:
- Providing **structured access** to clinical trials, biomedical literature,
and genomic variants
- Enabling **natural language queries** to specialized databases without
requiring knowledge of their specific syntax
- Supporting **biomedical research** workflows through a consistent interface
- Functioning as an **MCP server** for AI assistants and agents
## Biomedical Data Sources
BioMCP integrates with multiple biomedical data sources:
### Literature Sources
- **PubTator3/PubMed** - Peer-reviewed biomedical literature with entity annotations
- **bioRxiv/medRxiv** - Preprint servers for biology and health sciences
- **Europe PMC** - Open science platform including preprints
### Clinical & Genomic Sources
- **ClinicalTrials.gov** - Clinical trial registry and results database
- **NCI Clinical Trials Search API** - National Cancer Institute's curated cancer trials database
- Advanced search filters (biomarkers, prior therapies, brain metastases)
- Organization and intervention databases
- Disease vocabulary with synonyms
- **BioThings Suite** - Comprehensive biomedical data APIs:
- **MyVariant.info** - Consolidated genetic variant annotation
- **MyGene.info** - Real-time gene annotations and information
- **MyDisease.info** - Disease ontology and synonym information
- **MyChem.info** - Drug/chemical annotations and properties
- **TCGA/GDC** - The Cancer Genome Atlas for cancer variant data
- **1000 Genomes** - Population frequency data via Ensembl
- **cBioPortal** - Cancer genomics portal with mutation occurrence data
- **OncoKB** - Precision oncology knowledge base for clinical variant interpretation (demo server with BRAF, ROS1, TP53)
- Therapeutic implications and FDA-approved treatments
- Oncogenicity and mutation effect annotations
- Works immediately without authentication
### Regulatory & Safety Sources
- **OpenFDA** - FDA regulatory and safety data:
- **Drug Adverse Events (FAERS)** - Post-market drug safety reports
- **Drug Labels (SPL)** - Official prescribing information
- **Device Events (MAUDE)** - Medical device adverse events, with genomic device filtering
## Available MCP Tools
BioMCP provides 24 specialized tools for biomedical research:
### Core Tools (3)
#### 1. Think Tool (ALWAYS USE FIRST!)
**CRITICAL**: The `think` tool MUST be your first step for ANY biomedical research task.
```python
# Start analysis with sequential thinking
think(
thought="Breaking down the query about BRAF mutations in melanoma...",
thoughtNumber=1,
totalThoughts=3,
nextThoughtNeeded=True
)
```
The sequential thinking tool helps:
- Break down complex biomedical problems systematically
- Plan multi-step research approaches
- Track reasoning progress
- Ensure comprehensive analysis
#### 2. Search Tool
The search tool supports two modes:
##### Unified Query Language (Recommended)
Use the `query` parameter with structured field syntax for powerful cross-domain searches:
```python
# Simple natural language
search(query="BRAF melanoma")
# Field-specific search
search(query="gene:BRAF AND trials.condition:melanoma")
# Complex queries
search(query="gene:BRAF AND variants.significance:pathogenic AND articles.date:>2023")
# Get searchable fields schema
search(get_schema=True)
# Explain how a query is parsed
search(query="gene:BRAF", explain_query=True)
```
**Supported Fields:**
- **Cross-domain**: `gene:`, `variant:`, `disease:`
- **Trials**: `trials.condition:`, `trials.phase:`, `trials.status:`, `trials.intervention:`
- **Articles**: `articles.author:`, `articles.journal:`, `articles.date:`
- **Variants**: `variants.significance:`, `variants.rsid:`, `variants.frequency:`
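To make the field syntax concrete, here is a minimal, stdlib-only sketch of how a fielded query string could be split into `(field, value)` terms. This is illustrative only — BioMCP's actual parser handles quoting, ranges, and nested boolean logic that this sketch ignores.

```python
# Illustrative sketch only -- not BioMCP's real query parser.
def parse_field_query(query: str) -> list[tuple[str, str]]:
    """Split 'gene:BRAF AND trials.condition:melanoma' into pairs."""
    terms = []
    for token in query.split(" AND "):
        field, _, value = token.partition(":")
        if value:  # fielded term, e.g. "gene:BRAF"
            terms.append((field.strip(), value.strip()))
        else:      # bare keyword falls back to full-text search
            terms.append(("text", field.strip()))
    return terms

print(parse_field_query("gene:BRAF AND trials.condition:melanoma"))
# [('gene', 'BRAF'), ('trials.condition', 'melanoma')]
```

Using `explain_query=True` on a real `search()` call shows how BioMCP itself parses a given query.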
##### Domain-Based Search
Use the `domain` parameter with specific filters:
```python
# Search articles (includes automatic cBioPortal integration)
search(domain="article", genes=["BRAF"], diseases=["melanoma"])
# Search with mutation-specific cBioPortal data
search(domain="article", genes=["BRAF"], keywords=["V600E"])
search(domain="article", genes=["SRSF2"], keywords=["F57*"]) # Wildcard patterns
# Search trials
search(domain="trial", conditions=["lung cancer"], phase="3")
# Search variants
search(domain="variant", gene="TP53", significance="pathogenic")
```
**Note**: When searching articles with a gene parameter, cBioPortal data is automatically included:
- Gene-level summaries show mutation frequency across cancer studies
- Mutation-specific searches (e.g., "V600E") show study-level occurrence data
- Cancer types are dynamically resolved from cBioPortal API
#### 3. Fetch Tool
Retrieve full details for a single article, trial, or variant:
```python
# Fetch article details (supports both PMID and DOI)
fetch(domain="article", id="34567890") # PMID
fetch(domain="article", id="10.1101/2024.01.20.23288905") # DOI
# Fetch trial with all sections
fetch(domain="trial", id="NCT04280705", detail="all")
# Fetch variant details
fetch(domain="variant", id="rs113488022")
```
**Domain-specific options:**
- **Articles**: `detail="full"` retrieves full text if available
- **Trials**: `detail` can be "protocol", "locations", "outcomes", "references", or "all"
- **Variants**: Always returns full details
### Individual Tools (21)
For users who prefer direct access to specific functionality, BioMCP also provides 21 individual tools:
#### Article Tools (2)
- **article_searcher**: Search PubMed/PubTator3 and preprints
- **article_getter**: Fetch detailed article information (supports PMID and DOI)
#### Trial Tools (6)
- **trial_searcher**: Search ClinicalTrials.gov or NCI CTS API (via source parameter)
- **trial_getter**: Fetch all trial details from either source
- **trial_protocol_getter**: Fetch protocol information only (ClinicalTrials.gov)
- **trial_references_getter**: Fetch trial publications (ClinicalTrials.gov)
- **trial_outcomes_getter**: Fetch outcome measures and results (ClinicalTrials.gov)
- **trial_locations_getter**: Fetch site locations and contacts (ClinicalTrials.gov)
#### Variant Tools (2)
- **variant_searcher**: Search MyVariant.info database
- **variant_getter**: Fetch comprehensive variant details
#### NCI-Specific Tools (6)
- **nci_organization_searcher**: Search NCI's organization database
- **nci_organization_getter**: Get organization details by ID
- **nci_intervention_searcher**: Search NCI's intervention database (drugs, devices, procedures)
- **nci_intervention_getter**: Get intervention details by ID
- **nci_biomarker_searcher**: Search biomarkers used in trial eligibility criteria
- **nci_disease_searcher**: Search NCI's controlled vocabulary of cancer conditions
#### Gene, Disease & Drug Tools (3)
- **gene_getter**: Get real-time gene information from MyGene.info
- **disease_getter**: Get disease definitions and synonyms from MyDisease.info
- **drug_getter**: Get drug/chemical information from MyChem.info
**Note**: All individual tools that search by gene automatically include cBioPortal summaries when the `include_cbioportal` parameter is True (default). Trial searches can expand disease conditions with synonyms when `expand_synonyms` is True (default).
## Quick Start
### For Claude Desktop Users
1. **Install `uv`** if you don't have it (recommended):
```bash
# MacOS
brew install uv
# Windows/Linux
pip install uv
```
2. **Configure Claude Desktop**:
- Open Claude Desktop settings
- Navigate to Developer section
- Click "Edit Config" and add:
```json
{
  "mcpServers": {
    "biomcp": {
      "command": "uv",
      "args": ["run", "--with", "biomcp-python", "biomcp", "run"]
    }
  }
}
```
- Restart Claude Desktop and start chatting about biomedical topics!
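If you prefer to script the config change, the snippet below merges the same server entry into an existing config file. The path shown is the macOS default for Claude Desktop and is an assumption — adjust it for your OS.

```python
# Sketch: merge the biomcp server entry into Claude Desktop's config.
# CONFIG_PATH is the macOS default location (an assumption -- adjust per OS).
import json
from pathlib import Path

CONFIG_PATH = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"

entry = {
    "command": "uv",
    "args": ["run", "--with", "biomcp-python", "biomcp", "run"],
}

config = json.loads(CONFIG_PATH.read_text()) if CONFIG_PATH.exists() else {}
config.setdefault("mcpServers", {})["biomcp"] = entry
print(json.dumps(config, indent=2))
# Persist with: CONFIG_PATH.write_text(json.dumps(config, indent=2))
```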
### Python Package Installation
```bash
# Using pip
pip install biomcp-python
# Using uv (recommended for faster installation)
uv pip install biomcp-python
# Run directly without installation
uv run --with biomcp-python biomcp trial search --condition "lung cancer"
```
## Configuration
### Environment Variables
BioMCP supports optional environment variables for enhanced functionality:
```bash
# cBioPortal API authentication (optional)
export CBIO_TOKEN="your-api-token" # For authenticated access
export CBIO_BASE_URL="https://www.cbioportal.org/api" # Custom API endpoint
# OncoKB demo server (optional - advanced users only)
# By default: Uses free demo server with BRAF, ROS1, TP53 (no setup required)
# For full gene access: Set ONCOKB_TOKEN from your OncoKB license
# export ONCOKB_TOKEN="your-oncokb-token" # www.oncokb.org/account/settings
# Performance tuning
export BIOMCP_USE_CONNECTION_POOL="true" # Enable HTTP connection pooling (default: true)
export BIOMCP_METRICS_ENABLED="false" # Enable performance metrics (default: false)
```
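For reference, this is how a client might interpret those boolean flags. The variable names come from the list above; the parsing logic itself is a stdlib-only sketch, not BioMCP's actual implementation.

```python
# Illustrative env-var parsing; variable names are from the README,
# the accepted truthy values are an assumption.
import os

def env_flag(name: str, default: bool) -> bool:
    """Interpret '1', 'true', or 'yes' (any case) as True."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes"}

use_pool = env_flag("BIOMCP_USE_CONNECTION_POOL", default=True)
metrics = env_flag("BIOMCP_METRICS_ENABLED", default=False)
cbio_token = os.environ.get("CBIO_TOKEN")  # None -> unauthenticated access
```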
## Running BioMCP Server
BioMCP supports multiple transport protocols to suit different deployment scenarios:
### Local Development (STDIO)
For direct integration with Claude Desktop or local MCP clients:
```bash
# Default STDIO mode for local development
biomcp run
# Or explicitly specify STDIO
biomcp run --mode stdio
```
### HTTP Server Mode
BioMCP supports multiple HTTP transport protocols:
#### Legacy SSE Transport (Worker Mode)
For backward compatibility with existing SSE clients:
```bash
biomcp run --mode worker
# Server available at http://localhost:8000/sse
```
#### Streamable HTTP Transport (Recommended)
The new MCP-compliant Streamable HTTP transport provides optimal performance and standards compliance:
```bash
biomcp run --mode streamable_http
# Custom host and port
biomcp run --mode streamable_http --host 127.0.0.1 --port 8080
```
Features of Streamable HTTP transport:
- Single `/mcp` endpoint for all operations
- Dynamic response mode (JSON for quick operations, SSE for long-running)
- Session management support (future)
- Full MCP specification compliance (2025-03-26)
- Better scalability for cloud deployments
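Clients talk to the `/mcp` endpoint using ordinary JSON-RPC 2.0 messages. As a hedged sketch, an `initialize` request might look like the following — the payload shape follows the MCP specification, while the client name and host/port are illustrative assumptions:

```python
# Sketch of an MCP 'initialize' request a client could POST to /mcp.
# Shape follows the MCP JSON-RPC spec; clientInfo values are placeholders.
import json

initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
}
body = json.dumps(initialize_request)
# POST body to http://localhost:8000/mcp with Content-Type: application/json
```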
### Deployment Options
#### Docker
```bash
# Build the Docker image locally
docker build -t biomcp:latest .
# Run the container
docker run -p 8000:8000 biomcp:latest biomcp run --mode streamable_http
```
#### Cloudflare Workers
The worker mode can be deployed to Cloudflare Workers for global edge deployment.
Note: All APIs work without authentication, but tokens may provide higher rate limits.
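For example, cBioPortal documents bearer-token authentication, so a client could attach the token only when one is configured. Treat the header construction below as an illustrative sketch rather than BioMCP's own code:

```python
# Sketch: attach a cBioPortal API token only when CBIO_TOKEN is set.
# Bearer auth is cBioPortal's documented scheme; this helper is illustrative.
import os

def cbio_headers() -> dict[str, str]:
    headers = {"Accept": "application/json"}
    token = os.environ.get("CBIO_TOKEN")
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return headers
```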
## Command Line Interface
BioMCP provides a comprehensive CLI for direct database interaction:
```bash
# Get help
biomcp --help
# Run the MCP server
biomcp run
# Article search examples
biomcp article search --gene BRAF --disease Melanoma # Includes preprints by default
biomcp article search --gene BRAF --no-preprints # Exclude preprints
biomcp article get 21717063 --full
# Clinical trial examples
biomcp trial search --condition "Lung Cancer" --phase PHASE3
biomcp trial search --condition melanoma --source nci --api-key YOUR_KEY # Use NCI API
biomcp trial get NCT04280705 Protocol
biomcp trial get NCT04280705 --source nci --api-key YOUR_KEY # Get from NCI
# Variant examples with external annotations
biomcp variant search --gene TP53 --significance pathogenic
biomcp variant get rs113488022 # Includes TCGA, 1000 Genomes, and cBioPortal data by default
biomcp variant get rs113488022 --no-external # Core annotations only
# OncoKB integration (uses free demo server automatically)
biomcp variant search --gene BRAF --include-oncokb # Works with BRAF, ROS1, TP53
# Gene information with functional enrichment
biomcp gene get TP53 --enrich pathway
biomcp gene get BRCA1 --enrich ontology
biomcp gene get EGFR --enrich celltypes
# NCI-specific examples (requires NCI API key)
biomcp organization search "MD Anderson" --api-key YOUR_KEY
biomcp organization get ORG123456 --api-key YOUR_KEY
biomcp intervention search pembrolizumab --api-key YOUR_KEY
biomcp intervention search --type Device --api-key YOUR_KEY
biomcp biomarker search "PD-L1" --api-key YOUR_KEY
biomcp disease search melanoma --source nci --api-key YOUR_KEY
```
## Testing & Verification
Test your BioMCP setup with the MCP Inspector:
```bash
npx @modelcontextprotocol/inspector uv run --with biomcp-python biomcp run
```
This opens a web interface where you can explore and test all available tools.
## Enterprise Version: OncoMCP
OncoMCP extends BioMCP with GenomOncology's enterprise-grade precision oncology
platform (POP), providing:
- **HIPAA-Compliant Deployment**: Secure on-premise options
- **Real-Time Trial Matching**: Up-to-date status and arm-level matching
- **Healthcare Integration**: Seamless EHR and data warehouse connectivity
- **Curated Knowledge Base**: 15,000+ trials and FDA approvals
- **Sophisticated Patient Matching**: Using integrated clinical and molecular
profiles
- **Advanced NLP**: Structured extraction from unstructured text
- **Comprehensive Biomarker Processing**: Mutation and rule processing
Learn more: [GenomOncology](https://genomoncology.com/)
## MCP Registries
[](https://smithery.ai/server/@genomoncology/biomcp)
<a href="https://glama.ai/mcp/servers/@genomoncology/biomcp">
<img width="380" height="200" src="https://glama.ai/mcp/servers/@genomoncology/biomcp/badge" />
</a>
## Example Use Cases
### Gene Information Retrieval
```python
# Get comprehensive gene information
gene_getter(gene_id_or_symbol="TP53")
# Returns: Official name, summary, aliases, links to databases
```
### Disease Synonym Expansion
```python
# Get disease information with synonyms
disease_getter(disease_id_or_name="GIST")
# Returns: "gastrointestinal stromal tumor" and other synonyms
# Search trials with automatic synonym expansion
trial_searcher(conditions=["GIST"], expand_synonyms=True)
# Searches for: GIST OR "gastrointestinal stromal tumor" OR "GI stromal tumor"
```
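Conceptually, synonym expansion maps each condition to its known names and ORs them together. The sketch below hard-codes a tiny synonym table as a stand-in for the MyDisease.info lookup that BioMCP performs:

```python
# Illustrative synonym expansion; the SYNONYMS table is a hard-coded
# stand-in for a live MyDisease.info lookup.
SYNONYMS = {
    "GIST": ["gastrointestinal stromal tumor", "GI stromal tumor"],
}

def expand_conditions(conditions: list[str]) -> str:
    terms = []
    for cond in conditions:
        names = [cond] + SYNONYMS.get(cond, [])
        terms.append(" OR ".join(f'"{n}"' if " " in n else n for n in names))
    return " AND ".join(f"({t})" for t in terms) if len(terms) > 1 else terms[0]

print(expand_conditions(["GIST"]))
# GIST OR "gastrointestinal stromal tumor" OR "GI stromal tumor"
```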
### Integrated Biomedical Research
```python
# 1. Always start with thinking
think(thought="Analyzing BRAF V600E in melanoma treatment", thoughtNumber=1)
# 2. Get gene context
gene_getter("BRAF")
# 3. Search for pathogenic variants with OncoKB clinical interpretation (uses free demo server)
variant_searcher(gene="BRAF", hgvsp="V600E", significance="pathogenic", include_oncokb=True)
# 4. Find relevant clinical trials with disease expansion
trial_searcher(conditions=["melanoma"], interventions=["BRAF inhibitor"])
```
## Documentation
For comprehensive documentation, visit [https://biomcp.org](https://biomcp.org)
### Developer Guides
- [HTTP Client Guide](./docs/http-client-guide.md) - Using the centralized HTTP client
- [Migration Examples](./docs/migration-examples.md) - Migrating from direct HTTP usage
- [Error Handling Guide](./docs/error-handling.md) - Comprehensive error handling patterns
- [Integration Testing Guide](./docs/integration-testing.md) - Best practices for reliable integration tests
- [Third-Party Endpoints](./THIRD_PARTY_ENDPOINTS.md) - Complete list of external APIs used
- [Testing Guide](./docs/development/testing.md) - Running tests and understanding test categories
## Development
### Running Tests
```bash
# Run all tests (including integration tests)
make test
# Run only unit tests (excluding integration tests)
uv run python -m pytest tests -m "not integration"
# Run only integration tests
uv run python -m pytest tests -m "integration"
```
**Note**: Integration tests make real API calls and may fail due to network issues or rate limiting.
In CI/CD, integration tests are run separately and allowed to fail without blocking the build.
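The `-m "integration"` filter relies on the marker being registered with pytest. A minimal registration is shown below as a hypothetical `pytest.ini`; the project may declare it in `pyproject.toml` instead.

```ini
[pytest]
markers =
    integration: tests that make real network calls to external APIs
```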
## BioMCP Examples Repo
Looking to see BioMCP in action?
Check out the companion repository:
👉 **[biomcp-examples](https://github.com/genomoncology/biomcp-examples)**
It contains real prompts, AI-generated research briefs, and evaluation runs across different models.
Use it to explore capabilities, compare outputs, or benchmark your own setup.
Have a cool example of your own?
**We’d love for you to contribute!** Just fork the repo and submit a PR with your experiment.
## License
This project is licensed under the MIT License.
| text/markdown | null | Ian Maurer <imaurer@gmail.com> | null | null | null | python | [
<!-- SeleniumBase Docs -->
<meta property="og:site_name" content="SeleniumBase">
<meta property="og:title" content="SeleniumBase: Python Web Automation and E2E Testing" />
<meta property="og:description" content="Fast, easy, and reliable Web/UI testing with Python." />
<meta property="og:keywords" content="Python, pytest, selenium, webdriver, testing, automation, seleniumbase, framework, dashboard, recorder, reports, screenshots">
<meta property="og:image" content="https://seleniumbase.github.io/cdn/img/mac_sb_logo_5b.png" />
<link rel="icon" href="https://seleniumbase.github.io/img/logo7.png" />
<h1>SeleniumBase</h1>
<p align="center"><a href="https://github.com/seleniumbase/SeleniumBase/"><img src="https://seleniumbase.github.io/cdn/img/super_logo_sb3.png" alt="SeleniumBase" title="SeleniumBase" width="350" /></a></p>
<p align="center" class="hero__title"><b>Automate, test, and scrape the web — on your own terms.<br /></b></p>
<p align="center"><a href="https://pypi.python.org/pypi/seleniumbase" target="_blank"><img src="https://img.shields.io/pypi/v/seleniumbase.svg?color=3399EE" alt="PyPI version" /></a> <a href="https://github.com/seleniumbase/SeleniumBase/releases" target="_blank"><img src="https://img.shields.io/github/v/release/seleniumbase/SeleniumBase.svg?color=22AAEE" alt="GitHub version" /></a> <a href="https://seleniumbase.io"><img src="https://img.shields.io/badge/docs-seleniumbase.io-11BBAA.svg" alt="SeleniumBase Docs" /></a></p>
<p align="center"><a href="https://github.com/seleniumbase/SeleniumBase/actions" target="_blank"><img src="https://github.com/seleniumbase/SeleniumBase/workflows/CI%20build/badge.svg" alt="SeleniumBase GitHub Actions" /></a> <a href="https://github.com/seleniumbase/SeleniumBase/stargazers"><img src="https://img.shields.io/github/stars/seleniumbase/SeleniumBase?style=social"></a> <a href="https://pepy.tech/projects/seleniumbase?timeRange=threeMonths&category=version&includeCIDownloads=true&granularity=daily&viewType=line&versions=*" target="_blank"><img src="https://static.pepy.tech/badge/seleniumbase" alt="SeleniumBase PyPI downloads" /></a> <a href="https://discord.gg/EdhQTn3EyE" target="_blank"><img src="https://img.shields.io/discord/727927627830001734?color=7289DA&label=Discord&logo=discord&logoColor=white"/></a></p>
<p align="center">
<a href="#python_installation">🚀 Start</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/features_list.md">🏰 Features</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/customizing_test_runs.md">🎛️ Options</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/ReadMe.md">📚 Examples</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/seleniumbase/console_scripts/ReadMe.md">🪄 Scripts</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/mobile_testing.md">📱 Mobile</a>
<br />
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/method_summary.md">📘 The API</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/syntax_formats.md"> 🔠 SyntaxFormats</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/recorder_mode.md">🔴 Recorder</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/example_logs/ReadMe.md">📊 Dashboard</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/locale_codes.md">🗾 Locale</a>
<br />
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/commander.md">🎖️ GUI</a> |
<a href="https://seleniumbase.io/demo_page">📰 TestPage</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/uc_mode.md">👤 UC Mode</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/cdp_mode/ReadMe.md">🐙 CDP Mode</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/chart_maker/ReadMe.md">📶 Charts</a> |
<a href="https://seleniumbase.io/devices/?url=seleniumbase.com">🖥️ Farm</a>
<br />
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/how_it_works.md">👁️ How</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/tree/master/examples/migration/raw_selenium">🚝 Migration</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/cdp_mode/playwright/ReadMe.md">🎭 Stealthy Playwright</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/master_qa/ReadMe.md">🛂 MasterQA</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/tour_examples/ReadMe.md">🚎 Tours</a>
<br />
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/integrations/github/workflows/ReadMe.md">🤖 CI/CD</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/js_package_manager.md">❇️ JSMgr</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/translations.md">🌏 Translator</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/presenter/ReadMe.md">🎞️ Presenter</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/visual_testing/ReadMe.md">🖼️ Visual</a> |
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/case_plans.md">🗂️ CPlans</a>
<br />
</p>
<p>SeleniumBase is a browser automation framework that empowers software teams to innovate faster and handle modern web challenges with ease. With stealth options like CDP Mode, you'll avoid the usual restrictions imposed by websites deploying bot-detection services.</p>
--------
📚 Learn from [**over 200 examples** in the **SeleniumBase/examples/** folder](https://github.com/seleniumbase/SeleniumBase/tree/master/examples).
🐙 Stealth modes: <a translate="no" href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/uc_mode.md"><b>UC Mode</b></a> and <a translate="no" href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/cdp_mode/ReadMe.md"><b>CDP Mode</b></a> can bypass bot-detection, solve CAPTCHAs, and call advanced methods from the <a href="https://chromedevtools.github.io/devtools-protocol/" translate="no">Chrome Devtools Protocol</a>.
ℹ️ Many examples run with raw <code translate="no"><b>python</b></code>, although some use <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/syntax_formats.md">Syntax Formats</a> that expect <a href="https://docs.pytest.org/en/latest/how-to/usage.html" translate="no"><b>pytest</b></a> (a Python unit-testing framework included with SeleniumBase that can discover, collect, and run tests automatically).
--------
<p align="left">📗 This script performs a Google Search using SeleniumBase UC Mode + CDP Mode:<br /><a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/raw_google.py">SeleniumBase/examples/raw_google.py</a> (Results are saved as PDF, HTML, and PNG)</p>
```python
from seleniumbase import SB
with SB(uc=True, test=True) as sb:
    url = "https://google.com/ncr"
    sb.activate_cdp_mode(url)
    sb.type('[title="Search"]', "SeleniumBase GitHub page")
    sb.click("div:not([jsname]) > * > input")
    sb.sleep(2)
    print(sb.get_page_title())
    sb.sleep(1)  # Wait for the "AI Overview" result
    if sb.is_text_visible("Generating"):
        sb.wait_for_text("AI Overview")
    sb.save_as_pdf_to_logs()  # Saved to ./latest_logs/
    sb.save_page_source_to_logs()
    sb.save_screenshot_to_logs()
```
> `python raw_google.py`
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/raw_google.py"><img src="https://seleniumbase.github.io/cdn/img/google_sb_result.png" alt="SeleniumBase on Google" title="SeleniumBase on Google" width="480" /></a>
--------
<p align="left">📗 Here's a script that bypasses Cloudflare's challenge page with UC Mode + CDP Mode: <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/cdp_mode/raw_gitlab.py">SeleniumBase/examples/cdp_mode/raw_gitlab.py</a></p>
```python
from seleniumbase import SB
with SB(uc=True, test=True, locale="en") as sb:
    url = "https://gitlab.com/users/sign_in"
    sb.activate_cdp_mode(url)
    sb.sleep(2)
    sb.solve_captcha()
    # (The rest is for testing and demo purposes)
    sb.assert_text("Username", '[for="user_login"]', timeout=3)
    sb.assert_element('label[for="user_login"]')
    sb.highlight('button:contains("Sign in")')
    sb.highlight('h1:contains("GitLab")')
    sb.post_message("SeleniumBase wasn't detected", duration=4)
```
<img src="https://seleniumbase.github.io/other/cf_sec.jpg" title="SeleniumBase" width="332"> <img src="https://seleniumbase.github.io/other/gitlab_bypass.png" title="SeleniumBase" width="288">
<p align="left">📙 There's also SeleniumBase's "Pure CDP Mode", which doesn't use WebDriver or Selenium at all: <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/cdp_mode/raw_cdp_gitlab.py">SeleniumBase/examples/cdp_mode/raw_cdp_gitlab.py</a></p>
```python
from seleniumbase import sb_cdp
url = "https://gitlab.com/users/sign_in"
sb = sb_cdp.Chrome(url, incognito=True)
sb.sleep(2)
sb.solve_captcha()
sb.highlight('h1:contains("GitLab")')
sb.highlight('button:contains("Sign in")')
sb.driver.stop()
```
--------
<p align="left">📗 Here's <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/test_get_swag.py">SeleniumBase/examples/test_get_swag.py</a>, which tests an e-commerce site:</p>
```python
from seleniumbase import BaseCase
BaseCase.main(__name__, __file__) # Call pytest
class MyTestClass(BaseCase):
    def test_swag_labs(self):
        self.open("https://www.saucedemo.com")
        self.type("#user-name", "standard_user")
        self.type("#password", "secret_sauce\n")
        self.assert_element("div.inventory_list")
        self.click('button[name*="backpack"]')
        self.click("#shopping_cart_container a")
        self.assert_text("Backpack", "div.cart_item")
        self.click("button#checkout")
        self.type("input#first-name", "SeleniumBase")
        self.type("input#last-name", "Automation")
        self.type("input#postal-code", "77123")
        self.click("input#continue")
        self.click("button#finish")
        self.assert_text("Thank you for your order!")
```
> `pytest test_get_swag.py`
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/test_get_swag.py"><img src="https://seleniumbase.github.io/cdn/gif/fast_swag_2.gif" alt="SeleniumBase Test" title="SeleniumBase Test" width="480" /></a>
> (The default browser is `--chrome` if not set.)
--------
<p align="left">📗 Here's <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/test_coffee_cart.py" target="_blank">SeleniumBase/examples/test_coffee_cart.py</a>, which verifies an e-commerce site:</p>
```zsh
pytest test_coffee_cart.py --demo
```
<p align="left"><a href="https://seleniumbase.io/coffee/" target="_blank"><img src="https://seleniumbase.github.io/cdn/gif/coffee_cart.gif" width="480" alt="SeleniumBase Coffee Cart Test" title="SeleniumBase Coffee Cart Test" /></a></p>
> <p>(<code translate="no">--demo</code> mode slows down tests and highlights actions)</p>
--------
<a id="multiple_examples"></a>
<p align="left">📗 Here's <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/test_demo_site.py" target="_blank">SeleniumBase/examples/test_demo_site.py</a>, which covers several actions:</p>
```zsh
pytest test_demo_site.py
```
<p align="left"><a href="https://seleniumbase.io/demo_page" target="_blank"><img src="https://seleniumbase.github.io/cdn/gif/demo_page_5.gif" width="480" alt="SeleniumBase Example" title="SeleniumBase Example" /></a></p>
> Easy to type, click, select, toggle, drag & drop, and more.
(For more examples, see the <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/ReadMe.md">SeleniumBase/examples/</a> folder.)
--------
<p align="left">📓 Here's a high-level stealthy architecture overview of SeleniumBase:</p>
<img src="https://seleniumbase.github.io/other/sb_stealth.png" width="650" alt="High-Level Stealthy Architecture Overview" title="High-Level Stealthy Architecture Overview" />
(For maximum stealth, use <a translate="no" href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/cdp_mode/ReadMe.md">CDP Mode</a>, which is used by <a translate="no" href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/cdp_mode/playwright/ReadMe.md">Stealthy Playwright Mode</a>)
--------
<p align="left"><a href="https://github.com/seleniumbase/SeleniumBase/"><img src="https://seleniumbase.github.io/cdn/img/super_logo_sb3.png" alt="SeleniumBase" title="SeleniumBase" width="232" /></a></p>
<blockquote>
<p dir="auto"><strong>Explore the README:</strong></p>
<ul dir="auto">
<li><a href="#install_seleniumbase" ><strong>Get Started / Installation</strong></a></li>
<li><a href="#basic_example_and_usage"><strong>Basic Example / Usage</strong></a></li>
<li><a href="#common_methods" ><strong>Common Test Methods</strong></a></li>
<li><a href="#fun_facts" ><strong>Fun Facts / Learn More</strong></a></li>
<li><a href="#demo_mode_and_debugging"><strong>Demo Mode / Debugging</strong></a></li>
<li><a href="#command_line_options" ><strong>Command-line Options</strong></a></li>
<li><a href="#directory_configuration"><strong>Directory Configuration</strong></a></li>
<li><a href="#seleniumbase_dashboard" ><strong>SeleniumBase Dashboard</strong></a></li>
<li><a href="#creating_visual_reports"><strong>Generating Test Reports</strong></a></li>
</ul>
</blockquote>
--------
<details>
<summary> ▶️ How is <b>SeleniumBase</b> different from raw Selenium? (<b>click to expand</b>)</summary>
<div>
<p>💡 SeleniumBase is a Python framework for browser automation and testing. SeleniumBase uses <a href="https://www.w3.org/TR/webdriver2/#endpoints" target="_blank">Selenium/WebDriver</a> APIs and incorporates test-runners such as <code translate="no">pytest</code>, <code translate="no">pynose</code>, and <code translate="no">behave</code> to provide organized structure, test discovery, test execution, test state (<i>eg. passed, failed, or skipped</i>), and command-line options for changing default settings (<i>eg. browser selection</i>). With raw Selenium, you would need to set up your own options-parser for configuring tests from the command-line.</p>
<p>💡 SeleniumBase's driver manager gives you more control over automatic driver downloads. (Use <code translate="no">--driver-version=VER</code> with your <code translate="no">pytest</code> run command to specify the version.) By default, SeleniumBase will download a driver version that matches your major browser version if not set.</p>
<p>💡 SeleniumBase automatically detects between CSS Selectors and XPath, which means you don't need to specify the type of selector in your commands (<i>but optionally you could</i>).</p>
<p>💡 SeleniumBase methods often perform multiple actions in a single method call. For example, <code translate="no">self.type(selector, text)</code> does the following:<br />1. Waits for the element to be visible.<br />2. Waits for the element to be interactive.<br />3. Clears the text field.<br />4. Types in the new text.<br />5. Presses Enter/Submit if the text ends in <code translate="no">"\n"</code>.<br />With raw Selenium, those actions require multiple method calls.</p>
<p>💡 SeleniumBase uses default timeout values when not set:<br />
✅ <code translate="no">self.click("button")</code><br />
With raw Selenium, methods would fail instantly (<i>by default</i>) if an element needed more time to load:<br />
❌ <code translate="no">self.driver.find_element(by="css selector", value="button").click()</code><br />
(Reliable code is better than unreliable code.)</p>
<p>💡 SeleniumBase lets you change the explicit timeout values of methods:<br />
✅ <code translate="no">self.click("button", timeout=10)</code><br />
With raw Selenium, that requires more code:<br />
❌ <code translate="no">WebDriverWait(driver, 10).until(EC.element_to_be_clickable(("css selector", "button"))).click()</code><br />
(Simple code is better than complex code.)</p>
<p>💡 SeleniumBase gives you clean error output when a test fails. With raw Selenium, error messages can get very messy.</p>
<p>💡 SeleniumBase gives you the option to generate a dashboard and reports for tests. It also saves screenshots from failing tests to the <code translate="no">./latest_logs/</code> folder. Raw <a href="https://www.selenium.dev/documentation/webdriver/" translate="no" target="_blank">Selenium</a> does not have these options out-of-the-box.</p>
<p>💡 SeleniumBase includes desktop GUI apps for running tests, such as <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/commander.md" translate="no">SeleniumBase Commander</a> for <code translate="no">pytest</code> and <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/behave_bdd/ReadMe.md" translate="no">SeleniumBase Behave GUI</a> for <code translate="no">behave</code>.</p>
<p>💡 SeleniumBase has its own <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/recorder_mode.md">Recorder / Test Generator</a> for creating tests from manual browser actions.</p>
<p>💡 SeleniumBase comes with <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/case_plans.md">test case management software, ("CasePlans")</a>, for organizing tests and step descriptions.</p>
<p>💡 SeleniumBase includes tools for <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/chart_maker/ReadMe.md">building data apps, ("ChartMaker")</a>, which can generate JavaScript from Python.</p>
</div>
</details>
--------
<p>📚 <b>Learn about different ways of writing tests:</b></p>
<p align="left">📗📝 Here's <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/test_simple_login.py">test_simple_login.py</a>, which uses <code translate="no"><a href="https://github.com/seleniumbase/SeleniumBase/blob/master/seleniumbase/fixtures/base_case.py">BaseCase</a></code> class inheritance, and runs with <a href="https://docs.pytest.org/en/latest/how-to/usage.html">pytest</a> or <a href="https://github.com/mdmintz/pynose">pynose</a>. (Use <code translate="no">self.driver</code> to access Selenium's raw <code translate="no">driver</code>.)</p>
```python
from seleniumbase import BaseCase
BaseCase.main(__name__, __file__)
class TestSimpleLogin(BaseCase):
    def test_simple_login(self):
        self.open("seleniumbase.io/simple/login")
        self.type("#username", "demo_user")
        self.type("#password", "secret_pass")
        self.click('a:contains("Sign in")')
        self.assert_exact_text("Welcome!", "h1")
        self.assert_element("img#image1")
        self.highlight("#image1")
        self.click_link("Sign out")
        self.assert_text("signed out", "#top_message")
```
<p align="left">📘📝 Here's <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/raw_login_sb.py">raw_login_sb.py</a>, which uses the <b><code translate="no">SB</code></b> Context Manager. Runs with pure <code translate="no">python</code>. (Use <code translate="no">sb.driver</code> to access Selenium's raw <code translate="no">driver</code>.)</p>
```python
from seleniumbase import SB
with SB() as sb:
    sb.open("seleniumbase.io/simple/login")
    sb.type("#username", "demo_user")
    sb.type("#password", "secret_pass")
    sb.click('a:contains("Sign in")')
    sb.assert_exact_text("Welcome!", "h1")
    sb.assert_element("img#image1")
    sb.highlight("#image1")
    sb.click_link("Sign out")
    sb.assert_text("signed out", "#top_message")
```
<p align="left">📙📝 Here's <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/raw_login_driver.py">raw_login_driver.py</a>, which uses the <b><code translate="no">Driver</code></b> Manager. Runs with pure <code translate="no">python</code>. (The <code>driver</code> is an improved version of Selenium's raw <code translate="no">driver</code>, with more methods.)</p>
```python
from seleniumbase import Driver
driver = Driver()
try:
    driver.open("seleniumbase.io/simple/login")
    driver.type("#username", "demo_user")
    driver.type("#password", "secret_pass")
    driver.click('a:contains("Sign in")')
    driver.assert_exact_text("Welcome!", "h1")
    driver.assert_element("img#image1")
    driver.highlight("#image1")
    driver.click_link("Sign out")
    driver.assert_text("signed out", "#top_message")
finally:
    driver.quit()
```
--------
<a id="python_installation"></a>
<h2><img src="https://seleniumbase.github.io/cdn/img/python_logo.png" title="SeleniumBase" width="42" /> Set up Python & Git:</h2>
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/pypi/pyversions/seleniumbase.svg?color=FACE42" title="Supported Python Versions" /></a>
🔵 Add <b><a href="https://www.python.org/downloads/">Python</a></b> and <b><a href="https://git-scm.com/">Git</a></b> to your System PATH.
🔵 Using a <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/virtualenv_instructions.md">Python virtual env</a> is recommended.
<a id="install_seleniumbase"></a>
<h2><img src="https://seleniumbase.github.io/img/logo7.png" title="SeleniumBase" width="32" /> Install SeleniumBase:</h2>
**You can install `seleniumbase` from [PyPI](https://pypi.org/project/seleniumbase/) or [GitHub](https://github.com/seleniumbase/SeleniumBase):**
🔵 **How to install `seleniumbase` from PyPI:**
```zsh
pip install seleniumbase
```
* (Add `--upgrade` OR `-U` to upgrade SeleniumBase.)
* (Add `--force-reinstall` to upgrade indirect packages.)
🔵 **How to install `seleniumbase` from a GitHub clone:**
```zsh
git clone https://github.com/seleniumbase/SeleniumBase.git
cd SeleniumBase/
pip install -e .
```
🔵 **How to upgrade an existing install from a GitHub clone:**
```zsh
git pull
pip install -e .
```
🔵 **Type `seleniumbase` or `sbase` to verify that SeleniumBase was installed successfully:**
```zsh
___ _ _ ___
/ __| ___| |___ _ _ (_)_ _ _ __ | _ ) __ _ ______
\__ \/ -_) / -_) ' \| | \| | ' \ | _ \/ _` (_-< -_)
|___/\___|_\___|_||_|_|\_,_|_|_|_\|___/\__,_/__|___|
----------------------------------------------------
╭──────────────────────────────────────────────────╮
│ * USAGE: "seleniumbase [COMMAND] [PARAMETERS]" │
│ * OR: "sbase [COMMAND] [PARAMETERS]" │
│ │
│ COMMANDS: PARAMETERS / DESCRIPTIONS: │
│ get / install [DRIVER_NAME] [OPTIONS] │
│ methods (List common Python methods) │
│ options (List common pytest options) │
│ behave-options (List common behave options) │
│ gui / commander [OPTIONAL PATH or TEST FILE] │
│ behave-gui (SBase Commander for Behave) │
│ caseplans [OPTIONAL PATH or TEST FILE] │
│ mkdir [DIRECTORY] [OPTIONS] │
│ mkfile [FILE.py] [OPTIONS] │
│ mkrec / codegen [FILE.py] [OPTIONS] │
│ recorder (Open Recorder Desktop App.) │
│ record (If args: mkrec. Else: App.) │
│ mkpres [FILE.py] [LANG] │
│ mkchart [FILE.py] [LANG] │
│ print [FILE] [OPTIONS] │
│ translate [SB_FILE.py] [LANG] [ACTION] │
│ convert [WEBDRIVER_UNITTEST_FILE.py] │
│ extract-objects [SB_FILE.py] │
│ inject-objects [SB_FILE.py] [OPTIONS] │
│ objectify [SB_FILE.py] [OPTIONS] │
│ revert-objects [SB_FILE.py] [OPTIONS] │
│ encrypt / obfuscate │
│ decrypt / unobfuscate │
│ proxy (Start a basic proxy server) │
│ download server (Get Selenium Grid JAR file) │
│ grid-hub [start|stop] [OPTIONS] │
│ grid-node [start|stop] --hub=[HOST/IP] │
│ │
│ * EXAMPLE => "sbase get chromedriver stable" │
│ * For command info => "sbase help [COMMAND]" │
│ * For info on all commands => "sbase --help" │
╰──────────────────────────────────────────────────╯
```
<h3>🔵 Downloading webdrivers:</h3>
✅ SeleniumBase automatically downloads webdrivers as needed, such as `chromedriver`.
<div></div>
<details>
<summary> ▶️ Here's sample output from a chromedriver download. (<b>click to expand</b>)</summary>
```zsh
*** chromedriver to download = 141.0.7390.78 (Latest Stable)
Downloading chromedriver-mac-arm64.zip from:
https://storage.googleapis.com/chrome-for-testing-public/141.0.7390.78/mac-arm64/chromedriver-mac-arm64.zip ...
Download Complete!
Extracting ['chromedriver'] from chromedriver-mac-arm64.zip ...
Unzip Complete!
The file [chromedriver] was saved to:
~/github/SeleniumBase/seleniumbase/drivers/
chromedriver
Making [chromedriver 141.0.7390.78] executable ...
[chromedriver 141.0.7390.78] is now ready for use!
```
</details>
<a id="basic_example_and_usage"></a>
<h2><img src="https://seleniumbase.github.io/img/logo7.png" title="SeleniumBase" width="32" /> Basic Example / Usage:</h2>
🔵 If you've cloned SeleniumBase, you can run tests from the [examples/](https://github.com/seleniumbase/SeleniumBase/tree/master/examples) folder.
<p align="left">Here's <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/my_first_test.py">my_first_test.py</a>:</p>
```zsh
cd examples/
pytest my_first_test.py
```
<a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/my_first_test.py"><img src="https://seleniumbase.github.io/cdn/gif/fast_swag_2.gif" alt="SeleniumBase Test" title="SeleniumBase Test" width="480" /></a>
<p align="left"><b>Here's the full code for <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/my_first_test.py">my_first_test.py</a>:</b></p>
```python
from seleniumbase import BaseCase
BaseCase.main(__name__, __file__)
class MyTestClass(BaseCase):
    def test_swag_labs(self):
        self.open("https://www.saucedemo.com")
        self.type("#user-name", "standard_user")
        self.type("#password", "secret_sauce\n")
        self.assert_element("div.inventory_list")
        self.assert_exact_text("Products", "span.title")
        self.click('button[name*="backpack"]')
        self.click("#shopping_cart_container a")
        self.assert_exact_text("Your Cart", "span.title")
        self.assert_text("Backpack", "div.cart_item")
        self.click("button#checkout")
        self.type("#first-name", "SeleniumBase")
        self.type("#last-name", "Automation")
        self.type("#postal-code", "77123")
        self.click("input#continue")
        self.assert_text("Checkout: Overview")
        self.assert_text("Backpack", "div.cart_item")
        self.assert_text("29.99", "div.inventory_item_price")
        self.click("button#finish")
        self.assert_exact_text("Thank you for your order!", "h2")
        self.assert_element('img[alt="Pony Express"]')
        self.js_click("a#logout_sidebar_link")
        self.assert_element("div#login_button_container")
```
* By default, **[CSS Selectors](https://www.w3schools.com/cssref/css_selectors.asp)** are used for finding page elements.
* If you're new to CSS Selectors, games like [CSS Diner](http://flukeout.github.io/) can help you learn.
* For more reading, [here's an advanced guide on CSS attribute selectors](https://developer.mozilla.org/en-US/docs/Web/CSS/Attribute_selectors).
<a id="common_methods"></a>
<h3><img src="https://seleniumbase.github.io/img/logo7.png" title="SeleniumBase" width="32" /> Here are some common SeleniumBase methods:</h3>
```python
self.open(url) # Navigate the browser window to the URL.
self.type(selector, text) # Update the field with the text.
self.click(selector) # Click the element with the selector.
self.click_link(link_text) # Click the link containing text.
self.go_back() # Navigate back to the previous URL.
self.select_option_by_text(dropdown_selector, option)
self.hover_and_click(hover_selector, click_selector)
self.drag_and_drop(drag_selector, drop_selector)
self.get_text(selector) # Get the text from the element.
self.get_current_url() # Get the URL of the current page.
self.get_page_source() # Get the HTML of the current page.
self.get_attribute(selector, attribute) # Get element attribute.
self.get_title() # Get the title of the current page.
self.switch_to_frame(frame) # Switch into the iframe container.
self.switch_to_default_content() # Leave the iframe container.
self.open_new_window() # Open a new window in the same browser.
self.switch_to_window(window) # Switch to the browser window.
self.switch_to_default_window() # Switch to the original window.
self.get_new_driver(OPTIONS) # Open a new driver with OPTIONS.
self.switch_to_driver(driver) # Switch to the browser driver.
self.switch_to_default_driver() # Switch to the original driver.
self.wait_for_element(selector) # Wait until element is visible.
self.is_element_visible(selector) # Return element visibility.
self.is_text_visible(text, selector) # Return text visibility.
self.sleep(seconds) # Do nothing for the given amount of time.
self.save_screenshot(name) # Save a screenshot in .png format.
self.assert_element(selector) # Verify the element is visible.
self.assert_text(text, selector) # Verify text in the element.
self.assert_exact_text(text, selector) # Verify text is exact.
self.assert_title(title) # Verify the title of the web page.
self.assert_downloaded_file(file) # Verify file was downloaded.
self.assert_no_404_errors() # Verify there are no broken links.
self.assert_no_js_errors() # Verify there are no JS errors.
```
🔵 For the complete list of SeleniumBase methods, see: <b><a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/method_summary.md">Method Summary</a></b>
<a id="fun_facts"></a>
<h2><img src="https://seleniumbase.github.io/img/logo7.png" title="SeleniumBase" width="32" /> Fun Facts / Learn More:</h2>
<p>✅ SeleniumBase automatically handles common <a href="https://www.selenium.dev/documentation/webdriver/" target="_blank">WebDriver</a> actions such as launching web browsers before tests, saving screenshots during failures, and closing web browsers after tests.</p>
<p>✅ SeleniumBase lets you customize tests via <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/customizing_test_runs.md">command-line options</a>.</p>
<p>✅ SeleniumBase uses simple syntax for commands. Example:</p>
```python
self.type("input", "dogs\n") # (The "\n" presses ENTER)
```
Most SeleniumBase scripts can be run with <code translate="no">pytest</code>, <code translate="no">pynose</code>, or pure <code translate="no">python</code>. Not all test runners can run all test formats. For example, tests that use the `sb` pytest fixture can only be run with `pytest`. (See <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/syntax_formats.md">Syntax Formats</a>) There's also a <a href="https://behave.readthedocs.io/en/stable/gherkin.html#features" target="_blank">Gherkin</a> test format that runs with <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/behave_bdd/ReadMe.md">behave</a>.
```zsh
pytest coffee_cart_tests.py --rs
pytest test_sb_fixture.py --demo
pytest test_suite.py --rs --html=report.html --dashboard
pynose basic_test.py --mobile
pynose test_suite.py --headless --report --show-report
python raw_sb.py
python raw_test_scripts.py
behave realworld.feature
behave calculator.feature -D rs -D dashboard
```
<p>✅ <code translate="no">pytest</code> includes automatic test discovery. If you don't specify a specific file or folder to run, <code translate="no">pytest</code> will automatically search through all subdirectories for tests to run based on the following criteria:</p>
* Python files that start with `test_` or end with `_test.py`.
* Python methods that start with `test_`.
With a SeleniumBase [pytest.ini](https://github.com/seleniumbase/SeleniumBase/blob/master/examples/pytest.ini) file present, you can modify the default discovery settings. The Python class name can be anything because `seleniumbase.BaseCase` inherits from `unittest.TestCase`, which triggers autodiscovery.
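The default filename criteria above can be expressed as a tiny predicate. This is an illustrative sketch, not pytest's actual collection logic:

```python
def matches_default_discovery(filename):
    """Check a filename against pytest's default discovery patterns:
    files named test_*.py or *_test.py."""
    return (filename.startswith("test_") and filename.endswith(".py")) \
        or filename.endswith("_test.py")

print(matches_default_discovery("test_simple_login.py"))  # → True
print(matches_default_discovery("login_test.py"))         # → True
print(matches_default_discovery("conftest.py"))           # → False
```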
<p>✅ You can do a pre-flight check to see which tests would get discovered by <code translate="no">pytest</code> before the actual run:</p>
```zsh
pytest --co -q
```
<p>✅ You can be more specific when calling <code translate="no">pytest</code> or <code translate="no">pynose</code> on a file:</p>
```zsh
pytest [FILE_NAME.py]::[CLASS_NAME]::[METHOD_NAME]
pynose [FILE_NAME.py]:[CLASS_NAME].[METHOD_NAME]
```
<p>✅ No More Flaky Tests! SeleniumBase methods automatically wait for page elements to finish loading before interacting with them (<i>up to a timeout limit</i>). This means <b>you no longer need random <span><code translate="no">time.sleep()</code></span> statements</b> in your scripts.</p>
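The auto-wait behavior follows a standard poll-until-ready pattern. Here's a minimal generic sketch of that idea (not SeleniumBase's internal implementation): keep checking a condition until it succeeds or a timeout expires.

```python
import time

def wait_for(condition, timeout=10.0, poll=0.1):
    """Poll a condition until it returns a truthy value or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("Condition not met within %.1fs" % timeout)

# Example: a condition that becomes true after a few polls.
state = {"count": 0}
def ready():
    state["count"] += 1
    return state["count"] >= 3

print(wait_for(ready, timeout=2.0, poll=0.01))  # → True
```

SeleniumBase applies this kind of wait automatically before element interactions, which is why explicit `time.sleep()` calls become unnecessary.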
<img src="https://img.shields.io/badge/Flaky%20Tests%3F-%20NO%21-11BBDD.svg" alt="NO MORE FLAKY TESTS!" />
✅ SeleniumBase supports all major browsers and operating systems:
<p><b>Browsers:</b> Chrome, Edge, Firefox, and Safari.</p>
<p><b>Systems:</b> Linux/Ubuntu, macOS, and Windows.</p>
✅ SeleniumBase works on all popular CI/CD platforms:
<p><a href="https://github.com/seleniumbase/SeleniumBase/blob/master/integrations/github/workflows/ReadMe.md"><img alt="GitHub Actions integration" src="https://img.shields.io/badge/GitHub_Actions-12B2C2.svg?logo=GitHubActions&logoColor=CFFFC2" /></a> <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/integrations/azure/jenkins/ReadMe.md"><img alt="Jenkins integration" src="https://img.shields.io/badge/Jenkins-32B242.svg?logo=jenkins&logoColor=white" /></a> <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/integrations/azure/azure_pipelines/ReadMe.md"><img alt="Azure integration" src="https://img.shields.io/badge/Azure-2288EE.svg?logo=AzurePipelines&logoColor=white" /></a> <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/integrations/google_cloud/ReadMe.md"><img alt="Google Cloud integration" src="https://img.shields.io/badge/Google_Cloud-11CAE8.svg?logo=GoogleCloud&logoColor=EE0066" /></a> <a href="#utilizing_advanced_features"><img alt="AWS integration" src="https://img.shields.io/badge/AWS-4488DD.svg?logo=AmazonAWS&logoColor=FFFF44" /></a> <a href="https://en.wikipedia.org/wiki/Personal_computer" target="_blank"><img alt="Your Computer" src="https://img.shields.io/badge/💻_Your_Computer-44E6E6.svg" /></a></p>
<p>✅ SeleniumBase includes an automated/manual hybrid solution called <b><a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/master_qa/ReadMe.md">MasterQA</a></b> to speed up manual testing with automation while manual testers handle validation.</p>
<p>✅ SeleniumBase supports <a href="https://github.com/seleniumbase/SeleniumBase/tree/master/examples/offline_examples">running tests while offline</a> (<i>assuming webdrivers have previously been downloaded when online</i>).</p>
<p>✅ For a full list of SeleniumBase features, <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/features_list.md">Click Here</a>.</p>
<a id="demo_mode_and_debugging"></a>
<h2><img src="https://seleniumbase.github.io/img/logo7.png" title="SeleniumBase" width="32" /> Demo Mode / Debugging:</h2>
🔵 <b>Demo Mode</b> helps you see what a test is doing. If a test is moving too fast for your eyes, run it in <b>Demo Mode</b> to pause the browser briefly between actions, highlight page elements being acted on, and display assertions:
```zsh
pytest my_first_test.py --demo
```
🔵 `time.sleep(seconds)` can be used to make a test wait at a specific spot:
```python
import time; time.sleep(3) # Do nothing for 3 seconds.
```
🔵 **Debug Mode** with Python's built-in **[pdb](https://docs.python.org/3/library/pdb.html)** library helps you debug tests:
```python
import pdb; pdb.set_trace()
import pytest; pytest.set_trace()
breakpoint() # Shortcut for "import pdb; pdb.set_trace()"
```
> (**`pdb`** commands: `n`, `c`, `s`, `u`, `d` => `next`, `continue`, `step`, `up`, `down`)
🔵 To pause an active test that throws an exception or error, (*and keep the browser window open while **Debug Mode** begins in the console*), add **`--pdb`** as a `pytest` option:
```zsh
pytest test_fail.py --pdb
```
🔵 To start tests in Debug Mode, add **`--trace`** as a `pytest` option:
```zsh
pytest test_coffee_cart.py --trace
```
<a href="https://github.com/mdmintz/pdbp"><img src="https://seleniumbase.github.io/cdn/gif/coffee_pdbp.gif" alt="SeleniumBase test with the pdbp (Pdb+) debugger" title="SeleniumBase test with the pdbp (Pdb+) debugger" /></a>
<a id="command_line_options"></a>
<h2>🔵 Command-line Options:</h2>
<a id="pytest_options"></a>
✅ Here are some useful command-line options that come with <code translate="no">pytest</code>:
```zsh
-v # Verbose mode. Prints the full name of each test and shows more details.
-q # Quiet mode. Print fewer details in the console output when running tests.
-x # Stop running the tests after the first failure is reached.
--html=report.html # Creates a detailed pytest-html report after tests finish.
--co | --collect-only # Show what tests would get run. (Without running them)
--co -q # (Both options together!) - Do a dry run with full test names shown.
-n=NUM # Multithread the tests using that many threads. (Speed up test runs!)
-s # See print statements. (Should be on by default with pytest.ini present.)
--junit-xml=report.xml # Creates a junit-xml report after tests finish.
--pdb # If a test fails, enter Post Mortem Debug Mode. (Don't use with CI!)
--trace # Enter Debug Mode at the beginning of each test. (Don't use with CI!)
-m=MARKER # Run tests with the specified pytest marker.
```
<a id="new_pytest_options"></a>
✅ SeleniumBase provides additional <code translate="no">pytest</code> command-line options for tests:
```zsh
--browser=BROWSER # (The web browser to use. Default: "chrome".)
--chrome # (Shortcut for "--browser=chrome". On by default.)
--edge # (Shortcut for "--browser=edge".)
--firefox # (Shortcut for "--browser=firefox".)
--safari # (Shortcut for "--browser=safari".)
--opera # (Shortcut for "--browser=opera".)
--brave # (Shortcut for "--browser=brave".)
--comet # (Shortcut for "--browser=comet".)
--atlas # (Shortcut for "--browser=atlas".)
--settings-file=FILE # (Override default SeleniumBase settings.)
--env=ENV # (Set the test env. Access with "self.env" in tests.)
--account=STR # (Set account. Access with "self.account" in tests.)
--data=STRING # (Extra test data. Access with "self.data" in tests.)
--var1=STRING # (Extra test data. Access with "self.var1" in tests.)
--var2=STRING # (Extra test data. Access with "self.var2" in tests.)
--var3=STRING # (Extra test data. Access with "self.var3" in tests.)
--variables=DICT # (Extra test data. Access with "self.variables".)
--user-data-dir=DIR # (Set the Chrome user data directory to use.)
--protocol=PROT | text/markdown | Michael Mintz | mdmintz@gmail.com | Michael Mintz | null | MIT | pytest, selenium, framework, automation, browser, testing, webdriver, seleniumbase, sbase, crawling, scraping | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Environment :: MacOS X",
"Environment :: Win32 (MS Windows)",
"Environment :: Web Environment",
"Framework :: Pytest",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Internet",
"Topic :: Internet :: WWW/HTTP :: Browsers",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Image Processing",
"Topic :: Scientific/Engineering :: Visualization",
"Topic :: Software Development",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Code Generators",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Testing",
"Topic :: Software Development :: Testing :: Acceptance",
"Topic :: Software Development :: Testing :: Traffic Generation",
"Topic :: Utilities"
] | [
"Windows"
] | https://github.com/seleniumbase/SeleniumBase | null | >=3.9 | [] | [] | [] | [
"pip>=26.0.1",
"packaging>=26.0",
"setuptools~=70.2; python_version < \"3.10\"",
"setuptools>=82.0.0; python_version >= \"3.10\"",
"wheel>=0.46.3",
"attrs>=25.4.0",
"certifi>=2026.1.4",
"exceptiongroup>=1.3.1",
"websockets~=15.0.1; python_version < \"3.10\"",
"websockets>=16.0; python_version >= \"3.10\"",
"filelock~=3.19.1; python_version < \"3.10\"",
"filelock>=3.24.3; python_version >= \"3.10\"",
"fasteners>=0.20",
"mycdp>=1.3.2",
"pynose>=1.5.5",
"platformdirs~=4.4.0; python_version < \"3.10\"",
"platformdirs>=4.9.2; python_version >= \"3.10\"",
"typing-extensions>=4.15.0",
"sbvirtualdisplay>=1.4.0",
"MarkupSafe>=3.0.3",
"Jinja2>=3.1.6",
"six>=1.17.0",
"parse>=1.21.1",
"parse-type>=0.6.6",
"colorama>=0.4.6",
"pyyaml>=6.0.3",
"pygments>=2.19.2",
"pyreadline3>=3.5.4; platform_system == \"Windows\"",
"tabcompleter>=1.4.0",
"pdbp>=1.8.2",
"idna>=3.11",
"chardet==5.2.0",
"charset-normalizer<4,>=3.4.4",
"urllib3<2,>=1.26.20; python_version < \"3.10\"",
"urllib3<3,>=1.26.20; python_version >= \"3.10\"",
"requests~=2.32.5",
"sniffio==1.3.1",
"h11==0.16.0",
"outcome==1.3.0.post0",
"trio<1,>=0.31.0; python_version < \"3.10\"",
"trio<1,>=0.33.0; python_version >= \"3.10\"",
"trio-websocket~=0.12.2",
"wsproto==1.2.0; python_version < \"3.10\"",
"wsproto~=1.3.2; python_version >= \"3.10\"",
"websocket-client~=1.9.0",
"selenium==4.32.0; python_version < \"3.10\"",
"selenium==4.41.0; python_version >= \"3.10\"",
"cssselect==1.3.0; python_version < \"3.10\"",
"cssselect<2,>=1.4.0; python_version >= \"3.10\"",
"nest-asyncio==1.6.0",
"sortedcontainers==2.4.0",
"execnet==2.1.1; python_version < \"3.10\"",
"execnet==2.1.2; python_version >= \"3.10\"",
"iniconfig==2.1.0; python_version < \"3.10\"",
"iniconfig==2.3.0; python_version >= \"3.10\"",
"pluggy==1.6.0",
"pytest==8.4.2; python_version < \"3.11\"",
"pytest==9.0.2; python_version >= \"3.11\"",
"pytest-html==4.0.2",
"pytest-metadata==3.1.1",
"pytest-ordering==0.6",
"pytest-rerunfailures==16.0.1; python_version < \"3.10\"",
"pytest-rerunfailures==16.1; python_version >= \"3.10\"",
"pytest-xdist==3.8.0",
"parameterized==0.9.0",
"behave==1.2.6",
"soupsieve~=2.8.3",
"beautifulsoup4~=4.14.3",
"pyotp==2.9.0",
"python-xlib==0.33; platform_system == \"Linux\"",
"PyAutoGUI>=0.9.54; platform_system == \"Linux\"",
"markdown-it-py==3.0.0; python_version < \"3.10\"",
"markdown-it-py==4.0.0; python_version >= \"3.10\"",
"mdurl==0.1.2",
"rich<15,>=14.3.3",
"allure-pytest>=2.13.5; extra == \"allure\"",
"allure-python-commons>=2.13.5; extra == \"allure\"",
"allure-behave>=2.13.5; extra == \"allure\"",
"coverage>=7.10.7; python_version < \"3.10\" and extra == \"coverage\"",
"coverage>=7.13.4; python_version >= \"3.10\" and extra == \"coverage\"",
"pytest-cov>=7.0.0; extra == \"coverage\"",
"flake8==7.3.0; extra == \"flake8\"",
"mccabe==0.7.0; extra == \"flake8\"",
"pyflakes==3.4.0; extra == \"flake8\"",
"pycodestyle==2.14.0; extra == \"flake8\"",
"ipdb==0.13.13; extra == \"ipdb\"",
"ipython==7.34.0; extra == \"ipdb\"",
"mss==10.1.0; extra == \"mss\"",
"pdfminer.six==20251107; python_version < \"3.10\" and extra == \"pdfminer\"",
"pdfminer.six==20260107; python_version >= \"3.10\" and extra == \"pdfminer\"",
"cryptography==46.0.5; extra == \"pdfminer\"",
"cffi==2.0.0; extra == \"pdfminer\"",
"pycparser==2.23; python_version < \"3.10\" and extra == \"pdfminer\"",
"pycparser==3.0; python_version >= \"3.10\" and extra == \"pdfminer\"",
"Pillow>=11.3.0; python_version < \"3.10\" and extra == \"pillow\"",
"Pillow>=12.1.1; python_version >= \"3.10\" and extra == \"pillow\"",
"pip-system-certs==4.0; platform_system == \"Windows\" and extra == \"pip-system-certs\"",
"proxy.py==2.4.3; extra == \"proxy\"",
"playwright>=1.58.0; extra == \"playwright\"",
"psutil>=7.2.2; extra == \"psutil\"",
"PyAutoGUI>=0.9.54; platform_system != \"Linux\" and extra == \"pyautogui\"",
"selenium-stealth==1.0.6; extra == \"selenium-stealth\"",
"selenium-wire==5.1.0; extra == \"selenium-wire\"",
"pyOpenSSL>=24.2.1; extra == \"selenium-wire\"",
"pyparsing>=3.1.4; extra == \"selenium-wire\"",
"Brotli==1.1.0; extra == \"selenium-wire\"",
"blinker==1.7.0; extra == \"selenium-wire\"",
"h2==4.1.0; extra == \"selenium-wire\"",
"hpack==4.0.0; extra == \"selenium-wire\"",
"hyperframe==6.0.1; extra == \"selenium-wire\"",
"kaitaistruct==0.10; extra == \"selenium-wire\"",
"pyasn1==0.6.1; extra == \"selenium-wire\"",
"zstandard>=0.23.0; extra == \"selenium-wire\""
] | [] | [] | [] | [
"Homepage, https://github.com/seleniumbase/SeleniumBase",
"Changelog, https://github.com/seleniumbase/SeleniumBase/releases",
"Download, https://pypi.org/project/seleniumbase/#files",
"Blog, https://seleniumbase.com/",
"Discord, https://discord.gg/EdhQTn3EyE",
"PyPI, https://pypi.org/project/seleniumbase/",
"Source, https://github.com/seleniumbase/SeleniumBase",
"Repository, https://github.com/seleniumbase/SeleniumBase",
"Documentation, https://seleniumbase.io/"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T18:55:36.630957 | seleniumbase-4.47.0.tar.gz | 653,578 | 83/1d/2d328a991f7c0d1ce36e8c2ff14e5af7f54decdf699a00cc7c5de7e011c4/seleniumbase-4.47.0.tar.gz | source | sdist | null | false | 6d8f9863e00782add882d9588f8dde58 | 2dcf365c6598c5a13d207483c9f0466ceb1c17ec45f8aafabfe9995f895973c9 | 831d2d328a991f7c0d1ce36e8c2ff14e5af7f54decdf699a00cc7c5de7e011c4 | null | [
"LICENSE"
] | 23,881 |
2.4 | g2cv-casm | 0.1.3 | CASM: Continuous Attack Surface Monitoring | # CASM
## Continuous Attack Surface Monitoring

Evidence-first attack surface monitoring with safe, scope-bound verification and run-over-run change tracking.
CASM helps security teams continuously monitor external exposure in authorized environments. It discovers assets, verifies HTTP/TLS posture, and compares each run against a baseline to show exactly what changed.
## Quick Start
```bash
# Install
pip install g2cv-casm
# Create a minimal scope and targets file
cat > scope.yaml <<'YAML'
engagement_id: quickstart
allowed_domains: [example.com]
allowed_ips: []
allowed_ports: [443]
allowed_protocols: [https]
seed_targets: [example.com]
max_rate: 5
max_concurrency: 2
active_allowed: false
auth_allowed: false
YAML
cat > targets.json <<'JSON'
{
"targets": [
{"url": "https://example.com", "method": "HEAD"}
]
}
JSON
# Run a unified scan
casm run unified --config scope.yaml --targets-file targets.json --dry-run false
# Compare with a previous run
casm diff --old runs/baseline/results.sarif --new runs/current/results.sarif
```
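Conceptually, the diff step boils down to set comparison over stable finding identifiers. A minimal illustrative sketch (not CASM's actual SARIF diff logic):

```python
def diff_findings(baseline, current):
    """Compare two runs' finding IDs and report what changed.

    `baseline` and `current` are iterables of stable finding identifiers
    (illustrative placeholders; CASM's real diff operates on SARIF results).
    """
    old, new = set(baseline), set(current)
    return {
        "added": sorted(new - old),
        "removed": sorted(old - new),
        "unchanged": sorted(old & new),
    }

print(diff_findings(
    baseline=["tls-weak-cipher", "missing-hsts"],
    current=["missing-hsts", "open-port-8080"],
))
# → {'added': ['open-port-8080'], 'removed': ['tls-weak-cipher'], 'unchanged': ['missing-hsts']}
```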
By default, CASM auto-resolves tool binaries in this order: bundled wheel tools,
local `hands/bin` (source tree), cache, then optional download configured with
`CASM_TOOL_DOWNLOAD_URL_TEMPLATE` and `CASM_TOOL_MANIFEST_URL`.
In a source checkout, if `hands/bin/<tool>` is missing and Go is installed,
CASM auto-builds the tool on first use.
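The resolution order above amounts to walking a list of candidate locations and taking the first hit. A simplified sketch (illustrative only; not CASM's actual resolver, and the paths are made-up examples):

```python
import os

def resolve_tool(name, candidates):
    """Return the first existing path for a tool binary, mirroring a
    bundled -> source-tree -> cache -> download resolution order."""
    for path in candidates:
        if os.path.exists(path):
            return path
    return None  # caller would fall back to downloading or building

print(resolve_tool("demo-tool", ["/no/such/path"]))  # → None
```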
## What CASM Does
- Discover exposed assets across HTTP, DNS, and TLS contexts.
- Verify web hardening signals and transport/security headers.
- Track change between scans with baseline-aware diffs.
- Report in SARIF, Markdown, PDF, and JSONL evidence streams.
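A SARIF log is, at its simplest, a JSON document with a version and a list of runs. Here's a minimal hand-built skeleton following the SARIF 2.1.0 structure (field values are illustrative placeholders, not CASM's actual output):

```python
import json

sarif_log = {
    "version": "2.1.0",
    "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
    "runs": [
        {
            "tool": {"driver": {"name": "casm", "rules": []}},
            "results": [
                {
                    "ruleId": "missing-hsts",
                    "level": "warning",
                    "message": {"text": "Strict-Transport-Security header not set."},
                }
            ],
        }
    ],
}

# Serialize for consumption by diff/report tooling.
print(json.dumps(sarif_log)[:40])
```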
## Safety by Default
- Authorization-first scope controls (domains, IPs, ports, protocols).
- Dry-run support, deterministic blocking reasons, and rate/concurrency guardrails.
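Authorization-first blocking can be pictured as a check of each target against the scope definition before any request is made. An illustrative sketch (not CASM's actual engine), using the field names from the `scope.yaml` in the Quick Start:

```python
from urllib.parse import urlparse

def check_in_scope(url, scope):
    """Return (allowed, reason) for a target URL against a scope definition."""
    parts = urlparse(url)
    host = parts.hostname or ""
    port = parts.port or {"https": 443, "http": 80}.get(parts.scheme, 0)
    if parts.scheme not in scope["allowed_protocols"]:
        return False, "protocol %r not in allowed_protocols" % parts.scheme
    if host not in scope["allowed_domains"]:
        return False, "domain %r not in allowed_domains" % host
    if port not in scope["allowed_ports"]:
        return False, "port %d not in allowed_ports" % port
    return True, "in scope"

scope = {
    "allowed_domains": ["example.com"],
    "allowed_ports": [443],
    "allowed_protocols": ["https"],
}
print(check_in_scope("https://example.com", scope))    # → (True, 'in scope')
print(check_in_scope("http://example.com", scope)[0])  # → False
```

Returning an explicit reason string is one way to get the deterministic blocking reasons described above.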
## Screenshots
Executive Summary (PDF)

Changes Since Last Scan (PDF)

## Documentation
- Full docs: `docs/` (or run `mkdocs serve`)
- Tutorials: `docs/tutorials/`
- CLI reference: `docs/reference/cli.md`
- Configuration reference: `docs/reference/configuration.md`
- Release guide: `docs/how-to/release-python-package.md`
- Security model: `docs/explanation/security-model.md`
## Project Notes
- Package name on PyPI: `g2cv-casm`
- CLI commands: `casm` and `g2cv-casm`
- Versioning is tag-driven (`vMAJOR.MINOR.PATCH`)
## Contributing and Security
- Contribution guide: `CONTRIBUTING.md`
- Security policy: `SECURITY.md`
- Code of conduct: `CODE_OF_CONDUCT.md`
## Support
If CASM is useful for your team, consider starring the repository.
It helps others discover the project and supports ongoing development.
## License
AGPL-3.0. See `LICENSE`.
Questions or partnerships: `contact@g2cv.com`
| text/markdown | null | null | null | null | AGPL-3.0-or-later | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"PyYAML==6.0.2",
"jsonschema==4.23.0",
"reportlab>=4.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:55:31.754120 | g2cv_casm-0.1.3.tar.gz | 12,296,019 | 65/2d/5915c90c9738e0f9dcf1c229446096be3b15e3a72a4a368e2257f3da3026/g2cv_casm-0.1.3.tar.gz | source | sdist | null | false | 381fc4c204be5a6766fa5665e3abca6c | 2851ef198dd3b5d5c34c499eebb70b3c7a3e40bc60e818e2130c7c836cc24370 | 652d5915c90c9738e0f9dcf1c229446096be3b15e3a72a4a368e2257f3da3026 | null | [
"LICENSE"
] | 196 |
2.4 | svs-core | 0.12.6 | Core library for SVS | # Self-Hosted Virtual Stack
**SVS is an open-source Python library for managing self-hosted services on a Linux server.**
[](https://pypi.org/project/svs-core/)
[](https://codecov.io/gh/kristiankunc/svs-core)
[](https://pre-commit.com/)
[](https://github.com/googleapis/release-please)
CI:
[](https://github.com/kristiankunc/svs-core/actions/workflows/publish.yml)
[](https://github.com/kristiankunc/svs-core/actions/workflows/test.yml)
## Docs
**For full docs, visit [svs.kristn.co.uk](https://svs.kristn.co.uk/)**
This readme contains a quick summary and development setup info.
## Goals
The goal of this project is to simplify deploying and managing self-hosted applications on a Linux server. It is inspired by [Portainer](https://www.portainer.io/) but aimed at beginner users. Under the hood, all applications are containerized using Docker. For ease of use, the library provides pre-configured service templates for popular self-hosted applications such as:
- MySQL
- PostgreSQL
- Django
- NGINX
- ...
## Technology overview
Every service runs in its own Docker container, and all of a user's services share the same Docker network, allowing them to communicate with each other easily without
1. exposing them to other users on the same server
2. having to use compose stacks and custom networks to allow cross-service communication.
## Features
Currently, the library is in early development and has the following features:
- [x] User management
- [x] Docker network management
- [x] Service management
- [x] Service templates
- [ ] CI/CD integration
- [ ] DB/System sync issues + recovery
- [x] Remote SSH access
## Running locally
Because this library accesses system files, creates Docker containers, and manages services, and because it is designed strictly for Linux servers, it is recommended to run it in an isolated environment.
The easiest way to achieve a reproducible environment is to use the included devcontainer configuration. Devcontainers allow you to run a containerized development environment with all dependencies installed. [See the devcontainer documentation](https://code.visualstudio.com/docs/devcontainers/containers).
The local devcontainer config creates the following compose stack:
1. A `python` devcontainer for the development environment.
1. A `postgres` database container for storing service data.
1. A `caddy` container to act as an HTTP proxy (needed only if testing domains locally).
This guide assumes you have chosen to use the devcontainer setup.
### Starting the devcontainer
To start the devcontainer, open the repository in Visual Studio Code and select "Reopen in Container" from the command palette. This will build the container and start it.
After attaching to the devcontainer, the dependencies will be automatically installed. After that's done, you can launch a new terminal which will have the virtual environment activated automatically.
You also need to run the [`install-dev.sh`](./install-dev.sh) script to configure your system for development. This script will create the required directories and configure permissions. It is a subset of the production install script.
After running the install script, switch to the `svs-admins` group by running
```bash
newgrp svs-admins
```
### Linting + Formatting
The devcontainer includes pre-configured linting and formatting tools for Visual Studio Code, and all files should be formatted on save. If you use a different editor, run the pre-commit hooks manually with `pre-commit run --all-files` to apply the formatting and linting rules.
### Running the tests
To run the tests, you can use the `pytest` command in the terminal. This will run all tests in the `tests` directory. You can also run individual test files or functions by specifying their paths.
Tests are split into unit, integration and cli tests. They can be run separately by using the `-m` flag with pytest:
```bash
pytest -m unit
pytest -m integration
pytest -m cli
```
### Running the docs
Python docstrings are used throughout the codebase to generate documentation. To generate the documentation, you can use the `zensical` command in the terminal. This will build the documentation and serve it locally.
To run the documentation server, you can use the following command:
```bash
zensical serve
```
| text/markdown | null | Kristián Kunc <diist7i4c@mozmail.com> | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"bcrypt==5.0.0",
"docker==7.1.0",
"httpx==0.28.1",
"typer==0.23.1",
"Django==6.0.2",
"dj_database_url==3.1.0",
"psycopg[binary]==3.3.2",
"black==26.1.0; extra == \"dev\"",
"isort==7.0.0; extra == \"dev\"",
"mypy==1.19.1; extra == \"dev\"",
"pre-commit==4.5.1; extra == \"dev\"",
"pytest==9.0.2; extra == \"dev\"",
"pytest-asyncio==1.3.0; extra == \"dev\"",
"pytest-cov==7.0.0; extra == \"dev\"",
"pytest-dotenv==0.5.2; extra == \"dev\"",
"pytest_mock==3.15.1; extra == \"dev\"",
"ruff==0.15.1; extra == \"dev\"",
"twine==6.2.0; extra == \"dev\"",
"pytest-django==4.11.1; extra == \"dev\"",
"pytest-xdist==3.8.0; extra == \"dev\"",
"zensical==0.0.23; extra == \"docs\"",
"mkdocstrings[python]==1.0.3; extra == \"docs\"",
"mkdocs-awesome-nav==3.3.0; extra == \"docs\"",
"mkdocs-pdf==0.1.2; extra == \"docs\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:55:15.111847 | svs_core-0.12.6.tar.gz | 61,581 | 4b/e5/0a4a44cb2b4299d538acecd3d8887bc655287002aaa09f116193f4d15328/svs_core-0.12.6.tar.gz | source | sdist | null | false | 2a728344f8a5351ddde200bec72e00e7 | 23751b7f281e3ddf7f7ef00ba912b0f5f1e6395cefbeb813490751ebe3c66702 | 4be50a4a44cb2b4299d538acecd3d8887bc655287002aaa09f116193f4d15328 | null | [
"LICENSE"
] | 166 |
2.4 | getgrip | 0.2.2 | A retrieval engine that learns your vocabulary, remembers what works, and knows when it doesn't have an answer. | # GRIP
**A retrieval engine that learns your vocabulary, remembers what works, and knows when it doesn't have an answer.**
No embedding models. No vector databases. No API keys. Just `pip install getgrip`.
## Try it in 60 seconds
```bash
pip install getgrip
getgrip # starts server on localhost:7878
```
```bash
# Ingest your code
curl -X POST localhost:7878/ingest \
-H "Content-Type: application/json" \
-d '{"source": "/path/to/your/code"}'
# Search
curl "localhost:7878/search?q=authentication+handler&top_k=5"
```
Open `http://localhost:7878` for the web UI.
## What makes GRIP different
| Feature | GRIP | Vector DB + Embeddings |
|---------|------|----------------------|
| Setup time | `pip install getgrip` | Model download + DB setup + API keys |
| Cold start | < 2 seconds | Minutes (model loading) |
| Search latency | < 5ms | 50-200ms |
| Learns your data | Yes (co-occurrence) | No |
| Remembers queries | Yes (auto-reinforce) | No |
| Confidence scoring | HIGH/MEDIUM/LOW/NONE | Score number |
| Works offline | Yes | Depends |
| Dependencies | 4 (fastapi, uvicorn, pydantic, numpy) | 10-30+ |
## Features
- **Co-occurrence expansion** — learns term relationships from your data without external models
- **Auto-remember** — reinforces successful queries, persists across restarts
- **Session context** — conversational memory across interactions
- **Confidence scoring** — returns HIGH/MEDIUM/LOW/NONE so your app knows when to say "I don't know"
- **Plugin architecture** — GitHub repos, local files, multiple LLM providers
- **9 API endpoints** — ingest, search, query (with LLM), config, sources, delete, stats, health, web UI
- **Fully offline** — no cloud dependency, air-gapped operation
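The confidence levels above let a caller decide when to answer at all. A hedged sketch of that decision (the level names come from this README; the `handle_result` helper is hypothetical, not part of GRIP's API):

```python
def handle_result(confidence: str, answer: str) -> str:
    """Answer only when retrieval confidence is high enough to trust."""
    if confidence in ("HIGH", "MEDIUM"):
        return answer
    # LOW/NONE: the engine is telling us it doesn't have a good match.
    return "I don't know."
```

An application built on GRIP would call this after each `/search` response, so "no answer" becomes an explicit, testable code path rather than a hallucinated guess.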
## Benchmarks
**BEIR (industry standard):** 0.58 NDCG@10 across 6 datasets, 2,771 queries — beats BM25 on all datasets.
**Real-world accuracy (3,000 queries):**
- Linux Kernel: 98.7%
- Wikipedia: 98.5%
- Project Gutenberg: 95.4%
- **Combined: 97.5%**
## Docker
```bash
docker run -d -p 7878:8000 \
-v grip-data:/data \
-v /your/code:/code \
griphub/grip:free
```
## Optional extras
```bash
pip install getgrip[pdf] # PDF parsing
pip install getgrip[rerank] # Cross-encoder reranking
pip install getgrip[llm] # LLM-powered answers (OpenAI, Anthropic, Ollama, Groq)
pip install getgrip[all] # Everything
```
## Pricing
| Tier | Chunks | Price |
|------|--------|-------|
| Free | 10,000 | $0 |
| Personal | 100,000 | $499/year |
| Team | 500,000 | $1,499/year |
| Professional | 5,000,000 | $4,999/year |
All tiers include all features. Licensed tiers preserve learning data across deletions.
**Website:** [getgrip.dev](https://getgrip.dev)
| text/markdown | Grip Hub | null | null | null | Proprietary | retrieval, search, rag, code-search, bm25, offline, no-embeddings, no-vector-db | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries",
"Topic :: Text Processing :: Indexing"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"fastapi>=0.95",
"uvicorn[standard]>=0.20",
"pydantic>=2.0",
"numpy>=1.20",
"cryptography>=41.0; extra == \"license\"",
"pypdf>=3.0; extra == \"pdf\"",
"sentence-transformers>=2.2; extra == \"rerank\"",
"requests>=2.28; extra == \"llm\"",
"openai>=1.0; extra == \"llm\"",
"anthropic>=0.18; extra == \"llm\"",
"groq>=0.4; extra == \"llm\"",
"cryptography>=41.0; extra == \"all\"",
"pypdf>=3.0; extra == \"all\"",
"sentence-transformers>=2.2; extra == \"all\"",
"requests>=2.28; extra == \"all\"",
"openai>=1.0; extra == \"all\"",
"anthropic>=0.18; extra == \"all\"",
"groq>=0.4; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://getgrip.dev",
"Documentation, https://github.com/Grip-Hub/getgrip.dev/blob/main/GUIDE.md",
"Repository, https://github.com/Grip-Hub/getgrip.dev",
"Bug Tracker, https://github.com/Grip-Hub/getgrip.dev/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T18:55:12.910003 | getgrip-0.2.2.tar.gz | 41,459 | e1/79/5e8570dc8e5c1400b6117516cbca8b2509dea1069973e2db268fde08db83/getgrip-0.2.2.tar.gz | source | sdist | null | false | 34d33dd5dd983224deb3c17d61554291 | 6d67491072fbc38ea2ac773e43344c31fb3a3c9eaea57a652691125eaeaeab70 | e1795e8570dc8e5c1400b6117516cbca8b2509dea1069973e2db268fde08db83 | null | [] | 166 |
2.4 | pulumi-esc-sdk | 0.13.0 | ESC (Environments, Secrets, Config) API | Pulumi ESC allows you to compose and manage hierarchical collections of configuration and secrets and consume them in various ways.
| text/markdown | OpenAPI Generator community | team@openapitools.org | null | null | Apache 2.0 | OpenAPI, OpenAPI-Generator, ESC (Environments, Secrets, Config) API | [] | [] | null | null | null | [] | [] | [] | [
"urllib3>=1.25.3",
"python-dateutil",
"pydantic>=2",
"typing-extensions>=4.7.1",
"pyyaml>=6.0.1"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T18:55:04.386419 | pulumi_esc_sdk-0.13.0.tar.gz | 47,090 | 94/2f/66b8dd61613b6e1c32ece1f8ae8de4909d285ea80c6ec94b2ca190211bcf/pulumi_esc_sdk-0.13.0.tar.gz | source | sdist | null | false | 8272e590a147aca6c668fa032162ae51 | 59a411fbf98a7882d08762c1bdbe49fbeecfa3e6a9cf7067817ed9760ab39388 | 942f66b8dd61613b6e1c32ece1f8ae8de4909d285ea80c6ec94b2ca190211bcf | null | [] | 146 |
2.4 | subliminal | 2.6.0 | Subtitles, faster than your thoughts | Subliminal
==========
Subtitles, faster than your thoughts.
.. image:: https://img.shields.io/pypi/v/subliminal.svg
:target: https://pypi.python.org/pypi/subliminal
:alt: Latest Version
.. image:: https://readthedocs.org/projects/subliminal/badge/?version=latest
:target: https://subliminal.readthedocs.org/
:alt: Documentation Status
.. image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/Diaoul/subliminal/python-coverage-comment-action-data/endpoint.json
:target: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/Diaoul/subliminal/python-coverage-comment-action-data/endpoint.json
:alt: Code coverage
.. image:: https://img.shields.io/github/license/Diaoul/subliminal.svg
:target: https://github.com/Diaoul/subliminal/blob/master/LICENSE
:alt: License
.. image:: https://img.shields.io/badge/discord-7289da.svg?style=flat-square&logo=discord
:alt: Discord
:target: https://discord.gg/kXW6sWte9N
:Project page: https://github.com/Diaoul/subliminal
:Documentation: https://subliminal.readthedocs.org/
:Community: https://discord.gg/kXW6sWte9N
Usage
-----
.. image:: https://github.com/Diaoul/subliminal/blob/main/docs/assets/demo.gif
:alt: Demo of the app CLI usage
CLI
^^^
Download English subtitles::
$ subliminal download -l en The.Big.Bang.Theory.S05E18.HDTV.x264-LOL.mp4
Collecting videos [####################################] 100%
1 video collected / 0 video ignored / 0 error
Downloading subtitles [####################################] 100%
Downloaded 1 subtitle
Configuration
^^^^^^^^^^^^^
Arguments can be passed to the CLI using a configuration file.
By default it looks for a `subliminal.toml` file in the default configuration folder
(see the CLI help for the exact platform-specific default path).
Or use the `-c` option to specify the path to the configuration file.
`Look for this example configuration file <https://github.com/Diaoul/subliminal/blob/main/docs/assets/config.toml>`__
or use the `generate_default_config` function from the `subliminal.cli` module to generate a
configuration file with all the options and their default values::
$ python -c "from subliminal.cli import generate_default_config; print(generate_default_config())"
Library
^^^^^^^
Download best subtitles in French and English for videos less than two weeks old in a video folder:
.. code:: python

    #!/usr/bin/env python
    from datetime import timedelta

    from babelfish import Language
    from subliminal import download_best_subtitles, region, save_subtitles, scan_videos

    # configure the cache
    region.configure('dogpile.cache.dbm', arguments={'filename': 'cachefile.dbm'})

    # scan for videos newer than 2 weeks and their existing subtitles in a folder
    videos = scan_videos('/video/folder', age=timedelta(weeks=2))

    # download best subtitles
    subtitles = download_best_subtitles(videos, {Language('eng'), Language('fra')})

    # save them to disk, next to the video
    for v in videos:
        save_subtitles(v, subtitles[v])
Docker
^^^^^^
Run subliminal in a docker container::
$ docker run --rm --name subliminal -v subliminal_cache:/usr/src/cache -v /tvshows:/tvshows -it ghcr.io/diaoul/subliminal download -l en /tvshows/The.Big.Bang.Theory.S05E18.HDTV.x264-LOL.mp4
Debugging
^^^^^^^^^
By default, subliminal output is minimal. Run with the `--debug` flag before the `download` command to get more information::
$ subliminal --debug download -l en The.Big.Bang.Theory.S05E18.HDTV.x264-LOL.mp4
Installation
------------
For better isolation from your system you should use a dedicated virtualenv.
The preferred installation method is to use `pipx <https://github.com/pypa/pipx>`_, which does that for you::
$ pipx install subliminal
Subliminal can also be installed as a regular Python module by running::
$ pip install --user subliminal
If you want to modify the code, `fork <https://github.com/Diaoul/subliminal/fork>`_ this repo,
clone your fork locally and install a development version::
$ git clone https://github.com/<my-username>/subliminal
$ cd subliminal
$ pip install --user -e '.[docs,types,tests,dev]'
To extract information about the video files, `subliminal` uses `knowit <https://github.com/ratoaq2/knowit>`_.
For better results, make sure one of its providers is installed, for instance `MediaInfo <https://mediaarea.net/en/MediaInfo>`_.
Integrations
------------
Subliminal integrates with various desktop file managers to enhance your workflow:
- **Nautilus/Nemo**: See the dedicated `project page <https://github.com/Diaoul/nautilus-subliminal>`_ for more information.
- **Dolphin**: See this `Gist <https://gist.github.com/maurocolella/03a9f02c56b1a90c64f05683e2840d57>`_ for more details.
Contributing
------------
We welcome contributions from the community! If you're interested in contributing, here are a few
ways you can get involved:
- **Browse Issues and Pull Requests**: Check out the existing `Issues <https://github.com/Diaoul/subliminal/issues>`_
and `Pull Requests <https://github.com/Diaoul/subliminal/pulls>`_ to see where you can help.
- **Report Bugs or Request Features**: If you encounter a bug or have a feature request, please create a GitHub Issue.
- **Follow the Contribution Guide**: For detailed instructions on how to contribute, please refer to our
`Contribution Guide <https://github.com/Diaoul/subliminal/blob/main/CONTRIBUTING.md>`_.
Your contributions are greatly appreciated and help make this project better for everyone!
| text/x-rst | null | Antoine Bertin <diaoulael@gmail.com> | null | Antoine Bertin <diaoulael@gmail.com>, getzze <getzze@gmail.com>, Patrycja Rosa <pypi@ptrcnull.me> | MIT | episode, movie, series, show, subtitle, subtitles, tv, video | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Multimedia :: Video",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"babelfish>=0.6.1",
"beautifulsoup4>=4.4.0",
"chardet>=5.0",
"click",
"click-option-group>=0.5.6",
"defusedxml>=0.7.1",
"dogpile-cache>=1.0",
"guessit>=3.0.0",
"knowit>=0.5.5",
"platformdirs>=3",
"pysubs2>=1.7",
"requests>=2.0",
"srt>=3.5",
"stevedore>=3.0",
"tomlkit>=0.13.2",
"pre-commit>=2.9.3; extra == \"dev\"",
"tox; extra == \"dev\"",
"sphinx-autodoc-typehints; extra == \"docs\"",
"sphinx-changelog; extra == \"docs\"",
"sphinx-rtd-theme>=2; extra == \"docs\"",
"sphinx<8.2; extra == \"docs\"",
"sphinxcontrib-programoutput; extra == \"docs\"",
"towncrier; extra == \"docs\"",
"vcrpy>=5; extra == \"docs\"",
"rarfile>=2.7; extra == \"rar\"",
"colorama; extra == \"tests\"",
"coverage[toml]>=7; extra == \"tests\"",
"importlib-metadata>=4.6; python_version < \"3.10\" and extra == \"tests\"",
"pytest-cov; extra == \"tests\"",
"pytest-xdist; extra == \"tests\"",
"pytest>=6.0; extra == \"tests\"",
"rarfile>=2.7; extra == \"tests\"",
"sympy; extra == \"tests\"",
"vcrpy>=5; extra == \"tests\"",
"win32-setctime; sys_platform == \"win32\" and extra == \"tests\"",
"mypy; extra == \"types\"",
"types-beautifulsoup4; extra == \"types\"",
"types-decorator; extra == \"types\"",
"types-requests; extra == \"types\""
] | [] | [] | [] | [
"homepage, https://github.com/Diaoul/subliminal",
"repository, https://github.com/Diaoul/subliminal",
"documentation, https://subliminal.readthedocs.org"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:55:03.279450 | subliminal-2.6.0.tar.gz | 4,003,884 | 56/05/3529ed61f1471fe7c01a6a14183e21c12f3ae09dc79f796962a484d91f28/subliminal-2.6.0.tar.gz | source | sdist | null | false | 4f86ab6f663067987536e07981a6d78b | e6e7aee1b218d543dcb3b7b2248ea0f92afc4c223ce3e7af8d2c3843e31bafe5 | 56053529ed61f1471fe7c01a6a14183e21c12f3ae09dc79f796962a484d91f28 | null | [
"LICENSE"
] | 1,528 |
2.4 | dagster-webserver | 1.12.15 | Web UI for dagster. | =================
Dagster Webserver
=================
Usage
~~~~~
E.g., in ``dagster_examples``:
.. code-block:: sh
dagster-webserver -p 3333
Running the dev UI:
.. code-block:: sh
NEXT_PUBLIC_BACKEND_ORIGIN="http://localhost:3333" yarn start
| text/x-rst | null | Dagster Labs <hello@dagsterlabs.com> | null | null | null | null | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"click<9.0,>=7.0",
"dagster==1.12.15",
"dagster-graphql==1.12.15",
"starlette!=0.36.0",
"uvicorn[standard]",
"nbconvert; extra == \"notebook\"",
"starlette[full]; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/dagster-io/dagster"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:55:00.400411 | dagster_webserver-1.12.15.tar.gz | 11,949,049 | 24/bb/5695216d8cbdcaaad2686fcda20a3269178c82d9872bcf3d95a58b3980e4/dagster_webserver-1.12.15.tar.gz | source | sdist | null | false | e13138b21d8c910218a92b97b43805e4 | f96ada7043cf4b6c9439e229a83650e415f968f1e1c1b87f5f478afc2b04354e | 24bb5695216d8cbdcaaad2686fcda20a3269178c82d9872bcf3d95a58b3980e4 | Apache-2.0 | [
"LICENSE"
] | 7,492 |
2.4 | mcp-common | 0.9.4 | Oneiric-Native foundation library providing battle-tested patterns for MCP (Model Context Protocol) servers, with YAML configuration, Rich UI, and CLI lifecycle management | # mcp-common




**Version:** 0.6.0 (Oneiric-Native)
**Status:** Production Ready
______________________________________________________________________
## Overview
mcp-common is an **Oneiric-native foundation library** for building production-grade MCP (Model Context Protocol) servers. It provides battle-tested patterns extracted from 9 production servers including crackerjack, session-mgmt-mcp, and fastblocks.
**🎯 What This Library Provides:**
- **Oneiric CLI Factory** (v0.3.3+) - Standardized server lifecycle with start/stop/restart/status/health commands
- **HTTP Client Adapter** - Connection pooling with httpx for 11x performance
- **Prompting/Notification Adapter** 🆕 - Unified cross-platform user interaction with automatic backend detection
- **Security Utilities** - API key validation (with 90% faster caching) and input sanitization (2x faster)
- **Rich Console UI** - Beautiful panels and notifications for server operations
- **Settings Management** - YAML + environment variable configuration (Pydantic-based)
- **Health Check System** - Production-ready health monitoring
- **Type-Safe** - Full Pydantic validation and type hints
- **Comprehensive Testing** - 615 tests with property-based and concurrency testing
**Design Principles:**
1. **Oneiric-Native** - Direct Pydantic, Rich library, and standard patterns
1. **Production-Ready** - Extracted from real production systems
1. **Layered Configuration** - YAML files + environment variables with clear priority
1. **Rich UI** - Professional console output with Rich panels
1. **Type-safe** - Full type hints with strict MyPy checking
1. **Well-Tested** - 90% coverage minimum
______________________________________________________________________
## 📚 Examples
See [`examples/`](./examples/) for complete production-ready examples:
### 1. CLI Server (Oneiric-Native) - NEW in v0.3.3
Demonstrates the **CLI factory** for standardized server lifecycle management:
- 5 lifecycle commands (start, stop, restart, status, health)
- PID file management with security validation
- Runtime health snapshots
- Graceful shutdown with signal handling
- Custom lifecycle handlers
```bash
cd examples
python cli_server.py start
python cli_server.py status
python cli_server.py health
python cli_server.py stop
```
### 2. Weather MCP Server (Oneiric-Native)
Demonstrates **HTTP adapters** and **FastMCP integration**:
- HTTPClientAdapter with connection pooling (11x performance)
- MCPBaseSettings with YAML + environment configuration
- ServerPanels for beautiful terminal UI
- Oneiric configuration patterns (direct instantiation)
- FastMCP tool integration (optional; install separately)
```bash
cd examples
python weather_server.py
```
**Full documentation:** [`examples/README.md`](./examples/README.md)
______________________________________________________________________
## Quick Start
### Installation
```bash
pip install "mcp-common>=0.3.6"
```
This automatically installs Pydantic, Rich, and all required dependencies.
If you plan to run an MCP server (e.g., the examples), install a protocol host such as FastMCP separately:
```bash
pip install fastmcp
# or
uv add fastmcp
```
### Minimal Example
```python
# my_server/settings.py
from mcp_common.config import MCPBaseSettings
from pydantic import Field
class MyServerSettings(MCPBaseSettings):
"""Server configuration following Oneiric pattern.
Loads from (priority order):
1. settings/local.yaml (gitignored)
2. settings/my-server.yaml
3. Environment variables MY_SERVER_*
4. Defaults below
"""
api_key: str = Field(description="API key for service")
timeout: int = Field(default=30, description="Request timeout")
# my_server/main.py
from fastmcp import FastMCP # Optional: install fastmcp separately
from mcp_common import ServerPanels, HTTPClientAdapter, HTTPClientSettings
from my_server.settings import MyServerSettings
# Initialize
mcp = FastMCP("MyServer")
settings = MyServerSettings.load("my-server")
# Initialize HTTP adapter
http_settings = HTTPClientSettings(timeout=settings.timeout)
http_adapter = HTTPClientAdapter(settings=http_settings)
# Define tools
@mcp.tool()
async def call_api():
# Use the global adapter instance
response = await http_adapter.get("https://api.example.com")
return response.json()
# Run server
if __name__ == "__main__":
# Display startup panel
ServerPanels.startup_success(
server_name="My MCP Server",
version="1.0.0",
features=["HTTP Client", "YAML Configuration"],
)
mcp.run()
```
______________________________________________________________________
## Core Features
### 🔌 HTTP Client Adapter
**Connection Pooling with httpx:**
- 11x faster than creating clients per request
- Automatic initialization and cleanup
- Configurable timeouts, retries, connection limits
```python
from mcp_common import HTTPClientAdapter, HTTPClientSettings
# Configure HTTP adapter
http_settings = HTTPClientSettings(
timeout=30,
max_connections=50,
retry_attempts=3,
)
# Create adapter
http_adapter = HTTPClientAdapter(settings=http_settings)
# Make requests
response = await http_adapter.get("https://api.example.com")
```
**Architecture Overview:**
```mermaid
graph TB
subgraph "mcp-common Components"
A[HTTP Client Adapter<br/>with Connection Pooling]
B[Settings Management<br/>YAML + Env Vars]
C[CLI Factory<br/>Lifecycle Management]
D[Rich UI Panels<br/>Console Output]
E[Security Utilities<br/>Validation & Sanitization]
end
subgraph "Integration"
F[FastMCP<br/>Optional]
G[MCP Server<br/>Application]
end
A --> G
B --> G
C --> G
D --> G
E --> G
F --> G
style A fill:#e1f5fe
style B fill:#f3e5f5
style C fill:#e8f5e8
style D fill:#fff3e0
style E fill:#fce4ec
```
Note: Rate limiting is not provided by this library. If you use FastMCP, its built-in `RateLimitingMiddleware` can be enabled; otherwise, use project-specific configuration.
### 🎯 Oneiric CLI Factory (NEW in v0.3.3)
**Production-Ready Server Lifecycle Management:**
The `MCPServerCLIFactory` provides standardized CLI commands for managing MCP server lifecycles, inspired by Oneiric's operational patterns. It handles process management, health monitoring, and graceful shutdown out of the box.
**Features:**
- **5 Standard Commands** - `start`, `stop`, `restart`, `status`, `health`
- **Security-First** - Secure PID files (0o600), cache directories (0o700), ownership validation
- **Process Validation** - Detects stale PIDs, prevents race conditions, validates process identity
- **Health Monitoring** - Runtime health snapshots with configurable TTL
- **Signal Handling** - Graceful shutdown on SIGTERM/SIGINT
- **Custom Handlers** - Extensible lifecycle hooks for server-specific logic
- **Dual Output** - Human-readable and JSON output modes
- **Standard Exit Codes** - Shell-scriptable with semantic exit codes
**CLI Factory Architecture:**
```mermaid
graph LR
subgraph "User Application"
A[Server Implementation]
B[Custom Handlers]
end
subgraph "mcp-common CLI Factory"
C[MCPServerCLIFactory]
D[MCPServerSettings]
E[PID File Management]
F[Health Snapshots]
G[Signal Handlers]
end
subgraph "Typer CLI"
H[start command]
I[stop command]
J[restart command]
K[status command]
L[health command]
end
A --> C
B --> C
D --> C
C --> E
C --> F
C --> G
C --> H
C --> I
C --> J
C --> K
C --> L
style A fill:#e8f5e8
style B fill:#fff3e0
style C fill:#e3f2fd
style D fill:#f3e5f5
style H fill:#e0f2f1
style I fill:#e0f2f1
style J fill:#e0f2f1
style K fill:#e0f2f1
style L fill:#e0f2f1
```
**Quick Example:**
```python
import os

from mcp_common.cli import MCPServerCLIFactory, MCPServerSettings
# 1. Load settings (YAML + env vars)
settings = MCPServerSettings.load("my-server")
# 2. Define lifecycle handlers
def start_server():
print("Server initialized!")
# Your server startup logic here
def stop_server(pid: int):
print(f"Stopping PID {pid}")
# Your cleanup logic here
def check_health():
# Return current health snapshot
return RuntimeHealthSnapshot(
orchestrator_pid=os.getpid(),
watchers_running=True,
)
# 3. Create CLI factory
factory = MCPServerCLIFactory(
server_name="my-server",
settings=settings,
start_handler=start_server,
stop_handler=stop_server,
health_probe_handler=check_health,
)
# 4. Create and run Typer app
app = factory.create_app()
if __name__ == "__main__":
app()
```
**Command Usage:**
```bash
# Start server (creates PID file and health snapshot)
python my_server.py start
# Check status (lightweight process check)
python my_server.py status
# Output: Server running (PID 12345, snapshot age: 2.3s, fresh: True)
# View health (detailed health information)
python my_server.py health
# Live health probe
python my_server.py health --probe
# Stop server (graceful shutdown with SIGTERM)
python my_server.py stop
# Force stop with timeout
python my_server.py stop --timeout 5 --force
# Restart (stop + start)
python my_server.py restart
# JSON output for automation
python my_server.py status --json
```
**Configuration:**
Settings are loaded from multiple sources (priority order):
1. `settings/local.yaml` (gitignored, for development)
1. `settings/{server-name}.yaml` (checked into repo)
1. Environment variables `MCP_SERVER_*`
1. Defaults in `MCPServerSettings`
Example `settings/my-server.yaml`:
```yaml
server_name: "My MCP Server"
cache_root: .oneiric_cache
health_ttl_seconds: 60.0
log_level: INFO
```
**Exit Codes:**
- `0` - Success
- `1` - General error
- `2` - Server not running (status/stop)
- `3` - Server already running (start)
- `4` - Health check failed
- `5` - Configuration error
- `6` - Permission error
- `7` - Timeout
- `8` - Stale PID file (use `--force`)
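In automation, these exit codes can drive decisions directly. A hypothetical supervisor-side sketch (the `next_action` helper is not part of mcp-common; the codes come from the table above):

```python
def next_action(status_exit_code: int) -> str:
    """Decide what a supervisor script should do from `status`'s exit code."""
    if status_exit_code == 0:
        return "noop"         # server is running
    if status_exit_code == 2:
        return "start"        # server not running
    if status_exit_code == 8:
        return "force-clean"  # stale PID file: retry with --force
    return "alert"            # anything else needs a human
```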
**Full Example:**
See [`examples/cli_server.py`](./examples/cli_server.py) for a complete working example with custom commands and health probes.
### ⚙️ Settings with YAML Support (Oneiric Pattern)
- Pure Pydantic BaseModel
- Layered configuration: YAML files + environment variables
- Type validation with Pydantic
- Path expansion (`~` → home directory)
```python
from mcp_common.config import MCPBaseSettings
class ServerSettings(MCPBaseSettings):
api_key: str # Required
timeout: int = 30 # Optional with default
# Load with layered configuration
settings = ServerSettings.load("my-server")
# Loads from:
# 1. settings/my-server.yaml
# 2. settings/local.yaml
# 3. Environment variables MY_SERVER_*
# 4. Defaults
```
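The priority order above amounts to a dict merge where higher-priority layers are applied last and win. A minimal sketch of that merge (illustrative, not mcp-common's loader):

```python
def merge_layers(*layers: dict) -> dict:
    """Merge config layers; later (higher-priority) layers override earlier ones."""
    merged: dict = {}
    for layer in layers:
        merged.update(layer)
    return merged

config = merge_layers(
    {"timeout": 30, "log_level": "INFO"},  # 4. defaults (lowest priority)
    {"log_level": "DEBUG"},                # 3. environment variables
    {"timeout": 60},                       # 2. settings/my-server.yaml
    {"timeout": 45},                       # 1. settings/local.yaml (highest priority)
)
```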
### 📝 Standard Python Logging
mcp-common uses standard Python logging. Configure as needed for your server:
```python
import logging
# Configure logging
logging.basicConfig(
level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)
logger.info("Server started")
```
### 🎨 Rich Console UI
- Beautiful startup panels
- Error displays with context
- Statistics tables
- Progress bars
```python
from mcp_common.ui import ServerPanels
ServerPanels.startup_success(
server_name="Mailgun MCP",
http_endpoint="http://localhost:8000",
features=["Rate Limiting", "Security Filters"],
)
```
### 🧪 Testing Utilities
- Mock MCP clients
- HTTP response mocking
- Shared fixtures
- DI-friendly testing
```python
from mcp_common.testing import MockMCPClient, mock_http_response
async def test_tool():
with mock_http_response(status=200, json={"ok": True}):
result = await my_tool()
assert result["success"]
```
______________________________________________________________________
## Documentation
- **[examples/README.md](./examples/README.md)** - **START HERE** - Example servers and usage patterns
- **[ONEIRIC_CLI_FACTORY\_\*.md](./docs/)** - CLI factory documentation and implementation guides
______________________________________________________________________
## Complete Example
See [`examples/`](./examples/) for a complete production-ready Weather MCP server demonstrating mcp-common patterns.
### Key Patterns Demonstrated:
1. **Oneiric Settings** - YAML + environment variable configuration with `.load()`
1. **HTTP Adapter** - HTTPClientAdapter with connection pooling
1. **Rich UI** - ServerPanels for startup/errors/status
1. **Tool Organization** - Modular tool registration with FastMCP
1. **Configuration Layering** - Multiple config sources with clear priority
1. **Type Safety** - Full Pydantic validation throughout
1. **Error Handling** - Graceful error display with ServerPanels
______________________________________________________________________
## Performance Benchmarks
### ✨ Phase 4 Optimizations (v0.6.0)
**Sanitization Early-Exit Optimization:**
| Scenario | Before | After | Speedup |
|----------|--------|-------|---------|
| Clean text (no sensitive data) | 22μs | 10μs | **2.2x faster** ⚡ |
| Text with sensitive data | 22μs | 22μs | No change |
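The early-exit idea behind these numbers is simple: scan once, and return the input untouched when nothing sensitive matches, paying the substitution cost only on the rare dirty inputs. A sketch (the patterns are illustrative, not mcp-common's actual rules):

```python
import re

SENSITIVE = re.compile(r"(api[_-]?key|secret|password)\s*[:=]\s*\S+", re.IGNORECASE)

def sanitize(text: str) -> str:
    if not SENSITIVE.search(text):  # early exit: the common, clean-text case
        return text
    return SENSITIVE.sub("[REDACTED]", text)
```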
**API Key Validation Caching:**
| Call Type | Time | Speedup |
|-----------|------|---------|
| First call (uncached) | 100μs | baseline |
| Subsequent calls (cached) | 10μs | **10x faster** ⚡ |
**Impact:**
- 2x faster for clean text sanitization (most common case)
- 10x faster for repeated API key validations
- Cache size: 128 most recent entries
- Zero breaking changes
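In spirit, the two optimizations look like this (illustrative stand-ins, not the actual mcp-common implementation):

```python
import re
from functools import lru_cache

SENSITIVE = re.compile(r"(sk-[A-Za-z0-9]+|password=\S+)")


def sanitize(text: str) -> str:
    # Early exit: one cheap scan decides whether the expensive path runs at all.
    if not SENSITIVE.search(text):
        return text  # clean text returns immediately (the common case)
    return SENSITIVE.sub("[REDACTED]", text)


@lru_cache(maxsize=128)  # cache the 128 most recent validation results
def validate_api_key(key: str) -> bool:
    # Stand-in for an expensive validation (format checks, checksums, ...).
    return key.startswith("sk-") and len(key) > 10
```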
### HTTP Client Adapter (vs new client per request)
```
Before: 100 requests in 45 seconds, 500MB memory
After: 100 requests in 4 seconds, 50MB memory
Result: 11x faster, 10x less memory
```
### Rate Limiter Overhead
```
Without: 1000 requests in 1.2 seconds
With: 1000 requests in 1.25 seconds
Result: +4% overhead (negligible vs network I/O)
```
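A token-bucket limiter illustrates why the per-call cost is so small: each `allow()` is only a handful of float operations (this sketch is not mcp-common's actual limiter):

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter; the per-call bookkeeping is a
    few arithmetic operations, which is why overhead stays negligible."""

    def __init__(self, rate: float, capacity: float) -> None:
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```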
### 📊 Testing Performance
**Test Suite Growth:**
| Version | Tests | Coverage | Execution Time |
|---------|-------|----------|----------------|
| v0.5.2 | 564 | 94% | ~110s |
| v0.6.0 | 615 | 99%+ | ~120s |
**Testing Capabilities:**
- ✅ 20 property-based tests (Hypothesis)
- ✅ 10 concurrency tests (thread-safety)
- ✅ 7 performance optimization tests
- ✅ 100% backward compatibility maintained
______________________________________________________________________
## Usage Patterns
### Pattern 1: Configure Settings with YAML
```python
from mcp_common.config import MCPBaseSettings
from pydantic import Field
class MySettings(MCPBaseSettings):
    api_key: str = Field(description="API key")
    timeout: int = Field(default=30, description="Timeout")

# Load from settings/my-server.yaml + env vars
settings = MySettings.load("my-server")

# Access configuration
print(f"Using API key: {settings.get_masked_key()}")
```
### Pattern 2: Use HTTP Client Adapter
```python
from mcp_common import HTTPClientAdapter, HTTPClientSettings
# Configure HTTP client
http_settings = HTTPClientSettings(
    timeout=30,
    max_connections=50,
    retry_attempts=3,
)

# Create adapter
http = HTTPClientAdapter(settings=http_settings)

# Make requests
@mcp.tool()
async def call_api():
    response = await http.get("https://api.example.com/data")
    return response.json()

# Cleanup when done
await http._cleanup_resources()
```
### Pattern 3: Display Rich UI Panels
```python
from mcp_common import ServerPanels
# Startup panel
ServerPanels.startup_success(
    server_name="My Server",
    version="1.0.0",
    features=["Feature 1", "Feature 2"],
)

# Error panel
ServerPanels.error(
    title="API Error",
    message="Failed to connect",
    suggestion="Check your API key",
)

# Status table
ServerPanels.status_table(
    title="Health Check",
    rows=[
        ("API", "✅ Healthy", "200 OK"),
        ("Database", "⚠️ Degraded", "Slow queries"),
    ],
)
```
______________________________________________________________________
## Development
### Setup
```bash
git clone https://github.com/lesaker/mcp-common.git
cd mcp-common
pip install -e ".[dev]"
```
### Running Tests
```bash
# Run all tests with coverage
pytest --cov=mcp_common --cov-report=html
# Run specific test
pytest tests/test_http_adapter.py -v
# Run integration tests
pytest tests/integration/ -v
```
### Code Quality
```bash
# Format code
ruff format
# Lint code
ruff check
# Type checking
mypy mcp_common tests
# Run all quality checks
crackerjack --all
```
______________________________________________________________________
## Versioning
**Recent Versions:**
- **0.3.6** - Oneiric-native (production ready)
- **0.3.3** - Added Oneiric CLI Factory
- **0.3.0** - Initial Oneiric patterns
**Compatibility:**
- Requires Python 3.13+
- Optional: compatible with FastMCP 2.0+
- Uses Pydantic 2.12+, Rich 14.2+
______________________________________________________________________
## Success Metrics
**Current Status:**
1. ✅ Professional Rich UI in all components
1. ✅ 90%+ test coverage maintained
1. ✅ Zero production incidents
1. ✅ Oneiric-native patterns throughout
1. ✅ Standardized CLI lifecycle management
1. ✅ Clean dependency tree (no framework lock-in)
______________________________________________________________________
## License
BSD-3-Clause License - See [LICENSE](./LICENSE) for details
______________________________________________________________________
## Contributing
Contributions are welcome! Please:
1. Read [`examples/README.md`](./examples/README.md) for usage patterns
1. Follow Oneiric patterns (see examples)
1. Fork and create feature branch
1. Add tests (coverage ≥90%)
1. Ensure all quality checks pass (`ruff format && ruff check && mypy && pytest`)
1. Submit pull request
______________________________________________________________________
## Acknowledgments
Built with patterns extracted from 9 production MCP servers:
**Primary Pattern Sources:**
- **crackerjack** - MCP server structure, Rich UI panels, CLI patterns
- **session-mgmt-mcp** - Configuration patterns, health checks
- **fastblocks** - Adapter organization, settings management
**Additional Contributors:**
- raindropio-mcp (HTTP client patterns)
- excalidraw-mcp (testing patterns)
- opera-cloud-mcp
- mailgun-mcp
- unifi-mcp
______________________________________________________________________
## Support
For support, please check the documentation in the `docs/` directory or create an issue in the repository.
______________________________________________________________________
**Ready to get started?** Check out [`examples/`](./examples/) for working examples demonstrating all features!
| text/markdown | null | Les Leslie <les@wedgwoodwebworks.com> | null | null | BSD-3-Clause | configuration, http-client, mcp, model-context-protocol, security, utilities | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"cryptography>=41.0.0",
"oneiric>=0.3.6",
"psutil>=7.2.1",
"pydantic-settings>=2.0",
"pydantic>=2.12.5",
"pyjwt>=2.8.0",
"pyyaml>=6.0.3",
"rich>=14.2.0",
"typer>=0.21.0",
"websockets>=12.0",
"crackerjack>=0.47.0; extra == \"all\"",
"prompt-toolkit>=3.0; extra == \"all\"",
"pyobjc-core>=10.0; extra == \"all\"",
"pyobjc-framework-cocoa>=10.0; extra == \"all\"",
"pytest-benchmark>=4.0.0; extra == \"all\"",
"uv-bump>=0.4.0; extra == \"all\"",
"prompt-toolkit>=3.0; extra == \"all-prompts\"",
"pyobjc-core>=10.0; extra == \"all-prompts\"",
"pyobjc-framework-cocoa>=10.0; extra == \"all-prompts\"",
"crackerjack>=0.47.0; extra == \"dev\"",
"pytest-benchmark>=4.0.0; extra == \"dev\"",
"uv-bump>=0.4.0; extra == \"dev\"",
"pyobjc-core>=10.0; extra == \"macos-prompts\"",
"pyobjc-framework-cocoa>=10.0; extra == \"macos-prompts\"",
"prompt-toolkit>=3.0; extra == \"terminal-prompts\""
] | [] | [] | [] | [
"Homepage, https://github.com/lesleslie/mcp-common",
"Documentation, https://github.com/lesleslie/mcp-common#readme",
"Repository, https://github.com/lesleslie/mcp-common",
"Issues, https://github.com/lesleslie/mcp-common/issues"
] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T18:54:49.685565 | mcp_common-0.9.4.tar.gz | 1,859,376 | b1/f3/4486184a8792133fec7f5f71414b5fc34961b30ec061afaf0df285b8ba0f/mcp_common-0.9.4.tar.gz | source | sdist | null | false | 4509548eda3eff229bcaa8d0372a9fde | e2f5e45de987ed1678033ef3e95ae8a609806db4f8af3f253cff52e990838ad3 | b1f34486184a8792133fec7f5f71414b5fc34961b30ec061afaf0df285b8ba0f | null | [
"LICENSE"
] | 164 |
2.4 | libinephany | 1.2.1 | Inephany library containing code commonly used by multiple subpackages. | # Inephany Common Library
The Inephany Common Library (`libinephany`) is a core utility package that provides shared functionality, data models, and utilities used across multiple Inephany packages. It contains essential components for hyperparameter optimization, model observation, data serialization, and common utilities.
## Features
- **Pydantic Data Models**: Comprehensive schemas for hyperparameters, observations, and API communications
- **Utility Functions**: Common utilities for PyTorch, optimization, transforms, and more
- **Observation System**: Tools for collecting and managing model statistics and observations
- **Constants and Enums**: Standardized constants and enumerations for agent types, model families, and module types
- **AWS Integration**: Utilities for AWS services integration
- **Web Application Utilities**: Common web app functionality and endpoints
## Installation
### Prerequisites
- Python 3.10+
- Make (for build automation)
#### Ubuntu / Debian
```bash
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.12
```
#### MacOS with brew
```bash
brew install python@3.12
```
### For Developers (Monorepo)
If you're working within the Inephany monorepo, the package is already available and will be installed automatically when you run the installation commands in dependent packages.
### For Clients (Standalone Installation)
`libinephany` is available on PyPI and can be installed directly:
```bash
pip install libinephany
```
For development installations with additional dependencies:
```bash
pip install libinephany[dev]
```
## Key Components
### Pydantic Models
The package provides comprehensive data models for:
- **Hyperparameter Configurations**: `HParamConfig`, `HParamConfigs`
- **Observation Models**: `ObservationInputs`, tensor statistics
- **API Schemas**: Request/response models for client-server communication
- **State Management**: Hyperparameter states and update callbacks
### Utility Functions
#### Agent Utilities (`agent_utils.py`)
- Agent ID generation and parsing
- Hyperparameter group management
- Agent type validation
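A minimal sketch of what ID generation and parsing can look like (the function names and zero-padded format are hypothetical, not libinephany's actual helpers):

```python
def make_agent_id(prefix: str, group: str, index: int) -> str:
    """Compose an agent ID from a type prefix, a parameter-group name, and an index."""
    return f"{prefix}_{group}_{index:03d}"


def parse_agent_id(agent_id: str) -> tuple[str, str, int]:
    """Invert make_agent_id; raises ValueError on malformed IDs."""
    prefix, group, index = agent_id.rsplit("_", 2)
    return prefix, group, int(index)
```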
#### Constants (`constants.py`)
- Hyperparameter type constants (learning_rate, weight_decay, etc.)
- Agent prefixes and suffixes
- API key headers and timestamp formats
#### Enums (`enums.py`)
- `AgentTypes`: Learning rate, weight decay, dropout, etc.
- `ModelFamilies`: GPT, BERT, OLMo
- `ModuleTypes`: Convolutional, attention, linear, embedding
#### Optimization Utilities (`optim_utils.py`)
- PyTorch optimizer utilities
- Parameter group management
- Learning rate scheduler utilities
#### PyTorch Utilities (`torch_utils.py`)
- Tensor operations
- Model utilities
- Distributed training helpers
### Observation System
The observation system provides tools for collecting and managing model statistics:
- **StatisticManager**: Centralized statistics collection and management
- **ObserverPipeline**: Configurable observation pipelines
- **PipelineCoordinator**: Coordinates multiple observers
- **StatisticTrackers**: Specialized trackers for different metric types
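Conceptually, the manager-plus-trackers design reduces to collecting named samples and summarizing them on demand (a toy stand-in, not libinephany's implementation):

```python
from collections import defaultdict
from statistics import mean


class StatisticManager:
    """Toy version of a centralized statistics collector: trackers append
    raw samples, and summaries are computed on demand."""

    def __init__(self) -> None:
        self._samples: dict[str, list[float]] = defaultdict(list)

    def track(self, name: str, value: float) -> None:
        self._samples[name].append(value)

    def summarize(self) -> dict[str, float]:
        return {name: mean(values) for name, values in self._samples.items()}


manager = StatisticManager()
for grad_norm in (0.5, 1.5, 1.0):
    manager.track("grad_norm", grad_norm)
```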
## Usage Examples
### Basic Import Examples
```python
# Import common constants
from libinephany.utils.constants import LEARNING_RATE, WEIGHT_DECAY, AGENT_PREFIX_LR
# Import enums
from libinephany.utils.enums import AgentTypes, ModelFamilies, ModuleTypes
# Import utility functions
from libinephany.utils import agent_utils, optim_utils, torch_utils
# Import data models
from libinephany.pydantic_models.configs.hyperparameter_configs import HParamConfig
from libinephany.pydantic_models.schemas.response_schemas import ClientPolicySchemaResponse
```
### Working with Agent Types
```python
from libinephany.utils.enums import AgentTypes
# Check if an agent type is valid
agent_type = "learning_rate"
if agent_type in [agent.value for agent in AgentTypes]:
    print(f"{agent_type} is a valid agent type")
# Get agent type by index
lr_agent = AgentTypes.get_from_index(0) # LearningRateAgent
```
### Using Constants
```python
from libinephany.utils.constants import AGENT_PREFIX_LR, LEARNING_RATE
# Generate agent ID
agent_id = f"{AGENT_PREFIX_LR}_agent_001"
hyperparam_type = LEARNING_RATE
```
### Working with Pydantic Models
```python
from libinephany.pydantic_models.configs.hyperparameter_configs import HParamConfig
# Create a hyperparameter configuration
config = HParamConfig(
    name="learning_rate",
    value=0.001,
    min_value=1e-6,
    max_value=1.0,
)
```
## Development
### Running Tests
```bash
make execute-unit-tests
```
### Code Quality
```bash
make lint # Run all linters
make fix-black # Fix formatting
make fix-isort # Fix imports
```
### Version Management
```bash
make increment-patch-version # Increment patch version
make increment-minor-version # Increment minor version
make increment-major-version # Increment major version
make increment-pre-release-version # Increment pre-release version
```
## Dependencies
### Core Dependencies
- `pydantic==2.8.2` - Data validation and serialization
- `torch==2.7.1` - PyTorch for tensor operations
- `numpy==1.26.4` - Numerical computing
- `requests==2.32.4` - HTTP client
- `loguru==0.7.2` - Logging
### Optional Dependencies
- `boto3<=1.38.44` - AWS SDK
- `fastapi==0.115.11` - Web framework
- `slack-sdk==3.35.0` - Slack integration
- `transformers==4.52.4` - Hugging Face transformers
- `accelerate==1.4.0` - Hugging Face accelerate
- `gymnasium==1.0.0` - RL environments
## Troubleshooting
### Common Issues
1. **Import Errors**: Ensure you're in the virtual environment and have installed the package correctly.
2. **Version Conflicts**: If you encounter dependency conflicts, try installing in a fresh virtual environment:
   ```bash
   python -m venv fresh_env
   source fresh_env/bin/activate
   make install-dev
   ```
3. **Make Command Not Found**: Ensure you have `make` installed on your system.
4. **Python Version Issues**: This package requires Python 3.12+. Ensure you're using the correct version.
### Getting Help
- Check the example scripts in the repository
- Review the test files for usage examples
- Ensure all dependencies are installed correctly
- Verify your Python version is 3.12+
## Contributing
When contributing to `libinephany`:
1. Follow the existing code style (Black, isort, flake8)
2. Add appropriate type hints
3. Include unit tests for new functionality
4. Update documentation for new features
5. Ensure all tests pass before submitting
## License
This package is licensed under the Apache License, Version 2.0. See the LICENSE file for details.
| text/markdown | null | Inephany <info@inephany.com> | null | null | Apache 2.0 | libinephany, library, utilities | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic<3.0.0,>=2.5.0",
"loguru<0.8.0,>=0.7.0",
"requests<3.0.0,>=2.28.0",
"numpy<2.0.0,>=1.24.0",
"scipy<2.0.0,>=1.10.0",
"slack-sdk<4.0.0,>=3.20.0",
"boto3<2.0.0,>=1.26.0",
"fastapi<0.116.0,>=0.100.0",
"aiohttp<4.0.0,>=3.8.0",
"torch<2.9.0,>=2.1.0",
"transformers<4.58.0,>=4.51.0",
"pandas<3.0.0,>=2.0.0",
"accelerate<2.0.0,>=0.20.0",
"gymnasium<2.0.0,>=0.29.0",
"pytest<9.0.0,>=7.0.0; extra == \"dev\"",
"pytest-mock<4.0.0,>=3.10.0; extra == \"dev\"",
"pytest-asyncio<0.26.0,>=0.21.0; extra == \"dev\"",
"bump-my-version==0.11.0; extra == \"dev\"",
"black==24.4.2; extra == \"dev\"",
"isort==5.9.3; extra == \"dev\"",
"flake8==7.1.0; extra == \"dev\"",
"pre-commit==4.0.1; extra == \"dev\"",
"mypy==1.13.0; extra == \"dev\"",
"types-PyYAML==6.0.12.20240808; extra == \"dev\"",
"typeguard==4.3.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:54:49.611752 | libinephany-1.2.1.tar.gz | 77,163 | 8d/1f/b623f5a94573a8136886383e34267afb8de57377de3fff1d5e76aa9663c7/libinephany-1.2.1.tar.gz | source | sdist | null | false | f519df16f7ca2c4109d3dcb8a0355361 | ab461bf9409db85f8138e157ff56515e4421bdaeb0590989575be179cd5e1a05 | 8d1fb623f5a94573a8136886383e34267afb8de57377de3fff1d5e76aa9663c7 | null | [
"LICENSE"
] | 167 |
2.4 | mcp-local-rag | 0.2.0 | Local MCP server for RAG over PDFs, DOCX, and plaintext files. | # mcp-local-rag
Local MCP server for RAG over PDFs, DOCX, and plaintext files.
## Requirements
For more complex PDFs, the following environment variables can be provided:
- `AZURE_DOCUMENT_INTELLIGENCE_ENDPOINT`; requires `mcp-local-rag[azure]`.
- `AZURE_DOCUMENT_INTELLIGENCE_KEY`; when omitted, `DefaultAzureCredential` is used. Requires `mcp-local-rag[azure]`.
- `GEMINI_API_KEY`
## Data Storage
By default, the server stores data in:
- **Windows**: `%LOCALAPPDATA%\mcp-local-rag\`
- **macOS**: `~/Library/Application Support/mcp-local-rag/`
- **Linux**: `$XDG_DATA_HOME/mcp-local-rag/`
The data directory contains:
- `markdown/` - Extracted Markdown content of indexed documents
- `metadata.db` - SQLite database for document/collection metadata
- `qdrant/` - Vector database for embeddings
AI Models are cached in the default HuggingFace cache directory (`~/.cache/huggingface/`).
To customize the data directory, set the `MCP_LOCAL_RAG_DATA_DIR` environment variable (a `mcp-local-rag/` subfolder is created automatically inside it).
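The resolution rules above can be sketched as follows (the XDG fallback to `~/.local/share` is an assumption; the server's actual logic may differ):

```python
import os
import sys
from pathlib import Path

APP = "mcp-local-rag"


def data_dir() -> Path:
    """Resolve the data directory following the rules documented above."""
    override = os.environ.get("MCP_LOCAL_RAG_DATA_DIR")
    if override:
        return Path(override) / APP  # subfolder created inside the override
    if sys.platform == "win32":
        return Path(os.environ["LOCALAPPDATA"]) / APP
    if sys.platform == "darwin":
        return Path.home() / "Library" / "Application Support" / APP
    xdg = os.environ.get("XDG_DATA_HOME", str(Path.home() / ".local" / "share"))
    return Path(xdg) / APP
```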
## Usage
### VS Code
Add to `.vscode/mcp.json`:
```json
{
  "servers": {
    "mcp-local-rag": {
      "command": "uvx",
      "args": [
        "--python",
        "3.13", // Does not support Python 3.14 yet: https://github.com/microsoft/markitdown/issues/1470
        "mcp-local-rag@latest"
      ]
    }
  }
}
If you run into SSL errors (Zscaler), you can try:
```json
{
  "servers": {
    "mcp-local-rag": {
      "command": "uvx",
      "args": [
        "--native-tls",
        "--python",
        "3.13", // Does not support Python 3.14 yet: https://github.com/microsoft/markitdown/issues/1470
        "--with",
        "pip-system-certs",
        "mcp-local-rag@latest"
      ]
    }
  }
}
```
| text/markdown | null | Jozef833 <172046463+Jozef833@users.noreply.github.com> | null | null | AGPL-3.0-or-later | docx, local, mcp, pdf, rag | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Text Processing :: Indexing"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"aiofiles>=25.1.0",
"google-genai[aiohttp]>=1.61.0",
"markitdown[docx]>=0.1.4",
"mcp[cli]>=1.26.0",
"numpy>=2.4.1",
"pydantic>=2.12.5",
"pymupdf-layout>=1.27.1",
"pymupdf4llm>=0.2.9",
"qdrant-client>=1.16.2",
"semantic-text-splitter>=0.29.0",
"sentence-transformers>=5.2.2",
"torch>=2.10.0",
"azure-ai-documentintelligence>=1.0.2; extra == \"azure\"",
"azure-identity>=1.25.2; extra == \"azure\"",
"azure-monitor-opentelemetry>=1.8.6; extra == \"azure\""
] | [] | [] | [] | [
"Homepage, https://github.com/Milliman-CMHH/mcp-local-rag",
"Repository, https://github.com/Milliman-CMHH/mcp-local-rag",
"Issues, https://github.com/Milliman-CMHH/mcp-local-rag/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:54:25.702541 | mcp_local_rag-0.2.0.tar.gz | 29,338 | 55/1e/d254414c7d0fe0759fa79718dc390a157a4e80fdcdd47f1b464949b42fe5/mcp_local_rag-0.2.0.tar.gz | source | sdist | null | false | dd1d2299d6acde2cc421b3b6291b0305 | 0667c93c4e6fa1aeb591b8b1137c89fe3c11c9d30e4c9619c748d64c56bb457b | 551ed254414c7d0fe0759fa79718dc390a157a4e80fdcdd47f1b464949b42fe5 | null | [
"LICENSE"
] | 169 |
2.4 | crackerjack | 0.54.2 | Crackerjack Python project management tool | # Crackerjack: Advanced AI-Driven Python Development Platform
[](https://github.com/lesleslie/crackerjack)
[](https://www.python.org/downloads/)
[](https://pytest.org)
[](https://github.com/astral-sh/ruff)
[](https://github.com/astral-sh/uv)
[](https://github.com/lesleslie/crackerjack)
[](https://opensource.org/licenses/BSD-3-Clause)

## 🎯 Purpose
**Crackerjack** transforms Python development from reactive firefighting to proactive excellence. This sophisticated platform empowers developers to create exceptional code through intelligent automation, comprehensive quality enforcement, and AI-powered assistance. Experience the confidence that comes from knowing your code meets the highest standards before it ever runs in production.
### What is "Crackerjack"?
**crack·er·jack** ˈkra-kər-ˌjak (noun): *A person or thing of marked excellence or ability; first-rate; exceptional.*
Just as the name suggests, Crackerjack makes your Python projects first-rate through:
- **🧠 Proactive AI Architecture**: 12 specialized AI agents prevent issues before they occur
- **⚡ Autonomous Quality**: Intelligent auto-fixing with architectural planning
- **🛡️ Zero-Compromise Standards**: 100% test coverage, complexity ≤15, security-first patterns
- **🔄 Learning System**: Gets smarter with every project, caching successful patterns
- **🌟 One Command Excellence**: From setup to PyPI publishing with a single command
**The Crackerjack Philosophy**: If your code needs fixing after it's written, you're doing it wrong. We prevent problems through intelligent architecture and proactive patterns, making exceptional code the natural outcome, not a lucky accident.
## What Problem Does Crackerjack Solve?
**Instead of configuring multiple tools separately:**
```bash
# Traditional workflow
pip install black isort flake8 mypy pytest
# Configure each tool individually
# Set up git hooks manually
# Remember different commands for each tool
```
**Crackerjack provides unified commands:**
```bash
pip install crackerjack
python -m crackerjack run # Setup + quality checks
python -m crackerjack run --run-tests # Add testing
python -m crackerjack run --all patch # Full release workflow
```
**Key differentiators:**
- **Single command** replaces 6+ separate tools
- **Pre-configured** with Python best practices
- **UV integration** for fast dependency management
- **Automated publishing** with PyPI authentication
- **MCP server** for AI agent integration
## The Crackerjack Philosophy
Crackerjack is built on the following core principles:
- **Code Clarity:** Code should be easy to read, understand, and maintain
- **Automation:** Tedious tasks should be automated, allowing developers to focus on solving problems
- **Consistency:** Code style, formatting, and project structure should be consistent across projects
- **Reliability:** Tests are essential, and code should be checked rigorously
- **Tool Integration:** Leverage powerful existing tools instead of reinventing the wheel
- **Auto-Discovery:** Prefer intelligent auto-discovery of configurations and settings over manual configuration whenever possible, reducing setup friction and configuration errors
- **Static Typing:** Static typing is essential for all development
## Crackerjack vs Pre-commit: Architecture & Features
Crackerjack and pre-commit solve related but different problems. While pre-commit is a language-agnostic git hook manager, Crackerjack is a comprehensive Python development platform with quality enforcement built-in.
### Architectural Differences
| Aspect | Pre-commit | Crackerjack |
|--------|-----------|-------------|
| **Execution Model** | Wrapper framework that spawns subprocesses for each hook | Direct tool invocation with adapter architecture |
| **Concurrency** | Synchronous sequential execution (one hook at a time) | **Async-first with 11 concurrent adapters** - true parallel execution |
| **Performance** | Overhead from framework wrapper + subprocess spawning | Zero wrapper overhead, 70% cache hit rate, 50% faster workflows |
| **Language Focus** | Language-agnostic (Python, Go, Rust, Docker, etc.) | Python-first with native tool implementations |
| **Configuration** | YAML-based `.pre-commit-config.yaml` with repo URLs | Python-based configuration with intelligent defaults |
| **Hook Management** | Clones repos, manages environments per hook | Native Python tools + direct UV invocation |
### Feature Comparison
#### Quality Hooks & Tools
| Feature | Pre-commit | Crackerjack |
|---------|-----------|-------------|
| **Code Formatting** | ✅ Via hooks (black, ruff, etc.) | ✅ Native Ruff integration + mdformat |
| **Linting** | ✅ Via hooks (flake8, pylint, etc.) | ✅ Native Ruff + codespell |
| **Type Checking** | ✅ Via hooks (mypy, pyright) | ✅ **Zuban** (20-200x faster than pyright) |
| **Security Scanning** | ✅ Via hooks (bandit, gitleaks) | ✅ Native bandit + gitleaks integration |
| **Dead Code Detection** | ✅ Via vulture hook | ✅ **Skylos** (20x faster than vulture) |
| **Complexity Analysis** | ❌ Not built-in | ✅ Native complexipy integration |
| **Dependency Validation** | ❌ Not built-in | ✅ Native creosote unused dependency detection |
| **Custom Python Tools** | ✅ Via `repo: local` hooks | ✅ 6 native tools in `crackerjack/tools/` |
#### Development Workflow
| Feature | Pre-commit | Crackerjack |
|---------|-----------|-------------|
| **Git Integration** | ✅ Pre-commit, pre-push, commit-msg hooks | ✅ Git hooks + intelligent commit messages |
| **Testing Framework** | ❌ Not included | ✅ Built-in pytest with coverage ratchet |
| **CI/CD Integration** | ✅ Via `pre-commit run --all-files` | ✅ Unified `--ci` mode with quality + tests |
| **Version Management** | ❌ Not included | ✅ Intelligent version bumping + AI recommendations |
| **Publishing** | ❌ Not included | ✅ PyPI publishing with UV authentication |
| **Hook Stages** | ✅ Multiple stages (commit, push, merge, manual) | ✅ Fast (~5s) vs Comprehensive (~30s) strategies |
| **Retry Logic** | ❌ No built-in retry | ✅ Automatic retry for formatting hooks |
| **Parallel Execution** | ✅ Limited parallelism (sequential by default) | ✅ **Async-first architecture**: 11 concurrent adapters, 76% speedup |
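The concurrency claim is easy to see with a toy `asyncio.gather` harness (adapter names and durations are made up): launching all adapters at once makes total wall time track the slowest adapter rather than the sum of all of them.

```python
import asyncio
import time


async def run_adapter(name: str, duration: float) -> str:
    # Stand-in for one quality-hook adapter (ruff, bandit, zuban, ...).
    await asyncio.sleep(duration)
    return name


async def run_all(adapters: dict[str, float]) -> list[str]:
    # All adapters start at once; wall time ≈ the slowest adapter,
    # not the sum as in sequential execution.
    return await asyncio.gather(*(run_adapter(n, d) for n, d in adapters.items()))


start = time.perf_counter()
results = asyncio.run(run_all({"ruff": 0.05, "bandit": 0.05, "zuban": 0.05}))
elapsed = time.perf_counter() - start
```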
#### Advanced Features
| Feature | Pre-commit | Crackerjack |
|---------|-----------|-------------|
| **AI Integration** | ❌ Not built-in | ✅ 12 specialized AI agents + auto-fixing |
| **Dependency Injection** | ❌ Not applicable | ✅ legacy framework with protocol-based DI |
| **Caching** | ✅ Per-file hash caching | ✅ Content-based caching (70% hit rate) |
| **MCP Server** | ❌ Not included | ✅ Built-in MCP server for Claude integration |
| **Monitoring** | ❌ Not included | ✅ MCP status + progress monitors |
| **Configuration Management** | ✅ YAML + `--config` flag | ✅ settings with YAML + local overrides |
| **Auto-Update** | ✅ `pre-commit autoupdate` | ⚠️ Manual UV dependency updates |
| **Language Support** | ✅ 15+ languages (Python, Go, Rust, Docker, etc.) | ✅ Python + external tools (gitleaks, etc.) |
#### Configuration & Ease of Use
| Feature | Pre-commit | Crackerjack |
|---------|-----------|-------------|
| **Setup Complexity** | Medium (YAML config + `pre-commit install`) | Low (single `python -m crackerjack run`) |
| **Configuration Format** | YAML with repo URLs and hook IDs | Python settings with intelligent defaults |
| **Hook Discovery** | Manual (add repos to `.pre-commit-config.yaml`) | Automatic (17 tools pre-configured) |
| **Tool Installation** | Auto (pre-commit manages environments) | UV-based (one virtual environment) |
| **Learning Curve** | Medium (understand repos, hooks, stages) | Low (unified Python commands) |
### When to Use Each
**Choose Pre-commit when:**
- ✅ Working with multiple languages (Go, Rust, Docker, etc.)
- ✅ Need language-agnostic hook framework
- ✅ Want to use hooks from community repositories
- ✅ Polyglot projects requiring diverse tooling
- ✅ Simple YAML-based configuration preferred
**Choose Crackerjack when:**
- ✅ Python-focused development (Python 3.13+)
- ✅ Want comprehensive development platform (testing, publishing, AI)
- ✅ Need maximum performance (async architecture, Rust tools, caching, 11x parallelism)
- ✅ Desire AI-powered auto-fixing and recommendations
- ✅ Want unified workflow (quality + tests + publishing in one command)
- ✅ Prefer Python-based configuration over YAML
- ✅ Need advanced features (coverage ratchet, MCP integration, monitoring)
### Migration from Pre-commit
Crackerjack can **coexist** with pre-commit if needed, but most Python projects can fully migrate:
```bash
# Remove pre-commit (optional)
pre-commit uninstall
rm .pre-commit-config.yaml
# Install crackerjack
uv tool install crackerjack
# Run quality checks (replaces pre-commit run --all-files)
python -m crackerjack run
# With tests (comprehensive workflow)
python -m crackerjack run --run-tests
```
**Note**: Crackerjack Phase 8 successfully migrated from pre-commit framework to direct tool invocation, achieving 50% performance improvement while maintaining full compatibility with existing quality standards.
## Table of Contents
- [Crackerjack vs Pre-commit](#crackerjack-vs-pre-commit-architecture--features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [AI Auto-Fix Features](#ai-auto-fix-features)
- [Core Workflow](#core-workflow)
- [Core Features](#core-features)
- [legacy Architecture & Performance](#-legacy-architecture--performance)
- [Adapters](#adapters)
- [Configuration Management](#-configuration-management-legacy-settings--configuration-templates)
- [MCP Server Configuration](#mcp-server-configuration)
- [Quality Hook Modes](#quality-hook-modes)
- [Command Reference](#command-reference)
- [Style Guide](#style-guide)
- [Publishing & Version Management](#publishing--version-management)
- [Troubleshooting](#-troubleshooting)
## Installation
### Prerequisites
- Python 3.13+
- [UV](https://github.com/astral-sh/uv) package manager
### Install UV
```bash
# Recommended: Official installer script
curl -LsSf https://astral.sh/uv/install.sh | sh
# Alternative: Using pipx
pipx install uv
# Alternative: Using Homebrew (macOS)
brew install uv
```
### Install Crackerjack
```bash
# Recommended: Using UV (fastest)
uv tool install crackerjack
# Alternative: Using pip
pip install crackerjack
# For existing project: Add as dependency
uv add crackerjack
```
## Quick Start
### Initialize a Project
```bash
# Navigate to your project directory
cd your-project
# Initialize with Crackerjack
python -m crackerjack run
# Or use interactive mode
python -m crackerjack run -i
```
## AI Auto-Fix Features

*12 specialized AI agents with confidence-based routing and batch processing*
Crackerjack provides two distinct approaches to automatic error fixing:
### 1. Hook Auto-Fix Modes (Basic Formatting)
Limited tool-specific auto-fixes for simple formatting issues:
- `ruff --fix`: Import sorting, basic formatting
- `trailing-whitespace --fix`: Removes trailing whitespace
- `end-of-file-fixer --fix`: Ensures files end with newline
**Limitations:** Only handles simple style issues, cannot fix type errors, security issues, test failures, or complex code quality problems.
### 2. AI Agent Auto-Fixing (Comprehensive Intelligence)
**Revolutionary AI-powered code quality enforcement** that automatically fixes ALL types of issues:
#### How AI Agent Auto-Fixing Works
1. **🚀 Run All Checks**: Fast hooks, comprehensive hooks, full test suite
1. **🔍 Analyze Failures**: AI parses error messages, identifies root causes
1. **🤖 Intelligent Fixes**: AI reads source code and makes targeted modifications
1. **🔄 Repeat**: Continue until ALL checks pass (up to 8 iterations)
1. **🎉 Perfect Quality**: Zero manual intervention required
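Stripped of agent routing and caching, the iteration above reduces to a check-fix-repeat cycle (a simplified sketch, not crackerjack's orchestrator):

```python
def auto_fix_loop(run_checks, apply_fixes, max_iterations: int = 8) -> bool:
    """Run checks, feed failures to the fixer, repeat until clean or out of budget."""
    for _ in range(max_iterations):
        failures = run_checks()
        if not failures:
            return True  # all checks pass
        apply_fixes(failures)
    return not run_checks()  # one last verification


# Toy harness: two "issues" that each take one iteration to fix.
issues = ["type-error", "unused-import"]
ok = auto_fix_loop(
    run_checks=lambda: list(issues),
    apply_fixes=lambda failures: issues.pop(),
)
```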
#### Comprehensive Coverage
The AI agent intelligently fixes:
- **Type Errors (zuban)**: Adds missing annotations, fixes type mismatches
- **🔒 Security Issues (bandit)**: Comprehensive security hardening including:
- **Shell Injection Prevention**: Removes `shell=True` from subprocess calls
- **Weak Cryptography**: Replaces MD5/SHA1 with SHA256
- **Insecure Random Functions**: Replaces `random.choice` with `secrets.choice`
- **Unsafe YAML Loading**: Replaces `yaml.load` with `yaml.safe_load`
- **Token Exposure**: Masks PyPI tokens, GitHub PATs, and sensitive credentials
- **Debug Print Removal**: Eliminates debug prints containing sensitive information
- **Dead Code (vulture)**: Removes unused imports, variables, functions
- **Performance Issues**: Transforms inefficient patterns (list concatenation, string building, nested loops)
- **Documentation Issues**: Auto-generates changelogs, maintains consistency across .md files
- **Test Failures**: Fixes missing fixtures, import errors, assertions
- **Code Quality (refurb)**: Applies refactoring, reduces complexity
- **All Hook Failures**: Formatting, linting, style issues
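Two of the security rewrites above can be illustrated with a small stdlib-only before/after sketch (illustrative only, not the agent's actual output):

```python
import hashlib
import secrets

# Before (flagged by bandit B324): hashlib.md5(data).hexdigest()
def fingerprint(data: bytes) -> str:
    # SHA256 replaces MD5/SHA1 for integrity checks
    return hashlib.sha256(data).hexdigest()


# Before (flagged by bandit B311): random.choice(alphabet)
def make_token(alphabet: str = "abcdef0123456789", length: int = 16) -> str:
    # secrets.choice draws from a cryptographically secure source
    return "".join(secrets.choice(alphabet) for _ in range(length))
```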
#### AI Agent Commands
```bash
# Standard AI agent mode (recommended)
python -m crackerjack run --ai-fix --run-tests --verbose
# Preview fixes without applying (dry-run mode)
python -m crackerjack run --dry-run --run-tests --verbose
# Custom iteration limit
python -m crackerjack run --ai-fix --max-iterations 15
# MCP server
python -m crackerjack start
# Lifecycle commands (start/stop/restart/status/health) are available via MCPServerCLIFactory.
```
#### MCP Integration
When using crackerjack via MCP tools (session-mgmt-mcp):
```python
# ✅ CORRECT - Use semantic command + ai_agent_mode parameter
crackerjack_run(command="test", ai_agent_mode=True)
# ✅ CORRECT - With additional arguments
crackerjack_run(command="check", args="--verbose", ai_agent_mode=True, timeout=600)
# ✅ CORRECT - Dry-run mode
crackerjack_run(command="test", args="--dry-run", ai_agent_mode=True)
# ❌ WRONG - Don't put flags in command parameter
crackerjack_run(command="--ai-fix -t") # This will error!
# ❌ WRONG - Don't use --ai-fix in args
crackerjack_run(command="test", args="--ai-fix") # Use ai_agent_mode=True instead
```
#### Configuration
Auto-fix requires:
1. **Anthropic API key**: Set environment variable
```bash
export ANTHROPIC_API_KEY=sk-ant-...
```
1. **Configuration file**: `settings/adapters.yml`
```yaml
ai: claude
```
#### Key Benefits
- **Zero Configuration**: No complex flag combinations needed
- **Complete Automation**: Handles entire quality workflow automatically
- **Intelligent Analysis**: Understands code context and business logic
- **Comprehensive Coverage**: Fixes ALL error types, not just formatting
- **Perfect Results**: Achieves 100% code quality compliance
#### 🤖 Specialized Agent Architecture
**12 Specialized AI Agents** for comprehensive code quality improvements:
- **🔒 SecurityAgent**: Fixes shell injections, weak crypto, token exposure, unsafe library usage
- **♻️ RefactoringAgent**: Reduces complexity ≤15, extracts helper methods, applies SOLID principles
- **🚀 PerformanceAgent**: Optimizes algorithms, fixes O(n²) patterns, improves string building
- **📝 DocumentationAgent**: Auto-generates changelogs, maintains .md file consistency
- **🧹 DRYAgent**: Eliminates code duplication, extracts common patterns to utilities
- **✨ FormattingAgent**: Handles code style, import organization, formatting violations
- **🧪 TestCreationAgent**: Fixes test failures, missing fixtures, dependency issues
- **📦 ImportOptimizationAgent**: Removes unused imports, restructures import statements
- **🔬 TestSpecialistAgent**: Advanced testing scenarios, fixture management
- **🔍 SemanticAgent**: Advanced semantic analysis, code comprehension, intelligent refactoring suggestions based on business logic understanding
- **🏗️ ArchitectAgent**: High-level architectural patterns, design recommendations, system-level optimization strategies
- **🎯 EnhancedProactiveAgent**: Proactive issue prevention, predictive quality monitoring, optimization before problems occur
**Agent Coordination Features**:
- **Confidence Scoring**: Routes issues to best-match agent (≥0.7 confidence)
- **Batch Processing**: Groups related issues for efficient parallel processing
- **Collaborative Mode**: Multiple agents handle complex cross-cutting concerns
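The confidence-based routing can be sketched as follows (a simplification; the actual scoring model is internal to crackerjack):

```python
# Sketch: route an issue to the highest-confidence agent, falling back to
# collaborative handling when no agent clears the 0.7 threshold.
def route_issue(scores: dict[str, float], threshold: float = 0.7) -> tuple[str, str]:
    """Return (agent, mode) given per-agent confidence scores for one issue."""
    best_agent = max(scores, key=scores.get)
    if scores[best_agent] >= threshold:
        return best_agent, "single"
    # No confident match: multiple agents collaborate on the issue
    return best_agent, "collaborative"
```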
#### Security & Safety Features
- **Command Validation**: All AI modifications are validated for safety
- **Advanced-Grade Regex**: Centralized pattern system eliminates dangerous regex issues
- **No Shell Injection**: Uses secure subprocess execution with validated patterns
- **Rollback Support**: All changes can be reverted via git
- **Human Review**: Review AI-generated changes before commit
#### ⚡ High-Performance Rust Tool Integration
**Ultra-Fast Static Analysis Tools**:
- **🦅 Skylos** (Dead Code Detection): Replaces vulture with **20x performance improvement**
- Rust-powered dead code detection and import analysis
- Seamlessly integrates with crackerjack's quality workflow
- Zero configuration changes required
- **🔍 Zuban** (Type Checking): Replaces pyright with **20-200x performance improvement**
- Lightning-fast type checking and static analysis
- Drop-in replacement for slower Python-based tools
- Maintains full compatibility with existing configurations
**Performance Benefits**:
- **Faster Development Cycles**: Quality hooks complete in seconds, not minutes
- **Improved Developer Experience**: Near-instantaneous feedback during development
- **Seamless Integration**: Works transparently with existing crackerjack workflows
- **Zero Breaking Changes**: Same CLI interface, dramatically better performance
**Implementation Details**:
```bash
# These commands now benefit from Rust tool speed improvements:
python -m crackerjack run # Dead code detection 20x faster
python -m crackerjack run --run-tests # Type checking 20-200x faster
python -m crackerjack run --ai-fix --run-tests # Complete workflow optimized
```
**Benchmark Results**: Real-world performance measurements show consistent **6,000+ operations/second** throughput with **600KB+/second** data processing capabilities during comprehensive quality checks.
## 🎯 Skills Tracking Integration (Session-Buddy)
Crackerjack integrates with **session-buddy** for comprehensive AI agent metrics tracking and intelligent skill recommendations.
### What is Skills Tracking?
**Automated metrics collection** for all AI agent invocations:
- **Which agents were selected** - Track agent choices and why
- **User queries** - Record problems that triggered agent selection
- **Alternatives considered** - Log which other agents were evaluated
- **Success/failure rates** - Measure agent effectiveness by context
- **Performance metrics** - Duration, completion rates, by workflow phase
- **Semantic discovery** - Find best agents for problems using vector similarity
### Why It Matters
**Learn from Every Agent Invocation**:
- 🎯 **Better Agent Selection**: Learn which agents work best for specific problems
- 📊 **Performance Insights**: Identify bottlenecks and optimization opportunities
- 🧠 **Semantic Discovery**: Find agents using natural language queries
- 🔄 **Continuous Improvement**: System gets smarter with every invocation
- 📈 **Workflow Correlation**: Understand agent effectiveness by Oneiric phase
### Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Crackerjack │
│ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐│
│ │ Agent │ │ Agent │ │ Agent ││
│ │ Orchestrator │───▶│ Context │───▶│ Skills ││
│ │ │ │ │ │ Tracker ││
│ └───────────────┘ └───────────────┘ └───────┬───────┘│
│ │ │
└─────────────────────────────────────────────────┼───────────┘
│
┌─────────────────────┴─────────────────┐
│ Skills Tracking Protocol │
│ (track_invocation, get_recommendations)│
└─────────────────────┬─────────────────┘
│
┌─────────────────────────────┼─────────────────────┐
│ │ │
┌───────▼────────┐ ┌────────▼────────┐ ┌───────▼──────┐
│ Direct API │ │ MCP Bridge │ │ No-Op │
│ (Tight Coupling)│ │ (Loose Coupling)│ │ (Disabled) │
│ session-buddy│ │ session-buddy │ │ │
└───────────────┘ └─────────────────┘ └──────────────┘
│
┌───────▼────────┐
│ Dhruva Storage│
│ (SQLite + WAL) │
└────────────────┘
```
### Configuration
**Enable/Disable in `settings/local.yaml` or `settings/crackerjack.yaml`**:
```yaml
# Enable skills tracking (default: true)
skills:
  enabled: true

  # Backend choice: "direct", "mcp", "auto"
  backend: auto  # Tries MCP first, falls back to direct

  # Database location (default: .session-buddy/skills.db)
  db_path: null

  # MCP server URL (for MCP bridge)
  mcp_server_url: "http://localhost:8678"

  # Recommendation settings
  min_similarity: 0.3  # Minimum similarity for recommendations (0.0-1.0)
  max_recommendations: 5  # Max agents to recommend
  enable_phase_aware: true  # Consider workflow phase in recommendations
  phase_weight: 0.3  # Weight for phase effectiveness (0.0-1.0)
```
### Backend Options
| Backend | Pros | Cons | Best For |
|---------|------|------|----------|
| **`direct`** | • Fast (direct API)<br>• Simple setup<br>• Low latency | • Tight coupling<br>• Requires session-buddy in Python path | • Local development<br>• Single-machine setups |
| **`mcp`** | • Loose coupling<br>• Remote deployment<br>• Easy testing | • Higher latency<br>• More complex | • Distributed systems<br>• Microservices<br>• Multi-project setups |
| **`auto`** (default) | • Tries MCP first<br>• Automatic fallback<br>• Best of both | • Slightly slower initial connection | • Most scenarios (recommended) |
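The `auto` backend's fallback chain reduces to a simple preference order (a sketch only; the real probing involves an MCP connection attempt):

```python
# Sketch of the "auto" backend selection: prefer the MCP bridge, fall back
# to the direct API, and degrade to a zero-overhead no-op tracker.
def choose_backend(mcp_available: bool, direct_available: bool) -> str:
    if mcp_available:
        return "mcp"
    if direct_available:
        return "direct"
    return "no-op"  # skills tracking effectively disabled
```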
### Usage Patterns
#### 1. Automatic Tracking (Default)
All agent invocations are **automatically tracked** via `AgentOrchestrator`:
```python
# In agent_orchestrator.py
async def _execute_crackerjack_agent(agent, request):
    # Automatic tracking
    completer = request.context.track_skill_invocation(
        skill_name=agent.metadata.name,
        user_query=request.task.description,
        workflow_phase=request.task.category,
    )
    try:
        result = await agent.agent.analyze_and_fix(issue)
        completer(completed=True)  # Record success
    except Exception as e:
        completer(completed=False, error_type=str(e))  # Record failure
        raise
```
#### 2. Manual Tracking in Custom Code
```python
from crackerjack.agents.base import AgentContext

# Track with manual control
context = AgentContext(
    project_path=Path("/my/project"),
    skills_tracker=tracker,  # From dependency injection
)

# Track invocation
completer = context.track_skill_invocation(
    skill_name="MyCustomAgent",
    user_query="Fix complexity issues",
    workflow_phase="comprehensive_hooks",
)

# ... do work ...

# Complete tracking
completer(completed=True)
```
#### 3. Get Recommendations
```python
# Get agent recommendations for a problem
recommendations = context.get_skill_recommendations(
    user_query="How do I fix type errors in async code?",
    limit=5,
    workflow_phase="comprehensive_hooks",
)

# Returns:
# [
#     {
#         "skill_name": "RefactoringAgent",
#         "similarity_score": 0.92,
#         "completed": True,
#         "duration_seconds": 45.2,
#         "workflow_phase": "comprehensive_hooks"
#     },
#     ...
# ]
```
### Data Migration
**Migrate from JSON-based metrics to Dhruva database**:
```bash
# 1. Backup existing JSON
cp .crackerjack/metrics.json .crackerjack/metrics.json.backup
# 2. Run migration (dry-run first)
python scripts/migrate_skills_to_sessionbuddy.py --dry-run
# 3. Actual migration
python scripts/migrate_skills_to_sessionbuddy.py
# 4. Validate migration
python scripts/validate_skills_migration.py
# 5. Rollback if needed
python scripts/rollback_skills_migration.py
```
**Migration Features**:
- ✅ **Automatic backup** - Creates `.pre-migration.backup` files
- ✅ **Dry-run mode** - Preview changes without modifying database
- ✅ **Validation** - Checks JSON structure and required fields
- ✅ **Rollback support** - Restore from backup if issues occur
- ✅ **Progress tracking** - See migration status in real-time
### Performance Considerations
**Direct API (Tight Coupling)**:
- **Latency**: < 1ms per invocation (in-process)
- **Throughput**: 10,000+ invocations/second
- **Memory**: ~5MB per session
- **Best for**: Local development, single-machine setups
**MCP Bridge (Loose Coupling)**:
- **Latency**: 5-10ms per invocation (network round-trip)
- **Throughput**: 1,000+ invocations/second
- **Memory**: ~10MB per session (includes client)
- **Best for**: Distributed systems, microservices
**Overhead**:
- **No-op (disabled)**: Zero overhead (~0.001µs per check)
- **Direct**: ~0.5% overhead in typical workflows
- **MCP**: ~2% overhead in typical workflows
### Advanced Features
#### Semantic Skill Discovery
```python
# Find agents using natural language
recommendations = tracker.get_recommendations(
    user_query="I need help with memory leaks in async code",
    limit=5,
)

# Semantic search finds:
# - PerformanceAgent (specializes in leaks)
# - RefactoringAgent (async patterns)
# - TestSpecialistAgent (memory testing)
```
#### Workflow-Phase-Aware Recommendations
```python
# Get recommendations for a specific Oneiric phase
recommendations = tracker.get_recommendations(
    user_query="Fix import errors",
    workflow_phase="fast_hooks",  # Only agents effective in fast_hooks
    limit=3,
)

# Considers:
# - Which agents work best in the fast_hooks phase
# - Historical completion rates by phase
# - Average duration by phase
```
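One plausible way to combine these signals, using `min_similarity` and `phase_weight` from the configuration section (the exact blending formula is an assumption, not crackerjack's documented behavior):

```python
# Hypothetical scoring sketch: blend semantic similarity with historical
# per-phase completion rate, gated by the min_similarity floor.
def rank_agents(candidates, phase, min_similarity=0.3, phase_weight=0.3, limit=5):
    """candidates: list of (name, similarity, {phase: completion_rate})."""
    scored = []
    for name, similarity, phase_rates in candidates:
        if similarity < min_similarity:
            continue  # below the recommendation floor
        phase_rate = phase_rates.get(phase, 0.0)
        score = (1 - phase_weight) * similarity + phase_weight * phase_rate
        scored.append((score, name))
    return [name for _, name in sorted(scored, reverse=True)[:limit]]
```

With `phase_weight: 0.3`, an agent with a strong track record in the requested phase can outrank one with marginally higher raw similarity.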
#### Selection Ranking
```python
# Track which alternative agents were considered
completer = context.track_skill_invocation(
    skill_name="RefactoringAgent",  # Selected agent
    user_query="Fix complexity",
    alternatives_considered=["PerformanceAgent", "DRYAgent"],
    selection_rank=1,  # First choice
)
```
### Troubleshooting
**Skills tracking not working**:
```bash
# Check if session-buddy is available
python -c "from session_buddy.core.skills_tracker import get_session_tracker; print('OK')"
# Verify configuration
python -c "from crackerjack.config import CrackerjackSettings; s = CrackerjackSettings.load(); print(s.skills)"
# Check database
ls -la .session-buddy/skills.db
```
**MCP connection failures**:
```bash
# Verify MCP server is running
python -m crackerjack status
# Test MCP connection
curl http://localhost:8678/health
# Check fallback to direct tracking
# MCP failures automatically fall back to direct API
```
**Migration issues**:
```bash
# Validate JSON before migration
python scripts/validate_skills_migration.py --json-only
# Run with verbose output
python scripts/migrate_skills_to_sessionbuddy.py --verbose
# Rollback if needed
python scripts/rollback_skills_migration.py --force
```
### See Also
- **CLAUDE.md**: Complete developer documentation with integration examples
- **`docs/features/SKILLS_INTEGRATION.md`**: Detailed feature documentation
- **`scripts/migrate_skills_to_sessionbuddy.py`**: Migration tool source code
## Core Workflow
**Enhanced three-stage quality enforcement with intelligent code cleaning:**
1. **Fast Hooks** (~5 seconds): Essential formatting and security checks
1. **🧹 Code Cleaning Stage** (between fast and comprehensive): AI-powered cleanup for optimal comprehensive hook results
1. **Comprehensive Hooks** (~30 seconds): Complete static analysis on cleaned code
**Optimal Execution Order**:
- **Fast hooks first** → **retry once if any fail** (formatting fixes cascade to other issues)
- **Code cleaning** → Remove TODO detection, apply standardized patterns
- **Post-cleaning fast hooks sanity check** → Ensure cleaning didn't introduce issues
- **Full test suite** → Collect ALL test failures (don't stop on first)
- **Comprehensive hooks** → Collect ALL quality issues on clean codebase
- **AI batch fixing** → Process all collected issues intelligently
**With AI integration:**
- `--ai-fix` flag enables automatic error resolution with specialized sub-agents
- MCP server allows AI agents to run crackerjack commands with real-time progress tracking
- Structured error output for programmatic fixes with confidence scoring
- Advanced-grade regex pattern system ensures safe automated text transformations
## Core Features
### Project Management
- **Effortless Project Setup:** Initializes new Python projects with a standard directory structure, `pyproject.toml`, and essential configuration files
- **UV Integration:** Manages dependencies and virtual environments using [UV](https://github.com/astral-sh/uv) for lightning-fast package operations
- **Dependency Management:** Automatically detects and manages project dependencies
### Code Quality
- **Automated Code Cleaning:** Removes unnecessary docstrings, line comments, and trailing whitespace
- **Consistent Code Formatting:** Enforces a unified style using [Ruff](https://github.com/astral-sh/ruff), the lightning-fast Python linter and formatter
- **Comprehensive Quality Hooks:** Direct tool invocation with no wrapper overhead - runs Python tools, Rust analyzers, and security scanners efficiently
- **Interactive Checks:** Supports interactive quality checks (like `refurb`, `bandit`, and `pyright`) to fix issues in real-time
- **Static Type Checking:** Enforces type safety with Zuban, crackerjack's drop-in replacement for Pyright
### Testing & Coverage Ratchet System
- **Built-in Testing:** Automatically runs tests using `pytest` with intelligent parallelization
- **Coverage Ratchet:** Revolutionary coverage system that targets 100% - coverage can only increase, never decrease
- **Milestone Celebrations:** Progress tracking with milestone achievements (15%, 20%, 25%, ... → 100%)
- **No Arbitrary Limits:** Replaced traditional hard limits with continuous improvement toward perfection
- **Visual Progress:** Rich terminal displays showing journey to 100% coverage
- **Benchmark Testing:** Performance regression detection and monitoring
- **Easy Version Bumping:** Provides commands to bump the project version (patch, minor, or major)
- **Simplified Publishing:** Automates publishing to PyPI via UV with enhanced authentication
#### Coverage Ratchet Philosophy
🎯 **Target: 100% Coverage** - Not an arbitrary number, but true comprehensive testing
📈 **Continuous Improvement** - Each test run can only maintain or improve coverage
🏆 **Milestone System** - Celebrate achievements at 15%, 25%, 50%, 75%, 90%, and 100%
🚫 **No Regression** - Once you achieve a coverage level, you can't go backward
```bash
# Show coverage progress
python -m crackerjack run --coverage-report
# Run tests with ratchet system
python -m crackerjack run --run-tests
# Example output:
# 🎉 Coverage improved from 10.11% to 15.50%!
# 🏆 Milestone achieved: 15% coverage!
# 📈 Progress: [███░░░░░░░░░░░░░░░░░] 15.50% → 100%
# 🎯 Next milestone: 20% (+4.50% needed)
```
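The ratchet rule itself reduces to a few lines (a minimal sketch, not crackerjack's implementation):

```python
# Minimal sketch of the coverage ratchet: coverage may hold or rise, never fall.
def check_ratchet(new_coverage: float, baseline: float) -> tuple[bool, float]:
    """Return (passed, new_baseline); any regression fails the run."""
    if new_coverage < baseline:
        return False, baseline  # regression: fail, keep the old baseline
    return True, max(baseline, new_coverage)  # the ratchet only moves up
```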
### Git Integration
- **Intelligent Commit Messages:** Analyzes git changes and suggests descriptive commit messages based on file types and modifications
- **Commit and Push:** Commits and pushes your changes with standardized commit messages
- **Pull Request Creation:** Creates pull requests to upstream repositories on GitHub or GitLab
- **Git Hook Integration:** Ensures code quality before commits with fast, direct tool execution
## ⚡ legacy Architecture & Performance

*Complete execution pipeline: CLI → Workflow Selection → Fast/Comprehensive Hooks → Tests → AI Batch Fixing*
Crackerjack is built on the **legacy DI framework**, providing advanced-grade dependency injection, intelligent caching, and parallel execution.
### What is legacy?
[legacy](https://github.com/lesleslie/crackerjack) is a lightweight dependency injection framework that enables:
- **Module-level registration** via `depends.set()` for clean dependency management
- **Runtime-checkable protocols** ensuring type safety across all components
- **Async-first design** with lifecycle management and timeout strategies
- **Clean separation of concerns** through adapters, orchestrators, and services
### Architecture Overview
**legacy Workflow Engine (Default since Phase 4.2)**
```
User Command → BasicWorkflowEngine (legacy)
↓
Workflow Selection (Standard/Fast/Comprehensive/Test)
↓
Action Handlers (run_fast_hooks, run_code_cleaning, run_comprehensive_hooks, run_test_workflow)
↓
asyncio.to_thread() for non-blocking execution
↓
WorkflowPipeline (DI-injected via context)
↓
Phase Execution (_run_fast_hooks_phase, _run_comprehensive_hooks_phase, etc.)
↓
HookManager + TestManager (Manager Layer: 80% compliant)
↓
Direct adapter.check() calls (No subprocess overhead)
↓
ToolProxyCacheAdapter (Content-based caching, 70% hit rate)
↓
Parallel Execution (Up to 11 concurrent adapters)
↓
Results Aggregation with real-time console output
```
**Legacy Orchestrator Path** (opt-out with `--use-legacy-orchestrator`)
```
User Command → WorkflowOrchestrator (Legacy)
↓
SessionCoordinator (@depends.inject + protocols)
↓
PhaseCoordinator (Orchestration Layer)
↓
HookManager + TestManager
↓
[Same execution path as legacy from here...]
```
**Architecture Compliance (Phase 2-4.2 Audit Results)**
| Layer | Compliance | Status | Notes |
|-------|-----------|--------|-------|
| **legacy Workflows** | 95% | ✅ Production | **Default since Phase 4.2** - Real-time output, non-blocking |
| **CLI Handlers** | 90% | ✅ Excellent | Gold standard: `@depends.inject` + `Inject[Protocol]` |
| **Services** | 95% | ✅ Excellent | Phase 3 refactored, consistent constructors |
| **Managers** | 80% | ✅ Good | Protocol-based injection, minor improvements needed |
| **Legacy Orchestration** | 70% | ⚠️ Opt-out | Available with `--use-legacy-orchestrator` |
| **Coordinators** | 70% | ⚠️ Mixed | Phase coordinators ✅, async needs standardization |
| **Agent System** | 40% | 📋 Legacy | Uses `AgentContext` pattern (predates legacy) |
**Key Architectural Patterns**
```python
# ✅ GOLD STANDARD Pattern (from CLI Handlers)
from legacy.depends import depends, Inject

from crackerjack.models.protocols import Console


@depends.inject
def setup_environment(console: Inject[Console] = None, verbose: bool = False) -> None:
    """Protocol-based injection with @depends.inject decorator."""
    console.print("[green]Environment ready[/green]")


# ❌ ANTI-PATTERN: Avoid manual fallbacks
def setup_environment_wrong(console: Console | None = None) -> None:
    console = console or Console()  # Bypasses DI container
```
### Performance Benefits
| Metric | Legacy | legacy Workflows (Phase 4.2) | Improvement |
|--------|--------|----------------------------|-------------|
| **Fast Hooks** | ~45s | ~48s | Comparable |
| **Full Workflow** | ~60s | ~90s | Real-time output |
| **Console Output** | Buffered | **Real-time streaming** | UX improvement |
| **Event Loop** | Sync (blocking) | **Async (non-blocking)** | Responsive |
| **Cache Hit Rate** | 0% | **70%** | New capability |
| **Concurrent Adapters** | 1 | **11** | 11x parallelism |
| **DI Context** | Manual | **Protocol-based injection** | Type safety |
### Core Components
#### 1. Quality Assurance Adapters
**Location:** `crackerjack/adapters/`
legacy-registered adapters for all quality checks:
- **Format:** Ruff formatting, mdformat
- **Lint:** Codespell, complexity analysis
- **Security:** Bandit security scanning, Gitleaks secret detection
- **Type:** Zuban type checking (20-200x faster than Pyright)
- **Refactor:** Creosote (unused dependencies), Refurb (Python idioms)
- **Complexity:** Complexipy analysis
- **Utility:** Various validation checks
- **AI:** Claude integration for intelligent auto-fixing
#### 2. Hook Orchestrator
**Location:** `crackerjack/orchestration/hook_orchestrator.py`
Features:
- **Dual execution mode:** Legacy (pre-commit CLI) + legacy (direct adapters)
- **Dependency resolution:** Intelligent hook ordering (e.g., format before lint)
- **Adaptive strategies:** Fast, comprehensive, or dependency-aware execution
- **Graceful degradation:** Timeout strategies prevent hanging
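Dependency-aware ordering of this kind is exactly what the stdlib's `graphlib` provides; a minimal sketch (hook names are illustrative):

```python
from graphlib import TopologicalSorter

# Sketch of dependency-aware hook ordering ("format before lint"):
# graphlib resolves a valid execution order from declared prerequisites.
def order_hooks(dependencies: dict[str, set[str]]) -> list[str]:
    """dependencies maps each hook to the hooks that must run first."""
    return list(TopologicalSorter(dependencies).static_order())
```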
#### 3. Cache Adapters
**Location:** `crackerjack/orchestration/cache/`
Two caching strategies:
- **ToolProxyCache:** Content-based caching with file hash verification
- **MemoryCache:** In-memory LRU cache for testing
Benefits:
- **70% cache hit rate** in typical workflows
- **Content-aware invalidation:** Only re-runs when files actually change
- **Configurable TTL:** Default 3600s (1 hour)
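Content-based caching of this kind can be sketched with a hash-keyed store (a simplification of `ToolProxyCache`; the real adapter's interface may differ):

```python
import hashlib
import time

# Sketch: cache tool results keyed by a hash of file contents, so edits
# invalidate entries while untouched files keep hitting the cache.
class ContentCache:
    def __init__(self, ttl: float = 3600.0) -> None:
        self._store: dict[str, tuple[float, object]] = {}
        self.ttl = ttl  # default 3600s, matching the documented TTL

    @staticmethod
    def key(tool: str, content: bytes) -> str:
        return tool + ":" + hashlib.sha256(content).hexdigest()

    def get(self, tool: str, content: bytes):
        entry = self._store.get(self.key(tool, content))
        if entry and time.monotonic() - entry[0] < self.ttl:
            return entry[1]
        return None  # miss: content changed or the entry expired

    def put(self, tool: str, content: bytes, result) -> None:
        self._store[self.key(tool, content)] = (time.monotonic(), result)
```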
#### 4. MCP Server Integration
**Location:** `crackerjack/mcp/`
legacy-registered services:
- **MCPServerService:** FastMCP server for AI agent integration
- **ErrorCache:** Pattern tracking for AI fix recommendations
- **JobManager:** WebSocket job tracking and progress streaming
- **WebSocketSecurityConfig:** Security hardening (localhost-only, rate limiting)
### Mi | text/markdown | null | Les Leslie <les@wedgwoodwebworks.com> | null | null | BSD-3-CLAUSE | null | [
"Operating System :: POSIX",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Testing",
"Topic :: Utilities",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"aiofiles>=25.1.0",
"aiohttp>=3.13.2",
"bandit>=1.9.2",
"codespell>=2.4.1",
"complexipy>=5.1.0",
"creosote>=4.1.0",
"docstring-to-markdown>=0.17",
"fastapi>=0.124.2",
"fastmcp>=2.13.2",
"hatchling>=1.28.0",
"hypothesis>=6.0.0",
"ipython>=8.0.0",
"keyring>=25.7.0",
"linkcheckmd>=1.4.0",
"mcp-common>=0.3.6",
"mcp>=1.23.3",
"mdformat-ruff>=0.1.3",
"mdformat>=1.0.0",
"nltk>=3.9.2",
"numpy>=2.3.5",
"oneiric>=0.3.2",
"pip-audit>=2.10.0",
"pydantic>=2.12.5",
"pyleak>=0.1.14",
"pyscn==1.5.0",
"pytest-asyncio>=1.3.0",
"pytest-benchmark>=5.2.3",
"pytest-cov>=7.0.0",
"pytest-mock>=3.15.1",
"pytest-snob>=0.1.14",
"pytest-timeout>=2.4.0",
"pytest-xdist>=3.8.0",
"pytest>=9.0.2",
"pyyaml>=6.0.3",
"refurb>=2.2.0",
"rich>=14.2.0",
"ruff>=0.14.8",
"scikit-learn>=1.8.0",
"scipy-stubs>=1.16.3.3",
"scipy>=1.16.3",
"skylos>=2.5.3",
"structlog>=25.5.0",
"tomli-w>=1.2.0",
"transformers>=5.1.0",
"typer>=0.20.0",
"types-aiofiles>=25.1.0.20251011",
"types-psutil>=7.1.3.20251211",
"types-pyyaml>=6.0.12.20250915",
"uv-bump>=0.3.2",
"uv>=0.9.17",
"uvicorn>=0.38.0",
"vulture>=2.14",
"watchdog>=6.0.0",
"websockets>=15.0.1",
"zuban>=0.3.0",
"sentence-transformers>=2.2.0; extra == \"neural\"",
"torch>=2.0.0; (sys_platform == \"darwin\" and platform_machine == \"arm64\") and extra == \"neural\"",
"torch>=2.0.0; (sys_platform == \"darwin\" and platform_machine == \"x86_64\") and extra == \"neural\"",
"torch>=2.0.0; sys_platform == \"linux\" and extra == \"neural\"",
"torch>=2.0.0; sys_platform == \"win32\" and extra == \"neural\""
] | [] | [] | [] | [
"documentation, https://github.com/lesleslie/crackerjack",
"homepage, https://github.com/lesleslie/crackerjack",
"repository, https://github.com/lesleslie/crackerjack"
] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T18:53:47.263945 | crackerjack-0.54.2-py3-none-any.whl | 1,246,174 | 47/0e/e9b652f7c0520ae52ecb3fac6aec9cc3d70d9f7888e3b47e2190f3c83441/crackerjack-0.54.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 0717535a264fa3934d5c179e4d001d8b | ef9c5dd43a9bd43417943393daa8fdbefc814a8f827231248f24ef32fa0d95c9 | 470ee9b652f7c0520ae52ecb3fac6aec9cc3d70d9f7888e3b47e2190f3c83441 | null | [
"LICENSE"
] | 176 |
2.4 | linkedin-scraper-mcp | 4.1.2 | MCP server for LinkedIn profile, company, and job scraping with Claude AI integration. Supports direct profile/company/job URL scraping with secure credential storage. | # LinkedIn MCP Server
<p align="left">
<a href="https://pypi.org/project/linkedin-scraper-mcp/" target="_blank"><img src="https://img.shields.io/pypi/v/linkedin-scraper-mcp?color=blue" alt="PyPI"></a>
<a href="https://github.com/stickerdaniel/linkedin-mcp-server/actions/workflows/ci.yml" target="_blank"><img src="https://github.com/stickerdaniel/linkedin-mcp-server/actions/workflows/ci.yml/badge.svg?branch=main" alt="CI Status"></a>
<a href="https://github.com/stickerdaniel/linkedin-mcp-server/actions/workflows/release.yml" target="_blank"><img src="https://github.com/stickerdaniel/linkedin-mcp-server/actions/workflows/release.yml/badge.svg?branch=main" alt="Release"></a>
<a href="https://github.com/stickerdaniel/linkedin-mcp-server/blob/main/LICENSE" target="_blank"><img src="https://img.shields.io/badge/License-Apache%202.0-brightgreen?labelColor=32383f" alt="License"></a>
</p>
Through this LinkedIn MCP server, AI assistants like Claude can connect to your LinkedIn. Access profiles and companies, search for jobs, or get job details.
## Installation Methods
[](#-uvx-setup-recommended---universal)
[](#-docker-setup)
[](#-claude-desktop-dxt-extension)
[](#-local-setup-develop--contribute)
<https://github.com/user-attachments/assets/eb84419a-6eaf-47bd-ac52-37bc59c83680>
## Usage Examples
```
Research the background of this candidate https://www.linkedin.com/in/stickerdaniel/
```
```
Get this company profile for partnership discussions https://www.linkedin.com/company/inframs/
```
```
Suggest improvements for my CV to target this job posting https://www.linkedin.com/jobs/view/4252026496
```
```
What has Anthropic been posting about recently? https://www.linkedin.com/company/anthropicresearch/
```
## Features & Tool Status
| Tool | Description | Status |
|------|-------------|--------|
| `get_person_profile` | Get profile info with explicit section selection (experience, education, interests, honors, languages, contact_info) | Working |
| `get_company_profile` | Extract company information with explicit section selection (posts, jobs) | Working |
| `get_company_posts` | Get recent posts from a company's LinkedIn feed | Working |
| `search_jobs` | Search for jobs with keywords and location filters | Working |
| `search_people` | Search for people by keywords and location | Working |
| `get_job_details` | Get detailed information about a specific job posting | Working |
| `close_session` | Close browser session and clean up resources | Working |
> [!IMPORTANT]
> **Breaking change:** LinkedIn recently made some changes to prevent scraping. The newest version uses [Patchright](https://github.com/Kaliiiiiiiiii-Vinyzu/patchright-python) with persistent browser profiles instead of Playwright with session files. Old `session.json` files and `LINKEDIN_COOKIE` env vars are no longer supported. Run `--login` again to create a new profile + cookie file that can be mounted in docker. 02/2026
<br/>
<br/>
## 🚀 uvx Setup (Recommended - Universal)
**Prerequisites:** Install uv and run `uvx patchright install chromium` to set up the browser.
### Installation
**Step 1: Create a session (first time only)**
```bash
uvx linkedin-scraper-mcp --login
```
This opens a browser for you to log in manually (5 minute timeout for 2FA, captcha, etc.). The browser profile is saved to `~/.linkedin-mcp/profile/`.
**Step 2: Client Configuration:**
```json
{
"mcpServers": {
"linkedin": {
"command": "uvx",
"args": ["linkedin-scraper-mcp"]
}
}
}
```
> [!NOTE]
> Sessions may expire over time. If you encounter authentication issues, run `uvx linkedin-scraper-mcp --login` again
### uvx Setup Help
<details>
<summary><b>🔧 Configuration</b></summary>
**Transport Modes:**
- **Default (stdio)**: Standard communication for local MCP servers
- **Streamable HTTP**: For web-based MCP server
- If no transport is specified, the server defaults to `stdio`
- An interactive terminal without explicit transport shows a chooser prompt
**CLI Options:**
- `--login` - Open browser to log in and save persistent profile
- `--no-headless` - Show browser window (useful for debugging scraping issues)
- `--log-level {DEBUG,INFO,WARNING,ERROR}` - Set logging level (default: WARNING)
- `--transport {stdio,streamable-http}` - Optional: force transport mode (default: stdio)
- `--host HOST` - HTTP server host (default: 127.0.0.1)
- `--port PORT` - HTTP server port (default: 8000)
- `--path PATH` - HTTP server path (default: /mcp)
- `--logout` - Clear stored LinkedIn browser profile
- `--timeout MS` - Browser timeout for page operations in milliseconds (default: 5000)
- `--user-data-dir PATH` - Path to persistent browser profile directory (default: ~/.linkedin-mcp/profile)
- `--chrome-path PATH` - Path to Chrome/Chromium executable (for custom browser installations)
**Basic Usage Examples:**
```bash
# Create a session interactively
uvx linkedin-scraper-mcp --login
# Run with debug logging
uvx linkedin-scraper-mcp --log-level DEBUG
```
**HTTP Mode Example (for web-based MCP clients):**
```bash
uvx linkedin-scraper-mcp --transport streamable-http --host 127.0.0.1 --port 8080 --path /mcp
```
Runtime server logs are emitted by FastMCP/Uvicorn.
**Test with mcp inspector:**
1. Install and run the MCP inspector: `bunx @modelcontextprotocol/inspector`
2. Click pre-filled token url to open the inspector in your browser
3. Select `Streamable HTTP` as `Transport Type`
4. Set `URL` to `http://localhost:8080/mcp`
5. Connect
6. Test tools
</details>
<details>
<summary><b>❗ Troubleshooting</b></summary>
**Installation issues:**
- Ensure you have uv installed: `curl -LsSf https://astral.sh/uv/install.sh | sh`
- Check uv version: `uv --version` (should be 0.4.0 or higher)
**Session issues:**
- Browser profile is stored at `~/.linkedin-mcp/profile/`
- Make sure you have only one active LinkedIn session at a time
**Login issues:**
- LinkedIn may require a login confirmation in the LinkedIn mobile app for `--login`
- You might get a captcha challenge if you log in frequently. Run `uvx linkedin-scraper-mcp --login`, which opens a browser where you can solve it manually.
**Timeout issues:**
- If pages fail to load or elements aren't found, try increasing the timeout: `--timeout 10000`
- Users on slow connections may need higher values (e.g., 15000-30000ms)
- Can also set via environment variable: `TIMEOUT=10000`
**Custom Chrome path:**
- If Chrome is installed in a non-standard location, use `--chrome-path /path/to/chrome`
- Can also set via environment variable: `CHROME_PATH=/path/to/chrome`
</details>
<br/>
<br/>
## 🐳 Docker Setup
**Prerequisites:** Make sure you have [Docker](https://www.docker.com/get-started/) installed and running.
### Authentication
Docker runs headless (no browser window), so you need to create a browser profile locally first and mount it into the container.
**Step 1: Create profile using uvx (one-time setup)**
```bash
uvx linkedin-scraper-mcp --login
```
This opens a browser window where you log in manually (with a 5-minute timeout for 2FA, captcha, etc.). The browser profile is saved to `~/.linkedin-mcp/profile/`.
**Step 2: Configure Claude Desktop with Docker**
```json
{
"mcpServers": {
"linkedin": {
"command": "docker",
"args": [
"run", "--rm", "-i",
"-v", "~/.linkedin-mcp:/home/pwuser/.linkedin-mcp",
"stickerdaniel/linkedin-mcp-server:latest"
]
}
}
}
```
> [!NOTE]
> Sessions may expire over time. If you encounter authentication issues, run `uvx linkedin-scraper-mcp --login` again locally.
> [!NOTE]
> **Why can't I run `--login` in Docker?** Docker containers don't have a display server. Create a profile on your host using the [uvx setup](#-uvx-setup-recommended---universal) and mount it into Docker.
### Docker Setup Help
<details>
<summary><b>🔧 Configuration</b></summary>
**Transport Modes:**
- **Default (stdio)**: Standard communication for local MCP servers
- **Streamable HTTP**: For a web-based MCP server
- If no transport is specified, the server defaults to `stdio`
- An interactive terminal without explicit transport shows a chooser prompt
**CLI Options:**
- `--log-level {DEBUG,INFO,WARNING,ERROR}` - Set logging level (default: WARNING)
- `--transport {stdio,streamable-http}` - Optional: force transport mode (default: stdio)
- `--host HOST` - HTTP server host (default: 127.0.0.1)
- `--port PORT` - HTTP server port (default: 8000)
- `--path PATH` - HTTP server path (default: /mcp)
- `--logout` - Clear stored LinkedIn browser profile
- `--timeout MS` - Browser timeout for page operations in milliseconds (default: 5000)
- `--user-data-dir PATH` - Path to persistent browser profile directory (default: ~/.linkedin-mcp/profile)
- `--chrome-path PATH` - Path to Chrome/Chromium executable (rarely needed in Docker)
> [!NOTE]
> `--login` and `--no-headless` are not available in Docker (no display server). Use the [uvx setup](#-uvx-setup-recommended---universal) to create profiles.
**HTTP Mode Example (for web-based MCP clients):**
```bash
docker run -it --rm \
-v ~/.linkedin-mcp:/home/pwuser/.linkedin-mcp \
-p 8080:8080 \
stickerdaniel/linkedin-mcp-server:latest \
--transport streamable-http --host 0.0.0.0 --port 8080 --path /mcp
```
Runtime server logs are emitted by FastMCP/Uvicorn.
**Test with mcp inspector:**
1. Install and run the MCP Inspector: `bunx @modelcontextprotocol/inspector`
2. Click the pre-filled token URL to open the Inspector in your browser
3. Select `Streamable HTTP` as `Transport Type`
4. Set `URL` to `http://localhost:8080/mcp`
5. Connect
6. Test tools
</details>
<details>
<summary><b>❗ Troubleshooting</b></summary>
**Docker issues:**
- Make sure [Docker](https://www.docker.com/get-started/) is installed
- Check if Docker is running: `docker ps`
**Login issues:**
- Make sure you have only one active LinkedIn session at a time
- LinkedIn may require a login confirmation in the LinkedIn mobile app for `--login`
- You might get a captcha challenge if you log in frequently. Run `uvx linkedin-scraper-mcp --login`, which opens a browser where you can solve captchas manually. See the [uvx setup](#-uvx-setup-recommended---universal) for prerequisites.
**Timeout issues:**
- If pages fail to load or elements aren't found, try increasing the timeout: `--timeout 10000`
- Users on slow connections may need higher values (e.g., 15000-30000ms)
- Can also set via environment variable: `TIMEOUT=10000`
**Custom Chrome path:**
- If Chrome is installed in a non-standard location, use `--chrome-path /path/to/chrome`
- Can also set via environment variable: `CHROME_PATH=/path/to/chrome`
</details>
<br/>
<br/>
## 📦 Claude Desktop (DXT Extension)
**Prerequisites:** [Claude Desktop](https://claude.ai/download) and [Docker](https://www.docker.com/get-started/) installed & running
**One-click installation** for Claude Desktop users:
1. Download the [DXT extension](https://github.com/stickerdaniel/linkedin-mcp-server/releases/latest)
2. Double-click to install into Claude Desktop
3. Create a session: `uvx linkedin-scraper-mcp --login`
> [!NOTE]
> Sessions may expire over time. If you encounter authentication issues, run `uvx linkedin-scraper-mcp --login` again.
### DXT Extension Setup Help
<details>
<summary><b>❗ Troubleshooting</b></summary>
**First-time setup timeout:**
- Claude Desktop has a ~60 second connection timeout
- If the Docker image isn't cached, the pull may exceed this timeout
- **Fix:** Pre-pull the image before first use:
```bash
docker pull stickerdaniel/linkedin-mcp-server:2.3.0
```
- Then restart Claude Desktop
**Docker issues:**
- Make sure [Docker](https://www.docker.com/get-started/) is installed
- Check if Docker is running: `docker ps`
**Login issues:**
- Make sure you have only one active LinkedIn session at a time
- LinkedIn may require a login confirmation in the LinkedIn mobile app for `--login`
- You might get a captcha challenge if you log in frequently. Run `uvx linkedin-scraper-mcp --login`, which opens a browser where you can solve captchas manually. See the [uvx setup](#-uvx-setup-recommended---universal) for prerequisites.
**Timeout issues:**
- If pages fail to load or elements aren't found, try increasing the timeout: `--timeout 10000`
- Users on slow connections may need higher values (e.g., 15000-30000ms)
- Can also set via environment variable: `TIMEOUT=10000`
</details>
<br/>
<br/>
## 🐍 Local Setup (Develop & Contribute)
Contributions are welcome! Please [open an issue](https://github.com/stickerdaniel/linkedin-mcp-server/issues) first to discuss the feature or bug fix before submitting a PR. This helps align on the approach before any code is written.
**Prerequisites:** [Git](https://git-scm.com/downloads) and [uv](https://docs.astral.sh/uv/) installed
### Installation
```bash
# 1. Clone repository
git clone https://github.com/stickerdaniel/linkedin-mcp-server
cd linkedin-mcp-server
# 2. Install the uv package manager (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh
# 3. Install dependencies
uv sync
uv sync --group dev
# 4. Install Patchright browser
uv run patchright install chromium
# 5. Install pre-commit hooks
uv run pre-commit install
# 6. Create a session (first time only)
uv run -m linkedin_mcp_server --login
# 7. Start the server
uv run -m linkedin_mcp_server
```
### Local Setup Help
<details>
<summary><b>🔧 Configuration</b></summary>
**CLI Options:**
- `--login` - Open browser to log in and save persistent profile
- `--no-headless` - Show browser window (useful for debugging scraping issues)
- `--log-level {DEBUG,INFO,WARNING,ERROR}` - Set logging level (default: WARNING)
- `--transport {stdio,streamable-http}` - Optional: force transport mode (default: stdio)
- `--host HOST` - HTTP server host (default: 127.0.0.1)
- `--port PORT` - HTTP server port (default: 8000)
- `--path PATH` - HTTP server path (default: /mcp)
- `--logout` - Clear stored LinkedIn browser profile
- `--timeout MS` - Browser timeout for page operations in milliseconds (default: 5000)
- `--status` - Check if current session is valid and exit
- `--user-data-dir PATH` - Path to persistent browser profile directory (default: ~/.linkedin-mcp/profile)
- `--slow-mo MS` - Delay between browser actions in milliseconds (default: 0, useful for debugging)
- `--user-agent STRING` - Custom browser user agent
- `--viewport WxH` - Browser viewport size (default: 1280x720)
- `--chrome-path PATH` - Path to Chrome/Chromium executable (for custom browser installations)
- `--help` - Show help
> **Note:** Most CLI options have environment variable equivalents. See `.env.example` for details.
**HTTP Mode Example (for web-based MCP clients):**
```bash
uv run -m linkedin_mcp_server --transport streamable-http --host 127.0.0.1 --port 8000 --path /mcp
```
**Claude Desktop:**
```json
{
"mcpServers": {
"linkedin": {
"command": "uv",
"args": ["--directory", "/path/to/linkedin-mcp-server", "run", "-m", "linkedin_mcp_server"]
}
}
}
```
`stdio` is used by default for this config.
</details>
<details>
<summary><b>❗ Troubleshooting</b></summary>
**Login issues:**
- Make sure you have only one active LinkedIn session at a time
- LinkedIn may require a login confirmation in the LinkedIn mobile app for `--login`
- You might get a captcha challenge if you log in frequently. The `--login` command opens a browser where you can solve it manually.
**Scraping issues:**
- Use `--no-headless` to see browser actions and debug scraping problems
- Add `--log-level DEBUG` to see more detailed logging
**Session issues:**
- Browser profile is stored at `~/.linkedin-mcp/profile/`
- Use `--logout` to clear the profile and start fresh
**Python/Patchright issues:**
- Check Python version: `python --version` (should be 3.12+)
- Reinstall Patchright: `uv run patchright install chromium`
- Reinstall dependencies: `uv sync --reinstall`
**Timeout issues:**
- If pages fail to load or elements aren't found, try increasing the timeout: `--timeout 10000`
- Users on slow connections may need higher values (e.g., 15000-30000ms)
- Can also set via environment variable: `TIMEOUT=10000`
**Custom Chrome path:**
- If Chrome is installed in a non-standard location, use `--chrome-path /path/to/chrome`
- Can also set via environment variable: `CHROME_PATH=/path/to/chrome`
</details>
<br/>
<br/>
## Acknowledgements
Built with [FastMCP](https://gofastmcp.com/) and [Patchright](https://github.com/Kaliiiiiiiiii-Vinyzu/patchright-python).
⚠️ Use in accordance with [LinkedIn's Terms of Service](https://www.linkedin.com/legal/user-agreement). Web scraping may violate LinkedIn's terms. This tool is for personal use only.
## License
This project is licensed under the Apache 2.0 license.
<br>
| text/markdown | null | Daniel Sticker <daniel@sticker.name> | null | null | null | linkedin, mcp, model-context-protocol, scraper, ai, automation, llm, anthropic, claude | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Environment :: Console",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"fastmcp>=2.14.0",
"inquirer>=3.4.0",
"patchright>=1.40.0",
"python-dotenv>=1.1.1"
] | [] | [] | [] | [
"Homepage, https://github.com/stickerdaniel/linkedin-mcp-server",
"Documentation, https://github.com/stickerdaniel/linkedin-mcp-server#readme",
"Repository, https://github.com/stickerdaniel/linkedin-mcp-server",
"Issues, https://github.com/stickerdaniel/linkedin-mcp-server/issues",
"Changelog, https://github.com/stickerdaniel/linkedin-mcp-server/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:53:35.300296 | linkedin_scraper_mcp-4.1.2.tar.gz | 51,242 | 92/28/97c4f460cc95ce8c9a134c3b7cf6758030ca118b840b53b69ae37eede6f4/linkedin_scraper_mcp-4.1.2.tar.gz | source | sdist | null | false | 87739695dd0dca8d856a4594766cdda5 | 4560c9507912cb1ef95f4c118c143d9616eeb05c11cd755017e89456a66d3a7f | 922897c4f460cc95ce8c9a134c3b7cf6758030ca118b840b53b69ae37eede6f4 | Apache-2.0 | [
"LICENSE"
] | 410 |
2.4 | lantern-p2p | 1.0.3 | Lantern - A P2P file sharing system with TUI dashboard | ```
_ _
| | __ _ _ _| |_ ___ _ _ _ _
| |__/ _` | ' \ _/ -_) '_| ' \
|____\__,_|_||_\__\___|_| |_||_|
```
A peer-to-peer file sharing system with a beautiful terminal UI (TUI) dashboard.


## Features
- **P2P File Sharing** - Share files directly between computers on the same network
- **Auto Discovery** - Automatically discovers peers on the LAN via UDP broadcast
- **Beautiful TUI** - Modern terminal interface built with Textual
- **Dual Mode** - Both CLI and TUI modes available
- **Light/Dark Themes** - Toggle between color schemes
- **Progress Bars** - Visual feedback for large file transfers
- **Notifications** - Toast notifications for operation completion
- **Path Safety** - Built-in protection against path traversal attacks
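The path-safety feature mentioned above guards against requests like `../../etc/passwd` escaping the shared directory. A minimal sketch of the idea (illustrative only, not Lantern's actual implementation; `safe_resolve` is a hypothetical helper name):

```python
from pathlib import Path


def safe_resolve(shared_dir: str, requested: str) -> Path:
    """Resolve `requested` inside `shared_dir`, rejecting traversal attempts."""
    base = Path(shared_dir).resolve()
    target = (base / requested).resolve()
    try:
        # Raises ValueError if `target` falls outside `base`
        target.relative_to(base)
    except ValueError:
        raise ValueError(f"path traversal attempt: {requested!r}")
    return target
```

Resolving both paths before comparing them defeats `..` segments and symlinks inside the requested name.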
## Installation
### From PyPI (when published)
```bash
pip install lantern-p2p
```
### From Source
```bash
git clone https://github.com/shx-dow/lantern.git
cd lantern
pip install -e .
```
## Usage
### TUI Mode (Default)
Launch the terminal dashboard (beautiful visual interface):
```bash
lantern
```
Or explicitly:
```bash
lantern --tui
```
### CLI Mode
Use the command-line interface (text-based commands):
```bash
lantern --cli
```
Available CLI commands:
- `peers` - Show discovered peers
- `list <host[:port]>` - List files on remote peer
- `download <host[:port]> <file>` - Download a file
- `upload <host[:port]> <path>` - Upload a file
- `delete <host[:port]> <file>` - Delete remote file
- `myfiles` - List your shared files
- `help` - Show help
- `quit` - Exit
### Custom Port
```bash
lantern --port 6000
```
## Key Bindings (TUI Mode)
| Key | Action |
|-----|--------|
| `F1` | Show help |
| `F5` | Refresh files |
| `t` | Toggle theme |
| `u` | Upload file |
| `d` | Download file |
| `x` | Delete file |
| `Tab` | Cycle focus |
| `q` | Quit |
## Configuration
Shared files are stored in the `shared_files/` directory by default. This directory is created automatically when you first run Lantern.
## Requirements
- Python 3.8 or higher
- textual >= 0.50.0
- psutil >= 5.9.0
## Architecture
```
lantern/
├── config.py # Configuration constants
├── protocol.py # Message framing & file transfer protocol
├── discovery.py # UDP peer discovery
├── server.py # TCP file server
├── client.py # TCP client operations
├── peer.py # CLI entry point
├── tui.py # Textual TUI dashboard
├── main.py # Package entry point
└── styles/ # CSS stylesheets
└── lantern.css
```
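As a rough illustration of the kind of message framing `protocol.py` handles, here is a length-prefixed JSON sketch (a hypothetical format for illustration, not Lantern's actual wire protocol):

```python
import json
import struct


def pack_message(payload: dict) -> bytes:
    """Frame a message as a 4-byte big-endian length prefix plus a JSON body."""
    body = json.dumps(payload).encode("utf-8")
    return struct.pack(">I", len(body)) + body


def unpack_message(data: bytes) -> dict:
    """Inverse of pack_message: read the length prefix, then decode the body."""
    (length,) = struct.unpack(">I", data[:4])
    return json.loads(data[4:4 + length].decode("utf-8"))
```

Length-prefixing tells the receiver exactly how many bytes to read from the TCP stream before parsing, avoiding ambiguity when messages arrive split or coalesced.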
## Development
Install with dev dependencies:
```bash
pip install -e ".[dev]"
```
Run linting:
```bash
black lantern/
ruff check lantern/
```
## License
MIT License - see LICENSE file for details.
## Contributing
Contributions welcome! Please open an issue or pull request.
## Acknowledgments
Built with [Textual](https://textual.textualize.io/) for the TUI.
| text/markdown | null | shx-dow <83426772+shx-dow@users.noreply.github.com> | null | null | MIT | file-sharing, lan, p2p, transfer, tui | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Communications :: File Sharing",
"Topic :: Internet :: File Transfer Protocol (FTP)",
"Topic :: System :: Networking"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"psutil>=5.9.0",
"textual>=0.50.0",
"black; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/shx-dow/lantern",
"Documentation, https://github.com/shx-dow/lantern#readme",
"Repository, https://github.com/shx-dow/lantern",
"Issues, https://github.com/shx-dow/lantern/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:53:10.321104 | lantern_p2p-1.0.3.tar.gz | 18,585 | 08/ed/ee228d7582df1ab81b40b68013e6d37de9efe9395e753d40de335824a21c/lantern_p2p-1.0.3.tar.gz | source | sdist | null | false | 3979b63a5e4ca80a22655f3b9f3595e5 | 6e1a86afbd1d6252963b2330cce112bcfd8644459f047366fa3e473fbf09ef52 | 08edee228d7582df1ab81b40b68013e6d37de9efe9395e753d40de335824a21c | null | [] | 168 |
2.4 | session-buddy | 0.14.3 | A Session Management MCP Server for Claude Code | # Session Buddy
A Session Management MCP Server for Claude Code
[](https://github.com/lesleslie/crackerjack)
[](https://www.python.org/downloads/)


A dedicated MCP server that provides comprehensive session management functionality for Claude Code sessions across any project.
## 🌟 What Makes Session Buddy Unique?
Session Buddy isn't just another session management tool—it's an **intelligent development companion** that learns from your work and helps you work smarter across projects:
### 💡 Automatic Knowledge Capture
**Industry-First Feature**: Session Buddy automatically captures educational insights from your conversations using deterministic pattern matching—no manual note-taking required.
```markdown
Just write explanations like this:
`★ Insight ─────────────────────────────────────`
Your insight here...
`─────────────────────────────────────────────────`
And Session Buddy automatically:
✅ Extracts and stores it with semantic embeddings
✅ Prevents duplicate capture (SHA-256 hashing)
✅ Makes it searchable across all future sessions
✅ Requires zero configuration or manual effort
```
**The Result**: Build a personal knowledge base automatically while you work!
### 🌐 Cross-Project Intelligence
**Unique Capability**: Share knowledge across related projects automatically with dependency-aware search.
**Works Great For:**
- 🏗️ **Microservices**: Coordinate patterns across service boundaries
- 📦 **Monorepos**: Share insights across multiple packages/modules
- 🔗 **Multi-Repo**: Track patterns across related repositories
- 👥 **Teams**: Collaborative knowledge filtering with voting
**Example**: Fix an authentication issue in `auth-service`? Next time you work on `user-service` (which depends on `auth-service`), Session Buddy will surface that solution automatically.
### 🔒 Privacy-First Architecture
- ✅ **100% Local Processing**: No external API calls
- ✅ **Local AI Models**: ONNX embeddings run on your machine
- ✅ **Your Data Stays Yours**: Nothing leaves your system
- ✅ **Fast Performance**: \<50ms extraction, \<20ms search
______________________________________________________________________
## Features
### 🎉 NEW in Phase 4: Advanced Analytics & Integration
Session Buddy Phase 4 introduces enterprise-grade analytics, real-time monitoring, and cross-session learning:
#### 📊 Real-Time Monitoring
- **🔴 WebSocket Server** - Live dashboard streaming at 1-second intervals
- Top 10 most active skills displayed in real-time
- Performance anomaly detection with Z-score analysis
- Client subscriptions (all skills or specific skill monitoring)
- **📈 Prometheus Metrics** - Production-ready monitoring export
- 5 metric types: Counters, Histograms, Gauges
- HTTP endpoint on port 9090 for scraping
- Thread-safe updates for concurrent access
#### 🤖 Advanced Analytics
- **🎯 Predictive Models** - ML-based skill success prediction
- RandomForest classifier with 7 features
- 30-day historical training window
- Feature importance analysis
- **🧪 A/B Testing Framework** - Experiment with recommendation strategies
- Deterministic user assignment (SHA-256 hashing)
- Statistical significance testing (t-test, p < 0.05)
- Automated winner determination
- **📉 Time-Series Analysis** - Trend detection and forecasting
- Linear regression trend detection
- Hourly aggregation for dashboards
- Anomaly detection using Z-scores
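The Z-score anomaly detection mentioned above boils down to flagging points far from the mean in standard-deviation units. A minimal sketch (illustrative only, not Session Buddy's actual implementation):

```python
from statistics import mean, stdev


def zscore_anomalies(values: list, threshold: float = 3.0) -> list:
    """Return values lying more than `threshold` standard deviations from the mean."""
    if len(values) < 2:
        return []
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # no spread, nothing can be anomalous
    return [v for v in values if abs(v - mu) / sigma > threshold]
```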
#### 👥 Cross-Session Learning
- **🌍 Collaborative Filtering** - Learn from similar users
- Jaccard similarity for user matching
- Personalized recommendations
- SHA-256 privacy hashing for user IDs
- **📊 Community Baselines** - Global skill effectiveness
- Cross-user aggregation
- Percentile rankings
- User vs global comparisons
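The Jaccard similarity used for user matching is simply intersection-over-union of two users' skill sets. A quick illustration (not the library's actual code):

```python
def jaccard_similarity(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|, defined as 0.0 when both sets are empty."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)
```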
#### 🔗 Tool Integration
- **⚡ Crackerjack Integration** - Quality gate tracking
- Phase mapping to workflow stages
- Automatic failure recommendations
- ASCII workflow visualizations
- **💻 IDE Plugin Protocol** - Context-aware recommendations
- Code pattern detection (tests, imports, async)
- Language-specific skill patterns
- Keyboard shortcut management
- **🚀 CI/CD Tracking** - Pipeline analytics
- Stage-by-stage monitoring
- Bottleneck identification (< 80% success)
- JSON export for dashboards
#### 📚 Skills Taxonomy
- **🏷️ Categories** - Organized skill domains
- Code Quality, Testing, Documentation, Deployment, etc.
- 6 predefined categories
- Multi-modal skill types (code → diagnostics, testing → test_results)
- **🔗 Dependencies** - Co-occurrence patterns
- Lift score calculation
- Relationship mapping
- Workflow-aware recommendations
**Performance:**
- Real-time metrics: < 100ms
- Anomaly detection: < 200ms
- Collaborative filtering: < 200ms
- MCP tools: < 50ms
### Core Session Management
- **🚀 Session Initialization**: Complete setup with UV dependency management, project analysis, and automation tools
- **🔍 Quality Checkpoints**: Mid-session quality monitoring with workflow analysis and optimization recommendations
- **🏁 Session Cleanup**: Comprehensive cleanup with learning capture and handoff file creation
- **📊 Status Monitoring**: Real-time session status and project context analysis
- **⚡ Auto-Generated Shortcuts**: Automatically creates `/start`, `/checkpoint`, and `/end` Claude Code slash commands
### 🧠 Intelligence Features (Unique to Session Buddy)
Session Buddy includes **industry-first** intelligent knowledge capture and sharing features that transform how you work across projects:
#### 💡 Automatic Insights Capture & Injection
**What It Does:**
- Automatically extracts educational insights from your conversations using deterministic pattern matching
- Stores insights with semantic embeddings for intelligent retrieval
- Prevents duplicate capture through SHA-256 content hashing
- Makes insights available across sessions via semantic search
**How It Works:**
When you use explanatory mode (like this session!), Session Buddy automatically captures insights marked with the `★ Insight ─────` delimiter:
```markdown
Some explanation text.
`★ Insight ─────────────────────────────────────`
Always use async/await for database operations to prevent blocking the event loop
`─────────────────────────────────────────────────`
More text here.
```
**Multi-Point Capture Strategy:**
- **Checkpoint Capture**: Extracts insights during mid-session quality checkpoints
- **Session End Capture**: Additional extraction when session ends
- **Deduplication**: SHA-256 hashing prevents storing duplicate insights
- **Session-Level Tracking**: Maintains hash set across entire session
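The capture-and-dedup flow described above can be sketched roughly as follows (a simplified illustration; the regex and function name are hypothetical, not Session Buddy's actual code):

```python
import hashlib
import re

# Matches the "★ Insight" delimiter blocks described above (backticks optional)
INSIGHT_RE = re.compile(r"★ Insight ─+`?\s*\n(.*?)\n`?─+", re.DOTALL)


def extract_insights(text: str, seen_hashes: set) -> list:
    """Extract delimited insights, skipping any already seen this session."""
    insights = []
    for match in INSIGHT_RE.finditer(text):
        body = match.group(1).strip()
        digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
        if digest not in seen_hashes:  # SHA-256 dedup across the session
            seen_hashes.add(digest)
            insights.append(body)
    return insights
```

Because extraction is rule-based pattern matching, the same conversation processed at a checkpoint and again at session end yields no duplicates: the hash set carries across both passes.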
**Benefits:**
- ✅ **Zero Configuration**: Works automatically with explanatory mode
- ✅ **No Hallucination**: Rule-based extraction (not AI-generated)
- ✅ **High Quality**: Conservative capture (better to miss than to hallucinate)
- ✅ **Fast Performance**: \<50ms extraction, \<20ms semantic search
- ✅ **Privacy-First**: All processing done locally, no external APIs
**Documentation:** See [`docs/features/INSIGHTS_CAPTURE.md`](docs/features/INSIGHTS_CAPTURE.md) for complete details
______________________________________________________________________
#### 🌐 Global Intelligence & Pattern Sharing
**What It Does:**
- Share knowledge across related projects automatically
- Track project dependencies (uses, extends, references, shares_code)
- Search across all projects with dependency-aware ranking
- Coordinate microservices, monorepo modules, or related repositories
**How It Works:**
Create groups of related projects and define their relationships:
```python
# Create project group
group = ProjectGroup(
name="microservices-app",
projects=["auth-service", "user-service", "api-gateway"],
description="Authentication and user management microservices",
)
# Define dependencies
deps = [
ProjectDependency(
source_project="user-service",
target_project="auth-service",
dependency_type="uses",
description="User service depends on auth service for validation",
),
ProjectDependency(
source_project="api-gateway",
target_project="user-service",
dependency_type="extends",
description="Gateway extends user service with rate limiting",
),
]
```
**Cross-Project Search:**
- Search across related projects automatically
- Results ranked by dependency relationships
- Understand how solutions propagate across your codebase
**Benefits:**
- ✅ **Knowledge Reuse**: Solutions found in one project help with related projects
- ✅ **Dependency Awareness**: Understand how changes ripple across projects
- ✅ **Coordinated Development**: Work effectively across multiple codebases
- ✅ **Semantic Understanding**: Find patterns even when projects use different terminology
**Use Cases:**
- **Microservices**: Coordinate related services with shared patterns
- **Monorepos**: Manage multiple packages/modules in one repository
- **Multi-Repo**: Track patterns across separate but related repositories
______________________________________________________________________
## 🚀 Automatic Session Management (NEW!)
**For Git Repositories:**
- ✅ **Automatic initialization** when Claude Code connects
- ✅ **Automatic cleanup** when session ends (quit, crash, or network failure)
- ✅ **Intelligent auto-compaction** during checkpoints
- ✅ **Zero manual intervention** required
**For Non-Git Projects:**
- 📝 Use `/start` for manual initialization
- 📝 Use `/end` for manual cleanup
- 📝 Full session management features available on-demand
The server automatically detects git repositories and provides seamless session lifecycle management with crash resilience and network failure recovery. Non-git projects retain manual control for flexible workflow management.
### Session Lifecycle Visualization
```mermaid
stateDiagram-v2
[*] --> GitRepo: Claude Code Connects
[*] --> ManualInit: Non-Git Project
GitRepo --> AutoStart: Auto-detect Git
AutoStart: Initialize Session
AutoStart --> Working: Development
ManualInit --> ManualStart: User runs /start
ManualStart: Initialize Session
ManualStart --> Working: Development
state Working {
[*] --> Active
Active --> Checkpoint: /checkpoint
Checkpoint --> Active: Continue Work
Active --> Monitoring: Track Quality
Monitoring --> Active
}
Working --> AutoEnd: Disconnect/Quit
Working --> ManualEnd: User runs /end
AutoEnd: Auto Cleanup
AutoEnd --> [*]: Session Handoff
ManualEnd: Manual Cleanup
ManualEnd --> [*]: Session Handoff
note right of AutoStart
Automatic Features:
- UV sync
- Project analysis
- Setup .claude/
- Create shortcuts
end note
note right of AutoEnd
Crash Resilient:
- Any disconnect
- Network failure
- System crash
All handled gracefully
end note
```
### Git Repository Auto-Management Flow
```mermaid
flowchart TD
Start([Claude Code Connects]) --> Detect{Git Repo?}
Detect -->|Yes| AutoInit[Auto-Initialize]
Detect -->|No| Manual{User runs /start?}
AutoInit --> Setup[Session Setup]
Setup --> UV[UV Sync]
UV --> Analysis[Project Analysis]
Analysis --> CreateDir[Create .claude/]
CreateDir --> Shortcuts[Create Shortcuts]
Shortcuts --> Ready([Session Ready])
Manual -->|Yes| ManualInit[/start Command]
Manual -->|No| Idle([No Session])
ManualInit --> Ready
Ready --> Work[Development Work]
Work --> Checkpoint{Mid-session?}
Checkpoint -->|Yes| Compact[Auto-Compact Context]
Checkpoint -->|No| Continue{Continue?}
Compact --> Work
Continue -->|Yes| Work
Continue -->|No| End
Work --> End{Disconnect?}
End -->|Yes| AutoCleanup[Auto Cleanup]
End -->|No| Work
AutoCleanup --> Handoff[Create Handoff Doc]
Handoff --> Complete([Session Complete])
Idle --> Manual
Manual --> Complete
style AutoInit fill:#c8e6c9
style AutoCleanup fill:#c8e6c9
style ManualInit fill:#fff9c4
style Ready fill:#b2dfdb
style Complete fill:#ffccbc
```
## Available MCP Tools
This server provides **85+ specialized tools** organized into 12 functional categories.
For a complete list of tools, see the [MCP Tools Reference](docs/user/MCP_TOOLS_REFERENCE.md).
### 🎉 Phase 4 Analytics Tools (NEW!)
**Real-Time Monitoring:**
- `get_real_time_metrics` - Get top skills by usage with live dashboard data
- `detect_anomalies` - Detect performance anomalies using Z-score analysis
**Advanced Analytics:**
- `get_skill_trend` - Analyze skill effectiveness trends over time
- `get_collaborative_recommendations` - Get personalized recommendations from similar users
- `get_community_baselines` - Compare user performance vs global community
- `get_skill_dependencies` - Explore skill co-occurrence patterns
### 🧠 Intelligence Tools (What Makes Session Buddy Unique)
**Insights Management:**
- `search_insights` - Search captured insights by topic or query with semantic matching
- `insights_statistics` - View statistics about captured insights (types, topics, confidence scores)
- Wildcard search with `*` to view all captured insights
**Multi-Project Coordination:**
- `create_project_group` - Create groups of related projects for coordinated development
- `add_project_dependency` - Track relationships between projects (uses, extends, references)
- `search_across_projects` - Search across all projects with dependency-aware ranking
- `get_project_insights` - Get cross-project insights and collaboration opportunities
**Team Collaboration:**
- `create_team` - Create teams for knowledge sharing
- `search_team_knowledge` - Search across team reflections with access control
- `get_team_statistics` - View team activity and contribution metrics
- `vote_on_reflection` - Upvote/downvote team reflections for quality filtering
______________________________________________________________________
### Core Session Management
- `start` - Comprehensive session initialization with project analysis and memory setup
- `checkpoint` - Mid-session quality assessment with workflow analysis
- `end` - Complete session cleanup with learning capture
- `status` - Current session overview with health checks
### Memory & Conversation Search
- `store_reflection` - Store insights with tagging and embeddings
- `quick_search` - Fast overview search with count and top results
- `search_summary` - Aggregated insights without individual result details
- `get_more_results` - Pagination support for large result sets
- `search_by_file` - Find conversations tied to a specific file
- `search_by_concept` - Semantic search by concept with optional file context
### Knowledge Graph (DuckPGQ)
- Entity and relationship management for project knowledge
- SQL/PGQ graph queries for complex relationship analysis
- See [Oneiric Migration Guide](docs/migrations/ONEIRIC_MIGRATION_PLAN.md)
All tools use **local processing** for privacy, with **DuckDB vector storage** (FLOAT[384] embeddings) and **ONNX-based semantic search** requiring no external API calls.
## 🚀 Integration with Crackerjack
Session Buddy includes deep integration with [Crackerjack](https://github.com/lesleslie/crackerjack), the AI-driven Python development platform:
**Key Features:**
- **📊 Quality Metrics Tracking**: Automatically captures and tracks quality scores over time
- **🧪 Test Result Monitoring**: Learns from test patterns, failures, and successful fixes
- **🔍 Error Pattern Recognition**: Remembers how specific errors were resolved and suggests solutions
**Example Workflow:**
1. 🚀 **Session Buddy `start`** - Sets up your session with accumulated context from previous work
1. 🔧 **Crackerjack runs** quality checks and applies AI agent fixes to resolve issues
1. 💾 **Session Buddy captures** successful patterns and error resolutions
1. 🧠 **Next session starts** with all accumulated knowledge
For detailed information on Crackerjack integration, see [Crackerjack Integration Guide](docs/CRACKERJACK.md).
## Installation
### From Source
```bash
# Clone the repository
git clone https://github.com/lesleslie/session-buddy.git
cd session-buddy
# Install with all dependencies (development + testing)
uv sync --group dev
# Or install minimal production dependencies only
uv sync
# Or use pip (for production only)
pip install session-buddy
```
### MCP Configuration
Add to your project's `.mcp.json` file:
```json
{
"mcpServers": {
"session-buddy": {
"command": "python",
"args": ["-m", "session_buddy.server"],
"cwd": "/path/to/session-buddy",
"env": {
"PYTHONPATH": "/path/to/session-buddy"
}
}
}
}
```
### Alternative: Use Script Entry Point
If installed with pip/uv, you can use the script entry point:
```json
{
"mcpServers": {
"session-buddy": {
"command": "session-buddy",
"args": [],
"env": {}
}
}
}
```
**Dependencies:** Requires Python 3.13+. For a complete list of dependencies, see [pyproject.toml](pyproject.toml).
Recent changes include upgrading FastAPI to the 0.127+ series for improved compatibility and removing `sitecustomize.py` for faster, more reliable startup.
### 🧠 Setting Up Semantic Search (Optional)
Session Buddy includes semantic search capabilities using local AI embeddings with **no external API dependencies**.
**Current Status:**
- ✅ **Text Search**: Works out of the box (fast, keyword-based)
- ✅ **Semantic Search**: Works with ONNX model (no PyTorch required!)
**For Text Search (Default):**
No additional setup needed! The system uses full-text search with FTS5 for fast, accurate results.
**For Semantic Search (Optional):**
The system uses pre-converted ONNX models for efficient semantic search without requiring PyTorch:
```bash
# Download the pre-converted ONNX model (one-time setup)
python scripts/download_embedding_model.py
```
This downloads the **Xenova/all-MiniLM-L6-v2** model (~100MB) which includes:
- Pre-converted ONNX model (no PyTorch needed!)
- 384-dimensional embeddings for semantic similarity
- Fast CPU inference with ONNX Runtime
**Note**: Text search is highly effective and recommended for most use cases. Semantic search provides enhanced conceptual matching by understanding meaning beyond keywords.
## Usage
Once configured, the following slash commands become available in Claude Code:
**Primary Session Commands:**
- `/session-buddy:start` - Full session initialization
- `/session-buddy:checkpoint` - Quality monitoring checkpoint with scoring
- `/session-buddy:end` - Complete session cleanup with learning capture
- `/session-buddy:status` - Current status overview with health checks
**Auto-Generated Shortcuts:**
After running `/session-buddy:start` once, these shortcuts are automatically created:
- `/start` → `/session-buddy:start`
- `/checkpoint [name]` → `/session-buddy:checkpoint`
- `/end` → `/session-buddy:end`
> These shortcuts are created in `~/.claude/commands/` and work across all projects
**Memory & Search Commands:**
- `/session-buddy:quick_search` - Fast search with overview results
- `/session-buddy:search_summary` - Aggregated insights without full result lists
- `/session-buddy:get_more_results` - Paginate search results
- `/session-buddy:search_by_file` - Find results tied to a specific file
- `/session-buddy:search_by_concept` - Semantic search by concept
- `/session-buddy:search_code` - Search code-related conversations
- `/session-buddy:search_errors` - Search error and failure discussions
- `/session-buddy:search_temporal` - Search using time expressions
- `/session-buddy:store_reflection` - Store important insights with tagging
- `/session-buddy:reflection_stats` - Stats about the reflection database
For running the server directly in development mode:
```bash
python -m session_buddy.server
# or
session-buddy
```
## Memory System
**Built-in Conversation Memory:**
- **Local Storage**: DuckDB database at `~/.claude/data/reflection.duckdb`
- **Embeddings**: Local ONNX models for semantic search (no external API needed)
- **Privacy**: Everything runs locally with no external dependencies
- **Cross-Project**: Conversations tagged by project context for organized retrieval
**Search Capabilities:**
- **Semantic Search**: Vector similarity matching with customizable thresholds
- **Time Decay**: Recent conversations prioritized in results
- **Filtering**: Search by project context or across all projects
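Time-decay ranking can be sketched as follows. The actual decay function Session Buddy uses is not documented here; this assumes a simple exponential half-life as an illustration, and the function name is hypothetical.

```python
def decayed_score(similarity: float, age_days: float, half_life_days: float = 30.0) -> float:
    """Combine vector similarity with recency: score halves every `half_life_days`."""
    decay = 0.5 ** (age_days / half_life_days)
    return similarity * decay

# A slightly less similar but recent result can outrank an old near-exact match.
old = decayed_score(0.95, age_days=90)  # heavily decayed
new = decayed_score(0.80, age_days=0)   # no decay
print(new > old)  # True
```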
## Data Storage
This server manages its data locally in the user's home directory:
- **Memory Storage**: `~/.claude/data/reflection.duckdb`
- **Session Logs**: `~/.claude/logs/`
- **Configuration**: Uses pyproject.toml and environment variables
## Recommended Session Workflow
1. **Initialize Session**: `/session-buddy:start` - Sets up project context, dependencies, and memory system
1. **Monitor Progress**: `/session-buddy:checkpoint` (every 30-45 minutes) - Quality scoring and optimization
1. **Search Past Work**: `/session-buddy:quick_search` or `/session-buddy:search_summary` - Find relevant past conversations and solutions
1. **Store Important Insights**: `/session-buddy:store_reflection` - Capture key learnings for future sessions
1. **End Session**: `/session-buddy:end` - Final assessment, learning capture, and cleanup
## Benefits
### 🧠 Intelligence & Knowledge Sharing (Unique to Session Buddy)
- **Automatic Insights Capture**: Extracts educational insights from conversations without manual effort
- **Semantic Pattern Discovery**: Find related insights across sessions using vector embeddings
- **Cross-Project Learning**: Share knowledge between related projects automatically
- **Dependency Awareness**: Understand how solutions propagate across your codebase
- **Team Knowledge Base**: Collaborative filtering and voting for best practices
- **No Hallucination**: Rule-based extraction ensures only high-quality insights are captured
### Comprehensive Coverage
- **Session Quality**: Real-time monitoring and optimization
- **Memory Persistence**: Cross-session conversation retention
- **Project Structure**: Context-aware development workflows
### Reduced Friction
- **Single Command Setup**: One `/session-buddy:start` sets up everything
- **Local Dependencies**: No external API calls or services required
- **Intelligent Permissions**: Reduces repeated permission prompts
- **Automated Workflows**: Structured processes for common tasks
### Enhanced Productivity
- **Quality Scoring**: Guides session effectiveness
- **Built-in Memory**: Enables building on past work automatically
- **Project Templates**: Accelerates development setup
- **Knowledge Persistence**: Maintains context across sessions
## Documentation
Complete documentation is available in the `docs/` directory:
### 🎉 Phase 4 Documentation (NEW!)
- **[V4 Migration Guide](docs/migrations/V3_TO_V4_MIGRATION_GUIDE.md)** ⭐ **Start Here for Phase 4**
- Complete V3→V4 migration instructions
- Zero-breaking-changes deployment
- Rollback procedures
- Performance considerations
- **[V4 Schema Summary](PHASE4_V4_SCHEMA_SUMMARY.md)** - Complete V4 database schema reference
- 11 new tables, 6 new views
- Query examples and usage patterns
- **[Phase 4 Final Status](PHASE4_FINAL_STATUS_REPORT.md)** - Implementation complete (100% ✅)
- 32 files created, ~12,000 lines of code
- Production-ready status
- **[Phase 4 Deployment Checklist](PHASE4_DEPLOYMENT_CHECKLIST.md)** - Complete deployment validation
- Pre-flight checks
- Migration steps
- Post-deployment validation
- Rollback testing
### 🧠 Intelligence Features (What Makes Session Buddy Unique)
- **[Intelligence Features Quick Start](docs/features/INTELLIGENCE_QUICK_START.md)** ⭐ **Start Here** - 5-minute practical guide
- Automatic insights capture (how to use `★ Insight ─────` delimiters)
- Cross-project intelligence (group related projects)
- Team collaboration (shared knowledge with voting)
- Advanced search techniques (semantic, faceted, temporal)
- Configuration and troubleshooting
- **[Insights Capture & Deduplication](docs/features/INSIGHTS_CAPTURE.md)** ⭐ **Deep Dive**
- Automatic extraction of educational insights from conversations
- Multi-point capture strategy (checkpoint + session end)
- SHA-256 deduplication to prevent duplicate insights
- Semantic search with wildcard support
- Complete test coverage (62/62 tests passing)
- Architecture and implementation details
### User Documentation
- **User Documentation** - Quick start, configuration, and deployment guides
- [Quick Start Guide](docs/user/QUICK_START.md) - Get started in 5 minutes
- [Configuration Guide](docs/user/CONFIGURATION.md) - Advanced configuration options
- [MCP Tools Reference](docs/user/MCP_TOOLS_REFERENCE.md) - Complete tool documentation
### Developer Documentation
- **Developer Documentation** - Architecture, testing, and integration guides
- [Oneiric Migration Guide](docs/migrations/ONEIRIC_MIGRATION_PLAN.md) - Database migration
- [Architecture Overview](docs/developer/ARCHITECTURE.md) - System design and patterns
### Feature Guides
- **Feature Guides** - In-depth documentation of specific features
- [Token Optimization](docs/features/TOKEN_OPTIMIZATION.md) - Context window management
- [Selective Auto-Store](docs/features/SELECTIVE_AUTO_STORE.md) - Reflection storage policy
- [Auto Lifecycle](docs/features/AUTO_LIFECYCLE.md) - Automatic session management
### Reference
- **Reference** - MCP schemas and command references
## Troubleshooting
**Common Issues:**
- **Memory/embedding issues**: Ensure all dependencies are installed with `uv sync`
- **Path errors**: Verify `cwd` and `PYTHONPATH` are set correctly in `.mcp.json`
- **Permission issues**: Remove `~/.claude/sessions/trusted_permissions.json` to reset trusted operations
**Debug Mode:**
```bash
# Run with verbose logging
PYTHONPATH=/path/to/session-buddy python -m session_buddy.server --debug
```
For more detailed troubleshooting guidance, see [Configuration Guide](docs/user/CONFIGURATION.md) or [Quick Start Guide](docs/user/QUICK_START.md).
| text/markdown | null | Les Leslie <les@wedgwoodwebworks.com> | null | null | BSD-3-CLAUSE | null | [
"Operating System :: POSIX",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"aiofiles>=25.1.0",
"aiohttp>=3.13.3",
"crackerjack>=0.52.0",
"duckdb>=1.4.4",
"fastmcp>=2.14.5",
"hatchling>=1.28.0",
"numpy>=2.4.2",
"onnxruntime<1.24,>=1.23.2",
"prometheus-client>=0.24.1",
"psutil>=7.2.2",
"pydantic>=2.12.5",
"rich>=14.3.2",
"scikit-learn>=1.6.0",
"scipy>=1.15.0",
"structlog>=25.5.0",
"tiktoken>=0.12.0",
"transformers>=5.1.0",
"typer>=0.21.1",
"websockets>=15.0"
] | [] | [] | [] | [
"documentation, https://github.com/lesleslie/session-buddy",
"homepage, https://github.com/lesleslie/session-buddy",
"repository, https://github.com/lesleslie/session-buddy"
] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T18:52:09.164066 | session_buddy-0.14.3.tar.gz | 6,739,919 | 38/88/1f00b0dade0f59c0ad750076f97a261741d9efb13836be7cfd575f6dbcc1/session_buddy-0.14.3.tar.gz | source | sdist | null | false | c9cec481062f5479caff1f50618e97f1 | 05d9512bff3be54900b58aa88a65d869e67de85c25307399b822a42eaf722559 | 38881f00b0dade0f59c0ad750076f97a261741d9efb13836be7cfd575f6dbcc1 | null | [
"LICENSE"
] | 160 |
2.4 | agent-papers-cli | 0.1.0 | CLI tools for reading academic papers and searching the web, academic literature, and biomedical databases | # agent-papers-cli
Read academic papers, search the literature, and run multi-step research workflows — all from the terminal.
**agent-papers-cli** gives your AI agents (and you) the ability to read academic papers, search Google, Google Scholar, Semantic Scholar, PubMed, and browse webpages — all from the command line. Two CLI tools work together: `paper` for reading and navigating PDFs, and `paper-search` for querying search engines and academic databases.
Designed as building blocks for agentic research workflows, these tools let agents autonomously discover papers, read them in depth, follow citation graphs, and verify claims. The repo includes four [Claude Code skills](#agent-skills) that orchestrate multi-step research tasks like deep-dive investigations, systematic literature reviews, and fact-checking.
- **`paper`** — read, skim, and search PDFs. Inspired by [agent-browser](https://github.com/vercel-labs/agent-browser) — but for PDFs.
- **`paper-search`** — search Google, Google Scholar, Semantic Scholar, PubMed, and extract webpage content. Based on the search APIs from [dr-tulu](https://github.com/rlresearch/dr-tulu).
## Install
```bash
uv pip install -e .
# Optional: enable figure/table/equation detection (requires ~40MB for model)
uv pip install -e ".[layout]"
```
Requires Python 3.10+.
## `paper` commands
```bash
paper outline <ref> # Show heading tree
paper read <ref> [section] # Read full paper or specific section
paper skim <ref> --lines N --level L # Headings + first N sentences
paper search <ref> "query" # Keyword search with context
paper info <ref> # Show metadata
paper goto <ref> <ref_id> # Jump to a section, link, or citation
# Layout detection (requires `pip install paper-cli[layout]`)
paper detect <ref> # Run figure/table/equation detection
paper figures <ref> # List detected figures with captions
paper tables <ref> # List detected tables
paper equations <ref> # List detected equations
paper goto <ref> f1 # Jump to figure 1
paper goto <ref> t2 # Jump to table 2
paper goto <ref> eq3 # Jump to equation 3
# Highlights
paper highlight search <ref> "query" # Search PDF for text (with coordinates)
paper highlight add <ref> "query" # Find text and persist a highlight
paper highlight list <ref> # List stored highlights
paper highlight remove <ref> <id> # Remove a highlight by ID
```
`<ref>` accepts: `2302.13971`, `arxiv.org/abs/2302.13971`, `arxiv.org/pdf/2302.13971`, or a **local PDF path** like `./paper.pdf`
Arxiv papers are downloaded once and cached in `~/.papers/`. Local PDFs are read directly from disk — each gets a unique cache directory based on its absolute path (`{stem}-{hash8}`), so two different `paper.pdf` files in different directories won't collide. If you modify a local PDF after it's been parsed, the stale cache is automatically detected and re-parsed.
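The `{stem}-{hash8}` cache naming can be sketched like this. The README does not specify which hash is used, so this sketch assumes the first 8 hex characters of a SHA-256 over the absolute path; the actual implementation may differ.

```python
import hashlib
from pathlib import Path

def cache_dir_name(pdf_path: str) -> str:
    """Derive a collision-free cache directory name for a local PDF.

    Assumption: hash8 = first 8 hex chars of SHA-256 over the absolute path.
    """
    path = Path(pdf_path).resolve()
    hash8 = hashlib.sha256(str(path).encode()).hexdigest()[:8]
    return f"{path.stem}-{hash8}"

# Two files both named paper.pdf in different directories get distinct dirs.
a = cache_dir_name("/projects/llama/paper.pdf")
b = cache_dir_name("/projects/deepseek/paper.pdf")
print(a != b and a.startswith("paper-"))  # True
```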
## `paper-search` commands
```bash
# Environment / API keys
paper-search env # Show API key status
paper-search env set KEY value # Save a key to ~/.papers/.env
# Google (requires SERPER_API_KEY)
paper-search google web "query" # Web search
paper-search google scholar "query" # Google Scholar search
# Semantic Scholar (S2_API_KEY optional, recommended for rate limits)
paper-search semanticscholar papers "query" # Paper search
[--year 2023-2025] [--min-citations 10] [--venue ACL] [--sort citationCount:desc] [--limit N]
paper-search semanticscholar snippets "query" # Text snippet search
[--year 2024] [--paper-ids id1,id2]
paper-search semanticscholar citations <id> # Papers citing this one
paper-search semanticscholar references <id> # Papers this one references
paper-search semanticscholar details <id> # Full paper metadata
# PubMed (no key needed)
paper-search pubmed "query" [--limit N] [--offset N]
# Browse (requires JINA_API_KEY for jina backend)
paper-search browse <url> [--backend jina|serper] [--timeout 30]
```
### API keys
Set API keys persistently with `paper-search env set`:
```bash
paper-search env set SERPER_API_KEY sk-... # required for google, browse --backend serper
paper-search env set S2_API_KEY ... # optional, higher Semantic Scholar rate limits
paper-search env set JINA_API_KEY ... # required for browse --backend jina
```
Keys are saved to `~/.papers/.env` and loaded automatically. Shell environment variables take precedence. Run `paper-search env` to check what's configured.
### Jump links
All output is annotated with `[ref=...]` markers that agents (or humans) can follow up on:
- `[ref=s3]` — section (jump to section 3)
- `[ref=f1]` — figure (show bounding box and caption)
- `[ref=t2]` — table (show bounding box and caption)
- `[ref=eq1]` — equation (show bounding box)
- `[ref=e1]` — external link (show URL and context)
- `[ref=c5]` — citation (look up reference [5] in the bibliography)
Use `paper goto <ref> <ref_id>` to follow any marker. A summary footer shows the available ref ranges:
```
Refs: s1..s12 (sections) · f1..f5 (figures) · t1..t3 (tables) · eq1..eq8 (equations) · e1..e8 (links) · c1..c24 (citations)
Use: paper goto 2302.13971 <ref>
```
Add `--no-refs` to any command to hide the annotations.
## Examples
### Browse a paper's structure
```bash
paper outline 2302.13971
```
```
╭──────────────────────────────────────────────────────╮
│ LLaMA: Open and Efficient Foundation Language Models │
│ arxiv.org/abs/2302.13971 │
╰──────────────────────────────────────────────────────╯
Outline
├── Abstract [ref=s1]
├── Introduction [ref=s2]
├── Approach [ref=s3]
│ ├── Pre-training Data [ref=s4]
│ ├── Architecture [ref=s5]
│ ├── Optimizer [ref=s6]
│ └── Efficient implementation [ref=s7]
├── Main results [ref=s8]
...
Refs: s1..s12 (sections) · e1..e8 (links) · c1..c24 (citations)
Use: paper goto 2302.13971 <ref>
```
### Jump to a section, link, or citation
```bash
paper goto 2302.13971 s3 # read the "Approach" section
paper goto 2302.13971 e1 # show URL and context for the first external link
paper goto 2302.13971 c5 # look up citation [5] in the bibliography
```
### Read a specific section
```bash
paper read 2302.13971 "abstract"
```
### Skim headings with first N sentences
```bash
paper skim 2302.13971 --lines 2
paper skim 2302.13971 --lines 1 --level 1 # top-level headings only
```
### Search for keywords
```bash
paper search 2302.13971 "transformer"
```
```
Match 1 in Architecture [ref=s5] (p.3)
our network is based on the transformer architec-
ture (Vaswani et al., 2017). We leverage various
```
### Hide ref annotations
```bash
paper outline 2302.13971 --no-refs
```
### Highlight text in a paper
```bash
# Search for text (shows matches with page numbers and coordinates)
paper highlight search 2501.12948 "reinforcement learning"
# Add a highlight (single match is saved directly)
paper highlight add 2501.12948 "reinforcement learning" --color green --note "key concept"
# Multiple matches? Shows a numbered list — use --pick to select
paper highlight add 2501.12948 "model" --pick 3
# Or use --interactive for a prompt, --range to paginate
paper highlight add 2501.12948 "model" --interactive
paper highlight add 2501.12948 "model" --range 21:40
# Output app-compatible JSON (ScaledPosition format, 0-1 normalized)
paper highlight add 2501.12948 "reinforcement learning" --return-json
# List and manage highlights
paper highlight list 2501.12948
paper highlight remove 2501.12948 1
```
Highlights are stored in `~/.papers/<id>/highlights.json` and optionally annotated onto `paper_annotated.pdf`.
### Detect figures, tables, and equations
Requires `pip install paper-cli[layout]` (installs `doclayout_yolo`). Model weights (~40MB) are downloaded automatically on first use to `~/.papers/.models/`.
```bash
# Run detection (results cached in ~/.papers/<id>/layout.json)
paper detect 2302.13971
# List detected elements
paper figures 2302.13971
paper tables 2302.13971
paper equations 2302.13971
# Jump to a specific element
paper goto 2302.13971 f1 # figure 1
paper goto 2302.13971 t2 # table 2
paper goto 2302.13971 eq3 # equation 3
```
Detection uses [DocLayout-YOLO](https://github.com/opendatalab/DocLayout-YOLO) trained on DocStructBench (10 categories including figures, tables, and formulas). Model weights are from our [pinned fork](https://huggingface.co/collab-dr/DocLayout-YOLO-DocStructBench). Supports CUDA, MPS (Apple Silicon), and CPU. Running `paper figures` etc. triggers detection lazily on first use — subsequent calls use the cached result. Each detected element is cropped as a PNG screenshot to `~/.papers/<id>/layout/` (e.g., `f1.png`, `t2.png`, `eq3.png`).
## Architecture
```
src/paper/ # paper CLI
├── cli.py # Click CLI — all commands defined here
├── fetcher.py # Downloads PDFs from arxiv, manages cache
├── highlighter.py # PDF text search, coordinate conversion, highlight CRUD, PDF annotation
├── layout.py # Figure/table/equation detection via DocLayout-YOLO (optional)
├── models.py # Data models: Document, Section, Sentence, Span, Box, Metadata, Link, LayoutElement, Highlight
├── parser.py # PDF → Document: text extraction, heading detection, sentence splitting
├── renderer.py # Rich terminal output, ref registry, goto rendering
└── storage.py # ~/.papers/ cache directory management
src/search/ # paper-search CLI
├── cli.py # Click CLI — all commands and subgroups
├── config.py # API key loading (dotenv), persistent storage, env status
├── models.py # Data models: SearchResult, SnippetResult, CitationResult, BrowseResult
├── renderer.py # Rich terminal output with reference IDs and suggestive prompts
└── backends/
├── google.py # Serper API (web + scholar)
├── semanticscholar.py # S2 API (papers, snippets, citations, references, details)
├── pubmed.py # NCBI E-utilities (esearch + efetch)
└── browse.py # Webpage content extraction (Jina Reader, Serper scrape)
```
### How it works
1. **Fetch**: Downloads the PDF from arxiv (and caches in `~/.papers/<id>/`) or reads a local PDF directly
2. **Parse**: Extracts text with [PyMuPDF](https://pymupdf.readthedocs.io/), detects headings via PDF outline or font-size heuristics, splits sentences with [PySBD](https://github.com/nipunsadvilkar/pySBD), extracts links and citations
3. **Detect** (optional, lazy): Detects figures, tables, and equations using [DocLayout-YOLO](https://github.com/opendatalab/DocLayout-YOLO) pre-trained on [DocStructBench](https://github.com/opendatalab/DocLayout-YOLO). Renders pages to images, runs YOLO detection, maps bounding boxes back to PDF coordinates. Supports MPS (Apple Metal), CUDA, and CPU. Runs on first `paper figures`/`tables`/`equations` call and is cached.
4. **Display**: Renders structured output with [Rich](https://rich.readthedocs.io/), annotates with `[ref=...]` jump links
The parsed structure is cached as JSON so subsequent commands are instant. Layout detection results are cached separately in `layout.json`.
### Data model
Simplified flat-layer approach inspired by [papermage](https://github.com/allenai/papermage):
- **Document** has a `raw_text` string, list of `Section`s, `Link`s, and `LayoutElement`s
- Each **Section** has a heading, level, content, and list of `Sentence`s
- Each **Link** has a kind (`external`/`internal`/`citation`), anchor text, URL, and page
- **LayoutElement** stores a detected figure, table, or equation with bounding `Box`, confidence, caption, label, and `image_path` (cropped PNG)
- **Highlight** stores persisted highlights with page, bounding rects (absolute PDF coords), color, and note
- **Span** objects store character offsets into `raw_text`, enabling text-to-PDF coordinate mapping
- Everything serializes to JSON for caching
### PDF heading detection
Two strategies, tried in order:
1. **PDF outline** — if the PDF has a built-in table of contents, use it directly (most reliable)
2. **Font-size heuristic** — detect body text size (most common), treat larger/bold text as headings, merge section number fragments ("1" + "Introduction" → "1 Introduction"), filter false positives (author names, table data, captions)
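The font-size heuristic can be sketched over simplified text spans. The real parser works on PyMuPDF extraction output and also merges fragments and filters false positives; the `(text, font_size, bold)` tuples here are a stand-in for that richer data.

```python
from collections import Counter

def detect_headings(spans):
    """Given (text, font_size, bold) spans, flag likely headings.

    Body size = the most common (rounded) font size; text that is larger,
    or bold at body size, is treated as a heading candidate.
    """
    body_size = Counter(round(s[1]) for s in spans).most_common(1)[0][0]
    headings = []
    for text, size, bold in spans:
        if round(size) > body_size or (bold and round(size) >= body_size):
            headings.append(text)
    return headings

spans = [
    ("1 Introduction", 12.0, True),
    ("Large language models trained on...", 10.0, False),
    ("massive corpora of texts have shown...", 10.0, False),
    ("2 Approach", 12.0, True),
    ("Our training approach is similar to...", 10.0, False),
]
print(detect_headings(spans))  # ['1 Introduction', '2 Approach']
```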
## Development
### Run tests
```bash
uv pip install -e ".[dev]"
pytest
```
### Test papers
These papers cover different PDF structures and parsing paths:
| Paper | ID | Parsing path | Notes |
|-------|-----|---|-------|
| LLaMA (Touvron et al.) | `2302.13971` | Font-size heuristic | No built-in ToC, standard two-column arxiv format |
| Gradient ↔ Adapters (Torroba-Hennigen et al.) | `2502.13811` | PDF outline | 20 TOC entries, caught outline offset bug |
| Words Like Knives (Shen et al.) | `2505.21451` | Font-size heuristic | Tricky formatting: author names at heading size, multi-line headings, small-caps |
| Completion ≠ Collaboration (Shen et al.) | `2510.25744` | PDF outline | Proper hierarchy |
| DeepSeek-R1 | `2501.12948` | PDF outline | Very long paper (86 pages), stress test |
See `tests/README.md` for detailed notes on why each paper is included and known edge cases.
```bash
# Quick smoke test across formats
paper outline 2302.13971 # font-size heuristic path
paper outline 2502.13811 # PDF outline path
paper skim 2302.13971 --lines 1
paper search 2302.13971 "transformer"
paper goto 2302.13971 s2 # jump to section
paper goto 2502.13811 e1 # jump to external link
paper outline 2302.13971 --no-refs # clean output without refs
paper highlight search 2501.12948 "reinforcement learning" # highlight search
# Layout detection (requires paper-cli[layout])
paper detect 2302.13971 # run figure/table/equation detection
paper figures 2302.13971 # list detected figures
paper goto 2302.13971 f1 # jump to figure 1
```
### Environment variables
| Variable | Default | Description |
|----------|---------|-------------|
| `PAPER_DOWNLOAD_TIMEOUT` | `120` | Download timeout in seconds |
| `SERPER_API_KEY` | — | Google search and scraping via Serper.dev |
| `S2_API_KEY` | — | Semantic Scholar API (optional, increases rate limits) |
| `JINA_API_KEY` | — | Jina Reader for webpage content extraction |
Search API keys can be set via shell env, `.env` in the working directory, or persistently via `paper-search env set`.
## Agent Skills
This repo includes [Claude Code skills](https://agentskills.io) for agent-driven research workflows. Install them into any project with:
```bash
npx skills add collaborative-deep-research/agent-papers-cli
```
Or see [SKILLS.md](SKILLS.md) for manual setup and details.
| Skill | Command | Description |
|-------|---------|-------------|
| Research Coordinator | `/research-coordinator` | Analyzes the request and dispatches to the right workflow |
| Deep Research | `/deep-research` | Broad-to-deep investigation of a topic |
| Literature Review | `/literature-review` | Systematic survey of academic literature |
| Fact Check | `/fact-check` | Verify claims against web and academic sources |
## Known limitations
- Heading detection is heuristic-based (font size + bold) — it works well on standard arxiv formats but is fragile on unusual templates
- PDFs with a built-in outline/ToC get better results (read directly when available)
- Section hierarchy (nesting) is approximate
- Citation detection: numeric citations (`[1]`, `[2, 3]`, `[1-5]`) are detected via regex; author-year citations (`(Kingma & Ba, 2015)`) are detected when hyperlinked in the PDF (via `LINK_NAMED` destinations, common in LaTeX). Non-hyperlinked author-year citations are not detected.
- When a TOC heading doesn't match any line on its page (e.g., TOC says "Proof of thm:foo" but PDF says "A.2. Proof of Thm. 1"), the section may include some extra content from the page header
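The numeric citation forms above can be matched with a simple pattern; this regex is an illustrative sketch (the project's actual pattern is not shown in this README) and, like the real one, it does not cover author-year citations.

```python
import re

# Matches numeric citation groups like [1], [2, 3], and [1-5].
CITATION_RE = re.compile(r"\[(\d+(?:\s*[,-]\s*\d+)*)\]")

text = "Prior work [1] and follow-ups [2, 3] extend the baseline [1-5]."
print(CITATION_RE.findall(text))  # ['1', '2, 3', '1-5']
```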
## Future plans
- [GROBID](https://github.com/kermitt2/grobid) backend for ML-based section detection
- Named citation detection for non-hyperlinked author-year citations (hyperlinked ones already work via `LINK_NAMED`)
- Richer document model inspired by [papermage](https://github.com/allenai/papermage)
- Equation recognition to LaTeX (via [UniMERNet](https://github.com/opendatalab/UniMERNet) or [LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR))
- Table structure recognition (cell-level extraction)
- Fine-tuning DocLayout-YOLO on custom academic paper datasets
| text/markdown | collaborative-deep-research | Shannon Shen <shannonshen49@gmail.com> | null | null | Apache-2.0 | academic, arxiv, cli, papers, pdf, pubmed, search, semantic-scholar | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering",
"Topic :: Text Processing"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"httpx>=0.27",
"pymupdf>=1.24",
"pysbd>=0.3",
"python-dotenv>=1.0",
"rich>=13.0",
"tenacity>=8.0",
"pytest>=8.0; extra == \"dev\"",
"doclayout-yolo>=0.0.4; extra == \"layout\"",
"huggingface-hub>=0.20; extra == \"layout\""
] | [] | [] | [] | [
"Homepage, https://github.com/collaborative-deep-research/agent-papers-cli",
"Repository, https://github.com/collaborative-deep-research/agent-papers-cli",
"Issues, https://github.com/collaborative-deep-research/agent-papers-cli/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:51:54.894852 | agent_papers_cli-0.1.0.tar.gz | 233,562 | 3a/fe/dd86aea482df3ae8245b44cf847b86e98e8e741bd35ed06a887af906add6/agent_papers_cli-0.1.0.tar.gz | source | sdist | null | false | 9da76b41dae698562f7a60865ee81689 | 05c16391ad453db78b5e74324aaac236e25b80f60af39b1957e9a613304c4fcb | 3afedd86aea482df3ae8245b44cf847b86e98e8e741bd35ed06a887af906add6 | null | [
"LICENSE"
] | 184 |
2.4 | skypilot-nightly | 1.0.0.dev20260220 | SkyPilot: Run AI on Any Infra — Unified, Faster, Cheaper. | <p align="center">
<img alt="SkyPilot" src="https://raw.githubusercontent.com/skypilot-org/skypilot/master/docs/source/images/skypilot-wide-light-1k.png" width=55%>
</p>
<p align="center">
<a href="https://docs.skypilot.co/">
<img alt="Documentation" src="https://img.shields.io/badge/docs-gray?logo=readthedocs&logoColor=f5f5f5">
</a>
<a href="https://github.com/skypilot-org/skypilot/releases">
<img alt="GitHub Release" src="https://img.shields.io/github/release/skypilot-org/skypilot.svg">
</a>
<a href="http://slack.skypilot.co">
<img alt="Join Slack" src="https://img.shields.io/badge/SkyPilot-Join%20Slack-blue?logo=slack">
</a>
<a href="https://github.com/skypilot-org/skypilot/releases">
<img alt="Downloads" src="https://img.shields.io/pypi/dm/skypilot">
</a>
</p>
<h3 align="center">
Run AI on Any Infrastructure
</h3>
<div align="center">
#### [🌟 **SkyPilot Demo** 🌟: Click to see a 1-minute tour](https://demo.skypilot.co/dashboard/)
</div>
SkyPilot is a system to run, manage, and scale AI workloads on any AI infrastructure.
SkyPilot gives **AI teams** a simple interface to run jobs on any infra.
**Infra teams** get a unified control plane to manage any AI compute — with advanced scheduling, scaling, and orchestration.
<img src="./docs/source/images/skypilot-abstractions-long-2.png" alt="SkyPilot Abstractions">
-----
:fire: *News* :fire:
- [Dec 2025] **SkyPilot v0.11** released: Multi-Cloud Pools, Fast Managed Jobs, Enterprise-Readiness at Large Scale, Programmability. [**Release notes**](https://github.com/skypilot-org/skypilot/releases/tag/v0.11.0)
- [Dec 2025] **SkyPilot Pools** released: Run batch inference and other jobs on a managed pool of warm workers (across clouds or clusters). [**blog**](https://blog.skypilot.co/skypilot-pools-deepseek-ocr/), [**docs**](https://docs.skypilot.co/en/latest/examples/pools.html)
- [Dec 2025] Train **an agent to use Google Search** as a tool with RL on your Kubernetes or clouds: [**blog**](https://blog.skypilot.co/verl-tool-calling/), [**example**](./llm/verl/)
- [Nov 2025] Serve **Kimi K2 Thinking** with reasoning capabilities on your Kubernetes or clouds: [**example**](./llm/kimi-k2-thinking/)
- [Oct 2025] Run **RL training for LLMs** with SkyRL on your Kubernetes or clouds: [**example**](./llm/skyrl/)
- [Oct 2025] Train and serve [Andrej Karpathy's](https://x.com/karpathy/status/1977755427569111362) **nanochat** - the best ChatGPT that $100 can buy: [**example**](./llm/nanochat)
- [Oct 2025] Run large-scale **LLM training with TorchTitan** on any AI infra: [**example**](./examples/training/torchtitan)
- [Sep 2025] Scaling AI infrastructure at Abridge - **10x faster development** with SkyPilot: [**blog**](https://blog.skypilot.co/abridge/)
- [Sep 2025] Network and Storage Benchmarks for LLM training on the cloud: [**blog**](https://maknee.github.io/blog/2025/Network-And-Storage-Training-Skypilot/)
- [Aug 2025] Serve and finetune **OpenAI GPT-OSS models** (gpt-oss-120b, gpt-oss-20b) with one command on any infra: [**serve**](./llm/gpt-oss/) + [**LoRA and full finetuning**](./llm/gpt-oss-finetuning/)
- [Jul 2025] Run distributed **RL training for LLMs** with Verl (PPO, GRPO) on any cloud: [**example**](./llm/verl/)
## Overview
SkyPilot **is easy to use for AI teams**:
- Quickly spin up compute on your own infra
- Environment and job as code — simple and portable
- Easy job management: queue, run, and auto-recover many jobs
SkyPilot **makes Kubernetes easy for AI & Infra teams**:
- Slurm-like ease of use, cloud-native robustness
- Local dev experience on K8s: SSH into pods, sync code, or connect IDE
- Turbocharge your clusters: gang scheduling, multi-cluster, and scaling
SkyPilot **unifies multiple clusters, clouds, and hardware**:
- One interface to use reserved GPUs, Kubernetes clusters, Slurm clusters, or 20+ clouds
- [Flexible provisioning](https://docs.skypilot.co/en/latest/examples/auto-failover.html) of GPUs, TPUs, CPUs, with auto-retry
- [Team deployment](https://docs.skypilot.co/en/latest/reference/api-server/api-server.html) and resource sharing
SkyPilot **cuts your cloud costs & maximizes GPU availability**:
- Autostop: automatic cleanup of idle resources
- [Spot instance support](https://docs.skypilot.co/en/latest/examples/managed-jobs.html#running-on-spot-instances): 3-6x cost savings, with preemption auto-recovery
- Intelligent scheduling: automatically run on the cheapest & most available infra
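As a concrete sketch of the spot-instance feature, a task can request spot capacity directly in its resources spec (fragment based on SkyPilot's YAML schema; the accelerator choice here is only an example):

```yaml
resources:
  accelerators: A100:1
  use_spot: true  # run on spot instances; managed jobs auto-recover on preemption
```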
SkyPilot supports your existing GPU, TPU, and CPU workloads, with no code changes.
Install with pip:
```bash
# Choose your clouds:
pip install -U "skypilot[kubernetes,aws,gcp,azure,oci,nebius,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp,seeweb,shadeform]"
```
To get the latest features and fixes, use the nightly build or [install from source](https://docs.skypilot.co/en/latest/getting-started/installation.html):
```bash
# Choose your clouds:
pip install "skypilot-nightly[kubernetes,aws,gcp,azure,oci,nebius,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp,seeweb,shadeform]"
```
<p align="center">
<img src="docs/source/_static/intro.gif" alt="SkyPilot">
</p>
Current supported infra: Kubernetes, Slurm, AWS, GCP, Azure, OCI, CoreWeave, Nebius, Lambda Cloud, RunPod, Fluidstack,
Cudo, DigitalOcean, Paperspace, Cloudflare, Samsung, IBM, Vast.ai, VMware vSphere, Seeweb, Prime Intellect, Shadeform.
<p align="center">
<img alt="SkyPilot" src="https://raw.githubusercontent.com/skypilot-org/skypilot/master/docs/source/images/cloud-logos-light.png" width=85%>
</p>
<!-- source xcf file: https://drive.google.com/drive/folders/1S_acjRsAD3T14qMeEnf6FFrIwHu_Gs_f?usp=drive_link -->
## Getting started
You can find our documentation [here](https://docs.skypilot.co/).
- [Installation](https://docs.skypilot.co/en/latest/getting-started/installation.html)
- [Quickstart](https://docs.skypilot.co/en/latest/getting-started/quickstart.html)
- [CLI reference](https://docs.skypilot.co/en/latest/reference/cli.html)
## SkyPilot in 1 minute
A SkyPilot task specifies: resource requirements, data to be synced, setup commands, and the task commands.
Once written in this [**unified interface**](https://docs.skypilot.co/en/latest/reference/yaml-spec.html) (YAML or Python API), the task can be launched on any available infra (Kubernetes, Slurm, cloud, etc.). This avoids vendor lock-in, and allows easily moving jobs to a different provider.
Paste the following into a file `my_task.yaml`:
```yaml
resources:
  accelerators: A100:8  # 8x NVIDIA A100 GPU

num_nodes: 1  # Number of VMs to launch

# Working directory (optional) containing the project codebase.
# Its contents are synced to ~/sky_workdir/ on the cluster.
workdir: ~/torch_examples

# Commands to be run before executing the job.
# Typical use: pip install -r requirements.txt, git clone, etc.
setup: |
  cd mnist
  pip install -r requirements.txt

# Commands to run as a job.
# Typical use: launch the main program.
run: |
  cd mnist
  python main.py --epochs 1
```
Prepare the workdir by cloning:
```bash
git clone https://github.com/pytorch/examples.git ~/torch_examples
```
Launch with `sky launch` (note: [access to GPU instances](https://docs.skypilot.co/en/latest/cloud-setup/quota.html) is needed for this example):
```bash
sky launch my_task.yaml
```
SkyPilot then performs the heavy lifting for you:
1. Find the cheapest & available infra across your clusters or clouds
2. Provision the GPUs (pods or VMs), with auto-failover if the infra returns capacity errors
3. Sync your local `workdir` to the provisioned cluster
4. Auto-install dependencies by running the task's `setup` commands
5. Run the task's `run` commands, and stream logs
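The "cheapest & available" selection with auto-failover can be sketched as a tiny, purely illustrative Python snippet (this is not SkyPilot's actual implementation; the candidate names and fields are made up for illustration):

```python
# Illustrative sketch of cost-aware selection with auto-failover:
# try candidates from cheapest to most expensive, skipping any
# infra that currently has no capacity.
candidates = [
    {"name": "spot-us-east", "price_per_hr": 0.9, "available": False},
    {"name": "spot-us-west", "price_per_hr": 1.1, "available": True},
    {"name": "on-demand",    "price_per_hr": 3.2, "available": True},
]

def pick_infra(candidates):
    for c in sorted(candidates, key=lambda c: c["price_per_hr"]):
        if c["available"]:  # auto-failover: skip infra without capacity
            return c["name"]
    raise RuntimeError("No capacity on any candidate infra")

print(pick_infra(candidates))  # spot-us-west
```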
See [Quickstart](https://docs.skypilot.co/en/latest/getting-started/quickstart.html) to get started with SkyPilot.
## Runnable examples
See [**SkyPilot examples**](https://docs.skypilot.co/en/docs-examples/examples/index.html) that cover: development, training, serving, LLM models, AI apps, and common frameworks.
Latest featured examples:
| Task | Examples |
|----------|----------|
| Training | [Verl](https://docs.skypilot.co/en/latest/examples/training/verl.html), [Finetune Llama 4](https://docs.skypilot.co/en/latest/examples/training/llama-4-finetuning.html), [TorchTitan](https://docs.skypilot.co/en/latest/examples/training/torchtitan.html), [PyTorch](https://docs.skypilot.co/en/latest/getting-started/tutorial.html), [DeepSpeed](https://docs.skypilot.co/en/latest/examples/training/deepspeed.html), [NeMo](https://docs.skypilot.co/en/latest/examples/training/nemo.html), [Ray](https://docs.skypilot.co/en/latest/examples/training/ray.html), [Unsloth](https://docs.skypilot.co/en/latest/examples/training/unsloth.html), [Jax/TPU](https://docs.skypilot.co/en/latest/examples/training/tpu.html) |
| Serving | [vLLM](https://docs.skypilot.co/en/latest/examples/serving/vllm.html), [SGLang](https://docs.skypilot.co/en/latest/examples/serving/sglang.html), [Ollama](https://docs.skypilot.co/en/latest/examples/serving/ollama.html) |
| Models | [DeepSeek-R1](https://docs.skypilot.co/en/latest/examples/models/deepseek-r1.html), [Llama 4](https://docs.skypilot.co/en/latest/examples/models/llama-4.html), [Llama 3](https://docs.skypilot.co/en/latest/examples/models/llama-3.html), [CodeLlama](https://docs.skypilot.co/en/latest/examples/models/codellama.html), [Qwen](https://docs.skypilot.co/en/latest/examples/models/qwen.html), [Kimi-K2](https://docs.skypilot.co/en/latest/examples/models/kimi-k2.html), [Kimi-K2-Thinking](https://docs.skypilot.co/en/latest/examples/models/kimi-k2-thinking.html), [Mixtral](https://docs.skypilot.co/en/latest/examples/models/mixtral.html) |
| AI apps | [RAG](https://docs.skypilot.co/en/latest/examples/applications/rag.html), [vector databases](https://docs.skypilot.co/en/latest/examples/applications/vector_database.html) (ChromaDB, CLIP) |
| Common frameworks | [Airflow](https://docs.skypilot.co/en/latest/examples/frameworks/airflow.html), [Jupyter](https://docs.skypilot.co/en/latest/examples/frameworks/jupyter.html), [marimo](https://docs.skypilot.co/en/latest/examples/frameworks/marimo.html) |
Source files can be found in [`llm/`](https://github.com/skypilot-org/skypilot/tree/master/llm) and [`examples/`](https://github.com/skypilot-org/skypilot/tree/master/examples).
## More information
To learn more, see [SkyPilot Overview](https://docs.skypilot.co/en/latest/overview.html), [SkyPilot docs](https://docs.skypilot.co/en/latest/), and [SkyPilot blog](https://blog.skypilot.co/).
SkyPilot adopters: [Testimonials and Case Studies](https://blog.skypilot.co/case-studies/)
Partners and integrations: [Community Spotlights](https://blog.skypilot.co/community/)
Follow updates:
- [Slack](http://slack.skypilot.co)
- [X / Twitter](https://twitter.com/skypilot_org)
- [LinkedIn](https://www.linkedin.com/company/skypilot-oss/)
- [SkyPilot Blog](https://blog.skypilot.co/) ([Introductory blog post](https://blog.skypilot.co/introducing-skypilot/))
Read the research:
- [SkyPilot paper](https://www.usenix.org/system/files/nsdi23-yang-zongheng.pdf) and [talk](https://www.usenix.org/conference/nsdi23/presentation/yang-zongheng) (NSDI 2023)
- [Sky Computing whitepaper](https://arxiv.org/abs/2205.07147)
- [Sky Computing vision paper](https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s02-stoica.pdf) (HotOS 2021)
- [SkyServe: AI serving across regions and clouds](https://arxiv.org/pdf/2411.01438) (EuroSys 2025)
- [Managed jobs spot instance policy](https://www.usenix.org/conference/nsdi24/presentation/wu-zhanghao) (NSDI 2024)
SkyPilot was initially started at the [Sky Computing Lab](https://sky.cs.berkeley.edu) at UC Berkeley and has since gained many industry contributors. To read about the project's origin and vision, see [Concept: Sky Computing](https://docs.skypilot.co/en/latest/sky-computing.html).
## Questions and feedback
We are excited to hear your feedback:
* For issues and feature requests, please [open a GitHub issue](https://github.com/skypilot-org/skypilot/issues/new).
* For questions, please use [GitHub Discussions](https://github.com/skypilot-org/skypilot/discussions).
For general discussions, join us on the [SkyPilot Slack](http://slack.skypilot.co).
## Contributing
We welcome all contributions to the project! See [CONTRIBUTING](CONTRIBUTING.md) for how to get involved.
| text/markdown | SkyPilot Team | null | null | null | Apache 2.0 | null | [
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Distributed Computing"
] | [] | null | null | null | [] | [] | [] | [
"wheel>=0.46.3",
"setuptools",
"pip",
"cachetools",
"click<8.2.0,>=7.0",
"colorama",
"cryptography",
"jinja2>=3.0",
"jsonschema",
"networkx",
"pandas>=1.3.0",
"pendulum",
"PrettyTable>=2.0.0",
"python-dotenv",
"rich",
"tabulate",
"typing_extensions",
"filelock>=3.15.0",
"packaging",
"psutil",
"pulp",
"pyyaml!=5.4.*,>3.13",
"ijson",
"orjson",
"requests",
"uvicorn[standard]<0.36.0,>=0.33.0",
"fastapi",
"pydantic!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3,>2",
"python-multipart",
"aiofiles",
"httpx",
"setproctitle",
"sqlalchemy>=2.0.0",
"psycopg2-binary",
"aiosqlite",
"asyncpg",
"greenlet",
"casbin",
"sqlalchemy_adapter",
"prometheus_client>=0.8.0",
"passlib",
"bcrypt==4.0.1",
"pyjwt",
"gitpython",
"paramiko",
"types-paramiko",
"alembic>=1.8.0",
"aiohttp>=3.13.3",
"anyio",
"awscli>=1.27.10; extra == \"aws\"",
"botocore>=1.29.10; extra == \"aws\"",
"boto3>=1.26.1; extra == \"aws\"",
"colorama<0.4.7; extra == \"aws\"",
"casbin; extra == \"aws\"",
"sqlalchemy_adapter; extra == \"aws\"",
"passlib; extra == \"aws\"",
"pyjwt; extra == \"aws\"",
"aiohttp; extra == \"aws\"",
"anyio; extra == \"aws\"",
"grpcio>=1.63.0; extra == \"aws\"",
"protobuf<7.0.0,>=5.26.1; extra == \"aws\"",
"aiosqlite; extra == \"aws\"",
"azure-cli>=2.65.0; extra == \"azure\"",
"azure-core>=1.31.0; extra == \"azure\"",
"azure-identity>=1.19.0; extra == \"azure\"",
"azure-mgmt-network>=27.0.0; extra == \"azure\"",
"azure-mgmt-compute>=33.0.0; extra == \"azure\"",
"azure-storage-blob>=12.23.1; extra == \"azure\"",
"msgraph-sdk; extra == \"azure\"",
"msrestazure; extra == \"azure\"",
"casbin; extra == \"azure\"",
"sqlalchemy_adapter; extra == \"azure\"",
"passlib; extra == \"azure\"",
"pyjwt; extra == \"azure\"",
"aiohttp; extra == \"azure\"",
"anyio; extra == \"azure\"",
"grpcio>=1.63.0; extra == \"azure\"",
"protobuf<7.0.0,>=5.26.1; extra == \"azure\"",
"aiosqlite; extra == \"azure\"",
"google-api-python-client>=2.69.0; extra == \"gcp\"",
"google-cloud-storage; extra == \"gcp\"",
"pyopenssl<24.3.0,>=23.2.0; extra == \"gcp\"",
"casbin; extra == \"gcp\"",
"sqlalchemy_adapter; extra == \"gcp\"",
"passlib; extra == \"gcp\"",
"pyjwt; extra == \"gcp\"",
"aiohttp; extra == \"gcp\"",
"anyio; extra == \"gcp\"",
"grpcio>=1.63.0; extra == \"gcp\"",
"protobuf<7.0.0,>=5.26.1; extra == \"gcp\"",
"aiosqlite; extra == \"gcp\"",
"ibm-cloud-sdk-core; extra == \"ibm\"",
"ibm-vpc; extra == \"ibm\"",
"ibm-platform-services>=0.48.0; extra == \"ibm\"",
"ibm-cos-sdk; extra == \"ibm\"",
"ray[default]>=2.6.1; extra == \"ibm\"",
"casbin; extra == \"ibm\"",
"sqlalchemy_adapter; extra == \"ibm\"",
"passlib; extra == \"ibm\"",
"pyjwt; extra == \"ibm\"",
"aiohttp; extra == \"ibm\"",
"anyio; extra == \"ibm\"",
"grpcio>=1.63.0; extra == \"ibm\"",
"protobuf<7.0.0,>=5.26.1; extra == \"ibm\"",
"aiosqlite; extra == \"ibm\"",
"docker; extra == \"docker\"",
"ray[default]>=2.6.1; extra == \"docker\"",
"casbin; extra == \"docker\"",
"sqlalchemy_adapter; extra == \"docker\"",
"passlib; extra == \"docker\"",
"pyjwt; extra == \"docker\"",
"aiohttp; extra == \"docker\"",
"anyio; extra == \"docker\"",
"grpcio>=1.63.0; extra == \"docker\"",
"protobuf<7.0.0,>=5.26.1; extra == \"docker\"",
"aiosqlite; extra == \"docker\"",
"casbin; extra == \"lambda\"",
"sqlalchemy_adapter; extra == \"lambda\"",
"passlib; extra == \"lambda\"",
"pyjwt; extra == \"lambda\"",
"aiohttp; extra == \"lambda\"",
"anyio; extra == \"lambda\"",
"grpcio>=1.63.0; extra == \"lambda\"",
"protobuf<7.0.0,>=5.26.1; extra == \"lambda\"",
"aiosqlite; extra == \"lambda\"",
"awscli>=1.27.10; extra == \"cloudflare\"",
"botocore>=1.29.10; extra == \"cloudflare\"",
"boto3>=1.26.1; extra == \"cloudflare\"",
"colorama<0.4.7; extra == \"cloudflare\"",
"casbin; extra == \"cloudflare\"",
"sqlalchemy_adapter; extra == \"cloudflare\"",
"passlib; extra == \"cloudflare\"",
"pyjwt; extra == \"cloudflare\"",
"aiohttp; extra == \"cloudflare\"",
"anyio; extra == \"cloudflare\"",
"grpcio>=1.63.0; extra == \"cloudflare\"",
"protobuf<7.0.0,>=5.26.1; extra == \"cloudflare\"",
"aiosqlite; extra == \"cloudflare\"",
"awscli>=1.27.10; extra == \"coreweave\"",
"botocore>=1.29.10; extra == \"coreweave\"",
"boto3>=1.26.1; extra == \"coreweave\"",
"colorama<0.4.7; extra == \"coreweave\"",
"kubernetes!=32.0.0,>=20.0.0; extra == \"coreweave\"",
"websockets; extra == \"coreweave\"",
"python-dateutil; extra == \"coreweave\"",
"casbin; extra == \"coreweave\"",
"sqlalchemy_adapter; extra == \"coreweave\"",
"passlib; extra == \"coreweave\"",
"pyjwt; extra == \"coreweave\"",
"aiohttp; extra == \"coreweave\"",
"anyio; extra == \"coreweave\"",
"grpcio>=1.63.0; extra == \"coreweave\"",
"protobuf<7.0.0,>=5.26.1; extra == \"coreweave\"",
"aiosqlite; extra == \"coreweave\"",
"ray[default]>=2.6.1; extra == \"scp\"",
"casbin; extra == \"scp\"",
"sqlalchemy_adapter; extra == \"scp\"",
"passlib; extra == \"scp\"",
"pyjwt; extra == \"scp\"",
"aiohttp; extra == \"scp\"",
"anyio; extra == \"scp\"",
"grpcio>=1.63.0; extra == \"scp\"",
"protobuf<7.0.0,>=5.26.1; extra == \"scp\"",
"aiosqlite; extra == \"scp\"",
"oci; extra == \"oci\"",
"casbin; extra == \"oci\"",
"sqlalchemy_adapter; extra == \"oci\"",
"passlib; extra == \"oci\"",
"pyjwt; extra == \"oci\"",
"aiohttp; extra == \"oci\"",
"anyio; extra == \"oci\"",
"grpcio>=1.63.0; extra == \"oci\"",
"protobuf<7.0.0,>=5.26.1; extra == \"oci\"",
"aiosqlite; extra == \"oci\"",
"kubernetes!=32.0.0,>=20.0.0; extra == \"kubernetes\"",
"websockets; extra == \"kubernetes\"",
"python-dateutil; extra == \"kubernetes\"",
"casbin; extra == \"kubernetes\"",
"sqlalchemy_adapter; extra == \"kubernetes\"",
"passlib; extra == \"kubernetes\"",
"pyjwt; extra == \"kubernetes\"",
"aiohttp; extra == \"kubernetes\"",
"anyio; extra == \"kubernetes\"",
"grpcio>=1.63.0; extra == \"kubernetes\"",
"protobuf<7.0.0,>=5.26.1; extra == \"kubernetes\"",
"aiosqlite; extra == \"kubernetes\"",
"kubernetes!=32.0.0,>=20.0.0; extra == \"ssh\"",
"websockets; extra == \"ssh\"",
"python-dateutil; extra == \"ssh\"",
"casbin; extra == \"ssh\"",
"sqlalchemy_adapter; extra == \"ssh\"",
"passlib; extra == \"ssh\"",
"pyjwt; extra == \"ssh\"",
"aiohttp; extra == \"ssh\"",
"anyio; extra == \"ssh\"",
"grpcio>=1.63.0; extra == \"ssh\"",
"protobuf<7.0.0,>=5.26.1; extra == \"ssh\"",
"aiosqlite; extra == \"ssh\"",
"runpod>=1.6.1; extra == \"runpod\"",
"tomli; extra == \"runpod\"",
"pycares<5; extra == \"runpod\"",
"casbin; extra == \"runpod\"",
"sqlalchemy_adapter; extra == \"runpod\"",
"passlib; extra == \"runpod\"",
"pyjwt; extra == \"runpod\"",
"aiohttp; extra == \"runpod\"",
"anyio; extra == \"runpod\"",
"grpcio>=1.63.0; extra == \"runpod\"",
"protobuf<7.0.0,>=5.26.1; extra == \"runpod\"",
"aiosqlite; extra == \"runpod\"",
"casbin; extra == \"fluidstack\"",
"sqlalchemy_adapter; extra == \"fluidstack\"",
"passlib; extra == \"fluidstack\"",
"pyjwt; extra == \"fluidstack\"",
"aiohttp; extra == \"fluidstack\"",
"anyio; extra == \"fluidstack\"",
"grpcio>=1.63.0; extra == \"fluidstack\"",
"protobuf<7.0.0,>=5.26.1; extra == \"fluidstack\"",
"aiosqlite; extra == \"fluidstack\"",
"cudo-compute>=0.1.10; extra == \"cudo\"",
"casbin; extra == \"cudo\"",
"sqlalchemy_adapter; extra == \"cudo\"",
"passlib; extra == \"cudo\"",
"pyjwt; extra == \"cudo\"",
"aiohttp; extra == \"cudo\"",
"anyio; extra == \"cudo\"",
"grpcio>=1.63.0; extra == \"cudo\"",
"protobuf<7.0.0,>=5.26.1; extra == \"cudo\"",
"aiosqlite; extra == \"cudo\"",
"casbin; extra == \"paperspace\"",
"sqlalchemy_adapter; extra == \"paperspace\"",
"passlib; extra == \"paperspace\"",
"pyjwt; extra == \"paperspace\"",
"aiohttp; extra == \"paperspace\"",
"anyio; extra == \"paperspace\"",
"grpcio>=1.63.0; extra == \"paperspace\"",
"protobuf<7.0.0,>=5.26.1; extra == \"paperspace\"",
"aiosqlite; extra == \"paperspace\"",
"casbin; extra == \"primeintellect\"",
"sqlalchemy_adapter; extra == \"primeintellect\"",
"passlib; extra == \"primeintellect\"",
"pyjwt; extra == \"primeintellect\"",
"aiohttp; extra == \"primeintellect\"",
"anyio; extra == \"primeintellect\"",
"grpcio>=1.63.0; extra == \"primeintellect\"",
"protobuf<7.0.0,>=5.26.1; extra == \"primeintellect\"",
"aiosqlite; extra == \"primeintellect\"",
"pydo>=0.3.0; extra == \"do\"",
"azure-core>=1.24.0; extra == \"do\"",
"azure-common; extra == \"do\"",
"casbin; extra == \"do\"",
"sqlalchemy_adapter; extra == \"do\"",
"passlib; extra == \"do\"",
"pyjwt; extra == \"do\"",
"aiohttp; extra == \"do\"",
"anyio; extra == \"do\"",
"grpcio>=1.63.0; extra == \"do\"",
"protobuf<7.0.0,>=5.26.1; extra == \"do\"",
"aiosqlite; extra == \"do\"",
"vastai-sdk>=0.1.12; extra == \"vast\"",
"casbin; extra == \"vast\"",
"sqlalchemy_adapter; extra == \"vast\"",
"passlib; extra == \"vast\"",
"pyjwt; extra == \"vast\"",
"aiohttp; extra == \"vast\"",
"anyio; extra == \"vast\"",
"grpcio>=1.63.0; extra == \"vast\"",
"protobuf<7.0.0,>=5.26.1; extra == \"vast\"",
"aiosqlite; extra == \"vast\"",
"pyvmomi==8.0.1.0.2; extra == \"vsphere\"",
"casbin; extra == \"vsphere\"",
"sqlalchemy_adapter; extra == \"vsphere\"",
"passlib; extra == \"vsphere\"",
"pyjwt; extra == \"vsphere\"",
"aiohttp; extra == \"vsphere\"",
"anyio; extra == \"vsphere\"",
"grpcio>=1.63.0; extra == \"vsphere\"",
"protobuf<7.0.0,>=5.26.1; extra == \"vsphere\"",
"aiosqlite; extra == \"vsphere\"",
"nebius>=0.3.12; extra == \"nebius\"",
"grpcio>=1.63.0; extra == \"nebius\"",
"protobuf<7.0.0,>=5.26.1; extra == \"nebius\"",
"awscli>=1.27.10; extra == \"nebius\"",
"botocore>=1.29.10; extra == \"nebius\"",
"boto3>=1.26.1; extra == \"nebius\"",
"colorama<0.4.7; extra == \"nebius\"",
"casbin; extra == \"nebius\"",
"sqlalchemy_adapter; extra == \"nebius\"",
"passlib; extra == \"nebius\"",
"pyjwt; extra == \"nebius\"",
"aiohttp; extra == \"nebius\"",
"anyio; extra == \"nebius\"",
"grpcio>=1.63.0; extra == \"nebius\"",
"protobuf<7.0.0,>=5.26.1; extra == \"nebius\"",
"aiosqlite; extra == \"nebius\"",
"casbin; extra == \"hyperbolic\"",
"sqlalchemy_adapter; extra == \"hyperbolic\"",
"passlib; extra == \"hyperbolic\"",
"pyjwt; extra == \"hyperbolic\"",
"aiohttp; extra == \"hyperbolic\"",
"anyio; extra == \"hyperbolic\"",
"grpcio>=1.63.0; extra == \"hyperbolic\"",
"protobuf<7.0.0,>=5.26.1; extra == \"hyperbolic\"",
"aiosqlite; extra == \"hyperbolic\"",
"ecsapi==0.4.0; extra == \"seeweb\"",
"casbin; extra == \"seeweb\"",
"sqlalchemy_adapter; extra == \"seeweb\"",
"passlib; extra == \"seeweb\"",
"pyjwt; extra == \"seeweb\"",
"aiohttp; extra == \"seeweb\"",
"anyio; extra == \"seeweb\"",
"grpcio>=1.63.0; extra == \"seeweb\"",
"protobuf<7.0.0,>=5.26.1; extra == \"seeweb\"",
"aiosqlite; extra == \"seeweb\"",
"casbin; extra == \"shadeform\"",
"sqlalchemy_adapter; extra == \"shadeform\"",
"passlib; extra == \"shadeform\"",
"pyjwt; extra == \"shadeform\"",
"aiohttp; extra == \"shadeform\"",
"anyio; extra == \"shadeform\"",
"grpcio>=1.63.0; extra == \"shadeform\"",
"protobuf<7.0.0,>=5.26.1; extra == \"shadeform\"",
"aiosqlite; extra == \"shadeform\"",
"python-hostlist; extra == \"slurm\"",
"casbin; extra == \"slurm\"",
"sqlalchemy_adapter; extra == \"slurm\"",
"passlib; extra == \"slurm\"",
"pyjwt; extra == \"slurm\"",
"aiohttp; extra == \"slurm\"",
"anyio; extra == \"slurm\"",
"grpcio>=1.63.0; extra == \"slurm\"",
"protobuf<7.0.0,>=5.26.1; extra == \"slurm\"",
"aiosqlite; extra == \"slurm\"",
"casbin; extra == \"yotta\"",
"sqlalchemy_adapter; extra == \"yotta\"",
"passlib; extra == \"yotta\"",
"pyjwt; extra == \"yotta\"",
"aiohttp; extra == \"yotta\"",
"anyio; extra == \"yotta\"",
"grpcio>=1.63.0; extra == \"yotta\"",
"protobuf<7.0.0,>=5.26.1; extra == \"yotta\"",
"aiosqlite; extra == \"yotta\"",
"google-cloud-storage; extra == \"all\"",
"tomli; extra == \"all\"",
"azure-cli>=2.65.0; extra == \"all\"",
"pycares<5; extra == \"all\"",
"anyio; extra == \"all\"",
"cudo-compute>=0.1.10; extra == \"all\"",
"msrestazure; extra == \"all\"",
"azure-core>=1.24.0; extra == \"all\"",
"casbin; extra == \"all\"",
"botocore>=1.29.10; extra == \"all\"",
"docker; extra == \"all\"",
"ibm-vpc; extra == \"all\"",
"ray[default]>=2.6.1; extra == \"all\"",
"oci; extra == \"all\"",
"azure-common; extra == \"all\"",
"google-api-python-client>=2.69.0; extra == \"all\"",
"azure-storage-blob>=12.23.1; extra == \"all\"",
"azure-mgmt-network>=27.0.0; extra == \"all\"",
"msgraph-sdk; extra == \"all\"",
"boto3>=1.26.1; extra == \"all\"",
"kubernetes!=32.0.0,>=20.0.0; extra == \"all\"",
"pyjwt; extra == \"all\"",
"nebius>=0.3.12; extra == \"all\"",
"python-dateutil; extra == \"all\"",
"pydo>=0.3.0; extra == \"all\"",
"azure-mgmt-compute>=33.0.0; extra == \"all\"",
"ibm-platform-services>=0.48.0; extra == \"all\"",
"grpcio>=1.63.0; extra == \"all\"",
"colorama<0.4.7; extra == \"all\"",
"passlib; extra == \"all\"",
"pyopenssl<24.3.0,>=23.2.0; extra == \"all\"",
"vastai-sdk>=0.1.12; extra == \"all\"",
"python-hostlist; extra == \"all\"",
"pyvmomi==8.0.1.0.2; extra == \"all\"",
"ibm-cloud-sdk-core; extra == \"all\"",
"ecsapi==0.4.0; extra == \"all\"",
"protobuf<7.0.0,>=5.26.1; extra == \"all\"",
"azure-core>=1.31.0; extra == \"all\"",
"aiohttp; extra == \"all\"",
"aiosqlite; extra == \"all\"",
"awscli>=1.27.10; extra == \"all\"",
"azure-identity>=1.19.0; extra == \"all\"",
"runpod>=1.6.1; extra == \"all\"",
"ibm-cos-sdk; extra == \"all\"",
"sqlalchemy_adapter; extra == \"all\"",
"websockets; extra == \"all\"",
"grpcio>=1.63.0; extra == \"remote\"",
"protobuf<7.0.0,>=5.26.1; extra == \"remote\"",
"casbin; extra == \"server\"",
"sqlalchemy_adapter; extra == \"server\"",
"passlib; extra == \"server\"",
"pyjwt; extra == \"server\"",
"aiohttp; extra == \"server\"",
"anyio; extra == \"server\"",
"grpcio>=1.63.0; extra == \"server\"",
"protobuf<7.0.0,>=5.26.1; extra == \"server\"",
"aiosqlite; extra == \"server\""
] | [] | [] | [] | [
"Homepage, https://github.com/skypilot-org/skypilot",
"Issues, https://github.com/skypilot-org/skypilot/issues",
"Discussion, https://github.com/skypilot-org/skypilot/discussions",
"Documentation, https://docs.skypilot.co/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:51:48.794190 | skypilot_nightly-1.0.0.dev20260220.tar.gz | 3,050,631 | b3/62/4caadb5b22b672fad351d0550436466dd1a092814b59a92730c9b8c1cd76/skypilot_nightly-1.0.0.dev20260220.tar.gz | source | sdist | null | false | a04712f2c900f2d3932a1c5a9b333d9e | 50359101f083b9407a96ae3879749249f035efbb601c28fdd7e34409b6ac1834 | b3624caadb5b22b672fad351d0550436466dd1a092814b59a92730c9b8c1cd76 | null | [
"LICENSE"
] | 336 |
2.4 | ratio1 | 3.5.2 | `ratio1`, or the Ratio1 SDK, is the Python SDK required for client app development for the Ratio1 ecosystem | # Ratio1 SDK
Welcome to the **Ratio1 SDK** repository, formerly known as the **ratio1 SDK**. The Ratio1 SDK is a crucial component of the Ratio1 ecosystem, designed to facilitate interactions, development, and deployment of jobs within the Ratio1 network. By enabling low-code development, the SDK allows developers to build and deploy end-to-end AI (and beyond) cooperative application pipelines seamlessly within the Ratio1 Edge Nodes ecosystem.
## Overview
The **Ratio1 SDK** is engineered to enhance the Ratio1 protocol and ecosystem, aiming to improve the functionality and performance of the Ratio1 Edge Node through dedicated research and community contributions. This SDK serves as an essential tool for developers looking to integrate their applications with the Ratio1 network, enabling them to leverage the decentralized, secure, and privacy-preserving capabilities of Ratio1 Edge Nodes.
Key functionalities of the Ratio1 SDK include:
- **Job Interactions**: Facilitate the development and management of computation tasks within the Ratio1 network.
- **Development Tools**: Provide low-code solutions for creating and deploying AI-driven application pipelines.
- **Ecosystem Integration**: Seamlessly integrate with Ratio1 Edge Nodes to utilize their computational resources effectively.
- **Collaboration and Deployment**: Enable cooperative application development and deployment across multiple edge nodes within the Ratio1 ecosystem.
Unlike the Ratio1 Core Packages, which are intended solely for protocol and ecosystem enhancements and are not meant for standalone installation, the Ratio1 SDK is designed for both client-side development and sending workloads to Ratio1 Edge Nodes, making it an indispensable tool for developers within the ecosystem.
## The `nepctl` CLI Tool
Our SDK has a CLI tool called `nepctl` that allows you to interact with the Ratio1 network. You can use it to query nodes, configure the client, and manage nodes directly from the terminal. The `nepctl` tool is a powerful utility that simplifies network interactions and provides a seamless experience for developers.
For more information on the `nepctl` CLI tool, please refer to the [nepctl](nepctl.md) documentation.
## Dependencies
The Ratio1 SDK relies on several key packages to function effectively. These dependencies are automatically managed when installing the SDK via pip:
- `pika`
- `paho-mqtt`
- `numpy`
- `pyopenssl>=23.0.0`
- `cryptography>=39.0.0`
- `python-dateutil`
- `pyaml`
## Installation
Installing the Ratio1 SDK is straightforward and is intended for development and integration into your projects. Use the following pip commands to install the SDK:
### Standard Installation
To install the Ratio1 SDK, run:
```shell
pip install ratio1_sdk --upgrade
```
### Development Installation
For development purposes, you can clone the repository and set up the SDK in an editable mode:
```shell
git clone https://github.com/Ratio1/ratio1_sdk
cd ratio1_sdk
pip install -e .
```
This allows you to make modifications to the SDK and have them reflected immediately without reinstalling.
## Documentation
Comprehensive documentation for the Ratio1 SDK is currently a work in progress. Minimal documentation is available here, with detailed code examples located in the `tutorials` folder within the project's repository. We encourage developers to explore these examples to understand the SDK's capabilities and integration methods.
## Quick Start Guides
Starting with version 2.6+, the Ratio1 SDK automatically performs self-configuration using **dAuth**—the Ratio1 decentralized self-authentication system. To begin integrating with the Ratio1 network, follow these steps:
### 1. Start a Local Edge Node (testnet)
Launch a local Ratio1 Edge Node using Docker:
```bash
docker run -d --name=r1node ratio1/edge_node:testnet
```
If you want the node to keep a persistent volume, use the following command instead:
```bash
docker run -d --name=r1node --rm --pull=always -v r1vol:/edge_node/_local_cache ratio1/edge_node:testnet
```
This way, the node stores its data in the `r1vol` volume, so you can stop and restart it without losing data written by jobs deployed from your SDK. The `--pull=always` flag ensures the latest node image is always pulled from Docker Hub.
After a few seconds, the node will be online. Retrieve the node's address by running:
```bash
docker exec r1node get_node_info
```
The output will resemble:
```json
{
  "address": "0xai_A2pPf0lxZSZkGONzLOmhzndncc1VvDBHfF-YLWlsrG9m",
  "alias": "5ac5438a2775",
  "eth_address": "0xc440cdD0BBdDb5a271de07d3378E31Cb8D9727A5",
  "version_long": "v2.5.36 | core v7.4.23 | SDK 2.6.15",
  "version_short": "v2.5.36",
  "info": {
    "whitelist": []
  }
}
```
As you can see, the node is online but NOT ready to accept workloads, because it has no whitelisted clients yet. To whitelist your client, use the `add_allowed` command:
```bash
docker exec r1node add_allowed <address> [<alias>]
```
where `<address>` is the address of your client and `<alias>` is an optional alias for your client.
An example of whitelisting a client:
```bash
docker exec r1node add_allowed 0xai_AthDPWc_k3BKJLLYTQMw--Rjhe3B6_7w76jlRpT6nDeX some-node-alias
```
You will then receive a response similar to:
```json
{
  "address": "0xai_A2pPf0lxZSZkGONzLOmhzndncc1VvDBHfF-YLWlsrG9m",
  "alias": "5ac5438a2775",
  "eth_address": "0xc440cdD0BBdDb5a271de07d3378E31Cb8D9727A5",
  "version_long": "v2.5.36 | core v7.4.23 | SDK 2.6.15",
  "version_short": "v2.5.36",
  "info": {
    "whitelist": [
      "0xai_AthDPWc_k3BKJLLYTQMw--Rjhe3B6_7w76jlRpT6nDeX"
    ]
  }
}
```
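If you script this step, the JSON returned by `get_node_info` can be checked directly before sending workloads. The following is an illustrative sketch (the helper `is_whitelisted` is hypothetical, not part of the SDK; it assumes the command's output has been captured as a string):

```python
import json

# Example output captured from `docker exec r1node get_node_info`
# after whitelisting a client (trimmed to the relevant fields).
raw = """
{
  "address": "0xai_A2pPf0lxZSZkGONzLOmhzndncc1VvDBHfF-YLWlsrG9m",
  "info": {
    "whitelist": ["0xai_AthDPWc_k3BKJLLYTQMw--Rjhe3B6_7w76jlRpT6nDeX"]
  }
}
"""
node_info = json.loads(raw)

def is_whitelisted(node_info: dict, client_address: str) -> bool:
    # The node accepts workloads only from whitelisted client addresses.
    return client_address in node_info["info"]["whitelist"]

print(is_whitelisted(node_info, "0xai_AthDPWc_k3BKJLLYTQMw--Rjhe3B6_7w76jlRpT6nDeX"))  # True
```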
### 2. Develop and Deploy Jobs
Use the SDK to develop and send workloads to the Edge Nodes. Below are examples of both local and remote execution.
## Examples
### Local Execution
This example demonstrates how to find all 168 prime numbers in the interval 1 - 1000 using local execution. The code leverages multiple threads to perform prime number generation efficiently.
```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def local_brute_force_prime_number_generator():
    def is_prime(n):
        if n <= 1:
            return False
        for i in range(2, int(np.sqrt(n)) + 1):
            if n % i == 0:
                return False
        return True

    random_numbers = np.random.randint(1, 1000, 20)
    thread_pool = ThreadPoolExecutor(max_workers=4)
    are_primes = list(thread_pool.map(is_prime, random_numbers))
    prime_numbers = []
    for i in range(len(random_numbers)):
        if are_primes[i]:
            prime_numbers.append(random_numbers[i])
    return prime_numbers

if __name__ == "__main__":
    found_so_far = []
    print_step = 0
    while len(found_so_far) < 168:
        # Compute a batch of prime numbers
        prime_numbers = local_brute_force_prime_number_generator()
        # Keep only the new prime numbers
        for prime_number in prime_numbers:
            if prime_number not in found_so_far:
                found_so_far.append(prime_number)
        # Show progress
        if print_step % 50 == 0:
            print("Found so far: {}: {}\n".format(len(found_so_far), sorted(found_so_far)))
        print_step += 1
    # Show final result
    print("Found so far: {}: {}\n".format(len(found_so_far), sorted(found_so_far)))
```
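As a quick sanity check on the target of 168, the number of primes below 1000 can be confirmed with a classic sieve (a standalone snippet, not part of the Ratio1 SDK):

```python
def primes_below(limit):
    """Sieve of Eratosthenes: return all primes strictly below `limit`."""
    sieve = [True] * limit
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            # Mark every multiple of p, starting at p*p, as composite
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

print(len(primes_below(1000)))  # 168, the stopping target used in the loop above
```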
### Remote Execution
To accelerate prime number discovery, this example demonstrates deploying the task across multiple edge nodes within the Ratio1 network. Minimal code changes are required to transition from local to remote execution.
#### 1. Modify the Prime Number Generator
```python
from ratio1_sdk import CustomPluginTemplate


def remote_brute_force_prime_number_generator(plugin: CustomPluginTemplate):
    def is_prime(n):
        if n <= 1:
            return False
        for i in range(2, int(plugin.np.sqrt(n)) + 1):
            if n % i == 0:
                return False
        return True

    random_numbers = plugin.np.random.randint(1, 1000, 20)
    are_primes = plugin.threadapi_map(is_prime, random_numbers, n_threads=4)
    prime_numbers = []
    for i in range(len(random_numbers)):
        if are_primes[i]:
            prime_numbers.append(random_numbers[i])
    return prime_numbers
```
#### 2. Connect to the Network and Select a Node
```python
from ratio1_sdk import Session
from time import sleep


def on_heartbeat(session: Session, node: str, heartbeat: dict):
    session.P("{} is online".format(node))
    return


if __name__ == '__main__':
    session = Session(
        on_heartbeat=on_heartbeat
    )
    # Run the program for 15 seconds to detect online nodes
    sleep(15)
    # Retrieve and select an online node
    node = "0xai_A8SY7lEqBtf5XaGyB6ipdk5C30vSf3HK4xELp3iplwLe"  # ratio1-1
```
#### 3. Deploy the Distributed Job
```python
from ratio1_sdk import DistributedCustomCodePresets as Presets

# `locally_process_partial_results` is a user-defined callback (not shown here)
# that handles partial results as they arrive from the worker nodes.
_, _ = session.create_chain_dist_custom_job(
    node=node,
    main_node_process_real_time_collected_data=Presets.PROCESS_REAL_TIME_COLLECTED_DATA__KEEP_UNIQUES_IN_AGGREGATED_COLLECTED_DATA,
    main_node_finish_condition=Presets.FINISH_CONDITION___AGGREGATED_DATA_MORE_THAN_X,
    main_node_finish_condition_kwargs={"X": 167},
    main_node_aggregate_collected_data=Presets.AGGREGATE_COLLECTED_DATA___AGGREGATE_COLLECTED_DATA,
    nr_remote_worker_nodes=2,
    worker_node_code=remote_brute_force_prime_number_generator,
    on_data=locally_process_partial_results,
    deploy=True
)
```
#### 4. Close the Session Upon Completion
```python
# Wait until the finished flag is set to True
session.run(wait=lambda: not finished, close_pipelines=True)
```
## Project Financing Disclaimer
This project incorporates open-source components developed with the support of financing grants **SMIS 143488** and **SMIS 156084**, provided by the Romanian Competitiveness Operational Programme. We extend our sincere gratitude for this support, which has been instrumental in advancing our work and enabling us to share these resources with the community.
The content and information within this repository are solely the responsibility of the authors and do not necessarily reflect the views of the funding agencies. The grants have specifically supported certain aspects of this open-source project, facilitating broader dissemination and collaborative development.
For any inquiries regarding the funding and its impact on this project, please contact the authors directly.
## License
This project is licensed under the **Apache 2.0 License**. For more details, please refer to the [LICENSE](LICENSE) file.
## Contact
For more information, visit our website at [https://ratio1.ai](https://ratio1.ai) or reach out to us via email at [support@ratio1.ai](mailto:support@ratio1.ai).
## Citation
If you use the Ratio1 SDK in your research or projects, please cite it as follows:
```bibtex
@misc{Ratio1SDK,
author = {Ratio1.AI},
title = {Ratio1 SDK},
year = {2024-2025},
howpublished = {\url{https://github.com/Ratio1/ratio1_sdk}},
}
```
```bibtex
@misc{Ratio1EdgeNode,
author = {Ratio1.AI},
title = {Ratio1: Edge Node},
year = {2024-2025},
howpublished = {\url{https://github.com/Ratio1/edge_node}},
}
```
| text/markdown | null | Andrei Ionut Damian <andrei.damian@ratio1.ai>, Cristan Bleotiu <cristian.bleotiu@ratio1.ai> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cryptography>=39.0.0",
"numpy",
"paho-mqtt",
"pandas",
"pika",
"psutil",
"pyaml",
"pyopenssl>=23.0.0",
"python-dateutil",
"web3>=7.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Ratio1/ratio1_sdk",
"Bug Tracker, https://github.com/Ratio1/ratio1_sdk/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:51:42.237032 | ratio1-3.5.2.tar.gz | 425,796 | ba/8d/a7711f09be61aa7d3b2728ef46a1bb391c1faa280f1eea81ba5f69c28f19/ratio1-3.5.2.tar.gz | source | sdist | null | false | 38df7051864ead3f90f5ba02719cac78 | bedea27896864d04245b34bcc0f9d0feb2f1cee0d696c0374ec8c925c15f571b | ba8da7711f09be61aa7d3b2728ef46a1bb391c1faa280f1eea81ba5f69c28f19 | null | [
"LICENSE"
] | 173 |
2.4 | syft-client | 0.1.100 | A simple client library for setting up secure communication channels using Google Drive | # Syft-client
## Install
```
uv pip install -e .
```
## Test
```
just test-unit
just test-integration
```
| text/markdown | null | OpenMined <info@openmined.org> | null | null | Apache-2.0 | privacy, federated-learning, google-drive, communication | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Security :: Cryptography",
"Topic :: Communications :: File Sharing"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"google-api-python-client>=2.95.0",
"google-auth>=2.22.0",
"google-auth-oauthlib>=1.0.0",
"rich>=13.0.0",
"pandas",
"pyarrow",
"pydantic-settings>=2.11.0",
"pyyaml>=6.0",
"jinja2>=3.1.0",
"click>=8.0.0",
"python-daemon>=3.0.0",
"syft-bg",
"syft-job",
"syft-dataset",
"syft-dataset; extra == \"datasets\"",
"syft-job; extra == \"job\"",
"syft-client[datasets,job]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/OpenMined/syft-client",
"Repository, https://github.com/OpenMined/syft-client",
"Documentation, https://github.com/OpenMined/syft-client#readme",
"Bug Tracker, https://github.com/OpenMined/syft-client/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:51:40.157725 | syft_client-0.1.100.tar.gz | 101,641 | bd/0e/a415bb6e56ba305d4a516f0fb2bef8d08816043819e6f0cb3fe9cea2c06a/syft_client-0.1.100.tar.gz | source | sdist | null | false | 6803365d3d7cf52d51c89ecf80accc28 | 9edec754819b2e4bfa534c4f95db4c320576babfbc1072ea18a13c80b9424e22 | bd0ea415bb6e56ba305d4a516f0fb2bef8d08816043819e6f0cb3fe9cea2c06a | null | [] | 148 |
2.4 | jmstate | 0.13.4 | Joint modeling with automatic differentiation | # 📦 jmstate
**jmstate** is a Python package for **multi-state nonlinear joint modeling**.
It leverages **PyTorch** for automatic differentiation and vectorized computation, making it efficient and scalable. The package provides a flexible framework where you can use **neural networks as regression and link functions**, while still offering simpler built-in options like parametric baseline hazards.
With **jmstate**, you can model longitudinal data jointly with multi-state transitions (e.g. health progression), capture nonlinear effects, and perform inference in complex real-world settings.
---
## ✨ Features
- **Multi-State Joint Modeling**
Supports subjects moving through multiple states with transition intensities that depend on longitudinal trajectories and covariates.
- **Nonlinear Flexibility**
Use neural networks (or any PyTorch model) as regression or link functions.
- **Built-in Tools**
Includes default baseline hazards, regression, link functions, and analysis utilities.
- **Automatic Differentiation**
Powered by PyTorch for efficient gradient computation and vectorization.
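The automatic-differentiation idea that jmstate builds on can be illustrated with a tiny, dependency-free dual-number sketch (conceptual only; jmstate itself relies on PyTorch's autograd, not on this code):

```python
import math

class Dual:
    """Forward-mode autodiff value: a number paired with its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def exp(x):
    # Chain rule: d/dt exp(u) = exp(u) * u'
    e = math.exp(x.val)
    return Dual(e, e * x.dot)

# d/dtheta [theta * exp(theta)] at theta = 1 equals 2e
theta = Dual(1.0, 1.0)  # seed: d(theta)/d(theta) = 1
y = theta * exp(theta)
print(y.dot)
```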
---
## 🚀 Installation
```bash
pip install jmstate
```
---
## 📖 Learn More
For tutorials and the API reference, visit the official site:
👉 [jmstate's documentation](https://felixlaplante0.github.io/jmstate/)
| text/markdown | Félix Laplante | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"torch",
"matplotlib",
"scikit-learn",
"rich",
"tqdm"
] | [] | [] | [] | [
"Source, https://github.com/felixlaplante/jmstate"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-20T18:51:20.811243 | jmstate-0.13.4.tar.gz | 31,398 | 97/a6/08feb1dc949d45a417d863465f0adeee6655a0db755ad641355eb046a9ee/jmstate-0.13.4.tar.gz | source | sdist | null | false | 593f1eca614106e49a97c81219ec076c | 47117ee0fc0bb131eddbbcd41b417b18705e70fc6de7a54c3acbfd8ccb9132ff | 97a608feb1dc949d45a417d863465f0adeee6655a0db755ad641355eb046a9ee | null | [
"LICENSE"
] | 169 |
2.4 | ndev-settings | 0.4.1 | Reusable settings and customization widget for the ndev-kit | # ndev-settings
[](https://github.com/ndev-kit/ndev-settings/raw/main/LICENSE)
[](https://pypi.org/project/ndev-settings)
[](https://python.org)
[](https://github.com/ndev-kit/ndev-settings/actions)
[](https://codecov.io/gh/ndev-kit/ndev-settings)
[](https://napari-hub.org/plugins/ndev-settings)
[](https://napari.org/stable/plugins/index.html)
[](https://github.com/copier-org/copier)
Reusable settings and customization widget for the ndev-kit
----------------------------------
This [napari] plugin was generated with [copier] using the [napari-plugin-template] v1.1.0.
<!--
Don't miss the full getting started guide to set up your new package:
https://github.com/napari/napari-plugin-template#getting-started
and review the napari docs for plugin developers:
https://napari.org/stable/plugins/index.html
-->
## Installation
You can install `ndev-settings` via [pip]:
```
pip install ndev-settings
```
If napari is not already installed, you can install `ndev-settings` with napari and Qt via:
```
pip install "ndev-settings[all]"
```
To install the latest development version:
```
pip install git+https://github.com/ndev-kit/ndev-settings.git
```
## Use with external libraries
External libraries can provide their settings in YAML format with the same structure as your main `ndev_settings.yaml`.
**Step 1**: Create a YAML file in the external library (e.g., `ndev_settings.yaml`):
```yaml
ndevio_reader:
  preferred_reader:
    default: bioio-ome-tiff
    dynamic_choices:
      fallback_message: No readers found
      provider: bioio.readers
    tooltip: Preferred reader to use when opening images
    value: bioio-ome-tiff
  scene_handling:
    choices:
      - Open Scene Widget
      - View All Scenes
      - View First Scene Only
    default: Open Scene Widget
    tooltip: How to handle files with multiple scenes
    value: View First Scene Only
  clear_layers_on_new_scene:
    default: false
    tooltip: Whether to clear the viewer when selecting a new scene
    value: false
ndevio_export:
  canvas_scale:
    default: 1.0
    max: 100.0
    min: 0.1
    tooltip: Scales exported figures and screenshots by this value
    value: 1.0
  override_canvas_size:
    default: false
    tooltip: Whether to override the canvas size when exporting canvas screenshot
    value: false
  canvas_size:
    default: !!python/tuple
      - 1024
      - 1024
    tooltip: Height x width of the canvas when exporting a screenshot
    value: !!python/tuple
      - 1024
      - 1024
```
**Step 2**: Register the entry point in `pyproject.toml`:
```toml
[project.entry-points."ndev_settings.manifest"]
ndevio = "ndevio:ndev_settings.yaml"
```
**Step 3**: Use the autogenerated widget in napari!

## Usage Example
```python
from ndev_settings import get_settings
settings = get_settings()
# Access settings from main file
print(settings.Canvas.canvas_scale)
# Access settings from external libraries (if installed)
print(settings.Reader.preferred_reader) # From ndevio
print(settings.Export.compression_level) # From ndevio
# Modify and save settings
settings.Canvas.canvas_scale = 2.0
settings.save() # Persists across sessions
# Reset to defaults
settings.reset_to_default("canvas_scale") # Reset single setting
settings.reset_to_default(group="Canvas") # Reset entire group
settings.reset_to_default() # Reset all settings
```
## Performance Note: npe1 Plugin Compatibility
If you have many legacy npe1 plugins installed (e.g., `napari-assistant`, `napari-segment-blobs-and-things-with-membranes`, `napari-simpleitk-image-processing`), you may experience slow widget loading times (10+ seconds) the first time you open the settings plugin widget in a napari session. This is a known issue in napari's npe1 compatibility layer, not specific to ndev-settings. The npe1 adapter iterates through all plugin widgets and performs expensive metadata lookups for each legacy plugin.
**Workaround**: If you don't rely on plugins that need npe1 runtime behavior, you can disable the adapter in napari:
1. Go to `File` -> `Preferences` -> `Plugins`
2. Uncheck "Use npe2 adapter"
3. Restart napari
This dramatically improves widget loading times since only pure npe2 plugins are discovered.
## How Settings Persistence Works
Settings are automatically cached to improve startup performance:
1. **First load**: Settings are discovered from all installed packages via entry points, merged together, and saved to a user config file
2. **Subsequent loads**: Settings are loaded directly from the cached file (much faster)
3. **Package changes**: When packages are installed/removed, settings are re-discovered and merged while preserving your customizations
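The load flow described above can be sketched as a cache-or-discover helper (illustrative only: `load_settings` and `discover` are hypothetical names, and JSON is used here to keep the sketch dependency-free, whereas ndev-settings itself stores YAML):

```python
import json
from pathlib import Path

def load_settings(cache_path: Path, discover):
    """Return cached settings if present; otherwise discover, merge, and cache."""
    if cache_path.exists():
        # Fast path: subsequent loads read the merged cache directly
        return json.loads(cache_path.read_text())
    # First load: gather settings from all installed packages, then persist
    settings = discover()
    cache_path.parent.mkdir(parents=True, exist_ok=True)
    cache_path.write_text(json.dumps(settings))
    return settings
```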
**Storage location**: Settings are stored in a platform-appropriate config directory:
- **Windows**: `%LOCALAPPDATA%\ndev-settings\settings.yaml`
- **macOS**: `~/Library/Application Support/ndev-settings/settings.yaml`
- **Linux**: `~/.config/ndev-settings/settings.yaml`
**Clearing the cache**: To force re-discovery of settings (e.g., after manual edits to package YAML files):
```python
from ndev_settings import clear_settings
clear_settings() # Deletes cached settings, next load will re-discover
```
## Pre-commit hook
You can use the `reset-settings-values` pre-commit hook to reset all settings values
to their defaults before committing. To do so, add the following to your
`.pre-commit-config.yaml`:
```yaml
- repo: https://github.com/ndev-kit/ndev-settings
  rev: v0.3.0
  hooks:
    - id: reset-settings-values
```
## Contributing
Contributions are very welcome. Tests can be run with [tox], please ensure
the coverage at least stays the same before you submit a pull request.
## License
Distributed under the terms of the [BSD-3] license,
"ndev-settings" is free and open source software
## Issues
If you encounter any problems, please [file an issue] along with a detailed description.
[napari]: https://github.com/napari/napari
[copier]: https://copier.readthedocs.io/en/stable/
[BSD-3]: http://opensource.org/licenses/BSD-3-Clause
[napari-plugin-template]: https://github.com/napari/napari-plugin-template
[file an issue]: https://github.com/ndev-kit/ndev-settings/issues
[tox]: https://tox.readthedocs.io/en/latest/
[pip]: https://pypi.org/project/pip/
[PyPI]: https://pypi.org/
| text/markdown | Tim Monko | timmonko@gmail.com | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Framework :: napari",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Image Processing"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"appdirs",
"magicgui",
"magic-class",
"pyyaml",
"napari[pyqt6]; extra == \"all\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/ndev-kit/ndev-settings/issues",
"Documentation, https://github.com/ndev-kit/ndev-settings#README.md",
"Source Code, https://github.com/ndev-kit/ndev-settings",
"User Support, https://github.com/ndev-kit/ndev-settings/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:51:11.361293 | ndev_settings-0.4.1.tar.gz | 67,544 | 14/5a/424a8906bfeeb7c666268a0523f9a82e42b5a224fea49d6e6c329b0e0fea/ndev_settings-0.4.1.tar.gz | source | sdist | null | false | b86e31b5040297e530441ce9489f4869 | f65ddc315d0ef7929515459254bee571e542dd265b7747315909b238b27af6ff | 145a424a8906bfeeb7c666268a0523f9a82e42b5a224fea49d6e6c329b0e0fea | BSD-3-Clause | [
"LICENSE"
] | 337 |
2.1 | bequest-colorize | 1.0.2 | Beautiful terminal colors with simple API | # 🎨 bequest-colorize
Beautiful terminal colors with a clean, simple API. Zero dependencies.
## 📦 Installation
```bash
pip install bequest-colorize
```
| text/markdown | Bequest | bequestsupport@gmail.com | null | null | null | color, terminal, console, pretty, colors, ansi | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Terminals",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Operating System :: OS Independent"
] | [] | https://pypi.org/project/bequest-colorize | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.0 | 2026-02-20T18:51:06.080973 | bequest_colorize-1.0.2-py3-none-any.whl | 2,708 | b5/82/48818afa73193cc213ef4163dd8b1a7251d462110876552774cecc6a271b/bequest_colorize-1.0.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 2f701154be68c7f379bfcd5ac48d7f4c | 5b734720f78235cf4466cfe465957f0686f0104cc9b977db3d3c1a0d39e91050 | b58248818afa73193cc213ef4163dd8b1a7251d462110876552774cecc6a271b | null | [] | 54 |
2.4 | dagit | 1.12.15 | Web UI for dagster. | =================
Dagster UI
=================
Usage
~~~~~
E.g., in dagster_examples:
.. code-block:: sh
   dagit -p 3333
Running the dev UI:
.. code-block:: sh
   NEXT_PUBLIC_BACKEND_ORIGIN="http://localhost:3333" yarn start
| text/x-rst | null | Dagster Labs <hello@dagsterlabs.com> | null | null | null | null | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"dagster-webserver==1.12.15",
"dagster-webserver[notebook]==1.12.15; extra == \"notebook\"",
"dagster-webserver[test]==1.12.15; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/dagster-io/dagster"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:50:55.747508 | dagit-1.12.15.tar.gz | 5,932 | a4/8e/1528e6f5e131766d6c541609ba2d2dd7d2ed747621c45e5834c063b66e65/dagit-1.12.15.tar.gz | source | sdist | null | false | a0fa0012764276f28e39624206cbbe51 | 6fed5c0b13a447cb194e01a9afcc0018212bee6d6462fe08c9cc25667c78c808 | a48e1528e6f5e131766d6c541609ba2d2dd7d2ed747621c45e5834c063b66e65 | Apache-2.0 | [
"LICENSE"
] | 777 |
2.4 | dagster | 1.12.15 | Dagster is an orchestration platform for the development, production, and observation of data assets. | <div align="center">
<!-- Note: Do not try adding the dark mode version here with the `picture` element, it will break formatting in PyPI -->
<a target="_blank" href="https://dagster.io" style="background:none">
<img alt="dagster logo" src="https://raw.githubusercontent.com/dagster-io/dagster/master/.github/dagster-readme-header.svg" width="auto" height="100%">
</a>
<a target="_blank" href="https://github.com/dagster-io/dagster" style="background:none">
<img src="https://img.shields.io/github/stars/dagster-io/dagster?labelColor=4F43DD&color=163B36&logo=github">
</a>
<a target="_blank" href="https://github.com/dagster-io/dagster/blob/master/LICENSE" style="background:none">
<img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg?label=license&labelColor=4F43DD&color=163B36">
</a>
<a target="_blank" href="https://pypi.org/project/dagster/" style="background:none">
<img src="https://img.shields.io/pypi/v/dagster?labelColor=4F43DD&color=163B36">
</a>
<a target="_blank" href="https://pypi.org/project/dagster/" style="background:none">
<img src="https://img.shields.io/pypi/pyversions/dagster?labelColor=4F43DD&color=163B36">
</a>
<a target="_blank" href="https://twitter.com/dagster" style="background:none">
<img src="https://img.shields.io/badge/twitter-dagster-blue.svg?labelColor=4F43DD&color=163B36&logo=twitter" />
</a>
<a target="_blank" href="https://dagster.io/slack" style="background:none">
<img src="https://img.shields.io/badge/slack-dagster-blue.svg?labelColor=4F43DD&color=163B36&logo=slack" />
</a>
<a target="_blank" href="https://linkedin.com/showcase/dagster" style="background:none">
<img src="https://img.shields.io/badge/linkedin-dagster-blue.svg?labelColor=4F43DD&color=163B36&logo=linkedin" />
</a>
</div>
**Dagster is a cloud-native data pipeline orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability.**
It is designed for **developing and maintaining data assets**, such as tables, data sets, machine learning models, and reports.
With Dagster, you declare—as Python functions—the data assets that you want to build. Dagster then helps you run your functions at the right time and keep your assets up-to-date.
Here is an example of a graph of three assets defined in Python:
```python
import dagster as dg
import pandas as pd
from sklearn.linear_model import LinearRegression


@dg.asset
def country_populations() -> pd.DataFrame:
    df = pd.read_html("https://tinyurl.com/mry64ebh")[0]
    df.columns = ["country", "pop2022", "pop2023", "change", "continent", "region"]
    df["change"] = df["change"].str.rstrip("%").astype("float")
    return df


@dg.asset
def continent_change_model(country_populations: pd.DataFrame) -> LinearRegression:
    data = country_populations.dropna(subset=["change"])
    return LinearRegression().fit(pd.get_dummies(data[["continent"]]), data["change"])


@dg.asset
def continent_stats(country_populations: pd.DataFrame, continent_change_model: LinearRegression) -> pd.DataFrame:
    result = country_populations.groupby("continent").sum()
    result["pop_change_factor"] = continent_change_model.coef_
    return result
```
The graph loaded into Dagster's web UI:
<p align="center">
<img width="100%" alt="An example asset graph as rendered in the Dagster UI" src="https://raw.githubusercontent.com/dagster-io/dagster/master/.github/example-lineage.png">
</p>
Dagster is built to be used at every stage of the data development lifecycle - local development, unit tests, integration tests, staging environments, all the way up to production.
## Quick Start:
If you're new to Dagster, we recommend checking out the [docs](https://docs.dagster.io) or following the hands-on [tutorial](https://docs.dagster.io/etl-pipeline-tutorial/).
Dagster is available on PyPI and officially supports Python 3.10 through Python 3.14.
```bash
pip install dagster dagster-webserver
```
This installs two packages:
- `dagster`: The core programming model.
- `dagster-webserver`: The server that hosts Dagster's web UI for developing and operating Dagster jobs and assets.
## Documentation
You can find the full Dagster documentation [here](https://docs.dagster.io), including the [Quickstart guide](https://docs.dagster.io/getting-started/quickstart).
<hr/>
## Key Features:
<p align="center">
<img width="100%" alt="image" src="https://raw.githubusercontent.com/dagster-io/dagster/master/.github/key-features-cards.svg">
</p>
### Dagster as a productivity platform
Identify the key assets you need to create using a declarative approach, or you can focus on running basic tasks. Embrace CI/CD best practices from the get-go: build reusable components, spot data quality issues, and flag bugs early.
### Dagster as a robust orchestration engine
Put your pipelines into production with a robust multi-tenant, multi-tool engine that scales technically and organizationally.
### Dagster as a unified control plane
Maintain control over your data as the complexity scales. Centralize your metadata in one tool with built-in observability, diagnostics, cataloging, and lineage. Spot any issues and identify performance improvement opportunities.
<hr />
## Master the Modern Data Stack with integrations
Dagster provides a growing library of integrations for today’s most popular data tools. Integrate with the tools you already use, and deploy to your infrastructure.
<br/>
<p align="center">
<a target="_blank" href="https://dagster.io/integrations" style="background:none">
<img width="100%" alt="image" src="https://raw.githubusercontent.com/dagster-io/dagster/master/.github/integrations-bar-for-readme.png">
</a>
</p>
## Community
Connect with thousands of other data practitioners building with Dagster. Share knowledge, get help,
and contribute to the open-source project. To see featured material and upcoming events, check out
our [Dagster Community](https://dagster.io/community) page.
Join our community here:
- 🌟 [Star us on GitHub](https://github.com/dagster-io/dagster)
- 📥 [Subscribe to our Newsletter](https://dagster.io/newsletter-signup)
- 🐦 [Follow us on Twitter](https://twitter.com/dagster)
- 🕴️ [Follow us on LinkedIn](https://www.linkedin.com/company/dagsterlabs/)
- 📺 [Subscribe to our YouTube channel](https://www.youtube.com/@dagsterio)
- 📚 [Read our blog posts](https://dagster.io/blog)
- 👋 [Join us on Slack](https://dagster.io/slack)
- 🗃 [Browse Slack archives](https://discuss.dagster.io)
- ✏️ [Start a GitHub Discussion](https://github.com/dagster-io/dagster/discussions)
## Contributing
For details on contributing or running the project for development, check out our [contributing
guide](https://docs.dagster.io/about/contributing).
## License
Dagster is [Apache 2.0 licensed](https://github.com/dagster-io/dagster/blob/master/LICENSE).
| text/markdown | null | Dagster Labs <hello@dagsterlabs.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: System :: Monitoring",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Operating System :: OS Independent"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"click<9.0,>=5.0",
"coloredlogs<=14.0,>=6.1",
"Jinja2",
"alembic!=1.11.0,!=1.6.3,!=1.7.0,>=1.2.1",
"grpcio>=1.66.2; python_version >= \"3.13\"",
"grpcio>=1.44.0; python_version < \"3.13\"",
"grpcio-health-checking>=1.66.2; python_version >= \"3.13\"",
"grpcio-health-checking>=1.44.0; python_version < \"3.13\"",
"protobuf<7,>=3.20.0; python_version < \"3.11\"",
"protobuf<7,>=4; python_version >= \"3.11\"",
"python-dotenv",
"pytz",
"requests",
"setuptools<82",
"six",
"tabulate",
"tomli<3",
"tqdm<5",
"tzdata",
"structlog",
"sqlalchemy<3,>=1.0",
"toposort>=1.0",
"watchdog<7,>=0.8.3",
"psutil>=1.0; platform_system == \"Windows\"",
"pywin32!=226; platform_system == \"Windows\"",
"docstring-parser",
"universal_pathlib; python_version < \"3.12\"",
"universal_pathlib>=0.2.0; python_version >= \"3.12\"",
"rich",
"filelock",
"dagster-pipes==1.12.15",
"dagster-shared==1.12.15",
"antlr4-python3-runtime",
"docker; extra == \"docker\"",
"docker; extra == \"test\"",
"grpcio-tools>=1.66.2; python_version >= \"3.13\" and extra == \"test\"",
"grpcio-tools>=1.44.0; python_version < \"3.13\" and extra == \"test\"",
"mypy-protobuf; extra == \"test\"",
"objgraph; extra == \"test\"",
"pytest-cov==5.0.0; extra == \"test\"",
"pytest-mock==3.14.0; extra == \"test\"",
"pytest-xdist==3.6.1; extra == \"test\"",
"pytest>=8; extra == \"test\"",
"pytest-asyncio; extra == \"test\"",
"pytest-timeout; extra == \"test\"",
"responses<=0.23.1; extra == \"test\"",
"syrupy>=4.0.0; extra == \"test\"",
"tox>=4; extra == \"test\"",
"morefs[asynclocal]; extra == \"test\"",
"fsspec<2024.5.0; extra == \"test\"",
"rapidfuzz; extra == \"test\"",
"flaky; extra == \"test\"",
"psutil; extra == \"test\"",
"ruff==0.15.0; extra == \"test\"",
"tomlkit; extra == \"test-components\"",
"jsonschema; extra == \"test-components\"",
"pandas<3.0.0; extra == \"test-components\"",
"duckdb; extra == \"test-components\"",
"mypy==1.8.0; extra == \"mypy\"",
"pyright==1.1.379; extra == \"pyright\"",
"pandas-stubs; extra == \"pyright\"",
"types-backports; extra == \"pyright\"",
"types-certifi; extra == \"pyright\"",
"types-chardet; extra == \"pyright\"",
"types-cryptography; extra == \"pyright\"",
"types-mock; extra == \"pyright\"",
"types-paramiko; extra == \"pyright\"",
"types-pyOpenSSL; extra == \"pyright\"",
"types-python-dateutil~=2.9.0.20240316; extra == \"pyright\"",
"types-PyYAML; extra == \"pyright\"",
"types-pytz; extra == \"pyright\"",
"types-requests; extra == \"pyright\"",
"types-simplejson; extra == \"pyright\"",
"types-six; extra == \"pyright\"",
"types-tabulate; extra == \"pyright\"",
"types-tzlocal; extra == \"pyright\"",
"types-toml; extra == \"pyright\"",
"ruff==0.15.0; extra == \"ruff\""
] | [] | [] | [] | [
"Homepage, https://dagster.io",
"GitHub, https://github.com/dagster-io/dagster",
"Documentation, https://docs.dagster.io",
"Changelog, https://github.com/dagster-io/dagster/releases",
"Issue Tracker, https://github.com/dagster-io/dagster/issues",
"Twitter, https://twitter.com/dagster",
"YouTube, https://www.youtube.com/@dagsterio",
"Slack, https://dagster.io/slack",
"Blog, https://dagster.io/blog",
"Newsletter, https://dagster.io/newsletter-signup"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:50:49.373332 | dagster-1.12.15.tar.gz | 1,569,059 | 84/6b/5f1024c3e3583ba78f998fc54143184b8efbfd061c41a47901947e31bb2e/dagster-1.12.15.tar.gz | source | sdist | null | false | 8dd3048b9ad572897586f74e7797427c | f8d23466f00edba52f868abf6773de737a7ef121182954d3fbc54bd3b72e5212 | 846b5f1024c3e3583ba78f998fc54143184b8efbfd061c41a47901947e31bb2e | Apache-2.0 | [
"LICENSE",
"COPYING"
] | 14,209 |
2.4 | webu | 0.7.3 | Web Utils for browsing and scraping | # WebU
Web Utils for browsing and scraping.

## Install
```sh
pip install webu --upgrade
```
## Usage
Run example:
```sh
python example.py
```
See: [example.py](https://github.com/Hansimov/webu/blob/main/example.py)
| text/markdown | Hansimov | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"tclogger",
"DrissionPage",
"pyvirtualdisplay",
"beautifulsoup4",
"requests",
"netifaces",
"fastapi",
"uvicorn",
"numpy",
"playwright",
"psutil"
] | [] | [] | [] | [
"Homepage, https://github.com/Hansimov/webu",
"Issues, https://github.com/Hansimov/webu/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-20T18:49:56.206249 | webu-0.7.3.tar.gz | 90,376 | 07/f5/59ef49964f3c3f6021bf6ad24de42f39d55aeb25008af63535f5faa2d75a/webu-0.7.3.tar.gz | source | sdist | null | false | a4a2de34a9b4bc7d45ec3ad8f90e60bc | 4a6d93ba15b3b2127b1e03b8c58b9e29d58e3572507c0cd2712f0307f12fdde0 | 07f559ef49964f3c3f6021bf6ad24de42f39d55aeb25008af63535f5faa2d75a | MIT | [
"LICENSE"
] | 168 |
2.3 | radar-mapping-api | 0.2.0 | A Python client for the Radar.io geocoding, mapping, and geolocation API | # Modern Python Client for Radar.io Geocoding API
[](https://github.com/iloveitaly/radar-mapping-api/releases)
[](https://pepy.tech/project/radar-mapping-api)

[](https://opensource.org/licenses/MIT)
A Python client for the [Radar.io](https://radar.com) geocoding, mapping, and geolocation APIs.
## Why This Library?
Radar's [official Python SDK](https://github.com/radarlabs/radar-sdk-python) hasn't been updated in several years and doesn't include support for newer API endpoints.
I built this to solve a practical problem: I needed a way to interact with Radar's geocoding APIs with type hints and support for their current endpoints. This library provides that.
> [!CAUTION]
> **Pricing Alert for Startups**: Radar offers a free tier, but pricing jumps from free to $20,000/year with no incremental options in between, even when working with their startup sales team. If you're building something that will scale beyond the free tier limits, consider whether this pricing structure fits your growth trajectory.
## Installation
```bash
uv add radar-mapping-api
```
For optional Sentry integration:
```bash
uv add radar-mapping-api[sentry]
```
## Usage
### Basic Setup
```python
from radar_mapping_api import RadarClient
client = RadarClient(api_key="your_radar_api_key")
```
### Forward Geocoding
Convert an address to coordinates:
```python
result = client.forward_geocode(
query="841 Broadway, New York, NY",
country="US"
)
if result.addresses:
address = result.addresses[0]
print(f"Latitude: {address.latitude}")
print(f"Longitude: {address.longitude}")
print(f"Formatted: {address.formattedAddress}")
```
### Reverse Geocoding
Convert coordinates to an address:
```python
result = client.reverse_geocode(
coordinates="40.7128,-74.0060",
layers="postalCode,locality,state"
)
if result.addresses:
address = result.addresses[0]
print(f"City: {address.city}")
print(f"State: {address.stateCode}")
print(f"Postal Code: {address.postalCode}")
```
### Place Search
Search for places near a location:
```python
result = client.search_places(
near="40.7128,-74.0060",
categories="coffee-shop",
radius=1000,
limit=10
)
for place in result.places:
print(f"{place.name} - {', '.join(place.categories)}")
```
### Address Autocomplete
Get autocomplete suggestions for partial addresses:
```python
result = client.autocomplete(
query="841 Broad",
country_code="US",
limit=5
)
for address in result.addresses:
print(address.formattedAddress)
```
### Address Validation
Validate and normalize a structured address:
```python
result = client.validate_address(
address_label="841 Broadway",
city="New York",
state_code="NY",
postal_code="10003",
country_code="US"
)
if result.address:
print(f"Validated: {result.address.formattedAddress}")
```
### Helper Functions
The library includes helper functions for common operations:
```python
from radar_mapping_api import geocode_postal_code, geocode_coordinates
# Geocode a postal code
result = geocode_postal_code(
client,
postal_code="10007",
country="US"
)
if result:
print(f"Coordinates: {result.lat}, {result.lon}")
print(f"City: {result.city}")
# Reverse geocode coordinates
result = geocode_coordinates(
client,
lat=40.7128,
lon=-74.0060
)
if result:
print(f"Postal Code: {result.postal_code}")
print(f"State: {result.state_code}")
```
## Features
- Type-safe with Pydantic models
- Automatic retry logic with exponential backoff (up to 6 attempts)
- Does not retry on HTTP 402 (payment required) errors
- Optional Sentry integration for logging warnings
- Uses httpx for async-capable HTTP requests
- Comprehensive test suite
## Error Handling
The client includes retry logic for failed requests:
```python
import httpx
try:
result = client.forward_geocode(query="invalid address")
except httpx.HTTPError as e:
print(f"Request failed: {e}")
```
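The retry behavior described under Features can be sketched roughly as follows. This is a simplified stand-in in plain Python, not the library's actual implementation (which is built on tenacity); `PaymentRequiredError` is a hypothetical name used here only to model an HTTP 402 response:

```python
import time


class PaymentRequiredError(Exception):
    """Stand-in for an HTTP 402 response (hypothetical name, illustration only)."""


def retry_with_backoff(fn, max_attempts=6, base_delay=0.5, sleep=time.sleep):
    """Call fn, retrying with exponential backoff -- but never on HTTP 402."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except PaymentRequiredError:
            raise  # payment required: retrying cannot succeed, fail fast
        except Exception:
            if attempt == max_attempts:
                raise
            # waits of 0.5s, 1s, 2s, 4s, 8s between the six attempts
            sleep(base_delay * 2 ** (attempt - 1))
```

The key design point is the early re-raise: transient network failures are worth retrying, but a 402 reflects account state, so repeating the request only burns time and quota.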
## API Reference
### RadarClient Methods
- `forward_geocode(query, layers=None, country=None, lang=None)` - Convert address to coordinates
- `reverse_geocode(coordinates, layers=None, lang=None)` - Convert coordinates to address
- `search_places(near=None, chains=None, categories=None, iata_code=None, ...)` - Search for places
- `autocomplete(query, near=None, layers=None, limit=None, ...)` - Autocomplete addresses
- `validate_address(address_label, city=None, state_code=None, ...)` - Validate addresses
### Models
All API responses are returned as Pydantic models:
- `GeocodeResponse` - Forward/reverse geocoding response
- `SearchPlacesResponse` - Place search response
- `ValidateAddressResponse` - Address validation response
- `GeocodeResult` - Simplified geocoding result
- `Address` - Address information
- `Place` - Place information
## Development
```bash
# Install with development dependencies
uv sync
# Run tests
uv run pytest
# Run linting
uv run ruff check
# Type checking
uv run pyright
```
## Links
- [Radar.io API Documentation](https://docs.radar.com/api)
- [GitHub Repository](https://github.com/iloveitaly/radar-mapping-api)
## License
See LICENSE file for details.
---
*This project was created from [iloveitaly/python-package-template](https://github.com/iloveitaly/python-package-template)*
| text/markdown | Michael Bianco | Michael Bianco <mike@mikebian.co> | null | null | null | radar, geocoding, geolocation, mapping, reverse-geocoding, places | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.28.1",
"pydantic>=2.12.3",
"tenacity>=9.1.2",
"sentry-sdk>=2.0.0; extra == \"sentry\""
] | [] | [] | [] | [
"Repository, https://github.com/iloveitaly/radar-mapping-api"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T18:49:27.579366 | radar_mapping_api-0.2.0.tar.gz | 7,550 | 73/c2/c0acbdef714dbe5339a1520222181018357e7b216630b12b26a7fa4a28ab/radar_mapping_api-0.2.0.tar.gz | source | sdist | null | false | c86d31ac8cedd3eae71c097c74af48d5 | 9d4d422d7c650557d4bef9fb33ecf9e6d3d71ee5e5badee8fb197d008db66000 | 73c2c0acbdef714dbe5339a1520222181018357e7b216630b12b26a7fa4a28ab | null | [] | 168 |
2.4 | pyoptix-contrib | 0.1.1 | Community-maintained Python bindings for NVIDIA OptiX (fork of otk-pyoptix with OptiX 9.1 support) | # pyoptix-contrib
Community-maintained Python bindings for [NVIDIA OptiX](https://developer.nvidia.com/optix), forked from [NVIDIA/otk-pyoptix](https://github.com/NVIDIA/otk-pyoptix).
This fork adds support for OptiX 9.1 features including cluster acceleration structures and cooperative vectors.
## Requirements
- [OptiX SDK](https://developer.nvidia.com/designworks/optix/download) 7.6 or newer (9.1+ for cluster accel / coop vec features)
- [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads) 12.6 or newer
- [CMake](https://cmake.org/)
- A C++17 compiler
## Installation
```bash
export OptiX_INSTALL_DIR=/path/to/OptiX-SDK
pip install pyoptix-contrib
```
On Windows (PowerShell):
```powershell
$env:OptiX_INSTALL_DIR = 'C:\ProgramData\NVIDIA Corporation\OptiX SDK 9.1.0'
pip install pyoptix-contrib
```
The package builds from source via CMake, so the OptiX SDK must be available at install time.
For additional CMake arguments, use the `PYOPTIX_CMAKE_ARGS` environment variable.
## Usage
```python
import optix
# Create a device context
ctx = optix.deviceContextCreate(cuda_context, optix.DeviceContextOptions())
# Query device properties
rtcore_version = optix.deviceContextGetProperty(
ctx, optix.DeviceProperty.DEVICE_PROPERTY_RTCORE_VERSION
)
```
See the [examples](https://github.com/brendancol/otk-pyoptix/tree/master/examples) directory for complete samples including triangle rendering, curves, denoising, and motion blur.
## What's New in This Fork
- **OptiX 9.1 support**: Conditional compilation via `IF_OPTIX91` macro
- **Cluster acceleration structures**: Enums, structs, and host functions (`clusterAccelComputeMemoryUsage`, `clusterAccelBuild`)
- **Cooperative vectors**: Element types, matrix layouts, and description structs
- **New primitive types**: ROCAPS curve variants and associated flags
- **`allowClusteredGeometry`** pipeline compile option
- **New device properties**: `COOP_VEC`, `CLUSTER_ACCEL`, max cluster vertices/triangles/SBT index/clusters-per-GAS
## Windows: CUDA DLL Loading (Python 3.8+)
Python 3.8+ on Windows no longer uses `PATH` to find DLLs. PyOptiX will auto-detect CUDA from the `CUDA_PATH` environment variable. If auto-detection fails, set `CUDA_BIN_DIR`:
```powershell
$env:CUDA_BIN_DIR = 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.9\bin'
```
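The lookup order described above (prefer `CUDA_BIN_DIR`, fall back to `CUDA_PATH\bin`) can be sketched like this. This is an assumption-level illustration of the mechanism, not the package's actual loader code; `add_cuda_dll_dir` is a hypothetical helper name:

```python
import os


def add_cuda_dll_dir():
    """Resolve the CUDA DLL directory the way the README describes (sketch).

    Prefers CUDA_BIN_DIR, falls back to CUDA_PATH/bin, and registers the
    directory with os.add_dll_directory() on Windows (Python 3.8+).
    """
    bin_dir = os.environ.get("CUDA_BIN_DIR")
    if bin_dir is None:
        cuda_path = os.environ.get("CUDA_PATH")
        if cuda_path:
            bin_dir = os.path.join(cuda_path, "bin")
    # os.add_dll_directory only exists on Windows; PATH is no longer searched
    # for extension-module dependencies there since Python 3.8.
    if bin_dir and os.path.isdir(bin_dir) and hasattr(os, "add_dll_directory"):
        os.add_dll_directory(bin_dir)
    return bin_dir
```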
## License
BSD 3-Clause. See [LICENSE.txt](https://github.com/brendancol/otk-pyoptix/blob/master/LICENSE.txt).
## Acknowledgments
Original work by Keith Morley and NVIDIA Corporation ([NVIDIA/otk-pyoptix](https://github.com/NVIDIA/otk-pyoptix)).
| text/markdown | Brendan Collins | Keith Morley <kmorley@nvidia.com> | null | null | null | optix, nvidia, ray-tracing, gpu, cuda, pybind11 | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: C++",
"Topic :: Scientific/Engineering",
"Topic :: Multimedia :: Graphics :: 3D Rendering",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"pytest; extra == \"test\"",
"cupy; extra == \"test\"",
"numpy; extra == \"test\"",
"Pillow; extra == \"test\"",
"cuda-python; extra == \"cuda\""
] | [] | [] | [] | [
"Homepage, https://github.com/brendancol/otk-pyoptix",
"Repository, https://github.com/brendancol/otk-pyoptix",
"Upstream (NVIDIA), https://github.com/NVIDIA/otk-pyoptix"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-20T18:48:59.370661 | pyoptix_contrib-0.1.1-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl | 654,079 | 04/4f/4afd38d64f7043613dd60de5191b9ffedbcac0292f48ad3382b674070035/pyoptix_contrib-0.1.1-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl | cp314 | bdist_wheel | null | false | 07abd95164e0cd3a031fdcf8c3d2633b | 68ceab58242a846fe7e9de66be3fb0cc7ee2b1d901e88644649ec4b5062d4a8c | 044f4afd38d64f7043613dd60de5191b9ffedbcac0292f48ad3382b674070035 | BSD-3-Clause | [] | 415 |
2.4 | nrel-routee-compass | 0.18.0 | An eco-routing tool built upon RouteE-Powertrain | # <img src="docs/images/routeelogo.png" alt="Routee Compass" width="100"/>
<div align="left">
<img src="https://img.shields.io/badge/python-3.10%20%7C%203.11%20%7C%203.12%20%7C%203.13-blue"/>
<a href="https://anaconda.org/conda-forge/nrel.routee.compass">
<img src="https://img.shields.io/conda/v/conda-forge/nrel.routee.compass" alt="Conda Latest Release"/>
</a>
<a href="https://pypi.org/project/nrel.routee.compass/">
<img src="https://img.shields.io/pypi/v/nrel.routee.compass" alt="PyPi Latest Release"/>
</a>
<a href="https://crates.io/crates/routee-compass">
<img src="https://img.shields.io/crates/v/routee-compass" alt="Crates.io Latest Release"/>
</a>
</div>
[](https://github.com/NREL/routee-compass/actions/workflows/python-release.yaml)
RouteE Compass is an energy-aware routing engine for the RouteE ecosystem of software tools with the following key features:
- Dynamic and extensible search objectives that allow customized blends of distance, time, cost, and energy (via RouteE Powertrain) at query-time
- Core engine written in Rust for improved runtimes, parallel query execution, and the ability to load nation-sized road networks into memory
- Rust, HTTP, and Python APIs for integration into different research pipelines and other software
RouteE Compass is part of the [RouteE](https://www.nrel.gov/transportation/route-energy-prediction-model.html) family of mobility tools created at the National Laboratory of the Rockies and uses [RouteE Powertrain](https://github.com/NREL/routee-powertrain) to predict vehicle energy during the search.
## Installation
See the [installation](https://nrel.github.io/routee-compass/installation.html) guide for installing RouteE Compass
## Usage
See the [documentation](https://nrel.github.io/routee-compass/) for more information.
## Contributors
RouteE Compass is currently maintained by Nick Reinicke ([@nreinicke](https://github.com/nreinicke)) and Rob Fitzgerald ([@robfitzgerald](https://github.com/robfitzgerald)).
If you're interested in contributing, please check out the [contributing](https://nrel.github.io/routee-compass/developers/contributing.html) guide.
## License
Copyright 2023 Alliance for Energy Innovation, LLC
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| text/markdown; charset=UTF-8; variant=GFM | National Laboratory of the Rockies | null | null | null | BSD 3-Clause License Copyright (c) 2023, Alliance for Energy Innovation, LLC | eco routing | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Topic :: Scientific/Engineering"
] | [] | https://nrel.github.io/routee-compass | null | <3.14,>=3.10 | [] | [] | [] | [
"tomlkit<1.0,>=0.11.0",
"pytest<9.0,>=8.0; extra == \"dev\"",
"maturin<2.0,>=1.0; extra == \"dev\"",
"jupyter-book<2.0,>=1.0; extra == \"dev\"",
"ruff<1.0,>=0.1.0; extra == \"dev\"",
"sphinx-book-theme<2.0,>=1.0.0; extra == \"dev\"",
"mypy<2.0,>=1.0.0; extra == \"dev\"",
"jupyterlab<5.0,>=4.0.0; extra == \"dev\"",
"boxsdk<4.0,>=3.0.0; extra == \"dev\"",
"types-requests<3.0,>=2.28.0; extra == \"dev\"",
"osmnx<3.0,>=2.0.5; extra == \"osm\"",
"rio-vrt<1.0,>=0.3.1; extra == \"osm\"",
"mapclassify<3.0,>=2.8.1; extra == \"osm\"",
"requests<3.0,>=2.28.0; extra == \"osm\"",
"geopandas<2.0,>=1.1.1; extra == \"osm\"",
"shapely<3.0,>=2.0.0; extra == \"osm\"",
"networkx<4.0,>=3.0; extra == \"osm\"",
"folium<1.0,>=0.14.0; extra == \"osm\"",
"pandas<3.0,>=2.0.0; extra == \"osm\"",
"rasterio<2.0,>=1.3.0; extra == \"osm\"",
"matplotlib<4.0,>=3.7.0; extra == \"osm\"",
"numpy<3.0,>=1.26; extra == \"osm\"",
"seaborn<1.0,>=0.12.0; extra == \"osm\"",
"nrel-routee-compass[osm]; extra == \"all\"",
"nrel-routee-compass[dev]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/NREL/routee-compass",
"Documentation, https://nrel.github.io/routee-compass"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T18:48:44.140696 | nrel_routee_compass-0.18.0.tar.gz | 1,623,970 | b8/cc/86b1a17063d6ba85211efe6ac3b50cb5f7745f4afae3bad8f64236929a75/nrel_routee_compass-0.18.0.tar.gz | source | sdist | null | false | 6a047119d0fd5e4982dfc85a5760d15a | aba7a0b9ac1483cdf1a19d571dd64c7d2b6a70321b23a506e2003b219335e8d2 | b8cc86b1a17063d6ba85211efe6ac3b50cb5f7745f4afae3bad8f64236929a75 | null | [
"LICENSE.md"
] | 836 |
2.4 | agent-first-data | 0.4.1 | Agent-First Data (AFDATA) — suffix-driven output formatting and protocol templates for AI agents | # agent-first-data
**Agent-First Data (AFDATA)** — Suffix-driven output formatting and protocol templates for AI agents.
The field name is the schema: agents read `latency_ms` and know it means milliseconds, read `api_key_secret` and know to redact it; no external schema is needed.
## Installation
```bash
pip install agent-first-data
```
## Quick Example
A backup tool invoked from the CLI — flags, env vars, and config all use the same suffixes:
```bash
API_KEY_SECRET=sk-1234 cloudback --timeout-s 30 --max-file-size-bytes 10737418240 /data/backup.tar.gz
```
For CLI diagnostics, enable log categories explicitly:
```bash
--log startup,request,progress,retry,redirect
--verbose # shorthand for all categories
```
Without these flags, startup diagnostics should stay off by default.
The tool reads env vars, flags, and config — all with AFDATA suffixes — and can emit a startup diagnostic event:
```python
from agent_first_data import *
import os
startup = build_json(
"log",
{
"event": "startup",
"config": {"timeout_s": 30, "max_file_size_bytes": 10737418240},
"args": {"input_path": "/data/backup.tar.gz"},
"env": {"API_KEY_SECRET": os.environ.get("API_KEY_SECRET")},
},
trace=None,
)
```
Three output formats, same data:
```
JSON: {"code":"log","event":"startup","args":{"input_path":"/data/backup.tar.gz"},"config":{"max_file_size_bytes":10737418240,"timeout_s":30},"env":{"API_KEY_SECRET":"***"}}
YAML: code: "log"
event: "startup"
args:
input_path: "/data/backup.tar.gz"
config:
max_file_size: "10.0GB"
timeout: "30s"
env:
API_KEY: "***"
Plain: args.input_path=/data/backup.tar.gz code=log event=startup config.max_file_size=10.0GB config.timeout=30s env.API_KEY=***
```
`--timeout-s` → `timeout_s` → `timeout: 30s`. `API_KEY_SECRET` → `API_KEY: "***"`. The suffix is the schema.
## API Reference
Total: **12 public APIs and 1 type** + **AFDATA logging** (3 protocol builders + 3 output functions + 1 internal + 1 utility + 4 CLI helpers + `OutputFormat`)
### Protocol Builders (returns dict)
Build AFDATA protocol structures. Return dict objects for API responses.
```python
# Success (result)
build_json_ok(result: Any, trace: Any = None) -> dict
# Error (simple message)
build_json_error(message: str, trace: Any = None) -> dict
# Generic (any code + fields)
build_json(code: str, fields: Any, trace: Any = None) -> dict
```
**Use case:** API responses (frameworks like FastAPI automatically serialize)
**Example:**
```python
from agent_first_data import *
# Startup
startup = build_json(
"log",
{
"event": "startup",
"config": {"api_key_secret": "sk-123", "timeout_s": 30},
"args": {"config_path": "config.yml"},
"env": {"RUST_LOG": "info"},
},
trace=None,
)
# Success (always include trace)
response = build_json_ok(
{"user_id": 123},
trace={"duration_ms": 150, "source": "db"},
)
# Error
err = build_json_error("user not found", trace={"duration_ms": 5})
# Specific error code
not_found = build_json(
"not_found",
{"resource": "user", "id": 123},
trace={"duration_ms": 8},
)
```
### CLI/Log Output (returns str)
Format values for CLI output and logs. **All formats redact `_secret` fields.** YAML and Plain also strip suffixes from keys and format values for human readability.
```python
output_json(value: Any) -> str # Single-line JSON, original keys, for programs/logs
output_yaml(value: Any) -> str # Multi-line YAML, keys stripped, values formatted
output_plain(value: Any) -> str # Single-line logfmt, keys stripped, values formatted
```
**Example:**
```python
from agent_first_data import *
data = {
"user_id": 123,
"api_key_secret": "sk-1234567890abcdef",
"created_at_epoch_ms": 1738886400000,
"file_size_bytes": 5242880,
}
# JSON (secrets redacted, original keys, raw values)
print(output_json(data))
# {"api_key_secret":"***","created_at_epoch_ms":1738886400000,"file_size_bytes":5242880,"user_id":123}
# YAML (keys stripped, values formatted, secrets redacted)
print(output_yaml(data))
# ---
# api_key: "***"
# created_at: "2025-02-07T00:00:00.000Z"
# file_size: "5.0MB"
# user_id: 123
# Plain logfmt (keys stripped, values formatted, secrets redacted)
print(output_plain(data))
# api_key=*** created_at=2025-02-07T00:00:00.000Z file_size=5.0MB user_id=123
```
### Internal Tools
```python
internal_redact_secrets(value: Any) -> None # Manually redact secrets in-place
```
Most users don't need this. Output functions automatically protect secrets.
### Utility Functions
```python
parse_size(s: str) -> int | None # Parse "10M" → bytes
```
**Example:**
```python
from agent_first_data import *
assert parse_size("10M") == 10485760
assert parse_size("1.5K") == 1536
assert parse_size("512") == 512
```
### CLI Helpers (for tools built on AFDATA)
Shared helpers that prevent flag-parsing drift between CLI tools. Use these instead of reimplementing `--output` and `--log` handling in each tool.
```python
class OutputFormat(enum.Enum): # JSON="json", YAML="yaml", PLAIN="plain"
cli_parse_output(s: str) -> OutputFormat # Parse --output flag; raises ValueError on unknown
cli_parse_log_filters(entries: list[str]) -> list[str] # Normalize --log: trim, lowercase, dedup, remove empty
cli_output(value: Any, format: OutputFormat) -> str # Dispatch to output_json/yaml/plain
build_cli_error(message: str) -> dict # {code:"error", error_code:"invalid_request", retryable:False, trace:{duration_ms:0}}
```
**Canonical pattern** — parse all flags before doing work, emit JSONL errors to stdout:
```python
import sys
from agent_first_data import (
OutputFormat, cli_parse_output, cli_parse_log_filters,
cli_output, build_cli_error, output_json,
)
try:
fmt = cli_parse_output(args.output)
except ValueError as e:
print(output_json(build_cli_error(str(e))))
sys.exit(2)
log = cli_parse_log_filters(args.log.split(",") if args.log else [])
# ... do work ...
print(cli_output(result, fmt))
```
See `examples/agent_cli.py` for the complete working example (`pytest examples/agent_cli.py`).
## Usage Examples
### Example 1: REST API
```python
from agent_first_data import *
from fastapi import FastAPI
app = FastAPI()
@app.get("/users/{user_id}")
async def get_user(user_id: int):
response = build_json_ok(
{"user_id": user_id, "name": "alice"},
trace={"duration_ms": 150, "source": "db"},
)
# API returns raw JSON — no output processing, no key stripping
return response
```
### Example 2: CLI Tool (Complete Lifecycle)
```python
from agent_first_data import *
# 1. Startup
startup = build_json(
"log",
{
"event": "startup",
"config": {"api_key_secret": "sk-sensitive-key", "timeout_s": 30},
"args": {"input_path": "data.json"},
"env": {"RUST_LOG": "info"},
},
trace=None,
)
print(output_yaml(startup))
# ---
# code: "log"
# event: "startup"
# args:
# input_path: "data.json"
# config:
# api_key: "***"
# timeout: "30s"
# env:
# RUST_LOG: "info"
# 2. Progress
progress = build_json(
"progress",
{"current": 3, "total": 10, "message": "processing"},
trace={"duration_ms": 1500},
)
print(output_plain(progress))
# code=progress current=3 message=processing total=10 trace.duration=1.5s
# 3. Result
result = build_json_ok(
{
"records_processed": 10,
"file_size_bytes": 5242880,
"created_at_epoch_ms": 1738886400000,
},
trace={"duration_ms": 3500, "source": "file"},
)
print(output_yaml(result))
# ---
# code: "ok"
# result:
# created_at: "2025-02-07T00:00:00.000Z"
# file_size: "5.0MB"
# records_processed: 10
# trace:
# duration: "3.5s"
# source: "file"
```
### Example 3: JSONL Output
```python
from agent_first_data import *
result = build_json_ok(
{"status": "success"},
trace={"duration_ms": 250, "api_key_secret": "sk-123"},
)
# Print JSONL to stdout (secrets redacted, one JSON object per line)
# Channel policy: machine-readable protocol/log events must not use stderr.
print(output_json(result))
# {"code":"ok","result":{"status":"success"},"trace":{"api_key_secret":"***","duration_ms":250}}
```
## Complete Suffix Example
```python
from agent_first_data import *
data = {
"created_at_epoch_ms": 1738886400000,
"request_timeout_ms": 5000,
"cache_ttl_s": 3600,
"file_size_bytes": 5242880,
"payment_msats": 50000000,
"price_usd_cents": 9999,
"success_rate_percent": 95.5,
"api_key_secret": "sk-1234567890abcdef",
"user_name": "alice",
"count": 42,
}
# YAML output (keys stripped, values formatted, secrets redacted)
print(output_yaml(data))
# ---
# api_key: "***"
# cache_ttl: "3600s"
# count: 42
# created_at: "2025-02-07T00:00:00.000Z"
# file_size: "5.0MB"
# payment: "50000000msats"
# price: "$99.99"
# request_timeout: "5.0s"
# success_rate: "95.5%"
# user_name: "alice"
# Plain logfmt output (same transformations, single line)
print(output_plain(data))
# api_key=*** cache_ttl=3600s count=42 created_at=2025-02-07T00:00:00.000Z file_size=5.0MB payment=50000000msats price=$99.99 request_timeout=5.0s success_rate=95.5% user_name=alice
```
## AFDATA Logging
AFDATA-compliant structured logging via Python's `logging` module. Every log line is formatted using the library's own `output_json`/`output_plain`/`output_yaml` functions. Span fields are carried via `contextvars` (async-safe), automatically flattened into each log line.
### API
```python
from agent_first_data import init_logging_json, init_logging_plain, init_logging_yaml
from agent_first_data.afdata_logging import AfdataHandler, get_logger, span
# Convenience initializers — set up the root logger with AFDATA output to stdout
init_logging_json(level="INFO") # Single-line JSONL (secrets redacted, original keys)
init_logging_plain(level="INFO") # Single-line logfmt (keys stripped, values formatted)
init_logging_yaml(level="INFO") # Multi-line YAML (keys stripped, values formatted)
# Low-level — create a handler for custom logger stacks
AfdataHandler(format="json") # format: "json" | "plain" | "yaml"
# Logger with default fields (returns logging.LoggerAdapter)
get_logger(name, **fields)
# Span context manager — adds fields to all log events within the block
span(**fields)
```
### Setup
```python
from agent_first_data import init_logging_json, init_logging_plain, init_logging_yaml
# JSON output for production (one JSONL line per event, secrets redacted)
init_logging_json("INFO")
# Plain logfmt for development (keys stripped, values formatted)
init_logging_plain("DEBUG")
# YAML for detailed inspection (multi-line, keys stripped, values formatted)
init_logging_yaml("DEBUG")
```
### Log Output
Standard `logging` calls work unchanged. Output format depends on the init function used.
```python
import logging
logger = logging.getLogger("myapp")
logger.info("Server started")
# JSON: {"timestamp_epoch_ms":1739000000000,"message":"Server started","target":"myapp","code":"info"}
# Plain: code=info message="Server started" target=myapp timestamp_epoch_ms=1739000000000
# YAML: ---
# code: "info"
# message: "Server started"
# target: "myapp"
# timestamp_epoch_ms: 1739000000000
logger.warning("DNS lookup failed")
# JSON: {"timestamp_epoch_ms":...,"message":"DNS lookup failed","target":"myapp","code":"warn"}
```
### Span Support
Use the `span` context manager to add fields to all log events within the block. Spans nest and work with both sync and async code.
```python
from agent_first_data import span
with span(request_id="abc-123"):
logger.info("Processing")
# {"timestamp_epoch_ms":...,"message":"Processing","target":"myapp","request_id":"abc-123","code":"info"}
with span(step="validate"):
logger.info("Validating input")
# {"timestamp_epoch_ms":...,"message":"Validating input","target":"myapp","request_id":"abc-123","step":"validate","code":"info"}
```
### Logger with Default Fields
Use `get_logger` for per-component fields that appear on every log line:
```python
from agent_first_data import get_logger
logger = get_logger("myapp.auth", component="auth")
logger.info("Token verified")
# {"timestamp_epoch_ms":...,"message":"Token verified","target":"myapp.auth","component":"auth","code":"info"}
```
### Custom Code Override
The `code` field defaults to the log level. Override with an explicit field:
```python
from agent_first_data import get_logger
logger = get_logger("myapp")
logger.info("Server ready", extra={"code": "log", "event": "startup"})
# {"timestamp_epoch_ms":...,"message":"Server ready","target":"myapp","code":"log","event":"startup"}
```
### Output Fields
Every log line contains:
| Field | Type | Description |
|:------|:-----|:------------|
| `timestamp_epoch_ms` | number | Unix milliseconds |
| `message` | string | Log message |
| `target` | string | Logger name |
| `code` | string | Level (debug/info/warn/error) or explicit override |
| *span fields* | any | From `span()` context manager |
| *event fields* | any | From `extra=` or `get_logger` fields |
### Log Output Formats
All three formats use the library's own output functions, so AFDATA suffix processing applies to log fields too:
| Format | Function | Keys | Values | Use case |
|:-------|:---------|:-----|:-------|:---------|
| **JSON** | `init_logging_json` | original (with suffix) | raw | production, log aggregation |
| **Plain** | `init_logging_plain` | stripped | formatted | development, compact scanning |
| **YAML** | `init_logging_yaml` | stripped | formatted | debugging, detailed inspection |
All formats automatically redact `_secret` fields in log output.
## Output Formats
Three output formats for different use cases:
| Format | Structure | Keys | Values | Use case |
|:-------|:----------|:-----|:-------|:---------|
| **JSON** | single-line | original (with suffix) | raw | programs, logs |
| **YAML** | multi-line | stripped | formatted | human inspection |
| **Plain** | single-line logfmt | stripped | formatted | compact scanning |
All formats automatically redact `_secret` fields.
## Supported Suffixes
- **Duration**: `_ms`, `_s`, `_ns`, `_us`, `_minutes`, `_hours`, `_days`
- **Timestamps**: `_epoch_ms`, `_epoch_s`, `_epoch_ns`, `_rfc3339`
- **Size**: `_bytes` (auto-scales to KB/MB/GB/TB), `_size` (config input, pass through)
- **Currency**: `_msats`, `_sats`, `_btc`, `_usd_cents`, `_eur_cents`, `_jpy`, `_{code}_cents`
- **Other**: `_percent`, `_secret` (auto-redacted in all formats)
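To make the convention concrete, here is a toy renderer covering a handful of the suffixes above. It illustrates the idea only and is not the library's implementation (for instance, real `_bytes` handling auto-scales through KB/MB/GB/TB, while this sketch always emits MB):

```python
def render(key, value):
    """Toy AFDATA-style renderer: strip the suffix, format the value."""
    if key.endswith("_secret"):
        return key[: -len("_secret")], "***"            # always redact
    if key.endswith("_ms"):
        return key[: -len("_ms")], f"{value / 1000}s"   # milliseconds -> seconds
    if key.endswith("_bytes"):
        mb = value / (1024 * 1024)
        return key[: -len("_bytes")], f"{mb:.1f}MB"     # naive: always MB
    if key.endswith("_percent"):
        return key[: -len("_percent")], f"{value}%"
    return key, value                                   # no known suffix: pass through
```

Because the suffix travels with the field name, the renderer needs no side channel: the same dict is self-describing whether it arrives from a flag, an env var, or a config file.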
## Repository
This package is part of the [agent-first-data](https://github.com/cmnspore/agent-first-data) repository, which also contains:
- **`spec/`** — Full AFDATA specification with suffix definitions, protocol format rules, and cross-language test fixtures
- **`skills/`** — AI coding agent skill for working with AFDATA conventions
To run tests, clone the full repository (tests use shared cross-language fixtures from `spec/fixtures/`):
```bash
git clone https://github.com/cmnspore/agent-first-data
cd agent-first-data/python
python -m pytest
```
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Repository, https://github.com/cmnspore/agent-first-data"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-20T18:48:36.194683 | agent_first_data-0.4.1.tar.gz | 19,864 | bf/b6/5ad1227468f8b327cb4d0664fb1889285657fa45bff9ecd3428054588ec2/agent_first_data-0.4.1.tar.gz | source | sdist | null | false | 0396a910931c56f57303ef6f067a3e4a | 0196f2e235090ac327cc1f8a5caf7ca675f480c92ef97de6137645da642ea797 | bfb65ad1227468f8b327cb4d0664fb1889285657fa45bff9ecd3428054588ec2 | MIT | [] | 178 |
2.4 | keba-keenergy-api | 2.9.1 | A Python wrapper for the KEBA KeEnergy API used by the Web HMI. | # KEBA KeEnergy API
<!--start-home-->
A Python wrapper for the KEBA KeEnergy API used by the Web HMI.

[][pypi-version]
[][workflow-ci]
[pypi-version]: https://pypi.python.org/pypi/keba-keenergy-api
[workflow-ci]: https://github.com/superbox-dev/keba_keenergy_api/actions/workflows/ci.yml
<!--end-home-->
## Donation
<!--start-donation-->
I put a lot of time into this project. If you like it, you can support me with a donation.
[][kofi]
[kofi]: https://ko-fi.com/F2F0KXO6D
<!--end-donation-->
<!--start-home-->
## Getting started
```bash
pip install keba-keenergy-api
```
## Usage
```python
import asyncio

from keba_keenergy_api import KebaKeEnergyAPI
from keba_keenergy_api.constants import HeatCircuit
from keba_keenergy_api.constants import HeatCircuitOperatingMode


async def main():
    client = KebaKeEnergyAPI(
        host="ap4400.local",
        username="test",
        password="test",
        ssl=True,
        skip_ssl_verification=True,
    )

    # Get the current outdoor temperature
    outdoor_temperature = await client.system.get_outdoor_temperature()

    # Get the target temperature from heat circuit 2
    heat_circuit_temperature = await client.heat_circuit.get_target_temperature(
        position=2,
    )

    # Read multiple values in one request
    data = await client.read_data(
        request=[
            HeatCircuit.TARGET_TEMPERATURE,
            HeatCircuit.TARGET_TEMPERATURE_DAY,
        ],
        extra_attributes=True,
    )

    # Enable "day" mode for heat circuit 2
    await client.heat_circuit.set_operating_mode(
        mode=HeatCircuitOperatingMode.DAY.value,
        position=2,
    )

    # Write multiple values in one request
    await client.write_data(
        request={
            # Write the day temperature on positions 1 and 3 (position 2 unchanged)
            HeatCircuit.TARGET_TEMPERATURE_DAY: (20, None, 5),
            # Write the night temperature on position 1
            HeatCircuit.TARGET_TEMPERATURE_NIGHT: (16,),
        },
    )


asyncio.run(main())
```
By default, the library creates a new connection to `KEBA KeEnergy API` with each coroutine. If you are calling a large
number of coroutines, an `aiohttp ClientSession()` can be used for connection pooling:
```python
import asyncio

from aiohttp import ClientSession

from keba_keenergy_api import KebaKeEnergyAPI


async def main():
    async with ClientSession() as session:
        client = KebaKeEnergyAPI(
            host="ap4400.local",
            username="test",
            password="test",
            ssl=True,
            skip_ssl_verification=True,
            session=session,
        )
        ...
```
### ⚠️ Write warnings
This is a low-level API that allows writing values outside the safe operating range.
Improper use can damage heating systems and hardware. Always check the `attributes`,
as these may contain minimum and maximum values.
*Use at your own risk!*
**Example:**
The upper limit for the hot water tank temperature is 52 °C. Do not write larger values under any circumstances,
even if it is technically possible.
```json
{
"name": "APPL.CtrlAppl.sParam.hotWaterTank[0].param.normalSetTempMax.value",
"attributes": {
"dynUpperLimit": 1,
"formatId": "fmtTemp",
"longText": "Temp. nom.",
"lowerLimit": "0",
"unitId": "Temp",
"upperLimit": "52"
},
"value": "50"
}
```
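As a defensive sketch, a caller can clamp any requested value to the limits advertised in the `attributes` before writing (the `clamp_to_limits` helper below is hypothetical, not part of this library's API):

```python
def clamp_to_limits(value: float, attributes: dict) -> float:
    """Keep a requested value inside the device-advertised limits."""
    # lowerLimit/upperLimit arrive as strings in the attributes payload.
    lower = float(attributes.get("lowerLimit", "-inf"))
    upper = float(attributes.get("upperLimit", "inf"))
    return max(lower, min(value, upper))


attributes = {"lowerLimit": "0", "upperLimit": "52"}
print(clamp_to_limits(55.0, attributes))  # prints 52.0 -- never exceeds the limit
```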
**And one last warning:**
> **Attention!** Writing values should remain within normal limits, as is the case with typical use of the
> Web HMI. Permanent and very frequent writing of values reduces the lifetime of the built-in flash memory.
> **Be careful!**
<!--end-home-->
## Documentation
Read the full API documentation on [api.superbox.one](https://api.superbox.one).
## Changelog
The changelog lives in the [CHANGELOG.md](CHANGELOG.md) document.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
<!--start-contributing-->
## Get Involved
The **KEBA KeEnergy API** is an open-source project and contributions are welcome. You can:
* Report [issues](https://github.com/superbox-dev/keba_keenergy_api/issues/new/choose) or request new features
* Improve documentation
* Contribute code
* Support the project by starring it on GitHub ⭐
<!--end-contributing-->
I'm happy about your contributions to the project!
You can get started by reading the [CONTRIBUTING.md](CONTRIBUTING.md).
| text/markdown | null | Michael Hacker <mh@superbox.one> | null | Michael Hacker <mh@superbox.one> | null | api, component, custom component, custom integration, keba, keenergy, hacs-component, hacs-integration, hacs-repository, hacs, hass, home assistant, home-assistant, homeassistant, integration | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp==3.*"
] | [] | [] | [] | [
"Homepage, https://superbox.one",
"Documentation, https://api.superbox.one",
"Issues, https://github.com/superbox-dev/keba_keenergy_api/issues",
"Source, https://github.com/superbox-dev/keba_keenergy_api"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:48:06.884175 | keba_keenergy_api-2.9.1.tar.gz | 235,815 | ad/da/f37c508894600106ea1259f8ff1e40ec9afffbf86b9de29f6e8847d44c16/keba_keenergy_api-2.9.1.tar.gz | source | sdist | null | false | 75ef2a31c1b1b82c4b9c00621c824d03 | 2d560a113d49007bc8ad92f48b343a1bba94de5a6f6a75c428ba0ef450383fc4 | addaf37c508894600106ea1259f8ff1e40ec9afffbf86b9de29f6e8847d44c16 | null | [
"LICENSE"
] | 284 |
2.4 | cbspy | 0.0.1 | A repository for interacting with and manipulating data from CBS Statline. | # cbspy
[](https://img.shields.io/github/v/release/thomaspinder/cbspy)
[](https://github.com/thomaspinder/cbspy/actions/workflows/main.yml?query=branch%3Amain)
[](https://codecov.io/gh/thomaspinder/cbspy)
[](https://img.shields.io/github/license/thomaspinder/cbspy)
A modern Python client for [CBS Statline](https://opendata.cbs.nl) open data that returns Polars DataFrames with human-readable column names.
- **Github repository**: <https://github.com/thomaspinder/cbspy/>
- **Documentation**: <https://thomaspinder.github.io/cbspy/>
## Installation
```bash
pip install cbspy
```
## Quick Start
```python
import cbspy
client = cbspy.Client()
# Discover available tables
tables = client.list_tables(language="en")
print(tables.head())
# shape: (5, 7)
# id, title, description, period, frequency, record_count, modified
# Inspect a table's structure
meta = client.get_metadata("37296eng")
for col in meta.properties:
print(f"{col.id}: {col.display_name} ({col.unit})")
# Fetch data with human-readable column names
df = client.get_data("37296eng")
print(df.head())
# Columns like "Total population", "Males", "Females" instead of
# "TotalPopulation_1", "Males_2", "Females_3"
# Filter by time period
df = client.get_data("37296eng", periods=["2022JJ00", "2023JJ00"])
```
## Development
```bash
make install # Create venv and install pre-commit hooks
make test # Run tests with coverage
make check # Run linting and type checking
```
---
Repository initiated with [fpgmaas/cookiecutter-uv](https://github.com/fpgmaas/cookiecutter-uv).
| text/markdown | null | Thomas Pinder <tompinder@live.co.uk> | null | null | null | python | [
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"httpx>=0.27",
"polars>=1.0",
"pydantic>=2.0"
] | [] | [] | [] | [
"Homepage, https://thomaspinder.github.io/cbspy/",
"Repository, https://github.com/thomaspinder/cbspy",
"Documentation, https://thomaspinder.github.io/cbspy/"
] | uv/0.9.28 {"installer":{"name":"uv","version":"0.9.28","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T18:47:59.749205 | cbspy-0.0.1.tar.gz | 135,503 | 6a/14/8207b1959dc23b2b58adf8c2c8ea647fa739fdf693d3948706d61205c57c/cbspy-0.0.1.tar.gz | source | sdist | null | false | d89764e48ab7c12a04df3c0504c68b12 | c65f543101d7d2edc081f818e6cc4461aed3edc9e59bb98cf2c9ffa45dda1c97 | 6a148207b1959dc23b2b58adf8c2c8ea647fa739fdf693d3948706d61205c57c | null | [
"LICENSE"
] | 233 |
2.4 | freq-pick | 0.1.0 | Add your description here | # freq-pick
Interactive frequency picker for a single spectrum. The API accepts a precomputed
frequency axis and magnitude array, launches a matplotlib UI, and returns a
deterministic list of selected bin indices and frequencies. Optional PNG/JSON
artifacts capture the selection.
## Install
```bash
uv sync
```
## API (primary)
```python
from pathlib import Path
import numpy as np
from freq_pick.core import Spectrum
from freq_pick.core import pick_freqs_matplotlib
f_hz = np.linspace(0.0, 200.0, 1001)
mag = np.sin(f_hz / 10.0) ** 2
spectrum = Spectrum(f_hz=f_hz, mag=mag, display_domain="linear")
selection = pick_freqs_matplotlib(
spectrum,
user_snap_hz=0.5,
xlim=(0.0, 200.0),
title="Demo",
title_append="[1/5]",
artifact_dir=Path("artifacts"),
artifact_stem="demo",
)
print(selection.selected_idx)
print(selection.selected_hz)
```
### Controls
- `shift + drag` rectangle: select max magnitude in the rectangle
- `q`: commit and quit
- `Esc`: cancel
- `c`: clear selection
- `x`: delete nearest selected peak to cursor
- `l`: toggle y scale (linear/dB)
- `h`: toggle help overlay
## CLI
```bash
uv run freq-pick \
--in spectrum.npz \
--out artifacts \
--stem run1 \
--snap-hz 0.5 \
--domain dB \
--xlim 0 200 \
--title "Spec A" \
--title-append "[3/10]"
```
Input `.npz` must include `f_hz` and `mag` arrays.
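For example, a compatible input file can be produced with NumPy (the filename `spectrum.npz` is just a placeholder):

```python
import numpy as np

f_hz = np.linspace(0.0, 200.0, 1001)  # frequency axis in Hz
mag = np.sin(f_hz / 10.0) ** 2        # magnitude spectrum

# The key names must match what the CLI expects: f_hz and mag.
np.savez("spectrum.npz", f_hz=f_hz, mag=mag)
```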
## Artifacts
When `artifact_dir` and `artifact_stem` are provided, the picker writes:
- `{stem}_pick.png`
- `{stem}_pick.json`
JSON keys:
- `schema_version`
- `selected_hz`
- `selected_idx`
- `settings`
- `spectrum_meta`
- `display_domain`
## Development
```bash
uv run pytest
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"matplotlib",
"numpy"
] | [] | [] | [] | [] | uv/0.8.22 | 2026-02-20T18:47:43.261624 | freq_pick-0.1.0.tar.gz | 189,618 | d6/c3/91bbe338b200a6b4c9c28976d1fc97f908dc73c2ab4e11e5b80f9662765a/freq_pick-0.1.0.tar.gz | source | sdist | null | false | de4d5f20cae64e84111015146f44980c | 4c19180a67dec97043c126d8f1ec630f897b9e99d69ba7f5e47e699fac82dbc3 | d6c391bbe338b200a6b4c9c28976d1fc97f908dc73c2ab4e11e5b80f9662765a | null | [
"LICENSE"
] | 201 |
2.4 | smellcheck | 0.3.7 | Python code smell detector -- 83 refactoring patterns, 56 AST checks, zero dependencies | <p align="center">
<img src="https://raw.githubusercontent.com/cheickmec/smellcheck/main/assets/logo.png" alt="smellcheck logo" width="200">
</p>
<h1 align="center">smellcheck</h1>
<p align="center">
<strong>Python Code Smell Detector & Refactoring Guide</strong><br>
83 refactoring patterns · 56 automated AST checks · zero dependencies
</p>
<p align="center">
<a href="https://pypi.org/project/smellcheck/"><img src="https://img.shields.io/pypi/v/smellcheck" alt="PyPI"></a>
<a href="https://pypi.org/project/smellcheck/"><img src="https://img.shields.io/pypi/pyversions/smellcheck" alt="Python"></a>
<a href="https://github.com/cheickmec/smellcheck/actions/workflows/ci.yml"><img src="https://github.com/cheickmec/smellcheck/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
<a href="https://pypistats.org/packages/smellcheck"><img src="https://img.shields.io/pypi/dm/smellcheck" alt="Downloads"></a>
<a href="https://github.com/cheickmec/smellcheck/blob/main/docs/installation.md#pre-commit"><img src="https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit" alt="pre-commit"></a>
<a href="https://github.com/cheickmec/smellcheck/blob/main/LICENSE"><img src="https://img.shields.io/github/license/cheickmec/smellcheck" alt="License"></a>
</p>
**smellcheck** is a Python code smell detector and refactoring catalog. It works as a pip-installable CLI, GitHub Action, pre-commit hook, or [Agent Skills](https://agentskills.io) plugin for AI coding assistants.
**No dependencies.** Pure Python stdlib (`ast`, `pathlib`, `json`). Runs anywhere Python 3.10+ runs.
> **What are code smells?** Code smells are surface-level patterns in source code that hint at deeper design problems — not bugs, but structural weaknesses that make code harder to maintain, extend, or understand. [Learn more →](https://github.com/cheickmec/smellcheck/blob/main/docs/code-smells-guide.md)
## Installation
### pip
```bash
pip install smellcheck
smellcheck src/
smellcheck myfile.py --format json
smellcheck src/ --min-severity warning --fail-on warning
```
Also available as a **GitHub Action**, **pre-commit hook**, **SARIF/Code Scanning** integration, **[Agent Skills](https://agentskills.io) plugin**, and **Cursor native plugin** for Claude Code, Cursor, Copilot, Gemini CLI, and more.
**[Full installation guide →](https://github.com/cheickmec/smellcheck/blob/main/docs/installation.md)**
## Usage
```bash
# Scan a directory
smellcheck src/
# Scan multiple files
smellcheck file1.py file2.py
# JSON output
smellcheck src/ --format json
# GitHub Actions annotations
smellcheck src/ --format github
# SARIF output (for GitHub Code Scanning)
smellcheck src/ --format sarif > results.sarif
# JUnit XML output (for Jenkins, GitLab, CircleCI, Azure DevOps)
smellcheck src/ --format junit > smellcheck-results.xml
# GitLab CodeClimate output (for MR code quality widget)
smellcheck src/ --format gitlab > gl-code-quality-report.json
# Filter by severity
smellcheck src/ --min-severity warning
# Control exit code
smellcheck src/ --fail-on warning # exit 1 on warning or error
smellcheck src/ --fail-on info # exit 1 on any finding
# Run only specific checks
smellcheck src/ --select SC101,SC701,SC210
# Skip specific checks
smellcheck src/ --ignore SC601,SC202
# Module execution
python3 -m smellcheck src/
# Generate a baseline of current findings
smellcheck src/ --generate-baseline > .smellcheck-baseline.json
# Only report findings not in the baseline
smellcheck src/ --baseline .smellcheck-baseline.json
# Disable caching for a fresh scan
smellcheck src/ --no-cache
# Use a custom cache directory
smellcheck src/ --cache-dir .my-cache
# Clear cached results
smellcheck --clear-cache
# Show documentation for a rule (description + before/after example)
smellcheck --explain SC701
# List all rules in a family
smellcheck --explain SC4
# List all rules grouped by family
smellcheck --explain all
```
## Configuration
smellcheck reads `[tool.smellcheck]` from the nearest `pyproject.toml`:
```toml
[tool.smellcheck]
extends = "base.toml" # inherit from a shared config file
select = ["SC101", "SC201", "SC701"] # only run these checks (default: all)
ignore = ["SC601", "SC202"] # skip these checks
per-file-ignores = {"tests/*" = ["SC201", "SC206"]} # per-path overrides
fail-on = "warning" # override default fail-on
format = "text" # override default format
baseline = ".smellcheck-baseline.json" # suppress known findings
cache = true # enable file-level caching (default: true)
cache-dir = ".smellcheck-cache" # cache directory (default: .smellcheck-cache)
```
CLI flags override config values.
### Config inheritance (`extends`)
Use `extends` to inherit settings from a shared base config:
```toml
# base.toml — shared across repos
[tool.smellcheck]
ignore = ["SC601"]
fail-on = "warning"
```
```toml
# pyproject.toml — project overrides
[tool.smellcheck]
extends = "base.toml"
ignore = ["SC202"] # adds to base; final ignore = ["SC601", "SC202"]
```
Multiple bases are supported — later entries override earlier ones for scalar values, while `ignore` lists are unioned and `per-file-ignores` are deep-merged:
```toml
extends = ["base.toml", "strict.toml"]
```
Paths are relative to the file containing the `extends` key. Chains are resolved recursively (up to 5 levels deep).
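The merge semantics described above can be sketched in plain Python (a simplified illustration of the documented behavior, not smellcheck's actual implementation):

```python
def merge_configs(base: dict, override: dict) -> dict:
    """Union 'ignore' lists, deep-merge 'per-file-ignores', let scalars win."""
    merged = dict(base)
    for key, value in override.items():
        if key == "ignore":
            # Lists are unioned, keeping the base entries first.
            merged[key] = base.get(key, []) + [
                code for code in value if code not in base.get(key, [])
            ]
        elif key == "per-file-ignores":
            deep = dict(base.get(key, {}))
            deep.update(value)  # later entries override per path
            merged[key] = deep
        else:
            merged[key] = value  # scalars: the later config wins
    return merged


base = {"ignore": ["SC601"], "fail-on": "warning"}
project = {"ignore": ["SC202"], "fail-on": "error"}
print(merge_configs(base, project))
# prints {'ignore': ['SC601', 'SC202'], 'fail-on': 'error'}
```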
## Suppression
### Per-line
Add `# noqa: SC701` to a line to suppress that check on that line:
```python
def foo(x=[]): # noqa: SC701
return x
```
Use `# noqa` (no codes) to suppress all findings on that line. Multiple codes: `# noqa: SC601,SC202`
### Block-level
Disable specific checks for a range of lines with `# smellcheck: disable` / `# smellcheck: enable`:
```python
# smellcheck: disable SC301, SC305
class LegacyGodObject:
"""This class is intentionally large for backward compatibility."""
def method_one(self):
self._temp = compute() # SC305 suppressed by block directive
def method_two(self):
use(self._temp)
# smellcheck: enable SC301, SC305
```
Disable all checks for a range:
```python
# smellcheck: disable-all
# ... everything in this range is suppressed ...
# smellcheck: enable-all
```
### File-level
Suppress checks for an entire file (place at top of file):
```python
# smellcheck: disable-file SC301, SC305
```
Use `# smellcheck: disable-file` (no codes) to suppress all checks for the entire file.
### Scope rules
- `disable` / `enable` apply from that line to the matching `enable` (or end of file if no match)
- `disable-all` / `enable-all` work the same way but for all checks at once
- `disable-file` applies to the entire file
- Per-line `# noqa` still works alongside block directives
- Block directives do not affect cross-file findings (use `per-file-ignores` in config instead)
## Baseline
For large codebases, you can adopt smellcheck incrementally using a baseline file. The baseline records fingerprints of existing findings so only **new** issues are reported.
```bash
# 1. Generate a baseline from the current state
smellcheck src/ --generate-baseline > .smellcheck-baseline.json
# 2. Run with the baseline — only new findings are reported
smellcheck src/ --baseline .smellcheck-baseline.json
# 3. Or set it in pyproject.toml so every run uses it automatically
```
Fingerprints are resilient to line-number changes — renaming or moving code around won't break the baseline. When you fix a baselined smell, its entry is silently ignored.
`--generate-baseline` and `--baseline` are mutually exclusive.
## Diff-Aware Scanning
Focus on files you actually changed — skip the rest of the codebase:
```bash
# Only scan files changed vs. main branch
smellcheck src/ --diff main
# Only scan files changed in the last commit
smellcheck src/ --diff HEAD~1
# Only scan uncommitted changes (shorthand for --diff HEAD)
smellcheck src/ --changed-only
```
In CI, this keeps PR feedback fast and relevant:
```yaml
- uses: cheickmec/smellcheck@v0
with:
diff: origin/main
fail-on: warning
```
Cross-file checks (cyclic imports, shotgun surgery, etc.) run on the changed file set only. This is best-effort — for full cross-file accuracy, run without `--diff`.
`--diff` and `--generate-baseline` are mutually exclusive. `--diff` composes with all other flags (`--baseline`, `--format`, `--fail-on`, `--select`, `--ignore`).
## Caching
smellcheck caches per-file analysis results in `.smellcheck-cache/` to skip unchanged files on repeated scans. This is especially useful for pre-commit hooks and editor integrations.
Cache entries are keyed by file content hash, config hash, and smellcheck version — any change invalidates the relevant entry. Cross-file analysis (cyclic imports, duplicate code, etc.) always re-runs since it depends on the full file set.
```bash
# Caching is enabled by default — just run normally
smellcheck src/
# Disable caching for a guaranteed fresh scan
smellcheck src/ --no-cache
# Use a custom cache directory
smellcheck src/ --cache-dir /tmp/sc-cache
# Clear all cached results
smellcheck --clear-cache
```
Old cache entries are not automatically evicted. Run `smellcheck --clear-cache` periodically or after upgrading to reclaim disk space.
Add `.smellcheck-cache/` to your `.gitignore`. You can also configure caching in `pyproject.toml`:
```toml
[tool.smellcheck]
cache = false # disable caching
cache-dir = ".smellcheck-cache" # custom cache directory
```
## Features
- **56 automated smell checks** -- per-file AST analysis, cross-file dependency analysis, and OO metrics
- **83 refactoring patterns** -- numbered catalog with before/after examples, trade-offs, and severity levels
- **Zero dependencies** -- stdlib-only, runs on any Python 3.10+ installation
- **Multiple output formats** -- text (terminal), JSON (machine-readable), GitHub annotations (CI), SARIF 2.1.0 (Code Scanning), JUnit XML (Jenkins/GitLab/CircleCI), GitLab CodeClimate (MR quality widget)
- **Configurable** -- pyproject.toml config, inline suppression, CLI overrides
- **Baseline support** -- adopt incrementally by suppressing existing findings and only failing on new ones
- **File-level caching** -- content-hash based caching skips unchanged files for fast repeated scans
- **Multiple distribution channels** -- pip, GitHub Action, pre-commit, Agent Skills ([full list](https://github.com/cheickmec/smellcheck/blob/main/docs/installation.md))
## Detected Patterns
Every rule is identified by an **SC code** (e.g. `SC701`). Use SC codes in `--select`, `--ignore`, and `# noqa` comments.
### Per-File (41 checks)
| SC Code | Pattern | Severity |
|---------|---------|----------|
| SC101 | Setters (half-built objects) | warning |
| SC102 | UPPER_CASE without Final | info |
| SC103 | Unprotected public attributes | info |
| SC104 | Half-built objects (init assigns None) | warning |
| SC105 | Boolean flag parameters | info |
| SC106 | Global mutable state | info |
| SC107 | Sequential IDs | info |
| SC201 | Long functions (>20 lines) | warning |
| SC202 | Generic names (data, result, tmp) | info |
| SC203 | input() in business logic | warning |
| SC204 | Functions returning None or list | info |
| SC205 | Excessive decorators (>3) | info |
| SC206 | Too many parameters (>5) | warning |
| SC207 | CQS violation (query + modify) | info |
| SC208 | Unused function parameters | warning |
| SC209 | Long lambda (>60 chars) | info |
| SC210 | Cyclomatic complexity (>10) | warning |
| SC301 | Extract class (too many methods) | info |
| SC302 | isinstance chains | warning |
| SC303 | Singleton pattern | warning |
| SC304 | Dataclass candidate | info |
| SC305 | Sequential tuple indexing | info |
| SC306 | Lazy class (<2 methods) | info |
| SC307 | Temporary fields | info |
| SC401 | Dead code after return | warning |
| SC402 | Deep nesting (>4 levels) | warning |
| SC403 | Loop + append pattern | info |
| SC404 | Complex boolean expressions | warning |
| SC405 | Boolean control flag in loop | info |
| SC406 | Complex comprehension (>2 generators) | info |
| SC407 | Missing default else branch | info |
| SC501 | Error codes instead of exceptions | warning |
| SC502 | Law of Demeter violation | info |
| SC601 | Magic numbers | info |
| SC602 | Bare except / unused exception variable | error |
| SC603 | String concatenation for multiline | info |
| SC604 | contextlib candidate | info |
| SC605 | Empty catch block | warning |
| SC701 | Mutable default arguments | error |
| SC702 | open() without context manager | warning |
| SC703 | Blocking calls in async functions | warning |
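To make one of the error-severity checks concrete, here is illustrative code showing the SC701 pattern (mutable default argument) and its idiomatic fix:

```python
# SC701 -- mutable default argument: the same list is shared across calls.
def append_bad(item, items=[]):
    items.append(item)
    return items


# Fix: default to None and create a fresh list inside the function.
def append_good(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items


print(append_bad(1), append_bad(2))    # prints [1, 2] [1, 2] -- surprise sharing
print(append_good(1), append_good(2))  # prints [1] [2]
```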
### Cross-File (10 checks)
| SC Code | Pattern | Description |
|---------|---------|-------------|
| SC211 | Feature envy | Function accesses external attributes more than own |
| SC308 | Deep inheritance | Inheritance depth >4 |
| SC309 | Wide hierarchy | >5 direct subclasses |
| SC503 | Cyclic imports | DFS cycle detection |
| SC504 | God modules | >500 lines or >30 top-level definitions |
| SC505 | Shotgun surgery | Function called from >5 different files |
| SC506 | Inappropriate intimacy | >3 bidirectional class references between files |
| SC507 | Speculative generality | Abstract class with no concrete subclasses |
| SC508 | Unstable dependency | Stable module depends on unstable module |
| SC606 | Duplicate functions | AST-normalized hashing across files |
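The cyclic-import check (SC503) is described above as DFS cycle detection; the idea can be sketched on a module import graph (a generic illustration, not smellcheck's implementation):

```python
def has_cycle(graph: dict) -> bool:
    """Detect a cycle in a directed import graph via DFS with a path set."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:  # back edge on the current path -> cycle
            return True
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(dep) for dep in graph.get(node, [])):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(node) for node in graph)


# a imports b, b imports c, c imports a -> cyclic
print(has_cycle({"a": ["b"], "b": ["c"], "c": ["a"]}))  # prints True
print(has_cycle({"a": ["b"], "b": ["c"], "c": []}))     # prints False
```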
### OO Metrics (5 checks)
| SC Code | Metric | Threshold |
|---------|--------|-----------|
| SC801 | Lack of Cohesion of Methods | >0.8 |
| SC802 | Coupling Between Objects | >8 |
| SC803 | Excessive Fan-Out | >15 |
| SC804 | Response for a Class | >20 |
| SC805 | Middle Man (delegation ratio) | >50% |
## Refactoring Reference Files
Each pattern includes a description, before/after code examples, and trade-offs:
| File | Patterns |
|------|----------|
| [`state.md`](https://github.com/cheickmec/smellcheck/blob/main/plugins/python-refactoring/skills/python-refactoring/references/state.md) | Immutability, setters, attributes (SC101–SC107) |
| [`functions.md`](https://github.com/cheickmec/smellcheck/blob/main/plugins/python-refactoring/skills/python-refactoring/references/functions.md) | Extraction, naming, parameters, CQS (SC201–SC210) |
| [`types.md`](https://github.com/cheickmec/smellcheck/blob/main/plugins/python-refactoring/skills/python-refactoring/references/types.md) | Classes, reification, polymorphism, nulls (SC301–SC309) |
| [`control.md`](https://github.com/cheickmec/smellcheck/blob/main/plugins/python-refactoring/skills/python-refactoring/references/control.md) | Guards, pipelines, conditionals, phases (SC401–SC407) |
| [`architecture.md`](https://github.com/cheickmec/smellcheck/blob/main/plugins/python-refactoring/skills/python-refactoring/references/architecture.md) | DI, singletons, exceptions, delegates (SC501–SC508) |
| [`hygiene.md`](https://github.com/cheickmec/smellcheck/blob/main/plugins/python-refactoring/skills/python-refactoring/references/hygiene.md) | Constants, dead code, comments, style (SC601–SC606) |
| [`idioms.md`](https://github.com/cheickmec/smellcheck/blob/main/plugins/python-refactoring/skills/python-refactoring/references/idioms.md) | Context managers, generators, unpacking, async (SC701–SC703) |
| [`metrics.md`](https://github.com/cheickmec/smellcheck/blob/main/plugins/python-refactoring/skills/python-refactoring/references/metrics.md) | OO metrics: cohesion, coupling, fan-out, response, delegation (SC801–SC805) |
## How It Compares
| Feature | smellcheck | [PyExamine](https://github.com/KarthikShivasankar/python_smells_detector) | [SMART-Dal](https://github.com/SMART-Dal/smell-detector-python) | [Pyscent](https://github.com/whyjay17/Pyscent) |
|---------|------------|-----------|-----------|---------|
| Automated detections | 56 | 49 | 31 | 11 |
| Refactoring guidance | 83 patterns | None | None | None |
| Dependencies | 0 (stdlib) | pylint, radon | DesigniteJava | pylint, radon, cohesion |
| Python-specific idioms | Yes | No | No | No |
| Cross-file analysis | Yes | Limited | Yes | No |
| OO metrics | 5 | 19 | 0 | 1 |
| Distribution channels | 4 (pip, GHA, pre-commit, Agent Skills) | 1 | 1 | 1 |
## Contributing
Contributions welcome — see [CONTRIBUTING.md](https://github.com/cheickmec/smellcheck/blob/main/CONTRIBUTING.md) for the full guide. The core detector is `src/smellcheck/detector.py`; add new checks by extending the `SmellDetector` AST visitor class and adding a cross-file analysis function if needed.
```bash
# Development setup
git clone https://github.com/cheickmec/smellcheck.git
cd smellcheck
pip install -e .
pip install pytest
# Run tests
pytest tests/ -v
# Self-check
smellcheck src/smellcheck/
```
## License
MIT
| text/markdown | Cheick Berthe | null | null | null | null | ast, code-smells, linter, refactoring, static-analysis | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Quality Assurance",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/cheickmec/smellcheck",
"Repository, https://github.com/cheickmec/smellcheck",
"Issues, https://github.com/cheickmec/smellcheck/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:47:35.117366 | smellcheck-0.3.7.tar.gz | 1,720,993 | 1a/78/aed76ad0fb2233f5a67455c176ee7a9e6557a65cae27aa7cca488bffa323/smellcheck-0.3.7.tar.gz | source | sdist | null | false | 2b6bdb9a636c17cb63a43c1c0de92026 | 8bbd60821d246dd371694a57cd83e7eefd7c707817ac596a4b7a9f4bedebe0df | 1a78aed76ad0fb2233f5a67455c176ee7a9e6557a65cae27aa7cca488bffa323 | MIT | [
"LICENSE"
] | 177 |
2.4 | nia-mcp-server | 1.0.94 | Nia | # NIA MCP Server
The NIA MCP Server enables AI assistants like Claude to search and understand your indexed codebases through the Model Context Protocol (MCP).
## Quick Start
### Automatic Setup (Recommended) ✨
Get your API key from [https://trynia.ai/api-keys](https://trynia.ai/api-keys) and run:
```bash
pipx run nia-mcp-server setup YOUR_API_KEY
``` | text/markdown | null | Nozomio Labs Team <founders@nozomio.com> | null | null | null | ai, codebase, mcp, nia, search | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"fastmcp>=3.0.0b1",
"httpcore>=1.0.0",
"httpx>=0.24.0",
"mcp>=1.0.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"python-levenshtein>=0.21.0",
"tiktoken>=0.5.0",
"uvicorn>=0.30.0",
"opentelemetry-api>=1.20.0; extra == \"telemetry\"",
"opentelemetry-exporter-otlp>=1.20.0; extra == \"telemetry\"",
"opentelemetry-sdk>=1.20.0; extra == \"telemetry\""
] | [] | [] | [] | [
"Homepage, https://trynia.ai",
"Documentation, https://docs.trynia.ai"
] | twine/6.2.0 CPython/3.13.3 | 2026-02-20T18:47:25.278031 | nia_mcp_server-1.0.94.tar.gz | 91,077 | f3/ac/ad1741baae3d260e45139fb355fb8316ef026b94c7437dd5158b2fc0000d/nia_mcp_server-1.0.94.tar.gz | source | sdist | null | false | 9162bb61869e42df71430f40639683f2 | e91e0380ac284b2dc7c15faf8ddd0f9f0325f721d03f031d2bd3b51b399c93be | f3acad1741baae3d260e45139fb355fb8316ef026b94c7437dd5158b2fc0000d | AGPL-3.0 | [
"LICENSE"
] | 748 |
2.4 | wagtail-enap-designsystem | 1.2.1.251 | Módulo de componentes utilizado nos portais ENAP, desenvolvido com Wagtail + CodeRedCMS | # 📦 Enap Design System - Módulo para Wagtail
Este é um módulo customizado para o **Wagtail**, criado para facilitar a implementação de layouts e componentes reutilizáveis no CMS.
### 🛫 Outros READMEs
README.md, doc geral do projeto [README.md](README.md)
README-use.md, doc do uso do módulo [README-use.md](README-use.md) [ATUAL]
README-pypi.md, doc subir pacote pypi [README-pypi.md](README-pypi.md)
# ENAP Design System
O **ENAP Design System** é um módulo para o Wagtail, baseado no CodeRedCMS, que fornece componentes reutilizáveis e templates pré-configurados para facilitar a criação de sites institucionais.
## Installation
To install the package from PyPI, use:
```bash
pip install wagtail-enap-designsystem
```
### Requirements
- **Wagtail 6.4+**
- **CodeRedCMS 4.1.1+**
- **Django 4+**
## Configuration
After installation, add `enap_designsystem` to your `INSTALLED_APPS` in `settings.py`:
```python
INSTALLED_APPS = [
    "enap_designsystem",
    "coderedcms",  # Make sure CodeRedCMS is installed
    # ... other modules, for example: ...
    "wagtail.contrib.forms",
    "wagtail.contrib.redirects",
    "wagtail.embeds",
    "wagtail.sites",
    "wagtail.users",
    "wagtail.snippets",
    "wagtail.documents",
    "wagtail.images",
    "wagtail.search",
    "wagtail.admin",
    "wagtail",
    "taggit",
    "modelcluster",
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
]
```
### Running Migrations
After installation and configuration, run the migrations to ensure that all required tables are created:
```bash
python manage.py migrate
```
## Usage
The `enap_designsystem` module adds the following features to your project:
- **ENAPLayout**: Base page inheriting from `CoderedWebPage`, with annotation support.
- **RootPage**: Root page configured to allow only subpages of type `ENAPLayout`.
- **Wagtail Components**: Custom blocks for institutional layouts.
- **Pre-filled Templates**: Ready-made templates for different page types.
### Creating a Page with ENAPLayout
In the Wagtail admin panel, when creating a new page, select **ENAPLayout** to use the module's templates and features.
## Cache
Se estiver utilizando `wagtailcache`, certifique-se de configurar corretamente o cache, pois a função `cache_clear` ainda não tem suporte completo:
```python
WAGTAIL_CACHE_BACKEND = "default"
```
## Development
(Optional: for contributors)
**If you are contributing to the module's development**, clone the repository and install it in `editable` mode:
```bash
git clone https://github.com/seu-org/enap_designsystem.git
cd enap_designsystem
pip install -e .
```
To run the development environment:
```bash
python manage.py runserver
```
## Contributing
Pull requests are welcome! For suggestions and improvements, open an issue in the official repository.
---
🏛️ **Developed by ENAP**
| text/markdown | Renan Campos | renan.oliveira@enap.gov.br | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [
"django>=3.2",
"wagtail==6.4",
"coderedcms==4.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.5 | 2026-02-20T18:47:21.905883 | wagtail_enap_designsystem-1.2.1.251.tar.gz | 62,030,713 | 30/e8/7855128539e38ba1585f2c32663411564a1e7ec72835b17445160ee3afd2/wagtail_enap_designsystem-1.2.1.251.tar.gz | source | sdist | null | false | 508d9ab6032666458f721974e5708952 | 0663d86c0f7e531e3b7093e33e6b4573b5d67af0331768bb549372780fb518af | 30e87855128539e38ba1585f2c32663411564a1e7ec72835b17445160ee3afd2 | null | [
"LICENSE"
] | 165 |
2.4 | kollabor | 0.4.20 | An advanced, highly customizable terminal-based chat application for interacting with LLMs | # Kollabor
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
An advanced, highly customizable terminal-based chat application for interacting with Large Language Models (LLMs). Built with a powerful plugin system and comprehensive hook architecture for complete customization.
**macOS:** `brew install kollaborai/tap/kollabor`
**Other:** `curl -sS https://raw.githubusercontent.com/kollaborai/kollabor-cli/main/install.sh | bash`
**Run:** `kollab`
## Features
- **Event-Driven Architecture**: Everything has hooks - every action triggers customizable hooks that plugins can attach to
- **Advanced Plugin System**: Dynamic plugin discovery and loading with comprehensive SDK
- **Rich Terminal UI**: Beautiful terminal rendering with status areas, visual effects, and modal overlays
- **Conversation Management**: Persistent conversation history with full logging support
- **Model Context Protocol (MCP)**: Built-in support for MCP integration
- **Tool Execution**: Function calling and tool execution capabilities
- **Pipe Mode**: Non-interactive mode for scripting and automation
- **Environment Variable Support**: Complete configuration via environment variables (API settings, system prompts, etc.)
- **Extensible Configuration**: Flexible configuration system with plugin integration
- **Async/Await Throughout**: Modern Python async patterns for responsive performance
## Installation
### macOS (Recommended)
Standard Homebrew installation - what most macOS users expect:
```bash
brew install kollaborai/tap/kollabor
```
To upgrade:
```bash
brew upgrade kollabor
```
### One-Line Install (Cross-Platform)
Auto-detects the best method (uvx > pipx > pip):
```bash
curl -sS https://raw.githubusercontent.com/kollaborai/kollabor-cli/main/install.sh | bash
```
### Using uvx (Fastest, Isolated)
uvx runs the app in an isolated environment without installation:
```bash
uvx --from kollabor kollab
```
Or install to uv tool cache for instant startup:
```bash
uv tool install kollabor
kollab
```
### Using pipx (Isolated, Clean)
Recommended for user-space installation without system conflicts:
```bash
pipx install kollabor
```
### Using pip
Standard Python package installation:
```bash
pip install kollabor
```
### From Source
```bash
git clone https://github.com/kollaborai/kollabor-cli.git
cd kollabor-cli
pip install -e .
```
### Development Installation
```bash
pip install -e ".[dev]"
```
## Quick Start
### Interactive Mode
Simply run the CLI to start an interactive chat session:
```bash
kollab
```
### Pipe Mode
Process a single query and exit:
```bash
# Direct query
kollab "What is the capital of France?"
# From stdin
echo "Explain quantum computing" | kollab -p
# From file
cat document.txt | kollab -p
# With custom timeout
kollab --timeout 5min "Complex analysis task"
```
## Configuration
On first run, Kollabor creates a `.kollabor-cli` directory in your current working directory:
```
.kollabor-cli/
├── config.json # User configuration
├── system_prompt/ # System prompt templates
├── logs/ # Application logs
└── state.db # Persistent state
```
### Configuration Options
The configuration system uses dot notation:
- `kollabor.llm.*` - LLM service settings
- `terminal.*` - Terminal rendering options
- `application.*` - Application metadata
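Dotted keys like these usually resolve by walking nested dictionaries; a generic sketch of that lookup (an illustration of the pattern, not Kollabor's actual implementation):

```python
def get_by_dot_path(config: dict, path: str, default=None):
    """Resolve a dotted key like 'kollabor.llm.model' against nested dicts."""
    node = config
    for part in path.split("."):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node

cfg = {"kollabor": {"llm": {"model": "gpt-4", "max_tokens": 4096}}}
print(get_by_dot_path(cfg, "kollabor.llm.model"))      # → gpt-4
print(get_by_dot_path(cfg, "terminal.theme", "dark"))  # → dark
```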
### Environment Variables
All configuration can be controlled via environment variables, which take precedence over config files:
#### API Configuration
```bash
KOLLABOR_API_ENDPOINT=https://api.example.com/v1/chat/completions
KOLLABOR_API_TOKEN=your-api-token-here # or KOLLABOR_API_KEY
KOLLABOR_API_MODEL=gpt-4
KOLLABOR_API_MAX_TOKENS=4096
KOLLABOR_API_TEMPERATURE=0.7
KOLLABOR_API_TIMEOUT=30000
```
#### System Prompt Configuration
```bash
# Direct string (highest priority)
KOLLABOR_SYSTEM_PROMPT="You are a helpful coding assistant."
# Custom file path
KOLLABOR_SYSTEM_PROMPT_FILE="./my_custom_prompt.md"
```
#### Using .env Files
Create a `.env` file in your project root:
```bash
KOLLABOR_API_ENDPOINT=https://api.example.com/v1/chat/completions
KOLLABOR_API_TOKEN=your-token-here
KOLLABOR_API_MODEL=gpt-4
KOLLABOR_SYSTEM_PROMPT_FILE="./prompts/specialized.md"
```
Load and run:
```bash
# `set -a` auto-exports everything sourced from .env and, unlike
# `export $(cat .env | xargs)`, preserves quoted values containing spaces
set -a; source .env; set +a
kollab
```
See [ENV_VARS.md](ENV_VARS.md) for complete documentation and examples.
## Architecture
Kollabor follows a modular, event-driven architecture:
### Core Components
- **Application Core** (`kollabor/application.py`): Main orchestrator
- **Event System** (`kollabor/events/`): Central event bus with hook system
- **LLM Services** (`kollabor/llm/`): API communication, conversation management, tool execution
- **I/O System** (`kollabor/io/`): Terminal rendering, input handling, visual effects
- **Plugin System** (`kollabor/plugins/`): Dynamic plugin discovery and loading
- **Configuration** (`kollabor/config/`): Flexible configuration management
- **Storage** (`kollabor/storage/`): State management and persistence
### Plugin Development
Create custom plugins by inheriting from base plugin classes:
```python
from kollabor.plugins import BasePlugin
from kollabor.events import EventType, HookPriority

class MyPlugin(BasePlugin):
    def register_hooks(self):
        """Register plugin hooks."""
        self.event_bus.register_hook(
            EventType.PRE_USER_INPUT,
            self.on_user_input,
            priority=HookPriority.NORMAL
        )

    async def on_user_input(self, context):
        """Process user input before it's sent to the LLM."""
        # Your custom logic here
        return context

    def get_status_line(self):
        """Provide status information for the status bar."""
        return "MyPlugin: Active"
```
## Hook System
The comprehensive hook system allows plugins to intercept and modify behavior at every stage:
- `pre_user_input` - Before processing user input
- `pre_api_request` - Before API calls to LLM
- `post_api_response` - After receiving LLM responses
- `pre_message_display` - Before displaying messages
- `post_message_display` - After displaying messages
- And many more...
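To make the priority-ordered, context-transforming flow concrete, here is a toy event bus in the same spirit — purely illustrative, not Kollabor's actual `EventBus` API:

```python
import asyncio

class TinyEventBus:
    """Toy priority-ordered async hook dispatch (illustrative only)."""

    def __init__(self):
        self._hooks = {}  # event name -> list of (priority, handler)

    def register_hook(self, event, handler, priority=50):
        self._hooks.setdefault(event, []).append((priority, handler))
        self._hooks[event].sort(key=lambda pair: pair[0])  # lower priority value runs first

    async def emit(self, event, context):
        # each hook may transform the context before the next one sees it
        for _, handler in self._hooks.get(event, []):
            context = await handler(context)
        return context

async def strip_whitespace(ctx):
    return ctx.strip()

async def add_prefix(ctx):
    return f"user: {ctx}"

bus = TinyEventBus()
bus.register_hook("pre_user_input", add_prefix, priority=90)        # runs last
bus.register_hook("pre_user_input", strip_whitespace, priority=10)  # runs first
print(asyncio.run(bus.emit("pre_user_input", "  hello  ")))  # → user: hello
```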
## Project Structure
```
kollabor/
├── kollabor/ # Core application modules
│ ├── application.py # Main orchestrator
│ ├── config/ # Configuration management
│ ├── events/ # Event bus and hooks
│ ├── io/ # Terminal I/O
│ ├── llm/ # LLM services
│ ├── plugins/ # Plugin system
│ └── storage/ # State management
├── plugins/ # Plugin implementations
├── docs/ # Documentation
├── tests/ # Test suite
└── main.py # Application entry point
```
## Development
### Running Tests
```bash
# All tests
python tests/run_tests.py
# Specific test file
python -m unittest tests.test_llm_plugin
# Individual test case
python -m unittest tests.test_llm_plugin.TestLLMPlugin.test_thinking_tags_removal
```
### Code Quality
```bash
# Format code
python -m black kollabor/ plugins/ tests/ main.py
# Type checking
python -m mypy kollabor/ plugins/
# Linting
python -m flake8 kollabor/ plugins/ tests/ main.py --max-line-length=88
# Clean up cache files and build artifacts
python scripts/clean.py
```
## Requirements
- Python 3.12 or higher
- aiohttp 3.8.0 or higher
## License
MIT License - see LICENSE file for details
## Contributing
Contributions are welcome! Please see the documentation for development guidelines.
## Links
- [Documentation](https://github.com/malmazan/kollabor-cli/blob/main/docs/)
- [Bug Tracker](https://github.com/malmazan/kollabor-cli/issues)
- [Repository](https://github.com/malmazan/kollabor-cli)
## Acknowledgments
Built with modern Python async/await patterns and designed for extensibility and customization.
| text/markdown | null | Kollabor Contributors <contributors@example.com> | null | null | MIT | llm, cli, chat, terminal, ai, chatbot, assistant, kollabor, plugin-system, event-driven | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Communications :: Chat",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Terminals",
"Environment :: Console",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Operating System :: Unix"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp>=3.10.11",
"httpx>=0.27.0",
"kollabor-agent>=0.4.19",
"kollabor-ai>=0.4.19",
"kollabor-config>=0.4.19",
"kollabor-events>=0.4.19",
"kollabor-plugins>=0.4.19",
"kollabor-tui>=0.4.19",
"psutil>=5.9.0",
"packaging>=23.0",
"pydantic>=2.0.0",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/kollaborai/kollabor-cli",
"Repository, https://github.com/kollaborai/kollabor-cli",
"Documentation, https://github.com/kollaborai/kollabor-cli/blob/main/docs/",
"Bug Tracker, https://github.com/kollaborai/kollabor-cli/issues",
"Homebrew Tap, https://github.com/kollaborai/homebrew-tap"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T18:46:13.709982 | kollabor-0.4.20.tar.gz | 1,956,395 | 81/03/7e107a0a57bf91cfc23bdd239fb135b5274f54de36db527d587a443b8037/kollabor-0.4.20.tar.gz | source | sdist | null | false | 1f5516b18c1502269e8441fb779acc98 | 78f2ad1a05dc3602feeddcda148c40251b507f6bca455850dde8a72b1549218d | 81037e107a0a57bf91cfc23bdd239fb135b5274f54de36db527d587a443b8037 | null | [
"LICENSE"
] | 177 |
2.4 | radiens-core | 0.0.0b1 | Python client for Radiens neuroscience platform | # radiens-core
**Python client for the Radiens electrophysiology analytics platform.**
## Overview
This package provides a modern, type-safe Python interface to the Radiens platform for neuroscience research. It enables real-time data acquisition, offline analysis, and curation of electrophysiological recordings.
## Installation
```bash
pip install radiens-core
```
## Requirements
- Python 3.12+
- Radiens platform access
- Valid Radiens credentials
## Support
For technical support or questions about using this client:
- Contact: NeuroNexus Support
- Email: <support@neuronexus.com>
## License
See [LICENSE](LICENSE) file.
| text/markdown | null | NeuroNexus <akelley@neuronexus.com> | null | null | null | data-analysis, electrophysiology, neuroscience | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"boto3>=1.34",
"grpc-interceptor>=0.15",
"grpcio>=1.62",
"numpy>=1.26",
"pandas>=2.2",
"protobuf>=5.0",
"pydantic>=2.6",
"tqdm>=4.65"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.11 | 2026-02-20T18:45:53.286337 | radiens_core-0.0.0b1.tar.gz | 284,700 | 9c/15/f7c7bd35deaaae7e209e8b0c0bec637e66d39378cefae0321d7c97e5a0b3/radiens_core-0.0.0b1.tar.gz | source | sdist | null | false | d91c40fd724318412792d91ee2caa9a5 | b8b22560ed7a3ffede37c715f0e47e83452271d018bcada014a47c15ac025ae9 | 9c15f7c7bd35deaaae7e209e8b0c0bec637e66d39378cefae0321d7c97e5a0b3 | null | [
"LICENSE"
] | 188 |
2.4 | kollabor-agent | 0.4.19 | Agent execution toolkit - tool execution, MCP integration, file operations, and shell commands | # kollabor-agent
Agent execution toolkit - tool execution, MCP integration, file operations, and shell commands.
## Install
```bash
pip install kollabor-agent
```
## Components
- **ToolExecutor** - orchestrates tool execution (MCP, file ops, shell)
- **FileOperationsExecutor** - file CRUD operations (edit, create, delete, move, copy, grep)
- **MCPIntegration** - Model Context Protocol server discovery and tool calling
- **ShellCommandService** - interactive shell command execution
- **ShellExecutor** - low-level async shell command runner
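A low-level async shell runner like the one **ShellExecutor** describes can be sketched with `asyncio` subprocesses — a minimal illustration of the idea, not the actual `ShellExecutor` API:

```python
import asyncio

async def run_shell(cmd: str, timeout: float = 30.0) -> tuple[int, str, str]:
    """Run a shell command asynchronously, returning (exit code, stdout, stderr)."""
    proc = await asyncio.create_subprocess_shell(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        out, err = await asyncio.wait_for(proc.communicate(), timeout)
    except asyncio.TimeoutError:
        proc.kill()  # don't leave the child running past the deadline
        raise
    return proc.returncode, out.decode(), err.decode()

code, out, err = asyncio.run(run_shell("echo hello"))
print(code, out.strip())  # → 0 hello
```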
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"kollabor-config>=0.4.19",
"kollabor-events>=0.4.19"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T18:45:44.585528 | kollabor_agent-0.4.19.tar.gz | 61,380 | 2e/b1/8672b8ab5d0057fb3d844b890da3554f5bf1ae9abe2acb7d330bb0da0bbf/kollabor_agent-0.4.19.tar.gz | source | sdist | null | false | 60d5ca033834337dc8648d2df0fa6e76 | 884e77597e4283b56fb1b7197792a10d4eee0db267cfde0b0485e1f51674797c | 2eb18672b8ab5d0057fb3d844b890da3554f5bf1ae9abe2acb7d330bb0da0bbf | null | [] | 202 |
2.4 | jupiter-subtes | 0.2.4 | Library for web automation in the Rio de Janeiro State Treasury systems | # Web Automation & File Management 🚀
This Python library was built to simplify the creation of automation scripts, combining the power of **Selenium** for web interactions with practical **operating system** utilities for managing files and folders.
The goal is to provide a high-level interface (more readable and less verbose) for common RPA (Robotic Process Automation) tasks.
---
## 🛠️ Main Features
The library is built on two core pillars:
### 1. AutomacaoWeb (Navigation)
* **Driver Management:** Optimized startup of Microsoft Edge, including support for **headless** mode (background execution).
* **Tab Control:** Smart opening, switching, and closing of tabs.
* **Advanced Interactions:** Clicks, typing (with automatic clearing), hover, and dropdown selection.
* **Wait Handling:** Native use of `WebDriverWait` to ensure elements exist before interacting, reducing synchronization errors.
* **Screenshots:** Built-in method for audit screenshots.
* **Iframe Support:** Easy switching into and out of frame contexts.
### 2. FileExplorer (File System)
* **File Manipulation:** Safely move, copy, rename, and delete files.
* **Organization:** Recursive directory creation and extension-filtered listings.
* **Download Intelligence:** Dedicated function to locate the most recent file in a folder (ideal for picking up freshly completed downloads).
---
## 📋 Prerequisites
Before using the library, install the required dependencies:
```bash
pip install selenium
```
*Note: Make sure **Microsoft Edge** is installed and that a **msedgedriver** matching your browser version is on your PATH.*
---
## 🚀 How to Use
Here is a quick example of how to combine the two classes in an automation flow:
```python
from automacao import AutomacaoWeb, FileExplorer

# 1. Start the web automation
web = AutomacaoWeb()
web.iniciar_driver(headless=False)

try:
    # Navigate and trigger a download (hypothetical example)
    web.abrir_url("https://exemplo.com/relatorios")
    web.clicar("//button[@id='download_csv']")

    # 2. Handle the downloaded file
    file_sys = FileExplorer()
    downloads_path = "C:/Users/Usuario/Downloads"

    # Locate the most recent CSV file
    arquivo = file_sys.obter_arquivo_mais_recente(downloads_path, extensao=".csv")

    if arquivo:
        file_sys.mover_arquivo(arquivo, "C:/Projeto/Dados/processar.csv")

    print("Automation finished successfully!")
finally:
    web.fechar_navegador()
```
---
## 📂 Code Structure
| Class | Description |
| --- | --- |
| `AutomacaoWeb` | Encapsulates the Selenium logic for interacting with the DOM and the browser. |
| `FileExplorer` | Uses the `os` and `shutil` libraries for local file manipulation. |
---
## 📝 Release Notes
* **Version 1.0:** Initial release with Edge support.
* **Error Handling:** All methods include `try-except` blocks to avoid abrupt interruptions and to simplify debugging via the console.
| text/markdown | EOP/SUPCONC | null | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | null | [] | [] | [] | [
"selenium",
"automaweb"
] | [] | [] | [] | [
"Homepage, https://github.com/bvkila/jupiter"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T18:45:17.094064 | jupiter_subtes-0.2.4.tar.gz | 13,952 | cb/a9/f0303e5f8fac1ce21c032ee7c57c0ab98d748b4048cea0f288138c4e30e7/jupiter_subtes-0.2.4.tar.gz | source | sdist | null | false | 36b9b959dae2d5dedb3ddab7dfc1edb5 | 59c2be8487a42ed2d86e04ee67f5036e78575ffcdd3e1e9baf080873593898ed | cba9f0303e5f8fac1ce21c032ee7c57c0ab98d748b4048cea0f288138c4e30e7 | null | [
"LICENSE"
] | 188 |
2.4 | hodoscope | 0.2.4 | Library for analyzing AI agent trajectories — extract actions, summarize, embed, and visualize. | # Hodoscope
[](https://pypi.org/project/hodoscope/)
[](https://pypi.org/project/hodoscope/)
[](LICENSE)
Unsupervised, human-in-the-loop trajectory analysis for AI agents. Summarize, embed, and visualize thousands of agent actions to find patterns across models and configurations. Supports common evaluation formats and any [LiteLLM](https://docs.litellm.ai/)-compatible model for summarization and embedding.
[Homepage](https://hodoscope.dev) · [Announcement blog](https://hodoscope.dev/blog/announcement.html)
## Why Hodoscope?
Running evals across multiple models and configurations produces a mountain of raw logs, but reading them one-by-one doesn't scale. Hodoscope gives you a bird's-eye view: it extracts every agent action from your eval trajectories, summarizes each one with an LLM, embeds the summaries into a shared vector space, and then projects them into interactive 2D plots. The result is a visual map where you can spot behavioral clusters, group by any metadata field, and use density overlays to see exactly where two groups of trajectories diverge or converge. No labels or pre-defined taxonomies required.
## Features
- **Multiple supported formats** -- [Inspect AI](https://inspect.ai-safety-institute.org.uk/) `.eval` files, [OpenHands](https://github.com/All-Hands-AI/OpenHands) JSONL trajectories, [Docent](https://github.com/docent-ai/docent) collections, and raw trajectory JSONs
- **Summarization & embedding** -- distill raw agent actions into concise natural-language summaries and embed them via any LLM supported by [LiteLLM](https://docs.litellm.ai/)
- **Dimensionality reduction** -- project embedded summaries into interactive 2D scatter plots with t-SNE (recommended), PCA, UMAP, TriMap, or PaCMAP
- **Density diffing and overlay** -- overlay difference in kernel density estimates to visualize where trajectory distributions differ
- **Flexible grouping** -- group summaries by any metadata field (`--group-by model`, `--group-by score`, `--group-by task`, etc.) to compare
- **Resumable processing** -- interrupt and resume long analysis runs with `--resume`; already-processed trajectories are skipped
- **Python API** -- every CLI command maps to a public function you can call directly in notebooks or scripts
## How It Works
```
source file ─→ actions ─→ summarize ─→ embed ─→ distribution diffing ─→ visualize
```
<!-- TODO: Replace with an actual screenshot of hodoscope viz output (e.g. a t-SNE plot colored by model). -->
<!-- A visualization screenshot here would show readers what the tool produces. -->
## Table of Contents
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Configuration](#configuration)
- [Quick Start](#quick-start)
- [CLI Reference](#cli-reference)
- [Trajectory Format](#trajectory-format)
- [Output Format](#output-format)
- [Testing](#testing)
- [Contributing](#contributing)
- [Citation](#citation)
- [License](#license)
## Prerequisites
- Python 3.11+
- By default: An **OpenAI** and a **Gemini** API key for summarization and embedding
- It's also possible to use other LLM API keys. For example, a single **[OpenRouter](https://openrouter.ai/)** API key
## Installation
```bash
pip install hodoscope
```
For development (editable install with tests):
```bash
pip install -e ".[dev]"
```
## Configuration
Create a `.env` file in the project root. Hodoscope loads it automatically at startup.
```
OPENAI_API_KEY=your-openai-key
GEMINI_API_KEY=your-gemini-key
# Optional: override defaults
# ⚠️ Default summarization model (gpt-5.2) could be expensive!
# SUMMARIZE_MODEL=openai/gpt-5.2
# EMBEDDING_MODEL=gemini/gemini-embedding-001
# MAX_WORKERS=10
```
You can also export these variables directly in your shell instead of using a `.env` file.
**Using OpenRouter (single API key):** If you prefer to use an OpenRouter key for both summarization and embedding, set `OPENROUTER_API_KEY` and prefix your model names with `openrouter/`:
```
OPENROUTER_API_KEY=your-openrouter-key
SUMMARIZE_MODEL=openrouter/openai/gpt-5.2
EMBEDDING_MODEL=openrouter/gemini/gemini-embedding-001
```
## Quick Start
```bash
# Analyze a single .eval file
hodoscope analyze run.eval
# Analyze all trajectory files in a directory
hodoscope analyze evals/
# Compare models
hodoscope viz model_*.hodoscope.json --group-by model --open
# Visualize a single result
hodoscope viz run.hodoscope.json --open
```
## CLI Reference
### `hodoscope analyze`
Process source files (.eval, directories, Docent collections) into `.hodoscope.json` analysis files.
```bash
hodoscope analyze SOURCES [OPTIONS]
Options:
--docent-id TEXT Docent collection ID as source
-o, --output TEXT Output JSON path (single source only)
--field TEXT KEY=VALUE metadata (repeatable)
-l, --limit INTEGER Limit trajectories per source
--save-samples PATH Save extracted trajectory JSONs to directory
--embed-dim INTEGER Embedding dimensionality (default: follow API default)
-m, --model-name TEXT Override auto-detected model name
--summarize-model TEXT LiteLLM model for summarization (default: openai/gpt-5.2)
--embedding-model TEXT LiteLLM model for embeddings (default: gemini/gemini-embedding-001)
--sample / --no-sample Randomly sample trajectories (use with --limit)
--seed INTEGER Random seed for --sample reproducibility
--resume / --no-resume Resume from existing output (default: on)
--reasoning-effort [low|medium|high]
Reasoning effort for summarization model
--max-workers INTEGER Max parallel workers for LLM calls (default: 10)
--reembed Re-embed existing summaries (e.g. after changing embedding model/dim)
```
Examples:
```bash
hodoscope analyze run.eval # .eval → analysis JSON
hodoscope analyze *.eval # batch: all .eval files
hodoscope analyze evals/ # batch: dir of .eval files
hodoscope analyze run.eval -o my_output.hodoscope.json # custom output path
hodoscope analyze run.eval --field env=prod # add custom metadata
hodoscope analyze run.eval --save-samples ./samples/ # save extracted trajectories
hodoscope analyze --docent-id COLLECTION_ID # docent source
hodoscope analyze path/to/samples/ # directory of trajectory JSONs
hodoscope analyze run.eval --summarize-model gemini/gemini-2.0-flash
hodoscope analyze run.eval --limit 5 --sample --seed 42
hodoscope analyze run.eval --no-resume # overwrite existing output
```
### `hodoscope viz`
Visualize analysis JSON files with interactive plots. Groups summaries by any metadata field.
```bash
hodoscope viz SOURCES [OPTIONS]
Options:
--group-by TEXT Field to group by (default: model)
--proj TEXT Projection methods: pca, tsne, umap, trimap, pacmap
(comma-separated or repeated; * or all for all; default: tsne)
-o, --output TEXT Output HTML file path (default: auto-generated timestamped name)
--filter TEXT KEY=VALUE metadata filter (repeatable, AND logic)
--open Open the generated HTML in the default browser
```
Examples:
```bash
hodoscope viz output.hodoscope.json # visualize a single analysis file (grouped by model)
hodoscope viz *.hodoscope.json --group-by score # group by score field
hodoscope viz *.hodoscope.json --proj tsne,umap # specific projection methods
hodoscope viz *.hodoscope.json --proj '*' # all methods (will be slow!)
hodoscope viz *.hodoscope.json --filter score=1.0 # only score=1.0 ones
hodoscope viz *.hodoscope.json --open # open in default browser
```
### `hodoscope sample`
Sample representative summaries using density-weighted Farthest Point Sampling on 2D projections.
> **Note:** While this command could be useful for scripting and automated pipelines, we find the interactive visualization (`hodoscope viz`) to be more intuitive and effective for human-in-the-loop explorations.
```bash
hodoscope sample SOURCES [OPTIONS]
Options:
--group-by TEXT Field to group by (default: model)
-n, --samples-per-group INTEGER
Number of representative samples per group (default: 10)
--proj TEXT Projection method for FPS ranking (pca, tsne, umap, trimap, pacmap; default: tsne)
-o, --output TEXT JSON output file (default: paginated terminal display)
--interleave Interleave groups by rank (#1 from each group, then #2, etc.)
--filter TEXT KEY=VALUE metadata filter (repeatable, AND logic)
```
Examples:
```bash
hodoscope sample output.hodoscope.json # suggest 10 per group
hodoscope sample output.hodoscope.json --group-by score -n 5 # suggest 5 per score group
hodoscope sample output.hodoscope.json --proj pca # use PCA projection
hodoscope sample output.hodoscope.json -o sampled.json # write JSON output
hodoscope sample a.hodoscope.json b.hodoscope.json --interleave # interleave groups by rank for easier comparison
hodoscope sample output.hodoscope.json --filter score=1.0 # only score=1.0 summaries
```
### `hodoscope info`
Show metadata, summary counts, and API key status for analysis JSON files.
```bash
hodoscope info output.hodoscope.json
hodoscope info results/
```
## Trajectory Format
Hodoscope first converts other trajectory sources (`.eval` files, Docent collections, etc.) to the canonical JSON format before processing. You can also pass trajectories directly in this format:
```json
{
  "id": "unique-trajectory-id",
  "messages": [
    {"role": "user", "content": "..."},
    {"role": "assistant", "content": "..."},
    ...
  ],
  "metadata": {...}
}
```
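A quick way to sanity-check a file against this shape before handing it to `hodoscope analyze` — `validate_trajectory` below is a hypothetical helper for illustration, not part of the hodoscope API:

```python
import json

def validate_trajectory(text: str) -> dict:
    """Raise ValueError unless `text` parses into the canonical trajectory shape."""
    traj = json.loads(text)
    missing = {"id", "messages"} - traj.keys()
    if missing:
        raise ValueError(f"missing top-level keys: {sorted(missing)}")
    for i, msg in enumerate(traj["messages"]):
        if "role" not in msg or "content" not in msg:
            raise ValueError(f"message {i} lacks 'role' or 'content'")
    return traj

sample = '{"id": "t-1", "messages": [{"role": "user", "content": "hi"}], "metadata": {}}'
print(validate_trajectory(sample)["id"])  # → t-1
```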
## Output Format
`hodoscope analyze` produces `.hodoscope.json` files:
```json
{
  "version": 1,
  "created_at": "...",
  "source": "path/to/run.eval",
  "fields": {"model": "gpt-5", "task": "swe_bench", "accuracy": 0.8, "...": "..."},
  "embedding_model": "gemini/gemini-embedding-001",
  "embedding_dimensionality": 768,
  "summaries": [
    {
      "trajectory_id": "django__django-12345_epoch_1",
      "turn_id": 3,
      "summary": "Update assertion to match expected output",
      "action_text": "...",
      "task_context": "...",
      "embedding": "<base85-encoded float32 array>",
      "metadata": {"score": 1.0, "instance_id": "django__django-12345", "...": "..."}
    },
    "..."
  ]
}
```
Key concepts:
- **`fields`**: File-level metadata auto-detected from .eval header (model, task, dataset_name, solver, run_id, accuracy, etc.) plus custom `--field` values. Same for all summaries.
- **`metadata`**: Per-trajectory metadata. All `sample.metadata` keys from .eval files are passed through, plus extracted keys (score, epoch, target, token usage, etc.). Varies per summary.
- **`--group-by` resolution**: Checks per-summary `metadata` first, then file-level `fields`.
- **`embedding`**: RFC 1924 base85-encoded `float32` numpy array.
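Since `base64.b85decode` in the Python standard library uses the same RFC 1924 alphabet, the `embedding` field can be unpacked without hodoscope itself — a sketch assuming the float32 bytes are little-endian (numpy's default on common platforms):

```python
import base64
import struct

def decode_embedding(b85_text: str) -> list[float]:
    """Decode an RFC 1924 base85 string into a list of float32 values."""
    raw = base64.b85decode(b85_text)
    # four bytes per float32; "<" = little-endian (an assumption here)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))

# round-trip check with three exactly-representable float32 values
packed = struct.pack("<3f", 0.25, -1.0, 2.0)
encoded = base64.b85encode(packed).decode("ascii")
print(decode_embedding(encoded))  # → [0.25, -1.0, 2.0]
```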
## Testing
```bash
# Run the full test suite
pytest
# Unit tests only (no API keys needed)
pytest tests/test_io.py tests/test_viz.py tests/test_api.py tests/test_sampling.py
# End-to-end tests (requires API keys)
pytest tests/test_analyze.py
```
## Contributing
Contributions are welcome! We recommend opening an issue to discuss what you'd like to change before submitting a pull request.
## Citation
```bibtex
@article{zhong2026hodoscope,
title={Hodoscope: Unsupervised Behavior Discovery in AI Agents},
author={Zhong, Ziqian and Saxena, Shashwat and Raghunathan, Aditi},
year={2026},
url={https://hodoscope.dev/blog/announcement.html}
}
```
## License
This project is licensed under the MIT License. See [LICENSE](LICENSE) for details.
| text/markdown | null | Ziqian Zhong <ziqianz@andrew.cmu.edu> | null | null | null | ai, agent, trajectory, visualization, llm, embedding | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Visualization"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"litellm",
"python-dotenv",
"tqdm",
"numpy",
"scikit-learn",
"matplotlib",
"bokeh",
"umap-learn",
"trimap",
"pacmap",
"click",
"docent-python",
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/AR-FORUM/hodoscope",
"Repository, https://github.com/AR-FORUM/hodoscope",
"Issues, https://github.com/AR-FORUM/hodoscope/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:45:15.403445 | hodoscope-0.2.4.tar.gz | 67,941 | eb/69/94c5ca68a807bb81b51987deb24300222c35fe01b47b4ce3f83f74854455/hodoscope-0.2.4.tar.gz | source | sdist | null | false | beebb3758f96c9be339f88b75b543437 | d87578bfbd0890cbfb2cf572ec7fa088024e70de72d70b0a347213e251633442 | eb6994c5ca68a807bb81b51987deb24300222c35fe01b47b4ce3f83f74854455 | MIT | [
"LICENSE"
] | 172 |
2.3 | marimo-base | 0.20.1 | A library for making reactive notebooks and apps | <p align="center">
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/marimo-logotype-thick.svg">
</p>
<p align="center">
<em>A reactive Python notebook that's reproducible, git-friendly, and deployable as scripts or apps.</em>
</p>
<p align="center">
<a href="https://docs.marimo.io" target="_blank"><strong>Docs</strong></a> ·
<a href="https://marimo.io/discord?ref=readme" target="_blank"><strong>Discord</strong></a> ·
<a href="https://docs.marimo.io/examples/" target="_blank"><strong>Examples</strong></a> ·
<a href="https://marimo.io/gallery/" target="_blank"><strong>Gallery</strong></a> ·
<a href="https://www.youtube.com/@marimo-team/" target="_blank"><strong>YouTube</strong></a>
</p>
<p align="center">
<b>English</b>
<b> | </b>
<a href="https://github.com/marimo-team/marimo/blob/main/README_Traditional_Chinese.md" target="_blank"><b>繁體中文</b></a>
<b> | </b>
<a href="https://github.com/marimo-team/marimo/blob/main/README_Chinese.md" target="_blank"><b>简体中文</b></a>
<b> | </b>
<a href="https://github.com/marimo-team/marimo/blob/main/README_Japanese.md" target="_blank"><b>日本語</b></a>
<b> | </b>
<a href="https://github.com/marimo-team/marimo/blob/main/README_Spanish.md" target="_blank"><b>Español</b></a>
</p>
<p align="center">
<a href="https://pypi.org/project/marimo/"><img src="https://img.shields.io/pypi/v/marimo?color=%2334D058&label=pypi"/></a>
<a href="https://anaconda.org/conda-forge/marimo"><img src="https://img.shields.io/conda/vn/conda-forge/marimo.svg"/></a>
<a href="https://marimo.io/discord?ref=readme"><img src="https://shields.io/discord/1059888774789730424" alt="discord"/></a>
<img alt="Pepy Total Downloads" src="https://img.shields.io/pepy/dt/marimo?label=pypi%20%7C%20downloads"/>
<img alt="Conda Downloads" src="https://img.shields.io/conda/d/conda-forge/marimo"/>
<a href="https://github.com/marimo-team/marimo/blob/main/LICENSE"><img src="https://img.shields.io/pypi/l/marimo"/></a>
</p>
**marimo** is a reactive Python notebook: run a cell or interact with a UI
element, and marimo automatically runs dependent cells (or <a href="#expensive-notebooks">marks them as stale</a>), keeping code and outputs
consistent. marimo notebooks are stored as pure Python (with first-class SQL support), executable as scripts,
and deployable as apps.
**Highlights**.
- 🚀 **batteries-included:** replaces `jupyter`, `streamlit`, `jupytext`, `ipywidgets`, `papermill`, and more
- ⚡️ **reactive**: run a cell, and marimo reactively [runs all dependent cells](https://docs.marimo.io/guides/reactivity.html) or <a href="#expensive-notebooks">marks them as stale</a>
- 🖐️ **interactive:** [bind sliders, tables, plots, and more](https://docs.marimo.io/guides/interactivity.html) to Python — no callbacks required
- 🐍 **git-friendly:** stored as `.py` files
- 🛢️ **designed for data**: query dataframes, databases, warehouses, or lakehouses [with SQL](https://docs.marimo.io/guides/working_with_data/sql.html), filter and search [dataframes](https://docs.marimo.io/guides/working_with_data/dataframes.html)
- 🤖 **AI-native**: [generate cells with AI](https://docs.marimo.io/guides/generate_with_ai/) tailored for data work
- 🔬 **reproducible:** [no hidden state](https://docs.marimo.io/guides/reactivity.html#no-hidden-state), deterministic execution, [built-in package management](https://docs.marimo.io/guides/package_management/)
- 🏃 **executable:** [execute as a Python script](https://docs.marimo.io/guides/scripts.html), parameterized by CLI args
- 🛜 **shareable**: [deploy as an interactive web app](https://docs.marimo.io/guides/apps.html) or [slides](https://docs.marimo.io/guides/apps.html#slides-layout), [run in the browser via WASM](https://docs.marimo.io/guides/wasm.html)
- 🧩 **reusable:** [import functions and classes](https://docs.marimo.io/guides/reusing_functions/) from one notebook to another
- 🧪 **testable:** [run pytest](https://docs.marimo.io/guides/testing/) on notebooks
- ⌨️ **a modern editor**: [GitHub Copilot](https://docs.marimo.io/guides/editor_features/ai_completion.html#github-copilot), [AI assistants](https://docs.marimo.io/guides/editor_features/ai_completion.html), vim keybindings, variable explorer, and [more](https://docs.marimo.io/guides/editor_features/index.html)
- 🧑💻 **use your favorite editor**: run in [VS Code or Cursor](https://marketplace.visualstudio.com/items?itemName=marimo-team.vscode-marimo), or edit in neovim, Zed, [or any other text editor](https://docs.marimo.io/guides/editor_features/watching/)
```bash
pip install marimo && marimo tutorial intro
```
_Get started instantly with [**mo**lab, our free online
notebook](https://molab.marimo.io/notebooks). Or jump to the
[quickstart](#quickstart) for a primer on our CLI._
## A reactive programming environment
marimo guarantees your notebook code, outputs, and program state are consistent. This [solves many problems](https://docs.marimo.io/faq.html#faq-problems) associated with traditional notebooks like Jupyter.
**A reactive programming environment.**
Run a cell and marimo _reacts_ by automatically running the cells that
reference its variables, eliminating the error-prone task of manually
re-running cells. Delete a cell and marimo scrubs its variables from program
memory, eliminating hidden state.
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/reactive.gif" width="700px" />
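The reactive model can be sketched in a few lines of plain Python. This is a toy illustration of the idea (change a variable, re-run every cell that reads it), not marimo's actual internals; the `ReactiveGraph` class and its methods are invented for this sketch.

```python
# Toy sketch of reactive execution: when a variable changes, every
# "cell" that reads it is re-run. Illustrative only; marimo's real
# runtime builds this graph automatically via static analysis.
class ReactiveGraph:
    def __init__(self):
        self.values = {}  # variable name -> current value
        self.cells = {}   # cell name -> (names it reads, function to run)

    def add_cell(self, name, reads, fn):
        self.cells[name] = (reads, fn)

    def set(self, var, value):
        self.values[var] = value
        # Re-run every cell that depends on the updated variable.
        for _, (reads, fn) in self.cells.items():
            if var in reads:
                fn(self.values)

g = ReactiveGraph()
results = {}
g.add_cell("square", reads={"x"}, fn=lambda v: results.update(square=v["x"] ** 2))
g.set("x", 3)  # updating x re-runs the dependent cell
print(results["square"])  # 9
g.set("x", 5)
print(results["square"])  # 25
```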
<a name="expensive-notebooks"></a>
**Compatible with expensive notebooks.** marimo lets you [configure the runtime
to be
lazy](https://docs.marimo.io/guides/configuration/runtime_configuration.html),
marking affected cells as stale instead of automatically running them. This
gives you guarantees on program state while preventing accidental execution of
expensive cells.
**Synchronized UI elements.** Interact with [UI
elements](https://docs.marimo.io/guides/interactivity.html) like [sliders](https://docs.marimo.io/api/inputs/slider.html#slider),
[dropdowns](https://docs.marimo.io/api/inputs/dropdown.html), [dataframe
transformers](https://docs.marimo.io/api/inputs/dataframe.html), and [chat
interfaces](https://docs.marimo.io/api/inputs/chat.html), and the cells that
use them are automatically re-run with their latest values.
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/readme-ui.gif" width="700px" />
**Interactive dataframes.** [Page through, search, filter, and
sort](https://docs.marimo.io/guides/working_with_data/dataframes.html)
millions of rows blazingly fast, no code required.
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/docs-df.gif" width="700px" />
**Generate cells with data-aware AI.** [Generate code with an AI
assistant](https://docs.marimo.io/guides/editor_features/ai_completion/) that is highly
specialized for working with data, with context about your variables in memory;
[zero-shot entire notebooks](https://docs.marimo.io/guides/generate_with_ai/text_to_notebook/).
Customize the system prompt, bring your own API keys, or use local models.
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/readme-generate-with-ai.gif" width="700px" />
**Query data with SQL.** Build [SQL](https://docs.marimo.io/guides/working_with_data/sql.html) queries
that depend on Python values and execute them against dataframes, databases, lakehouses,
CSVs, Google Sheets, or anything else using our built-in SQL engine, which
returns the result as a Python dataframe.
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/readme-sql-cell.png" width="700px" />
Your notebooks are still pure Python, even if they use SQL.
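As a stdlib-only illustration of the pattern (SQL parametrized by a Python value, results returned to Python), here is a `sqlite3` sketch; marimo's actual SQL cells use its built-in engine (DuckDB via the `sql` extra) and return dataframes, so the table and values below are invented for the example.

```python
import sqlite3

# Illustrates the idea behind marimo's SQL cells with stdlib sqlite3:
# a query that depends on a Python variable, with results back in Python.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 120.0), ("south", 80.0), ("north", 50.0)],
)

min_amount = 60.0  # a Python value the query depends on
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales WHERE amount > ? GROUP BY region",
    (min_amount,),
).fetchall()
print(sorted(rows))  # [('north', 120.0), ('south', 80.0)]
```

In a marimo notebook, editing `min_amount` would reactively re-run the query cell.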
**Dynamic markdown.** Use markdown parametrized by Python variables to tell
dynamic stories that depend on Python data.
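The core idea is just interpolating live Python values into markdown text (in a notebook, the resulting string would be rendered by marimo's markdown cell); the variable names below are made up for the sketch.

```python
# Dynamic markdown: the rendered text is parametrized by Python data,
# so it updates whenever the underlying values change.
n_rows, threshold = 1_204, 0.75
report = f"""## Data summary

Loaded **{n_rows:,}** rows; kept those with score > {threshold:.0%}.
"""
print(report)
```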
**Built-in package management.** marimo has built-in support for all major
package managers, letting you [install packages on import](https://docs.marimo.io/guides/editor_features/package_management.html). marimo can even
[serialize package requirements](https://docs.marimo.io/guides/package_management/inlining_dependencies/) in notebook files and auto-install them in isolated venv sandboxes.
**Deterministic execution order.** Notebooks are executed in a deterministic
order, based on variable references instead of cells' positions on the page.
Organize your notebooks to best fit the stories you'd like to tell.
**Performant runtime.** marimo runs only those cells that need to be run by
statically analyzing your code.
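A minimal sketch of how static analysis can recover a notebook's dependency structure, using the standard-library `ast` module: find the names each cell defines and reads, then re-run only cells that read a changed name. This is illustrative only; marimo's analysis handles many more cases.

```python
import ast

def defs_and_refs(src):
    """Return (names defined, names read) for a cell's source code."""
    defs, refs = set(), set()
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Name):
            (defs if isinstance(node.ctx, ast.Store) else refs).add(node.id)
    return defs, refs - defs

cells = {
    "a": "x = 1",
    "b": "y = x + 1",
    "c": "z = y * 2",
}
deps = {name: defs_and_refs(src) for name, src in cells.items()}
# Cell "b" reads x, so editing cell "a" (which defines x) makes "b" stale.
stale = [name for name, (_, reads) in deps.items() if "x" in reads]
print(stale)  # ['b']
```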
**Batteries-included.** marimo comes with GitHub Copilot, AI assistants, Ruff
code formatting, HTML export, fast code completion, a [VS Code
extension](https://marketplace.visualstudio.com/items?itemName=marimo-team.vscode-marimo),
an interactive dataframe viewer, and [many more](https://docs.marimo.io/guides/editor_features/index.html)
quality-of-life features.
## Quickstart
_The [marimo concepts
playlist](https://www.youtube.com/watch?v=3N6lInzq5MI&list=PLNJXGo8e1XT9jP7gPbRdm1XwloZVFvLEq)
on our [YouTube channel](https://www.youtube.com/@marimo-team) gives an
overview of many features._
**Installation.** In a terminal, run
```bash
pip install marimo # or conda install -c conda-forge marimo
marimo tutorial intro
```
To install with additional dependencies that unlock SQL cells, AI completion, and more,
run
```bash
pip install "marimo[recommended]"
```
**Create notebooks.**
Create or edit notebooks with
```bash
marimo edit
```
**Run apps.** Run your notebook as a web app, with Python
code hidden and uneditable:
```bash
marimo run your_notebook.py
```
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/docs-model-comparison.gif" style="border-radius: 8px" width="450px" />
**Execute as scripts.** Execute a notebook as a script at the
command line:
```bash
python your_notebook.py
```
**Automatically convert Jupyter notebooks.** Convert Jupyter
notebooks to marimo notebooks with the CLI
```bash
marimo convert your_notebook.ipynb > your_notebook.py
```
or use our [web interface](https://marimo.io/convert).
**Tutorials.**
List all tutorials:
```bash
marimo tutorial --help
```
**Share cloud-based notebooks.** Use
[molab](https://molab.marimo.io/notebooks), a cloud-based marimo notebook
service similar to Google Colab, to create and share notebook links.
## Questions?
See the [FAQ](https://docs.marimo.io/faq.html) at our docs.
## Learn more
marimo is easy to get started with, and has lots of room for power users.
For example, here's an embedding visualizer made in marimo
([try the notebook live on molab!](https://molab.marimo.io/notebooks/nb_jJiFFtznAy4BxkrrZA1o9b/app?show-code=true)):
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/embedding.gif" width="700px" />
Check out our [docs](https://docs.marimo.io),
[usage examples](https://docs.marimo.io/examples/), and our [gallery](https://marimo.io/gallery) to learn more.
<table border="0">
<tr>
<td>
<a target="_blank" href="https://docs.marimo.io/getting_started/key_concepts.html">
<img src="https://docs.marimo.io/_static/reactive.gif" style="max-height: 150px; width: auto; display: block" />
</a>
</td>
<td>
<a target="_blank" href="https://docs.marimo.io/api/inputs/index.html">
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/readme-ui.gif" style="max-height: 150px; width: auto; display: block" />
</a>
</td>
<td>
<a target="_blank" href="https://docs.marimo.io/guides/working_with_data/plotting.html">
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/docs-intro.gif" style="max-height: 150px; width: auto; display: block" />
</a>
</td>
<td>
<a target="_blank" href="https://docs.marimo.io/api/layouts/index.html">
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/outputs.gif" style="max-height: 150px; width: auto; display: block" />
</a>
</td>
</tr>
<tr>
<td>
<a target="_blank" href="https://docs.marimo.io/getting_started/key_concepts.html"> Tutorial </a>
</td>
<td>
<a target="_blank" href="https://docs.marimo.io/api/inputs/index.html"> Inputs </a>
</td>
<td>
<a target="_blank" href="https://docs.marimo.io/guides/working_with_data/plotting.html"> Plots </a>
</td>
<td>
<a target="_blank" href="https://docs.marimo.io/api/layouts/index.html"> Layout </a>
</td>
</tr>
<tr>
<td>
<a target="_blank" href="https://molab.marimo.io/notebooks/nb_TWVGCgZZK4L8zj5ziUBNVL">
<img src="https://marimo.io/molab-shield.svg"/>
</a>
</td>
<td>
<a target="_blank" href="https://molab.marimo.io/notebooks/nb_WuoXgs7mjg5yqrMxJXjRpF">
<img src="https://marimo.io/molab-shield.svg"/>
</a>
</td>
<td>
<a target="_blank" href="https://molab.marimo.io/notebooks/nb_vXxD13t2RoMTLjC89qdn6c">
<img src="https://marimo.io/molab-shield.svg"/>
</a>
</td>
<td>
<a target="_blank" href="https://molab.marimo.io/notebooks/nb_XpXx8MX99dWAjn4k1b3xiU">
<img src="https://marimo.io/molab-shield.svg"/>
</a>
</td>
</tr>
</table>
## Contributing
We appreciate all contributions! You don't need to be an expert to help out.
Please see [CONTRIBUTING.md](https://github.com/marimo-team/marimo/blob/main/CONTRIBUTING.md) for more details on how to get
started.
> Questions? Reach out to us [on Discord](https://marimo.io/discord?ref=readme).
## Community
We're building a community. Come hang out with us!
- 🌟 [Star us on GitHub](https://github.com/marimo-team/marimo)
- 💬 [Chat with us on Discord](https://marimo.io/discord?ref=readme)
- 📧 [Subscribe to our Newsletter](https://marimo.io/newsletter)
- ☁️ [Join our Cloud Waitlist](https://marimo.io/cloud)
- ✏️ [Start a GitHub Discussion](https://github.com/marimo-team/marimo/discussions)
- 🦋 [Follow us on Bluesky](https://bsky.app/profile/marimo.io)
- 🐦 [Follow us on Twitter](https://twitter.com/marimo_io)
- 🎥 [Subscribe on YouTube](https://www.youtube.com/@marimo-team)
- 🤖 [Follow us on Reddit](https://www.reddit.com/r/marimo_notebook)
- 🕴️ [Follow us on LinkedIn](https://www.linkedin.com/company/marimo-io)
**A NumFOCUS affiliated project.** marimo is a core part of the broader Python
ecosystem and is a member of the NumFOCUS community, which includes projects
such as NumPy, SciPy, and Matplotlib.
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/numfocus_affiliated_project.png" height="40px" />
## Inspiration ✨
marimo is a **reinvention** of the Python notebook as a reproducible, interactive,
and shareable Python program, instead of an error-prone JSON scratchpad.
We believe that the tools we use shape the way we think — better tools, for
better minds. With marimo, we hope to provide the Python community with a
better programming environment to do research and communicate it; to experiment
with code and share it; to learn computational science and teach it.
Our inspiration comes from many places and projects, especially
[Pluto.jl](https://github.com/fonsp/Pluto.jl),
[ObservableHQ](https://observablehq.com/tutorials), and
[Bret Victor's essays](http://worrydream.com/). marimo is part of
a greater movement toward reactive dataflow programming. From
[IPyflow](https://github.com/ipyflow/ipyflow), [streamlit](https://github.com/streamlit/streamlit),
[TensorFlow](https://github.com/tensorflow/tensorflow),
[PyTorch](https://github.com/pytorch/pytorch/tree/main),
[JAX](https://github.com/google/jax), and
[React](https://github.com/facebook/react), the ideas of functional,
declarative, and reactive programming are transforming a broad range of tools
for the better.
<p align="right">
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/marimo-logotype-horizontal.png" height="200px">
</p>
| text/markdown | null | null | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"Operating System :: OS Independent",
"License :: OSI Approved :: Apache Software License",
"Environment :: Console",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Education",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click<9,>=8.0",
"jedi>=0.18.0",
"markdown<4,>=3.6",
"pymdown-extensions<11,>=10.15",
"pygments<3,>=2.19",
"tomlkit>=0.12.0",
"pyyaml>=6.0.1",
"uvicorn>=0.22.0",
"starlette>=0.37.2",
"websockets>=14.2.0",
"loro>=1.10.0",
"typing-extensions>=4.4.0; python_full_version < \"3.11\"",
"docutils>=0.16.0",
"psutil>=5.0",
"itsdangerous>=2.0.0",
"narwhals>=2.0.0",
"packaging",
"msgspec>=0.20.0",
"python-lsp-server>=1.13.0; extra == \"lsp\"",
"python-lsp-ruff>=2.0.0; extra == \"lsp\"",
"mcp>=1.0.0; extra == \"mcp\"",
"pydantic>2; extra == \"mcp\"",
"marimo[sql]; extra == \"recommended\"",
"marimo[sandbox]; extra == \"recommended\"",
"altair>=5.4.0; extra == \"recommended\"",
"pydantic-ai-slim[openai]>=1.39.0; extra == \"recommended\"",
"ruff; extra == \"recommended\"",
"nbformat>=5.7.0; extra == \"recommended\"",
"pyzmq>=27.1.0; extra == \"sandbox\"",
"uv>=0.9.21; extra == \"sandbox\"",
"duckdb>=1.0.0; extra == \"sql\"",
"polars[pyarrow]>=1.9.0; extra == \"sql\"",
"sqlglot[rs]<28.7.0,>=26.2.0; extra == \"sql\""
] | [] | [] | [] | [
"homepage, https://github.com/marimo-team/marimo"
] | Hatch/1.16.3 cpython/3.13.12 HTTPX/0.28.1 | 2026-02-20T18:44:39.747066 | marimo_base-0.20.1.tar.gz | 1,182,797 | 5e/49/cb9172aa0f3ebf0eb4e235d92b111c5d04b212d1569c70e9b0ed995437b9/marimo_base-0.20.1.tar.gz | source | sdist | null | false | 2faec0ce4d7048b9326da8730d918392 | 2c654d1c21280f7f6e381252c7a140a93a2f845e308c83cfa2b55b8db2f50f31 | 5e49cb9172aa0f3ebf0eb4e235d92b111c5d04b212d1569c70e9b0ed995437b9 | null | [] | 188 |
2.4 | basic-memory | 0.18.5 | Local-first knowledge management combining Zettelkasten with knowledge graphs | <!-- mcp-name: io.github.basicmachines-co/basic-memory -->
[](https://www.gnu.org/licenses/agpl-3.0)
[](https://badge.fury.io/py/basic-memory)
[](https://www.python.org/downloads/)
[](https://github.com/basicmachines-co/basic-memory/actions)
[](https://github.com/astral-sh/ruff)


## 🚀 Basic Memory Cloud is Live!
- **Cross-device and multi-platform support is here.** Your knowledge graph now works on desktop, web, and mobile - seamlessly synced across all your AI tools (Claude, ChatGPT, Gemini, Claude Code, and Codex)
- **Early Supporter Pricing:** Early users get 25% off forever.
The open source project continues as always. Cloud just makes it work everywhere.
[Sign up now →](https://basicmemory.com) with a 7-day free trial
# Basic Memory
Basic Memory lets you build persistent knowledge through natural conversations with Large Language Models (LLMs) like
Claude, while keeping everything in simple Markdown files on your computer. It uses the Model Context Protocol (MCP) to
enable any compatible LLM to read and write to your local knowledge base.
- Website: https://basicmemory.com
- Documentation: https://docs.basicmemory.com
## Pick up your conversation right where you left off
- AI assistants can load context from local files in a new conversation
- Notes are saved locally as Markdown files in real time
- No project knowledge or special prompting required
https://github.com/user-attachments/assets/a55d8238-8dd0-454a-be4c-8860dbbd0ddc
## Quick Start
```bash
# Install with uv (recommended)
uv tool install basic-memory
# Configure Claude Desktop (edit ~/Library/Application Support/Claude/claude_desktop_config.json)
# Add this to your config:
{
"mcpServers": {
"basic-memory": {
"command": "uvx",
"args": [
"basic-memory",
"mcp"
]
}
}
}
# Now in Claude Desktop, you can:
# - Write notes with "Create a note about coffee brewing methods"
# - Read notes with "What do I know about pour over coffee?"
# - Search with "Find information about Ethiopian beans"
```
You can view shared context via files in `~/basic-memory` (default directory location).
## Why Basic Memory?
Most LLM interactions are ephemeral - you ask a question, get an answer, and everything is forgotten. Each conversation
starts fresh, without the context or knowledge from previous ones. Current workarounds have limitations:
- Chat histories capture conversations but aren't structured knowledge
- RAG systems can query documents but don't let LLMs write back
- Vector databases require complex setups and often live in the cloud
- Knowledge graphs typically need specialized tools to maintain
Basic Memory addresses these problems with a simple approach: structured Markdown files that both humans and LLMs can
read and write to. The key advantages:
- **Local-first:** All knowledge stays in files you control
- **Bi-directional:** Both you and the LLM read and write to the same files
- **Structured yet simple:** Uses familiar Markdown with semantic patterns
- **Traversable knowledge graph:** LLMs can follow links between topics
- **Standard formats:** Works with existing editors like Obsidian
- **Lightweight infrastructure:** Just local files indexed in a local SQLite database
With Basic Memory, you can:
- Have conversations that build on previous knowledge
- Create structured notes during natural conversations
- Have conversations with LLMs that remember what you've discussed before
- Navigate your knowledge graph semantically
- Keep everything local and under your control
- Use familiar tools like Obsidian to view and edit notes
- Build a personal knowledge base that grows over time
- Sync your knowledge to the cloud with bidirectional synchronization
- Authenticate and manage cloud projects with subscription validation
- Mount cloud storage for direct file access
## How It Works in Practice
Let's say you're exploring coffee brewing methods and want to capture your knowledge. Here's how it works:
1. Start by chatting normally:
```
I've been experimenting with different coffee brewing methods. Key things I've learned:
- Pour over gives more clarity in flavor than French press
- Water temperature is critical - around 205°F seems best
- Freshly ground beans make a huge difference
```
...continue the conversation.
2. Ask the LLM to help structure this knowledge:
```
"Let's write a note about coffee brewing methods."
```
LLM creates a new Markdown file on your system (which you can see instantly in Obsidian or your editor):
```markdown
---
title: Coffee Brewing Methods
permalink: coffee-brewing-methods
tags:
- coffee
- brewing
---
# Coffee Brewing Methods
## Observations
- [method] Pour over provides more clarity and highlights subtle flavors
- [technique] Water temperature at 205°F (96°C) extracts optimal compounds
- [principle] Freshly ground beans preserve aromatics and flavor
## Relations
- relates_to [[Coffee Bean Origins]]
- requires [[Proper Grinding Technique]]
- affects [[Flavor Extraction]]
```
The note embeds semantic content and links to other topics via simple Markdown formatting.
3. You see this file on your computer in real time in the current project directory (default `~/basic-memory`).
- Real-time sync can be enabled by running `basic-memory sync --watch`
4. In a chat with the LLM, you can reference a topic:
```
Look at `coffee-brewing-methods` for context about pour over coffee
```
The LLM can now build rich context from the knowledge graph. For example:
```
Following relation 'relates_to [[Coffee Bean Origins]]':
- Found information about Ethiopian Yirgacheffe
- Notes on Colombian beans' nutty profile
- Altitude effects on bean characteristics
Following relation 'requires [[Proper Grinding Technique]]':
- Burr vs. blade grinder comparisons
- Grind size recommendations for different methods
- Impact of consistent particle size on extraction
```
Each related document can lead to more context, building a rich semantic understanding of your knowledge base.
This creates a two-way flow where:
- Humans write and edit Markdown files
- LLMs read and write through the MCP protocol
- Sync keeps everything consistent
- All knowledge stays in local files.
## Technical Implementation
Under the hood, Basic Memory:
1. Stores everything in Markdown files
2. Uses a SQLite database for searching and indexing
3. Extracts semantic meaning from simple Markdown patterns
- Files become `Entity` objects
- Each `Entity` can have `Observations`, or facts associated with it
- `Relations` connect entities together to form the knowledge graph
4. Maintains the local knowledge graph derived from the files
5. Provides bidirectional synchronization between files and the knowledge graph
6. Implements the Model Context Protocol (MCP) for AI integration
7. Exposes tools that let AI assistants traverse and manipulate the knowledge graph
8. Uses memory:// URLs to reference entities across tools and conversations
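To make steps 1–2 concrete, here is a toy, stdlib-only sketch of indexing Markdown notes into SQLite for substring search. The schema and the `index_notes`/`search` helpers are hypothetical illustrations, not Basic Memory's actual implementation:

```python
import sqlite3

def index_notes(notes):
    """notes: dict mapping permalink -> markdown text. Returns an in-memory DB."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE entity (permalink TEXT PRIMARY KEY, body TEXT)")
    # dict.items() yields (permalink, body) pairs matching the two columns
    db.executemany("INSERT INTO entity VALUES (?, ?)", notes.items())
    db.commit()
    return db

def search(db, term):
    """Return permalinks of notes whose body contains `term`."""
    cur = db.execute(
        "SELECT permalink FROM entity WHERE body LIKE ?", (f"%{term}%",)
    )
    return [row[0] for row in cur.fetchall()]

db = index_notes({
    "coffee-brewing-methods": "# Coffee Brewing Methods\nPour over gives clarity.",
    "tea-brewing": "# Tea\nSteep at 80C.",
})
print(search(db, "Pour over"))  # ['coffee-brewing-methods']
```

The real index is richer (observations, relations, full-text search), but the shape is the same: files are the source of truth, and the database is a derived, rebuildable view.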
The file format is just Markdown with some simple markup:
Each Markdown file has:
### Frontmatter
```markdown
---
title: <Entity title>
type: <the type of entity (e.g. note)>
permalink: <a uri slug>
<optional metadata, such as tags>
---
```
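As a toy illustration, the frontmatter can be split from the body with plain string handling. This is a sketch only; a real parser such as `python-frontmatter` (which the project depends on) handles full YAML, nested lists, and edge cases:

```python
def split_frontmatter(text):
    """Split a note into (metadata dict, body). Minimal stdlib-only sketch."""
    if not text.startswith("---\n"):
        return {}, text
    # Everything between the opening "---" and the next "---" line is metadata
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta, body.lstrip("\n")

note = """---
title: Coffee Brewing Methods
permalink: coffee-brewing-methods
---
# Coffee Brewing Methods
"""
meta, body = split_frontmatter(note)
print(meta["permalink"])  # coffee-brewing-methods
```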
### Observations
Observations are facts about a topic.
They are written as Markdown list items in a special format that can include a `category`, `tags` marked with a
"#" character, and an optional `context`.
Observation Markdown format:
```markdown
- [category] content #tag (optional context)
```
Examples of observations:
```markdown
- [method] Pour over extracts more floral notes than French press
- [tip] Grind size should be medium-fine for pour over #brewing
- [preference] Ethiopian beans have bright, fruity flavors (especially from Yirgacheffe)
- [fact] Lighter roasts generally contain more caffeine than dark roasts
- [experiment] Tried 1:15 coffee-to-water ratio with good results
- [resource] James Hoffman's V60 technique on YouTube is excellent
- [question] Does water temperature affect extraction of different compounds differently?
- [note] My favorite local shop uses a 30-second bloom time
```
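A hedged sketch of parsing this observation format with a regular expression. The grammar here is illustrative (it simplifies edge cases); the actual parser may differ:

```python
import re

# - [category] content #tag (optional context)
OBSERVATION = re.compile(
    r"^- \[(?P<category>[^\]]+)\] (?P<content>.*?)"
    r"(?: \((?P<context>[^)]*)\))?$"
)

def parse_observation(line):
    m = OBSERVATION.match(line)
    if not m:
        return None
    content = m.group("content")
    tags = re.findall(r"#(\w[\w-]*)", content)
    return {
        "category": m.group("category"),
        # strip the inline #tags out of the content text
        "content": re.sub(r"\s*#\w[\w-]*", "", content).strip(),
        "tags": tags,
        "context": m.group("context"),
    }

obs = parse_observation(
    "- [tip] Grind size should be medium-fine for pour over #brewing"
)
print(obs["category"], obs["tags"])  # tip ['brewing']
```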
### Relations
Relations are links to other topics. They define how entities connect in the knowledge graph.
Markdown format:
```markdown
- relation_type [[WikiLink]] (optional context)
```
Examples of relations:
```markdown
- pairs_well_with [[Chocolate Desserts]]
- grown_in [[Ethiopia]]
- contrasts_with [[Tea Brewing Methods]]
- requires [[Burr Grinder]]
- improves_with [[Fresh Beans]]
- relates_to [[Morning Routine]]
- inspired_by [[Japanese Coffee Culture]]
- documented_in [[Coffee Journal]]
```
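A minimal regex sketch for extracting relations of this form (illustrative, not the library's actual parser):

```python
import re

# - relation_type [[WikiLink]] (optional context)
RELATION = re.compile(
    r"^- (?P<type>\w+) \[\[(?P<target>[^\]]+)\]\](?: \((?P<context>[^)]*)\))?$"
)

def parse_relations(markdown):
    """Return (relation_type, target, context) triples found in the text."""
    out = []
    for line in markdown.splitlines():
        m = RELATION.match(line.strip())
        if m:
            out.append((m.group("type"), m.group("target"), m.group("context")))
    return out

section = """\
- pairs_well_with [[Chocolate Desserts]]
- grown_in [[Ethiopia]]
"""
print(parse_relations(section))
# [('pairs_well_with', 'Chocolate Desserts', None), ('grown_in', 'Ethiopia', None)]
```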
## Using with VS Code
Add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing `Ctrl + Shift + P` and typing `Preferences: Open User Settings (JSON)`.
```json
{
"mcp": {
"servers": {
"basic-memory": {
"command": "uvx",
"args": ["basic-memory", "mcp"]
}
}
}
}
```
Optionally, you can add it to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others.
```json
{
"servers": {
"basic-memory": {
"command": "uvx",
"args": ["basic-memory", "mcp"]
}
}
}
```
You can use Basic Memory with VS Code to easily retrieve and store information while coding.
## Using with Claude Desktop
Basic Memory is built using the MCP (Model Context Protocol) and works with the Claude desktop app (https://claude.ai/):
1. Configure Claude Desktop to use Basic Memory:
Edit your MCP configuration file (usually located at `~/Library/Application Support/Claude/claude_desktop_config.json`
on macOS):
```json
{
"mcpServers": {
"basic-memory": {
"command": "uvx",
"args": [
"basic-memory",
"mcp"
]
}
}
}
```
If you want to use a specific project (see [Multiple Projects](#multiple-projects) below), update your Claude Desktop
config:
```json
{
"mcpServers": {
"basic-memory": {
"command": "uvx",
"args": [
"basic-memory",
"mcp",
"--project",
"your-project-name"
]
}
}
}
```
2. Sync your knowledge:
```bash
# One-time sync of local knowledge updates
basic-memory sync
# Run realtime sync process (recommended)
basic-memory sync --watch
```
3. Cloud features (optional, requires subscription):
```bash
# Authenticate with cloud
basic-memory cloud login
# Bidirectional sync with cloud
basic-memory cloud sync
# Verify cloud integrity
basic-memory cloud check
# Mount cloud storage
basic-memory cloud mount
```
**Routing Flags** (for users with cloud subscriptions):
When cloud mode is enabled, CLI commands communicate with the cloud API by default. Use routing flags to override this:
```bash
# Force local routing (useful for local MCP server while cloud mode is enabled)
basic-memory status --local
basic-memory project list --local
# Force cloud routing (when cloud mode is disabled but you want cloud access)
basic-memory status --cloud
basic-memory project info my-project --cloud
```
The local MCP server (`basic-memory mcp`) automatically uses local routing, so you can use both local Claude Desktop and cloud-based clients simultaneously.
4. In Claude Desktop, the LLM can now use these tools:
**Content Management:**
```
write_note(title, content, folder, tags) - Create or update notes
read_note(identifier, page, page_size) - Read notes by title or permalink
read_content(path) - Read raw file content (text, images, binaries)
view_note(identifier) - View notes as formatted artifacts
edit_note(identifier, operation, content) - Edit notes incrementally
move_note(identifier, destination_path) - Move notes with database consistency
delete_note(identifier) - Delete notes from knowledge base
```
**Knowledge Graph Navigation:**
```
build_context(url, depth, timeframe) - Navigate knowledge graph via memory:// URLs
recent_activity(type, depth, timeframe) - Find recently updated information
list_directory(dir_name, depth) - Browse directory contents with filtering
```
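As an illustration of what `build_context` does conceptually, the sketch below walks a relation map outward from a `memory://` URL up to a depth limit. The `RELATIONS` data and the traversal are hypothetical, not the tool's actual behavior:

```python
from collections import deque

# Hypothetical adjacency map: permalink -> related permalinks
RELATIONS = {
    "coffee-brewing-methods": ["coffee-bean-origins", "proper-grinding-technique"],
    "coffee-bean-origins": ["ethiopia"],
    "proper-grinding-technique": [],
    "ethiopia": [],
}

def build_context(url, depth=1):
    """Breadth-first walk from a memory:// URL, at most `depth` hops out."""
    start = url.removeprefix("memory://")
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue  # depth limit reached; do not expand further
        for target in RELATIONS.get(node, []):
            if target not in seen:
                seen.add(target)
                frontier.append((target, d + 1))
    return sorted(seen)

print(build_context("memory://coffee-brewing-methods", depth=1))
# ['coffee-bean-origins', 'coffee-brewing-methods', 'proper-grinding-technique']
```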
**Search & Discovery:**
```
search(query, page, page_size) - Search across your knowledge base
search_notes(query, page, page_size, search_type, types, entity_types, after_date, metadata_filters, tags, status, project) - Search with filters
search_by_metadata(filters, limit, offset, project) - Structured frontmatter search
```
**Project Management:**
```
list_memory_projects() - List all available projects
create_memory_project(project_name, project_path) - Create new projects
get_current_project() - Show current project stats
sync_status() - Check synchronization status
```
**Visualization:**
```
canvas(nodes, edges, title, folder) - Generate knowledge visualizations
```
5. Example prompts to try:
```
"Create a note about our project architecture decisions"
"Find information about JWT authentication in my notes"
"Create a canvas visualization of my project components"
"Read my notes on the authentication system"
"What have I been working on in the past week?"
```
## Further info
See the [Documentation](https://docs.basicmemory.com) for more info, including:
- [Complete User Guide](https://docs.basicmemory.com/user-guide/)
- [CLI tools](https://docs.basicmemory.com/guides/cli-reference/)
- [Cloud CLI and Sync](https://docs.basicmemory.com/guides/cloud-cli/)
- [Managing multiple Projects](https://docs.basicmemory.com/guides/cli-reference/#project)
- [Importing data from OpenAI/Claude Projects](https://docs.basicmemory.com/guides/cli-reference/#import)
## Logging
Basic Memory uses [Loguru](https://github.com/Delgan/loguru) for logging. The logging behavior varies by entry point:
| Entry Point | Default Behavior | Use Case |
|-------------|------------------|----------|
| CLI commands | File only | Prevents log output from interfering with command output |
| MCP server | File only | Stdout would corrupt the JSON-RPC protocol |
| API server | File (local) or stdout (cloud) | Docker/cloud deployments use stdout |
**Log file location:** `~/.basic-memory/basic-memory.log` (10MB rotation, 10 days retention)
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `BASIC_MEMORY_LOG_LEVEL` | `INFO` | Log level: DEBUG, INFO, WARNING, ERROR |
| `BASIC_MEMORY_CLOUD_MODE` | `false` | When `true`, API logs to stdout with structured context |
| `BASIC_MEMORY_FORCE_LOCAL` | `false` | When `true`, forces local API routing (ignores cloud mode) |
| `BASIC_MEMORY_ENV` | `dev` | Set to `test` for test mode (stderr only) |
### Examples
```bash
# Enable debug logging
BASIC_MEMORY_LOG_LEVEL=DEBUG basic-memory sync
# View logs
tail -f ~/.basic-memory/basic-memory.log
# Cloud/Docker mode (stdout logging with structured context)
BASIC_MEMORY_CLOUD_MODE=true uvicorn basic_memory.api.app:app
```
## Development
### Running Tests
Basic Memory supports dual database backends (SQLite and Postgres). By default, tests run against SQLite. Set `BASIC_MEMORY_TEST_POSTGRES=1` to run against Postgres (uses testcontainers - Docker required).
**Quick Start:**
```bash
# Run all tests against SQLite (default, fast)
just test-sqlite
# Run all tests against Postgres (uses testcontainers)
just test-postgres
# Run both SQLite and Postgres tests
just test
```
**Available Test Commands:**
- `just test` - Run all tests against both SQLite and Postgres
- `just test-sqlite` - Run all tests against SQLite (fast, no Docker needed)
- `just test-postgres` - Run all tests against Postgres (uses testcontainers)
- `just test-unit-sqlite` - Run unit tests against SQLite
- `just test-unit-postgres` - Run unit tests against Postgres
- `just test-int-sqlite` - Run integration tests against SQLite
- `just test-int-postgres` - Run integration tests against Postgres
- `just test-windows` - Run Windows-specific tests (auto-skips on other platforms)
- `just test-benchmark` - Run performance benchmark tests
- `just testmon` - Run tests impacted by recent changes (pytest-testmon)
- `just test-smoke` - Run fast MCP end-to-end smoke test
- `just fast-check` - Run fix/format/typecheck + impacted tests + smoke test
- `just doctor` - Run local file <-> DB consistency checks with temp config
**Postgres Testing:**
Postgres tests use [testcontainers](https://testcontainers-python.readthedocs.io/) which automatically spins up a Postgres instance in Docker. No manual database setup required - just have Docker running.
**Testmon Note:** When no files have changed, `just testmon` may collect 0 tests. That's expected and means no impacted tests were detected.
**Test Markers:**
Tests use pytest markers for selective execution:
- `windows` - Windows-specific database optimizations
- `benchmark` - Performance tests (excluded from default runs)
- `smoke` - Fast MCP end-to-end smoke tests
**Other Development Commands:**
```bash
just install # Install with dev dependencies
just lint # Run linting checks
just typecheck # Run type checking
just format # Format code with ruff
just fast-check # Fast local loop (fix/format/typecheck + testmon + smoke)
just doctor # Local consistency check (temp config)
just check # Run all quality checks
just migration "msg" # Create database migration
```
**Local Consistency Check:**
```bash
basic-memory doctor # Verifies file <-> database sync in a temp project
```
See the [justfile](justfile) for the complete list of development commands.
## License
AGPL-3.0
Contributions are welcome. See the [Contributing](CONTRIBUTING.md) guide for info about setting up the project locally
and submitting PRs.
## Star History
<a href="https://www.star-history.com/#basicmachines-co/basic-memory&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=basicmachines-co/basic-memory&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=basicmachines-co/basic-memory&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=basicmachines-co/basic-memory&type=Date" />
</picture>
</a>
Built with ♥️ by Basic Machines
| text/markdown | null | Basic Machines <hello@basic-machines.co> | null | null | AGPL-3.0-or-later | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiofiles>=24.1.0",
"aiosqlite>=0.20.0",
"alembic>=1.14.1",
"anyio>=4.10.0",
"asyncpg>=0.30.0",
"dateparser>=1.2.0",
"fastapi[standard]>=0.115.8",
"fastmcp==2.12.3",
"greenlet>=3.1.1",
"httpx>=0.28.0",
"loguru>=0.7.3",
"markdown-it-py>=3.0.0",
"mcp>=1.23.1",
"mdformat-frontmatter>=2.0.8",
"mdformat-gfm>=0.3.7",
"mdformat>=0.7.22",
"nest-asyncio>=1.6.0",
"pillow>=11.1.0",
"psycopg==3.3.1",
"pybars3>=0.9.7",
"pydantic-settings>=2.6.1",
"pydantic[email,timezone]>=2.12.0",
"pyjwt>=2.10.1",
"pyright>=1.1.390",
"pytest-aio>=1.9.0",
"pytest-asyncio>=1.2.0",
"python-dotenv>=1.1.0",
"python-frontmatter>=1.1.0",
"pyyaml>=6.0.1",
"rich>=13.9.4",
"sniffio>=1.3.1",
"sqlalchemy>=2.0.0",
"typer>=0.9.0",
"unidecode>=1.3.8",
"watchfiles>=1.0.4"
] | [] | [] | [] | [
"Homepage, https://github.com/basicmachines-co/basic-memory",
"Repository, https://github.com/basicmachines-co/basic-memory",
"Documentation, https://github.com/basicmachines-co/basic-memory#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:44:08.984328 | basic_memory-0.18.5.tar.gz | 1,045,877 | b7/78/5eed743e70fed5018f31d7730690a27351275d78e87ec54da2f59927b0dc/basic_memory-0.18.5.tar.gz | source | sdist | null | false | 2a2651c84206e7d17833c4f1e09fd4e6 | c85fe4cb987ce99ff38c50882ea0c0327e7bb57e6c487b66379249b53b22368f | b7785eed743e70fed5018f31d7730690a27351275d78e87ec54da2f59927b0dc | null | [
"LICENSE"
] | 3,190 |
2.4 | secure-fl | 2026.2.20.dev5 | Dual-Verifiable Framework for Federated Learning using Zero-Knowledge Proofs | # 🔐 Secure FL: Zero-Knowledge Federated Learning
A dual-verifiable framework for federated learning using zero-knowledge proofs to ensure training integrity and aggregation correctness.
## 🎯 Core Features
- **Dual ZKP Verification**: Client-side zk-STARKs + Server-side zk-SNARKs
- **FedJSCM Aggregation**: Momentum-based federated optimization
- **Dynamic Proof Rigor**: Adaptive proof complexity based on training stability
- **Parameter Quantization**: ZKP-compatible weight compression
## 🏗️ Architecture
```
Client Training + zk-STARK Proof → FL Server + zk-SNARK Proof → Verified Model
```
The system provides **dual verification**:
1. **Clients** generate zk-STARK proofs of correct local training
2. **Server** generates zk-SNARK proofs of correct aggregation
## 🚀 Quick Start
### Installation
```bash
# Install the package with uv (recommended)
uv pip install secure-fl
# Or install from source with uv
git clone https://github.com/krishantt/secure-fl
cd secure-fl
uv pip install -e .
# For development with all dependencies
uv sync --all-extras
```
### ZKP Prerequisites
Install zero-knowledge proof tools:
```bash
# Automated setup with make (recommended)
make setup-zkp
# Or manual setup:
# 1. Install Rust
curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh
# 2. Install Circom
git clone https://github.com/iden3/circom.git
cd circom && cargo install --path circom
# 3. Install SnarkJS
npm install -g snarkjs
# Verify setup
uv run secure-fl check-zkp
```
### Basic Usage
#### Server
```python
from secure_fl import SecureFlowerServer, create_server_strategy
import torch.nn as nn
# Define model
class SimpleModel(nn.Module):
def __init__(self):
super().__init__()
self.fc = nn.Linear(784, 10)
def forward(self, x):
return self.fc(x.view(-1, 784))
# Create server with ZKP verification
strategy = create_server_strategy(
model_fn=SimpleModel,
enable_zkp=True,
proof_rigor="high"
)
server = SecureFlowerServer(strategy=strategy)
server.start(num_rounds=10)
```
### Configuration
Create and use a configuration file:
```bash
# Create example config
uv run secure-fl create-config
# Edit config.yaml as needed, then pass it to the server:
uv run secure-fl-server --config config.yaml
```
#### Client
```python
from secure_fl import create_client, start_client
from torchvision import datasets, transforms
# Load data
transform = transforms.Compose([transforms.ToTensor()])
dataset = datasets.MNIST('./data', train=True, transform=transform)
# Create secure client
client = create_client(
client_id="client_1",
model_fn=SimpleModel,
train_data=dataset,
enable_zkp=True
)
# Connect to server
start_client(client, "localhost:8080")
```
#### CLI Interface
```bash
# Start server
uv run secure-fl-server --config config.yaml
# Start client
uv run secure-fl-client --server localhost:8080 --dataset mnist --client-id client_1
# Check system status
uv run secure-fl check-zkp
```
## 🔬 Technical Details
### Zero-Knowledge Proofs
- **Client-side (zk-STARKs)**: Prove correct SGD computation using Cairo circuits
- **Server-side (zk-SNARKs)**: Prove correct FedJSCM aggregation using Circom circuits
### FedJSCM Aggregation
Momentum-based federated averaging:
```
w_{t+1} = w_t - η_g * (β * m_t + (1-β) * ∇F_t)
```
where `∇F_t` is the federated gradient and `m_t` is the momentum buffer.
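A pure-Python sketch of this rule, treating weights as flat lists. The momentum-buffer update `m_{t+1} = β·m_t + (1-β)·∇F_t` is the standard EMA form and is an assumption here, since only the weight update is stated above:

```python
def fedjscm_step(w, grad, m, eta_g=0.1, beta=0.9):
    """One FedJSCM-style step on flat weight lists (illustrative sketch).

    w: current weights w_t, grad: federated gradient ∇F_t, m: momentum m_t.
    """
    # Assumed momentum update; its value is exactly the bracketed term
    # β·m_t + (1-β)·∇F_t from the update rule above.
    m_next = [beta * mi + (1 - beta) * gi for mi, gi in zip(m, grad)]
    # w_{t+1} = w_t - η_g · (β·m_t + (1-β)·∇F_t)
    w_next = [wi - eta_g * ui for wi, ui in zip(w, m_next)]
    return w_next, m_next

w, m = fedjscm_step([1.0, 2.0], grad=[0.5, -0.5], m=[0.0, 0.0])
```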
### Dynamic Proof Rigor
Automatically adjusts ZKP complexity based on training stability:
- **High stability**: Reduced proof complexity for efficiency
- **Low stability**: Increased proof rigor for security
## 📊 Configuration
Create a `config.yaml`:
```yaml
server:
host: "localhost"
port: 8080
num_rounds: 10
strategy:
min_fit_clients: 2
fraction_fit: 1.0
momentum: 0.9
zkp:
enable_zkp: true
proof_rigor: "high"
quantize_weights: true
quantization_bits: 8
```
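As a hedged sketch of what the `quantize_weights`/`quantization_bits` options imply: symmetric uniform quantization maps floating-point weights to small integers that arithmetic circuits can handle. The helpers below are illustrative, not the package's actual implementation:

```python
def quantize(weights, bits=8):
    """Symmetric uniform quantization of a flat weight list (sketch)."""
    qmax = 2 ** (bits - 1) - 1          # 127 for 8 bits
    largest = max(abs(w) for w in weights) or 1.0
    q = [round(w * qmax / largest) for w in weights]
    return q, largest / qmax            # integers plus a scale for dequantization

def dequantize(q, scale):
    return [qi * scale for qi in q]

q, scale = quantize([0.5, -1.0, 0.25], bits=8)
print(q)  # [64, -127, 32]
```

The integers (not the floats) are what a ZKP circuit would constrain; the scale travels alongside so the verifier and clients agree on the reconstruction.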
## 🔧 Development
### Setup Development Environment
```bash
git clone https://github.com/krishantt/secure-fl
cd secure-fl
# Complete development setup
make dev
# Or manually with uv
uv sync --all-extras
make setup-zkp
```
### Development Commands
```bash
# Run tests
make test
make test-quick # Fast tests with early exit
make test-cov # With coverage report
# Code quality
make lint # Check with ruff
make format # Format code
make type-check # Run mypy
make check # All quality checks
# Development workflow
make demo # Run demonstration
make clean # Clean artifacts
```
### Docker (Minimal)
```bash
# Build image
docker build -t secure-fl:local .
# Start local FL stack
docker compose up -d
# Stop services
docker compose down
```
## 📈 Experiments
Run benchmarks and experiments:
```bash
# Basic demo
make demo
# or: uv run python experiments/demo.py
# Reproducible benchmark suite
uv run python experiments/canonical_benchmark.py --datasets mnist synthetic_small --num-repeats 5 --require-real-proofs
# Custom training
uv run python experiments/train.py --config experiments/config.yaml
# Check environment
make env-info
```
## 🏷️ Repository Structure
```
secure-fl/
├── src/secure_fl/ # Main package
│ ├── federation/ # FL clients, server, and aggregation strategy
│ ├── zkp/ # ZKP managers and quantization
│ ├── models/ # Model definitions (MNIST/CIFAR/ResNet/MLP)
│ ├── core/ # Types, config, exceptions, versioning
│ ├── cli/ # CLI commands and setup helpers
│ └── utils/ # Logging and helper utilities
├── proofs/ # ZKP circuits
│ ├── client_circuits/ # zk-STARK (Cairo)
│ └── server/ # zk-SNARK (Circom)
├── experiments/ # Research experiments
├── tests/ # Test suite
└── docs/ # Documentation
```
## 🤝 Contributing
1. Fork the repository
2. Set up development environment: `make dev`
3. Create a feature branch
4. Make your changes with proper type hints
5. Add tests and ensure coverage
6. Run quality checks: `make check`
7. Test your changes: `make test`
8. Submit a pull request
### Code Style
- Use type hints throughout
- Follow the established error handling patterns
- Add proper logging with context
- Write tests for new functionality
- Update documentation as needed
## 📄 License
MIT License - see [LICENSE](LICENSE) for details.
## 📚 Citation
```bibtex
@misc{timilsina2024secure,
title={Secure FL: Dual-Verifiable Framework for Federated Learning using Zero-Knowledge Proofs},
author={Timilsina, Krishant and Paudel, Bindu},
year={2024},
url={https://github.com/krishantt/secure-fl}
}
```
## 🙏 Acknowledgments
- Flower framework for federated learning infrastructure
- Circom and Cairo for zero-knowledge proof systems
- The federated learning and cryptography research communities
| text/markdown | null | Krishant Timilsina <krishtimil@gmail.com>, Bindu Paudel <binduupaudel565@gmail.com> | null | Krishant Timilsina <krishtimil@gmail.com>, Bindu Paudel <binduupaudel565@gmail.com> | MIT | cryptography, federated-learning, machine-learning, privacy, zero-knowledge-proofs, zk-snarks, zk-starks | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Security :: Cryptography"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.0.0",
"flwr>=1.5.0",
"numpy>=1.24.0",
"pandas>=2.0.0",
"psutil>=5.9.0",
"pysnark",
"pyyaml>=6.0",
"rich>=13.0.0",
"torch>=2.0.0",
"torchvision>=0.15.0",
"tqdm>=4.65.0",
"memory-profiler>=0.61.0; extra == \"benchmark\"",
"pytest-benchmark>=5.2.3; extra == \"benchmark\"",
"mypy>=1.19.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-xdist>=3.3.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"ruff>=0.14.8; extra == \"dev\"",
"types-psutil; extra == \"dev\"",
"types-pyyaml; extra == \"dev\"",
"medmnist>=2.2.0; extra == \"medical\"",
"nibabel>=5.1.0; extra == \"medical\"",
"pydicom>=2.4.0; extra == \"medical\"",
"matplotlib>=3.7.0; extra == \"viz\"",
"plotly>=5.0.0; extra == \"viz\"",
"seaborn>=0.12.0; extra == \"viz\""
] | [] | [] | [] | [
"Homepage, https://github.com/krishantt/secure-fl",
"Bug Reports, https://github.com/krishantt/secure-fl/issues",
"Source, https://github.com/krishantt/secure-fl",
"Documentation, https://github.com/krishantt/secure-fl/blob/main/README.md"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T18:43:57.997810 | secure_fl-2026.2.20.dev5.tar.gz | 4,873,239 | 5b/6e/82d67894830d806178c995a7babead5ab3a2ce64d9bf2e1de5fb38eb3c71/secure_fl-2026.2.20.dev5.tar.gz | source | sdist | null | false | ff1ede59b1e84bd063e299acfdc33ffe | 89649249a7ba903a018523cf0f84229ba379eac7869d577acef8d5b420b113b6 | 5b6e82d67894830d806178c995a7babead5ab3a2ce64d9bf2e1de5fb38eb3c71 | null | [
"LICENSE"
] | 159 |
2.3 | marimo | 0.20.1 | A library for making reactive notebooks and apps | <p align="center">
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/marimo-logotype-thick.svg">
</p>
<p align="center">
<em>A reactive Python notebook that's reproducible, git-friendly, and deployable as scripts or apps.</em>
</p>
<p align="center">
<a href="https://docs.marimo.io" target="_blank"><strong>Docs</strong></a> ·
<a href="https://marimo.io/discord?ref=readme" target="_blank"><strong>Discord</strong></a> ·
<a href="https://docs.marimo.io/examples/" target="_blank"><strong>Examples</strong></a> ·
<a href="https://marimo.io/gallery/" target="_blank"><strong>Gallery</strong></a> ·
<a href="https://www.youtube.com/@marimo-team/" target="_blank"><strong>YouTube</strong></a>
</p>
<p align="center">
<b>English</b>
<b> | </b>
<a href="https://github.com/marimo-team/marimo/blob/main/README_Traditional_Chinese.md" target="_blank"><b>繁體中文</b></a>
<b> | </b>
<a href="https://github.com/marimo-team/marimo/blob/main/README_Chinese.md" target="_blank"><b>简体中文</b></a>
<b> | </b>
<a href="https://github.com/marimo-team/marimo/blob/main/README_Japanese.md" target="_blank"><b>日本語</b></a>
<b> | </b>
<a href="https://github.com/marimo-team/marimo/blob/main/README_Spanish.md" target="_blank"><b>Español</b></a>
</p>
<p align="center">
<a href="https://pypi.org/project/marimo/"><img src="https://img.shields.io/pypi/v/marimo?color=%2334D058&label=pypi"/></a>
<a href="https://anaconda.org/conda-forge/marimo"><img src="https://img.shields.io/conda/vn/conda-forge/marimo.svg"/></a>
<a href="https://marimo.io/discord?ref=readme"><img src="https://shields.io/discord/1059888774789730424" alt="discord"/></a>
<img alt="Pepy Total Downloads" src="https://img.shields.io/pepy/dt/marimo?label=pypi%20%7C%20downloads"/>
<img alt="Conda Downloads" src="https://img.shields.io/conda/d/conda-forge/marimo"/>
<a href="https://github.com/marimo-team/marimo/blob/main/LICENSE"><img src="https://img.shields.io/pypi/l/marimo"/></a>
</p>
**marimo** is a reactive Python notebook: run a cell or interact with a UI
element, and marimo automatically runs dependent cells (or <a href="#expensive-notebooks">marks them as stale</a>), keeping code and outputs
consistent. marimo notebooks are stored as pure Python (with first-class SQL support), executable as scripts,
and deployable as apps.
**Highlights**.
- 🚀 **batteries-included:** replaces `jupyter`, `streamlit`, `jupytext`, `ipywidgets`, `papermill`, and more
- ⚡️ **reactive**: run a cell, and marimo reactively [runs all dependent cells](https://docs.marimo.io/guides/reactivity.html) or <a href="#expensive-notebooks">marks them as stale</a>
- 🖐️ **interactive:** [bind sliders, tables, plots, and more](https://docs.marimo.io/guides/interactivity.html) to Python — no callbacks required
- 🐍 **git-friendly:** stored as `.py` files
- 🛢️ **designed for data**: query dataframes, databases, warehouses, or lakehouses [with SQL](https://docs.marimo.io/guides/working_with_data/sql.html), filter and search [dataframes](https://docs.marimo.io/guides/working_with_data/dataframes.html)
- 🤖 **AI-native**: [generate cells with AI](https://docs.marimo.io/guides/generate_with_ai/) tailored for data work
- 🔬 **reproducible:** [no hidden state](https://docs.marimo.io/guides/reactivity.html#no-hidden-state), deterministic execution, [built-in package management](https://docs.marimo.io/guides/package_management/)
- 🏃 **executable:** [execute as a Python script](https://docs.marimo.io/guides/scripts.html), parameterized by CLI args
- 🛜 **shareable**: [deploy as an interactive web app](https://docs.marimo.io/guides/apps.html) or [slides](https://docs.marimo.io/guides/apps.html#slides-layout), [run in the browser via WASM](https://docs.marimo.io/guides/wasm.html)
- 🧩 **reusable:** [import functions and classes](https://docs.marimo.io/guides/reusing_functions/) from one notebook to another
- 🧪 **testable:** [run pytest](https://docs.marimo.io/guides/testing/) on notebooks
- ⌨️ **a modern editor**: [GitHub Copilot](https://docs.marimo.io/guides/editor_features/ai_completion.html#github-copilot), [AI assistants](https://docs.marimo.io/guides/editor_features/ai_completion.html), vim keybindings, variable explorer, and [more](https://docs.marimo.io/guides/editor_features/index.html)
- 🧑💻 **use your favorite editor**: run in [VS Code or Cursor](https://marketplace.visualstudio.com/items?itemName=marimo-team.vscode-marimo), or edit in neovim, Zed, [or any other text editor](https://docs.marimo.io/guides/editor_features/watching/)
```bash
pip install marimo && marimo tutorial intro
```
_Get started instantly with [**mo**lab, our free online
notebook](https://molab.marimo.io/notebooks). Or jump to the
[quickstart](#quickstart) for a primer on our CLI._
## A reactive programming environment
marimo guarantees your notebook code, outputs, and program state are consistent. This [solves many problems](https://docs.marimo.io/faq.html#faq-problems) associated with traditional notebooks like Jupyter.
**A reactive programming environment.**
Run a cell and marimo _reacts_ by automatically running the cells that
reference its variables, eliminating the error-prone task of manually
re-running cells. Delete a cell and marimo scrubs its variables from program
memory, eliminating hidden state.
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/reactive.gif" width="700px" />
<a name="expensive-notebooks"></a>
**Compatible with expensive notebooks.** marimo lets you [configure the runtime
to be
lazy](https://docs.marimo.io/guides/configuration/runtime_configuration.html),
marking affected cells as stale instead of automatically running them. This
gives you guarantees on program state while preventing accidental execution of
expensive cells.
**Synchronized UI elements.** Interact with [UI
elements](https://docs.marimo.io/guides/interactivity.html) like [sliders](https://docs.marimo.io/api/inputs/slider.html#slider),
[dropdowns](https://docs.marimo.io/api/inputs/dropdown.html), [dataframe
transformers](https://docs.marimo.io/api/inputs/dataframe.html), and [chat
interfaces](https://docs.marimo.io/api/inputs/chat.html), and the cells that
use them are automatically re-run with their latest values.
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/readme-ui.gif" width="700px" />
**Interactive dataframes.** [Page through, search, filter, and
sort](https://docs.marimo.io/guides/working_with_data/dataframes.html)
millions of rows blazingly fast, no code required.
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/docs-df.gif" width="700px" />
**Generate cells with data-aware AI.** [Generate code with an AI
assistant](https://docs.marimo.io/guides/editor_features/ai_completion/) that is highly
specialized for working with data, with context about your variables in memory;
[zero-shot entire notebooks](https://docs.marimo.io/guides/generate_with_ai/text_to_notebook/).
Customize the system prompt, bring your own API keys, or use local models.
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/readme-generate-with-ai.gif" width="700px" />
**Query data with SQL.** Build [SQL](https://docs.marimo.io/guides/working_with_data/sql.html) queries
that depend on Python values and execute them against dataframes, databases, lakehouses,
CSVs, Google Sheets, or anything else using our built-in SQL engine, which
returns the result as a Python dataframe.
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/readme-sql-cell.png" width="700px" />
Your notebooks are still pure Python, even if they use SQL.
**Dynamic markdown.** Use markdown parametrized by Python variables to tell
dynamic stories that depend on Python data.
**Built-in package management.** marimo has built-in support for all major
package managers, letting you [install packages on import](https://docs.marimo.io/guides/editor_features/package_management.html). marimo can even
[serialize package
requirements](https://docs.marimo.io/guides/package_management/inlining_dependencies/)
in notebook files, and auto install them in
isolated venv sandboxes.
**Deterministic execution order.** Notebooks are executed in a deterministic
order, based on variable references instead of cells' positions on the page.
Organize your notebooks to best fit the stories you'd like to tell.
**Performant runtime.** marimo runs only those cells that need to be run by
statically analyzing your code.
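As a toy illustration of that idea (not marimo's actual algorithm), Python's `ast` module can find each cell's defined and referenced names, so that editing one cell marks only the downstream cells for re-execution:

```python
import ast

def defs_and_refs(code):
    """Return (defined names, referenced names) for one cell's source."""
    defs, refs = set(), set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Name):
            (defs if isinstance(node.ctx, ast.Store) else refs).add(node.id)
    return defs, refs - defs

def stale_cells(cells, changed):
    """cells: {name: source}. Cells to rerun when `changed` is edited."""
    meta = {c: defs_and_refs(src) for c, src in cells.items()}
    stale, frontier = {changed}, [changed]
    while frontier:
        cell = frontier.pop()
        for other, (_, refs) in meta.items():
            # `other` is downstream if it reads a name `cell` defines
            if other not in stale and meta[cell][0] & refs:
                stale.add(other)
                frontier.append(other)
    return stale

cells = {"a": "x = 1", "b": "y = x + 1", "c": "z = y * 2", "d": "w = 5"}
print(sorted(stale_cells(cells, "a")))  # ['a', 'b', 'c']
```

Cell `d` is untouched because nothing connects it to `x`; a deterministic topological order over the same graph gives the execution order described above.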
**Batteries-included.** marimo comes with GitHub Copilot, AI assistants, Ruff
code formatting, HTML export, fast code completion, a [VS Code
extension](https://marketplace.visualstudio.com/items?itemName=marimo-team.vscode-marimo),
an interactive dataframe viewer, and [many more](https://docs.marimo.io/guides/editor_features/index.html)
quality-of-life features.
## Quickstart
_The [marimo concepts
playlist](https://www.youtube.com/watch?v=3N6lInzq5MI&list=PLNJXGo8e1XT9jP7gPbRdm1XwloZVFvLEq)
on our [YouTube channel](https://www.youtube.com/@marimo-team) gives an
overview of many features._
**Installation.** In a terminal, run
```bash
pip install marimo # or conda install -c conda-forge marimo
marimo tutorial intro
```
To install with additional dependencies that unlock SQL cells, AI completion, and more,
run
```bash
pip install "marimo[recommended]"
```
**Create notebooks.**
Create or edit notebooks with
```bash
marimo edit
```
**Run apps.** Run your notebook as a web app, with Python
code hidden and uneditable:
```bash
marimo run your_notebook.py
```
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/docs-model-comparison.gif" style="border-radius: 8px" width="450px" />
**Execute as scripts.** Execute a notebook as a script at the
command line:
```bash
python your_notebook.py
```
**Automatically convert Jupyter notebooks.** Convert Jupyter notebooks to marimo
notebooks with the CLI
```bash
marimo convert your_notebook.ipynb > your_notebook.py
```
or use our [web interface](https://marimo.io/convert).
**Tutorials.**
List all tutorials:
```bash
marimo tutorial --help
```
**Share cloud-based notebooks.** Use
[molab](https://molab.marimo.io/notebooks), a cloud-based marimo notebook
service similar to Google Colab, to create and share notebook links.
## Questions?
See the [FAQ](https://docs.marimo.io/faq.html) at our docs.
## Learn more
marimo is easy to get started with, and leaves lots of room for power users.
For example, here's an embedding visualizer made in marimo
([try the notebook live on molab!](https://molab.marimo.io/notebooks/nb_jJiFFtznAy4BxkrrZA1o9b/app?show-code=true)):
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/embedding.gif" width="700px" />
Check out our [docs](https://docs.marimo.io),
[usage examples](https://docs.marimo.io/examples/), and our [gallery](https://marimo.io/gallery) to learn more.
<table border="0">
<tr>
<td>
<a target="_blank" href="https://docs.marimo.io/getting_started/key_concepts.html">
<img src="https://docs.marimo.io/_static/reactive.gif" style="max-height: 150px; width: auto; display: block" />
</a>
</td>
<td>
<a target="_blank" href="https://docs.marimo.io/api/inputs/index.html">
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/readme-ui.gif" style="max-height: 150px; width: auto; display: block" />
</a>
</td>
<td>
<a target="_blank" href="https://docs.marimo.io/guides/working_with_data/plotting.html">
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/docs-intro.gif" style="max-height: 150px; width: auto; display: block" />
</a>
</td>
<td>
<a target="_blank" href="https://docs.marimo.io/api/layouts/index.html">
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/outputs.gif" style="max-height: 150px; width: auto; display: block" />
</a>
</td>
</tr>
<tr>
<td>
<a target="_blank" href="https://docs.marimo.io/getting_started/key_concepts.html"> Tutorial </a>
</td>
<td>
<a target="_blank" href="https://docs.marimo.io/api/inputs/index.html"> Inputs </a>
</td>
<td>
<a target="_blank" href="https://docs.marimo.io/guides/working_with_data/plotting.html"> Plots </a>
</td>
<td>
<a target="_blank" href="https://docs.marimo.io/api/layouts/index.html"> Layout </a>
</td>
</tr>
<tr>
<td>
<a target="_blank" href="https://molab.marimo.io/notebooks/nb_TWVGCgZZK4L8zj5ziUBNVL">
<img src="https://marimo.io/molab-shield.svg"/>
</a>
</td>
<td>
<a target="_blank" href="https://molab.marimo.io/notebooks/nb_WuoXgs7mjg5yqrMxJXjRpF">
<img src="https://marimo.io/molab-shield.svg"/>
</a>
</td>
<td>
<a target="_blank" href="https://molab.marimo.io/notebooks/nb_vXxD13t2RoMTLjC89qdn6c">
<img src="https://marimo.io/molab-shield.svg"/>
</a>
</td>
<td>
<a target="_blank" href="https://molab.marimo.io/notebooks/nb_XpXx8MX99dWAjn4k1b3xiU">
<img src="https://marimo.io/molab-shield.svg"/>
</a>
</td>
</tr>
</table>
## Contributing
We appreciate all contributions! You don't need to be an expert to help out.
Please see [CONTRIBUTING.md](https://github.com/marimo-team/marimo/blob/main/CONTRIBUTING.md) for more details on how to get
started.
> Questions? Reach out to us [on Discord](https://marimo.io/discord?ref=readme).
## Community
We're building a community. Come hang out with us!
- 🌟 [Star us on GitHub](https://github.com/marimo-team/marimo)
- 💬 [Chat with us on Discord](https://marimo.io/discord?ref=readme)
- 📧 [Subscribe to our Newsletter](https://marimo.io/newsletter)
- ☁️ [Join our Cloud Waitlist](https://marimo.io/cloud)
- ✏️ [Start a GitHub Discussion](https://github.com/marimo-team/marimo/discussions)
- 🦋 [Follow us on Bluesky](https://bsky.app/profile/marimo.io)
- 🐦 [Follow us on Twitter](https://twitter.com/marimo_io)
- 🎥 [Subscribe on YouTube](https://www.youtube.com/@marimo-team)
- 🤖 [Follow us on Reddit](https://www.reddit.com/r/marimo_notebook)
- 🕴️ [Follow us on LinkedIn](https://www.linkedin.com/company/marimo-io)
**A NumFOCUS affiliated project.** marimo is a core part of the broader Python
ecosystem and is a member of the NumFOCUS community, which includes projects
such as NumPy, SciPy, and Matplotlib.
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/numfocus_affiliated_project.png" height="40px" />
## Inspiration ✨
marimo is a **reinvention** of the Python notebook as a reproducible, interactive,
and shareable Python program, instead of an error-prone JSON scratchpad.
We believe that the tools we use shape the way we think — better tools, for
better minds. With marimo, we hope to provide the Python community with a
better programming environment to do research and communicate it; to experiment
with code and share it; to learn computational science and teach it.
Our inspiration comes from many places and projects, especially
[Pluto.jl](https://github.com/fonsp/Pluto.jl),
[ObservableHQ](https://observablehq.com/tutorials), and
[Bret Victor's essays](http://worrydream.com/). marimo is part of
a greater movement toward reactive dataflow programming. From
[IPyflow](https://github.com/ipyflow/ipyflow), [streamlit](https://github.com/streamlit/streamlit),
[TensorFlow](https://github.com/tensorflow/tensorflow),
[PyTorch](https://github.com/pytorch/pytorch/tree/main),
[JAX](https://github.com/google/jax), and
[React](https://github.com/facebook/react), the ideas of functional,
declarative, and reactive programming are transforming a broad range of tools
for the better.
<p align="right">
<img src="https://raw.githubusercontent.com/marimo-team/marimo/main/docs/_static/marimo-logotype-horizontal.png" height="200px">
</p>
| text/markdown | null | null | null | null |
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"Operating System :: OS Independent",
"License :: OSI Approved :: Apache Software License",
"Environment :: Console",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Education",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click<9,>=8.0",
"jedi>=0.18.0",
"markdown<4,>=3.6",
"pymdown-extensions<11,>=10.15",
"pygments<3,>=2.19",
"tomlkit>=0.12.0",
"pyyaml>=6.0.1",
"uvicorn>=0.22.0",
"starlette>=0.37.2",
"websockets>=14.2.0",
"loro>=1.10.0",
"typing-extensions>=4.4.0; python_full_version < \"3.11\"",
"docutils>=0.16.0",
"psutil>=5.0",
"itsdangerous>=2.0.0",
"narwhals>=2.0.0",
"packaging",
"msgspec>=0.20.0",
"python-lsp-server>=1.13.0; extra == \"lsp\"",
"python-lsp-ruff>=2.0.0; extra == \"lsp\"",
"mcp>=1.0.0; extra == \"mcp\"",
"pydantic>2; extra == \"mcp\"",
"marimo[sql]; extra == \"recommended\"",
"marimo[sandbox]; extra == \"recommended\"",
"altair>=5.4.0; extra == \"recommended\"",
"pydantic-ai-slim[openai]>=1.39.0; extra == \"recommended\"",
"ruff; extra == \"recommended\"",
"nbformat>=5.7.0; extra == \"recommended\"",
"pyzmq>=27.1.0; extra == \"sandbox\"",
"uv>=0.9.21; extra == \"sandbox\"",
"duckdb>=1.0.0; extra == \"sql\"",
"polars[pyarrow]>=1.9.0; extra == \"sql\"",
"sqlglot[rs]<28.7.0,>=26.2.0; extra == \"sql\""
] | [] | [] | [] | [
"homepage, https://github.com/marimo-team/marimo"
] | Hatch/1.16.3 cpython/3.12.3 HTTPX/0.28.1 | 2026-02-20T18:43:37.904937 | marimo-0.20.1-py3-none-any.whl | 38,644,606 | 8c/b2/350bcd7cfe76a90c1482060321d8ee36d40f3d3d241656e6a54e4723e284/marimo-0.20.1-py3-none-any.whl | py3 | bdist_wheel | null | false | fb76b4798a57123790785368b9321b24 | 4d949f3f3151399e563ef1a543cbeed2ab880f4de88119be29e6c2f094525012 | 8cb2350bcd7cfe76a90c1482060321d8ee36d40f3d3d241656e6a54e4723e284 | null | [] | 10,133 |
2.4 | d2-widget | 0.1.0 | An AnyWidget for displaying declarative diagrams written in D2 | [](https://pypi.org/project/d2-widget/)
[](https://github.com/peter-gy/d2-widget/blob/main/LICENSE)
# D2 Widget <img src="https://raw.githubusercontent.com/peter-gy/d2-widget/refs/heads/main/assets/logo.png" align="right" alt="d2-widget logo" width="150" style="filter: drop-shadow(3px 3px 3px rgba(0,0,0,0.3));"/>
> Bring the power of [D2](https://d2lang.com/) to Python notebooks.
**d2-widget** is an [AnyWidget](https://github.com/manzt/anywidget) for displaying declarative diagrams written in [D2](https://d2lang.com/).
- 🎨 **D2 Diagram Rendering**: Create and display interactive D2 diagrams directly in Python notebooks
- ⚙️ **Configurability**: Support for all D2 compilation options including themes, layouts, and rendering configurations
- 📤 **SVG Export**: Programmatically access the SVG representation for use in other documents
- ✨ **Jupyter Cell Magic**: Use the convenient `%%d2` cell magic for quick diagram creation
- 🧩 **Notebook Compatibility**: Works in Jupyter, Google Colab, Marimo, and other [AnyWidget](https://github.com/manzt/anywidget)-enabled Python notebook environments
- 🎬 **Animation Support**: Create animated diagrams with D2's native animation capabilities
## Playground
Visit the interactive [playground](https://d2-widget.peter.gy) to try out what `d2-widget` can do.
<img src="https://raw.githubusercontent.com/peter-gy/d2-widget/refs/heads/main/assets/examples/playground.gif" alt="playground" width="100%"/>
## Installation
```sh
pip install d2-widget
```
or with [uv](https://github.com/astral-sh/uv):
```sh
uv add d2-widget
```
## Usage
The following examples demonstrate how to use `Widget` with increasing complexity.
### Basic Usage
The simplest way to use `Widget` is to pass a D2 diagram as a string to the constructor.
```python
from d2_widget import Widget
Widget("x -> y")
```
<img src="https://raw.githubusercontent.com/peter-gy/d2-widget/refs/heads/main/assets/examples/simple.svg" alt="simple example" width="400"/>
### Inline Configuration
You can add direction and layout settings directly in the D2 markup.
```python
from d2_widget import Widget
Widget("""
direction: right
x -> y
""")
```
<img src="https://raw.githubusercontent.com/peter-gy/d2-widget/refs/heads/main/assets/examples/simple-inline-config.svg" alt="simple example with inline configuration" width="400"/>
### Compile Options
You can specify compile options using the second argument to the constructor.
You can read about the semantics of the options in the [D2 documentation](https://www.npmjs.com/package/@terrastruct/d2#compileoptions).
```python
from d2_widget import Widget
Widget("""
direction: right
x -> y
""",
{
"themeID": 200, # ID of the "Dark mauve" theme
"pad": 0, # Disable padding
"sketch": True, # Enable sketch mode
},
)
```
<img src="https://raw.githubusercontent.com/peter-gy/d2-widget/refs/heads/main/assets/examples/compile-options.svg" alt="example with compile options" width="400"/>
### Accessing the SVG
You can access the generated SVG using the `svg` attribute.
```python
from d2_widget import Widget
w = Widget("x -> y")
w.svg
```
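Because D2 markup is plain text, it can also be generated from Python data before rendering. A small sketch (the edge list and file name are made up; the commented lines assume a notebook front end with `d2-widget` installed):

```python
import pathlib

# Build D2 markup from a hypothetical Python edge list.
edges = [("ingest", "clean"), ("clean", "model"), ("model", "report")]
markup = "direction: right\n" + "\n".join(f"{a} -> {b}" for a, b in edges)
print(markup)

# Rendering happens in the notebook; these lines are illustrative only:
# w = Widget(markup)
# pathlib.Path("pipeline.svg").write_text(w.svg)
```

The `svg` attribute is a plain string, so writing it to disk is all it takes to reuse the diagram in other documents.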
### `%%d2` Cell Magic
You can use the `%%d2` cell magic to display a D2 diagram in a Jupyter notebook.
First, you need to load the extension:
```python
%load_ext d2_widget
```
Then, you can use the `%%d2` cell magic to display a D2 diagram.
You can pass compile options to the cell magic using keyword arguments.
```python
%%d2 sketch=True themeID=200
direction: right
x -> y
y -> z { style.animated: true }
z -> x
```
<img src="https://raw.githubusercontent.com/peter-gy/d2-widget/refs/heads/main/assets/examples/cell-magic.gif" alt="example with cell magic" width="100%"/>
## Contributing
Contributor setup, dev workflow, and QA commands are in [`CONTRIBUTING.md`](CONTRIBUTING.md).
| text/markdown | null | Péter Ferenc Gyarmati <dev.petergy@gmail.com> | null | null | Copyright © 2025 Péter Ferenc Gyarmati
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | anywidget, d2, diagram, jupyter, widget | [
"Development Status :: 4 - Beta",
"Framework :: Jupyter",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"anywidget>=0.9",
"traitlets>=5"
] | [] | [] | [] | [
"Homepage, https://github.com/peter-gy/d2-widget",
"Repository, https://github.com/peter-gy/d2-widget",
"Documentation, https://github.com/peter-gy/d2-widget#readme",
"Bug Tracker, https://github.com/peter-gy/d2-widget/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:43:25.000744 | d2_widget-0.1.0.tar.gz | 7,294 | 07/1e/eebc8a8893090b84086dcf4d65dd32092ee52795d67d9e10fe76f70650cb/d2_widget-0.1.0.tar.gz | source | sdist | null | false | 0114d8a9fa087366290e0f9e3a6f26f4 | 9b0bf80eedbf5cdf67f65c4753c77991c6fb93039210d4786e5d1e4b35d42bd4 | 071eeebc8a8893090b84086dcf4d65dd32092ee52795d67d9e10fe76f70650cb | null | [
"LICENSE"
] | 204 |
2.4 | pagespeed | 2.1.0 | CLI tool for batch Google PageSpeed Insights analysis with CSV/JSON/HTML reports | # PageSpeed Insights Batch Analysis Tool
A command-line tool that automates Google PageSpeed Insights analysis across multiple URLs, extracting performance metrics (lab + field data) into structured CSV, JSON, and HTML reports.
## Installation
### Run instantly with `uvx` (recommended, no install needed)
```bash
uvx pagespeed quick-check https://example.com
```
### Install with `pip` or `pipx`
```bash
pip install pagespeed
pagespeed quick-check https://example.com
```
### Run from URL (just needs `uv`)
```bash
uv run https://raw.githubusercontent.com/volkanunsal/pagespeed/main/pagespeed_insights_tool.py quick-check https://example.com
```
### Development
```bash
git clone https://github.com/volkanunsal/pagespeed.git
cd pagespeed
uv run pagespeed_insights_tool.py quick-check https://example.com
```
## Prerequisites
- **Python 3.13+**
- **Google API key** (optional) — without one, you're limited to ~25 queries/day; with one, ~25,000/day
## Getting an API Key
1. Go to the [Google Cloud Console](https://console.cloud.google.com/)
2. Create a new project (or select an existing one)
3. Navigate to **APIs & Services > Library**
4. Search for **PageSpeed Insights API** and enable it
5. Go to **APIs & Services > Credentials**
6. Click **Create Credentials > API Key**
7. Copy the key and set it:
```bash
export PAGESPEED_API_KEY=your_key_here
```
Or add it to your `pagespeed.toml` config file (see [Configuration](#configuration)).
## Usage
### `quick-check` — Fast single-URL spot check
Prints a formatted report to the terminal. No files written.
```bash
# Mobile only (default)
pagespeed quick-check https://www.google.com
# Both mobile and desktop
pagespeed quick-check https://www.google.com --device both
# With specific categories
pagespeed quick-check https://www.google.com --categories performance accessibility
```
Sample output:
```
============================================================
URL: https://www.google.com
Strategy: mobile
============================================================
Performance Score: 92/100 (GOOD)
--- Lab Data ---
First Contentful Paint............. 1200ms
Largest Contentful Paint........... 1800ms
Cumulative Layout Shift............ 0.0100
Speed Index........................ 1500ms
Total Blocking Time................ 150ms
Time to Interactive................ 2100ms
```
### `audit` — Full batch analysis
Analyzes multiple URLs and writes CSV/JSON reports.
```bash
# From a file of URLs
pagespeed audit -f urls.txt
# Multiple strategies and output formats
pagespeed audit -f urls.txt --device both --output-format both
# Inline URLs with custom output path
pagespeed audit https://a.com https://b.com -o report
# With a named profile
pagespeed audit -f urls.txt --profile full
# Piped input
cat urls.txt | pagespeed audit
# Include full Lighthouse audit data in JSON output
pagespeed audit -f urls.txt --full --output-format json
# Stream results as NDJSON to stdout as they complete
pagespeed audit -f urls.txt --stream
# Pipe streamed results into jq for real-time filtering
pagespeed audit -f urls.txt --stream | jq '.performance_score'
# Stream and filter to only failing URLs
pagespeed audit -f urls.txt --stream | jq 'select(.performance_score < 50)'
```
#### `--full` flag
Pass `--full` to embed the complete raw `lighthouseResult` object from the PageSpeed API into each result in the JSON output. This includes all Lighthouse audits, opportunities, diagnostics, and metadata — useful for deep analysis or feeding into other tools.
- **JSON**: each result gains a top-level `lighthouseResult` key containing the full API object.
- **CSV**: `--full` is silently ignored; the raw object is never written to CSV.
- **File naming**: auto-named files get a `-full` suffix (e.g., `20260219T143022Z-mobile-full.json`).
#### `--stream` flag
Pass `--stream` to print results to stdout as **NDJSON** (one JSON object per line) as each URL/strategy completes, instead of buffering everything and writing files at the end. This lets you pipe results into `jq`, `grep`, or other tools without waiting for the full batch to finish.
- **Output**: one `json.dumps` line per result written to stdout immediately on completion.
- **File output**: skipped — no CSV/JSON files are written in stream mode.
- **Summary**: the post-run audit summary table is suppressed (not useful when piping).
- **Progress bar**: still shown on stderr so you can track progress while piping stdout.
- **Budget**: still evaluated if `--budget` is set, using the complete result set.
```bash
# Stream all results to stdout
pagespeed audit -f urls.txt --stream
# Extract a single field from each result
pagespeed audit -f urls.txt --stream | jq '.performance_score'
# Filter to only URLs below a score threshold
pagespeed audit -f urls.txt --stream | jq 'select(.performance_score < 50)'
# Save streamed results to a file while also viewing them
pagespeed audit -f urls.txt --stream | tee results.ndjson | jq '.url'
```
Each NDJSON line is a flat JSON object with the same fields as a CSV row (`url`, `strategy`, `performance_score`, `lab_fcp_ms`, etc.). `null` is used where a value is not available.
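Since each streamed line is standalone JSON, post-processing needs nothing beyond the stdlib. A sketch of a `jq`-free filter (field names follow the CSV-row schema described above; the sample values are fabricated):

```python
import json

def failing_urls(ndjson_lines, threshold=50):
    """Yield (url, score) for results whose score is below the threshold.

    Results with a null performance_score are skipped, matching the
    null-for-unavailable convention in the NDJSON output.
    """
    for line in ndjson_lines:
        row = json.loads(line)
        score = row.get("performance_score")
        if score is not None and score < threshold:
            yield row["url"], score

# Fabricated example lines, as `pagespeed audit --stream` might emit them:
lines = [
    '{"url": "https://a.com", "strategy": "mobile", "performance_score": 42}',
    '{"url": "https://b.com", "strategy": "mobile", "performance_score": 91}',
]
print(list(failing_urls(lines)))
```

In a real run you would iterate over `sys.stdin` (or an `.ndjson` file saved via `tee`) instead of a hard-coded list.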
The URL file is one URL per line. Lines starting with `#` are comments:
```
# Main pages
https://example.com
https://example.com/about
https://example.com/contact
```
### `compare` — Compare two reports
Loads two previous report files and shows per-URL score changes.
```bash
# Compare before and after
pagespeed compare before.csv after.csv
# Custom threshold (flag changes >= 10%)
pagespeed compare --threshold 10 old.json new.json
```
Output flags regressions with `!!` and improvements with `++`.
### `report` — Generate HTML dashboard
Creates a self-contained HTML report from a results file.
```bash
# Generate HTML from CSV results
pagespeed report results.csv
# Custom output path
pagespeed report results.json -o dashboard.html
# Auto-open in browser
pagespeed report results.csv --open
```
The HTML report includes:
- Summary cards (total URLs, average/best/worst scores)
- Color-coded score table (green/orange/red)
- Core Web Vitals pass/fail indicators
- Bar charts comparing scores across URLs
- Field data table (when available)
- Sortable columns (click headers)
### `run` — Low-level direct access
Full control with every CLI flag. Same internals as `audit`.
```bash
pagespeed run https://example.com --device desktop --categories performance accessibility --delay 2.0
```
### `pipeline` — End-to-end analysis
Resolves URLs from a sitemap (or file/inline), runs the analysis, writes CSV/JSON data files, and generates an HTML report — all in one command. Optionally evaluates a performance budget.
```bash
# From a sitemap (auto-detected from URL shape)
pagespeed pipeline https://example.com/sitemap.xml
# Limit URLs and auto-open report in browser
pagespeed pipeline https://example.com/sitemap.xml --sitemap-limit 20 --open
# Filter to a section of the sitemap, both devices
pagespeed pipeline https://example.com/sitemap.xml --sitemap-filter "/blog/" --device both
# Inline URLs
pagespeed pipeline https://a.com https://b.com --device both
# From a URL file
pagespeed pipeline -f urls.txt --open
# Data files only — skip HTML report generation
pagespeed pipeline -f urls.txt --no-report --output-format json
# Evaluate Core Web Vitals budget (exits 2 on failure)
pagespeed pipeline https://example.com/sitemap.xml --budget cwv
# Custom budget with GitHub Actions output format
pagespeed pipeline https://example.com/sitemap.xml --budget budget.toml --budget-format github
```
**Sitemap auto-detection**: when a single positional argument looks like a sitemap (ends in `.xml`, contains `sitemap` in the path, or the file content starts with `<?xml`), it is treated as a sitemap source automatically. Pass `--sitemap` explicitly to use a sitemap alongside inline URLs.
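The detection heuristics described above can be sketched in a few lines; this is an illustrative sketch of the idea, not the tool's actual implementation (the function name is hypothetical):

```python
def looks_like_sitemap(source: str) -> bool:
    """Heuristics from the docs: .xml suffix, 'sitemap' in the path,
    or local file content starting with an XML declaration."""
    lowered = source.lower()
    if lowered.endswith(".xml") or "sitemap" in lowered:
        return True
    try:
        with open(source) as f:
            return f.read(64).lstrip().startswith("<?xml")
    except OSError:
        # Not a readable local file; treat as a plain URL
        return False
```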
#### `pipeline` flags
| Flag | Short | Default | Description |
|------|-------|---------|-------------|
| `source` | — | `[]` | Sitemap URL/path (auto-detected) or plain URLs |
| `--file` | `-f` | None | File with one URL per line |
| `--sitemap` | — | None | Explicit sitemap URL or local path |
| `--sitemap-limit` | — | None | Max URLs to extract from sitemap |
| `--sitemap-filter` | — | None | Regex to filter sitemap URLs |
| `--open` | — | `False` | Auto-open HTML report in browser after completion |
| `--no-report` | — | `False` | Skip HTML report; write data files only |
| `--budget` | — | None | Budget file (TOML) or `cwv` preset — exits 2 on failure |
| `--budget-format` | — | `text` | Budget output format: `text`, `json`, or `github` |
| `--webhook` | — | None | Webhook URL for budget result notifications |
| `--webhook-on` | — | `always` | When to send webhook: `always` or `fail` |
All `audit` flags (`--device`, `--output-format`, `--output`, `--output-dir`, `--delay`, `--workers`, `--categories`) also apply.
## Configuration
### Config file: `pagespeed.toml`
An optional TOML file for persistent settings and named profiles. The tool searches for it in:
1. Current working directory (`./pagespeed.toml`)
2. User config directory (`~/.config/pagespeed/config.toml`)
You can also pass an explicit path with `--config path/to/config.toml`.
```toml
[settings]
api_key = "YOUR_API_KEY" # or use PAGESPEED_API_KEY env var
urls_file = "urls.txt" # default URL file for -f
delay = 1.5 # seconds between API requests
device = "mobile" # mobile, desktop, or both
output_format = "csv" # csv, json, or both
output_dir = "./reports" # directory for output files
workers = 4 # concurrent workers (1 = sequential)
categories = ["performance"] # Lighthouse categories
verbose = false
[profiles.quick]
device = "mobile"
output_format = "csv"
categories = ["performance"]
[profiles.full]
device = "both"
output_format = "both"
categories = ["performance", "accessibility", "best-practices", "seo"]
[profiles.core-vitals]
device = "both"
output_format = "csv"
categories = ["performance"]
[profiles.client-report]
urls_file = "client_urls.txt"
device = "both"
output_format = "both"
output_dir = "./client-reports"
categories = ["performance", "accessibility", "seo"]
```
### Config resolution order
Settings are merged with the following priority (highest wins):
1. **CLI flags** — explicit command-line arguments
2. **Profile values** — via `--profile name`
3. **`[settings]`** — defaults from config file
4. **Built-in defaults** — hardcoded in the script
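Conceptually, this resolution is a chain of dict overlays where later (higher-priority) layers win; a minimal sketch of the idea, not the tool's actual code:

```python
def resolve(cli: dict, profile: dict, settings: dict, defaults: dict) -> dict:
    """Merge config layers; later layers override earlier ones."""
    merged = dict(defaults)
    # Apply lowest-priority first so highest-priority (CLI) wins.
    for layer in (settings, profile, cli):
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged
```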
### Global flags
| Flag | Short | Default | Description |
|------|-------|---------|-------------|
| `--api-key` | — | config/env | Google API key |
| `--config` | `-c` | auto-discovered | Path to config TOML |
| `--profile` | `-p` | None | Named profile from config |
| `--verbose` | `-v` | False | Verbose output to stderr |
| `--version` | — | — | Print version and exit |
### `audit` / `run` flags
| Flag | Short | Default | Description |
|------|-------|---------|-------------|
| `urls` | — | `[]` | Positional URLs |
| `--file` | `-f` | None | File with one URL per line |
| `--device` | — | `mobile` | `mobile`, `desktop`, or `both` |
| `--output-format` | — | `csv` | `csv`, `json`, or `both` |
| `--output` | `-o` | auto-timestamped | Explicit output file path |
| `--output-dir` | — | `./reports/` | Directory for auto-named files |
| `--delay` | `-d` | `1.5` | Seconds between requests |
| `--workers` | `-w` | `4` | Concurrent workers |
| `--categories` | — | `performance` | Lighthouse categories |
| `--full` | — | `False` | Embed raw `lighthouseResult` in JSON output (ignored for CSV) |
| `--stream` | — | `False` | Print results as NDJSON to stdout as they complete (skips file output) |
## Output Formats
### File naming
By default, output files use UTC timestamps:
```
{output_dir}/{YYYYMMDD}T{HHMMSS}Z-{strategy}.{ext}
```
Examples:
```
./reports/20260216T143022Z-mobile.csv
./reports/20260216T150000Z-both.json
./reports/20260216T143022Z-report.html
```
Use `-o` to override with an explicit path.
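Generating a name in that shape is straightforward; a sketch under the naming convention above (illustrative, not the tool's internals):

```python
from datetime import datetime, timezone
from pathlib import Path

def auto_name(output_dir: str, strategy: str, ext: str) -> Path:
    # UTC timestamp in the {YYYYMMDD}T{HHMMSS}Z shape shown above
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return Path(output_dir) / f"{stamp}-{strategy}.{ext}"
```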
### CSV
Flat table with one row per (URL, strategy) pair. Columns:
| Column | Description |
|--------|-------------|
| `url` | The analyzed URL |
| `strategy` | `mobile` or `desktop` |
| `performance_score` | 0-100 Lighthouse score |
| `lab_fcp_ms` | First Contentful Paint (ms) |
| `lab_lcp_ms` | Largest Contentful Paint (ms) |
| `lab_cls` | Cumulative Layout Shift |
| `lab_speed_index_ms` | Speed Index (ms) |
| `lab_tbt_ms` | Total Blocking Time (ms) |
| `lab_tti_ms` | Time to Interactive (ms) |
| `field_*` | Field (CrUX) metrics (when available) |
| `error` | Error message if the request failed |
### JSON
Structured with metadata header:
```json
{
"metadata": {
"generated_at": "2026-02-16T14:30:22+00:00",
"total_urls": 5,
"strategies": ["mobile", "desktop"],
"tool_version": "1.0.0"
},
"results": [
{
"url": "https://example.com",
"strategy": "mobile",
"performance_score": 92,
"lab_metrics": { "lab_fcp_ms": 1200, "lab_lcp_ms": 1800, ... },
"field_metrics": { "field_lcp_ms": 2100, "field_lcp_category": "FAST", ... },
"error": null
}
]
}
```
With `--full`, each result also includes the complete raw `lighthouseResult` from the API:
```json
{
"results": [
{
"url": "https://example.com",
"strategy": "mobile",
"performance_score": 92,
"lab_metrics": { ... },
"field_metrics": { ... },
"lighthouseResult": {
"audits": { ... },
"categories": { ... },
"categoryGroups": { ... },
"configSettings": { ... },
"environment": { ... },
"fetchTime": "...",
"finalUrl": "https://example.com",
"lighthouseVersion": "...",
"requestedUrl": "https://example.com",
"runWarnings": [],
"stackPacks": [],
"timing": { ... },
"i18n": { ... }
},
"error": null
}
]
}
```
## Metrics Reference
### Lab data (synthetic, from Lighthouse)
| Metric | Good | Needs Work | Poor |
|--------|------|-----------|------|
| First Contentful Paint | < 1.8s | 1.8s–3.0s | > 3.0s |
| Largest Contentful Paint | < 2.5s | 2.5s–4.0s | > 4.0s |
| Cumulative Layout Shift | < 0.1 | 0.1–0.25 | > 0.25 |
| Total Blocking Time | < 200ms | 200ms–600ms | > 600ms |
| Speed Index | < 3.4s | 3.4s–5.8s | > 5.8s |
| Time to Interactive | < 3.8s | 3.8s–7.3s | > 7.3s |
### Field data (real users, from CrUX)
Field data comes from the Chrome User Experience Report. It may not be available for low-traffic sites.
| Metric | Description |
|--------|-------------|
| FCP | First Contentful Paint — when first content appears |
| LCP | Largest Contentful Paint — when main content loads |
| CLS | Cumulative Layout Shift — visual stability |
| INP | Interaction to Next Paint — input responsiveness |
| FID | First Input Delay — (deprecated, replaced by INP) |
| TTFB | Time to First Byte — server response time |
## Rate Limits
| Scenario | Limit |
|----------|-------|
| Without API key | ~25 queries/100 seconds |
| With API key | ~25,000 queries/day (400/100 seconds) |
Tips:
- Use `--delay` to increase time between requests if hitting rate limits
- The tool retries on 429 (rate limit) responses with exponential backoff
- See [Concurrency Model](#concurrency-model) for how `--workers` and `--delay` interact
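The retry-on-429 behavior follows the standard exponential-backoff pattern; a minimal sketch of that pattern (illustrative — not the tool's actual code, and the `call` signature is hypothetical):

```python
import asyncio

async def fetch_with_retry(call, max_retries: int = 3, base_delay: float = 1.0):
    """Retry a request on HTTP 429, doubling the wait each attempt."""
    for attempt in range(max_retries + 1):
        status, body = await call()
        if status != 429:
            return status, body
        if attempt < max_retries:
            # exponential backoff: base, 2x base, 4x base, ...
            await asyncio.sleep(base_delay * 2 ** attempt)
    return status, body
```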
## Concurrency Model
The tool uses `asyncio` + `httpx` for non-blocking HTTP I/O.
**How it works:**
- With `--workers 1` (or effectively 1), requests run strictly sequentially — one finishes before the next starts.
- With `--workers N > 1` (default: 4), all tasks are launched together via `asyncio.gather()`. A shared `asyncio.Semaphore(1)` ensures requests _start_ no more than once per `--delay` seconds:
1. Each coroutine acquires the semaphore
2. Sleeps the remainder of `delay` since the last request started
3. Records the timestamp and releases the semaphore
4. Makes the actual HTTP request — outside the semaphore
Because the HTTP call happens after releasing the semaphore, multiple requests can be **in-flight simultaneously** even though they start `delay` seconds apart. Wall time is therefore much shorter than `n_urls × (delay + latency)`; it converges toward `n_urls × delay + avg_latency` as the number of URLs grows.
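The four steps above can be sketched as follows — a simplified model of the pacing pattern, with illustrative names rather than the tool's actual internals:

```python
import asyncio
import time

async def paced_fetch(sem: asyncio.Semaphore, state: dict, delay: float, fetch):
    # Hold the semaphore only long enough to schedule this request's start.
    async with sem:
        wait = state["last_start"] + delay - time.monotonic()
        if wait > 0:
            await asyncio.sleep(wait)
        state["last_start"] = time.monotonic()
    # The HTTP call runs outside the semaphore, so requests overlap in flight.
    return await fetch()

async def run_all(n_urls: int, delay: float, latency: float):
    sem = asyncio.Semaphore(1)
    state = {"last_start": float("-inf")}

    async def fetch():
        await asyncio.sleep(latency)  # stand-in for request latency
        return "ok"

    return await asyncio.gather(
        *(paced_fetch(sem, state, delay, fetch) for _ in range(n_urls))
    )
```

With 4 URLs, `delay=0.1`, and `latency=0.3`, total wall time is roughly `3 × 0.1 + 0.3 ≈ 0.6s` rather than the sequential `4 × 0.4 = 1.6s`.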
**Practical rule of thumb:**
| Goal | Setting |
|------|---------|
| Safest for rate limits | `--workers 1` (sequential) |
| Default (balanced) | `--workers 4 --delay 1.5` |
| Maximum throughput | `--workers 4 --delay 1.0` (watch for 429s) |
## Cron usage
Output files are auto-named with UTC timestamps, so cron jobs won't overwrite previous results:
```bash
# Every Monday at 6am UTC
0 6 * * 1 cd /path/to/project && pagespeed audit -f urls.txt --profile full
```
## Examples
The [`examples/`](examples/) folder contains ready-to-use configuration files for common workflows:
| Example | Description |
|---------|-------------|
| [`basic/`](examples/basic/) | Minimal config with API key, strategy, and a sample URL list |
| [`multi-profile/`](examples/multi-profile/) | Named profiles for quick, full, and client-report workflows |
| [`ci-budget/`](examples/ci-budget/) | Strict and lenient performance budgets for CI pipelines |
| [`sitemap-pipeline/`](examples/sitemap-pipeline/) | Sitemap auto-discovery with regex filters and section-specific profiles |
Copy any example folder into your project and edit to taste. See [`examples/README.md`](examples/README.md) for full details.
## Testing
The project includes a comprehensive test suite (169 tests across 30 test classes). All tests run offline — API calls, sitemap fetches, and file I/O are mocked.
```bash
# Run all tests
uv run test_pagespeed_insights_tool.py -v
# Run a single test class
uv run test_pagespeed_insights_tool.py -v TestValidateUrl
# Run a specific test method
uv run test_pagespeed_insights_tool.py -v TestExtractMetrics.test_full_extraction
```
## License
This project is licensed under the [MIT License](LICENSE).
| text/markdown | null | null | null | null | null | core-web-vitals, lighthouse, pagespeed, performance, web-vitals | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP :: Site Management"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx",
"pandas",
"rich"
] | [] | [] | [] | [
"Homepage, https://github.com/volkanunsal/pagespeed",
"Repository, https://github.com/volkanunsal/pagespeed",
"Issues, https://github.com/volkanunsal/pagespeed/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:42:14.586506 | pagespeed-2.1.0.tar.gz | 28,864 | bd/04/a54c283727457950e4ca9d1d3152bdd2fbec0f733a933293ae01040a2905/pagespeed-2.1.0.tar.gz | source | sdist | null | false | b37660527c4ff2dc2a613ccc02de678a | ea373f209f9ea0144d34155724411e9d10bab672c45faf436afdae0f3820d3b2 | bd04a54c283727457950e4ca9d1d3152bdd2fbec0f733a933293ae01040a2905 | MIT | [
"LICENSE"
] | 197 |
2.4 | littlehorse-client | 0.15.2 | LittleHorse is a high-performance microservice orchestration engine that allows developers to build scalable, maintainable, and observable applications | # LittleHorse Python SDK
For documentation on how to use this library, please go to [the LittleHorse website](https://littlehorse.io).
For examples go to the [examples](./examples/) folder.
## Dependencies
- Install Python.
- Install [pipx](https://github.com/pypa/pipx): `brew install pipx`
- Install [poetry](https://python-poetry.org/): `pipx install poetry`
- Install [poetry shell plugin](https://github.com/python-poetry/poetry-plugin-shell): `poetry self add poetry-plugin-shell`
## Initialize
```
poetry install
```
## Protobuf Compilation
```
../local-dev/compile-proto.sh
```
## Run tests
```
poetry shell
python -m unittest discover -v
```
## Lint
```
poetry run ruff check .
```
## Validate types
```
poetry run mypy .
```
## Useful Commands
Set python version:
```
poetry env use python3.9
```
## Types Map
Task arguments type reference:
```
VariableType.JSON_OBJ: dict[str, Any]
VariableType.JSON_ARR: list[Any]
VariableType.DOUBLE: float
VariableType.BOOL: bool
VariableType.STR: str
VariableType.INT: int
VariableType.BYTES: bytes
```
## Python Code Formatter
```
poetry shell
black .
```
| text/markdown | LittleHorse | engineering@littlehorse.io | null | null | AGPLv3 | littlehorse | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"authlib<1.7,>=1.6",
"grpcio==1.69.0",
"jproperties<2.2,>=2.1",
"protobuf==6.32.1",
"requests<2.33,>=2.32"
] | [] | [] | [] | [
"Documentation, https://littlehorse.io/docs/server",
"Homepage, https://littlehorse.io",
"Repository, https://github.com/littlehorse-enterprises/littlehorse",
"issues, https://github.com/littlehorse-enterprises/littlehorse/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:41:17.113856 | littlehorse_client-0.15.2.tar.gz | 85,476 | 99/23/7c4fb140ce285c6cf822537a98b9fbc9e7b6b843c0654dc59ee051da994f/littlehorse_client-0.15.2.tar.gz | source | sdist | null | false | deb90a5d3d26db2697eb90157dc2697d | 72d7bce7c3918352cfda1cac2c5576e7e7b89140ee373f8b850ad8308fa91e17 | 99237c4fb140ce285c6cf822537a98b9fbc9e7b6b843c0654dc59ee051da994f | null | [] | 183 |
2.4 | id3-dtc | 0.1.0 | Reusable ID3 Decision Tree Classifier | # ID3 Classifier
Reusable ID3 Decision Tree Algorithm.
| text/markdown | nanashi | null | null | null | MIT | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"pandas"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-20T18:40:17.243874 | id3_dtc-0.1.0.tar.gz | 2,320 | fa/cb/7174299c26558c2a61985712a3d719b08bfe15d955bd1814ee7b71b627b8/id3_dtc-0.1.0.tar.gz | source | sdist | null | false | 9777ceb8b142791fb1d508b4f7c69c35 | 394430d09002b135940df4952e9dda776f60bc2bbcd290793ef410504b341763 | facb7174299c26558c2a61985712a3d719b08bfe15d955bd1814ee7b71b627b8 | null | [
"LICENSE"
] | 201 |
2.4 | promptcache-ai | 0.2.0 | Semantic similarity cache for LLM responses (Redis backend, TTL, cost tracking). | PromptCache
===========
> Reduce your LLM API costs by 30--70% with semantic caching.
PromptCache reuses LLM responses for **semantically similar prompts**, not just exact string matches.
If two users ask:
- "Explain Redis in simple terms"
- "Can you explain Redis simply?"
You shouldn't pay twice.
PromptCache makes sure you don't.
* * * * *

* * * * *
The Problem
--------------
If you're using OpenAI or any LLM API in production, you're likely paying repeatedly for:
- The same question phrased differently
- Similar support requests across users
- Slight variations in prompts
- Background job retries
- RAG pipelines returning near-identical queries
Traditional caching only works for **exact matches**.
LLMs need **semantic caching**.
* * * * *
What PromptCache Does
-----------------------
1. Embeds your prompt into a vector
2. Searches Redis for similar past prompts
3. If similarity ≥ threshold → returns cached response
4. Otherwise → calls the LLM and stores the result
```
User Prompt
↓
Embed → Redis Vector Search
↓
Hit? → Return cached answer
Miss? → Call LLM → Store result
```
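The hit/miss decision in step 3 comes down to a cosine-similarity check against the threshold. A minimal sketch of that comparison (illustrative only — PromptCache does this via Redis vector search, not in Python):

```python
import math

def is_cache_hit(query_vec, cached_vec, threshold: float = 0.92) -> bool:
    # cosine similarity = dot(a, b) / (|a| * |b|)
    dot = sum(a * b for a, b in zip(query_vec, cached_vec))
    norm = math.sqrt(sum(a * a for a in query_vec)) * math.sqrt(
        sum(b * b for b in cached_vec)
    )
    return dot / norm >= threshold
```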
* * * * *
10-Second Example
--------------------
```python
from promptcache import SemanticCache
from promptcache.backends.redis_vector import RedisVectorBackend
from promptcache.embedders.openai import OpenAIEmbedder
from promptcache.types import CacheMeta
embedder = OpenAIEmbedder(model="text-embedding-3-small")
backend = RedisVectorBackend(
url="redis://localhost:6379/0",
dim=embedder.dim,
)
cache = SemanticCache(
backend=backend,
embedder=embedder,
namespace="support-bot",
threshold=0.92,
)
meta = CacheMeta(
model="gpt-4.1-mini",
system_prompt="You are a helpful support assistant.",
)
result = cache.get_or_set(
prompt="How do I reset my password?",
llm_call=my_llm_call,
extract_text=lambda r: r.output_text,
meta=meta,
)
print(result.cache_hit)  # True or False
```
That's it.
* * * * *
Example Impact
-----------------
In a SaaS support assistant:
- 62% cache hit rate
- 48% reduction in token usage
- 44% reduction in API spend
Your mileage depends on workload --- but high-volume, repetitive systems benefit the most.
* * * * *
Production-Ready Design
--------------------------
PromptCache isolates cache entries by:
- `namespace`
- `model`
- `system_prompt`
- `tools_schema`
- `embedder`
This prevents cross-context contamination.
Additional features:
- ✅ Redis HNSW vector search (cosine similarity)
- ✅ TTL support
- ✅ Hit-rate statistics
- ✅ Optional cost tracking
- ✅ In-memory backend (for testing)
- ✅ Framework-agnostic (no LangChain dependency)
* * * * *
Installation
---------------
```bash
pip install promptcache-ai
```
Optional OpenAI embedder:
```bash
pip install promptcache-ai[openai]
```
* * * * *
Redis Setup
--------------
PromptCache requires **Redis Stack** (RediSearch with vector support).
Run locally:
```bash
docker run -d --name redis-stack -p 6379:6379 redis/redis-stack:latest
```
Verify:
```bash
redis-cli MODULE LIST
```
You should see:
```
search
```
* * * * *
Stats
--------
Measure impact:
```python
print(cache.stats())
```
Example:
```json
{
"hits": 1240,
"misses": 860,
"total": 2100,
"hit_rate_percent": 59.05
}
```
* * * * *
When It Helps Most
---------------------
- Customer support bots
- Internal copilots
- FAQ systems
- Knowledge assistants
- Deterministic / low-temperature tasks
- High-volume similar prompts
* * * * *
When It May Not Help
-----------------------
- Highly personalized prompts
- Creative high-temperature tasks
- Frequently changing context
* * * * *
Testing
----------
Run unit tests:
```bash
pytest
```
Run Redis integration tests:
```bash
export REDIS_URL="redis://localhost:6379/0"
pytest
```
| text/markdown | null | Tase Nikol <anikolaou.ph@gmail.com> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"redis>=5.0.0",
"numpy>=1.24",
"pydantic>=2.6",
"openai>=1.0.0; extra == \"openai\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"ruff>=0.6.0; extra == \"dev\"",
"mypy>=1.8; extra == \"dev\"",
"types-redis>=4.6.0.20241004; extra == \"dev\"",
"sentence-transformers>=2.6.0; extra == \"bench\"",
"tiktoken>=0.7.0; extra == \"bench\"",
"matplotlib>=3.8.0; extra == \"bench\""
] | [] | [] | [] | [
"Repository, https://github.com/tase-nikol/promptcache"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:40:15.865676 | promptcache_ai-0.2.0.tar.gz | 13,054 | 72/bc/960b25151b2abd64bb5c5a86e2a43ba59da1787b13308547c2b677876019/promptcache_ai-0.2.0.tar.gz | source | sdist | null | false | 37c009d58a37a986efe5cdfb690fd7db | a38431d4230b80b36f44cfd34edeef91ec134c0ed5505cea8daaecb4d5358ed5 | 72bc960b25151b2abd64bb5c5a86e2a43ba59da1787b13308547c2b677876019 | null | [
"LICENSE"
] | 191 |
2.4 | mcdc | 1.2.0.dev20260220 | MC/DC (Monte Carlo Dynamic Code): a performant, scalable, and machine-portable Python-based Monte Carlo neutron transport package | # MC/DC: Monte Carlo Dynamic Code

[](https://github.com/CEMeNT-PSAAP/MCDC/actions/workflows/regression_test.yml)
[](https://doi.org/10.21105/joss.06415)
[](https://mcdc.readthedocs.org/en/dev/ )
[](https://opensource.org/licenses/BSD-3-Clause)
MC/DC is a performant, scalable, and machine-portable Python-based Monte Carlo
neutron transport software, initiated by the Center for Exascale Monte Carlo
Neutron Transport ([CEMeNT](https://cement-psaap.github.io/)), and currently
in active development in the Center for Advancing the Radiation Resilience of
Electronics ([CARRE](https://carre-psaapiv.org)).
## Documentation
All detailed instructions and guides are hosted on [Read the Docs](https://mcdc.readthedocs.io/en/dev/). These include:
- [Installation](https://mcdc.readthedocs.io/en/dev/install.html),
- [User Guide](https://mcdc.readthedocs.io/en/dev/user/index.html),
- [API Reference](https://mcdc.readthedocs.io/en/dev/pythonapi/index.html), and
- [Contribution Guide](https://mcdc.readthedocs.io/en/dev/contribution/index.html).
## Citing
If you use MC/DC in your work and want to provide attribution, please cite the following as appropriate:
- **[MC/DC Origins]** I. Variansyah, et al. (2023). Development of MC/DC: a performant, scalable, and portable Python-based Monte Carlo neutron transport code. Proc. ANS M&C 2023, Niagara Falls, Canada. https://doi.org/10.48550/arXiv.2305.07636.
- **[MC/DC JOSS article]** J. Morgan, et al. (2024). Monte Carlo / Dynamic Code (MC/DC): An accelerated Python package for fully transient neutron transport and rapid methods development. Journal of Open Source Software, 9(96), 6415. https://doi.org/10.21105/joss.06415.
## Reporting Bugs and Issues
To report bugs or request new features, feel free to [open an Issue](https://github.com/CEMeNT-PSAAP/MCDC/issues).
| text/markdown | Caleb Shaw, Rohan Pankaj, Alexander Mote, Ethan Lame, Benjamin Whewell, Ryan G. McClarren, Todd S. Palmer, Lizhong Chen, Dmitriy Y. Anistratov, C. T. Kelley, Camille J. Palmer, Kyle E. Niemeyer | Ilham Variansyah <variansi@oregonstate.edu>, Sam Pasmann <spasmann@nd.edu>, Joanna Morgan <morgan83@llnl.gov>, Kayla Clements <clemekay@oregonstate.edu>, Braxton Cuneo <bcuneo@seattleu.edu> | null | Ilham Variansyah <variansi@oregonstate.edu>, Braxton Cuneo <bcuneo@seattleu.edu>, Kayla Clements <clemekay@oregonstate.edu>, Joanna Piper Morgan <morgan83@llnl.gov>, "Kyle E. Niemeyer" <kyle.niemeyer@oregonstate.edu> | BSD 3-Clause License
Copyright (c) 2021, CEMeNT
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | GPU, HPC, Monte Carlo, mpi4py, neutron transport, nuclear engineering, numba | [
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Natural Language :: English",
"Operating System :: MacOS",
"Operating System :: Unix",
"Programming Language :: Python :: 3",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"colorama",
"h5py",
"matplotlib",
"mpi4py>=3.1.4",
"numba>=0.60.0",
"numpy>=2.0.0",
"scipy",
"sympy",
"black; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pytest; extra == \"dev\"",
"furo; extra == \"docs\"",
"sphinx-toolbox; extra == \"docs\"",
"sphinx==7.2.6; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://cement-psaap.github.io/",
"Repository, https://github.com/CEMeNT-PSAAP/MCDC",
"Documentation, https://mcdc.readthedocs.io/en/dev/",
"Issues, https://github.com/CEMeNT-PSAAP/MCDC/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:40:04.181252 | mcdc-1.2.0.dev20260220.tar.gz | 10,210,637 | 36/6f/4ad951b0fc99c75e49e31d5ed48200e1c79b9629a4c96307123bef2d722b/mcdc-1.2.0.dev20260220.tar.gz | source | sdist | null | false | 0d9d036cc521a409b4954782b1eb37e6 | 336d1ddfa9afc38e532a1afb88b44065c5fa12a6348bb83f6738c9d90d059088 | 366f4ad951b0fc99c75e49e31d5ed48200e1c79b9629a4c96307123bef2d722b | null | [
"LICENSE"
] | 177 |
2.4 | muxy | 0.1.0a19 | Lightweight router for building HTTP services. | # muxy
`muxy` is a lightweight router for building HTTP services conforming to
Granian's Rust Server Gateway Interface (RSGI). It intentionally avoids magic,
prioritising explicit and composable code.
```
uv add muxy
```
## Features
- **first-class router composition** - modularise your code by nesting routers with no overhead
- **correct, efficient routing** - explicit route hierarchy so behaviour is always predictable
- **lightweight** - the core router is little more than a simple data structure and has no dependencies
- **control** - control the full HTTP request/response cycle without digging through framework layers
- **middleware** - apply common logic to path groups simply and clearly
## Inspiration
Go's `net/http` and `go-chi/chi` are inspirations for `muxy`. I wanted their simplicity
without having to switch language. You can think of the `RSGI` interface as the muxy
equivalent of the net/http `HandlerFunc` interface, and `muxy.Router` as an equivalent of
chi's `Mux`.
## Examples
**Getting started**
```python
import asyncio
import uvloop
from granian.server.embed import Server
from muxy import Router
from muxy.rsgi import HTTPProtocol, HTTPScope
async def home(s: HTTPScope, p: HTTPProtocol) -> None:
p.response_str(200, [], "Hello world!")
async def main() -> None:
router = Router()
router.get("/", home)
server = Server(router)
try:
await server.serve()
except asyncio.CancelledError:
await server.shutdown()
if __name__ == "__main__":
uvloop.run(main())
```
**Bigger app**
See [examples/server.py](https://github.com/oliverlambson/muxy/blob/main/examples/server.py) for a runnable script.
```python
import asyncio
import json
import sqlite3
from json.decoder import JSONDecodeError
import uvloop
from granian.server.embed import Server
from muxy import Router, path_params
from muxy.rsgi import HTTPProtocol, HTTPScope, RSGIHTTPHandler
async def main() -> None:
db = sqlite3.connect(":memory:")
router = Router()
router.not_found(not_found)
router.method_not_allowed(method_not_allowed)
router.get("/", home)
router.mount("/user", user_router(db))
server = Server(router)
try:
await server.serve()
except asyncio.CancelledError:
await server.shutdown()
async def not_found(_scope: HTTPScope, proto: HTTPProtocol) -> None:
proto.response_str(404, [("Content-Type", "text/plain")], "Not found")
async def method_not_allowed(_scope: HTTPScope, proto: HTTPProtocol) -> None:
proto.response_str(405, [("Content-Type", "text/plain")], "Method not allowed")
async def home(s: HTTPScope, p: HTTPProtocol) -> None:
p.response_str(200, [("Content-Type", "text/plain")], "Welcome home")
def user_router(db: sqlite3.Connection) -> Router:
router = Router()
router.get("/", get_users(db))
router.get("/{id}", get_user(db))
router.post("/", create_user(db))
router.patch("/{id}", update_user(db))
return router
def get_users(db: sqlite3.Connection) -> RSGIHTTPHandler:
# closure over handler function to make db available within the handler
async def handler(s: HTTPScope, p: HTTPProtocol) -> None:
cur = db.cursor()
cur.execute("SELECT * FROM user")
result = cur.fetchall()
serialized = json.dumps([{"id": row[0], "name": row[1]} for row in result])
p.response_str(200, [], serialized)
return handler
def get_user(db: sqlite3.Connection) -> RSGIHTTPHandler:
async def handler(s: HTTPScope, p: HTTPProtocol) -> None:
cur = db.cursor()
user_id = path_params.get()["id"]
try:
user_id = int(user_id)
except ValueError:
p.response_str(404, [("Content-Type", "text/plain")], "Not found")
return
cur.execute("SELECT * FROM user WHERE id = ?", (user_id,))
result = cur.fetchone()
if result is None:
p.response_str(404, [("Content-Type", "text/plain")], "Not found")
return
serialized = json.dumps({"id": result[0], "name": result[1]})
p.response_str(200, [("Content-Type", "application/json")], serialized)
return handler
def create_user(db: sqlite3.Connection) -> RSGIHTTPHandler:
async def handler(s: HTTPScope, p: HTTPProtocol) -> None:
cur = db.cursor()
body = await p()
try:
payload = json.loads(body)
except JSONDecodeError:
p.response_str(422, [("Content-Type", "text/plain")], "Invalid json")
return
try:
name = payload["name"]
except KeyError:
p.response_str(422, [("Content-Type", "text/plain")], "No name key")
return
cur.execute("INSERT INTO user (name) VALUES (?) RETURNING *", (name,))
result = cur.fetchone()
serialized = json.dumps({"id": result[0], "name": result[1]})
p.response_str(201, [("Content-Type", "application/json")], serialized)
return handler
def update_user(db: sqlite3.Connection) -> RSGIHTTPHandler: ...
if __name__ == "__main__":
uvloop.run(main())
```
| text/markdown | null | null | null | null | null | http, router, rsgi | [
"Development Status :: 3 - Alpha",
"Environment :: Web Environment",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"muxy[compress]; extra == \"all\"",
"muxy[otel]; extra == \"all\"",
"cramjam<3.0.0,>=2.11.0; extra == \"compress\"",
"opentelemetry-api<2.0.0,>=1.39.1; extra == \"otel\""
] | [] | [] | [] | [
"Repository, https://github.com/oliverlambson/muxy",
"Issues, https://github.com/oliverlambson/muxy/issues",
"Changelog, https://github.com/oliverlambson/muxy/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:39:49.190694 | muxy-0.1.0a19.tar.gz | 28,981 | c9/10/279a08338754771f95d87226daf4f7bbae8ffc0676b138a70860a3b34c3b/muxy-0.1.0a19.tar.gz | source | sdist | null | false | 8176274cb5dff0b2b6ee2579cfb8e946 | 20a67f8b36524dc24613de90339662ea26e954e9eda6860f71048b95844dfc47 | c910279a08338754771f95d87226daf4f7bbae8ffc0676b138a70860a3b34c3b | MIT | [
"LICENSE"
] | 282 |
2.4 | pythonanywhere | 0.19.0 | PythonAnywhere helper tools for users | 
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/pythonanywhere/)
[](https://pepy.tech/project/pythonanywhere)
# PythonAnywhere cli tool
`pa` is a single command to manage PythonAnywhere services.
It is designed to be run from PythonAnywhere consoles, but many subcommands can be executed directly
from your own machine (see [usage](#usage) below).
## Installing
### On PythonAnywhere
In a PythonAnywhere Bash console, run:
```
pip3.10 install --user pythonanywhere
```
If there is no `python3.10` on your PythonAnywhere account,
you should upgrade your account to the newest system image.
See [here](https://help.pythonanywhere.com/pages/ChangingSystemImage) how to do that.
`pa` requires Python 3.10 or later, and we recommend using the latest system image.
### On your own machine
Install the `pythonanywhere` package from [PyPI](https://pypi.org/project/pythonanywhere/).
We recommend using `pipx` if you want to use it only as a cli tool, or a virtual environment
if you want to use a programmatic interface in your own code.
## Usage
There are two ways to use the package: run the scripts directly, or use the underlying API wrappers in your own code.
### Command line interface
```
pa [OPTIONS] COMMAND [ARGS]...
Options:
--install-completion Install completion for the current shell.
--show-completion Show completion for the current shell, to copy it or
customize the installation.
-h, --help Show this message and exit.
Commands:
django Makes Django Girls tutorial projects deployment easy
path Perform some operations on files
schedule Manage scheduled tasks
students Perform some operations on students
webapp Everything for web apps: use this if you're not using our experimental features
website EXPERIMENTAL: create and manage ASGI websites
```
### Running `pa` on your local machine
`pa` expects some environment variables that are provided automatically when you run your code in a PythonAnywhere console.
You need to provide them yourself if you run `pa` on your local machine.
`API_TOKEN` -- you need to set this to allow `pa` to connect to the [PythonAnywhere API](https://help.pythonanywhere.com/pages/API).
To get an API token, log into PythonAnywhere and go to the "Account" page using the link at the top right.
Click on the "API token" tab, and click the "Create a new API token" button to get your token.
`PYTHONANYWHERE_SITE` is used to connect to PythonAnywhere API and defaults to `www.pythonanywhere.com`,
but you may need to set it to `eu.pythonanywhere.com` if you use our EU site.
If your username on PythonAnywhere is different from the username on your local machine,
you may need to set `USER` for the environment you run `pa` in.
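The lookup order described above can be sketched in Python. The helper below is illustrative only (it is not part of the `pythonanywhere` package); the variable names match the ones documented here:

```python
import os

def resolve_pa_environment(environ=os.environ):
    """Illustrative sketch of the environment `pa` expects (not the actual implementation)."""
    token = environ.get("API_TOKEN")
    if token is None:
        # Without a token, pa cannot talk to the PythonAnywhere API.
        raise RuntimeError("API_TOKEN is not set; create one on the Account page")
    # Defaults to the main site; EU users set eu.pythonanywhere.com instead.
    site = environ.get("PYTHONANYWHERE_SITE", "www.pythonanywhere.com")
    # USER may need overriding when local and PythonAnywhere usernames differ.
    user = environ.get("USER")
    return {"token": token, "site": site, "user": user}
```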
### Programmatic usage in your code
Take a look at the [`pythonanywhere.task`](https://github.com/pythonanywhere/helper_scripts/blob/master/pythonanywhere/task.py)
module and docstrings of `pythonanywhere.task.Task` class and its methods.
### Legacy scripts
Some legacy [scripts](https://github.com/pythonanywhere/helper_scripts/blob/master/legacy.md) (separate for each action) are still available.
## Contributing
Pull requests are welcome! You'll find tests in the [tests](https://github.com/pythonanywhere/helper_scripts/blob/master/tests) folder...
```
# prep your dev environment
mkvirtualenv --python=python3.10 helper_scripts
pip install -r requirements.txt
pip install -e .

# running the tests:
pytest

# make sure that the code that you have written is well tested:
pytest --cov=pythonanywhere --cov=scripts

# to just run the fast tests:
pytest -m 'not slowtest' -v
```
| text/markdown | null | PythonAnywhere LLP <developers@pythonanywhere.com> | null | null | null | pythonanywhere, api, cloud, web, hosting | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.10"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"docopt",
"packaging",
"python-dateutil",
"pythonanywhere-core>=0.3.0",
"requests",
"schema==0.7.2",
"snakesay==0.10.4",
"tabulate",
"typer",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-mock; extra == \"test\"",
"pytest-mypy; extra == \"test\"",
"psutil; extra == \"test\"",
"responses; extra == \"test\"",
"virtualenvwrapper; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/pythonanywhere/helper_scripts",
"Repository, https://github.com/pythonanywhere/helper_scripts",
"Issues, https://github.com/pythonanywhere/helper_scripts/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:39:04.008079 | pythonanywhere-0.19.0.tar.gz | 56,018 | 46/03/1a32e5be048918db3b6665329892e7e438ce1f406500a4925325e3f580ed/pythonanywhere-0.19.0.tar.gz | source | sdist | null | false | 66c677732ab80ac7d82323645fee5b01 | 98e9a97e0702f0364f1e9b449881cd00bf4ede17a37926b92294a36aae098038 | 46031a32e5be048918db3b6665329892e7e438ce1f406500a4925325e3f580ed | MIT | [
"LICENSE"
] | 254 |
2.4 | askchat | 2.1.0 | Interact with ChatGPT in terminal via chattool | # AskChat
<div align="center">
<a href="https://pypi.python.org/pypi/askchat">
<img src="https://img.shields.io/pypi/v/askchat.svg" alt="PyPI version" />
</a>
<a href="https://github.com/cubenlp/askchat/actions/workflows/test.yml">
<img src="https://github.com/cubenlp/askchat/actions/workflows/test.yml/badge.svg" alt="Tests" />
</a>
<a href="https://cubenlp.github.io/askchat/">
<img src="https://img.shields.io/badge/docs-github_pages-blue.svg" alt="Documentation Status" />
</a>
<a href="https://codecov.io/gh/cubenlp/askchat">
<img src="https://codecov.io/gh/cubenlp/askchat/branch/main/graph/badge.svg" alt="Coverage" />
</a>
</div>
<div align="center">
<img src="docs/assets/askchat.png" alt="Ask Chat" width="256">
[English](README-en.md) | [简体中文](README.md)
</div>
An interactive ChatGPT tool that runs from the command line, so you can call ChatGPT anytime, anywhere.
<div align="center">
<div style="margin-top: 10px; color: #555;">Terminal usage</div>
<img src="docs/assets/svgs/hello.svg" alt="hello" width="480">
</div>
<div align="center">
<div style="margin-top: 10px; color: #555;">Jupyter Lab</div>
<img src="docs/assets/jupyter.gif" alt="jupyter" width="480">
</div>
## Installation and Configuration
```bash
pip install askchat --upgrade
```
Configure the environment variables:
```bash
# Initialize the configuration (interactive)
chatenv init -i
# Or set the environment variables manually
export OPENAI_API_KEY="your-api-key"
export OPENAI_API_BASE="https://api.openai.com/v1"
export OPENAI_API_BASE_URL="https://api.openai.com"
export OPENAI_API_MODEL="gpt-3.5-turbo"
```
Note: `OPENAI_API_BASE` takes precedence over `OPENAI_API_BASE_URL`; setting either one is enough.
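The precedence between the two base-URL variables can be sketched as follows. This helper is an illustration of the documented behavior, not the askchat source; the `/v1` suffix handling is inferred from the example values above:

```python
import os

def resolve_api_base(environ=os.environ):
    # OPENAI_API_BASE wins if both variables are set.
    base = environ.get("OPENAI_API_BASE")
    if base:
        return base
    base_url = environ.get("OPENAI_API_BASE_URL")
    if base_url:
        # The *_URL variant omits the /v1 suffix, so append it.
        return base_url.rstrip("/") + "/v1"
    # Default OpenAI endpoint when neither variable is set.
    return "https://api.openai.com/v1"
```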
## Usage
Once configured, run a simple query:
```bash
ask hello world
```
Beyond that, you can use `askchat` for more flexible conversations.
## AskChat
`askchat` supports API debugging, conversation management, and more.
### Examples
<div align="center">
<div style="margin-top: 10px; color: #555;">1. API debugging</div>
<img src="docs/assets/svgs/debug.svg" alt="debug" width="480">
</div>
<div align="center">
<div style="margin-top: 10px; color: #555;">2. List available models</div>
<img src="docs/assets/svgs/validmodels.svg" alt="validmodels" width="480">
</div>
<div align="center">
<div style="margin-top: 10px; color: #555;">3. Multi-turn conversations: save, load, and more</div>
<img src="docs/assets/svgs/chatlog.svg" alt="chatlog" width="480">
</div>
<div align="center">
<div style="margin-top: 10px; color: #555;">4. Specify parameters to use different models and APIs</div>
<img src="docs/assets/svgs/para-models.svg" alt="para-models" width="480">
</div>
### Conversation management
Save, load, delete, and list conversation histories, and continue previous conversations.
| Parameter | Example | Description |
|---------------------|------------------|--------------------------------------------|
| `-c` | `askchat -c <message>` | Continue the previous conversation |
| `--regenerate` | `askchat -r` | Regenerate the last reply of the previous conversation |
| `--load` | `askchat -l <file>` | Load a conversation history |
| `--print` | `askchat -p [<file>]` | Print the last or a specified conversation history |
| `--save` | `askchat -s <file>` | Save the current conversation history to a file |
| `--delete` | `askchat -d <file>` | Delete a specified conversation history file |
| `--list` | `askchat --list` | List all saved conversation history files |
All conversations are saved in `~/.askchat/`; the most recent conversation is saved in `~/.askchat/_last_chat.json`.
### Model parameters
Default parameters of `askchat`, used to interact with ChatGPT directly or to configure the API connection.
| Parameter | Example | Description |
|-----------------|-----------------|-----------------------------------|
| `<message>` | `askchat hello` | The simplest conversation |
| `--model` | `-m gpt-3.5-turbo` | Specify the model name |
| `--base-url` | `-b https://api.example.com` | Set the base URL (without `/v1`) |
| `--api-base` | `--api-base https://api.example.com/v1` | Set the base URL |
| `--api-key` | `-a sk-xxxxxxx` | Provide the OpenAI API key |
| `--option` | `-o top_p 1 temperature 0.5` | Set request parameters |
Note: some model APIs, such as Zhipu, use `/v4` as the API base path; in that case, use the `--api-base` parameter.
### Other parameters
Auxiliary features such as generating config files, debug logging, printing model lists, and showing version information; use `--help` to see all supported parameters.
| Parameter | Example | Description |
|---------------------------|----------------------|--------------------------------------------|
| `--print-url` | `askchat hello --print-url` | Print the actual request URL |
| `--debug` | `askchat --debug` | Print debug logs |
| `--valid-models` | `askchat --valid-models` | Print models whose names contain "gpt" |
| `--all-valid-models` | `askchat --all-valid-models` | Print all models |
| `--version` | `askchat -v` | Show the `askchat` version |
Note: `--all-valid-models` prints all available models, including Embedding, dalle-3, tts, and so on; use `--valid-models` to filter them out.
## Issues and Feedback
If you run into any problems or have suggestions, feel free to open an [Issue](https://github.com/cubenlp/askchat/issues).
| text/markdown | null | Rex Wang <1073853456@qq.com> | null | null | MIT | askchat | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"chattool>=5.0.0",
"python-dotenv>=0.17.0",
"Click>=8.0",
"platformdirs>=2.0.0",
"pip>=25.0; extra == \"dev\"",
"bump2version; extra == \"dev\"",
"wheel; extra == \"dev\"",
"watchdog; extra == \"dev\"",
"flake8; extra == \"dev\"",
"tox; extra == \"dev\"",
"coverage; extra == \"dev\"",
"Sphinx; extra == \"dev\"",
"twine; extra == \"dev\"",
"pytest; extra == \"dev\"",
"build; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/cubenlp/askchat",
"Repository, https://github.com/cubenlp/askchat"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-20T18:38:58.742647 | askchat-2.1.0.tar.gz | 1,307,570 | 69/80/54c9c1f967417b6983415893516833c0011dbcb977fd2cc1a901f849dd9d/askchat-2.1.0.tar.gz | source | sdist | null | false | 34704ce37395b526d4005e40e318075a | 5a6fe1c6aef9ce1c761c41d27194aa3100871727a4592a268a8e3d813c49438c | 698054c9c1f967417b6983415893516833c0011dbcb977fd2cc1a901f849dd9d | null | [
"LICENSE"
] | 202 |
2.4 | rlm-code | 0.1.6 | RLM Code: Research Playground & Evaluation OS for Recursive Language Model Agentic Systems | # RLM Code
<p align="center">
<a href="https://github.com/SuperagenticAI/rlm-code">
<img src="https://github.com/SuperagenticAI/rlm-code/raw/main/assets/rlm-code-logo.png" alt="RLM Code logo" width="320">
</a>
</p>
[](https://pypi.org/project/rlm-code/)
[](https://pypi.org/project/rlm-code/)
[](https://pypi.org/project/rlm-code/)
[](https://pypi.org/project/rlm-code/)
[](https://github.com/SuperagenticAI/rlm-code/actions/workflows/ci.yml)
[](https://github.com/SuperagenticAI/rlm-code/actions/workflows/pre-commit.yml)
[](https://github.com/SuperagenticAI/rlm-code/actions/workflows/deploy-docs.yml)
[](https://github.com/SuperagenticAI/rlm-code/actions/workflows/release.yml)
[](https://superagenticai.github.io/rlm-code/)
[](https://github.com/SuperagenticAI/rlm-code/stargazers)
[](https://github.com/SuperagenticAI/rlm-code/issues)
[](https://github.com/SuperagenticAI/rlm-code/pulls)
**Run LLM-powered agents in a REPL loop, benchmark them, and compare results.**
RLM Code implements the [Recursive Language Models](https://arxiv.org/abs/2502.07503) (RLM) approach from the 2025 paper release. Instead of stuffing your entire document into the LLM's context window, RLM stores it as a Python variable and lets the LLM write code to analyze it, chunk by chunk, iteration by iteration. This is dramatically more token-efficient for large inputs.
RLM Code wraps this algorithm in an interactive terminal UI with built-in benchmarks, trajectory replay, and observability.
## Release v0.1.6
This release adds the new CodeMode path as an opt-in harness strategy.
- New harness strategy: `strategy=codemode` (default remains `strategy=tool_call`)
- MCP bridge flow for CodeMode: `search_tools` -> typed tool surface -> `call_tool_chain`
- Guardrails before execution: blocked API classes plus timeout/size/tool-call caps
- Benchmark telemetry for side-by-side comparison: `tool_call` vs `codemode`
- Dedicated docs section for CodeMode: quickstart, architecture, guardrails, evaluation
Example:
```text
/harness run "implement feature and add tests" steps=8 mcp=on strategy=codemode mcp_server=codemode
```
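The guardrails bullet above can be pictured as a pre-flight check that runs before any model-written code executes. Everything in this sketch (names, caps, the blocked list) is hypothetical and not the actual rlm-code implementation:

```python
# Hypothetical pre-execution guardrail check; names and limits are illustrative.
BLOCKED_API_CLASSES = {"subprocess", "socket", "ctypes"}

def preflight(code: str, tool_calls_so_far: int,
              max_code_bytes: int = 20_000, max_tool_calls: int = 16) -> None:
    """Raise before execution if any cap or blocked-API rule is violated."""
    if len(code.encode()) > max_code_bytes:
        raise ValueError("size cap exceeded")
    if tool_calls_so_far >= max_tool_calls:
        raise ValueError("tool-call cap exceeded")
    for api in BLOCKED_API_CLASSES:
        if api in code:
            raise ValueError(f"blocked API class: {api}")
```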
## Documentation
<p align="center">
<a href="https://superagenticai.github.io/rlm-code/">
<img alt="Read the RLM Code Docs" src="https://img.shields.io/badge/Read%20the%20Docs-RLM%20Code-ff7a18?style=for-the-badge&logo=readthedocs&logoColor=white">
</a>
</p>
<p align="center">
<a href="https://superagenticai.github.io/rlm-code/"><strong>Open the full documentation</strong></a>
</p>
## Install
```bash
uv tool install "rlm-code[tui,llm-all]"
```
This installs `rlm-code` as a globally available command with its own isolated environment. You get the TUI and all LLM provider clients (OpenAI, Anthropic, Gemini).
Requirements:
- Python 3.11+
- `uv` (recommended) or `pip`
- one model route (BYOK API key or local server like Ollama)
- one secure execution backend (Docker recommended; Monty optional)
Don't have uv? Install it first:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
<details>
<summary>Alternative: install with pip</summary>
```bash
pip install rlm-code[tui,llm-all]
```
</details>
<p align="center">
<img src="https://github.com/SuperagenticAI/rlm-code/raw/main/assets/rlm-lab.png" alt="RLM Research Lab view" width="980">
</p>
## Quick Start
### 1. Launch
```bash
mkdir -p ~/my-project && cd ~/my-project
rlm-code
```
This opens the terminal UI. You'll see a chat input at the bottom and tabs across the top.
### 2. Connect to an LLM
Type one of these in the chat input:
```
/connect anthropic claude-opus-4-6
```
or
```
/connect openai gpt-5.3-codex
```
or
```
/connect gemini gemini-2.5-flash
```
or for a free local model via [Ollama](https://ollama.com/):
```
/connect ollama llama3.2
```
> You need the matching API key in your environment (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GEMINI_API_KEY`) or in a `.env` file in your project directory. Ollama needs no key, just a running Ollama server.
You can also follow the interactive path by running `/connect` with no arguments. Check that it worked:
```
/status
```
### 3. Run your first RLM task
```
/rlm run "Write a Python function that finds the longest common subsequence of two strings"
```
This starts the RLM loop: the LLM writes code in a sandboxed REPL, executes it, sees the output, writes more code, and iterates until it calls `FINAL(answer)` with the result.
### 4. Run a benchmark
Benchmarks let you measure how well a model performs on a set of tasks:
```
/rlm bench preset=pure_rlm_smoke
```
This runs 3 test cases through the RLM loop and scores the results.
See all available benchmarks:
```
/rlm bench list
```
### 5. View results
Use the **Research** tab (`Ctrl+5`) for live benchmark and trajectory views.
After at least two benchmark runs, export a compare report:
```
/rlm bench report candidate=latest baseline=previous format=markdown
```
### 6. Replay a session step-by-step
```
/rlm status
/rlm replay <run_id>
```
Walk through the last run one step at a time, see what code the LLM wrote, what output it got, and what it did next.
### 7. Use RLM Code as a coding agent (local/BYOK/ACP)
RLM Code can also be used as a coding-agent harness in the TUI, much like Claude Code or Codex. It provides a minimal harness that steers the model to write code.
```text
/harness tools
/harness run "fix failing tests and add regression test" steps=8 mcp=on
```
ACP is supported too:
```text
/connect acp
/harness run "implement feature X with tests" steps=8 mcp=on
```
Notes:
- In Local/BYOK connection modes, chat prompts that look like coding tasks can auto-route to the harness.
- In ACP mode, auto-routing is intentionally off; use `/harness run ...` explicitly.
## How the RLM Loop Works
Traditional LLM usage: paste your document into the prompt, ask a question, hope the model doesn't lose details in the middle.
RLM approach:
1. Your document is stored as a Python variable `context` in a REPL
2. The LLM writes code to process it (e.g., `len(context)`, `context[:5000]`, `context.split('\n')`)
3. The code runs, and the LLM sees the output
4. The LLM writes more code based on what it learned
5. Repeat until the LLM calls `FINAL("here is my answer")`
This means the LLM can handle documents much larger than its context window, because it reads them in chunks through code rather than all at once through the prompt.
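The loop above can be sketched in miniature. This is an unsandboxed toy illustration, not the RLM Code engine: the `llm_step` callable stands in for the real model, which would read the REPL state and emit the next snippet of code:

```python
def rlm_loop(context, llm_step, max_steps=8):
    """Minimal RLM loop: the 'model' writes code against `context` until it calls FINAL."""
    result = []
    env = {"context": context, "FINAL": lambda answer: result.append(answer)}
    for _ in range(max_steps):
        code = llm_step(env)   # the model decides the next code to run
        exec(code, env)        # run the model-written code in the REPL namespace
        if result:             # the model signalled completion via FINAL(...)
            return result[0]
    return None

# Toy stand-in policy: first measure the context, then answer.
steps = iter(["n = len(context)", "FINAL(f'context has {n} chars')"])
answer = rlm_loop("x" * 1000, lambda env: next(steps))
```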
## What This Is (and Is Not)
RLM Code is:
- a research playground for recursive/model-assisted coding workflows
- a benchmarking and replay tool for reproducible experiments
RLM Code is not:
- a no-config consumer chat app
- guaranteed cheap (recursive runs can be expensive)
- safe to run with unrestricted execution settings
Use secure backend defaults (`/sandbox profile secure`) for normal use.
## Key Commands
| Command | What it does |
|---------|-------------|
| `/connect <provider> <model>` | Connect to an LLM |
| `/model` | Interactive model picker |
| `/status` | Show connection status |
| `/sandbox profile secure` | Apply secure sandbox defaults (Docker-first + strict pure RLM) |
| `/rlm run "<task>"` | Run a task through the RLM loop |
| `/rlm bench preset=<name>` | Run a benchmark preset |
| `/rlm bench list` | List available benchmarks |
| `/rlm bench compare` | Compare latest benchmark run with previous run |
| `/rlm abort [run_id\|all]` | Cancel active run(s) cooperatively |
| `/harness run "<task>"` | Run tool-using coding harness loop |
| `/rlm replay` | Step through the last run |
| `/rlm chat "<question>"` | Ask the LLM a question about your project |
| `/help` | Show all available commands |
## Cost and Safety Guardrails
Start bounded:
```text
/rlm run "small scoped task" steps=4 timeout=30 budget=60
```
For benchmarks, start with small limits:
```text
/rlm bench preset=dspy_quick limit=1
```
If a run is getting out of hand:
```text
/rlm abort all
```
## What You Can Do With It
- **Analyze large documents**: Feed in a 500-page PDF and ask questions; the LLM reads it in chunks via code
- **Compare models**: Run the same benchmark with different providers and see who scores higher
- **Compare paradigms**: Test Pure RLM vs CodeAct vs Traditional approaches on the same task
- **Debug agent behavior**: Replay any run step-by-step to see exactly what the agent did
- **Track experiments**: Every run is logged with metrics, tokens used, and trajectory
## Supported LLM Providers
| Provider | Latest Models | Setup |
|----------|--------------|-------|
| **Anthropic** | `claude-opus-4-6`, `claude-sonnet-4-5-20250929` | `ANTHROPIC_API_KEY` env var |
| **OpenAI** | `gpt-5.3-codex`, `gpt-5.2-pro` | `OPENAI_API_KEY` env var |
| **Google** | `gemini-2.5-pro`, `gemini-2.5-flash` | `GEMINI_API_KEY` or `GOOGLE_API_KEY` env var |
| **Ollama** | `llama3.2`, `qwen2.5-coder:7b` | Running Ollama server at `localhost:11434` |
## Configuration
Create an `rlm_config.yaml` in your project directory to customize settings:
```yaml
name: my-project
models:
openai_api_key: null
openai_model: gpt-5.3-codex
default_model: gpt-5.3-codex
sandbox:
runtime: docker
superbox_profile: secure
superbox_auto_fallback: true
superbox_fallback_runtimes: [docker, daytona, e2b]
pure_rlm_backend: docker
pure_rlm_strict: true
pure_rlm_allow_unsafe_exec: false
rlm:
default_benchmark_preset: dspy_quick
benchmark_pack_paths: []
```
Or generate a full sample config:
```
/init
```
## Development Setup
```bash
git clone https://github.com/SuperagenticAI/rlm-code.git
cd rlm-code
uv sync --all-extras
uv run pytest
```
## Project Structure
```
rlm_code/
rlm/ # Core RLM engine (runner, environments, policies)
ui/ # Terminal UI (Textual-based TUI)
mcp/ # MCP server for tool integration
models/ # LLM provider adapters
sandbox/ # Sandboxed code execution
harness/ # Tool-using coding harness (/harness)
```
## Resources
Full docs: https://superagenticai.github.io/rlm-code/
## Contributing
See `CONTRIBUTING.md`.
## License
Apache-2.0
---
**Brought to You by [Superagentic AI](https://super-agentic.ai)**
| text/markdown | null | Shashi Jagtap <shashi@super-agentic.ai> | null | Shashi Jagtap <shashi@super-agentic.ai> | null | ai, claude, code, dspy, interactive, language-models, nlp | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Text Processing :: Linguistic"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"anyio",
"click",
"dspy",
"httpx",
"httpx-sse",
"jsonschema",
"mcp",
"packaging",
"pydantic",
"pyyaml",
"requests",
"rich",
"google-adk; extra == \"adk\"",
"google-genai; extra == \"adk\"",
"python-dotenv; extra == \"adk\"",
"anthropic; extra == \"anthropic\"",
"deepagents; extra == \"deepagents\"",
"hypothesis; extra == \"dev\"",
"mypy; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest-xdist; extra == \"dev\"",
"ruff; extra == \"dev\"",
"types-pyyaml; extra == \"dev\"",
"types-requests; extra == \"dev\"",
"mkdocs; extra == \"docs\"",
"mkdocs-material; extra == \"docs\"",
"mkdocs-minify-plugin; extra == \"docs\"",
"mkdocstrings[python]; extra == \"docs\"",
"deepagents; extra == \"frameworks\"",
"google-adk; extra == \"frameworks\"",
"google-genai; extra == \"frameworks\"",
"pydantic-ai; extra == \"frameworks\"",
"python-dotenv; extra == \"frameworks\"",
"google-genai; extra == \"gemini\"",
"anthropic; extra == \"llm-all\"",
"google-genai; extra == \"llm-all\"",
"openai; extra == \"llm-all\"",
"websockets; extra == \"mcp-ws\"",
"mlflow; extra == \"mlflow\"",
"openai; extra == \"openai\"",
"pydantic-ai; extra == \"pydantic\"",
"hypothesis; extra == \"test\"",
"pytest; extra == \"test\"",
"pytest-asyncio; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-xdist; extra == \"test\"",
"textual; extra == \"tui\""
] | [] | [] | [] | [
"Homepage, https://github.com/SuperagenticAI/rlm-code",
"Documentation, https://superagenticai.github.io/rlm-code/",
"Repository, https://github.com/SuperagenticAI/rlm-code",
"Bug Tracker, https://github.com/SuperagenticAI/rlm-code/issues",
"Changelog, https://github.com/SuperagenticAI/rlm-code/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.12.7 | 2026-02-20T18:38:33.552688 | rlm_code-0.1.6.tar.gz | 770,969 | 12/72/c4679f8f620c68bfbf640796285ae0d1dcffc60d2cc5f472a4592af33a98/rlm_code-0.1.6.tar.gz | source | sdist | null | false | 498a3c44ab99af5705ea30fa80741034 | d6617a388299aee524622beb27f99ec33a8093b94b666b1f6975226114aa0a59 | 1272c4679f8f620c68bfbf640796285ae0d1dcffc60d2cc5f472a4592af33a98 | Apache-2.0 | [
"LICENSE",
"NOTICE"
] | 188 |
2.4 | dbdragoness | 0.1.17 | A lightweight DB GUI module for SQL/NoSQL, like phpMyAdmin but Pythonic! | ## Features
- 🎯 Support for multiple databases: SQLite, MySQL, PostgreSQL, DuckDB, TinyDB, MongoDB
- 🎨 Modern React UI
- 🔒 Secure credential management with keyring
- 📊 Data visualization with charts
- 🔄 Import/Export capabilities
- ⚡ Fast and lightweight
## Installation
### Best and easiest method
```bash
pip install --upgrade dbdragoness
```
Note: Latest version is 0.1.17
### Development Setup (For Developers & Academic Use)
This option is recommended if you want to explore, modify, or study the source code.
1. Clone the Repository
```bash
git clone https://github.com/tech-dragoness/dbdragoness.git
cd dbdragoness
```
2. Create and Activate a Virtual Environment (Recommended)
```bash
python -m venv venv
```
Windows
```bash
venv\Scripts\activate
```
macOS / Linux
```bash
source venv/bin/activate
```
3. Install the Project in Editable Mode
Editable mode ensures that any changes you make to the source code are immediately reflected when running the tool.
```bash
pip install -e .
```
4. Run DBDragoness
```bash
dbdragoness
```
5. (Optional) Verify Installation
```bash
dbdragoness --help
```
## Quick Start
```bash
# Start the GUI
dbdragoness
# Open specific database
dbdragoness --type sql --db mydb
```
## Requirements
- Python 3.8+
- Node.js 16+ (for React UI build)
## License
MIT License
Copyright (c) 2026 Tulika Thampi. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
Credit Requirement: The original author, Tulika Thampi, and the original work must be credited in all copies, distributions, or derivative works. This credit should be visible in documentation, UI, or other appropriate places where the Software is used or presented.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| text/markdown | Tulika Thampi (Dragoness) | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"flask<3.0.0,>=2.3.0",
"click<9.0.0,>=8.1.0",
"sqlalchemy<3.0.0,>=2.0.0",
"pymysql<2.0.0,>=1.1.0",
"psycopg2-binary<3.0.0,>=2.9.0",
"duckdb-engine<1.0.0,>=0.9.0",
"tinydb<5.0.0,>=4.8.0",
"pymongo<5.0.0,>=4.6.0",
"keyring<25.0.0,>=24.0.0",
"cryptography<42.0.0,>=41.0.0",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/tech-dragoness/dbdragoness"
] | twine/6.2.0 CPython/3.12.6 | 2026-02-20T18:38:20.627316 | dbdragoness-0.1.17.tar.gz | 4,670,454 | 70/7a/bad4e12e3046a01f1c9a691a3a9ce4b067987bb61598030cf0e394c881d6/dbdragoness-0.1.17.tar.gz | source | sdist | null | false | 59610f88b9203829902aa2acc6c18faa | 4da4a0ea436241e64dd98c8d0ab44b04ceacc208ba02bd573c6f48c2c07fa508 | 707abad4e12e3046a01f1c9a691a3a9ce4b067987bb61598030cf0e394c881d6 | MIT | [
"LICENSE.txt"
] | 194 |
2.1 | delta-spark | 4.1.0 | Python APIs for using Delta Lake with Apache Spark | # Delta Lake
[Delta Lake](https://delta.io) is an open source storage layer that brings reliability to data lakes. Delta Lake provides ACID transactions, scalable metadata handling, and unifies streaming and batch data processing. Delta Lake runs on top of your existing data lake and is fully compatible with Apache Spark APIs.
This PyPI package contains the Python APIs for using Delta Lake with Apache Spark.
## Installation and usage
1. Install using `pip install delta-spark`
2. To use Delta Lake with Apache Spark, you have to set additional configurations when creating the SparkSession. See the online [project web page](https://docs.delta.io/latest/delta-intro.html) for details.
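A typical session setup looks like the following sketch, based on the Delta Lake quickstart (the app name and table path are placeholders; no `<test>` is given since it needs a running Spark environment):

```python
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

# Enable the Delta SQL extension and catalog on the session builder.
builder = (
    SparkSession.builder.appName("my-app")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)

# configure_spark_with_delta_pip adds the Delta Lake Maven packages
# needed when delta-spark was installed via pip.
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# The Delta format is now available:
spark.range(5).write.format("delta").mode("overwrite").save("/tmp/delta-table")
```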
## Documentation
This README contains only basic information about the pip-installed Delta Lake package. You can find the full documentation on the [project web page](https://docs.delta.io/latest/delta-intro.html).
| text/markdown | The Delta Lake Project Authors | delta-users@googlegroups.com | null | null | Apache-2.0 | delta.io | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3",
"Typing :: Typed"
] | [] | https://github.com/delta-io/delta/ | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Source, https://github.com/delta-io/delta",
"Documentation, https://docs.delta.io/latest/index.html",
"Issues, https://github.com/delta-io/delta/issues"
] | twine/3.8.0 pkginfo/1.10.0 readme-renderer/34.0 requests/2.27.1 requests-toolbelt/1.0.0 urllib3/1.26.20 tqdm/4.64.1 importlib-metadata/4.8.3 keyring/23.4.1 rfc3986/1.5.0 colorama/0.4.5 CPython/3.6.15 | 2026-02-20T18:37:59.800216 | delta_spark-4.1.0.tar.gz | 36,808 | 2f/09/d394015eb956c4475f6a949fb5fccedf7af19f97e981acbc81629e868a5e/delta_spark-4.1.0.tar.gz | source | sdist | null | false | 1a273a07460f607a3168154e7bb86a87 | 98f73c2744f972919e0472974467f85d157810b617341ebf586374d91b8eadc7 | 2f09d394015eb956c4475f6a949fb5fccedf7af19f97e981acbc81629e868a5e | null | [] | 83,030 |
2.4 | strix-agent | 0.8.1 | Open-source AI Hackers for your apps | <p align="center">
<a href="https://strix.ai/">
<img src="https://github.com/usestrix/.github/raw/main/imgs/cover.png" alt="Strix Banner" width="100%">
</a>
</p>
<div align="center">
# Strix
### Open-source AI hackers to find and fix your app’s vulnerabilities.
<br/>
<a href="https://docs.strix.ai"><img src="https://img.shields.io/badge/Docs-docs.strix.ai-2b9246?style=for-the-badge&logo=gitbook&logoColor=white" alt="Docs"></a>
<a href="https://strix.ai"><img src="https://img.shields.io/badge/Website-strix.ai-f0f0f0?style=for-the-badge&logoColor=000000" alt="Website"></a>
[](https://discord.gg/strix-ai)
<a href="https://deepwiki.com/usestrix/strix"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a>
<a href="https://github.com/usestrix/strix"><img src="https://img.shields.io/github/stars/usestrix/strix?style=flat-square" alt="GitHub Stars"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/License-Apache%202.0-3b82f6?style=flat-square" alt="License"></a>
<a href="https://pypi.org/project/strix-agent/"><img src="https://img.shields.io/pypi/v/strix-agent?style=flat-square" alt="PyPI Version"></a>
<a href="https://discord.gg/strix-ai"><img src="https://github.com/usestrix/.github/raw/main/imgs/Discord.png" height="40" alt="Join Discord"></a>
<a href="https://x.com/strix_ai"><img src="https://github.com/usestrix/.github/raw/main/imgs/X.png" height="40" alt="Follow on X"></a>
<a href="https://trendshift.io/repositories/15362" target="_blank"><img src="https://trendshift.io/api/badge/repositories/15362" alt="usestrix/strix | Trendshift" width="250" height="55"/></a>
</div>
> [!TIP]
> **New!** Strix integrates seamlessly with GitHub Actions and CI/CD pipelines. Automatically scan for vulnerabilities on every pull request and block insecure code before it reaches production!
---
## Strix Overview
Strix are autonomous AI agents that act just like real hackers - they run your code dynamically, find vulnerabilities, and validate them through actual proof-of-concepts. Built for developers and security teams who need fast, accurate security testing without the overhead of manual pentesting or the false positives of static analysis tools.
**Key Capabilities:**
- **Full hacker toolkit** out of the box
- **Teams of agents** that collaborate and scale
- **Real validation** with PoCs, not false positives
- **Developer‑first** CLI with actionable reports
- **Auto‑fix & reporting** to accelerate remediation
<br>
<div align="center">
<a href="https://strix.ai">
<img src=".github/screenshot.png" alt="Strix Demo" width="1000" style="border-radius: 16px;">
</a>
</div>
## Use Cases
- **Application Security Testing** - Detect and validate critical vulnerabilities in your applications
- **Rapid Penetration Testing** - Get penetration tests done in hours, not weeks, with compliance reports
- **Bug Bounty Automation** - Automate bug bounty research and generate PoCs for faster reporting
- **CI/CD Integration** - Run tests in CI/CD to block vulnerabilities before reaching production
## 🚀 Quick Start
**Prerequisites:**
- Docker (running)
- An LLM API key:
  - Any [supported provider](https://docs.strix.ai/llm-providers/overview) (OpenAI, Anthropic, Google, etc.)
  - Or [Strix Router](https://models.strix.ai) — single API key for multiple providers with $10 free credit on signup
### Installation & First Scan
```bash
# Install Strix
curl -sSL https://strix.ai/install | bash
# Or via pipx
pipx install strix-agent
# Configure your AI provider
export STRIX_LLM="openai/gpt-5" # or "strix/gpt-5" via Strix Router (https://models.strix.ai)
export LLM_API_KEY="your-api-key"
# Run your first security assessment
strix --target ./app-directory
```
> [!NOTE]
> First run automatically pulls the sandbox Docker image. Results are saved to `strix_runs/<run-name>`
---
## ✨ Features
### Agentic Security Tools
Strix agents come equipped with a comprehensive security testing toolkit:
- **Full HTTP Proxy** - Complete request/response manipulation and analysis
- **Browser Automation** - Multi-tab browser for testing XSS, CSRF, and auth flows
- **Terminal Environments** - Interactive shells for command execution and testing
- **Python Runtime** - Custom exploit development and validation
- **Reconnaissance** - Automated OSINT and attack surface mapping
- **Code Analysis** - Static and dynamic analysis capabilities
- **Knowledge Management** - Structured findings and attack documentation
### Comprehensive Vulnerability Detection
Strix can identify and validate a wide range of security vulnerabilities:
- **Access Control** - IDOR, privilege escalation, auth bypass
- **Injection Attacks** - SQL, NoSQL, command injection
- **Server-Side** - SSRF, XXE, deserialization flaws
- **Client-Side** - XSS, prototype pollution, DOM vulnerabilities
- **Business Logic** - Race conditions, workflow manipulation
- **Authentication** - JWT vulnerabilities, session management
- **Infrastructure** - Misconfigurations, exposed services
### Graph of Agents
Advanced multi-agent orchestration for comprehensive security testing:
- **Distributed Workflows** - Specialized agents for different attacks and assets
- **Scalable Testing** - Parallel execution for fast comprehensive coverage
- **Dynamic Coordination** - Agents collaborate and share discoveries
---
## Usage Examples
### Basic Usage
```bash
# Scan a local codebase
strix --target ./app-directory
# Security review of a GitHub repository
strix --target https://github.com/org/repo
# Black-box web application assessment
strix --target https://your-app.com
```
### Advanced Testing Scenarios
```bash
# Grey-box authenticated testing
strix --target https://your-app.com --instruction "Perform authenticated testing using credentials: user:pass"
# Multi-target testing (source code + deployed app)
strix -t https://github.com/org/app -t https://your-app.com
# Focused testing with custom instructions
strix --target api.your-app.com --instruction "Focus on business logic flaws and IDOR vulnerabilities"
# Provide detailed instructions through file (e.g., rules of engagement, scope, exclusions)
strix --target api.your-app.com --instruction-file ./instruction.md
```
### Headless Mode
Run Strix programmatically without the interactive UI using the `-n/--non-interactive` flag, which is ideal for servers and automated jobs. The CLI prints vulnerability findings in real time and a final report before exiting, and exits with a non-zero status code when vulnerabilities are found.
```bash
strix -n --target https://your-app.com
```
### CI/CD (GitHub Actions)
Strix can be added to your pipeline to run a security test on pull requests with a lightweight GitHub Actions workflow:
```yaml
name: strix-penetration-test
on:
  pull_request:
jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - name: Install Strix
        run: curl -sSL https://strix.ai/install | bash
      - name: Run Strix
        env:
          STRIX_LLM: ${{ secrets.STRIX_LLM }}
          LLM_API_KEY: ${{ secrets.LLM_API_KEY }}
        run: strix -n -t ./ --scan-mode quick
```
### Configuration
```bash
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-api-key"
# Optional
export LLM_API_BASE="your-api-base-url" # if using a local model, e.g. Ollama, LMStudio
export PERPLEXITY_API_KEY="your-api-key" # for search capabilities
export STRIX_REASONING_EFFORT="high" # control thinking effort (default: high, quick scan: medium)
```
> [!NOTE]
> Strix automatically saves your configuration to `~/.strix/cli-config.json`, so you don't have to re-enter it on every run.
**Recommended models for best results:**
- [OpenAI GPT-5](https://openai.com/api/) — `openai/gpt-5`
- [Anthropic Claude Sonnet 4.6](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-6`
- [Google Gemini 3 Pro Preview](https://cloud.google.com/vertex-ai) — `vertex_ai/gemini-3-pro-preview`
See the [LLM Providers documentation](https://docs.strix.ai/llm-providers/overview) for all supported providers including Vertex AI, Bedrock, Azure, and local models.
## Documentation
Full documentation is available at **[docs.strix.ai](https://docs.strix.ai)** — including detailed guides for usage, CI/CD integrations, skills, and advanced configuration.
## Contributing
We welcome contributions of code, docs, and new skills - check out our [Contributing Guide](https://docs.strix.ai/contributing) to get started or open a [pull request](https://github.com/usestrix/strix/pulls)/[issue](https://github.com/usestrix/strix/issues).
## Join Our Community
Have questions? Found a bug? Want to contribute? **[Join our Discord!](https://discord.gg/strix-ai)**
## Support the Project
**Love Strix?** Give us a ⭐ on GitHub!
## Acknowledgements
Strix builds on the incredible work of open-source projects like [LiteLLM](https://github.com/BerriAI/litellm), [Caido](https://github.com/caido/caido), [Nuclei](https://github.com/projectdiscovery/nuclei), [Playwright](https://github.com/microsoft/playwright), and [Textual](https://github.com/Textualize/textual). Huge thanks to their maintainers!
> [!WARNING]
> Only test apps you own or have permission to test. You are responsible for using Strix ethically and legally.
</div>
| text/markdown | Strix | hi@usestrix.com | null | null | Apache-2.0 | cybersecurity, security, vulnerability, scanner, pentest, agent, ai, cli | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Security"
] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"litellm[proxy]<1.82.0,>=1.81.1",
"tenacity<10.0.0,>=9.0.0",
"pydantic[email]<3.0.0,>=2.11.3",
"rich",
"docker<8.0.0,>=7.1.0",
"textual<5.0.0,>=4.0.0",
"xmltodict<0.14.0,>=0.13.0",
"requests<3.0.0,>=2.32.0",
"cvss<4.0,>=3.2",
"google-cloud-aiplatform>=1.38; extra == \"vertex\"",
"fastapi; extra == \"sandbox\"",
"uvicorn; extra == \"sandbox\"",
"ipython<10.0.0,>=9.3.0; extra == \"sandbox\"",
"openhands-aci<0.4.0,>=0.3.0; extra == \"sandbox\"",
"playwright<2.0.0,>=1.48.0; extra == \"sandbox\"",
"gql[requests]<4.0.0,>=3.5.3; extra == \"sandbox\"",
"pyte<0.9.0,>=0.8.1; extra == \"sandbox\"",
"libtmux<0.47.0,>=0.46.2; extra == \"sandbox\"",
"numpydoc<2.0.0,>=1.8.0; extra == \"sandbox\"",
"defusedxml<0.8.0,>=0.7.1"
] | [] | [] | [] | [] | poetry/2.2.1 CPython/3.13.10 Darwin/25.2.0 | 2026-02-20T18:37:42.726348 | strix_agent-0.8.1.tar.gz | 245,690 | c1/c7/f098bc692b3955b6ff0a21d3c8d76cc375775e900b5652a8a76892dc3b90/strix_agent-0.8.1.tar.gz | source | sdist | null | false | 56af5de04d8cf2bab363f0c9fdb948a8 | 7bfefdc67c8fcecc08ba3224ca42434c058a670b9eafe7e61a60469a8974063e | c1c7f098bc692b3955b6ff0a21d3c8d76cc375775e900b5652a8a76892dc3b90 | null | [] | 393 |
2.4 | kikusan | 0.22.5 | Search and download music from YouTube Music with lyrics | <div align="center">
# Kikusan
**Search, download and sync music from YouTube Music and other places (reddit, listenbrainz, billboard) with lyrics.**
[](https://github.com/dadav/kikusan/releases)
[](https://github.com/dadav/kikusan/blob/main/LICENSE)

</div>
## Features
- **Search & Download**: Search YouTube Music and download audio in OPUS/MP3/FLAC format
- **Playlist Support**: Download entire playlists from YouTube Music, YouTube, and Deezer
- **Quick Download**: Search and download first match with a single command
- **Automatic Lyrics**: Fetch and embed synchronized lyrics from lrclib.net (LRC format)
- **Web Interface**: Modern web UI with search, download, theme toggle, and format selection
- **Docker Support**: Easy deployment with Docker and docker-compose
- **Plugin System**: Extensible architecture for custom music sources
- **Scheduled Sync**: Automated playlist monitoring with cron scheduling
- **M3U Playlists**: Automatic playlist file generation for downloads
- **Hooks**: Run custom commands when events occur (e.g., import playlists to Navidrome)
- **Retroactive Tagging**: Add lyrics and ReplayGain tags to existing audio files without re-downloading
## Use Case
I use Navidrome as my music server. My music is stored on a NAS and mounted in the Navidrome container as read-only.
Kikusan syncs my YouTube Music playlists onto this shared mount and creates local M3U playlists. If Kikusan has a discovery playlist configured (`sync: true`), songs that have been removed from the upstream playlist are also removed from Navidrome, with some exceptions: they won't be removed if they are referenced by another playlist, starred in Navidrome, or in the `keep` playlist. Navidrome imports these playlists daily. I then use [symfonium](https://play.google.com/store/apps/details?id=app.symfonik.music.player) to access my music via the Subsonic API.
## Plugin System
Kikusan supports plugins for syncing music from various sources beyond standard playlists:
**Built-in Plugins:**
- **`listenbrainz`** - Weekly recommendations from listenbrainz.org
  - Required: `user` (listenbrainz username)
  - Optional: `recommendation_type` (weekly-exploration, weekly-jams)
- **`rss`** - Generic RSS/Atom feed parser for music podcasts, blogs, etc.
  - Required: `url` (RSS/Atom feed URL)
  - Optional: `artist_field`, `title_field`, `timeout`, `user_agent`
- **`reddit`** - Fetch songs from music subreddits (r/listentothis, r/Music, r/IndieHeads, etc.)
  - Required: `subreddit` (subreddit name)
  - Optional: `sort` (hot/new/top/rising), `time_filter`, `limit`, `min_score`
- **`billboard`** - Fetch songs from Billboard charts (hot-100, pop-songs, etc.)
  - Required: `chart_name` (e.g., 'hot-100', 'pop-songs')
  - Optional: `date` (YYYY-MM-DD), `year` (for year-end charts), `limit`
**Usage:**
```bash
# List available plugins
kikusan plugins list
# Run a plugin once
kikusan plugins run listenbrainz --config '{"user": "myuser"}'
kikusan plugins run reddit --config '{"subreddit": "listentothis", "limit": 25}'
kikusan plugins run billboard --config '{"chart_name": "hot-100", "limit": 50}'
# Schedule in cron.yaml
# See cron.example.yaml for configuration examples
```
**Creating Third-Party Plugins:**
See [`examples/third-party-plugin/`](examples/third-party-plugin/) for a complete example of creating your own plugin. Plugins are distributed as Python packages and automatically discovered via entry points.
## Installation
Run from git:
```bash
git clone https://github.com/dadav/kikusan
cd kikusan
uv sync
uv run kikusan --help
```
Install as uv tool:
```bash
uv tool install kikusan
kikusan --help
```
Or via [docker-compose](./docker-compose.yml).
## Usage
### CLI
```bash
# Search for music
kikusan search "Bohemian Rhapsody"
# Download by video ID
kikusan download bSnlKl_PoQU
# Download by URL
kikusan download --url "https://music.youtube.com/watch?v=bSnlKl_PoQU"
# Search and download first match
kikusan download --query "Bohemian Rhapsody Queen"
# Download entire playlist (YouTube Music, YouTube, or Deezer)
kikusan download --url "https://music.youtube.com/playlist?list=..."
kikusan download --url "https://www.deezer.com/playlist/..."
# Custom filename format
kikusan download bSnlKl_PoQU --filename "%(title)s"
# Options
kikusan download bSnlKl_PoQU --output ~/Music --format mp3
```
### Tag Existing Files
Add lyrics and ReplayGain tags to audio files you already have, without re-downloading:
```bash
# Tag all files in a directory (recursively)
kikusan tag /path/to/music
# Preview what would be done without making changes
kikusan tag --dry-run /path/to/music
# Only add lyrics (skip ReplayGain)
kikusan tag --no-replaygain /path/to/music
# Only add ReplayGain (skip lyrics)
kikusan tag --no-lyrics /path/to/music
```
**Features:**
- Recursively processes `.opus`, `.mp3`, `.flac` files
- Extracts metadata via mutagen (title, artist, album, duration)
- Fetches lyrics from lrclib.net using exact match, fuzzy search, and cleaned metadata retries
- Applies ReplayGain/R128 loudness normalization tags via rsgain
- Skips files that already have `.lrc` sidecar files (for lyrics)
- Skips files that already have ReplayGain tags (for ReplayGain)
- Non-fatal per-file errors with summary statistics
- Both lyrics and ReplayGain are enabled by default
**Requirements:**
- For ReplayGain: `rsgain` binary must be installed (included in Docker image)
### Web Interface
```bash
kikusan web
# Open http://localhost:8000
```
**Features:**
- Search YouTube Music with real-time results
- Download individual tracks with format selection (OPUS/MP3/FLAC)
- Dark/light theme toggle with automatic system preference detection
- View counts displayed for each track
- Responsive design for mobile and desktop
### Scheduled Sync (Cron)
Automatically monitor and sync playlists, plugins, and explore sources on a schedule:
```bash
# Run continuously with cron.yaml configuration
kikusan cron
# Run all syncs once and exit
kikusan cron --once
# Use custom config file
kikusan cron --config /path/to/cron.yaml
```
Create a `cron.yaml` file to configure:
- **Playlists**: YouTube Music, YouTube, or Deezer playlists
- **Plugins**: Listenbrainz, Reddit, Billboard, RSS feeds
- **Explore**: YouTube Music charts and mood/genre categories
- **Schedule**: Standard cron expressions (e.g., "0 9 \* \* \*" for daily at 9am)
- **Sync Mode**: Keep or delete files when removed from source
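A minimal playlist entry might look like the following sketch. The field names here are illustrative assumptions based on the options listed above; consult `cron.example.yaml` for the authoritative schema.

```yaml
# Hypothetical cron.yaml playlist section (field names are illustrative;
# see cron.example.yaml for the real schema).
playlists:
  my-mix:
    url: "https://music.youtube.com/playlist?list=..."
    sync: true # delete local files when removed from the source
    schedule: "0 9 * * *" # daily at 9am
```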
#### Explore Sources
Sync tracks from YouTube Music charts or mood/genre categories:
```yaml
explore:
  # Sync US music charts daily
  us-charts:
    type: charts
    country: US # ISO 3166-1 Alpha-2 code (ZZ = global)
    sync: true # Remove tracks that fall off the charts
    schedule: "0 6 * * *"
    limit: 10 # Optional: Only get top 10 songs from charts

  # Sync a mood/genre category weekly
  chill-vibes:
    type: mood
    params: "ggMPOg1uX1J" # Get params from: kikusan explore moods
    playlist_id: "RDCLAK5uy_..." # Optional: target specific playlist (get from explore mood-playlists)
    sync: false
    schedule: "0 12 * * 0"
Use `kikusan explore moods` to discover available mood/genre categories and their `params` values, and `kikusan explore charts --country XX` to preview chart contents.
See `cron.example.yaml` for detailed configuration examples.
### Notifications
Kikusan can send push notifications via [Gotify](https://gotify.net/) for scheduled sync operations:
- **Summary notifications only** - One notification per sync operation, not per track
- **Includes download/skip/fail counts** - See results at a glance
- **Optional** - Gracefully disabled if not configured
- **Non-blocking** - Notification failures don't stop downloads
**Setup:**
1. Install a Gotify server or use an existing instance
2. Create an application token in Gotify
3. Set environment variables:
```bash
export GOTIFY_URL="https://push.example.com"
export GOTIFY_TOKEN="your-app-token"
```
**Notifications are sent for:**
- Scheduled playlist syncs (via `kikusan cron`)
- Scheduled plugin syncs (via `kikusan cron`)
- Scheduled explore syncs (via `kikusan cron`)
Notifications are **not** sent for CLI operations or web UI downloads, as these are interactive and the user already sees the results.
### Navidrome Protection
Prevent deletion of songs during sync if they are starred or in a designated playlist in Navidrome:
**Features:**
- Protect songs starred/favorited in Navidrome (via Symfonium or other Subsonic clients)
- Protect songs in a designated "keep" playlist
- Real-time API checks during each sync operation
- Gracefully disabled if not configured
- Fails safe: keeps files if Navidrome is unreachable
**Setup:**
1. Configure environment variables:
```bash
export NAVIDROME_URL="https://music.example.com"
export NAVIDROME_USER="your-username"
export NAVIDROME_PASSWORD="your-password"
export NAVIDROME_KEEP_PLAYLIST="keep" # optional, defaults to "keep"
```
2. Star songs in your Subsonic client (Symfonium, DSub, etc.) or add them to your "keep" playlist
3. When kikusan syncs playlists with `sync: true`, protected songs won't be deleted even if removed from the source playlist
**Behavior:**
- Checks both starred songs AND songs in the keep playlist
- Protected files are skipped during deletion with detailed logging
- Works alongside existing cross-playlist/plugin reference protection
- Minimal performance impact (~3 API calls per sync operation)
**Example workflow:**
1. Sync YouTube Music playlist with `sync: true`
2. Song gets removed from YouTube Music playlist
3. You've starred the song in Symfonium (synced to Navidrome)
4. Kikusan detects the star and keeps the file on disk
5. File remains available in Navidrome/Symfonium
### Hooks
Hooks allow you to run custom commands when certain events occur during sync operations. This is useful for integrating with external systems like Navidrome.
**Supported Events:**
- `playlist_updated`: Triggered when an M3U playlist is created or updated
- `sync_completed`: Triggered after every sync operation (success or failure)
**Configuration:**
Add a `hooks` section to your `cron.yaml`:
```yaml
hooks:
  # Import playlist to Navidrome when updated
  - event: playlist_updated
    command: |
      NAVIDROME_TOKEN=$(curl -s -X POST \
        -H "Content-Type: application/json" \
        -d "{\"username\": \"${NAVIDROME_USER}\", \"password\": \"${NAVIDROME_PASSWORD}\"}" \
        "${NAVIDROME_URL}/auth/login" | jq -r '.token')
      curl -X POST \
        -H "Content-Type: audio/x-mpegurl" \
        -H "X-ND-Authorization: Bearer ${NAVIDROME_TOKEN}" \
        --data-binary @"${KIKUSAN_PLAYLIST_PATH}" \
        "${NAVIDROME_URL}/api/playlist"
    timeout: 30 # seconds (default: 60)

  # Log sync results
  - event: sync_completed
    command: echo "Sync: ${KIKUSAN_PLAYLIST_NAME}" >> /var/log/sync.log
    run_on_error: true # Run even if sync failed (default: false)
**Environment Variables:**
Hooks receive context via environment variables:
| Variable | Description |
| ----------------------- | --------------------------------------------- |
| `KIKUSAN_EVENT` | Event type (playlist_updated, sync_completed) |
| `KIKUSAN_PLAYLIST_NAME` | Name of the playlist/plugin |
| `KIKUSAN_PLAYLIST_PATH` | Absolute path to the M3U file (if exists) |
| `KIKUSAN_SYNC_TYPE` | Type: "playlist", "plugin", or "explore" |
| `KIKUSAN_DOWNLOADED` | Number of tracks downloaded |
| `KIKUSAN_SKIPPED` | Number of tracks skipped |
| `KIKUSAN_DELETED` | Number of tracks deleted |
| `KIKUSAN_FAILED` | Number of tracks that failed |
| `KIKUSAN_SUCCESS` | "true" or "false" |
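A hook body is plain shell run with these variables set. The sketch below builds a one-line summary from them; the `${VAR:-...}` fallback values are only so the snippet runs standalone, since during a real sync kikusan sets the variables itself.

```bash
# Hypothetical sync_completed hook body using the KIKUSAN_* variables
# documented above. The ${VAR:-...} defaults are purely for standalone
# demonstration; kikusan provides real values at hook time.
summary="${KIKUSAN_PLAYLIST_NAME:-demo}: downloaded=${KIKUSAN_DOWNLOADED:-0} failed=${KIKUSAN_FAILED:-0} success=${KIKUSAN_SUCCESS:-true}"
echo "$summary"
```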
**Navidrome Integration Example:**
To automatically import playlists to Navidrome using its [playlist import API](https://github.com/navidrome/navidrome/pull/2273):
1. Set environment variables (these are already used for Navidrome Protection):
```bash
export NAVIDROME_URL="https://music.example.com"
export NAVIDROME_USER="your-username"
export NAVIDROME_PASSWORD="your-password"
```
2. Add hook to `cron.yaml`:
```yaml
hooks:
  - event: playlist_updated
    command: |
      NAVIDROME_TOKEN=$(curl -s -X POST \
        -H "Content-Type: application/json" \
        -d "{\"username\": \"${NAVIDROME_USER}\", \"password\": \"${NAVIDROME_PASSWORD}\"}" \
        "${NAVIDROME_URL}/auth/login" | jq -r '.token')
      curl -X POST \
        -H "Content-Type: audio/x-mpegurl" \
        -H "X-ND-Authorization: Bearer ${NAVIDROME_TOKEN}" \
        --data-binary @"${KIKUSAN_PLAYLIST_PATH}" \
        "${NAVIDROME_URL}/api/playlist"
Note: This requires `jq` to be installed for parsing the JSON response.
### Docker
```bash
docker compose up -d
# Open http://localhost:8000
```
## Configuration
### Environment Variables
| Variable | Default | Description |
| ------------------------------------ | --------------------------------- | --------------------------------------------------------------- |
| `KIKUSAN_DOWNLOAD_DIR` | `./downloads` | Download directory |
| `KIKUSAN_AUDIO_FORMAT` | `opus` | Audio format (opus, mp3, flac) |
| `KIKUSAN_FILENAME_TEMPLATE` | `%(artist,uploader)s - %(title)s` | Filename template (yt-dlp format) |
| `KIKUSAN_ORGANIZATION_MODE` | `flat` | File organization mode (flat, album) |
| `KIKUSAN_USE_PRIMARY_ARTIST` | `false` | Use primary artist for folders (true, false) |
| `KIKUSAN_WEB_PORT` | `8000` | Web server port |
| `KIKUSAN_WEB_PLAYLIST` | `None` | M3U playlist name for web downloads (optional) |
| `KIKUSAN_CORS_ORIGINS` | `*` | CORS allowed origins (comma-separated) |
| `KIKUSAN_COOKIE_MODE` | `auto` | Cookie usage: auto, always, or never |
| `KIKUSAN_COOKIE_RETRY_DELAY` | `1.0` | Delay in seconds before retrying with cookies |
| `KIKUSAN_LOG_COOKIE_USAGE` | `true` | Log cookie usage statistics (true, false) |
| `GOTIFY_URL` | `None` | Gotify server URL for notifications (optional) |
| `GOTIFY_TOKEN` | `None` | Gotify application token (optional) |
| `NAVIDROME_URL` | `None` | Navidrome server URL for protection (optional) |
| `NAVIDROME_USER` | `None` | Navidrome username (optional) |
| `NAVIDROME_PASSWORD` | `None` | Navidrome password (optional) |
| `NAVIDROME_KEEP_PLAYLIST` | `keep` | Playlist name for protection (optional) |
| `YT_DLP_COOKIE_FILE` | `None` | Path to cookies.txt file for yt-dlp (optional) |
| `KIKUSAN_MULTI_USER` | `false` | Enable per-user M3U playlists via `Remote-User` header |
| `KIKUSAN_UNAVAILABLE_COOLDOWN_HOURS` | `168` | Hours to wait before retrying unavailable videos (0 = disabled) |
### Cookie Authentication
Kikusan supports two methods for providing cookies to yt-dlp:
1. **Web UI Upload** (Recommended):
- Open the web UI
- Click the settings icon (⚙️) in the header
- Upload your cookies.txt file
- The file is stored securely at `.kikusan/cookies.txt`
2. **Environment Variable**:
```bash
export YT_DLP_COOKIE_FILE=/path/to/cookies.txt
```
**Priority**: Web-uploaded cookies take precedence over environment variable.
**Exporting Cookies**:
- Chrome/Edge: Install "Get cookies.txt LOCALLY" extension
- Firefox: Install "cookies.txt" extension
- See [yt-dlp FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp) for detailed instructions
### File Organization
Kikusan supports two file organization modes:
#### Flat Mode (Default)
All files stored in the download directory with the filename template:
```
downloads/
├── Queen - Bohemian Rhapsody.opus
├── Pink Floyd - Comfortably Numb.opus
└── ...
```
#### Album Mode
Files organized by artist and album with automatic metadata extraction:
```
downloads/
├── Queen/
│   ├── 1975 - A Night at the Opera/
│   │   ├── 01 - Death on Two Legs.opus
│   │   ├── 11 - Bohemian Rhapsody.opus
│   │   └── 12 - God Save the Queen.opus
│   └── 1991 - Innuendo/
│       ├── 01 - Innuendo.opus
│       └── 06 - The Show Must Go On.opus
└── Pink Floyd/
    └── 1979 - The Wall/
        ├── 01 - In the Flesh.opus
        └── 26 - Outside the Wall.opus
```
**Enable album mode:**
```bash
export KIKUSAN_ORGANIZATION_MODE=album
```
**Behavior:**
- **Full metadata**: `Artist/Year - Album/NN - Track.ext`
- **Missing track number**: `Artist/Year - Album/Track.ext`
- **Missing album**: `Artist/Track.ext`
- **Path sanitization**: Invalid filesystem characters are automatically removed
**Multi-Artist Handling:**
By default, album mode uses the full artist string from metadata:
- `Queen feat. David Bowie` → folder: `Queen feat. David Bowie/`
- `Artist1, Artist2` → folder: `Artist1, Artist2/`
To use only the primary artist for cleaner folder organization:
```bash
export KIKUSAN_USE_PRIMARY_ARTIST=true
```
This extracts the main artist (before separators) for folder names:
- `Queen feat. David Bowie` → folder: `Queen/`
- `Artist1, Artist2` → folder: `Artist1/`
- `Artist & Guest` → folder: `Artist/`
Supported separators (in priority order): `feat.`, `ft.`, `featuring`, `with`, `&`, `, `
The full artist metadata is still preserved in the audio file tags.
**Notes:**
- Album mode is opt-in; flat mode remains the default for backward compatibility
- Primary artist extraction is optional (disabled by default)
- Existing files are not reorganized when switching modes
- New downloads will use the selected organization mode
- File existence checking works in both modes to prevent duplicates
### State Files & Playlists
Kikusan tracks downloaded files and generates M3U playlists automatically:
- **State Files**: Stored in `{download_dir}/.kikusan/state/` (for playlists) and `{download_dir}/.kikusan/plugin_state/` (for plugins)
- **M3U Playlists**: Generated at `{download_dir}/{name}.m3u` for each sync configuration
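For a hypothetical sync configuration named `my-mix`, the paths above would produce a layout like this (the playlist name is illustrative):

```
downloads/
├── .kikusan/
│   ├── state/         # per-playlist state files
│   └── plugin_state/  # per-plugin state files
└── my-mix.m3u         # generated M3U playlist
```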
### Unavailable Video Cooldown
Kikusan automatically prevents repeated failed downloads of unavailable videos to reduce wasted bandwidth and API requests.
**How it works:**
When a video returns a "Video unavailable" error (distinct from authentication or network errors), Kikusan records the video ID with a timestamp in `{download_dir}/.kikusan/unavailable.json`. The video will be skipped during subsequent sync operations until the cooldown period expires.
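The cooldown window is controlled by the `KIKUSAN_UNAVAILABLE_COOLDOWN_HOURS` environment variable (or the `--unavailable-cooldown` global option) described in the configuration table above; for example, to turn it off entirely:

```bash
# Disable the unavailable-video cooldown (0 = disabled);
# any positive number of hours shortens or extends the 168-hour default.
export KIKUSAN_UNAVAILABLE_COOLDOWN_HOURS=0
```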
### Filename Length Safety
Kikusan automatically truncates long filenames to prevent filesystem errors while preserving readability.
## CLI Reference
This section documents all CLI commands and their options.
### Global Options
These options apply to all commands:
| Option | Env Variable | Description |
| ------------------------ | --------------------------------------- | ------------------------------------------------------------------------------- |
| `--cookie-mode` | `KIKUSAN_COOKIE_MODE` | Cookie usage: `auto` (retry on auth errors), `always`, `never`. Default: `auto` |
| `--cookie-retry-delay` | `KIKUSAN_COOKIE_RETRY_DELAY` | Delay in seconds before retrying with cookies. Default: `1.0` |
| `--no-log-cookie-usage` | (inverse of `KIKUSAN_LOG_COOKIE_USAGE`) | Disable logging of cookie usage statistics |
| `--unavailable-cooldown` | `KIKUSAN_UNAVAILABLE_COOLDOWN_HOURS` | Hours to wait before retrying unavailable videos (0 = disabled). Default: `168` |
| `--version` | - | Show version and exit |
### kikusan search
Search for music on YouTube Music.
```bash
kikusan search "query" [OPTIONS]
```
| Option | Description |
| ------------- | --------------------------------------- |
| `-l, --limit` | Maximum number of results (default: 10) |
### kikusan download
Download a track by video ID, URL, or search query.
```bash
kikusan download [VIDEO_ID] [OPTIONS]
```
| Option | Env Variable | Description |
| ---------------------------------------------- | ---------------------------- | --------------------------------------------------------------------- |
| `-u, --url` | - | YouTube, YouTube Music, or Deezer URL |
| `-q, --query` | - | Search query (downloads first match) |
| `-o, --output` | `KIKUSAN_DOWNLOAD_DIR` | Output directory |
| `-f, --format` | `KIKUSAN_AUDIO_FORMAT` | Audio format: `opus`, `mp3`, `flac`. Default: `opus` |
| `-n, --filename` | `KIKUSAN_FILENAME_TEMPLATE` | Filename template (yt-dlp format) |
| `--no-lyrics` | - | Skip fetching lyrics |
| `-p, --add-to-playlist` | - | Add downloaded track(s) to M3U playlist |
| `--organization-mode` | `KIKUSAN_ORGANIZATION_MODE` | File organization: `flat` or `album`. Default: `flat` |
| `--use-primary-artist/--no-use-primary-artist` | `KIKUSAN_USE_PRIMARY_ARTIST` | Use only primary artist for folder names in album mode |
| `--replaygain/--no-replaygain` | `KIKUSAN_REPLAYGAIN` | Apply ReplayGain/R128 tags via rsgain. Default: enabled when flag set |
### kikusan tag
Tag existing audio files with lyrics and ReplayGain (no re-download).
```bash
kikusan tag DIRECTORY [OPTIONS]
```
| Option | Description |
| ------------------------------ | ------------------------------------------------------- |
| `--lyrics/--no-lyrics` | Fetch and save lyrics from lrclib.net. Default: enabled |
| `--replaygain/--no-replaygain` | Apply ReplayGain/R128 tags via rsgain. Default: enabled |
| `--dry-run` | Preview what would be done without making changes |
**Notes:**
- Recursively processes `.opus`, `.mp3`, `.flac` files in the specified directory
- Skips files that already have `.lrc` sidecar files (for lyrics)
- Non-fatal errors: continues processing remaining files and reports summary statistics
- Requires `rsgain` binary for ReplayGain support (included in Docker image)
### kikusan web
Start the web interface.
```bash
kikusan web [OPTIONS]
```
| Option | Env Variable | Description |
| ---------------------------------------------- | ---------------------------- | ----------------------------------------------------------- |
| `--host` | - | Host to bind to. Default: `0.0.0.0` |
| `-p, --port` | `KIKUSAN_WEB_PORT` | Port to listen on. Default: `8000` |
| `-o, --output` | `KIKUSAN_DOWNLOAD_DIR` | Override download directory |
| `--cors-origins` | `KIKUSAN_CORS_ORIGINS` | CORS allowed origins (comma-separated or `*`). Default: `*` |
| `--web-playlist` | `KIKUSAN_WEB_PLAYLIST` | M3U playlist name for web downloads (optional) |
| `--multi-user/--no-multi-user` | `KIKUSAN_MULTI_USER` | Per-user playlists via `Remote-User` header. Default: off |
| `--organization-mode` | `KIKUSAN_ORGANIZATION_MODE` | File organization: `flat` or `album`. Default: `flat` |
| `--use-primary-artist/--no-use-primary-artist` | `KIKUSAN_USE_PRIMARY_ARTIST` | Use only primary artist for folder names in album mode |
### kikusan cron
Run continuous sync based on cron.yaml (playlists, plugins, and explore sources).
```bash
kikusan cron [OPTIONS]
```
| Option | Env Variable | Description |
| ---------------------------------------------- | ---------------------------- | ------------------------------------------------------ |
| `-c, --config` | - | Path to cron configuration file. Default: `cron.yaml` |
| `-o, --output` | `KIKUSAN_DOWNLOAD_DIR` | Override download directory |
| `--once` | - | Run all sync jobs once and exit (skip scheduling) |
| `-f, --format` | `KIKUSAN_AUDIO_FORMAT` | Audio format: `opus`, `mp3`, `flac`. Default: `opus` |
| `--organization-mode` | `KIKUSAN_ORGANIZATION_MODE` | File organization: `flat` or `album`. Default: `flat` |
| `--use-primary-artist/--no-use-primary-artist` | `KIKUSAN_USE_PRIMARY_ARTIST` | Use only primary artist for folder names in album mode |
### kikusan plugins list
List all available plugins.
```bash
kikusan plugins list
```
No options.
### kikusan plugins run
Run a plugin sync once (without cron.yaml).
```bash
kikusan plugins run PLUGIN_NAME --config '{"key": "value"}' [OPTIONS]
```
| Option | Env Variable | Description |
| ---------------------------------------------- | ---------------------------- | ------------------------------------------------------ |
| `-c, --config` | - | Plugin config as JSON string (required) |
| `-o, --output` | `KIKUSAN_DOWNLOAD_DIR` | Download directory |
| `-f, --format` | `KIKUSAN_AUDIO_FORMAT` | Audio format: `opus`, `mp3`, `flac`. Default: `opus` |
| `--organization-mode` | `KIKUSAN_ORGANIZATION_MODE` | File organization: `flat` or `album`. Default: `flat` |
| `--use-primary-artist/--no-use-primary-artist` | `KIKUSAN_USE_PRIMARY_ARTIST` | Use only primary artist for folder names in album mode |
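For intuition, here is how the two organization modes differ in where a track ends up. This is only an illustrative sketch — the function, metadata fields, and file extension are hypothetical, not kikusan's actual internals:

```python
from pathlib import Path

def target_path(download_dir: str, title: str, artists: list[str],
                album: str, mode: str = "flat",
                use_primary_artist: bool = False) -> Path:
    """Hypothetical sketch of `flat` vs. `album` file organization."""
    base = Path(download_dir)
    if mode == "flat":
        # flat mode: every track lands directly in the download directory
        return base / f"{title}.opus"
    # album mode: group tracks into artist/album folders
    folder_artist = artists[0] if use_primary_artist else ", ".join(artists)
    return base / folder_artist / album / f"{title}.opus"
```

In `album` mode with `--use-primary-artist`, a collaboration track is filed under the first artist's folder only, instead of a joint "A, B" folder.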
## Authentication
kikusan does not use any kind of authentication. If you need to secure it, I suggest using **Caddy** with **Authelia**. This Caddy config works for me:
```Caddy
(authelia_forwarder) {
forward_auth http://192.168.1.10:9091 {
uri /api/authz/forward-auth
copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
}
}
kikusan.foobar.test {
import authelia_forwarder
reverse_proxy http://192.168.1.11:8007
}
```
### Multi-User Playlists
When running behind a reverse proxy with SSO (e.g. Authelia), kikusan can create separate M3U playlists per user by reading the `Remote-User` header. Each user's playlist is prefixed with their username (e.g. `alice-webplaylist.m3u`).
```bash
kikusan web --web-playlist webplaylist --multi-user
```
If the header is absent (e.g. direct access without the proxy), the shared playlist is used as a fallback.
## Requirements
- Python 3.12+
- ffmpeg (for audio processing)
## Disclaimer
Kikusan is intended for **private, personal use only**.
It must not be used for commercial purposes or in any way that violates copyright laws.
Users are responsible for ensuring their usage complies with applicable laws and YouTube's terms of service.
The developer does not condone copyright infringement and is not liable for misuse of this tool.
## LICENSE
[MIT](./LICENSE)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"apscheduler>=3.10.4",
"beautifulsoup4>=4.12.0",
"billboard-py>=7.1.0",
"croniter>=1.3.0",
"fastapi[standard]>=0.115.0",
"httpx>=0.27.0",
"mutagen>=1.47.0",
"pyyaml>=6.0.1",
"typer>=0.15.0",
"yt-dlp>=2025.12.8",
"ytmusicapi>=1.8.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:37:37.238093 | kikusan-0.22.5.tar.gz | 318,775 | 41/a8/0f3d6d0eb4b87a84debd7a6041baef84b15548f63bd6b7806a8c2d3ec448/kikusan-0.22.5.tar.gz | source | sdist | null | false | 1ae26ddb94bcb58e2a4d0e0fce5b24bc | e03347b6a50bd74d99439a676fd3623ae57b082d468d2c37ed909f8825ce5651 | 41a80f3d6d0eb4b87a84debd7a6041baef84b15548f63bd6b7806a8c2d3ec448 | null | [
"LICENSE"
] | 206 |
2.4 | scrapebadger | 0.1.10 | Official Python SDK for ScrapeBadger - Async web scraping APIs for Twitter and more | <p align="center">
<img src="https://scrapebadger.com/logo-dark.png" alt="ScrapeBadger" width="400">
</p>
<h1 align="center">ScrapeBadger Python SDK</h1>
<p align="center">
<a href="https://pypi.org/project/scrapebadger/"><img src="https://img.shields.io/pypi/v/scrapebadger.svg" alt="PyPI version"></a>
<a href="https://pypi.org/project/scrapebadger/"><img src="https://img.shields.io/pypi/pyversions/scrapebadger.svg" alt="Python versions"></a>
<a href="https://github.com/scrape-badger/scrapebadger-python/blob/main/LICENSE"><img src="https://img.shields.io/pypi/l/scrapebadger.svg" alt="License"></a>
<a href="https://github.com/scrape-badger/scrapebadger-python/actions/workflows/test.yml"><img src="https://github.com/scrape-badger/scrapebadger-python/actions/workflows/test.yml/badge.svg" alt="Tests"></a>
<a href="https://codecov.io/gh/scrape-badger/scrapebadger-python"><img src="https://codecov.io/gh/scrape-badger/scrapebadger-python/branch/main/graph/badge.svg" alt="Coverage"></a>
<a href="https://github.com/astral-sh/ruff"><img src="https://img.shields.io/badge/code%20style-ruff-000000.svg" alt="Code style: ruff"></a>
<a href="https://mypy-lang.org/"><img src="https://img.shields.io/badge/type%20checked-mypy-blue.svg" alt="Type checked: mypy"></a>
</p>
The official Python SDK for [ScrapeBadger](https://scrapebadger.com) - async web scraping APIs for Twitter and more.
## Features
- **Async-first design** - Built with `asyncio` for high-performance concurrent scraping
- **Type-safe** - Full type hints and Pydantic models for all API responses
- **Automatic pagination** - Iterator methods for seamless pagination through large datasets
- **Retry logic** - Built-in exponential backoff for transient errors
- **Comprehensive coverage** - Access to 37+ Twitter endpoints (tweets, users, lists, communities, trends, geo)
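The retry behaviour is the usual exponential-backoff pattern. A rough, stdlib-only sketch of the idea (not the SDK's actual implementation; the parameter names merely echo the configuration options shown later in this README):

```python
import asyncio
import random

async def with_backoff(call, max_retries: int = 3, base_delay: float = 0.5,
                       retry_on_status: tuple = (502, 503, 504)):
    """Retry an async call on transient HTTP status codes, doubling
    the delay each attempt and adding a little jitter."""
    for attempt in range(max_retries + 1):
        status, result = await call()
        if status not in retry_on_status or attempt == max_retries:
            return status, result
        await asyncio.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```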
## Installation
```bash
pip install scrapebadger
```
Or with [uv](https://github.com/astral-sh/uv):
```bash
uv add scrapebadger
```
## Quick Start
```python
import asyncio
from scrapebadger import ScrapeBadger
async def main():
async with ScrapeBadger(api_key="your-api-key") as client:
# Get a user profile
user = await client.twitter.users.get_by_username("elonmusk")
print(f"{user.name} has {user.followers_count:,} followers")
# Search tweets
tweets = await client.twitter.tweets.search("python programming")
for tweet in tweets.data:
print(f"@{tweet.username}: {tweet.text[:100]}...")
asyncio.run(main())
```
## Authentication
Get your API key from [scrapebadger.com](https://scrapebadger.com) and pass it to the client:
```python
from scrapebadger import ScrapeBadger
client = ScrapeBadger(api_key="sb_live_xxxxxxxxxxxxx")
```
You can also set the `SCRAPEBADGER_API_KEY` environment variable:
```bash
export SCRAPEBADGER_API_KEY="sb_live_xxxxxxxxxxxxx"
```
## Usage Examples
### Twitter Users
```python
async with ScrapeBadger(api_key="your-key") as client:
# Get user by username
user = await client.twitter.users.get_by_username("elonmusk")
print(f"{user.name} (@{user.username})")
print(f"Followers: {user.followers_count:,}")
print(f"Following: {user.following_count:,}")
print(f"Bio: {user.description}")
# Get user by ID
user = await client.twitter.users.get_by_id("44196397")
# Get extended "About" information
about = await client.twitter.users.get_about("elonmusk")
print(f"Account based in: {about.account_based_in}")
print(f"Username changes: {about.username_changes}")
```
### Twitter Tweets
```python
async with ScrapeBadger(api_key="your-key") as client:
# Get a single tweet
tweet = await client.twitter.tweets.get_by_id("1234567890")
print(f"@{tweet.username}: {tweet.text}")
print(f"Likes: {tweet.favorite_count:,}, Retweets: {tweet.retweet_count:,}")
# Get multiple tweets
tweets = await client.twitter.tweets.get_by_ids([
"1234567890",
"0987654321"
])
# Search tweets
from scrapebadger.twitter import QueryType
results = await client.twitter.tweets.search(
"python programming",
query_type=QueryType.LATEST # TOP, LATEST, or MEDIA
)
# Get user's timeline
tweets = await client.twitter.tweets.get_user_tweets("elonmusk")
```
### Automatic Pagination
All paginated endpoints support both manual pagination and automatic iteration:
```python
async with ScrapeBadger(api_key="your-key") as client:
# Manual pagination
followers = await client.twitter.users.get_followers("elonmusk")
for user in followers.data:
print(f"@{user.username}")
if followers.has_more:
more = await client.twitter.users.get_followers(
"elonmusk",
cursor=followers.next_cursor
)
# Automatic pagination with async iterator
async for follower in client.twitter.users.get_followers_all(
"elonmusk",
max_items=1000 # Optional limit
):
print(f"@{follower.username}")
# Collect all results into a list
all_followers = [
user async for user in client.twitter.users.get_followers_all(
"elonmusk",
max_pages=10
)
]
```
### Twitter Lists
```python
async with ScrapeBadger(api_key="your-key") as client:
# Search for lists
lists = await client.twitter.lists.search("tech leaders")
for lst in lists.data:
print(f"{lst.name}: {lst.member_count} members")
# Get list details
lst = await client.twitter.lists.get_detail("123456")
# Get list tweets
tweets = await client.twitter.lists.get_tweets("123456")
# Get list members
members = await client.twitter.lists.get_members("123456")
```
### Twitter Communities
```python
async with ScrapeBadger(api_key="your-key") as client:
from scrapebadger.twitter import CommunityTweetType
# Search communities
communities = await client.twitter.communities.search("python developers")
# Get community details
community = await client.twitter.communities.get_detail("123456")
print(f"{community.name}: {community.member_count:,} members")
print(f"Rules: {len(community.rules or [])}")
# Get community tweets
tweets = await client.twitter.communities.get_tweets(
"123456",
tweet_type=CommunityTweetType.LATEST
)
# Get members
members = await client.twitter.communities.get_members("123456")
```
### Trending Topics
```python
async with ScrapeBadger(api_key="your-key") as client:
from scrapebadger.twitter import TrendCategory
# Get global trends
trends = await client.twitter.trends.get_trends()
for trend in trends.data:
count = f"{trend.tweet_count:,}" if trend.tweet_count else "N/A"
print(f"{trend.name}: {count} tweets")
# Get trends by category
news = await client.twitter.trends.get_trends(category=TrendCategory.NEWS)
sports = await client.twitter.trends.get_trends(category=TrendCategory.SPORTS)
# Get trends for a specific location (WOEID)
us_trends = await client.twitter.trends.get_place_trends(23424977) # US
print(f"Trends in {us_trends.name}:")
for trend in us_trends.trends:
print(f" - {trend.name}")
# Get available trend locations
locations = await client.twitter.trends.get_available_locations()
us_cities = [loc for loc in locations.data if loc.country_code == "US"]
```
### Geographic Places
```python
async with ScrapeBadger(api_key="your-key") as client:
# Search places by name
places = await client.twitter.geo.search(query="San Francisco")
for place in places.data:
print(f"{place.full_name} ({place.place_type})")
# Search by coordinates
places = await client.twitter.geo.search(
lat=37.7749,
long=-122.4194,
granularity="city"
)
# Get place details
place = await client.twitter.geo.get_detail("5a110d312052166f")
```
## Error Handling
The SDK provides specific exception types for different error scenarios:
```python
from scrapebadger import (
ScrapeBadger,
ScrapeBadgerError,
AuthenticationError,
RateLimitError,
InsufficientCreditsError,
NotFoundError,
ValidationError,
ServerError,
)
async with ScrapeBadger(api_key="your-key") as client:
try:
user = await client.twitter.users.get_by_username("elonmusk")
except AuthenticationError:
print("Invalid API key")
except RateLimitError as e:
print(f"Rate limited. Retry after {e.retry_after} seconds")
print(f"Limit: {e.limit}, Remaining: {e.remaining}")
except InsufficientCreditsError:
print("Out of credits! Purchase more at scrapebadger.com")
except NotFoundError:
print("User not found")
except ValidationError as e:
print(f"Invalid parameters: {e}")
except ServerError:
print("Server error, try again later")
except ScrapeBadgerError as e:
print(f"API error: {e}")
```
## Configuration
### Custom Timeout and Retries
```python
from scrapebadger import ScrapeBadger
client = ScrapeBadger(
api_key="your-key",
timeout=120.0, # Request timeout in seconds (default: 300)
max_retries=5, # Retry attempts (default: 3)
)
```
### Advanced Configuration
```python
from scrapebadger import ScrapeBadger
from scrapebadger._internal import ClientConfig
config = ClientConfig(
api_key="your-key",
base_url="https://scrapebadger.com",
timeout=300.0,
connect_timeout=10.0,
max_retries=3,
retry_on_status=(502, 503, 504),
headers={"X-Custom-Header": "value"},
)
client = ScrapeBadger(config=config)
```
## API Reference
### Twitter Endpoints
| Category | Methods |
|----------|---------|
| **Tweets** | `get_by_id`, `get_by_ids`, `search`, `search_all`, `get_user_tweets`, `get_user_tweets_all`, `get_replies`, `get_retweeters`, `get_favoriters`, `get_similar` |
| **Users** | `get_by_id`, `get_by_username`, `get_about`, `search`, `search_all`, `get_followers`, `get_followers_all`, `get_following`, `get_following_all`, `get_follower_ids`, `get_following_ids`, `get_latest_followers`, `get_latest_following`, `get_verified_followers`, `get_followers_you_know`, `get_subscriptions`, `get_highlights` |
| **Lists** | `get_detail`, `search`, `get_tweets`, `get_tweets_all`, `get_members`, `get_members_all`, `get_subscribers`, `get_my_lists` |
| **Communities** | `get_detail`, `search`, `get_tweets`, `get_tweets_all`, `get_members`, `get_moderators`, `search_tweets`, `get_timeline` |
| **Trends** | `get_trends`, `get_place_trends`, `get_available_locations` |
| **Geo** | `get_detail`, `search` |
### Response Models
All responses use strongly-typed Pydantic models:
- `Tweet` - Tweet data with text, metrics, media, polls, etc.
- `User` - User profile with bio, metrics, verification status
- `UserAbout` - Extended user information
- `List` - Twitter list details
- `Community` - Community with rules and admin info
- `Trend` - Trending topic
- `Place` - Geographic place
- `PaginatedResponse[T]` - Wrapper for paginated results
See the [full API documentation](https://scrapebadger.com/docs) for complete details.
## Development
### Setup
```bash
# Clone the repository
git clone https://github.com/scrape-badger/scrapebadger-python.git
cd scrapebadger-python
# Install dependencies with uv
uv sync --dev
# Install pre-commit hooks
uv run pre-commit install
```
### Running Tests
```bash
# Run all tests
uv run pytest
# Run with coverage
uv run pytest --cov=src/scrapebadger --cov-report=html
# Run specific tests
uv run pytest tests/test_client.py -v
```
### Code Quality
```bash
# Lint
uv run ruff check src/ tests/
# Format
uv run ruff format src/ tests/
# Type check
uv run mypy src/
# All checks
uv run ruff check src/ tests/ && uv run ruff format --check src/ tests/ && uv run mypy src/
```
## Contributing
Contributions are welcome! Please read our [Contributing Guide](CONTRIBUTING.md) for details.
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Run tests and linting (`uv run pytest && uv run ruff check`)
5. Commit your changes (`git commit -m 'Add amazing feature'`)
6. Push to the branch (`git push origin feature/amazing-feature`)
7. Open a Pull Request
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Support
- **Documentation**: [scrapebadger.com/docs](https://scrapebadger.com/docs)
- **Issues**: [GitHub Issues](https://github.com/scrape-badger/scrapebadger-python/issues)
- **Email**: support@scrapebadger.com
- **Discord**: [Join our community](https://discord.gg/scrapebadger)
---
Made with ❤️ by [ScrapeBadger](https://scrapebadger.com)
| text/markdown | null | ScrapeBadger <support@scrapebadger.com> | null | ScrapeBadger <support@scrapebadger.com> | MIT | api, async, data-extraction, scraping, sdk, social-media, twitter, web-scraping | [
"Development Status :: 4 - Beta",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"pydantic>=2.0.0",
"mypy>=1.13.0; extra == \"dev\"",
"pre-commit>=4.0.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest-cov>=5.0.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"respx>=0.21.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://scrapebadger.com",
"Documentation, https://docs.scrapebadger.com",
"Repository, https://github.com/scrapebadger/scrapebadger-python",
"Issues, https://github.com/scrapebadger/scrapebadger-python/issues",
"Changelog, https://github.com/scrapebadger/scrapebadger-python/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:37:17.681455 | scrapebadger-0.1.10.tar.gz | 29,739 | c6/56/c4e751aaf7bd6bd71fd5dc8683fecf6025edc2a9444c9d950fd410cb1aec/scrapebadger-0.1.10.tar.gz | source | sdist | null | false | b52c76632561cde19867378913e0da08 | 56c16d764f410dd20c6241b87ab8bfbe2211ebd0072eda7bd621dd04c174aaab | c656c4e751aaf7bd6bd71fd5dc8683fecf6025edc2a9444c9d950fd410cb1aec | null | [
"LICENSE"
] | 204 |
2.4 | structurize | 3.4.3 | Tools to convert from and to JSON Structure from various other schema languages. | # Structurize / Avrotize
**Structurize** is a powerful schema conversion toolkit that helps you transform between various schema formats including JSON Schema, JSON Structure, Avro Schema, Protocol Buffers, XSD, SQL, and many more.
This package is published under two names:
- **`structurize`** - The primary package name, emphasizing JSON Structure conversion capabilities
- **`avrotize`** - The original package name, emphasizing Avro Schema conversion capabilities
Both packages currently share the same features and codebase. However, in future releases, Avro-focused and JSON Structure-focused features may be split across the two tools to make the feature list more manageable and focused for users. Choose whichever variant better aligns with your primary use case.
## Quick Start
Install the package:
```bash
pip install structurize
```
or
```bash
pip install avrotize
```
Use the CLI:
```bash
# Using structurize
structurize --help
# Or using avrotize
avrotize --help
```
## Key Features
- Convert between JSON Schema, JSON Structure, and Avro Schema
- Transform schemas to and from Protocol Buffers, XSD, ASN.1
- Generate code in C#, Python, TypeScript, Java, Go, Rust, C++, JavaScript
- Export schemas to SQL databases (MySQL, PostgreSQL, SQL Server, Oracle, Cassandra, MongoDB, DynamoDB, and more)
- Convert to Parquet, Iceberg, Kusto, and other data formats
- Generate documentation in Markdown
## Documentation
For complete documentation, examples, and detailed usage instructions, please see the main repository:
**[📖 Full Documentation](https://github.com/clemensv/avrotize)**
The main README includes:
- Comprehensive command reference
- Conversion examples and use cases
- Code generation guides
- Database schema export instructions
- API documentation
## License
MIT License - see the [LICENSE](../LICENSE) file in the repository root.
| text/markdown | null | Clemens Vasters <clemensv@microsoft.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"jsonschema>=4.23.0",
"lark>=1.1.9",
"pyarrow>=22.0.0",
"asn1tools>=0.167.0",
"jsonpointer>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsoncomparison>=1.1.0",
"requests>=2.32.3",
"azure-kusto-data>=5.0.5",
"azure-identity>=1.17.1",
"datapackage>=1.15.4",
"jinja2>=3.1.4",
"pyiceberg>=0.10.0",
"pandas>=2.2.2",
"docker>=7.1.0",
"pytest>=8.3.2; extra == \"dev\"",
"fastavro>=1.9.5; extra == \"dev\"",
"xmlschema>=3.3.2; extra == \"dev\"",
"xmlunittest>=1.0.1; extra == \"dev\"",
"pylint>=3.2.6; extra == \"dev\"",
"dataclasses_json>=0.6.7; extra == \"dev\"",
"dataclasses>=0.8; extra == \"dev\"",
"pydantic>=2.8.2; extra == \"dev\"",
"avro>=1.12.0; extra == \"dev\"",
"testcontainers>=4.7.2; extra == \"dev\"",
"pymysql>=1.1.1; extra == \"dev\"",
"psycopg2>=2.9.9; extra == \"dev\"",
"pyodbc>=5.1.0; extra == \"dev\"",
"pymongo>=4.8.0; extra == \"dev\"",
"oracledb>=2.3.0; extra == \"dev\"",
"cassandra-driver>=3.29.1; extra == \"dev\"",
"sqlalchemy>=2.0.32; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:37:10.574493 | structurize-3.4.3.tar.gz | 392,998 | 59/5b/4a144eff0736f91b47d2e715d28d035e638eb97e5adbbe6474a977116e4c/structurize-3.4.3.tar.gz | source | sdist | null | false | ce030356eeda32d890728c52c71cad36 | f7df670aa6dbb4777d0db210e4225b0ddc81034ac513d4f30cefe008bcb05fc5 | 595b4a144eff0736f91b47d2e715d28d035e638eb97e5adbbe6474a977116e4c | null | [
"LICENSE"
] | 201 |
2.4 | robotics-application-manager | 5.6.7 | Robotics Application Manager | # Robotics Application Manager (RAM) Documentation
## Table of Contents
1. [Project Overview](#project-overview)
2. [Main Class: `Manager`](#main-class-manager)
- [Purpose and Functionality](#purpose-and-functionality)
- [States and Transitions](#states-and-transitions)
- [Key Methods](#key-methods)
- [Interactions with Other Components](#interactions-with-other-components)
3. [Usage Example](#usage-example)
## Project Overview
The Robotics Application Manager (RAM) is an advanced manager for executing robotic applications. It operates as a state machine, managing the lifecycle of robotic applications from initialization to termination, and communicates over the following ports:
- **7063**: Connection with other applications (Robotics Academy, BT Studio, Unibotics)
- **6080-6090**: VNC for tools
## Main Class: `Manager`
### Purpose and Functionality
The `Manager` class is the core of RAM, orchestrating operations and managing transitions between various application states.
### States and Transitions
- **States**:
- `idle`: The initial state, waiting for a connection.
- `connected`: Connected and ready to initiate processes.
- `world_ready`: The world environment is set up and ready.
- `tools_ready`: Tools are prepared and ready.
- `application_running`: A robotic application is actively running.
- `paused`: The application is paused.
- **Transitions**:
- `connect`: Moves from `idle` to `connected`.
- `launch_world`: Initiates the world setup from `connected`.
- `prepare_tools`: Prepares the tools in `world_ready`.
- `run_application`: Starts the application in `tools_ready` or `paused`.
- `pause`: Pauses the running application.
- `resume`: Resumes a paused application.
- `terminate`: Stops the application and goes back to `tools_ready`.
- `stop`: Completely stops the application.
- `disconnect`: Disconnects from the current session and returns to `idle`.
- **Stateless Transitions**:
- `gui`: Redirects content to the gui webserver.
- `style_check`: Triggers on_style_check.
- `code_analysis`: Triggers on_code_analysis.
- `code_format`: Triggers on_code_format.
- `code_autocomplete`: Triggers on_code_autocomplete.
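Taken together, these states and transitions form a small finite-state machine that can be written down directly. A stdlib sketch mirroring the lists above (the real `Manager` is built on the `transitions` library; the allowed sources for `stop` and `disconnect` are simplified here because the text leaves them open):

```python
# (trigger, source) -> destination pairs, mirroring the state and
# transition lists above.
TRANSITIONS = {
    ("connect", "idle"): "connected",
    ("launch_world", "connected"): "world_ready",
    ("prepare_tools", "world_ready"): "tools_ready",
    ("run_application", "tools_ready"): "application_running",
    ("run_application", "paused"): "application_running",
    ("pause", "application_running"): "paused",
    ("resume", "paused"): "application_running",
    ("terminate", "application_running"): "tools_ready",
    ("disconnect", "connected"): "idle",
}

def step(state: str, trigger: str) -> str:
    """Apply a trigger, rejecting transitions the machine forbids."""
    try:
        return TRANSITIONS[(trigger, state)]
    except KeyError:
        raise ValueError(f"cannot '{trigger}' from state '{state}'") from None
```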
### Key Methods
- `on_connect(self, event)`: Manages the transition to the 'connected' state.
- `on_launch_world(self, event)`: Prepares and launches the robotic world.
- `on_prepare_tools(self, event)`: Sets up tools.
- `on_run_application(self, event)`: Executes the robotic application.
- `on_pause(self, msg)`: Pauses the running application.
- `on_resume(self, msg)`: Resumes the paused application.
- `on_terminate(self, event)`: Terminates the running application.
- `on_disconnect(self, event)`: Handles disconnection and cleanup.
- `on_style_check(self, event)`: Checks the style of the user code.
- `on_code_analysis(self, event)`: Analyzes the style and format of the user code using pylint.
- `on_code_format(self, event)`: Formats the user code using black.
- `on_code_autocomplete(self, event)`: Searches for all available code completions using Jedi.
- **Exception Handling**: Each of these methods handles the specific errors that can arise during its operation.
### Interactions with Other Components
#### Interaction Between `Manager` and `ManagerConsumer`
1. **Message Queue Integration**: `ManagerConsumer` puts received messages into `manager_queue` for `Manager` to process.
2. **State Updates and Commands**: `Manager` sends state updates or commands to the client through `ManagerConsumer`.
3. **Client Connection Handling**: `Manager` relies on `ManagerConsumer` for client connection and disconnection handling.
4. **Error Handling**: `ManagerConsumer` communicates exceptions back to the client and `Manager`.
5. **Lifecycle Management**: `Manager` controls the start and stop of the `ManagerConsumer` WebSocket server.
#### Interaction Between `Manager` and `LauncherWorld`
1. **World Initialization and Launching**: `Manager` initializes `LauncherWorld` with specific configurations, such as world type (e.g., `gazebo`, `drones`) and the launch file path.
2. **Dynamic Module Management**: `LauncherWorld` dynamically launches modules based on the world configuration and ROS version, as dictated by `Manager`.
3. **State Management and Transition**: The state of `Manager` is updated in response to the actions performed by `LauncherWorld`. For example, once the world is ready, `Manager` may transition to the `world_ready` state.
4. **Termination and Cleanup**: `Manager` can instruct `LauncherWorld` to terminate the world environment through its `terminate` method. `LauncherWorld` ensures a clean and orderly shutdown of all modules and resources involved in the world setup.
5. **Error Handling and Logging**: `Manager` handles exceptions and errors that may arise during the world setup or termination processes, ensuring robust operation.
#### Interaction Between `Manager` and `LauncherTools`
1. **Visualization Setup**: `Manager` initializes `LauncherTools` with a specific tools configuration, which can include tools like `console`, `simulator`, `web_gui`, etc.
2. **Module Launching for Tools**: `LauncherTools` dynamically launches tools modules based on the configuration provided by `Manager`.
3. **State Management and Synchronization**: Upon successful setup of the tools, `Manager` can update its state (e.g., to `tools_ready`) to reflect the readiness of the tools.
4. **Termination of Tools**: `Manager` can instruct `LauncherTools` to terminate the current tools setup using its `terminate` method.
5. **Error Handling and Logging**: `Manager` is equipped to manage exceptions and errors that might occur during the setup or termination of the tools.
#### Interaction Between `Manager` and `application_process`
1. **Application Execution**: `Manager` initiates the `application_process` when transitioning to the `application_running` state.
2. **Application Configuration and Launching**: Before launching the `application_process`, `Manager` configures the necessary parameters.
3. **Process Management**: `Manager` monitors and controls the `application_process`.
4. **Error Handling and Logging**: `Manager` is responsible for handling any errors or exceptions that occur during the execution of the `application_process`.
5. **State Synchronization**: The state of the `application_process` is closely synchronized with the state machine in `Manager`.
#### Interaction Between `Manager` and `Server` (Specific to RoboticsAcademy Applications) (Now inside tool web_gui)
1. **Dedicated WebSocket Server for GUI Updates**: `Server` is used exclusively for RoboticsAcademy applications that require real-time interaction with a web-based GUI.
2. **Client Communication for GUI Module**: For RoboticsAcademy applications with a GUI module, `Server` handles incoming and outgoing messages.
3. **Real-time Interaction and Feedback**: `Server` allows for real-time feedback and interaction within the browser-based GUI.
4. **Conditional Operation Based on Application Type**: `Manager` initializes and controls `Server` based on the specific needs of the RoboticsAcademy application being executed.
5. **Error Handling and Logging**: `Manager` ensures robust error handling for `Server`.
## Usage Example
1. **Connecting to RAM**:
- Initially, the RAM is in the `idle` state.
- A client (e.g., a user interface or another system) connects to RAM, triggering the `connect` transition and moving RAM to the `connected` state.
2. **Launching the World**:
- Once connected, the client can request RAM to launch a robotic world by sending a `launch_world` command.
- RAM transitions to the `world_ready` state after successfully setting up the world environment.
3. **Setting Up Tools**:
- After the world is ready, the client requests RAM to prepare the tools with a `prepare_tools` command.
- RAM transitions to the `tools_ready` state, indicating that the tools are set up and ready.
4. **Running an Application**:
- The client then requests RAM to run a specific robotic application, moving RAM into the `application_running` state.
- The application executes, and RAM handles its process management, including monitoring and error handling.
5. **Pausing and Resuming Application**:
- The client can send `pause` and `resume` commands to RAM to control the application's execution.
- RAM transitions to the `paused` state when paused and returns to `application_running` upon resumption.
6. **Stopping the Application**:
- Finally, the client can send a `stop` command to halt the application.
- RAM stops the application and transitions back to the `tools_ready` state, ready for new commands.
7. **Disconnecting**:
- Once all tasks are completed, the client can disconnect from RAM, which then returns to the `idle` state, ready for a new session.
| text/markdown | null | Example Author <author@example.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic==2.4.2",
"transitions==0.9.0",
"pylint==3.3.1",
"websocket-client==1.5.2",
"argparse==1.4.0",
"six==1.16.0",
"psutil==5.9.0",
"watchdog==2.1.5",
"jedi",
"black==24.10.0",
"websocket_server==0.6.4"
] | [] | [] | [] | [
"Homepage, https://github.com/JdeRobot/RoboticsApplicationManager",
"Issues, https://github.com/JdeRobot/RoboticsApplicationManager/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:37:07.300781 | robotics_application_manager-5.6.7.tar.gz | 53,946 | 80/2d/60540f2a2a0422a2bc311955a28895c3417575e797e5a5c8418d9bd41ed3/robotics_application_manager-5.6.7.tar.gz | source | sdist | null | false | a258c192c37dda4ee2d5df76a5560ee9 | 0c41490b48586b53d59a1cf7ef8ca25e47f838101ed5e3168249ebfba9d20d3e | 802d60540f2a2a0422a2bc311955a28895c3417575e797e5a5c8418d9bd41ed3 | GPL-3.0-only | [
"LICENSE"
] | 230 |
2.4 | avrotize | 3.4.3 | Tools to convert from and to Avro Schema from various other schema languages. | # Avrotize & Structurize
mcp-name: io.github.clemensv/avrotize
[](https://pypi.org/project/avrotize/)
[](https://pypi.org/project/avrotize/)
[](https://github.com/clemensv/avrotize/actions/workflows/build_deploy.yml)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/avrotize/)
**[📚 Documentation & Examples](https://clemensv.github.io/avrotize/)** | **[🎨 Conversion Gallery](https://clemensv.github.io/avrotize/gallery/)**
Avrotize is a ["Rosetta Stone"](https://en.wikipedia.org/wiki/Rosetta_Stone) for data structure definitions, allowing you to convert between numerous data and database schema formats and to generate code for different programming languages.
It is, for instance, a well-documented and predictable converter and code generator for data structures originally defined in JSON Schema (of arbitrary complexity).
The tool leans on the Apache Avro-derived [Avrotize Schema](specs/avrotize-schema.md) as its schema model.
- Programming languages: Python, C#, Java, TypeScript, JavaScript, Rust, Go, C++
- SQL Databases: MySQL, MariaDB, PostgreSQL, SQL Server, Oracle, SQLite, BigQuery, Snowflake, Redshift, DB2
- Other databases: KQL/Kusto, MongoDB, Cassandra, Redis, Elasticsearch, DynamoDB, CosmosDB
- Data schema formats: Avro, JSON Schema, XML Schema (XSD), Protocol Buffers 2 and 3, ASN.1, Apache Parquet
## Installation
You can install Avrotize from PyPI, [having installed Python 3.10 or later](https://www.python.org/downloads/):
```bash
pip install avrotize
```
For MCP server support (`avrotize mcp`), install with the MCP extra:
```bash
pip install "avrotize[mcp]"
```
For SQL database support (`sql2a` command), install the optional database drivers:
```bash
# PostgreSQL
pip install avrotize[postgres]
# MySQL
pip install avrotize[mysql]
# SQL Server
pip install avrotize[sqlserver]
# All SQL databases
pip install avrotize[all-sql]
```
## Usage
Avrotize provides several commands for converting schema formats via Avrotize Schema.
Converting to Avrotize Schema:
- [`avrotize p2a`](#convert-proto-schema-to-avrotize-schema) - Convert Protobuf (2 or 3) schema to Avrotize Schema.
- [`avrotize j2a`](#convert-json-schema-to-avrotize-schema) - Convert JSON schema to Avrotize Schema.
- [`avrotize x2a`](#convert-xml-schema-xsd-to-avrotize-schema) - Convert XML schema to Avrotize Schema.
- [`avrotize asn2a`](#convert-asn1-schema-to-avrotize-schema) - Convert ASN.1 to Avrotize Schema.
- [`avrotize k2a`](#convert-kusto-table-definition-to-avrotize-schema) - Convert Kusto table definitions to Avrotize Schema.
- [`avrotize sql2a`](#convert-sql-database-schema-to-avrotize-schema) - Convert SQL database schema to Avrotize Schema.
- [`avrotize json2a`](#infer-avro-schema-from-json-files) - Infer Avro schema from JSON files.
- [`avrotize json2s`](#infer-json-structure-schema-from-json-files) - Infer JSON Structure schema from JSON files.
- [`avrotize xml2a`](#infer-avro-schema-from-xml-files) - Infer Avro schema from XML files.
- [`avrotize xml2s`](#infer-json-structure-schema-from-xml-files) - Infer JSON Structure schema from XML files.
- [`avrotize pq2a`](#convert-parquet-schema-to-avrotize-schema) - Convert Parquet schema to Avrotize Schema.
- [`avrotize csv2a`](#convert-csv-file-to-avrotize-schema) - Convert CSV file to Avrotize Schema.
- [`avrotize kstruct2a`](#convert-kafka-connect-schema-to-avrotize-schema) - Convert Kafka Connect Schema to Avrotize Schema.
Converting from Avrotize Schema:
- [`avrotize a2p`](#convert-avrotize-schema-to-proto-schema) - Convert Avrotize Schema to Protobuf 3 schema.
- [`avrotize a2j`](#convert-avrotize-schema-to-json-schema) - Convert Avrotize Schema to JSON schema.
- [`avrotize a2x`](#convert-avrotize-schema-to-xml-schema) - Convert Avrotize Schema to XML schema.
- [`avrotize a2k`](#convert-avrotize-schema-to-kusto-table-declaration) - Convert Avrotize Schema to Kusto table definition.
- [`avrotize s2k`](#convert-json-structure-schema-to-kusto-table-declaration) - Convert JSON Structure Schema to Kusto table definition.
- [`avrotize a2sql`](#convert-avrotize-schema-to-sql-table-definition) - Convert Avrotize Schema to SQL table definition.
- [`avrotize s2sql`](#convert-json-structure-schema-to-sql-schema) - Convert JSON Structure Schema to SQL table definition.
- [`avrotize a2pq`](#convert-avrotize-schema-to-empty-parquet-file) - Convert Avrotize Schema to Parquet or Iceberg schema.
- [`avrotize a2ib`](#convert-avrotize-schema-to-iceberg-schema) - Convert Avrotize Schema to Iceberg schema.
- [`avrotize s2ib`](#convert-json-structure-to-iceberg-schema) - Convert JSON Structure to Iceberg schema.
- [`avrotize a2mongo`](#convert-avrotize-schema-to-mongodb-schema) - Convert Avrotize Schema to MongoDB schema.
- [`avrotize a2cassandra`](#convert-avrotize-schema-to-cassandra-schema) - Convert Avrotize Schema to Cassandra schema.
- [`avrotize s2cassandra`](#convert-json-structure-schema-to-cassandra-schema) - Convert JSON Structure Schema to Cassandra schema.
- [`avrotize a2es`](#convert-avrotize-schema-to-elasticsearch-schema) - Convert Avrotize Schema to Elasticsearch schema.
- [`avrotize a2dynamodb`](#convert-avrotize-schema-to-dynamodb-schema) - Convert Avrotize Schema to DynamoDB schema.
- [`avrotize a2cosmos`](#convert-avrotize-schema-to-cosmosdb-schema) - Convert Avrotize Schema to CosmosDB schema.
- [`avrotize a2couchdb`](#convert-avrotize-schema-to-couchdb-schema) - Convert Avrotize Schema to CouchDB schema.
- [`avrotize a2firebase`](#convert-avrotize-schema-to-firebase-schema) - Convert Avrotize Schema to Firebase schema.
- [`avrotize a2hbase`](#convert-avrotize-schema-to-hbase-schema) - Convert Avrotize Schema to HBase schema.
- [`avrotize a2neo4j`](#convert-avrotize-schema-to-neo4j-schema) - Convert Avrotize Schema to Neo4j schema.
- [`avrotize a2dp`](#convert-avrotize-schema-to-datapackage-schema) - Convert Avrotize Schema to Datapackage schema.
- [`avrotize a2md`](#convert-avrotize-schema-to-markdown-documentation) - Convert Avrotize Schema to Markdown documentation.
- [`avrotize s2md`](#convert-json-structure-schema-to-markdown-documentation) - Convert JSON Structure schema to Markdown documentation.
Direct conversions (JSON Structure):
- [`avrotize s2p`](#convert-json-structure-to-protocol-buffers) - Convert JSON Structure to Protocol Buffers (.proto files).
- [`avrotize oas2s`](#convert-openapi-to-json-structure) - Convert OpenAPI 3.x document to JSON Structure.
Generate code from Avrotize Schema:
- [`avrotize a2cs`](#convert-avrotize-schema-to-c-classes) - Generate C# code from Avrotize Schema.
- [`avrotize a2java`](#convert-avrotize-schema-to-java-classes) - Generate Java code from Avrotize Schema.
- [`avrotize a2py`](#convert-avrotize-schema-to-python-classes) - Generate Python code from Avrotize Schema.
- [`avrotize a2ts`](#convert-avrotize-schema-to-typescript-classes) - Generate TypeScript code from Avrotize Schema.
- [`avrotize a2js`](#convert-avrotize-schema-to-javascript-classes) - Generate JavaScript code from Avrotize Schema.
- [`avrotize a2cpp`](#convert-avrotize-schema-to-c-classes) - Generate C++ code from Avrotize Schema.
- [`avrotize a2go`](#convert-avrotize-schema-to-go-classes) - Generate Go code from Avrotize Schema.
- [`avrotize a2rust`](#convert-avrotize-schema-to-rust-classes) - Generate Rust code from Avrotize Schema.
Generate code from JSON Structure:
- [`avrotize s2cpp`](#convert-json-structure-to-c-classes) - Generate C++ code from JSON Structure schema.
- [`avrotize s2cs`](#convert-json-structure-to-c-classes) - Generate C# code from JSON Structure schema.
- [`avrotize s2go`](#convert-json-structure-to-go-classes) - Generate Go code from JSON Structure schema.
- [`avrotize s2java`](#convert-json-structure-to-java-classes) - Generate Java code from JSON Structure schema.
- [`avrotize s2py`](#convert-json-structure-to-python-classes) - Generate Python code from JSON Structure schema.
- [`avrotize s2rust`](#convert-json-structure-to-rust-classes) - Generate Rust code from JSON Structure schema.
- [`avrotize s2ts`](#convert-json-structure-to-typescript-classes) - Generate TypeScript code from JSON Structure schema.
Direct JSON Structure conversions:
- [`avrotize s2csv`](#convert-json-structure-to-csv-schema) - Convert JSON Structure schema to CSV schema.
- [`avrotize a2csv`](#convert-avrotize-schema-to-csv-schema) - Convert Avrotize schema to CSV schema.
- [`avrotize s2x`](#convert-json-structure-to-xml-schema-xsd) - Convert JSON Structure to XML Schema (XSD).
- [`avrotize s2graphql`](#convert-json-structure-schema-to-graphql-schema) - Convert JSON Structure schema to GraphQL schema.
- [`avrotize a2graphql`](#convert-avrotize-schema-to-graphql-schema) - Convert Avrotize schema to GraphQL schema.
Other commands:
- [`avrotize pcf`](#create-the-parsing-canonical-form-pcf-of-an-avrotize-schema) - Create the Parsing Canonical Form (PCF) of an Avrotize Schema.
- [`avrotize validate`](#validate-json-instances-against-schemas) - Validate JSON instances against Avro or JSON Structure schemas.
- `avrotize mcp` - Run Avrotize as a local MCP server exposing conversion tools to MCP clients.
JSON Structure conversions:
- [`avrotize s2dp`](#convert-json-structure-schema-to-datapackage-schema) - Convert JSON Structure schema to Datapackage schema.
## MCP server
You can run Avrotize as a local MCP server over stdio:
```bash
avrotize mcp
```
Catalog-ready metadata files are included:
- Official MCP Registry manifest: [server.json](server.json)
- Microsoft/GitHub MCP catalog listing template: [catalogs/microsoft-github-mcp.md](catalogs/microsoft-github-mcp.md)
- Generic cross-catalog manifest (optional): [mcp-server.json](mcp-server.json)
To publish to the official MCP Registry:
```bash
mcp-publisher validate server.json
mcp-publisher publish server.json
```
The MCP server exposes tools to:
- describe server capabilities and routing guidance (`describe_capabilities`)
- list available conversion commands (`list_conversions`)
- inspect a conversion command (`get_conversion`)
- execute conversions (`run_conversion`)
## Overview
You can use Avrotize to convert between Avro/Avrotize Schema and other schema formats like JSON Schema, XML Schema (XSD), Protocol Buffers (Protobuf), ASN.1, and database schema formats like Kusto Data Table Definition (KQL) and SQL Table Definition. That means you can also convert from JSON Schema to Protobuf by going via Avrotize Schema.
You can also generate C#, Java, TypeScript, JavaScript, Python, C++, Go, and Rust code from Avrotize Schema documents. The difference from the native Avro tools is that Avrotize can emit data classes without Avro library dependencies and, optionally, with annotations for JSON serialization libraries like Jackson or System.Text.Json.
The tool does not convert data (instances of schemas), only the data structure definitions.
Note that the primary objective of the tool is the conversion of schemas that describe data structures used in applications, databases, and messaging systems. While the project's internal tests cover a lot of ground, it is not a primary goal of the tool to convert every complex document schema, such as those used for DevOps pipelines or system configuration files.
## Why?
Data structure definitions are an essential part of data exchange, serialization, and storage. They define the shape and type of data, and they are foundational for tooling and libraries for working with the data. Nearly all data schema languages are coupled to a specific data exchange or storage format, locking the definitions to that format.
Avrotize is designed as a tool to "unlock" data definitions from JSON Schema or XML Schema and make them usable in other contexts. The intent is also to lay a foundation for transcoding data from one format to another, by translating the schema definitions as accurately as possible into the schema model of the target format's schema. The transcoding of the data itself requires separate tools that are beyond the scope of this project.
The use of the term "data structure definition" and not "data object definition" is quite intentional. The focus of the tool is on data structures that can be used for messaging and eventing payloads, for data serialization, and for database tables, with the goal that those structures can be mapped cleanly from and to common programming language types.
Therefore, Avrotize intentionally ignores common techniques to model object-oriented inheritance. For instance, when converting from JSON Schema, all content from `allOf` expressions is merged into a single record type rather than trying to model the inheritance tree in Avro.
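As a rough illustration of that merging behavior, folding `allOf` branches amounts to merging their `properties` into one flat record. This is a naive sketch of the idea, not Avrotize's actual implementation:

```python
# Naive sketch of the allOf-merging idea described above: fold the
# properties and required lists of every allOf branch into one flat shape.
# Illustration only -- NOT Avrotize's actual implementation.
def merge_all_of(schema: dict) -> dict:
    merged_props = {}
    required = set()
    for branch in schema.get("allOf", []):
        merged_props.update(branch.get("properties", {}))
        required.update(branch.get("required", []))
    return {"properties": merged_props, "required": sorted(required)}

base = {"allOf": [
    {"properties": {"id": {"type": "string"}}, "required": ["id"]},
    {"properties": {"name": {"type": "string"}}},
]}
merged = merge_all_of(base)
# merged["properties"] now carries both "id" and "name" as sibling fields,
# with no trace of the original inheritance-style split.
```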
## Avrotize Schema
Avrotize Schema is a schema model that is a full superset of the popular Apache Avro Schema model. Avrotize Schema is the "pivot point" for this tool. All schemas are converted from and to Avrotize Schema.
Since Avrotize Schema is a superset of Avro Schema and uses its extensibility features, every Avrotize Schema is also a valid Avro Schema and vice versa.
Why did we pick Avro Schema as the foundational schema model?
Avro Schema ...
- provides a simple, clean, and concise way to define data structures. It is quite easy to understand and use.
- is self-contained by design, without requiring external references. Avro Schema can express complex data structure hierarchies spanning multiple namespace boundaries in a single file, which neither JSON Schema nor XML Schema nor Protobuf can do.
- can be resolved by code generators and other tools "top-down" since it enforces dependencies to be ordered such that no forward-referencing occurs.
- emerged out of the Apache Hadoop ecosystem and is widely used for serialization and storage of data and for data exchange between systems.
- supports native and logical types that cover the needs of many business and technical use cases.
- can describe the popular JSON data encoding very well and in a way that always maps cleanly to a wide range of programming languages and systems. In contrast, it's quite easy to inadvertently define a JSON Schema that is very difficult to map to a programming language structure.
- is itself expressed as JSON. That makes it easy to parse and generate, which is not the case for Protobuf or ASN.1, which require bespoke parsers.
> Note that while Avro Schema is great for defining data structures, and data classes generated from Avro Schema by this tool or others can be used with the most popular JSON serialization libraries, the Apache Avro project's own JSON encoding has fairly grave interoperability issues with common usage of JSON. Avrotize defines an alternate JSON encoding in [`avrojson.md`](avrojson.md).
Avro Schema does not support all the bells and whistles of XML Schema or JSON Schema, but that is a feature, not a bug, as it ensures the portability of the schemas across different systems and infrastructures. Specifically, Avro Schema does not support many of the data validation features found in JSON Schema or XML Schema. There are no `pattern`, `format`, `minimum`, `maximum`, or `required` keywords in Avro Schema, and Avro does not support conditional validation.
In a system where data originates as XML or JSON described by a validating XML Schema or JSON Schema, the assumption we make here is that data will be validated using its native schema language first, and then the Avro Schema will be used for transformation or transfer or storage.
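To make the contrast concrete, here is the same field expressed in both models (illustrative shapes, not tool output): the JSON Schema side carries validation keywords, the Avro side carries none.

```python
# Illustration only: the same "age" field in JSON Schema, with validation
# keywords, and in Avro Schema, which has no equivalent keywords at all.
json_schema_field = {"type": "integer", "minimum": 0, "maximum": 150}
avro_field = {"name": "age", "type": "int"}

# The validation keywords are simply absent from the Avro model: validate
# with the native schema first, then use Avro for transfer and storage.
dropped = {k for k in json_schema_field if k in ("minimum", "maximum")}
```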
## Adding CloudEvents columns for database tables
When converting Avrotize Schema to Kusto Data Table Definition (KQL), SQL Table Definition, or Parquet Schema, the tool can add special columns for [CloudEvents](https://cloudevents.io) attributes. CNCF CloudEvents is a specification for describing event data in a common way.
The rationale for adding such columns to database tables is that messages and events commonly separate event metadata from the payload data, while that information is merged when events are projected into a database. The metadata often carries important context information about the event that is not contained in the payload itself. Therefore, the tool can add those columns to the database tables for easy alignment of the message context with the payload when building event stores.
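For orientation, these are the CloudEvents context attributes defined by the CNCF spec that such extra columns would carry alongside the payload. The exact column set and column types the tool emits may differ; this table is illustrative:

```python
# CNCF CloudEvents context attributes (per the spec) that event-store
# columns would typically carry next to the payload. Illustrative only;
# the columns Avrotize actually emits may differ.
CLOUDEVENTS_ATTRIBUTES = {
    "id": "string",               # required: event identifier
    "source": "string",           # required: event origin (URI-reference)
    "type": "string",             # required: event type name
    "specversion": "string",      # required: CloudEvents spec version
    "time": "timestamp",          # optional: event production time
    "subject": "string",          # optional: subject within the source
    "datacontenttype": "string",  # optional: payload media type
}
```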
### Convert Proto schema to Avrotize Schema
```bash
avrotize p2a <path_to_proto_file> [--out <path_to_avro_schema_file>]
```
Parameters:
- `<path_to_proto_file>`: The path to the Protobuf schema file to be converted. If omitted, the file is read from stdin.
- `--out`: The path to the Avrotize Schema file to write the conversion result to. If omitted, the output is directed to stdout.
Conversion notes:
- Proto 2 and Proto 3 syntax are supported.
- Proto package names are mapped to Avro namespaces. The tool does resolve imports and consolidates all imported types into a single Avrotize Schema file.
- The tool embeds all 'well-known' Protobuf 3.0 types in Avro format and injects them as needed when the respective types are imported. Only the `Timestamp` type is mapped to the Avro logical type 'timestamp-millis'. The rest of the well-known Protobuf types are kept as Avro record types with the same field names and types.
- Protobuf allows any scalar type as a `map` key; Avro does not. When converting from Proto to Avro, the type information for map keys is ignored.
- The field numbers in message types are not mapped to the positions of the fields in Avro records. The fields in Avro are ordered as they appear in the Proto schema. Consequently, the Avrotize Schema also ignores the `extensions` and `reserved` keywords in the Proto schema.
- The `optional` keyword results in an Avro field being nullable (union with the `null` type), while the `required` keyword results in a non-nullable field. The `repeated` keyword results in an Avro field being an array of the field type.
- The `oneof` keyword in Proto is mapped to an Avro union type.
- All `options` in the Proto schema are ignored.
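To illustrate the `oneof` and `optional` notes above (shapes are illustrative, not exact tool output): a Protobuf `oneof payload { string text = 1; int64 count = 2; }` becomes an Avro field typed as a union, and an `optional` field becomes a union with `null`.

```python
# Sketch of the Proto -> Avro field mappings described above.
# Illustrative field shapes, not literal output of `avrotize p2a`.

# oneof payload { string text = 1; int64 count = 2; }
oneof_field = {"name": "payload", "type": ["string", "long"]}

# optional string note = 3;  -> nullable field (union with null)
optional_field = {"name": "note", "type": ["null", "string"]}
```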
### Convert Avrotize Schema to Proto schema
```bash
avrotize a2p <path_to_avro_schema_file> [--out <path_to_proto_directory>] [--naming <naming_mode>] [--allow-optional]
```
Parameters:
- `<path_to_avro_schema_file>`: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- `--out`: The path to the Protobuf schema directory to write the conversion result to. If omitted, the output is directed to stdout.
- `--naming`: (optional) Type naming convention. Choices are `snake`, `camel`, `pascal`.
- `--allow-optional`: (optional) Enable support for 'optional' fields.
Conversion notes:
- Avro namespaces are resolved into distinct proto package definitions. The tool will create a new `.proto` file with the package definition and an `import` statement for each namespace found in the Avrotize Schema.
- Avro type unions `[]` are converted to `oneof` expressions in Proto. Avro allows for maps and arrays in the type union, whereas Proto only supports scalar types and message type references. The tool will therefore emit message types containing a single array or map field for any such case and add it to the containing type, and will also recursively resolve further unions in the array and map values.
- The sequence of fields in a message follows the sequence of fields in the Avro record. When type unions need to be resolved into `oneof` expressions, the alternative fields need to be assigned field numbers, which will shift the field numbers for any subsequent fields.
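The field-number shift can be sketched as follows. The helper below is hypothetical (not the tool's code); it just shows how a union that expands into several `oneof` alternatives consumes a run of field numbers and pushes subsequent fields back:

```python
# Hypothetical sketch of the numbering note above: each union alternative
# consumes its own proto field number, shifting all subsequent fields.
def assign_field_numbers(fields):
    """fields: list of (name, n_alternatives); unions have n_alternatives > 1."""
    numbered, next_no = [], 1
    for name, alts in fields:
        numbered.append((name, list(range(next_no, next_no + alts))))
        next_no += alts
    return numbered

# Record (a, union-of-3, c): the union occupies numbers 2-4,
# so "c" lands on number 5 instead of 3.
nums = assign_field_numbers([("a", 1), ("u", 3), ("c", 1)])
```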
### Convert JSON schema to Avrotize Schema
```bash
avrotize j2a <path_to_json_schema_file> [--out <path_to_avro_schema_file>] [--namespace <avro_schema_namespace>] [--split-top-level-records]
```
Parameters:
- `<path_to_json_schema_file>`: The path to the JSON schema file to be converted. If omitted, the file is read from stdin.
- `--out`: The path to the Avrotize Schema file to write the conversion result to. If omitted, the output is directed to stdout.
- `--namespace`: (optional) The namespace to use in the Avrotize Schema if the JSON schema does not define a namespace.
- `--split-top-level-records`: (optional) Split top-level records into separate files.
Conversion notes:
- [JSON Schema Handling in Avrotize](jsonschema.md)
### Convert Avrotize Schema to JSON schema
```bash
avrotize a2j <path_to_avro_schema_file> [--out <path_to_json_schema_file>] [--naming <naming_mode>]
```
Parameters:
- `<path_to_avro_schema_file>`: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- `--out`: The path to the JSON schema file to write the conversion result to. If omitted, the output is directed to stdout.
- `--naming`: (optional) Type naming convention. Choices are `snake`, `camel`, `pascal`, `default`.
Conversion notes:
- [JSON Schema Handling in Avrotize](jsonschema.md)
### Convert XML Schema (XSD) to Avrotize Schema
```bash
avrotize x2a <path_to_xsd_file> [--out <path_to_avro_schema_file>] [--namespace <avro_schema_namespace>]
```
Parameters:
- `<path_to_xsd_file>`: The path to the XML schema file to be converted. If omitted, the file is read from stdin.
- `--out`: The path to the Avrotize Schema file to write the conversion result to. If omitted, the output is directed to stdout.
- `--namespace`: (optional) The namespace to use in the Avrotize Schema if the XML schema does not define a namespace.
Conversion notes:
- All XML Schema constructs are mapped to Avro record types with fields, whereby both elements and attributes become fields in the record. XML's element/attribute distinction is thus flattened into fields and not preserved in the structure itself.
- Avro does not support `xsd:any` as Avro does not support arbitrary typing and must always use a named type. The tool will map `xsd:any` to a field `any` typed as a union that allows scalar values or two levels of array and/or map nesting.
- `simpleType` declarations that define enums are mapped to `enum` types in Avro. All other facets are ignored and simple types are mapped to the corresponding Avro type.
- `complexType` declarations with simple content, where a base type is augmented with attributes, are mapped to a record type in Avro. Any other facets defined on the complex type are ignored.
- If the schema defines a single root element, the tool will emit a single Avro record type. If the schema defines multiple root elements, the tool will emit a union of record types, each corresponding to a root element.
- All fields in the resulting Avrotize Schema are annotated with an `xmlkind` extension attribute that indicates whether the field was an `element` or an `attribute` in the XML schema.
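The `xmlkind` annotation can be pictured like this (field shapes are illustrative, not exact tool output): both the XML attribute `id` and the child element `name` become record fields, with `xmlkind` recording where each came from.

```python
# Illustration of the xmlkind annotation described above: attribute and
# element both become fields, tagged with their XML origin.
# Shapes are illustrative, not literal `avrotize x2a` output.
record = {
    "type": "record",
    "name": "Person",
    "fields": [
        {"name": "id", "type": "string", "xmlkind": "attribute"},
        {"name": "name", "type": "string", "xmlkind": "element"},
    ],
}
kinds = {f["name"]: f["xmlkind"] for f in record["fields"]}
```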
### Convert Avrotize Schema to XML schema
```bash
avrotize a2x <path_to_avro_schema_file> [--out <path_to_xsd_schema_file>] [--namespace <target_namespace>]
```
Parameters:
- `<path_to_avro_schema_file>`: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- `--out`: The path to the XML schema file to write the conversion result to. If omitted, the output is directed to stdout.
- `--namespace`: (optional) Target namespace for the XSD schema.
Conversion notes:
- Avro record types are mapped to XML Schema complex types with elements.
- Avro enum types are mapped to XML Schema simple types with restrictions.
- Avro logical types are mapped to XML Schema simple types with restrictions where required.
- Avro unions are mapped to standalone XSD simple type definitions with a union restriction if all union types are primitives.
- Avro unions with complex types are resolved into distinct types for each option, which are then joined with a choice.
### Convert JSON Structure to XML Schema (XSD)
```bash
avrotize s2x <path_to_structure_file> [--out <path_to_xsd_schema_file>] [--namespace <target_namespace>]
```
Parameters:
- `<path_to_structure_file>`: The path to the JSON Structure schema file to be converted. If omitted, the file is read from stdin.
- `--out`: The path to the XML schema file to write the conversion result to. If omitted, the output is directed to stdout.
- `--namespace`: (optional) Target namespace for the XSD schema.
Conversion notes:
- JSON Structure object types are mapped to XML Schema complex types with elements.
- JSON Structure primitive types (string, int8-128, uint8-128, float/double, boolean, etc.) are mapped to appropriate XSD simple types.
- Extended primitive types are mapped as follows:
- `binary`/`bytes` → `xs:base64Binary`
- `date` → `xs:date`
- `time` → `xs:time`
- `datetime`/`timestamp` → `xs:dateTime`
- `duration` → `xs:duration`
- `uuid` → `xs:string`
- `uri` → `xs:anyURI`
- `decimal` → `xs:decimal`
- Collection types:
- `array` and `set` → complex types with sequences of items
- `map` → complex type with entry elements containing key and value
- `tuple` → complex type with fixed sequence of typed items
- Union types (`choice` or type arrays like `["string", "null"]`):
- Tagged unions (with discriminator) → `xs:choice` elements
- Inline unions → abstract base types with concrete extensions
- Nullable types → elements with `minOccurs="0"`
- Type references (`$ref`) are resolved to named XSD types
- Type extensions (`$extends`) are mapped to XSD complex type extensions with `xs:complexContent`
- Abstract types are marked with `abstract="true"` in XSD
- Validation constraints (minLength, maxLength, pattern, minimum, maximum) are converted to XSD restrictions/facets
- Required properties become elements with `minOccurs="1"`, optional properties have `minOccurs="0"`
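The constraint-to-facet mapping in the last bullet can be sketched as follows. This is an assumed shape, not the tool's actual output; it only shows how the listed JSON Structure keywords line up with XSD restriction facets:

```python
import xml.etree.ElementTree as ET

# Sketch (assumed shapes, not literal `avrotize s2x` output) of mapping the
# validation constraints listed above onto XSD restriction facets.
XS = "http://www.w3.org/2001/XMLSchema"

def facet_restriction(base: str, constraints: dict) -> ET.Element:
    facet_names = {"minLength": "minLength", "maxLength": "maxLength",
                   "pattern": "pattern", "minimum": "minInclusive",
                   "maximum": "maxInclusive"}
    restriction = ET.Element(f"{{{XS}}}restriction", base=base)
    for key, facet in facet_names.items():
        if key in constraints:
            ET.SubElement(restriction, f"{{{XS}}}{facet}",
                          value=str(constraints[key]))
    return restriction

r = facet_restriction("xs:string", {"minLength": 1, "pattern": "[a-z]+"})
# r serializes to an xs:restriction with xs:minLength and xs:pattern facets
```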
### Convert ASN.1 schema to Avrotize Schema
```bash
avrotize asn2a <path_to_asn1_schema_file>[,<path_to_asn1_schema_file>,...] [--out <path_to_avro_schema_file>]
```
Parameters:
- `<path_to_asn1_schema_file>`: The path to the ASN.1 schema file to be converted. The tool supports multiple files in a comma-separated list. If omitted, the file is read from stdin.
- `--out`: The path to the Avrotize Schema file to write the conversion result to. If omitted, the output is directed to stdout.
Conversion notes:
- All ASN.1 types are mapped to Avro record types, enums, and unions. Avro does not support the same level of type nesting as ASN.1; the tool maps each type to the best fit.
- The tool will map the following ASN.1 types to Avro types:
- `SEQUENCE` and `SET` are mapped to Avro record types.
  - `CHOICE` is mapped to an Avro record type with all fields optional. While `CHOICE` technically corresponds to an Avro union, the ASN.1 type has a distinct named field for each option, which Avro unions cannot express.
- `OBJECT IDENTIFIER` is mapped to an Avro string type.
- `ENUMERATED` is mapped to an Avro enum type.
- `SEQUENCE OF` and `SET OF` are mapped to Avro array type.
- `BIT STRING` is mapped to Avro bytes type.
- `OCTET STRING` is mapped to Avro bytes type.
- `INTEGER` is mapped to Avro long type.
- `REAL` is mapped to Avro double type.
- `BOOLEAN` is mapped to Avro boolean type.
- `NULL` is mapped to Avro null type.
- `UTF8String`, `PrintableString`, `IA5String`, `BMPString`, `NumericString`, `TeletexString`, `VideotexString`, `GraphicString`, `VisibleString`, `GeneralString`, `UniversalString`, `CharacterString`, `T61String` are all mapped to Avro string type.
- All other ASN.1 types are mapped to Avro string type.
- The ability to parse ASN.1 schema files is limited: the tool is based on the Python asn1tools package and inherits that package's capabilities, so it may not be able to parse every ASN.1 file.
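The scalar portion of the mapping above can be summarized as a lookup table (illustrative; constructed types like `SEQUENCE` and `CHOICE` are handled structurally, and unlisted scalars fall back to string per the notes above):

```python
# The scalar portion of the ASN.1 -> Avro mapping listed above, as a
# lookup table. Illustrative summary, not the tool's internal code.
ASN1_TO_AVRO = {
    "OBJECT IDENTIFIER": "string",
    "BIT STRING": "bytes",
    "OCTET STRING": "bytes",
    "INTEGER": "long",
    "REAL": "double",
    "BOOLEAN": "boolean",
    "NULL": "null",
    "UTF8String": "string",
    "IA5String": "string",
}

def asn1_scalar_to_avro(asn1_type: str) -> str:
    # Per the notes above, all other ASN.1 types fall back to string.
    return ASN1_TO_AVRO.get(asn1_type, "string")
```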
### Convert Kusto table definition to Avrotize Schema
```bash
avrotize k2a --kusto-uri <kusto_cluster_uri> --kusto-database <kusto_database> [--out <path_to_avro_schema_file>] [--emit-cloudevents-xregistry]
```
Parameters:
- `--kusto-uri`: The URI of the Kusto cluster to connect to.
- `--kusto-database`: The name of the Kusto database to read the table definitions from.
- `--out`: The path to the Avrotize Schema file to write the conversion result to. If omitted, the output is directed to stdout.
- `--emit-cloudevents-xregistry`: (optional) See discussion below.
Conversion notes:
- The tool directly connects to the Kusto cluster and reads the table definitions from the specified database. The tool will convert all tables in the database to Avro record types, returned in a top-level type union.
- Connecting to the Kusto cluster leans on the same authentication mechanisms as the Azure CLI. The tool will use the same authentication context as the Azure CLI if it is installed and authenticated.
- The tool will map the Kusto column types to Avro types as follows:
- `bool` is mapped to Avro boolean type.
- `datetime` is mapped to Avro long type with logical type `timestamp-millis`.
- `decimal` is mapped to a logical Avro type with the `logicalType` set to `decimal` and the `precision` and `scale` set to the values of the `decimal` type in Kusto.
- `guid` is mapped to Avro string type.
- `int` is mapped to Avro int type.
- `long` is mapped to Avro long type.
- `real` is mapped to Avro double type.
- `string` is mapped to Avro string type.
- `timespan` is mapped to a logical Avro type with the `logicalType` set to `duration`.
- For `dynamic` columns, the tool will sample the data in the table to determine the structure of the dynamic column. The tool will map the dynamic column to an Avro record type with fields that correspond to the fields found in the dynamic column. If the dynamic column contains nested dynamic columns, the tool will recursively map those to Avro record types. If records with conflicting structures are found in the dynamic column, the tool will emit a union of record types for the dynamic column.
- If the `--emit-cloudevents-xregistry` option is set, the tool will emit an [xRegistry](http://xregistry.io) registry manifest file with a CloudEvent message definition for each table in the Kusto database and a separate Avro Schema for each table in the embedded schema registry. If one or more tables are found to contain CloudEvent data (as indicated by the presence of the CloudEvents attribute columns), the tool will inspect the content of the `type` (or `__type`) columns to determine which CloudEvent types have been stored in the table and will emit a CloudEvent definition and schema for each unique type.
### Convert SQL database schema to Avrotize Schema
```bash
avrotize sql2a --connection-string <connection_string> [--username <user>] [--password <pass>] [--dialect <dialect>] [--database <database>] [--table-name <table>] [--out <path_to_avro_schema_file>] [--namespace <namespace>] [--infer-json] [--infer-xml] [--sample-size <n>] [--emit-cloudevents] [--emit-xregistry]
```
Parameters:
- `--connection-string`: The database connection string. Supports SSL/TLS and integrated authentication options (see examples below).
- `--username`: (optional) Database username. Overrides any username in the connection string. Use this to avoid credentials in command history.
- `--password`: (optional) Database password. Overrides any password in the connection string. Use this to avoid credentials in command history.
- `--dialect`: (optional) The SQL dialect: `postgres` (default), `mysql`, `sqlserver`, `oracle`, or `sqlite`.
- `--database`: (optional) The database name if not specified in the connection string.
- `--table-name`: (optional) A specific table to convert. If omitted, all tables are converted.
- `--out`: The path to the Avrotize Schema file. If omitted, output goes to stdout.
- `--namespace`: (optional) The Avro namespace for the generated schema.
- `--infer-json`: (optional, default: true) Infer schema for JSON/JSONB columns by sampling data.
- `--infer-xml`: (optional, default: true) Infer schema for XML columns by sampling data.
- `--sample-size`: (optional, default: 100) Number of rows to sample for JSON/XML schema inference.
- `--emit-cloudevents`: (optional) Detect CloudEvents tables and emit CloudEvents declarations.
- `--emit-xregistry`: (optional) Emit an xRegistry manifest instead of a single schema file.
Connection string examples:
```bash
# PostgreSQL with separate credentials (preferred for security)
avrotize sql2a --connection-string "postgresql://host:5432/mydb?sslmode=require" --username myuser --password mypass --out schema.avsc
# PostgreSQL with SSL (credentials in URL)
avrotize sql2a --connection-string "postgresql://user:pass@host:5432/mydb?sslmode=require" --out schema.avsc
# MySQL with SSL
avrotize sql2a --connection-string "mysql://user:pass@host:3306/mydb?ssl=true" --dialect mysql --out schema.avsc
# SQL Server with Windows Authentication (omit user/password)
avrotize sql2a --connection-string "mssql://@host:1433/mydb" --dialect sqlserver --out schema.avsc
# SQL Server with TLS encryption
avrotize sql2a --connection-string "mssql://user:pass@host:1433/mydb?encrypt=true" --dialect sqlserver --out schema.avsc
# SQLite file
avrotize sql2a --connection-string "/path/to/database.db" --dialect sqlite --out schema.avsc
```
Conversion notes:
- The tool connects to a live database and reads the schema from the information schema or system catalogs.
- Type mappings for each dialect:
- **PostgreSQL**: All standard types including `uuid`, `jsonb`, `xml`, arrays, and custom types.
- **MySQL**: Standard types including `json`, `enum`, `set`, and spatial types.
- **SQL Server**: Standard types including `uniqueidentifier`, `xml`, `money`, and `hierarchyid`.
- **Oracle**: Standard types including `number`, `clob`, `blob`, and Oracle-specific types.
- **SQLite**: Dynamic typing mapped based on declared type affinity.
- For JSON/JSONB columns (PostgreSQL, MySQL) and XML columns, the tool samples data to infer the structure. Fields that appear in some but not all records are folded together. If field types conflict across records, the tool emits a union of record types.
- For columns with keys that cannot be valid Avro identifiers (UUIDs, URLs, special characters), the tool generates `map<string, T>` types instead of record types.
- Table and column comments are preserved as Avro `doc` attributes where available.
- Primary key columns are noted in the schema's `unique` attribute.
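The record-vs-map decision for JSON columns can be sketched with the Avro specification's name rule (`[A-Za-z_][A-Za-z0-9_]*`). The helper below is a hypothetical illustration of that decision, not the tool's code:

```python
import re

# Sketch of the decision described above: object keys that are not valid
# Avro identifiers (UUIDs, URLs, special characters) force a
# map<string, T> instead of a record. Hypothetical helper; the regex is
# the Avro specification's name rule.
AVRO_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def record_or_map(keys) -> str:
    return "record" if all(AVRO_NAME.match(k) for k in keys) else "map"

record_or_map(["user_id", "createdAt"])                  # -> "record"
record_or_map(["550e8400-e29b-41d4-a716-446655440000"])  # -> "map" (UUID key)
```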
### Infer Avro schema from JSON files
```bash
avrotize json2a <json_files...> [--out <path>] [--type-name <name>] [--namespace <namespace>] [--sample-size <n>] [--infer-choices] [--choice-depth <n>]
```
Parameters:
- `<json_files...>`: One or more JSON files to analyze. Supports JSON arrays, single objects, and JSONL (JSON Lines) format. Use `@filelist.txt` to read file paths from a response file.
- `--out`: The path to the Avro schema file. If omitted, output goes to stdout.
- `--type-name`: (optional) Name for the root type (default: "Document").
- `--namespace`: (optional) Avro namespace for generated types.
- `--sample-size`: (optional) Maximum number of records to sample (0 = all, default: 0).
- `--infer-choices`: (optional) Detect discriminated unions and emit as Avro unions with discriminator field defaults.
- `--choice-depth`: (optional) Maximum nesting depth for choice inference (1 = root only, 2+ = nested objects, default: 1).
Example:
```bash
# Infer schema from multiple JSON files
avrotize json2a data1.json data2.json --out schema.avsc --type-name Event --namespace com.example
# Infer schema from JSONL file with discriminated union detection
avrotize json2a events.jsonl --out events.avsc --type-name LogEntry --infer-choices
# Use response file for many input files
avrotize json2a @file_list.txt --out schema.avsc --infer-choices --choice-depth 2
```
### Infer JSON Structure schema from JSON files
```bash
avrotize json2s <json_files...> [--out <path>] [--type-name <name>] [--base-id <uri>] [--sample-size <n>] [--infer-choices] [--choice-depth <n>] [--infer-enums]
```
Parameters:
- `<json_files...>`: One or more JSON files to analyze. Use `@filelist.txt` to read file paths from a response file.
- `--out`: The path to the JSON Structure schema file. If omitted, output goes to stdout.
- `--type-name`: (optional) Name for the root type (default: "Document").
- `--base-id`: (optional) Base URI for `$id` generation (default: "https://example.com/").
- `--sample-size`: (optional) Maximum number of records to sample (0 = all, default: 0).
- `--infer-choices`: (optional) Detect discriminated unions and emit as `choice` types with discriminator field defaults.
- `--choice-depth`: (optional) Maximum nesting depth for choice inference (1 = root only, 2+ = nested objects, default: 1).
- `--infer-enums`: (optional) Detect enum types from repeated string values with low cardinality.
The inferrer also automatically detects:
- **Datetime patterns**: ISO 8601 timestamps, dates, and times are typed as `datetime`, `date`, or `time`.
- **Required vs optional fields**: Fields present in all records are marked required; sparse fields are optional.
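The required/optional split described above amounts to a simple presence count across the sampled records; a stdlib sketch (field names are illustrative):

```python
records = [
    {"id": 1, "name": "Ada", "nickname": "ada"},
    {"id": 2, "name": "Grace"},
]

# A field is required only if it appears in every sampled record.
all_fields = set().union(*(r.keys() for r in records))
required = {f for f in all_fields if all(f in r for r in records)}
optional = all_fields - required

print(sorted(required))  # ['id', 'name']
print(sorted(optional))  # ['nickname']
```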
Example:
```bash
# Basic inference
avrotize json2s data.json --out schema.jstruct.json --type-name Person --base-id https://myapi.example.com/schemas/
# Full inference with choices and enums
avrotize json2s events/*.json --out events.jstruct.json --type-name Event --infer-choices --choice-depth 2 --infer-enums
# Process many files via response file
avrotize json2s @file_list.txt --out schema.jstruct.json --infer-choices --infer-enums
```
### Infer Avro schema from XML files
```bash
avrotize xml2a <xml_files...> [--out <path>] [--type-name <name>] [--namespace <namespace>] [--sample-size <n>]
```
Parameters:
- `<xml_files...>`: One or more XML files to analyze. Use `@filelist.txt` to read file paths from a response file.
- `--out`: The path to the Avro schema file. If omitted, output goes to stdout.
- `--type-name`: (optional) Name for the root type (default: "Document").
- `--namespace`: (optional) Avro namespace for generated types.
- `--sample-size`: (optional) Maximum number of documents to sample (0 = all, default: 0).
Example:
```bash
avrotize xml2a config.xml --out config.avsc --type-name Configuration --namespace com.example.config
```
### Infer JSON Structure schema from XML files
```bash
avrotize xml2s <xml_files...> [--out <path>] [--type-name <name>] [--base-id <uri>] [--sample-size <n>]
```
Parameters:
- `<xml_files...>`: One or more XML files to analyze. Use `@filelist.txt` to read file paths from a response file.
- `--out`: The path to the JSON Structure schema file. If omitted, output goes to stdout.
- `--type-name`: (optional) Name for the root type (default: "Document").
- `--base-id`: (optional) Base URI for `$id` generation (default: "https://example.com/").
- `--sample-size`: (optional) Maximum number of documents to sample (0 = all, default: 0).
Conversion notes (applies to all inference commands):
- XML attributes are converted to fields prefixed with `@` (normalized to valid identifiers).
- Text content in mixed-content elements becomes a `#text` field.
- Repeated elements are inferred as arrays.
- Multiple files with different structures are merged into a unified schema.
- Sparse data (fields that appear in some but not all records) is folded into a single type.
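As a rough stdlib illustration of the first three rules (attribute becomes an `@`-prefixed field, mixed-content text becomes `#text`, repeated elements become arrays); this is a sketch, not avrotize's implementation:

```python
import xml.etree.ElementTree as ET
from collections import Counter

doc = ET.fromstring('<book id="42">Intro<tag>a</tag><tag>b</tag></book>')

fields = {}
for name, value in doc.attrib.items():
    fields["@" + name] = "string"          # attributes become @-prefixed fields
if doc.text and doc.text.strip():
    fields["#text"] = "string"             # mixed content becomes a #text field
counts = Counter(child.tag for child in doc)
for tag, n in counts.items():
    fields[tag] = "array" if n > 1 else "record"  # repeats become arrays

print(fields)  # {'@id': 'string', '#text': 'string', 'tag': 'array'}
```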
### Convert Avrotize Schema to Kusto table declaration
```bash
avrotize a2k <path_to_avro_schema_file> [--out <path_to_kusto_kql_file>] [--record-type <record_type>] [--emit-cloudevents-columns] [--emit-cloudevents-dispatch]
```
Parameters:
- `<path_to_avro_schema_file>`: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- ` | text/markdown | null | Clemens Vasters <clemensv@microsoft.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"jsonschema>=4.23.0",
"lark>=1.1.9",
"pyarrow>=22.0.0",
"asn1tools>=0.167.0",
"jsonpointer>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsoncomparison>=1.1.0",
"requests>=2.32.3",
"azure-kusto-data>=5.0.5",
"azure-identity>=1.17.1",
"datapackage>=1.15.4",
"jinja2>=3.1.4",
"pyiceberg>=0.10.0",
"pandas>=2.2.2",
"docker>=7.1.0",
"cddlparser>=0.5.0",
"json-structure>=0.1.8",
"psycopg2-binary>=2.9.9; extra == \"all-sql\"",
"pymysql>=1.1.1; extra == \"all-sql\"",
"pyodbc>=5.1.0; extra == \"all-sql\"",
"oracledb>=2.3.0; extra == \"all-sql\"",
"pytest>=8.3.2; extra == \"dev\"",
"fastavro>=1.9.5; extra == \"dev\"",
"xmlschema>=3.3.2; extra == \"dev\"",
"xmlunittest>=1.0.1; extra == \"dev\"",
"pylint>=3.2.6; extra == \"dev\"",
"dataclasses_json>=0.6.7; extra == \"dev\"",
"dataclasses>=0.8; extra == \"dev\"",
"pydantic>=2.8.2; extra == \"dev\"",
"avro>=1.12.0; extra == \"dev\"",
"testcontainers>=4.7.2; extra == \"dev\"",
"pymysql>=1.1.1; extra == \"dev\"",
"psycopg2-binary>=2.9.9; extra == \"dev\"",
"pyodbc>=5.1.0; extra == \"dev\"",
"pymongo>=4.8.0; extra == \"dev\"",
"oracledb>=2.3.0; extra == \"dev\"",
"cassandra-driver>=3.29.1; extra == \"dev\"",
"sqlalchemy>=2.0.32; extra == \"dev\"",
"graphql-core>=3.2.0; extra == \"dev\"",
"mcp>=1.26.0; extra == \"mcp\"",
"pymysql>=1.1.1; extra == \"mysql\"",
"oracledb>=2.3.0; extra == \"oracle\"",
"psycopg2-binary>=2.9.9; extra == \"postgres\"",
"pyodbc>=5.1.0; extra == \"sqlserver\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:37:05.307963 | avrotize-3.4.3.tar.gz | 487,263 | 1a/36/1486041589057c9e5e753921f1e137d9c37d392aaf0f77eff4e1465255f0/avrotize-3.4.3.tar.gz | source | sdist | null | false | c9736d03c44ea249b0df683d2ce4de27 | 3dff5193c199fe9cd52cf08c081e50a8d9a5b2d4df99182434980ade7ae0d3f3 | 1a361486041589057c9e5e753921f1e137d9c37d392aaf0f77eff4e1465255f0 | null | [
"LICENSE"
] | 255 |
2.4 | ojph | 0.6.2 | OpenJPH Bindings for Python and Numpy | OpenJPH bindings
| null | Mark Harfouche | mark@ramonaoptics.com | null | null | BSD-3-Clause | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Natural Language :: English",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://github.com/ramonaoptics/ojph | null | >=3.12 | [] | [] | [] | [
"numpy>=1.24.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:36:00.871258 | ojph-0.6.2.tar.gz | 18,416 | 25/53/2e24e1fb5fed96952845c25730f2b3f54e32713d0d3280dfec585e2f5a6d/ojph-0.6.2.tar.gz | source | sdist | null | false | 4bda685b671a0ceb116461f2b56403c5 | a17073c62ea6821a60076dc7ad8bfc96a05c53d40a6612f3748dea078f425113 | 25532e24e1fb5fed96952845c25730f2b3f54e32713d0d3280dfec585e2f5a6d | null | [
"LICENSE.txt"
] | 176 |
2.4 | botiksdk | 0.1.0 | Vondic Botik SDK | # Vondic Botik SDK
A Python SDK for building bots on the Vondic platform.
## Installation
```bash
pip install botiksdk
```
## Usage
```python
from botiksdk import Bot, Dispatcher, Message
bot = Bot(token="YOUR_BOT_TOKEN")
dp = Dispatcher(bot)
@dp.message_handler()
async def echo(message: Message):
await message.answer(message.text)
if __name__ == "__main__":
dp.start_polling()
```
| text/markdown | null | Vondic <support@vondic.com> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"requests>=2.32.5"
] | [] | [] | [] | [
"Homepage, https://github.com/vondic/botiksdk",
"Bug Tracker, https://github.com/vondic/botiksdk/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T18:35:50.917331 | botiksdk-0.1.0.tar.gz | 6,425 | 35/41/9ffc870ddb9502e09a2d32a5c296975f4ffa999a180a0239f3c781b4d384/botiksdk-0.1.0.tar.gz | source | sdist | null | false | 31bdcfc6c639dad238c8dd2948ddd6fe | 09fcf311d93e59a5b32e25f819949bde6d9e8220550225f547973d2652b95bad | 35419ffc870ddb9502e09a2d32a5c296975f4ffa999a180a0239f3c781b4d384 | null | [] | 210 |
2.4 | wafer-ai | 0.0.31 | Unified Wafer CLI, SDK, and LSP package | # wafer-ai
Unified Wafer package containing:
- `wafer.cli` – CLI commands and templates
- `wafer.core` – SDK, tools, environments, and rollouts
- `wafer.lsp` – language server implementation
**Install from PyPI** (creates `wafer` executable):
```bash
uv tool install wafer-ai
wafer --help
```
If you get version 0.0.1 or a "No executables" error, force the latest release: `uv tool install "wafer-ai>=0.0.20"`
**Install locally** from the monorepo:
```bash
uv pip install -e packages/wafer-ai
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"anthropic>=0.40.0",
"anyio>=4.0.0",
"astunparse==1.6.2",
"asyncssh>=2.14.0",
"colorlover>=0.3.0",
"coverage>=7.0.0",
"dacite>=1.8.0",
"dash>=3.0.0",
"dash-bootstrap-components>=1.0.0",
"dash-svg>=0.0.11",
"dnspython>=2.8.0",
"httpx>=0.25.0",
"kaleido==0.2.1",
"llvmlite<0.46.0,>=0.43.0",
"lsprotocol>=2024.0.0",
"markdownify>=0.11.0",
"modal>=0.64.0",
"numba>=0.58.0",
"numpy>=1.17.5",
"openai>=1.0.0",
"orjson>=3.9.0",
"pathspec>=0.12.0",
"pandas~=3.0.0",
"perfetto>=0.16.0",
"plotext>=5.3.2",
"plotille>=5.0.0",
"posthog>=3.0.0",
"pygls>=1.0.0",
"pymongo>=4.16.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"setuptools",
"supabase>=2.27.1",
"tabulate>=0.8.0",
"textual>=7.0.0",
"tqdm>=4.0.0",
"trio>=0.24.0",
"trio-asyncio>=0.15.0",
"typer>=0.12.0",
"diff-cover>=10.2.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest-timeout>=2.3.0; extra == \"dev\"",
"pytest-trio>=0.8.0; extra == \"dev\"",
"pytest-xdist>=3.5.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:35:48.036154 | wafer_ai-0.0.31.tar.gz | 1,177,159 | a6/6e/168197ff3f4dbad9332ee28274fa9d896ebadc0ecb0b5850e826ac044a8d/wafer_ai-0.0.31.tar.gz | source | sdist | null | false | fd5474d7c848c42206376fce22f94644 | 0b9f2279d29d7cfcb89a724c2123b8b90a6022628e6753fbdb6b615d9974a383 | a66e168197ff3f4dbad9332ee28274fa9d896ebadc0ecb0b5850e826ac044a8d | null | [] | 198 |
2.4 | moltbook-verify | 1.0.2 | Verification challenge solver for Moltbook.com — degarbles lobster math and auto-verifies agent posts | # moltbook-verify
Verification challenge solver for [Moltbook.com](https://www.moltbook.com) — the social platform for AI agents.
## Why This Exists
Moltbook uses garbled "lobster math" challenges to verify posts and comments. The challenges look like this:
```
A] lO-bS tErS^ cLaW ]fOrCe| iN~ wAtEr, tHe^ lObStEr muLtIpLiEs dOmInAnCe um,
tHe fIrSt cLaW iS tWeNtY tHrEe NeWtOnS aNd ThE sEcOnD cLaW iS fIvE nEwToNs
```
**This doesn't stop humans.** Any person can read through the garbling and solve "23 times 5 = 115" in seconds.
**It stops smaller LLMs and open-source bots.** A 3B parameter model running locally — the kind most OpenClaw agents use — chokes on this. The random punctuation, case alternation, repeated characters, and split number words break tokenization. The model sees noise where a human sees "twenty three." Simple regex fails because the numbers are spelled out as garbled words, not digits. Even capable 7B models get tripped up when "multiplies" arrives as `muLtIpLiEs` split across fragments.
The result: agents running smaller open-source models get their posts stuck in "pending" limbo, or worse, submit wrong answers and get suspended. After 10 failed verifications, Moltbook suspends your agent for days.
This library is the degarbler. It handles the text cleaning, number extraction, operation detection, and answer formatting so any agent — regardless of what LLM it runs — can pass verification.
## Install
```bash
pip install moltbook-verify
```
## Quick Start
```python
from moltbook_verify import solve_challenge, verify_content
# Solve a raw challenge string
answer = solve_challenge(
"A] Lo^bSt-Er ClAw| F oRcE Is ThIrTy tW o NeW ToNs Um AnD InCrEaSeS By TwElVe"
)
print(answer) # "44.00"
# Full verification flow after posting
import requests
API = "https://www.moltbook.com/api/v1"
API_KEY = "moltbook_sk_your_key_here"
# Post a comment
resp = requests.post(
f"{API}/posts/{post_id}/comments",
headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
json={"content": "Great post!"},
)
data = resp.json()
# Auto-verify if challenge returned
verification = data.get("comment", {}).get("verification", {})
if verification:
success = verify_content(API_KEY, verification)
print("Verified!" if success else "Failed — do NOT retry")
```
## What It Handles
| Challenge Type | Raw Input | What the Solver Sees |
|---|---|---|
| Garbled text | `ThIrTy tW o` | `thirty two` → 32 |
| Split words | `t w e n t y` | `twenty` → 20 |
| Repeated chars | `thhhhreeee` | `three` → 3 |
| Random punctuation | `Lo^bSt-Er` | `lobster` |
| Explicit operators | `32 + 12` | addition → 44.00 |
| Word operators | `muLtIpLiEs By` | multiplication |
| Rate x time | `23 meters per second for five seconds` | 23 * 5 → 115.00 |
| Compound numbers | `twenty three` | 23 |
| Subtraction keywords | `loses fifteen newtons` | subtraction |
## The Degarbling Pipeline
1. **Detect explicit operators** — scan raw text for `+`, `*`, `/`, `-` between digits
2. **Strip punctuation** — remove all non-alphanumeric characters
3. **Collapse repeats** — `thhhhreeee` → `three`
4. **Word corrections** — dictionary of 40+ common garble patterns (`thre` → `three`, `fve` → `five`)
5. **Rejoin fragments** — reassemble number words split across spaces (`thi rty` → `thirty`)
6. **Extract numbers** — both digit literals and spelled-out number words, including compounds (`twenty three` → 23)
7. **Detect operation** — keyword matching for add/subtract/multiply/divide, rate*time patterns
8. **Format answer** — always `"X.XX"` with two decimal places
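A stripped-down sketch of steps 2, 3, and 6 (the real pipeline in this package adds the correction dictionary, fragment rejoining, and operation detection; the word table here is deliberately tiny):

```python
import re

WORDS = {"twenty": 20, "thirty": 30, "two": 2, "three": 3, "five": 5, "twelve": 12}

def degarble(text):
    text = re.sub(r"[^a-z0-9\s]", "", text.lower())   # strip punctuation
    text = re.sub(r"(.)\1{2,}", r"\1", text)          # collapse 3+ repeated chars
    return text

def extract_numbers(cleaned):
    nums, pending = [], 0
    for tok in cleaned.split():
        if tok in WORDS:
            v = WORDS[tok]
            if pending and v < 10:
                nums[-1] = pending + v                # compound: "twenty three" -> 23
                pending = 0
            else:
                nums.append(v)
                pending = v if v >= 20 else 0
        elif tok.isdigit():
            nums.append(int(tok))
            pending = 0
        else:
            pending = 0
    return nums

cleaned = degarble("tWeNtY^ tHrEe| NeWtOnS aNd fIvE")
print(extract_numbers(cleaned))  # [23, 5]
```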
## Important: One-Shot Only
**Never retry a failed verification.** Moltbook tracks failed attempts per account. After 10 failures, your agent gets suspended for days. We've seen week-long suspensions from this.
`verify_content()` makes exactly one attempt. If it fails, it returns `False` and stops. This is by design.
If `solve_challenge()` returns `None` (can't parse the challenge), it's better to leave the post in pending than to guess and burn a strike.
## API Reference
### `solve_challenge(challenge: str) -> str | None`
Solve a garbled challenge. Returns the answer as an `"X.XX"` string, or `None` if unsolvable.
### `verify_content(api_key, verification, api_url=...) -> bool`
Submit a solved challenge to Moltbook. Returns `True` if verified, `False` otherwise. One-shot — never retries.
### `degarble(challenge: str) -> tuple[str, str | None]`
Clean garbled text. Returns `(cleaned_text, explicit_operator)`. The operator is one of `'add'`, `'subtract'`, `'multiply'`, `'divide'`, or `None`.
### `extract_numbers(challenge, cleaned) -> list[float]`
Extract all numbers from both raw text (digits) and cleaned text (number words).
## Integration with Grazer SDK
If you use [grazer-skill](https://pypi.org/project/grazer-skill/) for multi-platform posting, `moltbook-verify` handles the verification step that Grazer's Moltbook adapter needs:
```python
from grazer import post_to_moltbook
from moltbook_verify import verify_content
result = post_to_moltbook(content, submolt="general")
if result.get("verification"):
verify_content(api_key, result["verification"])
```
## Success Rate
In production testing across 120+ comments with 5 agents, the solver achieves approximately **70% verification success**. The remaining 30% are edge cases where:
- The garbling destroys number words beyond recognition
- Unusual operation keywords aren't in the detection list
- The challenge uses patterns not yet covered (e.g., division expressed as "shared among")
We're continuously adding corrections as new garble patterns emerge. PRs welcome.
## License
MIT — [Elyan Labs](https://elyanlabs.ai) 2026
| text/markdown | null | Elyan Labs <scott@elyanlabs.ai> | null | null | MIT | moltbook, openclaw, agent, verification, ai-agent | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.20"
] | [] | [] | [] | [
"Homepage, https://github.com/sophiaeagent-beep/moltbook-verify",
"Documentation, https://github.com/sophiaeagent-beep/moltbook-verify#readme",
"Issues, https://github.com/sophiaeagent-beep/moltbook-verify/issues"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-20T18:35:38.797061 | moltbook_verify-1.0.2.tar.gz | 8,630 | 3a/5b/061efc65b19d56a1217c66d6436a86f7676ba0fbbc513c7b88154c34d047/moltbook_verify-1.0.2.tar.gz | source | sdist | null | false | 1104d7fe7e265a6c53b4a1ec2be57906 | 40e95ca7bb3b960108f79529a239195bd9663a4d0b92bb01fee3fb9a022b4ec3 | 3a5b061efc65b19d56a1217c66d6436a86f7676ba0fbbc513c7b88154c34d047 | null | [
"LICENSE"
] | 187 |
2.4 | voxd | 0.1.2 | Voice dictation daemon for Linux with Sarvam AI STT | # voxd
Voice dictation daemon for Linux. Press keybind, talk, press again. Text goes to clipboard.
## Install
```bash
pip install voxd
```
Dependencies:
```bash
# X11 (dwm, i3, bspwm, etc)
sudo pacman -S ffmpeg xclip
# Wayland (hyprland, sway, etc)
sudo pacman -S ffmpeg wl-clipboard
```
## Setup
Set your Sarvam AI API key:
```bash
voxd config set api_key YOUR_KEY
```
Optional - set language:
```bash
voxd config set language hi-IN # Hindi
voxd config set language en-IN # English (default)
```
## Start daemon
Add to your startup:
**dwm/i3/bspwm** - `~/.xinitrc`:
```bash
voxd-daemon &
```
**hyprland** - `~/.config/hypr/hyprland.conf`:
```ini
exec-once = voxd-daemon
```
## Keybind
**dwm** - `config.h`:
```c
{ MODKEY, XK_semicolon, spawn, SHCMD("voxd toggle") },
```
**hyprland** - `hyprland.conf`:
```ini
bind = SUPER, semicolon, exec, voxd toggle
```
## Usage
Press keybind → talk → press keybind → paste (Ctrl+V)
Terminal:
```bash
voxd toggle # Start/stop recording
voxd status # Check if recording
voxd quit # Kill daemon
```
## Config
```bash
voxd config list # Show all settings
voxd config set key value # Change setting
voxd config get key # Get value
```
Settings:
- `api_key` - Sarvam AI key (required)
- `language` - Language code (default: en-IN)
- `model` - STT model (default: saaras:v3)
Config stored at `~/.config/voxd/config.json`
## Advanced
**WM-specific commands** (if auto-detect fails):
```bash
voxd-dwm toggle # Force X11 mode
voxd-hypr toggle # Force Wayland mode
```
## Troubleshooting
**Daemon not running:**
```bash
ps aux | grep voxd
cat /run/user/$UID/voxd/daemon.log
```
**No clipboard:**
```bash
# Install clipboard tool
sudo pacman -S xclip # X11
sudo pacman -S wl-clipboard # Wayland
```
**Wrong language:**
```bash
voxd config set language hi-IN
voxd quit && voxd-daemon &
```
| text/markdown | voxd contributors | null | null | null | MIT | voice, dictation, stt, speech-to-text, linux, daemon | [
"Development Status :: 4 - Beta",
"Environment :: X11 Applications",
"Intended Audience :: End Users/Desktop",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Multimedia :: Sound/Audio :: Speech",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"sarvamai>=0.1.15"
] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/voxd",
"Repository, https://github.com/yourusername/voxd",
"Issues, https://github.com/yourusername/voxd/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T18:34:33.073324 | voxd-0.1.2.tar.gz | 11,361 | 2a/55/b45c11f60793b4c4926920f850fc28cd9df2e26b443cda798d57c7fb8223/voxd-0.1.2.tar.gz | source | sdist | null | false | 091018e4210c9325603095223abbbcc4 | 3674f349f579892e601ae6eaf4ce7555c63763116042449928b6cc8c1e79ebff | 2a55b45c11f60793b4c4926920f850fc28cd9df2e26b443cda798d57c7fb8223 | null | [
"LICENSE"
] | 207 |
2.4 | pyspark-udtf | 0.1.3 | A collection of PySpark User-Defined Table Functions (UDTFs) | # PySpark UDTF Examples
[](https://pypi.org/project/pyspark-udtf/)
[](https://github.com/astral-sh/uv)
[](https://github.com/astral-sh/ruff)
A collection of Python User-Defined Table Functions (UDTFs) for PySpark, demonstrating how to leverage UDTFs for complex data processing tasks.
## Installation
You can quickly install the package using pip:
```bash
pip install pyspark-udtf
```
## Usage
### Fuzzy Matching (Quick Start)
This UDTF demonstrates how to use Python's standard library `difflib` to perform fuzzy string matching in PySpark. It takes a target string and a list of candidates, returning the best match and a similarity score.
```python
from pyspark.sql import SparkSession
from pyspark_udtf.udtfs import FuzzyMatch
spark = SparkSession.builder.getOrCreate()
# Register the UDTF
spark.udtf.register("fuzzy_match", FuzzyMatch)
# Create a sample dataframe with typos
data = [
("aple", ["apple", "banana", "orange"]),
("bananna", ["apple", "banana", "orange"]),
("orange", ["apple", "banana", "orange"]),
("grape", ["apple", "banana", "orange"])
]
df = spark.createDataFrame(data, ["typo", "candidates"])
# Use the UDTF in SQL
df.createOrReplaceTempView("typos")
spark.sql("""
SELECT *
FROM fuzzy_match(TABLE(SELECT typo, candidates FROM typos))
""").show()
```
### Batch Inference Image Captioning
This UDTF demonstrates how to perform efficient batch inference against a model serving endpoint. It buffers rows and sends them in batches to reduce network overhead.
```python
from pyspark.sql import SparkSession
from pyspark_udtf.udtfs import BatchInferenceImageCaption
spark = SparkSession.builder.getOrCreate()
# Register the UDTF
spark.udtf.register("batch_image_caption", BatchInferenceImageCaption)
# View UDTF definition and parameters
help(BatchInferenceImageCaption.func)
# Usage in SQL
# Assuming you have a table 'images' with a column 'url'
spark.sql("""
SELECT *
FROM batch_image_caption(
TABLE(SELECT url FROM images),
10, -- batch_size
'your-api-token',
'https://your-endpoint.com/score'
)
""").show()
```
## Requirements
- Python >= 3.10
- PySpark >= 4.0.0
- requests
- pandas
- pyarrow
## Documentation
For more detailed documentation, including design docs and guides for Unity Catalog integration, see the [docs/](docs/) directory.
- [Unity Catalog Guide](docs/unity_catalog_udtf.md)
## Development
We recommend using [uv](https://github.com/astral-sh/uv) for extremely fast package management.
```bash
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install the package
uv add pyspark-udtf
```
### Running Tests
To run the test suite:
```bash
# Run all tests
uv run pytest
# Run specific test file
uv run pytest tests/test_image_caption.py
```
### Linting
This project uses [Ruff](https://docs.astral.sh/ruff/) for linting and formatting. Install dev dependencies, then run:
```bash
uv sync --extra dev # install ruff
uv run ruff check . # lint
uv run ruff format . # format
```
### Adding Dependencies
To add a new runtime dependency:
```bash
uv add package_name
```
To add a development dependency:
```bash
uv add --dev package_name
```
### Bumping Version
You can bump the version automatically using `uv` (requires uv >= 0.7.0):
```bash
# Bump patch version (0.1.0 -> 0.1.1)
uv version --bump patch
# Bump minor version (0.1.0 -> 0.2.0)
uv version --bump minor
```
Alternatively, you can manually update `pyproject.toml`:
1. Open `pyproject.toml`.
2. Update the `version` field under `[project]`:
```toml
[project]
version = "0.1.1" # Update this value
```
### Publishing to PyPI
To build and publish the package to PyPI:
1. **Build the package:**
```bash
uv build
```
This will create distributions in the `dist/` directory.
2. **Publish to PyPI:**
```bash
uv publish
```
Note: You will need to configure your PyPI credentials (API token) either via environment variables (`UV_PUBLISH_TOKEN`) or following `uv`'s authentication documentation.
## Cursor Skills
This repository includes Cursor skills to help with common development tasks. Skills are available in `.cursor/skills/`.
### create-udtf
Use this skill when you want to **create, write, or generate a new PySpark UDTF**. It guides you through:
1. **Analyze requirements** – Determine inputs, outputs, and external dependencies
2. **Design** – Create a design doc in `docs/design/<udtf_name>.md` (required for all UDTFs)
3. **Implementation** – Implement the UDTF in `src/pyspark_udtf/udtfs/<udtf_name>.py`
4. **Registration** – Add the UDTF to `src/pyspark_udtf/udtfs/__init__.py`
5. **Testing** – Add tests in `tests/test_<udtf_name>.py`
**When to use:** Ask Cursor to create a new UDTF, or say "use the create-udtf skill" when describing the UDTF you want to build.
**Reference implementations:**
- Simple UDTF: `src/pyspark_udtf/udtfs/fuzzy_match.py`
- Complex UDTF (buffering, external API): `src/pyspark_udtf/udtfs/meta_capi.py`
| text/markdown | null | Allison Wang <allisowang@apache.org> | null | null | Apache-2.0 | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas",
"pyarrow",
"pyspark>=4.0.0",
"pyyaml",
"requests",
"pytest; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/allisonwang-db/pyspark-udtf",
"Issues, https://github.com/allisonwang-db/pyspark-udtf/issues"
] | uv/0.9.28 {"installer":{"name":"uv","version":"0.9.28","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T18:33:08.612449 | pyspark_udtf-0.1.3.tar.gz | 72,910 | 6e/aa/17837722f8dce8c68af07dc9797b48930f818230d58aea045c1734561ff2/pyspark_udtf-0.1.3.tar.gz | source | sdist | null | false | b3a76b2b0f60c561ea3a5b63cb1dbb3a | 7b8b884067829a9884a1b07e9133c9679deb1eef09b6e13d5e5d37157a5d7d16 | 6eaa17837722f8dce8c68af07dc9797b48930f818230d58aea045c1734561ff2 | null | [] | 200 |
2.4 | vd-dlt | 0.1.3 | Core DLT ingestion framework for VibeData pipelines | # vd-dlt
Core DLT ingestion framework for VibeData pipelines. Provides config resolution, credential management (Azure Key Vault), pipeline execution, and observability.
## Installation
```bash
# Core only
pip install vd-dlt
# With pipeline dependencies (dlt, pyarrow)
pip install vd-dlt[pipeline]
# With Notion connector
pip install vd-dlt[notion]
# With Notion schema (defaults, docs)
pip install vd-dlt[notion-schema]
# Everything for Notion
pip install vd-dlt[pipeline,notion,notion-schema]
```
## Quick Start
```python
from vd_dlt import PipelineRunner
runner = PipelineRunner(
vault_url="https://my-vault.vault.azure.net/",
)
result = runner.run("my_source_name")
print(f"Loaded {result.total_rows_loaded} rows")
```
## Architecture
The framework uses a 4-level config hierarchy (most specific wins):
1. **Connector Defaults** - from connector schema package
2. **Source Config** - from source YAML (including `default_sync`)
3. **Group Config** - optional grouping within source
4. **Resource Config** - per-table overrides
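The "most specific wins" merge can be sketched as a left-to-right dict update (the layer contents and key names are illustrative, not vd-dlt's API):

```python
connector_defaults = {"page_size": 100, "write_mode": "append"}
source_config      = {"write_mode": "merge"}
group_config       = {}
resource_config    = {"page_size": 500}

effective = {}
for layer in (connector_defaults, source_config, group_config, resource_config):
    effective.update(layer)   # later (more specific) layers override earlier ones

print(effective)  # {'page_size': 500, 'write_mode': 'merge'}
```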
| text/markdown | null | VibeData <info@vibedata.dev> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Database",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pyyaml>=6.0",
"vd-dlt-notion; extra == \"notion\"",
"vd-dlt-notion-schema; extra == \"notion-schema\"",
"dlt[deltalake,filesystem]; extra == \"pipeline\"",
"pyarrow>=17.0.0; extra == \"pipeline\"",
"vd-dlt-salesforce; extra == \"salesforce\"",
"vd-dlt-salesforce-schema; extra == \"salesforce-schema\""
] | [] | [] | [] | [
"Homepage, https://github.com/accelerate-data/vd-dlt-connectors",
"Repository, https://github.com/accelerate-data/vd-dlt-connectors"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T18:33:03.373453 | vd_dlt-0.1.3.tar.gz | 18,790 | 1b/6b/04da4837d7c0a7161482d5b9a9dad87328ac076799b58e34d6638b180a87/vd_dlt-0.1.3.tar.gz | source | sdist | null | false | 685f2a8053e91a764471efcfee7bfcbe | 4143ba04c76d93ed99da0132a5f124e80211ee4cc59658d56bfb231dfb7c674a | 1b6b04da4837d7c0a7161482d5b9a9dad87328ac076799b58e34d6638b180a87 | MIT | [] | 217 |
2.4 | terminusgps-notifier | 3.5.0 | Terminus GPS Notification Dispatch Microservice | # TerminusGPS Notifier
Accepts webhooks from Wialon and sends voice calls/text messages based on path parameters.
| text/markdown | null | Blake Nall <blake@terminusgps.com> | null | null | null | null | [
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aioboto3>=15.5.0",
"django>=6.0.2",
"python-terminusgps>=51.0.0",
"terminusgps-payments>=8.3.0",
"types-aioboto3[pinpoint-sms-voice-v2]>=15.5.0"
] | [] | [] | [] | [
"Documentation, https://docs.terminusgps.com",
"Repository, https://github.com/terminusgps/terminusgps-notifier"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T18:32:47.740861 | terminusgps_notifier-3.5.0.tar.gz | 117,136 | e9/51/0160d8cf7f6fe6d2ae768acf0ccdf6f2cf7054bd9e2d59767b40f2f42f45/terminusgps_notifier-3.5.0.tar.gz | source | sdist | null | false | f6fb185d47076a050a649ade02b79407 | 5e99598e4810ad41441b3b07af28da5317adee6a8d73c225d568eda547f9fc63 | e9510160d8cf7f6fe6d2ae768acf0ccdf6f2cf7054bd9e2d59767b40f2f42f45 | null | [
"COPYING"
] | 198 |
2.4 | env-doctor | 0.2.4 | A CLI tool to verify and fix AI/ML environment compatibility (Driver <-> CUDA <-> Wheels) with platform-specific installation guides. | <p align="center">
<img src="https://raw.githubusercontent.com/mitulgarg/env-doctor/main/docs/assets/logo.svg" alt="Env-Doctor Logo" width="80" height="80">
</p>
<h1 align="center">Env-Doctor</h1>
<p align="center">
<strong>The missing link between your GPU and Python AI libraries</strong>
</p>
<p align="center">
<a href="https://mitulgarg.github.io/env-doctor/">
<img src="https://img.shields.io/badge/docs-github.io-blueviolet?style=flat-square" alt="Documentation">
</a>
<a href="https://pypi.org/project/env-doctor/">
<img src="https://img.shields.io/pypi/v/env-doctor?style=flat-square&color=blue&label=PyPI" alt="PyPI">
</a>
<a href="https://pypi.org/project/env-doctor/">
<img src="https://img.shields.io/pypi/dm/env-doctor?style=flat-square&color=success&label=Downloads" alt="Downloads">
</a>
<img src="https://img.shields.io/badge/python-3.7+-blue?style=flat-square" alt="Python">
<a href="https://github.com/mitulgarg/env-doctor/blob/main/LICENSE">
<img src="https://img.shields.io/github/license/mitulgarg/env-doctor?style=flat-square&color=green" alt="License">
</a>
<a href="https://github.com/mitulgarg/env-doctor/stargazers">
<img src="https://img.shields.io/github/stars/mitulgarg/env-doctor?style=flat-square&color=yellow" alt="GitHub Stars">
</a>
</p>
---
> **"Why does my PyTorch crash with CUDA errors when I just installed it?"**
>
> Because your driver supports CUDA 11.8, but `pip install torch` gave you CUDA 12.4 wheels.
**Env-Doctor diagnoses and fixes the #1 frustration in GPU computing:** mismatched CUDA versions between your NVIDIA driver, system toolkit, cuDNN, and Python libraries.
It takes **5 seconds** to find out whether your environment is broken, and exactly how to fix it.
## Doctor "Check" (Diagnosis)

## Features
| Feature | What It Does |
|---------|--------------|
| **One-Command Diagnosis** | Check compatibility: GPU Driver → CUDA Toolkit → cuDNN → PyTorch/TensorFlow/JAX |
| **Compute Capability Check** | Detect GPU architecture mismatches — catches why `torch.cuda.is_available()` returns `False` on new GPUs (e.g. Blackwell) even when driver and CUDA are healthy |
| **Python Version Compatibility** | Detect Python version conflicts with AI libraries and dependency cascade impacts |
| **CUDA Installation Guide** | Get platform-specific, copy-paste CUDA installation commands for your system |
| **Safe Install Commands** | Get the exact `pip install` command that works with YOUR driver |
| **Extension Library Support** | Install compilation packages (flash-attn, SageAttention, auto-gptq, apex, xformers) with CUDA version matching |
| **AI Model Compatibility** | Check if LLMs, Diffusion, or Audio models fit on your GPU before downloading |
| **WSL2 GPU Support** | Validate GPU forwarding and detect driver conflicts inside the WSL2 environment for Windows users |
| **Deep CUDA Analysis** | Find multiple installations, PATH issues, environment misconfigurations |
| **Container Validation** | Catch GPU config errors in Dockerfiles before you build |
| **MCP Server** | Expose diagnostics to AI assistants (Claude Desktop, Zed) via Model Context Protocol |
| **CI/CD Ready** | JSON output and proper exit codes for automation |
## Installation
```bash
pip install env-doctor
```
## Usage
### Diagnose Your Environment
```bash
env-doctor check
```
**Example output:**
```
🩺 ENV-DOCTOR DIAGNOSIS
============================================================
🖥️ Environment: Native Linux
🎮 GPU Driver
✅ NVIDIA Driver: 535.146.02
└─ Max CUDA: 12.2
🔧 CUDA Toolkit
✅ System CUDA: 12.1.1
📦 Python Libraries
✅ torch 2.1.0+cu121
✅ All checks passed!
```
**On new-generation GPUs** (e.g. RTX 5070 / Blackwell), env-doctor also catches architecture mismatches that make `torch.cuda.is_available()` silently return `False`:
```
🎯 COMPUTE CAPABILITY CHECK
GPU: NVIDIA GeForce RTX 5070 (Compute 12.0, Blackwell, sm_120)
PyTorch compiled for: sm_50, sm_60, sm_70, sm_80, sm_90, compute_90
❌ ARCHITECTURE MISMATCH: Your GPU needs sm_120 but PyTorch 2.5.1 doesn't include it.
This is why torch.cuda.is_available() returns False even though
your driver and CUDA toolkit are working correctly.
FIX: Install PyTorch nightly with sm_120 support:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126
```
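The mismatch check above boils down to a membership test between the GPU's `sm_XY` tag and the architectures a wheel was compiled for. A minimal sketch (the helper name is illustrative, and PTX forward-compatibility via `compute_XY` entries is deliberately ignored here; PyTorch exposes the real inputs via `torch.cuda.get_device_capability()` and `torch.cuda.get_arch_list()`):

```python
def arch_supported(capability, arch_list):
    """True if a wheel built for `arch_list` ships a binary for this GPU.

    `capability` is the (major, minor) compute capability, e.g. (12, 0)
    for Blackwell; `arch_list` holds the `sm_XY` tags baked into the wheel.
    """
    major, minor = capability
    return f"sm_{major}{minor}" in arch_list

# PyTorch 2.5.1-style arch list vs. an RTX 5070 (sm_120):
wheel_archs = ["sm_50", "sm_60", "sm_70", "sm_80", "sm_90", "compute_90"]
print(arch_supported((12, 0), wheel_archs))  # False -> is_available() will fail
print(arch_supported((8, 0), wheel_archs))   # True  -> e.g. an A100 (sm_80)
```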
### Check Python Version Compatibility
```bash
env-doctor python-compat
```
```
🐍 PYTHON VERSION COMPATIBILITY CHECK
============================================================
Python Version: 3.13 (3.13.0)
Libraries Checked: 2
❌ 2 compatibility issue(s) found:
tensorflow:
tensorflow supports Python <=3.12, but you have Python 3.13
Note: TensorFlow 2.15+ requires Python 3.9-3.12. Python 3.13 not yet supported.
torch:
torch supports Python <=3.12, but you have Python 3.13
Note: PyTorch 2.x supports Python 3.9-3.12. Python 3.13 support experimental.
⚠️ Dependency Cascades:
tensorflow [high]: TensorFlow's Python ceiling propagates to keras and tensorboard
Affected: keras, tensorboard, tensorflow-estimator
torch [high]: PyTorch's Python version constraint affects all torch ecosystem packages
Affected: torchvision, torchaudio, triton
💡 Consider using Python 3.12 or lower for full compatibility
💡 Cascade: tensorflow constraint also affects: keras, tensorboard, tensorflow-estimator
💡 Cascade: torch constraint also affects: torchvision, torchaudio, triton
============================================================
```
### Get Safe Install Command
```bash
env-doctor install torch
```
```
⬇️ Run this command to install the SAFE version:
---------------------------------------------------
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
---------------------------------------------------
```
### Get CUDA Installation Instructions
```bash
env-doctor cuda-install
```
```
============================================================
CUDA TOOLKIT INSTALLATION GUIDE
============================================================
Detected Platform:
Linux (ubuntu 22.04, x86_64)
Driver: 535.146.02 (supports up to CUDA 12.2)
Recommended CUDA Toolkit: 12.1
============================================================
Ubuntu 22.04 (x86_64) - Network Install
============================================================
Installation Steps:
------------------------------------------------------------
1. wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
2. sudo dpkg -i cuda-keyring_1.1-1_all.deb
3. sudo apt-get update
4. sudo apt-get -y install cuda-toolkit-12-1
Post-Installation Setup:
------------------------------------------------------------
export PATH=/usr/local/cuda-12.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
TIP: Add the above exports to ~/.bashrc or ~/.zshrc
Verify Installation:
------------------------------------------------------------
nvcc --version
Official Download Page:
https://developer.nvidia.com/cuda-12-1-0-download-archive
```
**Supported Platforms:**
- Ubuntu 20.04, 22.04, 24.04
- Debian 11, 12
- RHEL 8, 9 / Rocky Linux / AlmaLinux
- Fedora 39+
- WSL2 (Ubuntu)
- Windows 10/11
- Conda (all platforms)
### Install Compilation Packages (Extension Libraries)
For extension libraries like **flash-attn**, **SageAttention**, **auto-gptq**, **apex**, and **xformers** that require compilation from source, `env-doctor` provides special guidance to handle CUDA version mismatches:
```bash
env-doctor install flash-attn
```
**Example output (with CUDA mismatch):**
```
🩺 PRESCRIPTION FOR: flash-attn
⚠️ CUDA VERSION MISMATCH DETECTED
System nvcc: 12.1.1
PyTorch CUDA: 12.4.1
🔧 flash-attn requires EXACT CUDA version match for compilation.
You have TWO options to fix this:
============================================================
📦 OPTION 1: Install PyTorch matching your nvcc (12.1)
============================================================
Trade-offs:
✅ No system changes needed
✅ Faster to implement
❌ Older PyTorch version (may lack new features)
Commands:
# Uninstall current PyTorch
pip uninstall torch torchvision torchaudio -y
# Install PyTorch for CUDA 12.1
pip install torch --index-url https://download.pytorch.org/whl/cu121
# Install flash-attn
pip install flash-attn --no-build-isolation
============================================================
⚙️ OPTION 2: Upgrade nvcc to match PyTorch (12.4)
============================================================
Trade-offs:
✅ Keep latest PyTorch
✅ Better long-term solution
❌ Requires system-level changes
❌ Verify driver supports CUDA 12.4
Steps:
1. Check driver compatibility:
env-doctor check
2. Download CUDA Toolkit 12.4:
https://developer.nvidia.com/cuda-12-4-0-download-archive
3. Install CUDA Toolkit (follow NVIDIA's platform-specific guide)
4. Verify installation:
nvcc --version
5. Install flash-attn:
pip install flash-attn --no-build-isolation
============================================================
```
### Check Model Compatibility
```bash
env-doctor model llama-3-8b
```
```
🤖 Checking: LLAMA-3-8B (8.0B params)
🖥️ Your Hardware: RTX 3090 (24GB)
💾 VRAM Requirements:
✅ FP16: 19.2GB - fits with 4.8GB free
✅ INT4: 4.8GB - fits with 19.2GB free
✅ This model WILL FIT on your GPU!
```
List all models: `env-doctor model --list`
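The VRAM figures above follow the usual rule of thumb: parameter count times bytes per weight, plus headroom for activations and CUDA context. The 20% overhead factor below is an assumption chosen to reproduce the numbers shown; env-doctor's internal model may differ:

```python
def estimate_vram_gb(params_b, bytes_per_param, overhead=1.2):
    """Rough VRAM requirement in GB for `params_b` billion parameters.

    `overhead` (assumed 20% here) covers activations and CUDA context.
    """
    return round(params_b * bytes_per_param * overhead, 2)

print(estimate_vram_gb(8.0, 2))    # 19.2 -> FP16 Llama-3-8B
print(estimate_vram_gb(8.0, 0.5))  # 4.8  -> INT4
print(estimate_vram_gb(0.11, 2))   # 0.26 -> the ~264 MB shown for BERT
```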
### Automatic HuggingFace Support (New ✨)

If a model isn't found locally, env-doctor automatically checks the HuggingFace Hub, fetches its parameter metadata, and caches it locally for future runs — no manual setup required.
```bash
# Fetches from HuggingFace on first run, cached afterward
env-doctor model bert-base-uncased
env-doctor model sentence-transformers/all-MiniLM-L6-v2
```
**Output:**
```
🤖 Checking: BERT-BASE-UNCASED
(Fetched from HuggingFace API - cached for future use)
Parameters: 0.11B
HuggingFace: bert-base-uncased
🖥️ Your Hardware:
RTX 3090 (24GB VRAM)
💾 VRAM Requirements & Compatibility
✅ FP16: 264 MB - Fits easily!
💡 Recommendations:
1. Use fp16 for best quality on your GPU
```
### Validate Dockerfiles
```bash
env-doctor dockerfile
```
```
🐳 DOCKERFILE VALIDATION
❌ Line 1: CPU-only base image: python:3.10
Fix: FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04
❌ Line 8: PyTorch missing --index-url
Fix: pip install torch --index-url https://download.pytorch.org/whl/cu121
```
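The two findings shown can be approximated with simple per-line scans. A simplified sketch (the real validator covers many more rules and base-image variants):

```python
def validate_dockerfile(text):
    """Flag CPU-only base images and torch installs missing --index-url."""
    issues = []
    for n, line in enumerate(text.splitlines(), start=1):
        s = line.strip()
        if s.startswith("FROM") and "nvidia/cuda" not in s:
            issues.append(f"Line {n}: CPU-only base image")
        if "pip install" in s and "torch" in s and "--index-url" not in s:
            issues.append(f"Line {n}: PyTorch missing --index-url")
    return issues

print(validate_dockerfile("FROM python:3.10\nRUN pip install torch\n"))
# ['Line 1: CPU-only base image', 'Line 2: PyTorch missing --index-url']
```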
### More Commands
| Command | Purpose |
|---------|---------|
| `env-doctor check` | Full environment diagnosis |
| `env-doctor python-compat` | Check Python version compatibility with AI libraries |
| `env-doctor cuda-install` | Step-by-step CUDA Toolkit installation guide |
| `env-doctor install <lib>` | Safe install command for PyTorch/TensorFlow/JAX, extension libraries (flash-attn, auto-gptq, apex, xformers, SageAttention, etc.) |
| `env-doctor model <name>` | Check model VRAM requirements |
| `env-doctor cuda-info` | Detailed CUDA toolkit analysis |
| `env-doctor cudnn-info` | cuDNN library analysis |
| `env-doctor dockerfile` | Validate Dockerfile |
| `env-doctor docker-compose` | Validate docker-compose.yml |
| `env-doctor scan` | Scan for deprecated imports |
| `env-doctor debug` | Verbose detector output |
### CI/CD Integration
```bash
# JSON output for scripting
env-doctor check --json
# CI mode with exit codes (0=pass, 1=warn, 2=error)
env-doctor check --ci
```
**GitHub Actions example:**
```yaml
- run: pip install env-doctor
- run: env-doctor check --ci
```
## MCP Server (AI Assistant Integration)
Env-Doctor includes a built-in [Model Context Protocol (MCP)](https://modelcontextprotocol.io) server that exposes diagnostic tools to AI assistants like Claude Desktop.
### Quick Setup for Claude Desktop
1. **Install env-doctor:**
```bash
pip install env-doctor
```
2. **Add to Claude Desktop config** (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
"mcpServers": {
"env-doctor": {
"command": "env-doctor-mcp"
}
}
}
```
3. **Restart Claude Desktop.** The tools will be available automatically.
### Available Tools (11 Total)
- `env_check` - Full GPU/CUDA environment diagnostics
- `env_check_component` - Check specific component (driver, CUDA, cuDNN, etc.)
- `python_compat_check` - Check Python version compatibility with installed AI libraries
- `cuda_info` - Detailed CUDA toolkit information
- `cudnn_info` - Detailed cuDNN library information
- `cuda_install` - Step-by-step CUDA installation instructions
- `install_command` - Get safe pip install commands for AI libraries
- `model_check` - Analyze if AI models fit on your GPU
- `model_list` - List all available models in database
- `dockerfile_validate` - Validate Dockerfiles for GPU issues
- `docker_compose_validate` - Validate docker-compose.yml for GPU configuration
### Example Usage
Ask Claude Desktop:
- "Check my GPU environment"
- "Is my Python version compatible with my installed AI libraries?"
- "How do I install CUDA Toolkit on Ubuntu?"
- "Get me the pip install command for PyTorch"
- "Can I run Llama 3 70B on my GPU?"
- "Validate this Dockerfile for GPU issues"
- "What CUDA version does my PyTorch require?"
- "Show me detailed CUDA toolkit information"
**Learn more:** [MCP Integration Guide](docs/guides/mcp-integration.md)
## Documentation
**Full documentation:** https://mitulgarg.github.io/env-doctor/
- [Getting Started](docs/getting-started.md)
- [Command Reference](docs/commands/check.md)
- [MCP Integration Guide](docs/guides/mcp-integration.md)
- [WSL2 GPU Guide](docs/guides/wsl2.md)
- [CI/CD Integration](docs/guides/ci-cd.md)
- [Architecture](docs/architecture.md)
**Video Tutorial:** [Watch Demo on YouTube](https://youtu.be/mGAwxGuLpxk?si=Buf9yzNTSJmoirMU)
## Contributing
Contributions welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for details.
## License
MIT License - see [LICENSE](LICENSE)
| text/markdown | null | Mitul Garg <mitulgarg3@gmail.com>, Tharun Anand <atharun05@gmail.com> | null | null | MIT License
Copyright (c) 2025 Mitul Garg
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: System :: Hardware"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"nvidia-ml-py<14.0.0,>=11.0.0",
"packaging<26.0,>=21.0",
"click<9.0.0,>=8.0.0",
"requests<3.0.0,>=2.25.0",
"pyyaml<7.0,>=5.4",
"huggingface_hub<1.0.0,>=0.20.0",
"mcp<2.0.0,>=1.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/mitulgarg/env-doctor",
"Bug Tracker, https://github.com/mitulgarg/env-doctor/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:32:32.834693 | env_doctor-0.2.4.tar.gz | 87,699 | 0b/04/cf50acc1995e667a57db110e87b8c81e747a21cf53fecf0dc36f6d3e9e5c/env_doctor-0.2.4.tar.gz | source | sdist | null | false | 076c18d59daaccf11796c0925e1a220d | 8dae90d9ae80055b68a94f62c439008090457d063e6d843628dbc24c903f73f4 | 0b04cf50acc1995e667a57db110e87b8c81e747a21cf53fecf0dc36f6d3e9e5c | null | [
"LICENSE"
] | 212 |
2.4 | easy-local-features | 0.8.22 | Add your description here | # easy-local-features
Unified, minimal wrappers around many local feature extractors and matchers (classical + learned).
> ⚠️ **CRITICAL LICENSE DISCLAIMER**
>
> This repository aggregates wrappers around MANY third‑party local feature extractors and matchers. **Each baseline/model keeps its OWN original license (BSD, MIT, Apache 2.0, GPLv3, Non‑Commercial, CC BY‑NC‑SA, custom research licenses, etc.)**. Your rights for a given baseline are governed *only* by that baseline’s upstream license. Some components here are **non‑commercial only** (e.g., SuperGlue, original SuperPoint, R2D2) or **copyleft** (e.g., DISK under GPLv3). Others are permissive (BSD/MIT/Apache 2.0). Before any research publication, internal deployment, redistribution, or commercial/production use, **YOU MUST review and comply with every relevant upstream license, including attribution, notice reproduction, share‑alike, copyleft, and patent clauses.**
>
> The maintainers provide NO warranty, NO guarantee of license correctness, and accept NO liability for misuse. This notice and any summaries are **not legal advice**. If in doubt, consult qualified counsel.
>
> See: [`LICENSES.md`](LICENSES.md) for an overview and links to included full license texts.
> Built with DINOv3.
## Installation
```bash
pip install easy-local-features
```
## Installing from source
```bash
pip install -e .
```
## Usage
**Stable minimal API**
- `getExtractor(name, conf)` → returns an extractor
- `.to(device)` → `"cpu" | "cuda" | "mps"`
- `.match(img0, img1)` → `{"mkpts0": (M,2), "mkpts1": (M,2), ...}`
- Descriptor-only methods additionally support `.addDetector(detector)`
**Detector-only methods**
- Some methods only implement `detect(image) -> keypoints` (no descriptors, no matching). Use `getDetector(name, conf)` for those.
Example:
```python
from easy_local_features import getDetector
from easy_local_features.utils import io
img = io.fromPath("tests/assets/megadepth0.jpg")
det = getDetector("rekd", {"num_keypoints": 1500}).to("cpu")
kps = det.detect(img) # [1, N, 2]
```
### Discover available config keys (no model init)
In code:
```python
from easy_local_features import describe
print(describe("superpoint"))
```
On the CLI:
```bash
easy-local-features --describe superpoint
```
### Detect+Describe (one-liner matching)
```python
from easy_local_features import getExtractor
from easy_local_features.utils import io, ops, vis
img0 = io.fromPath("tests/assets/megadepth0.jpg")
img1 = io.fromPath("tests/assets/megadepth1.jpg")
img0 = ops.resize_short_edge(img0, 320)[0]
img1 = ops.resize_short_edge(img1, 320)[0]
extractor = getExtractor("aliked", {"top_k": 2048}).to("cpu")
out = extractor.match(img0, img1)
vis.plot_pair(img0, img1, title="ALIKED")
vis.plot_matches(out["mkpts0"], out["mkpts1"])
vis.save("tests/results/aliked.png")
```
### Descriptor-only (attach a detector)
```python
from easy_local_features import getExtractor
from easy_local_features.feature.baseline_superpoint import SuperPoint_baseline
from easy_local_features.utils import io, ops
img0 = ops.resize_short_edge(io.fromPath("tests/assets/megadepth0.jpg"), 320)[0]
img1 = ops.resize_short_edge(io.fromPath("tests/assets/megadepth1.jpg"), 320)[0]
desc = getExtractor("sosnet").to("cpu")
det = SuperPoint_baseline({"top_k": 2048, "detection_threshold": 0.005}).to("cpu")
desc.addDetector(det)
out = desc.match(img0, img1)
```
### End-to-end matchers
```python
from easy_local_features import getExtractor
from easy_local_features.utils import io, ops
img0 = ops.resize_short_edge(io.fromPath("tests/assets/megadepth0.jpg"), 320)[0]
img1 = ops.resize_short_edge(io.fromPath("tests/assets/megadepth1.jpg"), 320)[0]
roma = getExtractor("romav2", {"top_k": 2000}).to("cpu")
out = roma.match(img0, img1)
```
## CLI
```bash
easy-local-features --list
```
```bash
easy-local-features --list-detectors
```
| text/markdown | null | Felipe Cadar <cadar@dcc.ufmg.br> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"e2cnn>=0.2.3",
"easydict",
"einops",
"kornia-moons>=0.2.9",
"kornia-rs>=0.1.9",
"loguru==0.6.0",
"numpy",
"omegaconf",
"opencv-python",
"romav2>=2.0.0",
"scikit-image>=0.24.0",
"scipy",
"tensorflow",
"tensorflow-hub",
"timm>=0.9",
"torch",
"torchvision",
"tqdm",
"wget",
"yacs"
] | [] | [] | [] | [] | uv/0.8.7 | 2026-02-20T18:32:04.413846 | easy_local_features-0.8.22.tar.gz | 4,556,537 | 40/f1/e6d595244db0bdca889b974f051d867a53553773c6ff3e9d082f6d3feada/easy_local_features-0.8.22.tar.gz | source | sdist | null | false | 4c4e9813995332990bf4a6982ace8a23 | 26e380c51e96191bf37ea8b279ecc3628fefd47a1086bfe974d80099749a66fc | 40f1e6d595244db0bdca889b974f051d867a53553773c6ff3e9d082f6d3feada | null | [
"LICENSES.md"
] | 152 |
2.4 | space-dolphin | 1.2.1 | Automated pipeline for lens modeling based on lenstronomy | .. |logo| image:: https://raw.githubusercontent.com/ajshajib/dolphin/efb2673646edd6c2d98963e9f4d08a9104d293c3/logo.png
:width: 70
|logo| dolphin
==============
.. image:: https://readthedocs.org/projects/dolphin-docs/badge/?version=latest
:target: https://dolphin-docs.readthedocs.io/latest/
.. image:: https://github.com/ajshajib/dolphin/actions/workflows/ci.yaml/badge.svg?branch=main
:target: https://github.com/ajshajib/dolphin/actions/workflows/ci.yaml
.. image:: https://codecov.io/gh/ajshajib/dolphin/branch/main/graph/badge.svg?token=WZVXZS9GF1
:target: https://app.codecov.io/gh/ajshajib/dolphin/tree/main
:alt: Codecov
.. image:: https://img.shields.io/badge/License-BSD_3--Clause-blue.svg
:target: https://github.com/ajshajib/dolphin/blob/main/LICENSE
:alt: License BSD 3-Clause Badge
.. image:: https://img.shields.io/badge/ApJ-%20992%2040-D22630
:target: https://iopscience.iop.org/article/10.3847/1538-4357/adf95c
:alt: Shajib et al. 2025, ApJ, 992, 40
.. image:: https://img.shields.io/badge/arXiv-2503.22657-b31b1b?logo=arxiv&logoColor=white
:target: https://arxiv.org/abs/2503.22657
.. image:: https://img.shields.io/badge/DOI-10.5281%2Fzenodo.16587211-blue
:target: https://doi.org/10.5281/zenodo.16587211
:alt: Zenodo DOI 10.5281/zenodo.16587211
.. image:: https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=brightyellow
:target: https://pre-commit.com/
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
.. image:: https://img.shields.io/badge/%20formatter-docformatter-fedcba.svg
:target: https://github.com/PyCQA/docformatter
.. image:: https://img.shields.io/badge/%20style-sphinx-0a507a.svg
:target: https://www.sphinx-doc.org/en/master/usage/index.html
AI-powered automated pipeline for lens modeling, with
`lenstronomy <https://github.com/lenstronomy/lenstronomy>`_ as the modeling engine.
Features
--------
- **AI-automated** forward modeling for large samples of galaxy-scale lenses.
- **Flexible**: supports both fully automated and semi-automated (with user tweaks) modes.
- **Multi-band** lens modeling made simple.
- Supports both **galaxy–galaxy** and **galaxy–quasar** systems.
- Effortless syncing between local machines and **HPCC**.
- |Codecov| **tested!**
.. |Codecov| image:: https://codecov.io/gh/ajshajib/dolphin/branch/main/graph/badge.svg?token=WZVXZS9GF1
:target: https://app.codecov.io/gh/ajshajib/dolphin/tree/main
Installation
------------
.. image:: https://img.shields.io/pypi/v/space-dolphin.svg
:alt: PyPI - Version
:target: https://pypi.org/project/space-dolphin/
You can install ``dolphin`` using ``pip``. Run the following command:
.. code-block:: bash
pip install space-dolphin
Alternatively, you can install the latest development version from GitHub as:
.. code-block:: bash
git clone https://github.com/ajshajib/dolphin.git
cd dolphin
pip install .
See the `Quickstart guide <QUICKSTART.rst>`_ for instructions on setting up and running ``dolphin``.
Citation
--------
If you use ``dolphin`` in your research, please cite the ``dolphin`` paper `Shajib et al. (2025) <https://arxiv.org/abs/2503.22657>`_. If you have used the ``"galaxy-quasar"`` fitting recipe, then additionally cite `Shajib et al. (2019) <https://ui.adsabs.harvard.edu/abs/2019MNRAS.483.5649S/abstract>`_, or if you have used the ``"galaxy-galaxy"`` fitting recipe, then additionally cite `Shajib et al. (2021) <https://ui.adsabs.harvard.edu/abs/2021MNRAS.503.2380S/abstract>`_.
| text/x-rst | Anowar J. Shajib | "Anowar J. Shajib" <ajshajib@gmail.com> | null | null | null | dolphin, lenstronomy | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/ajshajib/dolphin | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/ajshajib/dolphin"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T18:31:34.573096 | space_dolphin-1.2.1.tar.gz | 37,231 | d2/49/b29c636ca4b16e79cedb31252e90e63eae2cf58c438eae058ddccd57ea8d/space_dolphin-1.2.1.tar.gz | source | sdist | null | false | 547a26705298bd05fdafe3c807514f77 | 1ab507f87b83c3131036cb8c02d1a69fbc835cd2ba2a27d053c535776df56534 | d249b29c636ca4b16e79cedb31252e90e63eae2cf58c438eae058ddccd57ea8d | BSD-3-Clause | [
"LICENSE",
"AUTHORS.rst"
] | 207 |
2.1 | cereggii | 1.0.0.post1 | Thread synchronization utilities for Python. | # cereggii
[](https://py-free-threading.github.io/)
[](https://pypi.org/project/cereggii/)
[](https://pypi.org/project/cereggii/)
[](https://github.com/dpdani/cereggii/blob/main/LICENSE)
Thread synchronization utilities for Python.
[Documentation is here.](https://dpdani.github.io/cereggii)
```python
from cereggii import AtomicDict, ThreadSet
counter = AtomicDict({"red": 42, "green": 3, "blue": 14})
@ThreadSet.repeat(10) # create 10 threads
def workers():
counter.reduce_count(["red"] * 60 + ["green"] * 7 + ["blue"] * 30)
workers.start_and_join()
assert counter["red"] == 642
assert counter["green"] == 73
assert counter["blue"] == 314
```
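The assertions hold by plain counting: each of the ten threads adds 60 red, 7 green, and 30 blue on top of the initial values. A single-threaded `collections.Counter` equivalent, shown only to illustrate the arithmetic (it provides none of `AtomicDict`'s thread safety):

```python
from collections import Counter

counter = Counter({"red": 42, "green": 3, "blue": 14})
for _ in range(10):  # what the 10 threads do, run sequentially
    counter.update(["red"] * 60 + ["green"] * 7 + ["blue"] * 30)

assert counter["red"] == 642   # 42 + 10 * 60
assert counter["green"] == 73  # 3 + 10 * 7
assert counter["blue"] == 314  # 14 + 10 * 30
```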
## Installation
The recommended installation method is to download binary wheels from PyPI:
```shell
pip install cereggii
```

or, with `uv`:

```shell
uv add cereggii
```
### Installing from source
To install from source, first pull the repository:
```shell
git clone https://github.com/dpdani/cereggii
```
Then, install the build requirements (do this in a virtualenv):
```shell
pip install -e ".[dev]"
```
Finally, run the tests:
```shell
pytest
```
## License
Apache License 2.0. See the [LICENSE](LICENSE) file.
## Links
- Documentation: https://dpdani.github.io/cereggii/
- Source: https://github.com/dpdani/cereggii
- Issues: https://github.com/dpdani/cereggii/issues
## Cereus greggii
<img src="https://raw.githubusercontent.com/dpdani/cereggii/refs/heads/main/.github/cereggii.jpg" align="right">
The *Peniocereus Greggii* (also known as *Cereus Greggii*) is a flower native to
Arizona, New Mexico, Texas, and some parts of northern Mexico.
This flower blooms just one summer night every year and in any given area, all
these flowers bloom in synchrony.
[Wikipedia](https://en.wikipedia.org/wiki/Peniocereus_greggii)
_Image credits: Patrick Alexander, Peniocereus greggii var. greggii, south of
Cooke's Range, Luna County, New Mexico, 10 May 2018, CC0.
[source](https://www.flickr.com/photos/aspidoscelis/42926986382)_
| text/markdown | null | dpdani <git@danieleparmeggiani.me> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| multithreading | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Free Threading :: 4 - Resilient",
"Operating System :: OS Independent",
"Natural Language :: English"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"build==1.2.2.post1; extra == \"dev\"",
"pytest==9.0.1; extra == \"dev\"",
"pytest-reraise==2.1.2; extra == \"dev\"",
"black==24.10.0; extra == \"dev\"",
"ruff==0.7.0; extra == \"dev\"",
"mkdocs-material==9.7.1; extra == \"docs\"",
"mike==2.1.3; extra == \"docs\"",
"mkdocs-redirects==1.2.2; extra == \"docs\"",
"mkdocstrings-python==2.0.1; extra == \"docs\"",
"mkdocs-exclude==1.0.2; extra == \"docs\""
] | [] | [] | [] | [
"Documentation, https://dpdani.github.io/cereggii/",
"Issues, https://github.com/dpdani/cereggii/issues",
"Source, https://github.com/dpdani/cereggii"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:31:07.183741 | cereggii-1.0.0.post1.tar.gz | 83,496 | 34/a8/14d9780ce17bf22bdcdacc2090dd8b4379cf01d66bf42f9334888e05ea02/cereggii-1.0.0.post1.tar.gz | source | sdist | null | false | 11e78aac4b2ab8acdffbbfccf8004e92 | 866aff9e388297e582a2065f8683471ba62c55a16664a2dd48c67e4fdc95023c | 34a814d9780ce17bf22bdcdacc2090dd8b4379cf01d66bf42f9334888e05ea02 | null | [] | 2,962 |
2.4 | terragenai | 0.0.10 | A minimal example CLI package. | # terragenai
A generative AI CLI tool that builds terraform configurations using Terraform Enterprise or Cloud private registry modules.
## Common Flags
```bash
terragenai --help
terragenai --version
terragenai --configure
```
`--configure` saves settings to your OS-specific user config directory.
Overrides:
- `TERRAGENAI_HOME` to place both files in a single custom directory.
- `TERRAGENAI_CONFIG_FILE` to set an exact config file path.
- `TERRAGENAI_HISTORY_FILE` to set an exact history file path.
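For example, to keep both files under a single directory, or to pin each file individually (the paths below are illustrative, not defaults):

```bash
# Illustrative locations - adjust to taste
export TERRAGENAI_HOME="$HOME/.terragenai"
# ...or pin each file individually instead:
export TERRAGENAI_CONFIG_FILE="$HOME/.terragenai/config"
export TERRAGENAI_HISTORY_FILE="$HOME/.terragenai/history"
```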
## Usage
```
% terragenai
TerragenAI Chat started. Type 'exit' to quit.
You: create 2 ec2 instances in us-west-2
Thinking...
Assistant:
module "ec2" {
source = "app.terraform.io/my-org/ec2-module/aws"
ami = "ami-0123456789abcdef0"
instance_type = "t3.micro"
key_name = "my-ssh-key"
region = "us-west-2"
instance_count = 2
}
You: exit
%
```
| text/markdown | Eshika Malgari | null | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"typer>=0.9.0",
"rich>=13.0.0",
"requests>=2.31.0",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-cov>=5.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/eshika289/terragenAI",
"Repository, https://github.com/eshika289/terragenAI"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:31:07.151919 | terragenai-0.0.10.tar.gz | 8,339 | 5b/88/71e87114a03986602025751dcfa11a5e69039deaa8514959264d2d827aca/terragenai-0.0.10.tar.gz | source | sdist | null | false | 771ebaf664a17e26b1fa8426055baa34 | acbc5b281eef2ee0b6504169746dcdc1aaea8399da19ddcc86753d99c501a377 | 5b8871e87114a03986602025751dcfa11a5e69039deaa8514959264d2d827aca | null | [] | 195 |
2.4 | plotmath | 0.3.14 | Automatically generates textbook graphs for mathematical functions. | # `plotmath`
`plotmath` is a Python package to automatically create textbook graphs of mathematical functions.
## Basic examples
### Example 1
```python
import plotmath
def f(x):
return x**2 - x - 2
fig, ax = plotmath.plot(
functions=[f],
)
plotmath.savefig(
dirname="../figures",
fname="example_1.svg",
)
plotmath.show()
```
This will generate the following figure:

### Example 2
```python
import plotmath
import numpy as np
def f(x):
return x**2 * np.cos(x)
fig, ax = plotmath.plot(
functions=[f],
xmin=-6,
xmax=6,
ymin=-12,
ymax=12,
xstep=1,
ystep=2,
)
plotmath.savefig(
dirname="../figures",
fname="example_2.svg",
)
plotmath.show()
```
This will generate the following figure:

### Example 3
```python
import plotmath
def f(x):
return x**2 - 4
def g(x):
return x + 2
fig, ax = plotmath.plot(
functions=[f, g],
xmin=-6,
xmax=6,
ymin=-6,
ymax=6,
)
plotmath.savefig(
dirname="../figures",
fname="example_3.svg",
)
plotmath.show()
```
This will generate the following figure:

| text/markdown | René Alexander Ask | rene.ask@icloud.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/reneaas/plotmath | null | >=3.7 | [] | [] | [] | [
"numpy",
"matplotlib"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T18:30:56.937859 | plotmath-0.3.14.tar.gz | 7,105 | 56/78/e42c132513ca1eba8adab935efb7526f71f1d17a246eec03524dd36a17ed/plotmath-0.3.14.tar.gz | source | sdist | null | false | 14b7d91e0f32d704007825e1602a7137 | 4fa04a07057912891c8e11b30e21aad29d4d2a74c7f6ed1aded916ade62d6b25 | 5678e42c132513ca1eba8adab935efb7526f71f1d17a246eec03524dd36a17ed | null | [
"LICENSE"
] | 211 |
2.4 | UW-RestClients-SWS | 2.5.6 | A library for connecting to the SWS at the University of Washington |
See the README on `GitHub
<https://github.com/uw-it-aca/uw-restclients-sws>`_.
| null | UWIT Student & Educational Technology Services | aca-it@uw.edu | null | null | Apache License, Version 2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python"
] | [] | https://github.com/uw-it-aca/uw-restclients-sws | null | null | [] | [] | [] | [
"uw-restclients-core",
"uw-restclients-pws",
"mock"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T18:30:52.002230 | uw_restclients_sws-2.5.6.tar.gz | 306,753 | 8c/59/ad50b22bc1a085ddd91e951575373e4c6b7890c9751b608a7cc29f0f2dd2/uw_restclients_sws-2.5.6.tar.gz | source | sdist | null | false | b6729ef2a6a63ccb95c6b8f68e8d970c | b17604e44b4233190d6597c0b8f1de26ee499ba1275bdbcef67f0c78cde2d73a | 8c59ad50b22bc1a085ddd91e951575373e4c6b7890c9751b608a7cc29f0f2dd2 | null | [
"LICENSE"
] | 0 |
2.4 | enpt-enmapboxapp | 1.0.2 | A QGIS EnMAPBox plugin providing a GUI for the EnMAP processing tools (EnPT) | .. image:: https://git.gfz.de/EnMAP/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp/badges/main/pipeline.svg
:target: https://git.gfz.de/EnMAP/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp/commits/main
.. image:: https://img.shields.io/pypi/v/enpt_enmapboxapp.svg
:target: https://pypi.python.org/pypi/enpt_enmapboxapp
.. image:: https://img.shields.io/conda/vn/conda-forge/enpt_enmapboxapp.svg
:target: https://anaconda.org/conda-forge/enpt_enmapboxapp
.. image:: https://git.gfz.de/EnMAP/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp/badges/main/coverage.svg
:target: coverage_
.. image:: https://img.shields.io/static/v1?label=Documentation&message=GitLab%20Pages&color=orange
:target: https://enmap.git-pages.gfz-potsdam.de/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp/doc/
.. image:: https://img.shields.io/pypi/l/enpt_enmapboxapp.svg
:target: https://git.gfz.de/EnMAP/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp/blob/main/LICENSE
.. image:: https://img.shields.io/pypi/pyversions/enpt_enmapboxapp.svg
:target: https://img.shields.io/pypi/pyversions/enpt_enmapboxapp.svg
.. image:: https://img.shields.io/pypi/dm/enpt_enmapboxapp.svg
:target: https://pypi.python.org/pypi/enpt_enmapboxapp
================
enpt_enmapboxapp
================
A QGIS EnMAPBox plugin providing a GUI for the EnMAP processing tools (EnPT).
* Free software: GNU General Public License v3 or later (GPLv3+)
* Documentation: https://enmap.git-pages.gfz-potsdam.de/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp/doc/
See also the latest coverage_ report and the pytest_ HTML report.
What the GUI looks like
-----------------------
.. image:: https://git.gfz.de/EnMAP/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp/raw/main/docs/images/screenshot_enpt_enmapboxapp_v1.0.0.png
:width: 918 px
:height: 673 px
:scale: 80 %
Credits
-------
This software was developed within the context of the EnMAP project supported by the DLR Space Administration with
funds of the German Federal Ministry of Economic Affairs and Energy (on the basis of a decision by the German
Bundestag: 50 EE 1529) and contributions from DLR, GFZ and OHB System AG.
This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.
.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
.. _coverage: https://enmap.git-pages.gfz-potsdam.de/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp/coverage/
.. _pytest: https://enmap.git-pages.gfz-potsdam.de/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp/test_reports/report.html
| text/x-rst | null | Daniel Scheffler <daniel.scheffler@gfz.de> | null | null | null | enpt_enmapboxapp, EnMAP, EnMAP-Box, hyperspectral, remote sensing, satellite, processing chain | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: End Users/Desktop",
"Natural Language :: English",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"packaging",
"psutil",
"sphinx-argparse; extra == \"doc\"",
"sphinx_rtd_theme; extra == \"doc\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-reporter-html1; extra == \"test\"",
"urlchecker; extra == \"test\"",
"flake8; extra == \"lint\"",
"pycodestyle; extra == \"lint\"",
"pydocstyle; extra == \"lint\"",
"build; extra == \"deploy\"",
"twine; extra == \"deploy\"",
"enpt_enmapboxapp[test]; extra == \"dev\"",
"enpt_enmapboxapp[doc]; extra == \"dev\"",
"enpt_enmapboxapp[lint]; extra == \"dev\"",
"enpt_enmapboxapp[deploy]; extra == \"dev\""
] | [] | [] | [] | [
"Source code, https://git.gfz.de/EnMAP/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp",
"Issue Tracker, https://git.gfz.de/EnMAP/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp/-/issues",
"Documentation, https://enmap.git-pages.gfz-potsdam.de/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp/doc/",
"Change log, https://enmap.git-pages.gfz-potsdam.de/GFZ_Tools_EnMAP_BOX/enpt_enmapboxapp/doc/history.html"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T18:30:32.833657 | enpt_enmapboxapp-1.0.2.tar.gz | 33,173 | 8f/f7/cdb1055e3eee55a39b74dfac0ef2a09ccd16517fbc8c94b0878dbe113719/enpt_enmapboxapp-1.0.2.tar.gz | source | sdist | null | false | cfe3c00dd5f3cfc57a0228ddaaf75e3b | 6c0b60ac348bf15088d02a28b66399c1137a4bfb0e630f26ba46941d2684dfb4 | 8ff7cdb1055e3eee55a39b74dfac0ef2a09ccd16517fbc8c94b0878dbe113719 | GPL-3.0-or-later | [
"LICENSE",
"AUTHORS.rst"
] | 166 |
2.4 | TwoSampleHC | 0.4.2 | Higher Criticism two-sample tests for sparse signals | TwoSampleHC
===========
Higher Criticism (HC) utilities for two-sample testing under sparsity. Provides:
- HC statistic variants (`HC`, `HCstar`, `HCjin`)
- Exact and randomized binomial p-values for feature-wise testing
- Convenience helpers to compute HC thresholds and DataFrame outputs
Install from a local build or PyPI:
pip install TwoSampleHC
Basic usage:
from TwoSampleHC import two_sample_test
hc_score, hc_threshold = two_sample_test(
smp1, smp2, data_type='counts', alt='two-sided', stbl=True, gamma=0.2
)
Requirements: `numpy`, `pandas`, `scipy`.
See the source in `src/TwoSampleHC` for details and examples.
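For orientation, the plain (non-stabilized) HC statistic of Donoho and Jin can be sketched in a few lines of NumPy. This is a minimal sketch only; the packaged `HC`/`HCstar`/`HCjin` variants add stability adjustments and index restrictions not reproduced here:

```python
import numpy as np

def hc_statistic(pvals, gamma=0.2):
    """Plain Higher Criticism over the smallest gamma-fraction of p-values."""
    p = np.sort(np.asarray(pvals, dtype=float))
    n = len(p)
    i = np.arange(1, n + 1)
    # Standardized gap between the empirical and uniform CDFs at each p-value
    z = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    k = max(1, int(gamma * n))  # only the first gamma*n order statistics
    return float(z[:k].max())
```

A handful of very small p-values (a sparse signal) drives the statistic up sharply, which is what makes HC sensitive in the sparse regime.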
| text/markdown | TwoSampleHC Maintainers | null | null | null | null | higher-criticism, statistics, two-sample, sparse | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Mathematics"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"pandas",
"scipy"
] | [] | [] | [] | [
"Homepage, https://github.com/"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-20T18:30:30.655837 | twosamplehc-0.4.2.tar.gz | 514,822 | 06/ca/0a2ea2b966bb45b11e0c6a2d8a74232fcb759d84b3c3150bbc41fdef1941/twosamplehc-0.4.2.tar.gz | source | sdist | null | false | 4045952a29383b3b0a2959c473f859c0 | 17b8c9bda2d4300775c36e7a1abdee9db008bca38ba9a40ddb705681b8dc2ecd | 06ca0a2ea2b966bb45b11e0c6a2d8a74232fcb759d84b3c3150bbc41fdef1941 | null | [] | 0 |
2.4 | automaweb | 0.2.4 | Library for web automation (Chrome, Edge, Firefox) and file handling on the OS. | # AutomaWeb 🕸️📁
**AutomaWeb** is a powerful yet simple Python library for automating web tasks and managing files on the operating system. Built on top of Selenium and native Python libraries, it removes boilerplate complexity so you can create bots and automation scripts quickly, readably, and efficiently.
## 🚀 Installation
You can install AutomaWeb easily with the pip package manager:
```bash
pip install automaweb
```
*Note: make sure the browsers (Chrome, Edge, or Firefox) are installed on your machine. The modern Selenium Manager (included in Selenium 4+) will handle downloading the drivers (such as ChromeDriver) for you automatically.*
---
## 💡 Overview and Features
The library is split into two main areas:
1. **Web automation (`Navegador`)**: simplified control of browsers (Chrome, Edge, Firefox), with ready-made methods to click, type, wait for elements, manage tabs, handle iframes, and even save/load cookies. It already includes built-in checks and "stun" handling (wait times between actions).
2. **File and folder management**: straightforward utility functions to create folders and to move, copy, rename, delete, compress/decompress (ZIP), find the most recent file, and prompt the user through a graphical interface (Tkinter) for path selection.
---
## 🛠️ Usage Examples
### 1. Basic Web Automation
```python
from automaweb import Navegador

# Start the Chrome browser with a 1-second wait ("stun") between actions
nav = Navegador(tempo_stun=1, navegador="chrome")

# Open the browser (use headless=True to run it in the background)
nav.abrir_driver(headless=False)

try:
    # Visit a site
    nav.abrir_url("https://www.google.com")

    # Type a search query and click the button (illustrative XPaths)
    nav.digitar("//textarea[@title='Pesquisar']", "Automação com Python")
    nav.clicar("//input[@value='Pesquisa Google']")

    # Take a screenshot
    nav.tirar_screenshot("resultado_pesquisa")
finally:
    # Make sure the browser is closed
    nav.fechar_driver()
```
### 2. File and Folder Handling
```python
import os
from automaweb import criar_pasta, mover_arquivo, obter_arquivo_mais_recente

# os.path.expanduser resolves the real home directory
# (os.getlogin() returns only the username, not a path)
caminho_downloads = os.path.expanduser("~/Downloads")
pasta_destino = f"{caminho_downloads}/Relatorios_Processados"

# Create the folder if it does not exist
criar_pasta(pasta_destino)

# Grab the most recent PDF downloaded to the Downloads folder
ultimo_pdf = obter_arquivo_mais_recente(caminho_downloads, extensao=".pdf")

if ultimo_pdf:
    # Move the file into the new folder
    mover_arquivo(ultimo_pdf, f"{pasta_destino}/relatorio_final.pdf")
    print("File processed successfully!")
```
---
## 🎯 The Definitive Guide: Mastering XPath
**XPath** (XML Path Language) is the backbone of web automation with AutomaWeb. It acts as an "address" or "path" to locate any element within a page's HTML structure.
### How do I find an element's XPath?
1. Open the browser and go to the page you want.
2. Right-click the element (a button, a text field) and choose **Inspect**.
3. The Developer Tools (DevTools) panel opens, highlighting the element's HTML.
4. Press `Ctrl + F` (or `Cmd + F`) inside DevTools to open the search bar and test your XPaths live.
### Golden Rule: Stay Away from Absolute XPaths!
❌ **Absolute:** `/html/body/div[1]/div/div[2]/form/input`
This breaks if the site owner adds a single new element to the page.
✅ **Relative:** `//input[@id='email']`
This searches for an element anywhere on the page that matches the criteria, making it far more resilient to changes.
### Basic Relative XPath Syntax
The standard structure is: `//tag[@attribute='value']`
* **`//`**: searches anywhere in the document.
* **`tag`**: the element type (`input`, `button`, `div`, `a`, `*` for any tag).
* **`@attribute`**: the name of the HTML attribute (`id`, `class`, `name`, `type`).
* **`'value'`**: the exact value of the attribute.
**Examples:**
* `//input[@id='usuario']` (finds an input with the id "usuario")
* `//button[@type='submit']` (finds a submit button)
* `//*[@name='senha']` (finds *any* element whose name is "senha")
### Advanced Uses and Pro Tips
The real power of XPath lies in its dynamic functions. These are the essential techniques for robust automations:
#### 1. Selecting by Text (`text()`)
Buttons and links often have no clear IDs or classes, but they do have visible text.
* **Syntax:** `//tag[text()='Exact Text']`
* **Example:** `//button[text()='Fazer Login']`
* **In AutomaWeb:** `nav.clicar("//button[text()='Fazer Login']")`
#### 2. Partial Text Matching (`contains()`)
Ideal when a class has several names (e.g. `class="btn btn-primary active"`) or the text varies slightly (e.g. "Bem-vindo, João").
* **Syntax:** `//tag[contains(@attribute, 'part_of_value')]`
* **Syntax (text):** `//tag[contains(text(), 'part_of_text')]`
* **Examples:**
* `//div[contains(@class, 'btn-primary')]` (matches the button even when it carries other classes)
* `//a[contains(text(), 'Esqueci minha')]` (clicks the "Esqueci minha senha" link)
#### 3. Multiple Conditions (`and` / `or`)
For when a single attribute is not enough to identify an element uniquely.
* **Syntax:** `//tag[@attr1='val1' and @attr2='val2']`
* **Example:** `//input[@type='text' and @placeholder='Digite seu CPF']`
#### 4. Navigating the Tree (XPath Axes)
Sometimes the element you want to click has no identifiers, but its "parent" or a "sibling" does.
* **Climbing to the parent element (`/..` or `parent::`)**
You find a text node but want to click the whole box that wraps it.
* `//span[text()='Opção 1']/..`
* **Finding the next element (sibling, `following-sibling::`)**
You find the "Nome:" label and want the input field that comes right after it.
* `//label[text()='Nome:']/following-sibling::input`
#### XPath Strategy Summary for Automations
Always try identifiers in the following priority order to keep your bot from breaking easily:
1. `@id` (unique and usually immutable).
2. `@name` (usually unique within forms).
3. `text()` (when the button has fixed text).
4. `contains(@class, '...')` (specific classes).
5. Navigation from a stable parent/sibling.
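As a quick way to experiment with these selectors without a browser, Python's standard-library `xml.etree.ElementTree` understands a subset of XPath (attribute predicates, an exact-text predicate written `[.='...']`, and the parent step `..`; it does not support `contains()` or `following-sibling::`). The HTML fragment below is invented for illustration:

```python
import xml.etree.ElementTree as ET

# Tiny XML/HTML fragment standing in for a real page
html = """
<form>
  <div class="field"><span>Senha:</span><input id="senha" type="password"/></div>
  <div class="field"><span>Nome:</span><input id="usuario" type="text"/></div>
  <button type="submit">Fazer Login</button>
</form>
"""
root = ET.fromstring(html)

# //input[@id='usuario'] -> attribute predicate
usuario = root.findall(".//input[@id='usuario']")

# ElementTree's exact-text predicate, similar to text()='Fazer Login'
login = root.findall(".//button[.='Fazer Login']")

# //span[.='Senha:']/.. -> climb to the parent <div> wrapping the label
caixa = root.findall(".//span[.='Senha:']/..")

print(usuario[0].get("type"), caixa[0].get("class"))  # prints: text field
```

In a real AutomaWeb script the same relative expressions (plus `contains()` and the sibling axes, which full browsers support) are passed straight to methods such as `nav.clicar(...)`.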
---
## 🤝 Contributing
Contributions are welcome! If you have ideas to improve the library, add new features to `Navegador`, or expand the system utilities, feel free to open an *Issue* or submit a *Pull Request* on the official repository.
| text/markdown | João Braga | null | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"selenium"
] | [] | [] | [] | [
"Homepage, https://github.com/bvkila/automaweb",
"Bug Tracker, https://github.com/bvkila/automaweb/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T18:30:27.907368 | automaweb-0.2.4.tar.gz | 16,417 | 13/0f/65b71c1c804186b12201fa53e263c0f438a58d482ad5a17695f695327bd9/automaweb-0.2.4.tar.gz | source | sdist | null | false | 925eaee2bc696d60f1c2c23bc17e0f58 | aa3f4134472ae781f278d5dab3ae3e7aecca84a5630a76042cff29b9dcab97a1 | 130f65b71c1c804186b12201fa53e263c0f438a58d482ad5a17695f695327bd9 | null | [
"LICENSE"
] | 223 |
2.4 | termq | 0.1.8 | A minimal GPT chat client in the terminal | # TermQ
[](https://opensource.org/licenses/MIT)
[](http://github.com/badges/stability-badges)
A minimal GPT chat client in the terminal. A toy experiment with OpenAI's GPT API.
> All code has only been tested on macOS; it is not guaranteed to work on other platforms.
## Prerequisites
```bash
export OPENAI_API_KEY=<your key>
```
```bash
python -m venv termq-venv
source termq-venv/bin/activate
pip install -r requirements.txt
```
```bash
# Optional, for PDF OCR if you want multi-language support
# Read more here: https://github.com/ocrmypdf/OCRmyPDF#languages
brew install tesseract-lang
```
## Running
```bash
Usage: python script.py [OPTIONS]
Options:
-c, --load-character FILE Specify a character file location.
--stream Enable streaming mode.
-e, --load-engine TYPE Specify an engine type, default is `gpt-3.5-turbo`.
--tts Enable text-to-speech.
-q, --question Ask a question to the chatbot and get an answer directly.
--help Show this message and exit.
```
### Chat with GPT
### Config setup
On first run, `termq` creates this config layout automatically:
```bash
~/.config/termq/
characters/ # built-in presets copied from the package
history/ # chat_history-YYYYMMDD-HHMMSS.json files
```
You can pass either a preset name (for example `-c hal9000`) or a full path to a custom character file.
#### Default Assistant
```bash
python chat.py
```
<details>
<summary> 🎬 Example usage </summary>
https://github.com/tommyjtl/termchat/assets/1622557/fb5d111b-42fb-4899-aeb6-c97202847a6f
</details>
#### Specify a personality
```bash
python chat.py -c <character>
```
<details>
<summary> 🎬 Example usage </summary>
https://github.com/tommyjtl/termchat/assets/1622557/9d4ae7d7-d62b-4e28-b428-6b676d3780aa
</details>
### On-demand Terminal Q&A
```bash
python chat.py -q
```
<details>
<summary> 🎬 Example usage </summary>
https://github.com/tommyjtl/termchat/assets/1622557/8b25b39f-3145-4ad8-886e-a39e3d165b9f
</details>
### Chat with PDF
```bash
# Normal usage
python pdf.py -f <file>
# Add --ocr if your PDF doesn't have text layer, default OCR language is English
python pdf.py -f <file> --ocr
# Add --ocr-lang to specify OCR language
# For <lang>, use 3-digit ISO 639-2 Code, see more here: https://github.com/tesseract-ocr/tessdata
python pdf.py -f <file> --ocr --ocr-lang <lang>
```
<details>
<summary> 🎬 Example usage </summary>
https://github.com/tommyjtl/termchat/assets/1622557/40162508-3263-406b-bb7e-27558ae8d618
</details>
## Acknowledgments
- [QueryGPT](https://github.com/tsensei/QueryGPT)
| text/markdown | null | Tommy <i@tjtl.io> | null | null | MIT | openai, gpt, chat, terminal, cli | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"openai==0.27.8",
"termcolor>=2.3.0",
"rich>=13.4.2",
"PyMuPDF>=1.22.3",
"ocrmypdf>=13.7.0",
"pyperclip==1.8.2",
"prompt_toolkit>=3.0.0",
"pytest>=7.0; extra == \"dev\"",
"black>=22.0; extra == \"dev\"",
"flake8>=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/tommyjtl/termchat",
"Repository, https://github.com/tommyjtl/termchat",
"Bug Tracker, https://github.com/tommyjtl/termchatissues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T18:30:06.149888 | termq-0.1.8.tar.gz | 14,530 | 2b/5b/e2b873a5a4d490994f102bea1a7419301ab1356e1e6c27584ea24e05597b/termq-0.1.8.tar.gz | source | sdist | null | false | 298740b55233c9baeadf29d627b5c39d | 7f67347c8e59e36486aa45f05eff58ae4dafd08938c7312219f17ec64fb5beb6 | 2b5be2b873a5a4d490994f102bea1a7419301ab1356e1e6c27584ea24e05597b | null | [
"LICENSE"
] | 204 |
2.1 | reflex-icon-library | 2.1.1 | Thousands of icons for any Reflex project | # The Reflex Icon Library
The Reflex Icon Library (RIL) is your one-stop icon shop for [Reflex](https://reflex.dev/docs) projects.
It includes the icon libraries of [Font Awesome](https://fontawesome.com), [Simple Icons](https://simpleicons.org),
Google's [Material Symbols](https://fonts.google.com/icons),
GitHub's [Octicons](https://primer.style/octicons), [Phosphor](https://phosphoricons.com/),
and [Bootstrap Icons](https://icons.getbootstrap.com/), packaging over 12,000 icons in total. Subscribers to
[Font Awesome Pro](https://ril.celsiusnarhwal.dev/fontawesome/pro) can also enjoy the over 25,000 additional icons
unlocked
by the subscription as well as [full support for Kits](https://ril.celsiusnarhwal.dev/fontawesome/pro#using-a-kit).
```shell
pip install reflex-icon-library
```
For usage instructions, see [the documentation](https://ril.celsiusnarhwal.dev). | text/markdown | null | celsius narhwal <hello@celsiusnarhwal.dev> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"casefy>=0.1.7",
"decorator>=5.2.1",
"deepmerge>=2.0",
"esprima>=4.0.1",
"inflect>=7.4.0",
"jinja2>=3.1.4",
"json5>=0.12.0",
"loguru>=0.7.2",
"packaging>=25.0",
"pydantic>=2.10.0",
"pydantic-extra-types>=2.10.0",
"pydantic-settings>=2.6.1",
"reflex>=0.8.0",
"semver>=3.0.2",
"yarl>=1.18.0"
] | [] | [] | [] | [
"Homepage, https://ril.celsiusnarhwal.dev",
"Repository, https://github.com/celsiusnarhwal/RIL",
"Issues, https://github.com/celsiusnarhwal/RIL/issues",
"Changelog, https://github.com/celsiusnarhwal/RIL/blob/main/CHANGELOG.md"
] | uv/0.6.17 | 2026-02-20T18:29:35.116039 | reflex_icon_library-2.1.1.tar.gz | 15,906 | e1/c1/0c6610551e981c1046c9d230fb167681455584899f1f9d39cefe0e0dd075/reflex_icon_library-2.1.1.tar.gz | source | sdist | null | false | 8e69a9217474fbcdf576d6c66c73673d | 63fa8759261d77f901f8c98d84f22c690aeb597a962d4a3956c66e2ebe6afa68 | e1c10c6610551e981c1046c9d230fb167681455584899f1f9d39cefe0e0dd075 | null | [] | 197 |
2.4 | jobbergate-cli | 5.10.0a1 | Jobbergate CLI Client | # Jobbergate CLI
The Jobbergate CLI provides a command-line interface to view and manage Jobbergate
resources. It can be used to create Job Scripts from templates and then submit them to
the Slurm cluster to which Jobbergate is connected.
Jobbergate CLI is a Python project implemented with the
[Typer](https://typer.tiangolo.com/) CLI builder library. Its dependencies and
environment are managed by [uv](https://docs.astral.sh/uv/).
The CLI has a rich help system that can be accessed by passing the `--help` flag to
the main command:
```shell
jobbergate job-scripts --help
```
There is also help and parameter guides for each of the subcommands that can be accessed
by passing them the `--help` flag:
```shell
jobbergate job-scripts list --help
```
See also:
* [jobbergate-api](https://github.com/omnivector-solutions/jobbergate/jobbergate-api)
## License
* [MIT](./LICENSE)
## Copyright
* Copyright (c) 2020 OmniVector Solutions <info@omnivector.solutions>
| text/markdown | null | Omnivector Solutions <info@omnivector.solutions> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.3.1",
"importlib-metadata>=8.7.0",
"inquirer>=3.4.1",
"jinja2>=3.1.6",
"jobbergate-core==5.10.0a1",
"pydantic-settings>=2.12.0",
"pyperclip>=1.11.0",
"python-dotenv>=1.2.1",
"pyyaml>=6.0.3",
"rich>=14.2.0",
"sentry-sdk>=2.47.0",
"typer>=0.20.0"
] | [] | [] | [] | [
"Repository, https://github.com/omnivector-solutions/jobbergate",
"Bug Tracker, https://github.com/omnivector-solutions/jobbergate/issues",
"Changelog, https://github.com/omnivector-solutions/jobbergate/blob/main/jobbergate-cli/CHANGELOG.rst"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T18:29:19.351303 | jobbergate_cli-5.10.0a1.tar.gz | 72,669 | c6/77/ce64a8ec1f099a4092337570a6d40553ca05ea81fade658a8dbe5134293c/jobbergate_cli-5.10.0a1.tar.gz | source | sdist | null | false | ea15d071d219e68f8293a7a19679bdae | 0c73d096459884da9fecf6ed780e5925c35eea12071e66bb40dd9f27712950d5 | c677ce64a8ec1f099a4092337570a6d40553ca05ea81fade658a8dbe5134293c | null | [
"LICENSE"
] | 182 |
2.4 | jobbergate-api | 5.10.0a1 | Jobbergate API | # Jobbergate API
The Jobbergate API provides a RESTful interface over the Jobbergate data and is used
by both the `jobbergate-agent` and the `jobbergate-cli` to view and manage the
Jobbergate resources.
Jobbergate API is a Python project implemented with
[FastAPI](https://fastapi.tiangolo.com/). Its dependencies and environment are
managed by [uv](https://docs.astral.sh/uv/).
It integrates with an OIDC server to provide identity and auth for its endpoints.
See also:
* [jobbergate-cli](https://github.com/omnivector-solutions/jobbergate/jobbergate-cli)
## License
* [MIT](./LICENSE)
## Copyright
* Copyright (c) 2020 OmniVector Solutions <info@omnivector.solutions>
| text/markdown | null | Omnivector Solutions <info@omnivector.solutions> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Framework :: FastAPI",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aio-pika>=9.5.8",
"aioboto3>=15.5.0",
"alembic>=1.17.2",
"armasec>=3.0.2",
"asyncpg>=0.31.0",
"auto-name-enum>=3.0.0",
"fastapi-pagination>=0.15.3",
"fastapi>=0.124.4",
"greenlet>=3.3.0",
"httpx>=0.28.1",
"inflection>=0.5.1",
"jinja2>=3.1.6",
"loguru>=0.7.3",
"msgpack>=1.1.2",
"nest-asyncio>=1.6.0",
"pendulum[test]>=3.1.0",
"py-buzz>=7.3.0",
"pydantic-settings>=2.12.0",
"pydantic[email]>=2.12.5",
"python-dotenv>=1.2.1",
"python-multipart<0.0.23,>=0.0.20",
"pyyaml>=6.0.3",
"sendgrid>=6.12.5",
"sentry-sdk>=2.47.0",
"snick>=2.2.0",
"sqlalchemy>=2.0.45",
"uvicorn>=0.38.0",
"yarl>=1.22.0"
] | [] | [] | [] | [
"Repository, https://github.com/omnivector-solutions/jobbergate",
"Bug Tracker, https://github.com/omnivector-solutions/jobbergate/issues",
"Changelog, https://github.com/omnivector-solutions/jobbergate/blob/main/jobbergate-api/CHANGELOG.rst"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T18:29:17.871485 | jobbergate_api-5.10.0a1-py3-none-any.whl | 76,739 | f8/db/f58f722ae9782b8d6f33c0e46b43c5eb05597165833d389e776f605160c9/jobbergate_api-5.10.0a1-py3-none-any.whl | py3 | bdist_wheel | null | false | 4af0c4a9d2f0b8c282b63a2e25083c54 | 55372452498478f21fa2347b6ab41941cd94cf4df4020df3578f765eb9094b23 | f8dbf58f722ae9782b8d6f33c0e46b43c5eb05597165833d389e776f605160c9 | null | [
"LICENSE"
] | 180 |
2.4 | jobbergate-agent | 5.10.0a1 | Jobbergate Agent | # Jobbergate-agent
## Install the package
To install the package from PyPI, simply run `pip install jobbergate-agent`.
## Setup parameters
1. Setup dependencies
Dependencies and environment are managed in the project by [uv](https://docs.astral.sh/uv/). To initiate the development environment run:
```bash
make install
```
2. Setup `.env` parameters
```bash
JOBBERGATE_AGENT_BASE_API_URL="<base-api-url>"
JOBBERGATE_AGENT_X_SLURM_USER_NAME="<sbatch-user-name>"
JOBBERGATE_AGENT_SENTRY_DSN="<sentry-dsn-key>"
JOBBERGATE_AGENT_OIDC_DOMAIN="<OIDC-domain>"
JOBBERGATE_AGENT_OIDC_CLIENT_ID="<OIDC-app-client-id>"
JOBBERGATE_AGENT_OIDC_CLIENT_SECRET="<OIDC-app-client-secret>"
```
**Note**: `JOBBERGATE_AGENT_SENTRY_DSN` is optional. If it is not provided, the agent will simply not use Sentry.
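The optional behavior can be pictured as follows — a sketch only, not the agent's actual code (the function name is hypothetical):
```python
import os

def sentry_dsn_from_env(environ=os.environ):
    """Return the configured Sentry DSN, or None when Sentry should stay disabled."""
    dsn = environ.get("JOBBERGATE_AGENT_SENTRY_DSN", "").strip()
    return dsn or None
```
When the variable is absent or empty, the helper returns `None`, and Sentry initialization is skipped.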
## Local usage example
1. Run app
```bash
jg-run
```
**Note**: this command assumes you're inside a virtual environment in which the package is installed.
**Note**: the user name used to run the agent must also exist on the slurmctld node. For example, if the user `cluster_agent` runs the `make run` command, then the slurmctld node must also have a user named `cluster_agent`.
| text/markdown | null | Omnivector Solutions <info@omnivector.solutions> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"apscheduler==3.11.2",
"auto-name-enum>=3.0.0",
"influxdb>=5.3.2",
"jobbergate-core==5.10.0a1",
"ldap3>=2.9.1",
"msgpack>=1.1.2",
"numba>=0.63.1",
"pluggy>=1.6.0",
"pydantic-settings>=2.12.0",
"pydantic[email]>=2.12.5",
"python-dotenv>=1.2.1",
"sentry-sdk>=2.47.0",
"types-ldap3>=2.9.13.14"
] | [] | [] | [] | [
"Repository, https://github.com/omnivector-solutions/jobbergate",
"Bug Tracker, https://github.com/omnivector-solutions/jobbergate/issues",
"Changelog, https://github.com/omnivector-solutions/jobbergate/blob/main/jobbergate-agent/CHANGELOG.rst"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T18:29:16.009454 | jobbergate_agent-5.10.0a1.tar.gz | 48,989 | 46/5e/bd1ad565e4994b30ce785060e9f1ca6d057eb0ac069d082649241bc53903/jobbergate_agent-5.10.0a1.tar.gz | source | sdist | null | false | 5592683487a8feb1b73e61c853125b2b | acfebbbf689080263dfff1598fe258be7138d030e8a3f0945cbc9a301078fa7e | 465ebd1ad565e4994b30ce785060e9f1ca6d057eb0ac069d082649241bc53903 | null | [
"LICENSE"
] | 189 |
2.4 | jobbergate-core | 5.10.0a1 | Jobbergate Core | # Jobbergate Core
Jobbergate-core is a sub-project that contains the key components and logic that is shared among all other sub-projects
(CLI, API, and Agent). Additionally, jobbergate-core exists to support custom automation built on top of Jobbergate.
## License
* [MIT](LICENSE)
## Copyright
* Copyright (c) 2023 OmniVector Solutions <info@omnivector.solutions>
| text/markdown | null | Omnivector Solutions <info@omnivector.solutions> | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.28.1",
"loguru>=0.7.3",
"pendulum[test]>=3.1.0",
"py-buzz>=7.3.0",
"pydantic>=2.12.5",
"python-jose>=3.5.0"
] | [] | [] | [] | [
"Repository, https://github.com/omnivector-solutions/jobbergate",
"Bug Tracker, https://github.com/omnivector-solutions/jobbergate/issues",
"Changelog, https://github.com/omnivector-solutions/jobbergate/blob/main/jobbergate-core/CHANGELOG.rst"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T18:29:15.160327 | jobbergate_core-5.10.0a1-py3-none-any.whl | 29,274 | 1a/17/10e794bafe07161744d108abe56d731e6fbf1c99c16765bdd727bfd2377e/jobbergate_core-5.10.0a1-py3-none-any.whl | py3 | bdist_wheel | null | false | c58063250c7434573584adc2d2051ffd | 59b9d873a3babb90eff9bf456098bf6b27a779a06f86ec335a370ed5a9b31770 | 1a1710e794bafe07161744d108abe56d731e6fbf1c99c16765bdd727bfd2377e | null | [
"LICENSE"
] | 201 |
2.4 | isage-studio | 0.2.4.3 | SAGE Studio - Visual workflow builder and LLM playground for SAGE AI pipelines | # SAGE Studio
## 📋 Overview
**SAGE Studio** is a modern low-code web UI package for visually developing and managing SAGE data pipelines.
> **Package name**: `isage-studio`\
> **Tech stack**: React 18 + TypeScript + FastAPI
## 🏗️ Architecture Overview
Studio uses a **decoupled frontend/backend** architecture that plugs directly into the SAGE core engine:
```
┌─────────────────────────────────────────────────────────┐
│                 Frontend (React + Vite)                 │
│  ┌───────────────┐ ┌──────────────┐ ┌──────────────┐    │
│  │  Flow Editor  │ │  Playground  │ │  Properties  │    │
│  │ (canvas edit) │ │ (chat tests) │ │(config panel)│    │
│  └───────────────┘ └──────────────┘ └──────────────┘    │
└─────────────────────────────────────────────────────────┘
                      ⬇ HTTP/REST API
┌─────────────────────────────────────────────────────────┐
│                Backend (FastAPI - api.py)               │
│  • Node Registry                                        │
│  • Pipeline Builder                                     │
│  • API endpoints (flows, operators, execution)          │
└─────────────────────────────────────────────────────────┘
                      ⬇ Python API
┌─────────────────────────────────────────────────────────┐
│                    SAGE Core Engine                     │
│  ┌─────────────────────────────────────────────────┐    │
│  │ sage-kernel (Environment, DataStream API)       │    │
│  ├─────────────────────────────────────────────────┤    │
│  │ sage-middleware (RAG Operators: Generator,      │    │
│  │   Retriever, Reranker, Promptor, Chunker...)    │    │
│  ├─────────────────────────────────────────────────┤    │
│  │ sage-libs (IO: FileSource, PrintSink...)        │    │
│  └─────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────┘
```
### Key Components
1. **PipelineBuilder** (`services/pipeline_builder.py`)
   - Converts a visual Flow into a SAGE DataStream pipeline
   - Topologically sorts the nodes and builds the execution graph
   - Maps nodes to SAGE Operator classes
1. **NodeRegistry** (`services/node_registry.py`)
   - Manages the mapping of UI node types → SAGE Operators
   - Pre-registers all SAGE middleware operators
   - Supports custom operator extensions
1. **Backend API** (`config/backend/api.py`)
   - FastAPI service exposing a RESTful interface
   - Handles Flow save/load and execution requests
   - Invokes PipelineBuilder to build and execute SAGE pipelines
## 🚀 Installation
### Environment Requirements
- **Python**: 3.10+ (required)
- **Node.js**: 18+ (LTS recommended)
- **SAGE**: full installation (including kernel, middleware, libs)
### Quick Install (Recommended)
Use `quickstart.sh` to install all dependencies in one step:
```bash
# Clone the repository
git clone https://github.com/intellistream/sage-studio.git
cd sage-studio
# Run the quick-install script
./quickstart.sh
```
**What the script does:**
- ✅ Checks the Python/Node.js environment
- ✅ Creates/activates a virtual environment
- ✅ Installs Python dependencies (development mode)
- ✅ Installs frontend dependencies (npm)
- ✅ Verifies the SAGE core dependencies
- ✅ Prints next-step instructions
### Manual Installation
```bash
# Option 1: install via the SAGE meta-package (recommended for production)
pip install isage  # automatically includes isage-studio
# Option 2: install in development mode
cd sage-studio
pip install -e ".[dev]"
# Install frontend dependencies
cd src/sage/studio/frontend
npm install
# Verify the installation
python -c "from sage.studio.studio_manager import StudioManager; print('✓ Studio installed')"
```
## 📖 Quick Start
### 🎯 Option 1: Use the SAGE CLI (Recommended)
```bash
# Start Studio (frontend + backend)
sage studio start
# Or use production mode (requires a prior build)
sage studio start --prod
# Check running status
sage studio status
# Open in the browser
sage studio open
# View logs
sage studio logs            # frontend logs
sage studio logs --backend  # backend logs
# Stop the services
sage studio stop
# Manage frontend dependencies
sage studio npm install     # install/update npm dependencies
sage studio npm run lint    # run frontend scripts
```
### 🖥️ CPU Inference Support (New)
**SAGE Studio now supports CPU-friendly small-model inference — no GPU required!**
#### Quick-Start a CPU Model
```bash
# Method 1: use the one-click startup script (recommended)
cd sage-studio
./start_cpu_model.sh
# Method 2: start manually (custom configuration)
sage llm engine start Qwen/Qwen2.5-0.5B-Instruct --engine-kind llm --port 8901
```
#### Available CPU Models
| Model | Size | Memory | Recommended For |
|------|------|---------|---------|
| **Qwen/Qwen2.5-0.5B-Instruct** ⭐ | 0.5B | ~2GB | Fastest; good for prototyping |
| **TinyLlama-1.1B-Chat** | 1.1B | ~2.5GB | Lightweight chat; pipeline testing |
| **Qwen/Qwen2.5-1.5B-Instruct** | 1.5B | ~4GB | Balanced quality and speed |
| **Qwen/Qwen2.5-3B-Instruct** | 3B | ~8GB | Better quality (CPU or GPU) |
⭐ = most recommended CPU model
#### Usage Steps
1. **Start the model**: run one of the commands above to launch a CPU model
2. **Refresh the page**: reload the Studio page in your browser
3. **Select the model**: click the model selector in the top-right corner and choose the running model
4. **Start using it**: once the status indicator turns green, you are ready to go
📖 **Full guide**: see [`docs/CPU_INFERENCE_GUIDE.md`](docs/CPU_INFERENCE_GUIDE.md) for detailed configuration and optimization tips.
## 🔐 Authentication & Security
SAGE Studio v2.0 introduces a complete user-authentication and data-isolation system:
### 1. User Authentication
- **Register/Login**: first-time users must register an account.
- **JWT tokens**: sessions are managed with JWTs, which expire automatically.
- **Secure storage**: passwords are hashed with the Argon2 algorithm.
### 2. Data Isolation
- **Multi-user support**: every user gets an independent workspace.
- **Data paths**: user data is stored under `~/.local/share/sage/users/{user_id}/`.
  - `pipelines/`: saved pipeline configurations
  - `sessions/`: chat session records
  - `uploads/`: uploaded files
- **Privacy**: users can only access the resources they created.
**Access URLs**:
- 🌐 Frontend: http://localhost:${STUDIO_FRONTEND_PORT}
- 🔌 Backend: http://localhost:${STUDIO_BACKEND_PORT}
**Note**: for first-time use or debugging, prefer the `--dev` development mode — it starts faster and supports hot reload.
### 🎯 Option 2: Manual Start (Development/Debugging)
```bash
# Terminal 1: start the backend
cd packages/sage-studio
python -m sage.studio.config.backend.api
# Backend runs at: http://localhost:${STUDIO_BACKEND_PORT}
# Terminal 2: start the frontend
cd packages/sage-studio/src/sage/studio/frontend
sage studio npm install
sage studio npm run dev
# Frontend runs at: http://localhost:${STUDIO_FRONTEND_PORT}
```
### Check Service Status
```bash
# Check ports
lsof -i :${STUDIO_BACKEND_PORT}   # backend
lsof -i :${STUDIO_FRONTEND_PORT}  # frontend
# Check backend health
curl http://localhost:${STUDIO_BACKEND_PORT}/health
# View logs
tail -f /tmp/sage-studio-backend.log
tail -f /tmp/sage-studio-frontend.log
```
## 💡 Usage Guide
### 1. Create a Pipeline
**Steps**:
1. Open http://localhost:${STUDIO_FRONTEND_PORT} in a browser
1. Drag nodes from the left-hand node palette onto the canvas
1. Connect the nodes to create a data flow
1. Click a node to configure its parameters (right-hand properties panel)
1. Click the "Save" button in the toolbar
**Example RAG pipeline**:
```
FileSource → SimpleRetriever → BGEReranker → QAPromptor → OpenAIGenerator → PrintSink
```
### 2. Use the Playground
The **Playground** is a conversational test interface that lets you interact with a pipeline directly.
**Steps**:
1. After saving a Flow, click the "💬 Playground" button in the toolbar
1. Type a message into the input box (e.g. a query)
1. Press Enter to send (Shift+Enter for a new line)
1. Inspect the AI response and the execution steps
**Features**:
- ✅ Real-time pipeline execution
- ✅ Agent step display (the execution of each node)
- ✅ Code generation (Python / cURL)
- ✅ Session management (multi-turn conversations)
### 3. Core Features
#### Canvas Editing
- 🎨 Drag and drop nodes onto the canvas
- 🔗 Connect nodes to create data flows
- ⚙️ Configure node parameters dynamically
- 🔍 Canvas zoom and navigation
#### Flow Management
- 💾 Save/load flows
- 📋 Browse the flow list
- 🗑️ Delete flows
- 📤 Export flows as JSON files
- 📥 Import flow configurations
#### MVP Enhancements ✨ (v0.2.0-alpha)
**1. Node Output Preview**
- Inspect node execution output in real time
- Pretty-printed JSON display
- View raw data and error messages
- Usage: select a node → right-hand properties panel → "Output Preview" tab
**2. Flow Import/Export**
- Export a complete flow configuration as JSON
- Import a flow from a file
- Supports flow sharing and backup
- Usage: toolbar → "Export" / "Import" buttons
**3. Environment Variable Management**
- Manage API keys and other settings graphically
- Secure input for password fields
- Supports incremental updates
- Usage: toolbar → "Settings" button (gear icon)
**4. Live Log Viewer**
- Terminal-style log display
- Filter by node/level
- Auto-scroll and export
- Usage: bottom status bar → "Show Logs" button
📖 **Details**: see [MVP_ENHANCEMENT.md](./MVP_ENHANCEMENT.md) for the full feature description
#### Keyboard Shortcuts
- `Ctrl+S`: save the flow
- `Ctrl+Z`: undo
- `Ctrl+Shift+Z` / `Ctrl+Y`: redo
- `Delete`: delete the selected node
- `Escape`: clear the selection
### 4. Frontend Development
```bash
cd src/sage/studio/frontend
# Development mode
sage studio npm run dev      # start the Vite dev server (localhost:$STUDIO_FRONTEND_PORT)
# Production build
sage studio npm run build    # build into dist/
sage studio npm run preview  # preview the build
# Code quality
sage studio npm run lint     # ESLint checks
sage studio npm run format   # Prettier formatting
```
### 5. Backend Development
```bash
cd packages/sage-studio
# Run directly
python -m sage.studio.config.backend.api
# Verify it is running
curl http://localhost:${STUDIO_BACKEND_PORT}/health
# View the API docs
open http://localhost:${STUDIO_BACKEND_PORT}/docs  # Swagger UI
```
## 📂 Directory Structure
```
sage-studio/
├── README.md                        # This file ⭐
├── pyproject.toml                   # Package configuration and dependencies
│
├── src/sage/studio/
│   ├── __init__.py
│   ├── studio_manager.py            # Studio manager ⭐
│   │
│   ├── config/backend/
│   │   └── api.py                   # FastAPI backend ⭐
│   │
│   ├── services/                    # Core services ⭐
│   │   ├── node_registry.py         # Node registry (UI → SAGE Operator mapping)
│   │   └── pipeline_builder.py      # Pipeline builder (converts to SAGE DataStream)
│   │
│   ├── models/                      # Data models
│   │   └── __init__.py              # VisualPipeline, VisualNode, VisualConnection
│   │
│   ├── data/operators/              # Node definition JSON files
│   │   ├── FileSource.json
│   │   ├── SimpleRetriever.json
│   │   ├── OpenAIGenerator.json
│   │   └── ...
│   │
│   └── frontend/                    # React frontend ⭐
│       ├── package.json             # Frontend dependencies
│       ├── vite.config.ts           # Vite configuration
│       ├── tsconfig.json            # TypeScript configuration
│       └── src/
│           ├── App.tsx              # Main application component
│           ├── components/          # React components
│           │   ├── FlowEditor.tsx   # React Flow canvas
│           │   ├── Toolbar.tsx      # Toolbar (save/load/run)
│           │   ├── NodePalette.tsx  # Node palette
│           │   ├── PropertiesPanel.tsx  # Property configuration
│           │   └── Playground.tsx   # Playground chat interface
│           ├── store/               # Zustand state management
│           │   ├── flowStore.ts     # Flow editing state
│           │   └── playgroundStore.ts  # Playground state
│           ├── hooks/               # Custom hooks
│           │   ├── useJobStatusPolling.ts
│           │   └── useKeyboardShortcuts.ts
│           └── services/            # API client
│               └── api.ts           # Backend API wrapper
│
└── tests/
    ├── test_node_registry.py        # Node registry tests
    ├── test_pipeline_builder.py     # Pipeline builder tests
    └── test_studio_cli.py           # CLI command tests
```
## 🔧 How It Works
### From Visual Flow to Execution
```
1️⃣ The user creates a Flow in the UI
    └─> VisualPipeline (nodes + connections)
2️⃣ The Flow is saved
    └─> Serialized to JSON → .sage/pipelines/pipeline_xxx.json
3️⃣ The user clicks "Run" / sends a Playground message
    └─> POST /api/playground/execute
        │
        ├─> Load the Flow JSON
        │
        ├─> PipelineBuilder.build(visual_pipeline)
        │     ├─> Topologically sort the nodes (determine execution order)
        │     ├─> Look up Operator classes in the NodeRegistry
        │     └─> Build the pipeline with the SAGE DataStream API:
        │           env.from_source(...)
        │              .map(Retriever, ...)
        │              .map(Reranker, ...)
        │              .map(Promptor, ...)
        │              .map(Generator, ...)
        │              .sink(PrintSink)
        │
        └─> env.execute() → executed by the SAGE engine
              └─> Results returned to the frontend
```
### Core Services in Detail
#### 1. NodeRegistry
**Responsibility**: manage the mapping of UI node types → SAGE Operator classes
**Example mapping**:
```python
{
    "generator": OpenAIGenerator,   # sage-middleware
    "retriever": ChromaRetriever,   # sage-middleware
    "reranker": BGEReranker,        # sage-middleware
    "promptor": QAPromptor,         # sage-middleware
    "chunker": CharacterSplitter,   # sage-libs
    "evaluator": F1Evaluate,        # sage-middleware
}
```
**Extending it**:
```python
from sage.studio.services import get_node_registry
registry = get_node_registry()
registry.register("my_custom_op", MyCustomOperator)
```
#### 2. PipelineBuilder
**Responsibility**: convert a VisualPipeline into an executable SAGE pipeline
**Key steps**:
1. **Validate**: check that node types are registered and connections are valid
1. **Topological sort**: use Kahn's algorithm to determine execution order and detect cyclic dependencies
1. **Build the DataStream**:
```python
env = LocalEnvironment()
stream = env.from_source(FileSource, "data.txt")
stream = stream.map(Retriever, config={...})
stream = stream.map(Generator, config={...})
stream.sink(PrintSink)
```
1. **Return the Environment**: the caller runs `env.execute()`
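The Kahn-style topological sort used to order the nodes can be sketched as a stand-alone function. This is illustrative only — the node ids and edge format here are assumptions, not the actual `PipelineBuilder` internals:
```python
from collections import deque

def topological_order(nodes, edges):
    """Kahn's algorithm: return nodes in execution order, or raise on a cycle.

    nodes: iterable of node ids; edges: list of (upstream, downstream) pairs.
    """
    indegree = {n: 0 for n in nodes}
    downstream = {n: [] for n in nodes}
    for src, dst in edges:
        downstream[src].append(dst)
        indegree[dst] += 1
    # Start from nodes with no incoming connections (the sources).
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in downstream[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(indegree):
        raise ValueError("cycle detected in the flow graph")
    return order
```
If any node is never drained (its in-degree never reaches zero), the graph contains a cycle and the build is rejected.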
#### 3. Backend API (FastAPI service)
**Main endpoints**:
- `GET /api/operators`: list all available node types
- `POST /api/pipeline/submit`: save a Flow
- `GET /api/jobs/all`: list all pipelines (including saved Flows)
- `POST /api/playground/execute`: run a Playground conversation
- `GET /api/signal/status/{job_id}`: query execution status
**Data storage**:
- `.sage/pipelines/`: Flow JSON files
- `.sage/states/`: runtime state
- `.sage/configs/`: pipeline configurations
### Tech Stack
#### Frontend
```
React 18.2 + TypeScript 5.2
├── React Flow 11.10.4   # Visual graph editor
├── Ant Design 5.12      # UI component library
├── Zustand 4.4.7        # State management
├── Axios 1.6.2          # HTTP client
└── Vite 5.0.8           # Build tool
```
#### Backend
```
FastAPI + Python 3.10+
├── Pydantic 2.0         # Data validation
├── Uvicorn              # ASGI server
├── sage-kernel          # Environment, DataStream API
├── sage-middleware      # RAG operators
└── sage-libs            # IO: Source, Sink
```
### Data Flow
```
Frontend (localhost:$STUDIO_FRONTEND_PORT)
    ↓ HTTP REST
Backend API (localhost:$STUDIO_BACKEND_PORT)
    ↓ Python API
SAGE engine
    ├─> sage-kernel      (execution engine)
    ├─> sage-middleware  (operator library)
    └─> sage-libs        (IO utilities)
```
## 🛠️ Development Guide
### Adding a Custom Node
**Step 1**: implement a SAGE Operator
```python
# my_custom_package/my_operator.py
from sage.common.core import MapOperator

class MyCustomOperator(MapOperator):
    """Custom operator"""
    def __init__(self, config: dict):
        super().__init__(config)
        self.param = config.get("param", "default")

    def execute(self, data):
        # Implement the operator logic here
        result = self.process(data)
        return result
```
**Step 2**: register it with the NodeRegistry
```python
# Add this in node_registry.py
from my_custom_package.my_operator import MyCustomOperator

def _register_default_operators(self):
    # ...existing registrations...
    # Custom operator
    self._registry["my_custom"] = MyCustomOperator
```
**Step 3**: create the node definition JSON
```json
// data/operators/MyCustomOperator.json
{
    "id": 999,
    "name": "MyCustomOperator",
    "description": "My custom operator",
    "module_path": "my_custom_package.my_operator",
    "class_name": "MyCustomOperator",
    "isCustom": true
}
```
**Step 4**: restart Studio
```bash
sage studio stop
sage studio start
```
The new node is now available in the UI!
### Extending Data Sources
Supported source types (in `PipelineBuilder._create_source`):
- `file`: generic file source
- `json_file`: JSON file
- `csv_file`: CSV file
- `text_file`: text file
- `socket`: network socket
- `kafka`: Kafka topic
- `database`: database query
- `api`: HTTP API
**Adding a new data source**:
```python
# Add this in _create_source in pipeline_builder.py
elif source_type == "my_source":
    # Custom parameters
    param1 = node.config.get("param1")
    param2 = node.config.get("param2")
    return MyCustomSource, (param1, param2), {}
```
### Debugging Tips
```bash
# 1. Check port usage
lsof -i :${STUDIO_FRONTEND_PORT}  # frontend
lsof -i :${STUDIO_BACKEND_PORT}   # backend
# 2. View logs
sage studio logs            # frontend logs
sage studio logs --backend  # backend logs
# Or inspect the log files directly
tail -f ~/.sage/studio_backend.log
tail -f ~/.sage/studio.log
# 3. Test the backend API
curl http://localhost:${STUDIO_BACKEND_PORT}/health
curl http://localhost:${STUDIO_BACKEND_PORT}/api/operators
# 4. Clear caches
rm -rf ~/.sage/studio/node_modules
rm -rf ~/.sage/studio/.vite
rm -rf ~/.sage/pipelines/*
rm -rf ~/.sage/states/*
# 5. Reinstall dependencies
cd src/sage/studio/frontend
rm -rf node_modules package-lock.json
sage studio npm install
# 6. Python debugging
python -m pdb -m sage.studio.config.backend.api
# 7. Inspect the SAGE pipeline build process
# by adding print statements in api.py
print(f"Building pipeline: {visual_pipeline}")
```
### Unit Tests
```bash
# Run all tests
cd packages/sage-studio
pytest tests/
# Run specific tests
pytest tests/test_node_registry.py
pytest tests/test_pipeline_builder.py
# With coverage
pytest --cov=src/sage/studio tests/
```
### Code Quality
```bash
# Python code formatting
cd packages/sage-studio
black src/
isort src/
# Type checking
mypy src/
# Linting
ruff check src/
# Frontend formatting
cd src/sage/studio/frontend
sage studio npm run format
sage studio npm run lint
```
## 📋 Dependencies
### Python Dependencies
**SAGE core components** (required):
- `isage-kernel>=0.1.0` - execution engine (Environment, DataStream API)
- `isage-middleware>=0.1.0` - RAG operator library (Generator, Retriever, Reranker...)
- `isage-libs>=0.1.0` - IO utilities (FileSource, PrintSink...)
- `isage-common>=0.1.0` - common components
**Web framework**:
- `fastapi>=0.115,<0.116` - REST API framework
- `uvicorn[standard]>=0.34.0` - ASGI server
- `pydantic>=2.0.0` - data validation
**Utility libraries**:
- `psutil` - process management (StudioManager)
- `requests` - HTTP client
- `rich` - terminal UI
### Frontend Dependencies
**Core framework**:
- `react@^18.2.0` - UI framework
- `react-dom@^18.2.0` - DOM rendering
- `typescript@^5.2.2` - type system
**UI components**:
- `reactflow@^11.10.4` - flow-chart editor
- `antd@^5.12.0` - Ant Design component library
- `lucide-react@^0.294.0` - icon library
**State management**:
- `zustand@^4.4.7` - lightweight state management
**Build tools**:
- `vite@^5.0.8` - dev server and build tool
- `@vitejs/plugin-react@^4.2.1` - React plugin
See `pyproject.toml` and `frontend/package.json` for the full dependency lists.
## 🐛 Troubleshooting
### Common Issues
#### 1. Backend Not Responding
```bash
# Check the process
ps aux | grep "sage.studio.config.backend.api"
# Check the port
lsof -i :${STUDIO_BACKEND_PORT}
# View the logs
tail -f /tmp/sage-studio-backend.log
# Restart the backend
kill -9 <PID>
python -m sage.studio.config.backend.api &
```
**Possible causes**:
- ❌ SAGE packages not installed correctly → `pip install -e packages/sage-kernel packages/sage-middleware packages/sage-libs`
- ❌ Missing dependencies → `pip install -e packages/sage-studio`
- ❌ Port already in use → `lsof -i :${STUDIO_BACKEND_PORT}` to find the offending process
#### 2. Frontend Build/Startup Errors
```bash
cd src/sage/studio/frontend
# Clear caches
rm -rf node_modules package-lock.json .vite
# Reinstall
sage studio npm install
# Start
sage studio npm run dev
```
**Possible causes**:
- ❌ Node.js version too old → 18+ required
- ❌ Corrupted npm dependencies → delete `node_modules` and reinstall
- ❌ Port already in use → Vite automatically tries the next available port
#### 3. Pipeline Execution Failures
```bash
# View the detailed error
tail -f /tmp/sage-studio-backend.log
# Check the SAGE installation
python -c "from sage.kernel.api import LocalEnvironment; print('✓ kernel OK')"
python -c "from sage.middleware.rag import OpenAIGenerator; print('✓ middleware OK')"
python -c "from sage.libs.io.source import FileSource; print('✓ libs OK')"
```
**Possible causes**:
- ❌ Node type not registered → check `node_registry.py`
- ❌ Bad node configuration → verify the node parameters
- ❌ SAGE Operator import failure → check the package installation
#### 4. Playground Not Responding
```bash
# Check that the Flow was saved
ls ~/.sage/pipelines/
# Check the backend API
curl -X POST http://localhost:${STUDIO_BACKEND_PORT}/api/playground/execute \
-H "Content-Type: application/json" \
-d '{"flowId": "pipeline_xxx", "input": "test", "sessionId": "test"}'
```
**Possible causes**:
- ❌ Flow not saved → save the Flow first
- ❌ Backend not running → check with `lsof -i :${STUDIO_BACKEND_PORT}`
- ❌ Network request failed → check the browser console
#### 5. Port Conflicts
```bash
# Find the processes using the ports
lsof -i :${STUDIO_FRONTEND_PORT}  # frontend
lsof -i :${STUDIO_BACKEND_PORT}   # backend
# Kill them
kill -9 $(lsof -t -i:${STUDIO_FRONTEND_PORT})
kill -9 $(lsof -t -i:${STUDIO_BACKEND_PORT})
# Or use the SAGE CLI
sage studio stop
sage studio start --dev
```
#### 6. Environment Problems
```bash
# Check the Python version
python --version  # 3.10+ required
# Check the SAGE installation
pip list | grep isage
# Check the Node.js version
node --version  # 18+ required
npm --version
# Check the working directory
pwd  # should be the SAGE project root or the sage-studio directory
```
### Full Reset
If problems persist, try a full reset:
```bash
# 1. Stop all services
sage studio stop
kill -9 $(lsof -t -i:${STUDIO_BACKEND_PORT})
kill -9 $(lsof -t -i:${STUDIO_FRONTEND_PORT})
# 2. Clear caches
rm -rf ~/.sage/studio/
rm -rf ~/.sage/cache/
rm -rf /tmp/sage-studio-*.log
# 3. Reinstall frontend dependencies
cd packages/sage-studio/src/sage/studio/frontend
rm -rf node_modules package-lock.json .vite
sage studio npm install
# 4. Reinstall the Python package
cd packages/sage-studio
pip install -e .
# 5. Restart
sage studio start --dev
```
## 📄 License
MIT License - see [LICENSE](../../LICENSE) for details.
| text/markdown | null | IntelliStream Team <shuhao_zhang@hust.edu.cn> | null | null | MIT | sage, studio, low-code, web-ui, pipeline, development, workflow, llm, ai, intellistream | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: System :: Distributed Computing",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"isage>=0.2.4.17",
"isage-flownet>=0.1.1",
"isagellm>=0.5.1.3",
"isage-agentic>=0.1.0.2",
"isage-sias>=0.1.0",
"isage-libs-intent>=0.1.0.3",
"isage-neuromem>=0.2.1.4",
"isage-data>=0.2.3.2",
"isage-finetune>=0.1.0.4",
"pydantic[email]<3.0.0,>=2.10.0",
"fastapi<1.0.0,>=0.115.0",
"starlette<0.51,>=0.40",
"uvicorn[standard]<1.0.0,>=0.34.0",
"h11<1.0.0,>=0.16.0",
"websockets>=11.0",
"python-multipart>=0.0.6",
"aiofiles>=23.0.0",
"structlog>=23.0.0",
"pydantic-settings>=2.0.0",
"configparser>=5.3.0",
"httpx<1.0.0,>=0.28.0",
"passlib[argon2]<2.0.0,>=1.7.4",
"python-jose[cryptography]<4.0.0,>=3.5.0",
"jinja2<4.0.0,>=3.1.0",
"markdown<4.0.0,>=3.4.4",
"markupsafe>=2.0.1",
"packaging<26.0,>=24.0",
"isage-middleware>=0.2.4.13",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"ruff==0.14.6; extra == \"dev\"",
"isage-pypi-publisher>=0.2.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/intellistream/sage-studio",
"Documentation, https://intellistream.github.io/SAGE-Pub/",
"Repository, https://github.com/intellistream/sage-studio.git",
"Bug Tracker, https://github.com/intellistream/sage-studio/issues",
"Parent Project, https://github.com/intellistream/SAGE"
] | twine/6.2.0 CPython/3.11.11 | 2026-02-20T18:29:13.836196 | isage_studio-0.2.4.3.tar.gz | 20,534,374 | b6/ce/47c48cc762bdcc5a650a7002e1cf446165259f9033579c301b20fcc3c15b/isage_studio-0.2.4.3.tar.gz | source | sdist | null | false | 822057fda63d0831af709c5a7fc7b28c | 9c34529e3357017d16171e35e77f206ce58d23c0c5058202d4a88ed96dfd66e4 | b6ce47c48cc762bdcc5a650a7002e1cf446165259f9033579c301b20fcc3c15b | null | [
"LICENSE"
] | 205 |
2.4 | hyperspacedb | 2.2.1 | Fastest Hyperbolic Vector DB Client | # HyperspaceDB Python SDK
Official Python client for HyperspaceDB gRPC API (v2.2.1).
The SDK is designed for production services and benchmark tooling:
- collection management
- single and batch insert
- single and batch vector search
- graph traversal API methods
- optional embedder integrations
- multi-tenant metadata headers
## Requirements
- Python 3.8+
- Running HyperspaceDB server (default gRPC endpoint: `localhost:50051`)
## Installation
```bash
pip install hyperspacedb
```
Optional embedder extras:
```bash
pip install "hyperspacedb[openai]"
pip install "hyperspacedb[all]"
```
## Quick Start
```python
from hyperspace import HyperspaceClient
client = HyperspaceClient("localhost:50051", api_key="I_LOVE_HYPERSPACEDB")
collection = "docs_py"
client.delete_collection(collection)
client.create_collection(collection, dimension=3, metric="cosine")
client.insert(
id=1,
vector=[0.1, 0.2, 0.3],
metadata={"source": "demo"},
collection=collection,
)
results = client.search(
vector=[0.1, 0.2, 0.3],
top_k=5,
collection=collection,
)
print(results)
client.close()
```
## Batch Search (Recommended for Throughput)
```python
queries = [
[0.1, 0.2, 0.3],
[0.3, 0.1, 0.4],
]
batch_results = client.search_batch(
vectors=queries,
top_k=10,
collection="docs_py",
)
```
`search_batch` reduces per-request RPC overhead and should be preferred for high concurrency.
## API Summary
### Collection Operations
- `create_collection(name, dimension, metric) -> bool`
- `delete_collection(name) -> bool`
- `list_collections() -> list[str]`
- `get_collection_stats(name) -> dict`
### Data Operations
- `insert(id, vector=None, document=None, metadata=None, collection="", durability=Durability.DEFAULT) -> bool`
- `batch_insert(vectors, ids, metadatas=None, collection="", durability=Durability.DEFAULT) -> bool`
- `search(vector=None, query_text=None, top_k=10, filter=None, filters=None, hybrid_query=None, hybrid_alpha=None, collection="") -> list[dict]`
- `search_batch(vectors, top_k=10, collection="") -> list[list[dict]]`
For `filters` with `type="range"`, decimal thresholds are supported (the `gte_f64`/`lte_f64` fields in the gRPC payload are set automatically for non-integer values).
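The integer-versus-float field selection described above can be pictured with a small helper. This is a sketch of the idea only — the exact payload layout is an assumption, not the SDK's actual serialization code:
```python
def range_filter_fields(gte=None, lte=None):
    """Sketch: lower range bounds into a gRPC-style payload dict.

    Integer thresholds use gte/lte; non-integer thresholds use gte_f64/lte_f64,
    mirroring the automatic selection described above.
    """
    payload = {}
    for name, value in (("gte", gte), ("lte", lte)):
        if value is None:
            continue
        if isinstance(value, int):
            payload[name] = value
        else:
            payload[f"{name}_f64"] = float(value)
    return payload
```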
### Maintenance Operations
- `rebuild_index(collection, filter_query=None) -> bool`
- `trigger_vacuum() -> bool`
- `trigger_snapshot() -> bool`
- `configure(ef_search=None, ef_construction=None, collection="") -> bool`
- `subscribe_to_events(types=None, collection=None) -> Iterator[dict]`
`filter_query` example:
```python
client.rebuild_index(
"docs_py",
filter_query={"key": "energy", "op": "lt", "value": 0.1},
)
```
CDC subscription example:
```python
for event in client.subscribe_to_events(types=["insert", "delete"], collection="docs_py"):
print(event)
```
### Hyperbolic Math Utilities
```python
from hyperspace import (
mobius_add,
exp_map,
log_map,
parallel_transport,
riemannian_gradient,
frechet_mean,
)
```
## Durability Levels
Use `Durability` enum values:
- `Durability.DEFAULT`
- `Durability.ASYNC`
- `Durability.BATCH`
- `Durability.STRICT`
## Multi-Tenancy
Pass `user_id` to include `x-hyperspace-user-id` on all requests:
```python
client = HyperspaceClient(
"localhost:50051",
api_key="I_LOVE_HYPERSPACEDB",
user_id="tenant_a",
)
```
## Best Practices
- Reuse one client instance per worker/process.
- Prefer `search_batch` for benchmark and high-QPS paths.
- Chunk large inserts instead of one huge request.
- Keep vector dimensionality aligned with collection configuration.
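A minimal sketch of the chunking advice above: split a large payload into fixed-size batches before calling `batch_insert`, rather than sending one huge request. The helper itself is plain Python; the commented usage assumes a connected `HyperspaceClient`.

```python
# Yield successive fixed-size chunks from a list of items, so large inserts
# become several bounded batch_insert calls instead of one oversized request.
def chunked(items, size=500):
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Usage sketch (client assumed to be a connected HyperspaceClient):
# for batch in chunked(list(zip(ids, vectors)), size=500):
#     batch_ids = [i for i, _ in batch]
#     batch_vecs = [v for _, v in batch]
#     client.batch_insert(batch_vecs, batch_ids, collection="docs_py")
```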
## Error Handling
The SDK catches gRPC errors and returns `False` / `[]` in many methods.
For strict production observability, log return values and attach metrics around failed operations.
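Since many methods signal failure with a falsy return value rather than an exception, one hedged pattern is a thin wrapper that turns those returns into exceptions (and a natural place to hook metrics). This is an illustrative sketch, not part of the SDK:

```python
# Raise if an SDK call reported failure via its falsy return value.
# Note: an empty search result can be legitimate, so apply this selectively
# (e.g. to bool-returning methods like insert or create_collection).
def strict(result, operation):
    if result is False:
        raise RuntimeError(f"hyperspace operation failed: {operation}")
    return result

# Example usage sketch:
# strict(client.insert("id-1", vector=[0.1, 0.2], collection="docs_py"), "insert")
```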
| text/markdown | YARlabs | null | null | null | null | vector-database, ann, grpc, embeddings, hyperspace | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"grpcio>=1.50.0",
"protobuf>=4.21.0",
"numpy>=1.20.0",
"openai>=1.0.0; extra == \"openai\"",
"cohere>=4.0.0; extra == \"cohere\"",
"voyageai>=0.1.0; extra == \"voyage\"",
"google-generativeai>=0.3.0; extra == \"google\"",
"sentence-transformers>=2.2.0; extra == \"sentence-transformers\"",
"openai>=1.0.0; extra == \"all\"",
"cohere>=4.0.0; extra == \"all\"",
"voyageai>=0.1.0; extra == \"all\"",
"google-generativeai>=0.3.0; extra == \"all\"",
"sentence-transformers>=2.2.0; extra == \"all\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T18:29:04.442473 | hyperspacedb-2.2.1.tar.gz | 18,275 | 5c/56/5d9005a2602882070c11fbd6438f2abff59fd11e87c3844e7bb34177a7f5/hyperspacedb-2.2.1.tar.gz | source | sdist | null | false | 8fd3c4c2d769dd8087f9d67270284980 | 597feaa3bfd9c245c3e4a21e7dff40b1893dc4273d7555300395db5a59d39f62 | 5c565d9005a2602882070c11fbd6438f2abff59fd11e87c3844e7bb34177a7f5 | null | [] | 203 |
2.4 | tagth | 1.2.3 | Pure Python Stateless Tag-Based Authorization Library | # Stateless Tag-Based Authorization Library
## Background
Traditional role-based authorization models are not flexible enough to cover all required use cases. On the other side, full-managed ACLs are too complex for account managers to handle. `tagth` is a simple and flexible authorization model that can be easily implemented and maintained.
## Installation
```sh
pip install tagth
```
## Tag-Based Authorization
A lightweight model that is based on three concepts:
* a principal and its associated tags,
* a resource and its associated tags,
* an action.
The model adheres to the following principles:
* the model is stateless and purely functional, and it has no internal persistence,
* the model does not interpret the tags or actions, besides the special values,
* the model produces a binary result: either the action is allowed or not.
### Principal and Principal Tags
A Principal is an acting entity. A Principal can be a user, a role, a group, or any other entity that can perform actions.
Principal’s auth tag string looks like a comma-separated list of tags: `tag_one, tag_two, tag_three`. Each tag should be a string that is a valid Python identifier.
A supertag is a tag that is a prefix of another tag. For example, `admin` is a supertag of `admin_user`.
A principal is said to possess a tag if the tag or its supertag exists in the principal’s auth tag string.
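The prefix rule can be illustrated with a few lines of plain Python. This is a sketch of the stated semantics, not `tagth`'s actual implementation:

```python
# A principal possesses a tag if the tag itself, or any supertag (prefix)
# of it, appears in the principal's comma-separated auth tag string.
def possesses(principal_tags, tag):
    held = {t.strip() for t in principal_tags.split(",")}
    return any(tag == h or tag.startswith(h) for h in held)
```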
Special values:
* `void` (can only access resources with `anyone` access, see below),
* `root` (unlimited access).
### Resource and Resource Tags
A Resource is an object that can be accessed by a Principal. A Resource can be a user, a channel, a source asset, an extension, a tenant, a campaign, etc.
A resource tag is a string that is a valid Python identifier. *NB: there is no such thing as a supertag for a resource tag.*
An action is a string that is a valid Python identifier. A superaction is an action that is a prefix of another action. For example, `create` is a superaction of `create_asset`.
A resource auth tag string is a comma-separated list of colon-separated tag:action pairs: `tag_one:read, tag_two:write`. Multiple actions can be grouped per tag: `tag_one:{read, write}`.
If the resource auth tag string is empty or contains only whitespace, only the `root` principal is allowed access.
An action is allowed for a principal if it possesses any of the following:
* a tag that is associated with the action,
* a tag that is associated with a superaction of the action,
* the `root` tag.
Special values:
* `anyone` resource tag (any principal is allowed to perform the action),
* `all` action (all actions are allowed).
### Access Resolution
The model makes a decision based on the following three values **only**:
* the principal’s auth tag string
* the resource’s auth tag string
* the action to be performed
The resolution is binary: either the action is allowed or not.
## Examples
### Basic Usage
```python
from tagth import allowed
# A regular user with basic permissions
principal_tags = 'user, content'
resource_tags = 'content:read, metadata:write'
# Check if user can read content
allowed(principal_tags, resource_tags, 'read') # Returns True
# Check if user can delete content
allowed(principal_tags, resource_tags, 'delete') # Returns False
# Multiple actions for a resource
principal_tags = 'user, content'
resource_tags = 'content:{read, write}'
# Check if user can read content
allowed(principal_tags, resource_tags, 'read') # Returns True
# Check if user can write content
allowed(principal_tags, resource_tags, 'write') # Returns True
# Check if user can delete content
allowed(principal_tags, resource_tags, 'delete') # Returns False
# Root user has unlimited access
principal_tags = 'root'
allowed(principal_tags, resource_tags, 'anything') # Returns True
# Void user can only access 'anyone' resources
void_tags = 'void'
allowed(void_tags, 'anyone:read', 'read') # Returns True
allowed(void_tags, 'content:read', 'read') # Returns False
```
### Supertags and Superactions
```python
# Principal tags can be supertags
principal_tags = 'admin'
resource_tags = 'admin_user:write, admin_content:delete'
# 'admin' is a supertag of 'admin_user' and 'admin_content'
allowed(principal_tags, resource_tags, 'write') # Returns True
allowed(principal_tags, resource_tags, 'delete') # Returns True
# Actions can have superactions
principal_tags = 'content'
resource_tags = 'content:create'
# 'create' is a superaction of 'create_asset'
allowed(principal_tags, resource_tags, 'create_asset') # Returns True
```
### Special Values
```python
# 'anyone' resource tag allows access to all principals
principal_tags = 'basic_user'
resource_tags = 'anyone:read'
allowed(principal_tags, resource_tags, 'read') # Returns True
# 'all' action allows all actions
principal_tags = 'content'
resource_tags = 'content:all'
allowed(principal_tags, resource_tags, 'read') # Returns True
allowed(principal_tags, resource_tags, 'write') # Returns True
```
| text/markdown | null | Boris Resnick <boris.resnick@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"pyparsing>=3.1.4"
] | [] | [] | [] | [
"Homepage, https://github.com/scartill/tagth",
"Changelog, https://github.com/scartill/tagth/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.1 | 2026-02-20T18:28:56.393422 | tagth-1.2.3.tar.gz | 8,573 | b8/f7/b63e41a3a3a75cc71c794c41e207c3558b61d5b64049100f4951912bb353/tagth-1.2.3.tar.gz | source | sdist | null | false | 083a1ff76ec47517d2b1bc8e535d0da6 | 252c3f5573bfd2123c763ecd2cd3f1651cd0ca43c3b6812bbeed692234fdbad2 | b8f7b63e41a3a3a75cc71c794c41e207c3558b61d5b64049100f4951912bb353 | null | [
"LICENSE"
] | 212 |
2.4 | isage-libs-intent | 0.1.0.6 | SAGE Libs Intent (L3) - Keyword-based and LLM-based intent classification for conversational AI | # SAGE Intent Recognition
**Independent package for intent recognition and classification in conversational AI systems**
[](https://badge.fury.io/py/isage-intent)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
## 🎯 Overview
`sage-intent` provides flexible intent recognition capabilities for conversational AI:
- **Keyword-Based Recognition**: Fast, rule-based intent matching
- **LLM-Based Recognition**: Semantic intent understanding with LLMs
- **Hybrid Classification**: Combine multiple recognizers
- **Extensible Architecture**: Easy to add custom recognizers
## 📦 Installation
```bash
# Basic installation (keyword-based only)
pip install isage-intent
# With LLM support
pip install isage-intent[llm]
# Development installation
pip install isage-intent[dev]
```
## 🚀 Quick Start
### Keyword-Based Recognition
```python
from sage_intent import KeywordIntentRecognizer, IntentCatalog
# Create intent catalog
catalog = IntentCatalog()
catalog.add_intent(
    name="search",
    keywords=["find", "search", "look for", "query"],
    description="Search for information"
)
catalog.add_intent(
    name="greeting",
    keywords=["hello", "hi", "hey"],
    description="Greet the user"
)
# Create recognizer
recognizer = KeywordIntentRecognizer(catalog)
# Recognize intent
intent = recognizer.recognize("Can you help me search for papers?")
print(intent.name) # "search"
print(intent.confidence) # 0.85
```
### LLM-Based Recognition
```python
from sage_intent import LLMIntentRecognizer, IntentCatalog
# Create catalog with descriptions
catalog = IntentCatalog()
catalog.add_intent(
    name="data_analysis",
    description="Analyze data, generate visualizations, compute statistics"
)
catalog.add_intent(
    name="code_generation",
    description="Write code, create functions, implement algorithms"
)
# Create LLM recognizer
recognizer = LLMIntentRecognizer(
    catalog=catalog,
    llm_client=your_llm_client
)
# Recognize with semantic understanding
intent = recognizer.recognize(
    "I need a function to calculate the mean and standard deviation"
)
print(intent.name) # "code_generation"
```
### Hybrid Classifier
```python
from sage_intent import IntentClassifier, KeywordIntentRecognizer, LLMIntentRecognizer
# Create classifier with multiple recognizers
classifier = IntentClassifier(
    recognizers=[
        KeywordIntentRecognizer(catalog),
        LLMIntentRecognizer(catalog, llm_client)
    ],
    strategy="vote"  # or "confidence", "cascade"
)
# Classify with combined approach
intent = classifier.classify("Find research papers about transformers")
```
## 📚 Key Components
### 1. **Intent Catalog** (`catalog.py`)
Manages intent definitions:
- Intent registration with keywords and descriptions
- Hierarchical intent organization
- Intent metadata and examples
### 2. **Keyword Recognizer** (`keyword_recognizer.py`)
Fast rule-based matching:
- Multiple keyword patterns per intent
- Fuzzy matching support
- Priority-based disambiguation
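To make the keyword-matching idea concrete, here is a hedged sketch of a simple scoring rule (assumed behavior for illustration, not the package's code): score each intent by the fraction of its keywords found in the query.

```python
# Score a query against one intent's keyword list: fraction of keywords
# that occur (case-insensitively) as substrings of the query.
def keyword_score(query, keywords):
    q = query.lower()
    hits = sum(1 for kw in keywords if kw.lower() in q)
    return hits / len(keywords) if keywords else 0.0
```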
### 3. **LLM Recognizer** (`llm_recognizer.py`)
Semantic understanding with LLMs:
- Zero-shot intent classification
- Few-shot with examples
- Confidence scoring
### 4. **Classifier** (`classifier.py`)
Combines multiple recognizers:
- Voting strategies
- Confidence-based selection
- Cascade fallback
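As a rough illustration of the cascade strategy (assumed semantics, treating each recognizer as a callable for brevity; not the package's actual code): try recognizers in order and return the first sufficiently confident result.

```python
# Cascade fallback sketch: run recognizers in priority order and stop at the
# first result whose confidence clears the bar; otherwise return None.
def cascade(recognizers, text, min_confidence=0.7):
    for recognize in recognizers:
        intent = recognize(text)
        if intent is not None and intent.get("confidence", 0.0) >= min_confidence:
            return intent
    return None
```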
### 5. **Factory** (`factory.py`)
Easy recognizer creation:
- Pre-configured recognizers
- Custom recognizer registration
- Dynamic loading
## 🔧 Architecture
```
sage_intent/
├── base.py # Base classes and protocols
├── types.py # Common types
├── catalog.py # Intent catalog management
├── keyword_recognizer.py # Keyword-based recognition
├── llm_recognizer.py # LLM-based recognition
├── classifier.py # Multi-recognizer classification
├── factory.py # Recognizer factory
└── __init__.py # Public API exports
```
## 🎓 Use Cases
1. **Chatbots**: Route user queries to appropriate handlers
2. **Voice Assistants**: Understand user commands
3. **Customer Support**: Classify support tickets
4. **Search Systems**: Detect search intent for better results
5. **Agent Systems**: Determine agent actions based on user intent
## 🔗 Integration with SAGE
This package is part of the SAGE ecosystem but can be used independently:
```python
# Standalone usage
from sage_intent import KeywordIntentRecognizer, IntentCatalog
# With SAGE agentic (optional)
from sage_agentic import Agent
from sage_intent import IntentClassifier
agent = Agent()
classifier = IntentClassifier(catalog)
def process_query(query):
    intent = classifier.classify(query)
    return agent.execute(intent)
```
## 📖 Documentation
- **Repository**: https://github.com/intellistream/sage-intent
- **SAGE Documentation**: https://intellistream.github.io/SAGE-Pub/
- **Issues**: https://github.com/intellistream/sage-intent/issues
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## 📄 License
MIT License - see [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
Originally part of the [SAGE](https://github.com/intellistream/SAGE) framework, now maintained as an independent package for broader community use.
## 📧 Contact
- **Team**: IntelliStream Team
- **Email**: shuhao_zhang@hust.edu.cn
- **GitHub**: https://github.com/intellistream
| text/markdown | null | IntelliStream Team <shuhao_zhang@hust.edu.cn> | null | null | MIT | intent-recognition, intent-classification, nlu, conversational-ai, chatbot, llm, keyword-matching | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | ==3.11.* | [] | [] | [] | [
"pydantic>=2.0.0",
"typing-extensions>=4.0.0",
"openai>=1.0.0",
"anthropic>=0.20.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"ruff>=0.8.4; extra == \"dev\"",
"isage-pypi-publisher>=0.2.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/intellistream/sage-intent",
"Repository, https://github.com/intellistream/sage-intent",
"Documentation, https://github.com/intellistream/sage-intent#readme",
"Bug Tracker, https://github.com/intellistream/sage-intent/issues"
] | twine/6.2.0 CPython/3.11.11 | 2026-02-20T18:28:15.841985 | isage_libs_intent-0.1.0.6.tar.gz | 38,454 | 8d/58/85f4d2d64b901f011617ec8c6d2ec23386d5bc0b0f36bc80dc5a39441be9/isage_libs_intent-0.1.0.6.tar.gz | source | sdist | null | false | c37854f841b0cbc8e7d2fcb532814cf6 | 2abac91f05f304fbe6eb785ffe427a662df4a0e768a9e01ec24f14f474bbf191 | 8d5885f4d2d64b901f011617ec8c6d2ec23386d5bc0b0f36bc80dc5a39441be9 | null | [
"LICENSE"
] | 195 |
2.4 | slack-clacks | 0.6.1 | the default mode of degenerate communication. | # clacks
the default mode of degenerate communication.
## Installation
Choose your preferred method:
**With uv** (no installation required):
```bash
uvx --from slack-clacks clacks
```
**With uv** (permanent installation):
```bash
uv tool install slack-clacks
```
**With pip/pipx/poetry**:
```bash
pip install slack-clacks
# or: pipx install slack-clacks
```
Examples below use `clacks` directly. If using `uvx`, prefix commands with `uvx --from slack-clacks`.
## Updating
**With uvx**: Updates happen automatically (always runs latest version).
**With uv tool**:
```bash
uv tool upgrade slack-clacks
```
**With pip**:
```bash
pip install --upgrade slack-clacks
```
## Authentication
Authenticate via OAuth:
```bash
clacks auth login -c <context-name>
```
### Modes
clacks supports three authentication modes:
#### clacks mode (default)
Full workspace access via OAuth.
```bash
clacks auth login --mode clacks
```
Permissions: channels, groups, DMs, MPIMs, files, search
#### clacks-lite mode
Secure, DM-focused access via OAuth. Use for security-conscious environments where channel access isn't needed.
```bash
clacks auth login --mode clacks-lite
```
Permissions: DMs, MPIMs, reactions only
#### cookie mode
Browser session authentication. Use for quick testing or when OAuth is impractical.
```bash
clacks auth login --mode cookie
```
Extract xoxc token and d cookie from browser. No OAuth app needed. See [docs/cookie-auth.md](docs/cookie-auth.md) for extraction instructions.
**Warning**: Cookie mode is known to cause logout issues on Slack Enterprise workspaces and may trigger security warnings about your account.
### Scopes
Operations requiring unavailable scopes will fail with a clear error message and re-authentication instructions.
### Certificate
OAuth requires HTTPS. clacks includes a bundled self-signed certificate, so no setup is required.
To generate your own certificate:
```bash
clacks auth cert generate
```
### Account Management
View current authentication status:
```bash
clacks auth status
```
Revoke authentication:
```bash
clacks auth logout
```
## Configuration
Multiple authentication contexts supported. Initialize configuration:
```bash
clacks config init
```
List available contexts:
```bash
clacks config contexts
```
Switch between contexts:
```bash
clacks config switch -C <context-name>
```
View current configuration:
```bash
clacks config info
```
## Messaging
### Send
Send to channel:
```bash
clacks send -c "#general" -m "message text"
clacks send -c "C123456" -m "message text"
```
Send direct message:
```bash
clacks send -u "@username" -m "message text"
clacks send -u "U123456" -m "message text"
```
Reply to thread:
```bash
clacks send -c "#general" -m "reply text" -t "1234567890.123456"
```
### Read
Read messages from channel:
```bash
clacks read -c "#general"
clacks read -c "#general" -l 50
```
Read direct messages:
```bash
clacks read -u "@username"
```
Read thread:
```bash
clacks read -c "#general" -t "1234567890.123456"
```
Read specific message:
```bash
clacks read -c "#general" -m "1234567890.123456"
```
### Recent
View recent messages across all conversations:
```bash
clacks recent
clacks recent -l 50
```
## Rolodex
Manage aliases for users and channels. Aliases resolve to platform-specific IDs (e.g., Slack user IDs).
Sync from Slack API:
```bash
clacks rolodex sync
```
Add alias manually:
```bash
clacks rolodex add <alias> -t <target-id> -T <target-type>
clacks rolodex add kartik -t U03QPJ2KMJ6 -T user
clacks rolodex add dev-channel -t C08740LGAE6 -T channel
```
List aliases:
```bash
clacks rolodex list
clacks rolodex list -T user
clacks rolodex list -p slack
```
Remove alias:
```bash
clacks rolodex remove <alias> -T <target-type>
```
Show valid target types for a platform:
```bash
clacks rolodex platforminfo -p slack
clacks rolodex platforminfo -p github
```
## Agent Skills
clacks supports the [Agent Skills](https://agentskills.io) open standard for AI coding assistants.
Print SKILL.md to stdout:
```bash
clacks skill
```
Install for Claude Code (global):
```bash
clacks skill --mode claude
```
Install for OpenAI Codex (global):
```bash
clacks skill --mode codex
```
Install for Cursor/Windsurf/Aider (global):
```bash
clacks skill --mode universal
```
Install for VS Code Copilot (project):
```bash
clacks skill --mode github
```
All modes support `-global` and `-project` suffixes (e.g., `claude-project`, `codex-global`).
## Output
All commands output JSON to stdout. Redirect to file:
```bash
clacks auth status -o output.json
```
## Requirements
- Python >= 3.13
- Slack workspace admin approval for OAuth app installation
| text/markdown | null | Neeraj Kashyap <nkashy1@gmail.com> | null | null | MIT | cli, command-line, messaging, slack | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Communications :: Chat",
"Topic :: Utilities"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"alembic>=1.17.2",
"cryptography>=44.0.0",
"platformdirs>=4.5.0",
"slack-sdk>=3.38.0",
"sqlalchemy>=2.0.44"
] | [] | [] | [] | [
"Homepage, https://github.com/zomglings/clacks",
"Repository, https://github.com/zomglings/clacks",
"Issues, https://github.com/zomglings/clacks/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:27:43.210925 | slack_clacks-0.6.1.tar.gz | 57,619 | 6a/e1/327706973834d15f729c5de5e68960409832075f48da9c69aad68f2f306a/slack_clacks-0.6.1.tar.gz | source | sdist | null | false | bdd5970dd10edd08209fe9e7f5b462d3 | 8f588931859b6a0371ae6dbb8dc586fa78d847572638134c26f50496301bb630 | 6ae1327706973834d15f729c5de5e68960409832075f48da9c69aad68f2f306a | null | [
"LICENSE"
] | 196 |
2.4 | lbt-dragonfly | 0.12.354 | Collection of all Dragonfly core Python libraries |

[](https://github.com/ladybug-tools/lbt-dragonfly/actions)
[](https://www.python.org/downloads/release/python-31013/) [](https://www.python.org/downloads/release/python-370/) [](https://www.python.org/downloads/release/python-270/) [](https://github.com/IronLanguages/ironpython2/releases/tag/ipy-2.7.8/)
# lbt-dragonfly
Collection of all Dragonfly core Python libraries.
Note that this repository and corresponding Python package does not contain any source
code on its own and it simply exists to provide a shortcut for installing all of
the dragonfly core libraries and extensions together.
## Included Dragonfly Extensions
Running `pip install lbt-dragonfly` will result in the installation of the following
dragonfly Python packages:
* [dragonfly-energy](https://github.com/ladybug-tools/dragonfly-energy)
* [dragonfly-radiance](https://github.com/ladybug-tools/dragonfly-radiance)
* [dragonfly-uwg](https://github.com/ladybug-tools/dragonfly-uwg)
* [dragonfly-display](https://github.com/ladybug-tools/dragonfly-display)
## Included Dragonfly Core Libraries
Since all of the dragonfly extensions build on the dragonfly core libraries, the following dependencies are also included:
* [dragonfly-core](https://github.com/ladybug-tools/dragonfly-core)
* [dragonfly-schema](https://github.com/ladybug-tools/dragonfly-schema)
## Also Included (All Ladybug and Honeybee Packages)
Since dragonfly uses ladybug and honeybee, the following dependencies are also included:
* [lbt-ladybug](https://github.com/ladybug-tools/lbt-ladybug)
* [lbt-honeybee](https://github.com/ladybug-tools/lbt-honeybee)
| text/markdown | Ladybug Tools | info@ladybug.tools | null | null | AGPL-3.0 | null | [
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: IronPython",
"Operating System :: OS Independent"
] | [] | https://github.com/ladybug-tools/lbt-dragonfly | null | null | [] | [] | [] | [
"dragonfly-energy==1.35.103",
"dragonfly-radiance==0.4.147",
"dragonfly-doe2==0.12.15",
"dragonfly-uwg==0.5.743",
"dragonfly-display==0.4.0",
"dragonfly-comparison==0.1.4",
"lbt-honeybee==0.9.250"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-20T18:27:29.125516 | lbt_dragonfly-0.12.354.tar.gz | 16,665 | 16/5f/e099158d1330d8649a3380f0bc56b124c1709a58bcaa981f60e1ec01a3e1/lbt_dragonfly-0.12.354.tar.gz | source | sdist | null | false | 9a30f4c4f0c877e81cf43c1ad36b1a59 | 5ee6bee3d144ebb65cb63a0cef25850168f3cdcebf6aa935f0caeb3b1dbc3df6 | 165fe099158d1330d8649a3380f0bc56b124c1709a58bcaa981f60e1ec01a3e1 | null | [
"LICENSE"
] | 293 |
2.4 | bicep-whatif-advisor | 2.0.0 | AI-powered Azure Bicep What-If deployment advisor with automated safety reviews | # bicep-whatif-advisor
[](https://github.com/neilpeterson/bicep-whatif-advisor/actions/workflows/test.yml)
[](https://pypi.org/project/bicep-whatif-advisor/)
[](https://pypi.org/project/bicep-whatif-advisor/)
[](https://opensource.org/licenses/MIT)
`bicep-whatif-advisor` is an AI-powered deployment safety gate for Azure Bicep and ARM templates. It automatically integrates into your CI/CD pipeline (GitHub Actions or Azure DevOps) to analyze Azure What-If output using LLMs (Anthropic Claude, Azure OpenAI, or Ollama), providing intelligent risk assessment before deployments reach production. The tool detects infrastructure drift by comparing What-If results against your code changes, validates that deployment changes align with PR intent, and flags inherently risky operations like deletions, security changes, and SKU downgrades. With zero-configuration platform auto-detection and automatic PR comments, it blocks unsafe deployments through configurable three-bucket risk thresholds—giving teams confidence that infrastructure changes match their intentions.
> **Note:** The tool also includes a CLI for local What-If analysis and human-readable deployment summaries.
## How It Works
When integrated into your CI/CD pipeline, `bicep-whatif-advisor` automatically detects the platform (GitHub Actions or Azure DevOps) and performs comprehensive deployment analysis with zero configuration required. Simply pipe Azure What-If output to the tool and it handles the rest.
**The tool will:**
1. **Auto-detect your CI platform** - Recognizes GitHub Actions or Azure DevOps environments
2. **Extract PR metadata** - Pulls title, description, and PR number from the CI environment
3. **Collect code diff** - Gathers changes from your PR to understand what's in the codebase
4. **Analyze with LLM** - Sends What-If output, PR metadata, and code diff to the LLM for intelligent analysis
5. **Evaluate three risk categories independently:**
- **Infrastructure Drift** - Detects changes not in your code (out-of-band modifications)
- **PR Intent Alignment** - Ensures changes match PR description
- **Risky Operations** - Flags dangerous operations (deletions, security changes, downgrades)
6. **Filter Azure What-If noise** - Two-layer filtering: built-in property-path patterns remove known noise (etag, provisioningState, IPv6 flags) from raw What-If text before LLM analysis, then LLM confidence scoring flags remaining uncertain changes. All filtered items preserved in a separate "Potential Noise" section
7. **Post detailed PR comment** - Automatically comments with formatted analysis (zero config)
8. **Gate deployment** - Exits with code 0 (safe) or 1 (unsafe) based on configurable thresholds per risk bucket
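The first filtering layer described in step 6 can be sketched in a few lines. The patterns below are illustrative only (the tool's built-in list is larger), and the function shape is an assumption, not the tool's actual code:

```python
import re

# Partition what-if lines into kept changes and known-noise lines, so the
# noise can still be shown in a "Potential Noise" section rather than lost.
NOISE_PATTERNS = [re.compile(p) for p in (r"\betag\b", r"\bprovisioningState\b")]

def split_noise(lines):
    kept, noise = [], []
    for line in lines:
        bucket = noise if any(p.search(line) for p in NOISE_PATTERNS) else kept
        bucket.append(line)
    return kept, noise
```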
**Example Output:**
```
╭──────┬────────────────┬─────────────────┬────────┬──────┬────────────────────────────────────────╮
│ # │ Resource │ Type │ Action │ Risk │ Summary │
├──────┼────────────────┼─────────────────┼────────┼──────┼────────────────────────────────────────┤
│ 1 │ appinsights │ APIM Diagnostic │ Create │ Low │ Adds Application Insights logging │
├──────┼────────────────┼─────────────────┼────────┼──────┼────────────────────────────────────────┤
│ 2 │ sqlDatabase │ SQL Database │ Modify │ Med │ Changes SKU from Standard to Basic │
├──────┼────────────────┼─────────────────┼────────┼──────┼────────────────────────────────────────┤
│ 3 │ roleAssignment │ Role Assignment │ Delete │ High │ Removes Contributor access from │
│ │ │ │ │ │ managed identity │
╰──────┴────────────────┴─────────────────┴────────┴──────┴────────────────────────────────────────╯
Risk Assessment:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Infrastructure Drift: LOW
All changes match code diff
PR Intent Alignment: MEDIUM
PR mentions adding logging, but also includes database SKU change not described
Risky Operations: HIGH
Deletes RBAC role assignment (Contributor)
Downgrades database SKU (may cause data loss)
Verdict: UNSAFE - Deployment blocked
Reason: Risky operations exceed threshold (high). Address role deletion and SKU downgrade.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
## Quick Start
**GitHub Actions:**
```yaml
- name: Deployment Safety Gate
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    pip install bicep-whatif-advisor[anthropic]
    az deployment group what-if \
      --resource-group ${{ vars.AZURE_RESOURCE_GROUP }} \
      --template-file main.bicep \
      --exclude-change-types NoChange Ignore \
      | bicep-whatif-advisor
```
**Azure DevOps:**
```yaml
- script: |
    pip install bicep-whatif-advisor[anthropic]
    az deployment group what-if \
      --resource-group $(RESOURCE_GROUP) \
      --template-file main.bicep \
      --exclude-change-types NoChange Ignore \
      | bicep-whatif-advisor
  env:
    ANTHROPIC_API_KEY: $(ANTHROPIC_API_KEY)
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
```
## Configuration Options
### Output Formats
```bash
# JSON for additional processing
bicep-whatif-advisor --format json
# Markdown (default for PR comments)
bicep-whatif-advisor --format markdown
```
### Risk Thresholds
Control deployment sensitivity by adjusting thresholds for each risk bucket independently:
```bash
# Stricter gates (block on medium or high risk)
bicep-whatif-advisor \
  --drift-threshold medium \
  --intent-threshold medium \
  --operations-threshold medium

# Strictest gates (block on any risk)
bicep-whatif-advisor \
  --drift-threshold low \
  --intent-threshold low \
  --operations-threshold low
```
**Available thresholds:** `low`, `medium`, `high` (default: `high` for all buckets)
**Threshold meanings:**
- `low` - Block if ANY risk detected in this category
- `medium` - Block if medium or high risk detected
- `high` - Only block on high risk (most permissive)
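The threshold semantics above reduce to a simple ordered comparison per bucket. A minimal sketch of the gate logic (assumed semantics, treating an assessed level of `low` as "risk detected"; not the tool's actual code):

```python
# Block the deployment if any bucket's assessed risk meets or exceeds
# that bucket's configured threshold.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def is_blocked(assessed, thresholds):
    """Both args map bucket name -> 'low' | 'medium' | 'high'."""
    return any(
        LEVELS[assessed[bucket]] >= LEVELS[thresholds[bucket]]
        for bucket in thresholds
    )
```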
### Skipping Risk Buckets
You can selectively disable specific risk assessment buckets using skip flags:
```bash
# Skip infrastructure drift assessment (only evaluate intent + operations)
bicep-whatif-advisor --skip-drift
# Skip PR intent alignment assessment (only evaluate drift + operations)
bicep-whatif-advisor --skip-intent
# Skip risky operations assessment (only evaluate drift + intent)
bicep-whatif-advisor --skip-operations
# Combine skip flags (only evaluate drift)
bicep-whatif-advisor --skip-intent --skip-operations
```
**Use cases:**
- `--skip-drift` - Useful when infrastructure state is expected to differ from code
- `--skip-intent` - Useful for automated maintenance PRs or when PR descriptions are unavailable
- `--skip-operations` - Useful when you want to focus only on drift and intent alignment
**Note:** At least one risk bucket must remain enabled in CI mode.
### Alternative LLM Providers
By default, the tool uses Anthropic Claude. You can also use Azure OpenAI or local Ollama:
```bash
# Azure OpenAI
pip install bicep-whatif-advisor[azure]
export AZURE_OPENAI_ENDPOINT="https://..."
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_DEPLOYMENT="gpt-4"
bicep-whatif-advisor --provider azure-openai
# Local Ollama (free, runs on your infrastructure)
pip install bicep-whatif-advisor[ollama]
bicep-whatif-advisor --provider ollama --model llama3.1
```
### Multi-Environment Pipelines
```bash
# Distinguish environments in PR comments
bicep-whatif-advisor --comment-title "Production"
bicep-whatif-advisor --comment-title "Dev Environment"
# Non-blocking mode automatically labels the comment
bicep-whatif-advisor --comment-title "Production" --no-block
# Title becomes: "Production (non-blocking)"
```
## Complete Setup Guide
The tool works with any CI/CD platform that can run Azure CLI and Python. For complete setup instructions including:
- Azure authentication configuration (service principals, managed identities)
- Repository permissions and access tokens
- Multi-environment pipeline patterns
- Advanced configuration options
- Troubleshooting common issues
See the **[CI/CD Integration Guide](docs/guides/CICD_INTEGRATION.md)** for platform-specific examples including GitHub Actions, Azure DevOps, GitLab CI, and Jenkins.
## Documentation
**User Guides:**
- [Quick Start](docs/guides/QUICKSTART.md) - Get running in 5 minutes
- [User Guide](docs/guides/USER_GUIDE.md) - Complete feature reference and CLI flags
- [CI/CD Integration](docs/guides/CICD_INTEGRATION.md) - Pipeline setup for GitHub Actions, Azure DevOps, etc.
- [Risk Assessment](docs/guides/RISK_ASSESSMENT.md) - Deep dive into AI risk evaluation
**Technical Specifications:**
- [Technical Specifications](docs/specs/) - Comprehensive specs for each module (00-11)
- [00-OVERVIEW](docs/specs/00-OVERVIEW.md) - Project architecture and design principles
- [01-CLI-INTERFACE](docs/specs/01-CLI-INTERFACE.md) - CLI orchestration and flags
- [02-INPUT-VALIDATION](docs/specs/02-INPUT-VALIDATION.md) - Input processing
- [03-PROVIDER-SYSTEM](docs/specs/03-PROVIDER-SYSTEM.md) - LLM provider abstraction
- [04-PROMPT-ENGINEERING](docs/specs/04-PROMPT-ENGINEERING.md) - Prompt construction
- [05-OUTPUT-RENDERING](docs/specs/05-OUTPUT-RENDERING.md) - Output formatting
- [06-NOISE-FILTERING](docs/specs/06-NOISE-FILTERING.md) - Confidence-based filtering
- [07-PLATFORM-DETECTION](docs/specs/07-PLATFORM-DETECTION.md) - CI/CD auto-detection
- [08-RISK-ASSESSMENT](docs/specs/08-RISK-ASSESSMENT.md) - Three-bucket risk model
- [09-PR-INTEGRATION](docs/specs/09-PR-INTEGRATION.md) - PR comment posting
- [10-GIT-DIFF](docs/specs/10-GIT-DIFF.md) - Git diff collection
- [11-TESTING-STRATEGY](docs/specs/11-TESTING-STRATEGY.md) - Test architecture
## Support
- **Issues**: Report bugs or request features via repository issues
- **Contributing**: Pull requests welcome!
- **License**: MIT - see [LICENSE](LICENSE) for details
| text/markdown | Azure Tools Contributors | null | null | null | MIT | azure, bicep, arm, deployment, what-if, llm, infrastructure | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: System :: Systems Administration",
"Topic :: Software Development :: Build Tools"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"click>=8.0.0",
"rich>=13.0.0",
"requests>=2.31.0",
"anthropic>=0.40.0; extra == \"anthropic\"",
"openai>=1.0.0; extra == \"azure\"",
"requests>=2.31.0; extra == \"ollama\"",
"anthropic>=0.40.0; extra == \"all\"",
"openai>=1.0.0; extra == \"all\"",
"requests>=2.31.0; extra == \"all\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-mock>=3.10.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/neilpeterson/bicep-whatif-advisor",
"Issues, https://github.com/neilpeterson/bicep-whatif-advisor/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T18:27:25.690491 | bicep_whatif_advisor-2.0.0.tar.gz | 48,732 | 89/bc/b6bebd7d93d0a7cfa673a4347cb931580e13f4b2045fe030cfd1785f7eb3/bicep_whatif_advisor-2.0.0.tar.gz | source | sdist | null | false | 9ed1bcc000e036bb4d41d4507827f9d3 | 30e6e0a1610a0ec9808143cbfc560fe8f03e7e94c230a7e85dc35fbabc719a57 | 89bcb6bebd7d93d0a7cfa673a4347cb931580e13f4b2045fe030cfd1785f7eb3 | null | [
"LICENSE"
] | 211 |
2.4 | napari-myelin-quantifier | 1.0.1 | Myelinated Axon Quantification with Label Tracking | # napari-myelin-quantifier
[](https://github.com/wulinteousa2-hash/napari-myelin-quantifier/raw/main/LICENSE)
[](https://pypi.org/project/napari-myelin-quantifier)
[](https://python.org)
[](https://napari-hub.org/plugins/napari-myelin-quantifier)
---
## Overview
`napari-myelin-quantifier` is a napari plugin for quantitative analysis of 2D cross-sectional myelinated axons from binary segmentation masks.
The plugin identifies individual myelin rings, assigns a unique `ring_id` to each structure, and exports morphometric measurements for downstream analysis.
It enables reproducible extraction of:
- Axon diameter
- Fiber diameter
- Myelin thickness
- g-ratio
---
## Installation
Install via pip:
```bash
pip install napari-myelin-quantifier
```
If napari is not installed:
```bash
pip install "napari-myelin-quantifier[all]"
```
Development version:
```bash
pip install git+https://github.com/wulinteousa2-hash/napari-myelin-quantifier.git
```
## Input Requirements
The plugin requires a binary mask layer:
- Myelin = foreground (1 / True)
- Background = 0 / False
- Recommended: clean segmentation without holes or broken rings
Example Binary Mask

*Image courtesy of Bo Hu Lab, Houston Methodist Research Institute.*
## Ring Detection and Labeling
Each connected myelin ring is:
- Assigned a unique `ring_id`
- Spatially localized using centroid coordinates
- Evaluated for ring topology using Euler characteristic
Example Labeled Output


## Topological Validation (Euler Characteristic)
The Euler number ensures valid ring topology:
- Euler = 0 → valid ring (one hole)
- Euler ≠ 0 → solid object or fragmented structure
This prevents non-myelinated artifacts from being included in analysis.
Topology Illustration

## Quantitative Output (CSV)
For each ring, the plugin exports:
- `ring_id`
- `centroid_x`, `centroid_y`
- `bbox_x0`, `bbox_y0`, `bbox_x1`, `bbox_y1`
- `ring_area_px`
- `lumen_area_px`
- `filled_area_px`
- `euler`
- `touches_border`
Example:
```csv
ring_id,centroid_x,centroid_y,bbox_x0,bbox_y0,bbox_x1,bbox_y1,ring_area_px,lumen_area_px,filled_area_px,euler,touches_border
1,873.8658,34.4421,857,18,890,52,380,556,936,0,False
```
## Derived Morphometric Parameters
Assuming approximately circular cross-sections:
### Axon diameter:
```text
d_axon = 2 × sqrt(lumen_area / π)
```
### Fiber diameter:
```text
d_fiber = 2 × sqrt(filled_area / π)
```
### Myelin thickness:
```text
t = (d_fiber − d_axon) / 2
```
### g-ratio:
```text
g = d_axon / d_fiber
```
Note: These are geometric approximations. For highly irregular axons, area-based statistics may be preferable.
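The four formulas above can be combined into a small helper. This sketch uses the pixel areas from the example CSV row; units are pixels unless you apply a pixel-size calibration:

```python
import math

def derived_metrics(lumen_area: float, filled_area: float) -> dict:
    """Compute diameters, myelin thickness, and g-ratio from pixel areas."""
    d_axon = 2 * math.sqrt(lumen_area / math.pi)
    d_fiber = 2 * math.sqrt(filled_area / math.pi)
    return {
        "d_axon": d_axon,
        "d_fiber": d_fiber,
        "myelin_thickness": (d_fiber - d_axon) / 2,
        "g_ratio": d_axon / d_fiber,
    }

# Values from the example CSV row (ring_id 1), in pixels
m = derived_metrics(lumen_area=556, filled_area=936)
print({k: round(v, 3) for k, v in m.items()})
```

Note that under the circular approximation the g-ratio reduces to `sqrt(lumen_area / filled_area)`, so it is independent of pixel size.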
## Typical Workflow
1. Load binary mask into napari.
2. Open:
- Plugins → Myelin Quantifier
3. Adjust filtering parameters:
- Minimum ring area
- Minimum lumen area
- Exclude border objects (recommended)
4. Run quantification.
5. Export CSV.
6. Perform statistical analysis in Python, R, or Excel.
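Step 6 can be scripted with pandas. A sketch using the exported column names (the rows here are illustrative, not real measurements): filter to topologically valid interior rings, then derive an area-based g-ratio per ring:

```python
import pandas as pd

# Example rows shaped like the exported CSV (values illustrative)
df = pd.DataFrame({
    "ring_id": [1, 2, 3],
    "lumen_area_px": [556, 300, 120],
    "filled_area_px": [936, 610, 150],
    "euler": [0, 0, 1],
    "touches_border": [False, False, False],
})

# Keep only topologically valid rings (Euler = 0)
valid = df[df["euler"] == 0].copy()

# Area-based g-ratio: sqrt(lumen / filled), pixel-size independent
valid["g_ratio"] = (valid["lumen_area_px"] / valid["filled_area_px"]) ** 0.5

print(valid["g_ratio"].describe())
```

In a real analysis you would replace the inline DataFrame with `pd.read_csv(...)` on the plugin's exported file.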
Interface

## Acknowledgements
Example microscopy data used in documentation were generated by the **Bo Hu Lab**, Houston Methodist Research Institute.
Imaging hardware and infrastructure support were provided by the **Electron Microscopy Core**, directed by **István Katona**, Houston Methodist Research Institute.
## Contributing
Contributions are welcome. Please ensure tests pass before submitting pull requests.
## License
MIT License.
| text/markdown | Napari User | wulinteo.usa2@gmail.com | null | null |
The MIT License (MIT)
Copyright (c) 2026 Napari User
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
| null | [
"Development Status :: 2 - Pre-Alpha",
"Framework :: napari",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Image Processing"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"magicgui",
"qtpy",
"scikit-image",
"napari[all]; extra == \"all\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/wulinteousa2-hash/napari-myelin-quantifier/issues",
"Documentation, https://github.com/wulinteousa2-hash/napari-myelin-quantifier#README.md",
"Source Code, https://github.com/wulinteousa2-hash/napari-myelin-quantifier",
"User Support, https://github.com/wulinteousa2-hash/napari-myelin-quantifier/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T18:26:59.152093 | napari_myelin_quantifier-1.0.1.tar.gz | 964,644 | 96/80/c5ecc76824dc92f940241d22690d653dd45e6a965f3521d5c04193464f7a/napari_myelin_quantifier-1.0.1.tar.gz | source | sdist | null | false | 735cb2e14c78e86359387a0fccbd05f6 | 169b25c1bf4a4e28992529bcf730644f067cdc984d027cba51c1ba69138dd90b | 9680c5ecc76824dc92f940241d22690d653dd45e6a965f3521d5c04193464f7a | null | [
"LICENSE"
] | 204 |
2.4 | servicenow-devtools-mcp | 0.2.2 | A developer & debug-focused MCP server for ServiceNow — 33 tools for platform introspection, change intelligence, debugging, investigations, and documentation generation. | <p align="center">
<img src="assets/banner.svg" alt="servicenow-devtools-mcp banner" width="900" />
</p>
<p align="center">
<a href="https://pypi.org/project/servicenow-devtools-mcp/"><img src="https://img.shields.io/pypi/v/servicenow-devtools-mcp?color=00c9a7&style=flat-square" alt="PyPI version"></a>
<a href="https://pypi.org/project/servicenow-devtools-mcp/"><img src="https://img.shields.io/pypi/pyversions/servicenow-devtools-mcp?style=flat-square" alt="Python versions"></a>
<a href="https://github.com/xerrion/servicenow-devtools-mcp/blob/main/LICENSE"><img src="https://img.shields.io/github/license/xerrion/servicenow-devtools-mcp?style=flat-square" alt="License"></a>
<img src="https://img.shields.io/badge/tools-33-00d4ff?style=flat-square" alt="Tool count">
</p>
# servicenow-devtools-mcp
A developer & debug-focused [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) server for ServiceNow. Give your AI agent direct access to your ServiceNow instance for introspection, debugging, change intelligence, and documentation generation.
## Features
- :mag: **Instance Introspection** -- describe tables, query records, compute aggregates, fetch individual records
- :link: **Relationship Mapping** -- find incoming and outgoing references for any record
- :package: **Change Intelligence** -- inspect update sets, diff artifact versions, audit trails, generate release notes
- :bug: **Debug & Trace** -- trace record timelines, flow executions, email chains, integration errors, import set runs
- :test_tube: **Developer Actions** -- toggle artifacts, set properties, seed test data, preview-then-apply updates
- :mag_right: **Investigations** -- 7 built-in analysis modules (stale automations, deprecated APIs, table health, ACL conflicts, error analysis, slow transactions, performance bottlenecks)
- :page_facing_up: **Documentation** -- generate logic maps, artifact summaries, test scenarios, code review notes
- :shield: **Safety** -- table deny lists, sensitive field masking, row limit caps, write gating in production
---
## Quick Start
```bash
# No install needed -- run directly with uvx
uvx servicenow-devtools-mcp
```
Set three required environment variables (or use a `.env` file):
```bash
export SERVICENOW_INSTANCE_URL=https://your-instance.service-now.com
export SERVICENOW_USERNAME=admin
export SERVICENOW_PASSWORD=your-password
```
---
## Configuration
### OpenCode
Add to `~/.config/opencode/opencode.json`:
```json
{
"mcp": {
"servicenow": {
"type": "local",
"command": ["uvx", "servicenow-devtools-mcp"],
"environment": {
"SERVICENOW_INSTANCE_URL": "https://your-instance.service-now.com",
"SERVICENOW_USERNAME": "admin",
"SERVICENOW_PASSWORD": "your-password",
"MCP_TOOL_PACKAGE": "dev_debug",
"SERVICENOW_ENV": "dev"
}
}
}
}
```
### Claude Desktop
Add to your Claude Desktop config (`claude_desktop_config.json`):
```json
{
"mcpServers": {
"servicenow": {
"command": "uvx",
"args": ["servicenow-devtools-mcp"],
"env": {
"SERVICENOW_INSTANCE_URL": "https://your-instance.service-now.com",
"SERVICENOW_USERNAME": "admin",
"SERVICENOW_PASSWORD": "your-password",
"MCP_TOOL_PACKAGE": "dev_debug",
"SERVICENOW_ENV": "dev"
}
}
}
}
```
### VS Code / Cursor (Copilot MCP)
Add to `.vscode/mcp.json` in your workspace:
```json
{
"servers": {
"servicenow": {
"command": "uvx",
"args": ["servicenow-devtools-mcp"],
"env": {
"SERVICENOW_INSTANCE_URL": "https://your-instance.service-now.com",
"SERVICENOW_USERNAME": "admin",
"SERVICENOW_PASSWORD": "your-password",
"MCP_TOOL_PACKAGE": "dev_debug",
"SERVICENOW_ENV": "dev"
}
}
}
}
```
### Generic stdio
```bash
SERVICENOW_INSTANCE_URL=https://your-instance.service-now.com \
SERVICENOW_USERNAME=admin \
SERVICENOW_PASSWORD=your-password \
uvx servicenow-devtools-mcp
```
---
## :robot: Install Instructions for AIs
> Copy the block below and paste it into a conversation with any AI agent that supports MCP tool use. The AI will know how to configure and use this server.
````text
## ServiceNow MCP Server Setup
You have access to a ServiceNow MCP server (`servicenow-devtools-mcp`) that provides
33 tools for interacting with a ServiceNow instance.
### Installation
Run via uvx (no install required):
```
uvx servicenow-devtools-mcp
```
### Required Environment Variables
- SERVICENOW_INSTANCE_URL -- Full URL of the ServiceNow instance (e.g. https://dev12345.service-now.com)
- SERVICENOW_USERNAME -- ServiceNow user with admin or appropriate roles
- SERVICENOW_PASSWORD -- Password for the user above
### Optional Environment Variables
- MCP_TOOL_PACKAGE -- Which tools to load: "dev_debug" (default, all tools), "introspection_only" (read-only), "full" (same as dev_debug), "none"
- SERVICENOW_ENV -- Environment label: "dev" (default), "test", "staging", "prod". Write operations are blocked when set to "prod" unless ALLOW_WRITES_IN_PROD is set to true.
- ALLOW_WRITES_IN_PROD -- Set to "true" to allow write operations even when SERVICENOW_ENV is "prod" (default: false).
- MAX_ROW_LIMIT -- Max records per query (default: 100)
- LARGE_TABLE_NAMES_CSV -- Tables requiring date-bounded queries (default: syslog,sys_audit,sys_log_transaction,sys_email_log)
### MCP Client Configuration (stdio transport)
```json
{
"command": "uvx",
"args": ["servicenow-devtools-mcp"],
"env": {
"SERVICENOW_INSTANCE_URL": "<instance_url>",
"SERVICENOW_USERNAME": "<username>",
"SERVICENOW_PASSWORD": "<password>",
"MCP_TOOL_PACKAGE": "dev_debug",
"SERVICENOW_ENV": "dev"
}
}
```
### Available Tools (33 total)
**Introspection (4):** table_describe, table_get, table_query, table_aggregate
- Describe table schema, fetch records by sys_id, query with encoded queries, compute stats
**Relationships (2):** rel_references_to, rel_references_from
- Find what references a record and what a record references
**Metadata (4):** meta_list_artifacts, meta_get_artifact, meta_find_references, meta_what_writes
- List/inspect platform artifacts (business rules, script includes, etc.), find cross-references, find writers to a table
**Change Intelligence (4):** changes_updateset_inspect, changes_diff_artifact, changes_last_touched, changes_release_notes
- Inspect update sets, diff artifact versions, view audit trail, generate release notes
**Debug & Trace (6):** debug_trace, debug_flow_execution, debug_email_trace, debug_integration_health, debug_importset_run, debug_field_mutation_story
- Build event timelines, inspect flow executions, trace emails, check integration errors, inspect import sets, trace field mutations
**Developer Actions (6):** dev_toggle, dev_set_property, dev_seed_test_data, dev_cleanup, table_preview_update, table_apply_update
- Toggle artifacts on/off, set system properties, seed/cleanup test data, preview and apply record updates (two-step confirmation)
**Investigations (2 dispatchers, 7 modules):** investigate_run, investigate_explain
- Modules: stale_automations, deprecated_apis, table_health, acl_conflicts, error_analysis, slow_transactions, performance_bottlenecks
**Documentation (4):** docs_logic_map, docs_artifact_summary, docs_test_scenarios, docs_review_notes
- Generate automation maps, artifact summaries with dependencies, test scenario suggestions, code review findings
**Core (1):** list_tool_packages
- List available tool packages and their contents
### Safety Guardrails
- Table deny list: sys_user_has_role, sys_user_grmember, and other sensitive tables are blocked
- Sensitive fields: password, token, secret fields are masked in responses
- Row limits: User-supplied limit parameters capped at MAX_ROW_LIMIT (default 100)
- Large tables: syslog, sys_audit, etc. require date-bounded filters
- Write gating: All write operations blocked when SERVICENOW_ENV=prod (unless explicitly overridden)
- Standardized responses: Tools return JSON with correlation_id, status, data, and optionally pagination and warnings when relevant
````
---
## Environment Variables
| Variable | Description | Default | Required |
|---|---|---|---|
| `SERVICENOW_INSTANCE_URL` | Full URL of your ServiceNow instance | -- | Yes |
| `SERVICENOW_USERNAME` | ServiceNow username (Basic Auth) | -- | Yes |
| `SERVICENOW_PASSWORD` | ServiceNow password | -- | Yes |
| `MCP_TOOL_PACKAGE` | Tool package to load | `dev_debug` | No |
| `SERVICENOW_ENV` | Environment label (`dev`, `test`, `staging`, `prod`) | `dev` | No |
| `ALLOW_WRITES_IN_PROD` | Set to `true` to allow writes when `SERVICENOW_ENV=prod` | `false` | No |
| `MAX_ROW_LIMIT` | Maximum rows returned per query | `100` | No |
| `LARGE_TABLE_NAMES_CSV` | Comma-separated tables requiring date filters | `syslog,sys_audit,sys_log_transaction,sys_email_log` | No |
The server reads from `.env` and `.env.local` files automatically (`.env.local` takes precedence).
---
## Tool Reference
### Core
| Tool | Description | Key Parameters |
|---|---|---|
| `list_tool_packages` | List all available tool packages and their tool groups | -- |
### :mag: Introspection
| Tool | Description | Key Parameters |
|---|---|---|
| `table_describe` | Return field metadata for a table (types, references, choices) | `table` |
| `table_get` | Fetch a single record by sys_id | `table`, `sys_id`, `fields?`, `display_values?` |
| `table_query` | Query a table with encoded query string | `table`, `query`, `fields?`, `limit?`, `offset?`, `order_by?` |
| `table_aggregate` | Compute aggregate stats (count, avg, min, max, sum) | `table`, `query`, `group_by?`, `avg_fields?`, `sum_fields?` |
### :link: Relationships
| Tool | Description | Key Parameters |
|---|---|---|
| `rel_references_to` | Find records in other tables that reference a given record | `table`, `sys_id`, `depth?` |
| `rel_references_from` | Find what a record references via its reference fields | `table`, `sys_id`, `depth?` |
### :package: Metadata
| Tool | Description | Key Parameters |
|---|---|---|
| `meta_list_artifacts` | List platform artifacts by type (business rules, script includes, etc.) | `artifact_type`, `query?`, `limit?` |
| `meta_get_artifact` | Get full artifact details including script body | `artifact_type`, `sys_id` |
| `meta_find_references` | Search all script tables for references to a target string | `target`, `limit?` |
| `meta_what_writes` | Find business rules that write to a table/field | `table`, `field?` |
### :package: Change Intelligence
| Tool | Description | Key Parameters |
|---|---|---|
| `changes_updateset_inspect` | Inspect update set members grouped by type with risk flags | `update_set_id` |
| `changes_diff_artifact` | Show unified diff between two most recent artifact versions | `table`, `sys_id` |
| `changes_last_touched` | Show who last touched a record and what changed (sys_audit) | `table`, `sys_id`, `limit?` |
| `changes_release_notes` | Generate Markdown release notes from an update set | `update_set_id`, `format?` |
### :bug: Debug & Trace
| Tool | Description | Key Parameters |
|---|---|---|
| `debug_trace` | Build merged timeline from sys_audit, syslog, and journal | `record_sys_id`, `table`, `minutes?` |
| `debug_flow_execution` | Inspect a Flow Designer execution step by step | `context_id` |
| `debug_email_trace` | Reconstruct email chain for a record | `record_sys_id` |
| `debug_integration_health` | Summarize recent integration errors (ECC queue or REST) | `kind?`, `hours?` |
| `debug_importset_run` | Inspect import set run with row-level results | `import_set_sys_id` |
| `debug_field_mutation_story` | Chronological mutation history of a single field | `table`, `sys_id`, `field`, `limit?` |
### :test_tube: Developer Actions
| Tool | Description | Key Parameters |
|---|---|---|
| `dev_toggle` | Toggle active/inactive on a platform artifact | `artifact_type`, `sys_id`, `active` |
| `dev_set_property` | Set a system property value (returns old value) | `name`, `value` |
| `dev_seed_test_data` | Create test data records with cleanup tracking | `table`, `records` (JSON string), `tag?` |
| `dev_cleanup` | Delete all records previously seeded with a tag | `tag` |
| `table_preview_update` | Preview a record update with field-level diff | `table`, `sys_id`, `changes` (JSON string) |
| `table_apply_update` | Apply a previously previewed update | `preview_token` |
### :mag_right: Investigations
| Tool | Description | Key Parameters |
|---|---|---|
| `investigate_run` | Run a named investigation module | `investigation`, `params?` (JSON string) |
| `investigate_explain` | Get detailed explanation for a specific finding | `investigation`, `element_id` |
**Available investigation modules:**
| Module | What it does |
|---|---|
| `stale_automations` | Find disabled or unused business rules, flows, and scheduled jobs |
| `deprecated_apis` | Scan scripts for deprecated ServiceNow API usage |
| `table_health` | Analyze table size, index coverage, and schema issues |
| `acl_conflicts` | Detect conflicting or redundant ACL rules |
| `error_analysis` | Aggregate and categorize recent errors from syslog |
| `slow_transactions` | Find slow-running transactions from sys_log_transaction |
| `performance_bottlenecks` | Identify performance issues across flows, queries, and scripts |
### :page_facing_up: Documentation
| Tool | Description | Key Parameters |
|---|---|---|
| `docs_logic_map` | Generate lifecycle logic map of all automations on a table | `table` |
| `docs_artifact_summary` | Generate artifact summary with dependency analysis | `artifact_type`, `sys_id` |
| `docs_test_scenarios` | Analyze script and suggest test scenarios | `artifact_type`, `sys_id` |
| `docs_review_notes` | Scan script for anti-patterns and generate review notes | `artifact_type`, `sys_id` |
---
## Tool Packages
Control which tools are loaded using the `MCP_TOOL_PACKAGE` environment variable:
| Package | Tools Loaded | Use Case |
|---|---|---|
| `dev_debug` (default) | All 33 tools | Full development and debugging |
| `full` | All 33 tools | Same as dev_debug |
| `introspection_only` | Introspection + Relationships + Metadata (10 tools) | Read-only exploration |
| `none` | Only `list_tool_packages` | Minimal / testing |
---
## Safety & Policy
The server includes built-in guardrails that are always active:
- **Table deny list** -- Sensitive tables like `sys_user_has_role` and `sys_user_grmember` are blocked from queries
- **Sensitive field masking** -- Fields whose names contain patterns like `password`, `token`, `secret`, `credential`, `api_key`, or `private_key` are masked with the literal value `***MASKED***` in responses
- **Row limit caps** -- User-supplied `limit` parameters are capped at `MAX_ROW_LIMIT` (default 100). If a larger value is requested, the limit is reduced and a warning is included in the response
- **Large table protection** -- Tables listed in `LARGE_TABLE_NAMES_CSV` require date-bounded filters in queries to prevent full-table scans
- **Write gating** -- All write operations (`dev_toggle`, `dev_set_property`, `dev_seed_test_data`, `table_preview_update`, etc.) are blocked when `SERVICENOW_ENV=prod`
- **Standardized responses** -- Every tool returns a JSON envelope with `correlation_id`, `status`, and `data`, and may include `pagination` and `warnings` when applicable, for consistent error handling
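As an illustration of the masking guardrail, here is a hypothetical re-implementation (not the server's actual code) that matches field names against the sensitive-name patterns listed above and substitutes the literal `***MASKED***`:

```python
import re

# Field-name patterns treated as sensitive (from the guardrail description)
SENSITIVE = re.compile(
    r"password|token|secret|credential|api_key|private_key", re.IGNORECASE
)

def mask_record(record: dict) -> dict:
    """Return a copy of the record with sensitive field values masked."""
    return {
        field: "***MASKED***" if SENSITIVE.search(field) else value
        for field, value in record.items()
    }

row = {"user_name": "abel.tuter", "user_password": "hunter2", "sso_token": "abc"}
print(mask_record(row))
```

Masking keys by name rather than inspecting values keeps the check cheap and deterministic, at the cost of missing secrets stored under innocuous field names.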
---
## Example Prompts
Here are some real-world prompts you can use with an AI agent that has this MCP server connected:
> Describe the incident table and show me all the business rules that fire on it.
> Query the last 10 P1 incidents that were resolved this week and show who resolved them.
> Trace the full lifecycle of incident INC0010042 -- show me every field change, comment, and log entry.
> Inspect update set "Q1 Release" and generate release notes. Flag any risky changes.
> Run the stale_automations investigation and explain the top findings.
> Find all scripts that reference the "cmdb_ci_server" table and check them for anti-patterns.
> Seed 3 test incidents with different priorities, verify they were created, then clean them up.
> Show me the performance bottlenecks investigation and explain any slow transactions found.
---
## Development
```bash
# Clone the repository
git clone https://github.com/xerrion/servicenow-devtools-mcp.git
cd servicenow-devtools-mcp
# Install dependencies
uv sync
# Run unit tests (207 tests)
uv run pytest
# Run integration tests (requires .env.local with real credentials)
uv run pytest -m integration
# Run the server locally
uv run servicenow-devtools-mcp
```
---
## License
[MIT](LICENSE)
| text/markdown | null | Lasse Nielsen <lasse@xerrion.dk> | null | null | null | debugging, devtools, mcp, model-context-protocol, servicenow | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Debuggers",
"Topic :: Software Development :: Libraries :: Application Frameworks"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"httpx>=0.27.0",
"mcp>=1.0.0",
"pydantic-settings>=2.0.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"starlette>=0.38.0",
"uvicorn>=0.30.0"
] | [] | [] | [] | [
"Homepage, https://github.com/xerrion/servicenow-devtools-mcp",
"Repository, https://github.com/xerrion/servicenow-devtools-mcp",
"Issues, https://github.com/xerrion/servicenow-devtools-mcp/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T18:26:40.188873 | servicenow_devtools_mcp-0.2.2-py3-none-any.whl | 48,541 | b3/22/a63de0f45cc7a5a4ec78e8dff16b5ae55c2e5d5f71ca2601bc56b485b28e/servicenow_devtools_mcp-0.2.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 19024cf3333864aa60c9de93914a0b8f | bf36b60762ebaea4022a8f35d4659043de7cef0c118aae22367032e1cee18da6 | b322a63de0f45cc7a5a4ec78e8dff16b5ae55c2e5d5f71ca2601bc56b485b28e | MIT | [
"LICENSE"
] | 191 |
2.4 | cogent-ai | 1.17.3 | Production AI agent framework with memory control and semantic caching | # Cogent
<p align="center">
<strong>Build AI agents that actually work.</strong>
</p>
<p align="center">
📚 <strong>Documentation: <a href="https://milad-o.github.io/cogent">https://milad-o.github.io/cogent</a></strong>
</p>
<p align="center">
<a href="https://github.com/milad-o/cogent/releases"><img src="https://img.shields.io/badge/version-1.17.3-blue.svg" alt="Version"></a>
<a href="https://github.com/milad-o/cogent/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-green.svg" alt="License"></a>
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.13+-blue.svg" alt="Python"></a>
<a href="https://milad-o.github.io/cogent"><img src="https://img.shields.io/badge/docs-latest-brightgreen.svg" alt="Documentation"></a>
<a href="https://github.com/milad-o/cogent/tree/main/tests"><img src="https://img.shields.io/badge/tests-1342-blue.svg" alt="Tests"></a>
</p>
<p align="center">
<a href="#installation">Installation</a> •
<a href="#quick-start">Quick Start</a> •
<a href="#core-architecture">Architecture</a> •
<a href="#capabilities">Capabilities</a> •
<a href="#examples">Examples</a>
</p>
---
Cogent is a **production AI agent framework** built on cutting-edge research in memory control and semantic caching. Unlike frameworks focused on multi-agent orchestration, Cogent emphasizes **bounded memory**, **reasoning artifacts caching**, and **tool augmentation** for superior performance and reliability.
**Why Cogent?**
- 🧠 **Memory Control** — Bio-inspired bounded memory prevents context drift and poisoning
- ⚡ **Semantic Caching** — Cache reasoning artifacts (intents, plans) at 80%+ hit rates
- 🚀 **Fast** — Parallel tool execution, cached model binding, direct SDK calls
- 🔧 **Simple** — Define tools with `@tool`, create agents in 3 lines, no boilerplate
- 🏭 **Production-ready** — Built-in resilience, observability, and security interceptors
- 📦 **Batteries included** — File system, web search, code sandbox, browser, PDF, and more
```python
from cogent import Agent, tool
@tool
async def search(query: str) -> str:
"""Search the web."""
    # Your search implementation goes here
    results = "..."
    return results
agent = Agent(name="Assistant", model="gpt-4o-mini", tools=[search])
result = await agent.run("Find the latest news on AI agents")
```
---
## 🎉 Latest Changes (v1.17.3)
**Context Propagation & Query Tracking** 🔄
- ✨ **RunContext.query** — Track original user request through entire delegation chain
- 🔧 **Agent.as_tool()** — Context flows automatically (like regular tools)
- 📝 **model_kwargs** — Pass model-specific config (e.g., `thinking_budget` for Gemini)
- 💡 **Common Patterns** — Docs for delegation depth, retry tracking, task lineage
**API Improvements:**
- `isolate_context=False` (default) — Context flows automatically to sub-agents
- `ctx.query` — Access original user request in delegated agents
- `Agent(model_kwargs={"thinking_budget": 16384})` — Model-specific configuration
- Gemini defaults: `gemini-2.5-flash` model, `thinking_budget=0` (opt-in)
```python
# Context flows automatically through delegation
specialist = Agent(name="Expert", model="gpt-4o", tools=[...])
orchestrator = Agent(name="Main", model="gpt-4o", tools=[specialist.as_tool()])
# Sub-agents can access ctx.query (original user request)
result = await orchestrator.run("Can I delete files?", context=ctx)
```
See [CHANGELOG.md](CHANGELOG.md) for full version history and migration guide.
---
## Features
- **Native Executor** — High-performance parallel tool execution with zero framework overhead
- **Native Model Support** — OpenAI, Azure, Anthropic, Gemini, Groq, Ollama, Custom endpoints
- **Capabilities** — Filesystem, Web Search, Code Sandbox, Browser, PDF, Shell, MCP, Spreadsheet, and more
- **RAG Pipeline** — Document loading, per-file-type splitting, embeddings, vector stores, retrievers
- **Memory & Persistence** — Conversation history, long-term memory with fuzzy matching ([docs/memory.md](docs/memory.md))
- **Memory Control (ACC)** — Bio-inspired bounded memory prevents drift ([docs/acc.md](docs/acc.md))
- **Semantic Caching** — Cache reasoning artifacts at 80%+ hit rates ([docs/memory.md#semantic-cache](docs/memory.md#semantic-cache))
- **Observability** — Tracing, metrics, progress tracking, structured logging
- **TaskBoard** — Built-in task tracking for complex multi-step workflows
- **Interceptors** — Budget guards, rate limiting, PII protection, tool gates
- **Resilience** — Retry policies, circuit breakers, fallbacks
- **Human-in-the-Loop** — Tool approval, guidance, interruption handling
- **Streaming** — Real-time token streaming with callbacks
- **Structured Output** — Type-safe responses (Pydantic, dataclass, TypedDict, primitives, Literal, Union, Enum, collections, dict, None)
- **Reasoning** — Extended thinking mode with chain-of-thought
---
## Modules
Cogent is organized into focused modules, each with multiple backends and implementations.
### `cogent.models` — LLM Providers
Native SDK wrappers for all major LLM providers with zero abstraction overhead.
| Provider | Chat | Embeddings | String Alias | Notes |
|----------|------|------------|--------------|-------|
| **OpenAI** | `OpenAIChat` | `OpenAIEmbedding` | `"gpt4"`, `"gpt-4o"`, `"gpt-4o-mini"` | GPT-4o series, o1, o3 |
| **Azure** | `AzureOpenAIChat` | `AzureOpenAIEmbedding` | — | Managed Identity, Entra ID auth |
| **Azure AI Foundry** | `AzureAIFoundryChat` | — | — | GitHub Models integration |
| **Anthropic** | `AnthropicChat` | — | `"claude"`, `"claude-opus"` | Claude 3.5 Sonnet, extended thinking |
| **Gemini** | `GeminiChat` | `GeminiEmbedding` | `"gemini"`, `"gemini-pro"` | Gemini 2.5 Pro/Flash |
| **Groq** | `GroqChat` | — | `"llama"`, `"mixtral"` | Fast inference, Llama 3.3, Mixtral |
| **xAI** | `XAIChat` | — | `"grok"` | Grok 4, Grok 3, vision models |
| **DeepSeek** | `DeepSeekChat` | — | `"deepseek"` | DeepSeek Chat, DeepSeek Reasoner |
| **Cerebras** | `CerebrasChat` | — | `"cerebras"` | Ultra-fast inference with WSE-3 |
| **Mistral** | `MistralChat` | `MistralEmbedding` | `"mistral"`, `"codestral"` | Mistral Large, Ministral |
| **Cohere** | `CohereChat` | `CohereEmbedding` | `"command"`, `"command-r"` | Command R+, Aya |
| **Cloudflare** | `CloudflareChat` | `CloudflareEmbedding` | — | Workers AI (@cf/...) |
| **Ollama** | `OllamaChat` | `OllamaEmbedding` | `"ollama"` | Local models, any GGUF |
| **Custom** | `CustomChat` | `CustomEmbedding` | — | vLLM, Together AI, any OpenAI-compatible |
```python
# 3 ways to create models
# 1. Simple strings (recommended)
agent = Agent("Helper", model="gpt4")
agent = Agent("Helper", model="claude")
agent = Agent("Helper", model="gemini")
# 2. Factory functions
from cogent import create_chat, create_embedding
model = create_chat("gpt4") # String alias
model = create_chat("gpt-4o-mini") # Model name
model = create_chat("claude-sonnet-4") # Auto-detects provider
model = create_chat("grok-4") # xAI Grok
model = create_chat("deepseek-chat") # DeepSeek
embeddings = create_embedding("openai:text-embedding-3-small") # Explicit provider:model
# 3. Direct instantiation (full control)
from cogent.models import OpenAIChat, XAIChat, DeepSeekChat
model = OpenAIChat(model="gpt-4o", temperature=0.7, api_key="sk-...")
model = XAIChat(model="grok-4", api_key="xai-...")
model = DeepSeekChat(model="deepseek-reasoner", api_key="sk-...")
```
### `cogent.capabilities` — Agent Capabilities
Composable tools that plug into any agent. Each capability adds related tools.
| Capability | Description | Tools Added |
|------------|-------------|-------------|
| **HTTPClient** | Full-featured HTTP client | `http_request`, `http_get`, `http_post` with retries, timeouts |
| **Database** | Async SQL database access | `execute_query`, `fetch_one`, `fetch_all` with connection pooling |
| **APITester** | HTTP endpoint testing | `test_endpoint`, `assert_status`, `assert_json` |
| **DataValidator** | Schema validation | `validate_data`, `validate_json`, `validate_dict` with Pydantic |
| **WebSearch** | Web search with caching | `web_search`, `news_search` with semantic cache |
| **Browser** | Playwright automation | `navigate`, `click`, `fill`, `screenshot` |
| **FileSystem** | Sandboxed file operations | `read_file`, `write_file`, `list_dir`, `search_files` |
| **CodeSandbox** | Safe Python execution | `execute_python`, `run_function` |
| **Shell** | Sandboxed shell commands | `run_command` |
| **PDF** | PDF processing | `read_pdf`, `create_pdf`, `merge_pdfs` |
| **Spreadsheet** | Excel/CSV operations | `read_spreadsheet`, `write_spreadsheet` |
| **MCP** | Model Context Protocol | Dynamic tools from MCP servers |
```python
from cogent.capabilities import FileSystem, CodeSandbox, WebSearch, HTTPClient, Database
agent = Agent(
name="Assistant",
model="gpt-4o-mini",
capabilities=[
FileSystem(allowed_paths=["./project"]),
CodeSandbox(timeout=30),
WebSearch(),
HTTPClient(),
Database("sqlite:///data.db"),
]
)
```
### `cogent.document` — Document Processing
Load, split, and process documents for RAG pipelines.
**Loaders** — Support for all common file formats:
| Loader | Formats | Notes |
|--------|---------|-------|
| `TextLoader` | `.txt`, `.rst` | Plain text extraction |
| `MarkdownLoader` | `.md` | Markdown with structure |
| `PDFLoader` | `.pdf` | Basic text extraction (pypdf/pdfplumber) |
| `PDFMarkdownLoader` | `.pdf` | Clean markdown output (pymupdf4llm) |
| `PDFVisionLoader` | `.pdf` | Vision model-based extraction |
| `WordLoader` | `.docx` | Microsoft Word documents |
| `HTMLLoader` | `.html`, `.htm` | HTML documents |
| `CSVLoader` | `.csv` | CSV files |
| `JSONLoader` | `.json`, `.jsonl` | JSON documents |
| `XLSXLoader` | `.xlsx` | Excel spreadsheets |
| `CodeLoader` | `.py`, `.js`, `.ts`, `.java`, `.go`, `.rs`, `.cpp`, etc. | Source code files |
**Splitters** — Multiple chunking strategies:
| Splitter | Strategy |
|----------|----------|
| `RecursiveCharacterSplitter` | Hierarchical separators (default) |
| `SentenceSplitter` | Sentence boundary detection |
| `MarkdownSplitter` | Markdown structure-aware |
| `HTMLSplitter` | HTML tag-based |
| `CodeSplitter` | Language-aware code splitting |
| `SemanticSplitter` | Embedding-based semantic chunking |
| `TokenSplitter` | Token count-based |
```python
from cogent.document import DocumentLoader, SemanticSplitter
loader = DocumentLoader()
docs = await loader.load_directory("./documents")
splitter = SemanticSplitter(model=model)
chunks = splitter.split_documents(docs)
```
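The recursive-separator idea behind `RecursiveCharacterSplitter` can be sketched in plain Python. This is an illustration of the general approach (try coarse separators first, recurse with finer ones, then greedily re-merge small pieces), not the library's implementation:

```python
def _merge(pieces: list[str], chunk_size: int, sep: str) -> list[str]:
    # Greedily rejoin small pieces up to chunk_size.
    merged, buf = [], ""
    for piece in pieces:
        candidate = f"{buf}{sep}{piece}" if buf else piece
        if len(candidate) <= chunk_size:
            buf = candidate
        else:
            if buf:
                merged.append(buf)
            buf = piece
    if buf:
        merged.append(buf)
    return merged

def recursive_split(text: str, chunk_size: int, separators=("\n\n", "\n", " ")) -> list[str]:
    if len(text) <= chunk_size:
        return [text]
    if not separators:
        # No separators left: hard-cut at chunk_size.
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    sep, rest = separators[0], separators[1:]
    pieces = []
    for part in text.split(sep):
        if len(part) <= chunk_size:
            pieces.append(part)
        else:
            pieces.extend(recursive_split(part, chunk_size, rest))
    return _merge([p for p in pieces if p], chunk_size, sep)

chunks = recursive_split("Paragraph one.\n\nParagraph two is a bit longer.", 20)
```

Paragraph boundaries are preserved where possible; only oversized pieces fall back to sentence- or word-level cuts.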
### `cogent.vectorstore` — Vector Storage
Semantic search with pluggable backends and embedding providers.
**Backends:**
| Backend | Use Case | Persistence |
|---------|----------|-------------|
| `InMemoryBackend` | Development, small datasets | No |
| `FAISSBackend` | Large-scale local search | Optional |
| `ChromaBackend` | Persistent vector database | Yes |
| `QdrantBackend` | Production vector database | Yes |
| `PgVectorBackend` | PostgreSQL integration | Yes |
**Embedding Providers:**
| Provider | Model Examples |
|----------|----------------|
| `OpenAI` | `openai:text-embedding-3-small`, `openai:text-embedding-3-large` |
| `Ollama` | `ollama:nomic-embed-text`, `ollama:mxbai-embed-large` |
| `Mock` | Testing only |
```python
from cogent import create_embedding
from cogent.vectorstore import VectorStore
from cogent.vectorstore.backends import FAISSBackend
store = VectorStore(
embeddings=create_embedding("openai:text-embedding-3-large"),
backend=FAISSBackend(dimension=3072),
)
```
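Under the hood, semantic search reduces to nearest-neighbor lookup by vector similarity. A minimal cosine-similarity sketch with toy 2-d vectors (not the backends' actual algorithms, which use optimized ANN indexes):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec: list[float], index: list[tuple[str, list[float]]], k: int = 2):
    # Rank stored (id, vector) pairs by cosine similarity to the query.
    return sorted(index, key=lambda item: -cosine(query_vec, item[1]))[:k]

# Toy "embeddings" standing in for a real embedding model's output.
index = [("cats", [1.0, 0.0]), ("dogs", [0.9, 0.1]), ("stocks", [0.0, 1.0])]
best = [doc_id for doc_id, _ in top_k([1.0, 0.05], index)]
```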
### `cogent.memory` — Memory & Persistence
Long-term memory with fuzzy matching (semantic fallback optional), conversation history, and scoped views.
**Stores:**
| Store | Backend | Features |
|-------|---------|----------|
| `InMemoryStore` | Dict | Fast, no persistence |
| `SQLAlchemyStore` | SQLite, PostgreSQL, MySQL | Async, full SQL |
| `RedisStore` | Redis | Distributed, native TTL |
```python
from cogent.memory import Memory, SQLAlchemyStore
memory = Memory(store=SQLAlchemyStore("sqlite+aiosqlite:///./data.db"))
# Scoped views
user_mem = memory.scoped("user:alice")
team_mem = memory.scoped("team:research")
```
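The fuzzy-matching recall mentioned above can be illustrated with the standard library's `difflib` (Cogent's own implementation uses rapidfuzz; `fuzzy_lookup` is a hypothetical helper, not part of the API):

```python
from difflib import SequenceMatcher

def fuzzy_lookup(query: str, keys: list[str], threshold: float = 0.6) -> list[str]:
    # Rank stored keys by string similarity to the query and keep those
    # above the threshold: the essence of fuzzy-match recall, which
    # tolerates typos and spelling variants.
    scored = [(k, SequenceMatcher(None, query.lower(), k.lower()).ratio()) for k in keys]
    return [k for k, score in sorted(scored, key=lambda x: -x[1]) if score >= threshold]

keys = ["favorite color", "favorite food", "home address"]
matches = fuzzy_lookup("favourite colour", keys)
```

British spellings still recall the right entry, while unrelated keys fall below the threshold.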
### `cogent.executors` — Execution Strategies
Pluggable execution strategies that define HOW agents process tasks.
| Executor | Strategy | Use Case |
|----------|----------|----------|
| `NativeExecutor` | Parallel tool execution | Default, high performance |
| `SequentialExecutor` | Sequential tool execution | Ordered dependencies |
**Standalone execution** — bypass Agent class entirely:
```python
from cogent.executors import run
result = await run(
"Search for Python tutorials and summarize",
tools=[search, summarize],
model="gpt-4o-mini",
)
```
### `cogent.interceptors` — Middleware
Composable middleware for cross-cutting concerns.
| Category | Interceptors |
|----------|-------------|
| **Budget** | `BudgetGuard` (token/cost limits) |
| **Security** | `PIIShield`, `ContentFilter` |
| **Rate Limiting** | `RateLimiter`, `ThrottleInterceptor` |
| **Context** | `ContextCompressor`, `TokenLimiter` |
| **Gates** | `ToolGate`, `PermissionGate`, `ConversationGate` |
| **Resilience** | `Failover`, `CircuitBreaker`, `ToolGuard` |
| **Audit** | `Auditor` (event logging) |
| **Prompt** | `PromptAdapter`, `ContextPrompt`, `LambdaPrompt` |
```python
from cogent.interceptors import BudgetGuard, PIIShield, RateLimiter
agent = Agent(
name="Safe",
model="gpt-4o-mini",
intercept=[
BudgetGuard(max_model_calls=100),
PIIShield(patterns=["email", "ssn"]),
RateLimiter(requests_per_minute=60),
]
)
```
### `cogent.observability` — Monitoring & Tracing
Comprehensive monitoring for understanding system behavior.
| Component | Purpose |
|-----------|---------|
| `ExecutionTracer` | Deep execution tracing with spans |
| `MetricsCollector` | Counter, Gauge, Histogram, Timer |
| `ProgressTracker` | Real-time progress output |
| `Observer` | Unified observability with history capture |
| `Dashboard` | Visual inspection interface |
| `Inspectors` | Agent, Task, Event inspection |
**Renderers:** `TextRenderer`, `RichRenderer`, `JSONRenderer`, `MinimalRenderer`
```python
from cogent.observability import ExecutionTracer, ProgressTracker
tracer = ExecutionTracer()
async with tracer.trace("my-operation") as span:
span.set_attribute("user_id", user_id)
result = await do_work()
```
---
## Installation
> **Note:** The package is published as `cogent-ai` on PyPI, but you import it as `cogent` in your code.
```bash
# Install from PyPI
uv add cogent-ai
# With extras
uv add "cogent-ai[vector-stores,retrieval]"
uv add "cogent-ai[database]"
uv add "cogent-ai[all-backend]"
uv add "cogent-ai[all]"
# Or install from source (latest)
uv add git+https://github.com/milad-o/cogent.git
uv add "cogent-ai[all] @ git+https://github.com/milad-o/cogent.git"
```
**Optional dependency groups:**
| Group | Purpose | Includes |
|-------|---------|----------|
| `vector-stores` | Vector databases | FAISS, Qdrant, SciPy |
| `retrieval` | Retrieval libraries | BM25, sentence-transformers |
| `database` | SQL databases | SQLAlchemy, aiosqlite, asyncpg, psycopg2 |
| `infrastructure` | Infrastructure | Redis |
| `web` | Web tools | BeautifulSoup4, DuckDuckGo search |
| `browser` | Browser automation | Playwright |
| `document` | Document processing | PDF, Word, Markdown loaders |
| `api` | API framework | FastAPI, Uvicorn, Starlette |
| `visualization` | Graphs & charts | PyVis, Gravis, Matplotlib, Seaborn, Pandas |
| `anthropic` | Claude models | Anthropic SDK |
| `azure` | Azure models | Azure Identity, Azure AI Inference |
| `cerebras` | Cerebras models | Cerebras Cloud SDK |
| `cohere` | Cohere models | Cohere SDK |
| `gemini` | Gemini models | Google GenAI SDK |
| `groq` | Groq models | Groq SDK |
| `all-providers` | All LLM providers | anthropic, azure, cerebras, cohere, gemini, groq |
| `all-backend` | All backends | vector-stores, retrieval, database, infrastructure |
| `all` | Everything | All above + visualization |
**Development installation:**
```bash
# Core dev tools (linting, type checking)
uv sync --group dev
# Add testing
uv sync --group dev --group test
# Add backend tests (vector stores, databases)
uv sync --group dev --group test --group test-backends
# Add documentation
uv sync --group dev --group test --group test-backends --group docs
```
## Core Architecture
Cogent is built around a high-performance **Native Executor** that eliminates framework overhead while providing enterprise-grade features.
### Native Executor
The executor uses a direct asyncio loop with parallel tool execution—no graph frameworks, no unnecessary abstractions:
```python
from cogent import Agent, tool

@tool
def search(query: str) -> str:
    """Search the web."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression (demo only; eval is unsafe on untrusted input)."""
    return str(eval(expression))
agent = Agent(
name="Assistant",
model="gpt4", # Simple string model
tools=[search, calculate],
)
# Tools execute in parallel when independent
result = await agent.run("Search for Python and calculate 2^10")
```
**Key optimizations:**
- **Parallel tool execution** — Multiple tool calls run concurrently via `asyncio.gather`
- **Cached model binding** — Tools bound once at construction, zero overhead per call
- **Native SDK integration** — Direct OpenAI/Anthropic SDK calls, no translation layers
- **Automatic resilience** — Rate limit retries with exponential backoff built-in
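The parallel execution described above reduces to plain `asyncio.gather`. A minimal, framework-free sketch (the coroutines here are stand-ins, not Cogent internals):

```python
import asyncio

# Stand-in "tools": simple coroutines simulating I/O-bound calls.
async def search(query: str) -> str:
    await asyncio.sleep(0.01)  # simulated network latency
    return f"results for {query}"

async def calculate(expression: str) -> str:
    await asyncio.sleep(0.01)
    return str(2 ** 10) if expression == "2^10" else "?"

async def run_tool_calls(calls):
    # Independent calls are awaited concurrently: total latency is
    # roughly that of the slowest call, not the sum of all calls.
    return await asyncio.gather(*(fn(arg) for fn, arg in calls))

results = asyncio.run(run_tool_calls([(search, "Python"), (calculate, "2^10")]))
```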
### Tool System
Define tools with the `@tool` decorator—automatic schema extraction from type hints and docstrings:
```python
from cogent import tool
from cogent.core.context import RunContext
@tool
def search(query: str, max_results: int = 10) -> str:
"""Search the web for information.
Args:
query: Search query string.
max_results: Maximum results to return.
"""
return f"Found {max_results} results for: {query}"
# With context injection for user/session data
@tool
def get_user_preferences(ctx: RunContext) -> str:
"""Get preferences for the current user."""
return f"Preferences for user {ctx.user_id}"
# Async tools supported
import httpx  # httpx is a core dependency of cogent

@tool
async def fetch_data(url: str) -> str:
    """Fetch data from a URL."""
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        return response.text
```
**Tool features:**
- Type hints → JSON schema conversion
- Docstring → description extraction
- Sync and async function support
- Context injection via `ctx: RunContext` parameter
- Automatic error handling and retries
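The type-hint-to-schema conversion listed above can be sketched with the standard library alone. This illustrates the idea, not Cogent's actual extractor (`extract_schema` and `_JSON_TYPES` are hypothetical names):

```python
import inspect
from typing import get_type_hints

# Hypothetical minimal mapping from annotations to JSON Schema types;
# a real extractor handles many more cases (lists, unions, models).
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def extract_schema(fn):
    hints = get_type_hints(fn)
    hints.pop("return", None)
    params = inspect.signature(fn).parameters
    # Parameters without defaults are required.
    required = [
        name for name in hints
        if params[name].default is inspect.Parameter.empty
    ]
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": {n: {"type": _JSON_TYPES[t]} for n, t in hints.items()},
            "required": required,
        },
    }

def search(query: str, max_results: int = 10) -> str:
    """Search the web for information."""
    return ""

schema = extract_schema(search)
```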
### Standalone Execution
For maximum performance, bypass the Agent class entirely:
```python
from cogent.executors import run
result = await run(
"Search for Python tutorials and summarize the top 3",
tools=[search, summarize],
model="gpt-4o-mini",
)
```
## Quick Start
### Simple Agent
```python
import asyncio
from cogent import Agent, tool
@tool
def get_weather(city: str) -> str:
"""Get current weather for a city."""
return f"Weather in {city}: 72°F, sunny"
async def main():
agent = Agent(
name="Assistant",
model="gpt-4o-mini",
tools=[get_weather],
)
result = await agent.run("What's the weather in Tokyo?")
print(result)
asyncio.run(main())
```
### Multi-Agent with Subagents
```python
from cogent import Agent
# Create specialist agents
data_analyst = Agent(
name="data_analyst",
model="gpt-4o-mini",
instructions="Analyze data and provide statistical insights.",
)
market_researcher = Agent(
name="market_researcher",
model="gpt-4o-mini",
instructions="Research market trends and competitive landscape.",
)
# Coordinator delegates to specialists
coordinator = Agent(
name="coordinator",
model="gpt-4o-mini",
instructions="""Coordinate research tasks:
- Use data_analyst for numerical analysis
- Use market_researcher for market trends
Synthesize their findings.""",
# Simply pass the agents; their names are used automatically
subagents=[data_analyst, market_researcher],
)
# Full metadata preserved (tokens, duration, delegation chain)
result = await coordinator.run("Analyze Q4 2025 e-commerce growth")
print(f"Total tokens: {result.metadata.tokens.total_tokens}") # Includes all subagents
print(f" Prompt: {result.metadata.tokens.prompt_tokens}")
print(f" Completion: {result.metadata.tokens.completion_tokens}")
if result.metadata.tokens.reasoning_tokens:
print(f" Reasoning: {result.metadata.tokens.reasoning_tokens}")
print(f"Subagent calls: {len(result.subagent_responses)}")
```
## Streaming
```python
agent = Agent(
name="Writer",
model="gpt-4o-mini",
stream=True,
)
async for chunk in agent.run_stream("Write a poem"):
print(chunk.content, end="", flush=True)
```
## Human-in-the-Loop
```python
from cogent import Agent
from cogent.agent import InterruptedException
agent = Agent(
name="Assistant",
model="gpt-4o-mini",
tools=[sensitive_tool],
interrupt_on={"tools": ["sensitive_tool"]}, # Require approval
)
try:
result = await agent.run("Do something sensitive")
except InterruptedException as e:
# Handle approval flow
decision = await get_human_decision(e.pending_action)
result = await agent.resume(e.state, decision)
```
## Observability
```python
from cogent import Agent
from cogent.observability import Observer, ObservabilityLevel
# Verbosity levels for agents
agent = Agent(
name="Assistant",
model="gpt-4o-mini",
verbosity="debug", # off | result | progress | detailed | debug | trace
)
# Or use enum/int
agent = Agent(model=model, verbosity=ObservabilityLevel.DEBUG) # Enum
agent = Agent(model=model, verbosity=4) # Int (0-5)
# Boolean shorthand
agent = Agent(model=model, verbosity=True) # → PROGRESS level
# With observer for history capture
observer = Observer(level="detailed", capture=["tool.result", "agent.*"])
result = await agent.run("Query", observer=observer)
# Access captured events
for event in observer.history():
print(event)
```
## Interceptors
Control execution flow with middleware:
```python
from cogent.interceptors import (
BudgetGuard, # Token/cost limits
RateLimiter, # Request throttling
PIIShield, # Redact sensitive data
ContentFilter, # Block harmful content
ToolGate, # Conditional tool access
PromptAdapter, # Modify prompts dynamically
Auditor, # Audit logging
)
agent = Agent(
name="Safe",
model="gpt-4o-mini",
intercept=[
BudgetGuard(max_model_calls=100, max_tool_calls=500),
PIIShield(patterns=["email", "ssn"]),
RateLimiter(requests_per_minute=60),
],
)
```
## Structured Output
Type-safe responses with comprehensive type support and automatic validation:
**Supported Types:**
- **Structured Models**: `BaseModel`, `dataclass`, `TypedDict`
- **Primitives**: `str`, `int`, `bool`, `float`
- **Constrained**: `Literal["A", "B", "C"]`
- **Collections**: `list[T]`, `set[T]`, `tuple[T, ...]` (wrap in models for reliability)
- **Polymorphic**: `Union[A, B]` (agent chooses schema)
- **Enumerations**: `Enum` types
- **Dynamic**: `dict` (agent decides structure)
- **Confirmation**: `None` type
```python
from pydantic import BaseModel
from typing import Literal, Union
from enum import Enum
from cogent import Agent
# Structured models
class Analysis(BaseModel):
sentiment: str
confidence: float
topics: list[str]
# Configure on agent (all calls use schema)
agent = Agent(
name="Analyzer",
model="gpt-4o-mini",
output=Analysis, # Enforce schema on all runs
)
result = await agent.run("Analyze: I love this product!")
print(result.content.data.sentiment) # "positive"
print(result.content.data.confidence) # 0.95
# OR: Per-call override (more flexible)
agent = Agent(name="Analyzer", model="gpt-4o-mini") # No default schema
result = await agent.run(
"Analyze: I love this product!",
output=Analysis, # Schema for this call only
)
print(result.content.data.sentiment) # "positive"
# Bare types - return primitive values directly
agent = Agent(name="Reviewer", model="gpt-4o-mini")
result = await agent.run(
"Review this code",
output=Literal["APPROVE", "REJECT"], # Per-call schema
)
print(result.content.data) # "APPROVE" (bare string)
# Collections - wrap in models for reliability
class Tags(BaseModel):
items: list[str]
agent = Agent(name="Tagger", model="gpt-4o-mini", output=Tags)
result = await agent.run("Extract tags from: Python async FastAPI")
print(result.content.data.items) # ["Python", "async", "FastAPI"]
# Union types - polymorphic responses
from typing import Union
class Success(BaseModel):
status: Literal["success"] = "success"
result: str
class Error(BaseModel):
status: Literal["error"] = "error"
message: str
agent = Agent(name="Handler", model="gpt-4o-mini", output=Union[Success, Error])
# Agent chooses schema based on content
# Enum types
from enum import Enum
class Priority(str, Enum):
LOW = "low"
HIGH = "high"
agent = Agent(name="Prioritizer", model="gpt-4o-mini", output=Priority)
result = await agent.run("Server is down!")
print(result.content.data) # Priority.HIGH
# Dynamic structure - agent decides fields
agent = Agent(name="Analyzer", model="gpt-4o-mini", output=dict)
result = await agent.run("Analyze user feedback")
print(result.content.data) # {"sentiment": "positive", "score": 8, ...}
# Other bare types: str, int, bool, float
agent = Agent(name="Counter", model="gpt-4o-mini", output=int)
result = await agent.run("Count the items")
print(result.content.data) # 5 (bare int)
```
## Reasoning
Extended thinking for complex problems with AI-controlled rounds:
```python
from cogent import Agent
from cogent.agent.reasoning import ReasoningConfig, ReasoningStyle
# Simple: Enable with defaults
agent = Agent(
name="Analyst",
model="gpt-4o",
reasoning=True, # AI decides when ready (up to 10 rounds)
)
# Custom config
agent = Agent(
name="DeepThinker",
model="gpt-4o",
reasoning=ReasoningConfig(
max_thinking_rounds=15, # Safety limit
style=ReasoningStyle.CRITICAL,
),
)
# Per-call override
result = await agent.run(
"Complex analysis task",
reasoning=True, # Enable for this call only
)
```
**Reasoning Styles:** `ANALYTICAL`, `EXPLORATORY`, `CRITICAL`, `CREATIVE`
## Resilience
```python
from cogent.agent import ResilienceConfig, RetryPolicy
agent = Agent(
name="Resilient",
model="gpt-4o-mini",
resilience=ResilienceConfig(
retry=RetryPolicy(max_attempts=3, backoff_multiplier=2.0),
timeout=30.0,
circuit_breaker=True,
),
)
```
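The retry policy configured above (3 attempts, 2x backoff) amounts to the classic exponential-backoff loop. A minimal sketch, independent of Cogent's internals:

```python
import asyncio

async def retry_with_backoff(fn, max_attempts=3, base_delay=0.01, multiplier=2.0):
    # Retry an async callable, multiplying the wait after each failure.
    for attempt in range(max_attempts):
        try:
            return await fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # attempts exhausted: surface the last error
            await asyncio.sleep(base_delay * multiplier ** attempt)

calls = {"count": 0}

async def flaky():
    # Fails twice, then succeeds; mimics a transient rate-limit error.
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = asyncio.run(retry_with_backoff(flaky))
```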
## Configuration
Use environment variables or `.env`:
```bash
# LLM Provider
OPENAI_API_KEY=sk-...
# Azure
AZURE_OPENAI_ENDPOINT=https://...
AZURE_OPENAI_DEPLOYMENT=gpt-4o
AZURE_OPENAI_AUTH_TYPE=managed_identity
AZURE_OPENAI_CLIENT_ID=... # optional (user-assigned managed identity)
# Azure (service principal / client secret)
# AZURE_OPENAI_AUTH_TYPE=client_secret
# AZURE_OPENAI_TENANT_ID=...
# AZURE_OPENAI_CLIENT_ID=...
# AZURE_OPENAI_CLIENT_SECRET=...
# Anthropic
ANTHROPIC_API_KEY=...
# Ollama (local)
OLLAMA_HOST=http://localhost:11434
```
## Examples
See `examples/` for complete examples organized by category:
### Basics (`examples/basics/`)
| Example | Description |
|---------|-------------|
| `hello_world.py` | Simple agent with tools |
| `memory.py` | Conversation persistence |
| `memory_layers.py` | Multi-layer memory management |
| `memory_semantic_search.py` | Semantic memory search |
| `streaming.py` | Real-time token streaming |
| `structured_output.py` | Type-safe responses (12 patterns) |
### Capabilities (`examples/capabilities/`)
| Example | Description |
|---------|-------------|
| `browser.py` | Web browsing with Playwright |
| `code_sandbox.py` | Safe Python execution |
| `codebase_analyzer.py` | Code analysis agent |
| `data_validator.py` | Schema validation |
| `database_agent.py` | SQL database operations |
| `filesystem.py` | File system operations |
| `http_agent.py` | HTTP client capability |
| `kg_agent_viz.py` | Knowledge graph visualization |
| `knowledge_graph.py` | Knowledge graph construction |
| `mcp_example.py` | Model Context Protocol integration |
| `shell.py` | Shell command execution |
| `spreadsheet.py` | Excel/CSV operations |
| `web_search.py` | Web search with caching |
### Advanced (`examples/advanced/`)
| Example | Description |
|---------|-------------|
| `acc.py` | Adaptive Context Control (bounded memory) |
| `acc_comparison.py` | ACC vs standard memory comparison |
| `complex_task.py` | Multi-step task handling |
| `content_review.py` | Content moderation |
| `context_layer.py` | Context management |
| `deferred_tools.py` | Deferred tool execution |
| `executors_demo.py` | Executor strategies (Sequential, Tree Search) |
| `human_in_the_loop.py` | Approval workflows |
| `interceptors.py` | Middleware patterns |
| `model_thinking.py` | Extended thinking mode |
| `reasoning.py` | Reasoning strategies |
| `semantic_cache.py` | Semantic caching demo |
| `single_vs_multi_agent.py` | Single vs delegated agents |
| `tactical_delegation.py` | Dynamic agent spawning |
| `taskboard.py` | TaskBoard for complex workflows |
### Retrieval (`examples/retrieval/`)
| Example | Description |
|---------|-------------|
| `finance_table_example.py` | Financial data extraction |
| `hyde.py` | Hypothetical Document Embeddings |
| `pdf_summarizer.py` | PDF document summarization |
| `pdf_vision_showcase.py` | Vision-based PDF extraction |
| `retrievers.py` | 12 retriever strategies (Dense, BM25, Hybrid, etc.) |
| `summarizer.py` | Document summarization strategies |
### Observability (`examples/observability/`)
| Example | Description |
|---------|-------------|
| `agent_lifecycle.py` | Agent lifecycle events |
| `custom_formatter.py` | Custom log formatting |
| `custom_sink.py` | Custom logging sinks |
| `deep_tracing.py` | Deep execution tracing |
| `enhanced_features.py` | Enhanced observability features |
| `observer.py` | Observer pattern usage |
| `reasoning_observability.py` | Reasoning mode observability |
| `response_metadata.py` | Response metadata tracking |
| `thinking_observability.py` | Extended thinking observability |
| `tool_resilience.py` | Tool resilience monitoring |
## Development
```bash
# Install with dev dependencies
uv sync --group dev
# Run tests
uv run pytest
# Type checking
uv run mypy src/cogent
# Linting
uv run ruff check src/cogent
```
## License
MIT License
| text/markdown | Milad Olad | null | null | null | MIT | agents, ai, caching, llm, memory, reasoning, tools | [
"Development Status :: 5 - Production/Stable",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.28.1",
"networkx>=3.6",
"openai>=1.0.0",
"pydantic-settings>=2.12.0",
"python-dotenv>=1.0.0",
"rapidfuzz>=3.14.3",
"structlog>=25.5.0",
"aiosqlite>=0.21.0; extra == \"all\"",
"anthropic>=0.75.0; extra == \"all\"",
"asyncpg>=0.31.0; extra == \"all\"",
"azure-ai-inference>=1.0.0b9; extra == \"all\"",
"azure-identity>=1.25.1; extra == \"all\"",
"beautifulsoup4>=4.14.2; extra == \"all\"",
"cerebras-cloud-sdk>=1.64.1; extra == \"all\"",
"cohere>=5.20.0; extra == \"all\"",
"ddgs>=9.9.1; extra == \"all\"",
"faiss-cpu<2,>=1.7; python_version < \"3.14\" and extra == \"all\"",
"fastapi>=0.115.0; extra == \"all\"",
"google-genai>=1.57.0; extra == \"all\"",
"gravis>=0.1.0; extra == \"all\"",
"greenlet>=3.2.4; extra == \"all\"",
"groq>=0.15.0; extra == \"all\"",
"matplotlib>=3.9.0; extra == \"all\"",
"mcp>=1.22.0; extra == \"all\"",
"pandas>=2.2.0; extra == \"all\"",
"pdfplumber>=0.11.8; extra == \"all\"",
"playwright>=1.56.0; extra == \"all\"",
"psycopg2-binary>=2.9.11; extra == \"all\"",
"pyarrow>=22.0.0; extra == \"all\"",
"pymupdf-layout>=1.26.6; extra == \"all\"",
"pymupdf4llm>=0.2.6; extra == \"all\"",
"pymupdf>=1.26.6; extra == \"all\"",
"pypdf>=6.4.0; extra == \"all\"",
"pyvis>=0.3.2; extra == \"all\"",
"qdrant-client>=1.16.2; extra == \"all\"",
"rank-bm25>=0.2.2; extra == \"all\"",
"redis>=5.0.0; extra == \"all\"",
"reportlab>=4.4.5; extra == \"all\"",
"scipy>=1.17.0; extra == \"all\"",
"seaborn>=0.13.0; extra == \"all\"",
"sentence-transformers>=5.2.0; extra == \"all\"",
"sqlalchemy>=2.0.44; extra == \"all\"",
"starlette>=0.50.0; extra == \"all\"",
"uvicorn>=0.38.0; extra == \"all\"",
"websockets>=15.0.1; extra == \"all\"",
"aiosqlite>=0.21.0; extra == \"all-backend\"",
"asyncpg>=0.31.0; extra == \"all-backend\"",
"faiss-cpu<2,>=1.7; python_version < \"3.14\" and extra == \"all-backend\"",
"greenlet>=3.2.4; extra == \"all-backend\"",
"psycopg2-binary>=2.9.11; extra == \"all-backend\"",
"qdrant-client>=1.16.2; extra == \"all-backend\"",
"rank-bm25>=0.2.2; extra == \"all-backend\"",
"redis>=5.0.0; extra == \"all-backend\"",
"scipy>=1.17.0; extra == \"all-backend\"",
"sentence-transformers>=5.2.0; extra == \"all-backend\"",
"sqlalchemy>=2.0.44; extra == \"all-backend\"",
"anthropic>=0.75.0; extra == \"all-providers\"",
"azure-ai-inference>=1.0.0b9; extra == \"all-providers\"",
"azure-identity>=1.25.1; extra == \"all-providers\"",
"cerebras-cloud-sdk>=1.64.1; extra == \"all-providers\"",
"cohere>=5.20.0; extra == \"all-providers\"",
"google-genai>=1.57.0; extra == \"all-providers\"",
"groq>=0.15.0; extra == \"all-providers\"",
"pyarrow>=22.0.0; extra == \"analytics\"",
"anthropic>=0.75.0; extra == \"anthropic\"",
"fastapi>=0.115.0; extra == \"api\"",
"starlette>=0.50.0; extra == \"api\"",
"uvicorn>=0.38.0; extra == \"api\"",
"azure-ai-inference>=1.0.0b9; extra == \"azure\"",
"azure-identity>=1.25.1; extra == \"azure\"",
"playwright>=1.56.0; extra == \"browser\"",
"cerebras-cloud-sdk>=1.64.1; extra == \"cerebras\"",
"cohere>=5.20.0; extra == \"cohere\"",
"aiosqlite>=0.21.0; extra == \"database\"",
"asyncpg>=0.31.0; extra == \"database\"",
"greenlet>=3.2.4; extra == \"database\"",
"psycopg2-binary>=2.9.11; extra == \"database\"",
"sqlalchemy>=2.0.44; extra == \"database\"",
"pdfplumber>=0.11.8; extra == \"document\"",
"pymupdf-layout>=1.26.6; extra == \"document\"",
"pymupdf4llm>=0.2.6; extra == \"document\"",
"pymupdf>=1.26.6; extra == \"document\"",
"pypdf>=6.4.0; extra == \"document\"",
"reportlab>=4.4.5; extra == \"document\"",
"google-genai>=1.57.0; extra == \"gemini\"",
"groq>=0.15.0; extra == \"groq\"",
"redis>=5.0.0; extra == \"infrastructure\"",
"mcp>=1.22.0; extra == \"mcp\"",
"websockets>=15.0.1; extra == \"mcp\"",
"rank-bm25>=0.2.2; extra == \"retrieval\"",
"sentence-transformers>=5.2.0; extra == \"retrieval\"",
"faiss-cpu<2,>=1.7; python_version < \"3.14\" and extra == \"vector-stores\"",
"qdrant-client>=1.16.2; extra == \"vector-stores\"",
"scipy>=1.17.0; extra == \"vector-stores\"",
"gravis>=0.1.0; extra == \"visualization\"",
"matplotlib>=3.9.0; extra == \"visualization\"",
"pandas>=2.2.0; extra == \"visualization\"",
"pyvis>=0.3.2; extra == \"visualization\"",
"seaborn>=0.13.0; extra == \"visualization\"",
"beautifulsoup4>=4.14.2; extra == \"web\"",
"ddgs>=9.9.1; extra == \"web\""
] | [] | [] | [] | [
"Homepage, https://github.com/milad-o/cogent",
"Repository, https://github.com/milad-o/cogent",
"Issues, https://github.com/milad-o/cogent/issues"
] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T18:26:31.351573 | cogent_ai-1.17.3-py3-none-any.whl | 613,028 | 0e/b6/5ed431181cd5bd9a18c6906ee1bf0133e678065504a38d267b6e35faf2d0/cogent_ai-1.17.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 063583ee2ae21023bdbd71450afa15d0 | 9895f54041e9d4aafcdd97218733606c944a438fcb9cea88cb03c66dd6df03c7 | 0eb65ed431181cd5bd9a18c6906ee1bf0133e678065504a38d267b6e35faf2d0 | null | [
"LICENSE"
] | 199 |