metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | bbstrader | 2.0.7 | Simplified Investment & Trading Toolkit with Python & C++ | # Simplified Investment & Trading Toolkit with Python & C++
[](https://github.com/bbalouki/bbstrader/actions/workflows/build.yml)
[](https://bbstrader.readthedocs.io/en/latest/?badge=latest)
[](https://pypi.python.org/pypi/bbstrader)



[](https://pypi.org/project/bbstrader/)
[](https://pypi.org/project/bbstrader/)

[](https://isocpp.org/std/the-standard)
[](https://cmake.org/)
[](https://pepy.tech/projects/bbstrader)
[](https://www.codefactor.io/repository/github/bbalouki/bbstrader)
[](https://www.linkedin.com/in/bertin-balouki-s-15b17a1a6)
## Welcome to `bbstrader` – The Ultimate C++ & Python Trading Powerhouse!
## Table of Contents
- [Overview](#overview)
- [Why `bbstrader` Stands Out](#why-bbstrader-stands-out)
- [Trusted by Traders Worldwide](#trusted-by-traders-worldwide)
- [The `bbstrader` Edge: Uniting C++ Speed with Python Flexibility](#the-bbstrader-edge-uniting-c-speed-with-python-flexibility)
- [Overcoming the MQL5 Bottleneck](#overcoming-the-mql5-bottleneck)
- [Key Modules](#key-modules)
- [1. `btengine`: Event-Driven Backtesting Beast](#1-btengine-event-driven-backtesting-beast)
- [2. `metatrader`: The C++/Python Bridge to MT5](#2-metatrader-the-cpython-bridge-to-mt5)
- [Pattern 1: C++ Core, Python Orchestrator (Maximum Performance)](#pattern-1-c-core-python-orchestrator-maximum-performance)
- [Pattern 2: Python-Driven with C++ Acceleration](#pattern-2-python-driven-with-c-acceleration)
- [3. `trading`: Live Execution & Strategy Orchestrator](#3-trading-live-execution--strategy-orchestrator)
- [4. `models`: Quant Toolkit for Signals & Risk](#4-models-quant-toolkit-for-signals--risk)
- [Getting Started](#getting-started)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [For the Python Quant](#for-the-python-quant)
- [For the C++ Developer](#for-the-c-developer)
- [CLI workflow](#cli-workflow)
- [Community & Support](#-community--support)
- [Professional Services](#professional-services)
### Overview
Imagine having the raw, blistering speed of C++ for your high-frequency trades, combined with Python's ecosystem for lightning-fast prototyping, advanced AI models, and seamless data analysis. That's `bbstrader` – not just a library, but a game-changing toolkit designed for quants, algo traders, and institutional pros who demand an edge in volatile markets. Whether you're scalping forex pairs, backtesting complex strategies, or copying trades across accounts in real-time, `bbstrader` empowers you to build, test, and deploy with unmatched efficiency.
Forget the frustrations of slow Python bottlenecks or MQL5's rigid sandbox. `bbstrader` bridges worlds: C++ for mission-critical performance and Python for intelligent orchestration. It's open-source, battle-tested across platforms, and ready to supercharge your trading arsenal.
## **Why `bbstrader` Stands Out**
In a crowded field of trading libraries, `bbstrader` is architected to solve the most challenging problems in algorithmic trading: performance, flexibility, and platform limitations.
- **Blazing Speed with C++ Core**: Compile your strategy logic in native C++ for deterministic, low-latency execution. Perfect for HFT, arbitrage, or compute-heavy models that Python alone can't handle.
- **Python's Powerhouse Ecosystem**: Leverage `NumPy`, `pandas`, `scikit-learn`, `TensorFlow`, and more for research, ML-driven signals, and backtesting – all seamlessly integrated with your C++ core.
- **Institutional-Grade Architecture:** From its event-driven backtester to its modular design, `bbstrader` is built with the principles of professional trading systems in mind, providing a robust foundation for serious strategy development.
- **Break Free from MQL5 Limits**: Ditch interpreted code and ecosystem constraints. Build multi-threaded, AI-infused strategies that execute orders in microseconds via MetaTrader 5 (MT5) integration.
- **Flexible Interface (CLI & GUI)**: `bbstrader` adapts to your workflow.
  - **Automation Fanatics**: Use the CLI for headless scripts, cron jobs, and server deployments.
  - **Visual Traders**: Launch the Desktop GUI (currently for Copy Trading) to monitor your master and slave accounts, check replication status, and manage connections visually.
- **Cross-Platform & Future-Proof**: Works on Windows, macOS, and Linux. (IBKR integration is in development.)

In today's hyper-fast financial landscape, every microsecond counts. `bbstrader` isn't another lightweight wrapper – it's an institutional-grade powerhouse engineered to tackle real-world trading challenges head-on.
## **Trusted by Traders Worldwide**
With thousands of downloads, `bbstrader` is trusted by traders worldwide. It's not just code – it's your ticket to profitable, scalable strategies.
## **The `bbstrader` Edge: Uniting C++ Speed with Python Flexibility**
`bbstrader`'s hybrid design is its secret weapon. At its heart is a bidirectional C++/Python bridge provided by the `client` module:
1. **C++ for Speed**: Core classes like `MetaTraderClient` handle high-performance tasks. Inject Python handlers for MT5 interactions, enabling native-speed signal generation and risk checks.
2. **Python for Smarts**: Orchestrate everything with modules like `trading` and `btengine`.
3. **The Data Flow:** The result is a clean, efficient, and powerful execution loop:
`Python (Orchestration & Analysis) -> C++ (High-Speed Signal Generation) -> Python (MT5 Communication) -> C++ (Receives Market Data)`
This setup crushes performance ceilings: Run ML models in Python, execute trades in C++, and backtest millions of bars in minutes.
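To make the loop concrete, here is a toy, pure-Python sketch of the orchestration pattern — the fast signal function stands in for compiled C++, and none of these names are part of the `bbstrader` API:

```python
def fast_signal(prices):
    """Stand-in for the compiled C++ hot path: cheap SMA-crossover math."""
    sma = sum(prices) / len(prices)
    return "BUY" if prices[-1] > sma else "HOLD"

def orchestrate(ticks, window=5):
    """Python side: collect data, delegate the hot path, gather decisions."""
    history, actions = [], []
    for price in ticks:
        history.append(price)
        if len(history) >= window:
            actions.append(fast_signal(history[-window:]))
    return actions

print(orchestrate([1.0, 1.0, 1.0, 1.0, 1.0, 2.0]))  # ['HOLD', 'BUY']
```

In the real library, `fast_signal` would live in C++ behind the `client` bridge, while the loop, data wrangling, and any ML models stay in Python.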
### **Overcoming the MQL5 Bottleneck**
MetaTrader 5 is a world-class trading platform, but its native MQL5 language presents significant limitations for complex, high-frequency strategies:
- **Performance Ceilings:** As an interpreted language, MQL5 struggles with the computationally intensive logic required for advanced statistical models, machine learning, and rapid-fire order execution.
- **Ecosystem Constraints:** MQL5 lacks access to the vast, mature ecosystems of libraries for numerical computation, data science, and AI that C++ and Python offer.
- **Architectural Rigidity:** Implementing sophisticated, multi-threaded, or event-driven architectures in MQL5 is often a complex and error-prone endeavor.
`bbstrader` eradicates these barriers. By moving your core strategy logic to C++, you can unlock the full potential of your trading ideas, executing them with the microsecond-level precision demanded by institutional trading.
## **Key Modules**
`bbstrader` is modular, with each component laser-focused on a single concern.
### 1. **btengine**: Event-Driven Backtesting Beast
- **Purpose**: Simulate strategies with historical data, including slippage, commissions, and multi-asset portfolios. Optimizes parameters and computes metrics like Sharpe Ratio, Drawdown, and CAGR.
- **Features**: Event queue for ticks/orders, vectorized operations for speed, integration with models for signal generation.
- **Example**: Backtest the `StockIndexSTBOTrading` strategy from the bundled example strategies.
```python
# Inside the examples/ directory
from strategies import test_strategy

if __name__ == '__main__':
    # Run backtesting for Stock Index Short Term Buy Only Strategy
    test_strategy(strategy='sistbo')
```
### Backtesting Results


### 2. **metatrader**: The C++/Python Bridge to MT5
- **Purpose**: High-speed MT5 integration. The C++ `MetaTraderClient` mirrors the MT5 API for orders, rates, and account management.
- **Features**: Bidirectional callbacks, error handling, real-time tick processing.
- **Strategy Patterns**: Two main patterns to build strategies:
#### Pattern 1: C++ Core, Python Orchestrator (Maximum Performance)
This is the recommended pattern for latency-sensitive strategies, such as statistical arbitrage, market making, or any strategy where execution speed is a critical component of your edge. By compiling your core logic, you minimize interpretation overhead and gain direct control over memory and execution.
**Use this pattern when:**
- Your strategy involves complex mathematical calculations that are slow in Python.
- You need to react to market data in the shortest possible time.
- Your production environment demands deterministic, low-latency performance.
**C++ Side (`MovingAverageStrategy.cpp`):**
```cpp
#include "bbstrader/metatrader.hpp"
#include <numeric>
#include <iostream>

class MovingAverageStrategy : public MT5::MetaTraderClient {
public:
    using MetaTraderClient::MetaTraderClient;

    void on_tick(const std::string& symbol) {
        auto rates_opt = copy_rates_from_pos(symbol, 1, 0, 20);
        if (!rates_opt || rates_opt->size() < 20) return;
        const auto& rates = *rates_opt;

        double sum = std::accumulate(rates.begin(), rates.end(), 0.0,
            [](double a, const MT5::RateInfo& b) { return a + b.close; });
        double sma = sum / rates.size();
        double current_price = rates.back().close;

        if (current_price > sma) {
            std::cout << "Price is above SMA. Sending Buy Order for " << symbol << '\n';
            MT5::TradeRequest request;
            request.action       = MT5::TradeAction::DEAL;
            request.symbol       = symbol;
            request.volume       = 0.1;
            request.type         = MT5::OrderType::BUY;
            request.type_filling = MT5::OrderFilling::FOK;
            request.type_time    = MT5::OrderTime::GTC;
            send_order(request);
        }
    }
};
```
_This C++ class would then be exposed to Python using `pybind11`._
```cpp
// Inside bindings.cpp
#include <pybind11/pybind11.h>
#include "MovingAverageStrategy.hpp"

namespace py = pybind11;

PYBIND11_MODULE(my_strategies, m) {
    py::class_<MovingAverageStrategy, MT5::MetaTraderClient>(m, "MovingAverageStrategy")
        .def(py::init<MT5::MetaTraderClient::Handlers>())
        .def("on_tick", &MovingAverageStrategy::on_tick);
}
```
**Python Side (`main.py`):**
```python
import time

import MetaTrader5 as mt5
from bbstrader.api import Mt5Handlers
from my_strategies import MovingAverageStrategy

# 1. Instantiate the C++ strategy, injecting the Python MT5 handlers
strategy = MovingAverageStrategy(Mt5Handlers)

# 2. Main execution loop
if strategy.initialize():
    while True:
        strategy.on_tick("EURUSD")
        time.sleep(1)
```
#### Pattern 2: Python-Driven with C++ Acceleration
This pattern is ideal for strategies that benefit from Python's rich ecosystem for data analysis, machine learning, or complex event orchestration, but still require high-performance access to market data and the trading API.
**Use this pattern when:**
- Your strategy relies heavily on Python libraries like `pandas`, `scikit-learn`, or `tensorflow`.
- Rapid prototyping and iteration are more important than absolute minimum latency.
- Your core logic is more about decision-making based on pre-processed data than it is about raw computation speed.
```python
import MetaTrader5 as mt5
from bbstrader.api import Mt5Handlers
from bbstrader.api.client import MetaTraderClient

# 1. Inherit from the C++ MetaTraderClient in Python
class MyStrategyClient(MetaTraderClient):
    def __init__(self, handlers):
        super().__init__(handlers)

# 2. Instantiate your client
strategy = MyStrategyClient(Mt5Handlers)

# 3. Interact with the MT5 terminal via the C++ bridge
if strategy.initialize():
    rates = strategy.copy_rates_from_pos("EURUSD", mt5.TIMEFRAME_M1, 0, 100)
    print(f"Retrieved {len(rates)} rates via the C++ bridge.")
```
### 3. **`trading`: Live Execution & Strategy Orchestrator**
- **Purpose**: Manages live sessions, coordinating signals from strategies, risk from `models`, and execution via `metatrader`.
- **Features**: Multi-account support, position hedging, trailing stops.
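As an illustration of the trailing-stop idea mentioned above (plain Python; this is not the `trading` module's actual API, whose interface is not shown here):

```python
def update_trailing_stop(price, stop, trail):
    """For a long position: raise the stop as price rises, never lower it."""
    return max(stop, price - trail)

stop = 0.0
for price in [100.0, 101.5, 103.0, 102.0]:
    stop = update_trailing_stop(price, stop, trail=2.0)
print(stop)  # the stop ratchets up to 101.0 and holds through the dip
```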
### 4. `models`: Quant Toolkit for Signals & Risk
- **Purpose**: Build/test models like NLP sentiment, VaR/CVaR risk, optimization.
- **Features**: Currently sentiment analysis and topic modeling.
- **Example**: Sentiment-Based Entry:
```python
from bbstrader.models import SentimentAnalyzer

model = SentimentAnalyzer()  # Loads pre-trained NLP
score = model.analyze_sentiment("Fed hikes rates – markets soar!")
if score > 0.7:  # Bullish? Buy!
    print("Go long!")
```
### **Other Modules:**
- `core`: Utilities (data structures, logging).
- `config`: Manages JSON configs in `~/.bbstrader/`.
- `api`: Handler injection for the C++/Python bridges.
## Getting Started
### Prerequisites
- **Python**: Python 3.12+ is required.
- **MetaTrader 5 (MT5)**: Required for live execution (Windows).
- **MT5 Broker**: [Admirals](https://one.justmarkets.link/a/tufvj0xugm/registration/trader), [JustMarkets](https://one.justmarkets.link/a/tufvj0xugm/registration/trader), [FTMO](https://trader.ftmo.com/?affiliates=JGmeuQqepAZLMcdOEQRp).
## Installation
`bbstrader` is designed for both Python and C++ developers. Follow the instructions that best suit your needs.
### For the Python Quant
Get started in minutes using `pip`. We strongly recommend using a virtual environment.
```bash
# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate   # on Linux/macOS
venv\Scripts\activate      # on Windows

# Install bbstrader
pip install "bbstrader[MT5]"   # Windows (with MT5 support)
pip install bbstrader          # Linux/macOS
```
### For the C++ Developer
To develop your own C++ strategies, you can use `vcpkg` to install the `bbstrader` library and its dependencies.
```bash
# If you don't have vcpkg, clone and bootstrap it
git clone https://github.com/microsoft/vcpkg
./vcpkg/bootstrap-vcpkg.sh    # Linux/macOS (use bootstrap-vcpkg.bat on Windows)
# Install bbstrader
./vcpkg/vcpkg install bbstrader
```
## CLI workflow
`bbstrader` shines via CLI – launch everything from one command!
| Action | Command |
| :----------------- | :-------------------------------------------------------------------------------------------------------------------- |
| **Run Backtest** | `python -m bbstrader --run backtest --strategy SMAStrategy --account MY_ACCOUNT --config backtest.json` |
| **Live Execution** | `python -m bbstrader --run execution --strategy KalmanFilter --account MY_ACCOUNT --config execution.json --parallel` |
| **Copy Trades** | `python -m bbstrader --run copier --source "S1" --destination "D1"` |
| **Get Help** | `python -m bbstrader --help` |
**Config Example** (`~/.bbstrader/execution/execution.json`):
```json
{
    "SMAStrategy": {
        "MY_MT5_ACCOUNT_1": {
            "symbol_list": ["EURUSD", "GBPUSD"],
            "trades_kwargs": { "magic": 12345, "comment": "SMA_Live" },
            "short_window": 20,
            "long_window": 50
        }
    }
}
```
## 🌍 Community & Support
- **[Read the Docs](https://bbstrader.readthedocs.io/en/latest/)**: Full API reference and tutorials.
- **[GitHub Issues](https://github.com/bbalouki/bbstrader/issues)**: Report bugs or request features.
- **[LinkedIn](https://www.linkedin.com/in/bertin-balouki-s-15b17a1a6)**: Connect with the creator.
---
### Professional Services
If you need a custom trading strategy, a proprietary risk model, advanced data pipelines, or a dedicated copy trading server setup, professional services are available.
**Contact the Developer:**
📧 [bertin@bbs-trading.com](mailto:bertin@bbs-trading.com)
---
### Support the Project
If you find this project useful and would like to support its continued development, you can contribute here:
☕ [Support the Developer](https://paypal.me/bertinbalouki?country.x=SN&locale.x=en_US)
---
_Disclaimer: Trading involves significant risk. `bbstrader` provides the tools, but you provide the strategy. Test thoroughly on demo accounts before deploying real capital._
| text/markdown | null | Bertin Balouki SIMYELI <bertin@bbs-trading.com> | null | Bertin Balouki SIMYELI <bertin@bbs-trading.com> | null | Finance, Toolkit, Financial, Analysis, Fundamental, Quantitative, Database, Equities, Currencies, Economics, ETFs, Funds, Indices, Moneymarkets, Commodities, Futures, CFDs, Derivatives, Trading, Investing, Portfolio, Optimization, Performance | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Office/Business :: Financial :: Investment",
"Programming Language :: Python :: 3.12",
"Programming Language :: C++",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Operating... | [] | null | null | >=3.12 | [] | [] | [] | [
"beautifulsoup4>=4.13.5",
"colorama>=0.4.6",
"eodhd>=1.0.32",
"exchange_calendars>=4.11.1",
"financetoolkit>=2.0.4",
"ipython>=9.5.0",
"nltk>=3.9.1",
"notify_py>=0.3.43",
"numpy>=2.2.6",
"praw>=7.8.1",
"pybind11>=3.0.1",
"pyfiglet>=1.0.4",
"pyportfolioopt>=1.5.6",
"python-dotenv>=1.1.1",
... | [] | [] | [] | [
"Homepage, https://github.com/bbalouki/bbstrader",
"Download, https://pypi.org/project/bbstrader/",
"Documentation, https://bbstrader.readthedocs.io/en/latest/",
"Source Code, https://github.com/bbalouki/bbstrader"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:12:08.858071 | bbstrader-2.0.7.tar.gz | 1,065,784 | f3/39/46e7f22e27065435d95043f888e0cb58d823d15ff5729e1667fff6791107/bbstrader-2.0.7.tar.gz | source | sdist | null | false | 7a9fdfd61b1ca94fc725696ec926d700 | 374a4c7c4f19e8f44f051e4c06511869f4b3ba7546f8fa50e35fbec4933f773a | f33946e7f22e27065435d95043f888e0cb58d823d15ff5729e1667fff6791107 | MIT | [] | 459 |
2.4 | mxcubeweb | 4.636.0 | MXCuBE Web user interface | [](https://github.com/mxcube/mxcubeweb/actions/workflows/build_and_test.yml)

<p align="center"><img src="https://mxcube.github.io/img/mxcube_logo20.png" width="125"/></p>
# MXCuBE-Web
MXCuBE-Web is the latest generation of the data acquisition software MXCuBE (Macromolecular Xtallography Customized Beamline Environment). The project started in 2005 at [ESRF](https://www.esrf.eu) and has since been adopted by other institutes in Europe. In 2010, a collaboration agreement was signed for the development of MXCuBE with the following partners:
- [ESRF](https://www.esrf.fr/)
- [Soleil](https://www.synchrotron-soleil.fr/)
- [MAX IV](https://www.maxiv.lu.se/)
- [HZB](https://www.helmholtz-berlin.de/)
- [EMBL](https://www.embl.org/)
- [Global Phasing Ltd.](https://www.globalphasing.com/)
- [ALBA](https://www.cells.es/)
- [DESY](https://www.desy.de/)
- [LNLS](https://lnls.cnpem.br/)
- [Elettra](https://www.elettra.eu/)
- [NSRRC](https://www.nsrrc.org.tw/)
- [ANSTO](https://www.ansto.gov.au/facilities/australian-synchrotron)
MXCuBE-Web is developed as a web application and runs in any recent browser.
The application is built using standard web technologies
and does not require any third-party plugins to be installed in order to function.
Being a web application, it is naturally divided into server and client parts.
Communication between the client and server uses HTTP/HTTPS and WebSockets.
Using HTTPS (SSL/TLS-encrypted HTTP) is strongly recommended.
The traffic passes through the conventional HTTP/HTTPS ports,
minimizing the need for special firewall or proxy settings to get the application to work.
<img align="center" src="https://mxcube3.esrf.fr/img/client-server.png" width=300>
The underlying beamline control layer
is implemented using the library [`mxcubecore`](https://github.com/mxcube/mxcubecore)
previously known as [`HardwareRepository`](https://github.com/mxcube/HardwareRepository).
The `mxcubecore` library is compatible with
both the MXCuBE-Web and the [MXCuBE-Qt](https://github.com/mxcube/mxcubeqt) applications.
| Data collection | Sample grid |
| :-------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------: |
|  |  |
Latest information about the MXCuBE project can be found on the
[MXCuBE project webpage](https://mxcube.github.io/mxcube/).
## Technologies in use
The backend is built on the Python [Flask](https://flask.palletsprojects.com/) web framework;
[Socket.IO](https://socket.io/) is additionally used to provide
a bidirectional communication channel between backend and client.
The backend also exposes a REST API to the client.
The UI is implemented in HTML, CSS, and JavaScript, mainly with the React, Redux, Bootstrap, and FabricJS libraries.
## Information for developers
- [Contributing guidelines](https://github.com/mxcube/mxcubeweb/blob/master/CONTRIBUTING.md)
- [Developer documentation](https://mxcubeweb.readthedocs.io/)
- [Development install instructions](https://mxcubeweb.readthedocs.io/en/latest/dev/environment.html#install-with-conda)
## Information for users
- [User Manual MXCuBE Web](https://www.esrf.fr/mxcube3)
- [Feature overview](https://github.com/mxcube/mxcubeqt/blob/master/docs/source/feature_overview.rst)
- If you cite MXCuBE, please use the references:
> Oscarsson, M. et al. 2019. “MXCuBE2: The Dawn of MXCuBE Collaboration.” Journal of Synchrotron Radiation 26 (Pt 2): 393–405.
>
> Gabadinho, J. et al. (2010). MxCuBE: a synchrotron beamline control environment customized for macromolecular crystallography experiments. J. Synchrotron Rad. 17, 700-707
| text/markdown | The MXCuBE collaboration | mxcube@esrf.fr | The MXCuBE collaboration | mxcube@esrf.fr | null | mxcube, mxcube3, mxcubeweb | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: End Users/Desktop",
"Natural Language :: English",
"Topic :: Scientific/Engineering"
] | [] | https://github.com/mxcube/mxcubeweb | null | <3.12,>=3.10 | [] | [] | [] | [
"graypy>=2.1.0; extra == \"graylog\"",
"authlib<2.0.0,>=1.6.6",
"Flask<4.0.0,>=3.0.3",
"flask-limiter<4.0,>=3.12",
"flask-login<0.7.0,>=0.6.3",
"Flask-Security[common,fsqla]==5.7.1",
"Flask-SocketIO<6.0.0,>=5.3.6",
"gevent<24.0.0,>=23.9.1",
"markupsafe<3.0.0,>=2.1.5",
"mxcubecore>=2.4.0",
"pillo... | [] | [] | [] | [
"Homepage, https://github.com/mxcube/mxcubeweb",
"Repository, https://github.com/mxcube/mxcubeweb",
"Documentation, https://mxcubeweb.readthedocs.io/"
] | poetry/2.2.1 CPython/3.10.19 Linux/6.14.0-1017-azure | 2026-02-18T14:11:56.485063 | mxcubeweb-4.636.0.tar.gz | 98,167 | 7b/4e/952372523a3c0360d01ffe994479c90663f7ec76c6da0167c5bfbf034aad/mxcubeweb-4.636.0.tar.gz | source | sdist | null | false | 7e8a1a82d7fa8719969d5c68c909b39b | 84b96e6445f69475bc49ac9a7006d034d07e3e3fd4dd902e7bfd0bd7c891095f | 7b4e952372523a3c0360d01ffe994479c90663f7ec76c6da0167c5bfbf034aad | null | [] | 251 |
2.1 | pgdbpool | 1.0.1 | A tiny database de-multiplexer primarily scoped for Web- / Application Server. | # :elephant: Python PgDatabase-Pool Module

[](https://badge.fury.io/py/pgdbpool)
[](https://pythondocs.webcodex.de/pgdbpool/v1.0.1)
## 1. Primary Scope
The **pgdbpool** Python module is a tiny **PostgreSQL Database Connection De-Multiplexer**.
**Key Features:**
- **Multi-endpoint support**: Load balance across multiple PostgreSQL servers
- **Flexible threading models**: Choose between threaded and non-threaded modes
- **Transaction control**: Manual commit support for complex transactions
- **High availability**: Built-in failover and connection management
## 2. Current Implementation
```text
+----------------------+ +---------------------
| Server Service.py | -- Handler Con #1 ----> | PostgreSQL
| Request / Thread #1 | | Backend #1
+----------------------+ |
|
+----------------------+ |
| Server Service.py | -- Handler Con #2 ----> | PostgreSQL
| Request / Thread #2 | | Backend #2
+----------------------+ +---------------------
```
### 2.1. Multiple Database Endpoints
The connection pool now supports **multiple PostgreSQL database endpoints** for load balancing and high availability:
- ✅ Configure multiple database hosts in the configuration
- ✅ Connections are automatically distributed across available endpoints
- ✅ Provides built-in load balancing for read operations
- ✅ Enhances fault tolerance and scalability
### 2.2. Concept / Simplicity
If configured in a Web Server's WSGI Python script, the pooling logic is straightforward:
1. Check if a free connection in the pool exists.
2. Verify if the connection is usable (SQL ping).
3. Use the connection and protect it from being accessed until the query/queries are completed.
4. Release the connection for reuse.
5. Reconnect to the endpoint if the connection is lost.
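Those five steps can be sketched roughly as follows. This is simplified illustrative pseudologic, not pgdbpool's actual internals:

```python
import threading

class TinyPool:
    """Minimal illustration of the check-out / ping / release cycle."""

    def __init__(self, connect, size=2):
        self._connect = connect
        self._free = [connect() for _ in range(size)]
        self._lock = threading.Lock()

    def acquire(self):
        with self._lock:                    # 1. look for a free connection
            conn = self._free.pop() if self._free else self._connect()
        if not conn.ping():                 # 2. verify with an SQL ping
            conn = self._connect()          # 5. reconnect if the link died
        return conn                         # 3. caller owns it exclusively

    def release(self, conn):
        with self._lock:                    # 4. hand it back for reuse
            self._free.append(conn)

class FakeConn:
    """Stand-in for a real psycopg2 connection."""
    def ping(self):
        return True

pool = TinyPool(FakeConn)
conn = pool.acquire()
pool.release(conn)
```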
## 3. Thread Safety / Global Interpreter Lock
### 3.1. Threading Model Configuration
The pool now supports **two threading models** that can be configured based on your application's architecture:
- **`threaded`** (default): Uses `threading.Lock()` for thread safety, suitable for traditional multi-threaded web servers
- **`non-threaded`**: Disables locking for single-threaded applications, eliminating GIL overhead
### 3.2. Threaded Mode
Thread safety is ensured via `lock = threading.Lock()`, which relies on a kernel mutex `syscall()`.
While this concept works, the GIL (Global Interpreter Lock) in Python thwarts scalability under heavy loads in a threaded Web Server setup.
### 3.3. Non-Threaded Mode
For applications using a single-threaded, process-per-request model (like the FalconAS Python Application Server), the non-threaded mode provides:
- **No locking overhead** - eliminates mutex syscalls
- **Better performance** - avoids GIL contention
- **Simpler architecture** - designed for 1 Process == 1 Python Interpreter
>[!IMPORTANT]
> Refer to Section **6: Future** for more details on threading-less architectures.
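The difference between the two modes can be pictured as swapping the mutex for a no-op context manager — a sketch only, not the module's real configuration handling:

```python
import contextlib
import threading

def make_guard(mode):
    """'threaded' -> real kernel-backed mutex; anything else -> zero-cost no-op."""
    if mode == "threaded":
        return threading.Lock()
    return contextlib.nullcontext()

# Pool code can use the guard uniformly in both modes:
guard = make_guard("non-threaded")
with guard:  # no mutex syscall, no GIL contention on this path
    pass
```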
## 4. Dependencies / Installation
**Python 3** and the **psycopg2** module are required.
```bash
# install (debian)
apt-get install python3-psycopg2
pip install pgdbpool
```
## 5. Documentation / Examples
See documentation either at `./doc` or [https://pythondocs.webcodex.de/pgdbpool/v1.0.1](https://pythondocs.webcodex.de/pgdbpool/v1.0.1)
for detailed explanation / illustrative examples.
### 5.1. Multiple Database Configuration
```python
config = {
    'db': [
        {
            'host': 'postgres-server-1.example.com',
            'name': 'mydb',
            'user': 'dbuser',
            'pass': 'dbpass'
        },
        {
            'host': 'postgres-server-2.example.com',
            'name': 'mydb',
            'user': 'dbuser',
            'pass': 'dbpass'
        }
    ],
    'groups': {
        'default': {
            'connection_count': 20,
            'autocommit': True
        }
    }
}
```
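How connections might be spread across the configured endpoints can be pictured with a simple round-robin (illustrative only — the hostnames are the placeholders from the config above, and the module's real distribution strategy may differ):

```python
import itertools

hosts = ["postgres-server-1.example.com", "postgres-server-2.example.com"]
next_host = itertools.cycle(hosts).__next__

# Four new connections alternate between the two endpoints:
assignments = [next_host() for _ in range(4)]
print(assignments)
```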
### 5.2. Threading Model Configuration
```python
# for non-threaded applications (e.g., FalconAS)
config = {
    'type': 'non-threaded',
    'db': { ... },
    'groups': { ... }
}

# for traditional threaded applications (default)
config = {
    'type': 'threaded',  # or omit for default
    'db': { ... },
    'groups': { ... }
}
```
### 5.3. Manual Transaction Control
```python
import pgdbpool as dbpool

dbpool.Connection.init(config)

# for autocommit=False connections
with dbpool.Handler('group1') as db:
    db.query('INSERT INTO table1 VALUES (%s)', ('value1',))
    db.query('INSERT INTO table2 VALUES (%s)', ('value2',))
    db.commit()  # Manual commit
```
## 6. Future
### 6.1. FalconAS Compatibility
The DB-pooling functionality is now compatible with the FalconAS
Python Application Server (https://github.com/WEBcodeX1/http-1.2).
The implemented model: **1 Process == 1 Python Interpreter (threading-less)**,
effectively solving the GIL issue through the `non-threaded` configuration mode.
### 6.2. Load Balancing
The pool now supports multiple (read-load-balanced) PostgreSQL endpoints:
- ✅ **Implemented**: Multiple database endpoint configuration
- ✅ **Implemented**: Automatic connection distribution across endpoints
- ✅ **Implemented**: Built-in load balancing for database connections
- ✅ **Implemented**: Read / write / endpoint group separation
[](https://github.com/PyCQA/pylint)
| text/markdown | Claus Prüfer | Claus Prüfer <pruefer@webcodex.de> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/clauspruefer/python-dbpool",
"Issues, https://github.com/clauspruefer/python-dbpool/issues",
"Documentation, https://pythondocs.webcodex.de/pgdbpool",
"Changelog, https://github.com/clauspruefer/python-dbpool/CHANGELOG.md"
] | twine/6.2.0 CPython/3.11.2 | 2026-02-18T14:11:53.238709 | pgdbpool-1.0.1.tar.gz | 19,143 | 27/7a/2f7b36edf60708b45e7ef670107b6d66ecc53393a2b907709c90d2922e62/pgdbpool-1.0.1.tar.gz | source | sdist | null | false | 53e1f6f8bcee7b81e2e00ffc1c1dc9e7 | 1a9a2803729799ddfdec32e5d76572d9e214e6185a732f518fd39e9d80cd7855 | 277a2f7b36edf60708b45e7ef670107b6d66ecc53393a2b907709c90d2922e62 | null | [] | 155 |
2.4 | tropass-sdk | 0.1.7 | Tropass SDK helps you develop and manage your models for Tropass platform | <p align="center">
<img src="https://raw.githubusercontent.com/tropass-ai/tropass-sdk/main/logo.svg" width="350">
</p>
<br>
<p align="center">
<a href="https://codecov.io/gh/tropass-ai/tropass-sdk" target="_blank"><img src="https://codecov.io/gh/tropass-ai/tropass-sdk/branch/main/graph/badge.svg"></a>
<a href="https://pypi.org/project/tropass-sdk/" target="_blank"><img src="https://img.shields.io/pypi/pyversions/tropass-sdk"></a>
<a href="https://pypi.org/project/tropass-sdk/" target="_blank"><img src="https://img.shields.io/pypi/v/tropass-sdk"></a>
<a href="https://pypistats.org/packages/tropass-sdk" target="_blank"><img src="https://img.shields.io/pypi/dm/tropass-sdk"></a>
</p>
**tropass-sdk** is a tool for developing and managing ML models on the Tropass platform.
---
## 📦 Installation
```bash
# With pip
pip install tropass-sdk[server]
# With uv
uv add tropass-sdk[server]
# With poetry
poetry add tropass-sdk[server]
```
---
## 🛠 Preparing the Application
To initialize the server, it is enough to pass a prediction function to the `ModelServer` class.
### Key requirements:
* The prediction function must accept the model request schema `MLModelRequestSchema`.
* The prediction function must return the model response schema `MLModelResponseSchema`.
* The `ModelServer` instance must live in a `main.py` file at the project root.
```python
from tropass_sdk.server import ModelServer
from tropass_sdk.schemas.model_contract_schema import MLModelRequestSchema, MLModelResponseSchema

def predict_handler(data: MLModelRequestSchema) -> MLModelResponseSchema:
    # Model inference logic
    return MLModelResponseSchema(panel_items=[])

server = ModelServer(
    model_func=predict_handler,
    model_name="my_model",
    model_description="Production model description",
    model_version="1.0.0",
    debug=False,
)
```
---
## ⚡ Running the Server
To run the server, create an application instance with the `build_application` method. Important: the instance must be named `application`:
```python
application = server.build_application()
```
Run it with: `uvicorn main:application --host 0.0.0.0 --port 8000 --workers 4`
---
## 🔍 Monitoring and Observability
Thanks to the integration with [microbootstrap](https://github.com/community-of-python/microbootstrap), the service supports out of the box:
* **Metrics:** available at the `/metrics` endpoint in Prometheus format.
* **Health checks:** service health is available at `/health`.
* **Logging:** structured logs, ready for collection into ELK/Loki.
---
| text/markdown | null | null | null | null | null | python, ai, tropass, models, ai-model, ai-sdk, python-sdk, llm | [
"Typing :: Typed",
"Topic :: Software Development :: Build Tools",
"Operating System :: MacOS",
"Operating System :: Microsoft",
"Operating System :: POSIX :: Linux",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: ... | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"pydantic; extra == \"schemas\"",
"microbootstrap[fastapi]; extra == \"server\"",
"uvicorn; extra == \"server\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T14:11:46.187027 | tropass_sdk-0.1.7-py3-none-any.whl | 9,292 | 1c/3d/14c0ee20077bef74a66534e42483493c9a2aaf48443e2ec38ad28f3b9db7/tropass_sdk-0.1.7-py3-none-any.whl | py3 | bdist_wheel | null | false | 4e159091b8cf297221c1cd5c405debbe | 8f16f347f7a923a77ba8b2dc7227f29321c7141861bc6ec898c1401d3059c639 | 1c3d14c0ee20077bef74a66534e42483493c9a2aaf48443e2ec38ad28f3b9db7 | null | [
"LICENSE"
] | 247 |
2.4 | adi-doctools | 0.4.37 | ADI's sphinx extensions and theme | # Analog Devices Doctools
[Analog Devices Inc.](http://www.analog.com/en/index.html)
central repository to host tooling for automated documentation builds.
It includes Sphinx extensions, themes, and tools for multiple repositories in
the organization.
All tools, directives and roles are documented in this repository documentation.
Guaranteed to work with Python 3.8 or newer and with distros released on or after 20H1
(e.g. Ubuntu 20.04 LTS).
## Release install
Ensure pip is newer than 23.0 [1]:
```
pip install pip --upgrade
```
Install the documentation tools, which will fetch this repository release:
```
(cd docs ; pip install -r requirements.txt)
```
Build the documentation with Sphinx:
```
(cd docs ; make html)
```
The generated documentation will be available at `docs/_build/html` and it
provides information about the `adoc` command line tool and general documentation
guidelines.
In summary, the `serve` command live-reloads the documentation while you edit
the docs, and `aggregate` generates an aggregated documentation set from the multiple
repositories.
[1] There is a [known bug](https://github.com/pypa/setuptools/issues/3269)
with pip shipped with Ubuntu 22.04
### Using a Python virtual environment
Installing packages at the user level through pip is not always recommended; instead,
consider using a Python virtual environment (``python3-venv`` on Ubuntu 22.04).
To create and activate the environment, run this before the previous instructions:
```
python3 -m venv ./venv
source ./venv/bin/activate
```
Use ``deactivate`` to exit the virtual environment.
For subsequent builds, just activate the virtual environment:
```
source ./venv/bin/activate
```
## Development install
Development mode allows editing the source code and applying the changes without
reinstalling.
It also extends Author Mode to watch for changes in the webpage source code
(use the `--dev`/`-r` option to enable this).
### Install the web compiler
If you care about the web scripts (`js modules`) and style sheets (`sass`),
install `npm` first; if not, just skip this section.
> **_NOTE:_** If the ``npm`` provided by your package manager is too old and
> updating with `npm install npm -g` fails, consider installing with
> [NodeSource](https://github.com/nodesource/distributions).
At the repository root, install the `npm` dependencies locally:
```
npm install rollup \
@rollup/plugin-terser \
sass \
--save-dev
```
### Fetch third-party resources
Fetch third-party fonts:
```
./ci/fetch-fonts.sh
```
### Install the repository
Finally, do a symbolic install of this repo:
```
pip install -e . --upgrade
```
## Removing
To remove, either release or development, do:
```
pip uninstall adi-doctools
```
| text/markdown | null | Jorge Marques <jorge.marques@analog.com> | null | null | null | null | [
"Development Status :: 1 - Planning",
"Framework :: Sphinx",
"Framework :: Sphinx :: Theme",
"Framework :: Sphinx :: Extension",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Lang... | [] | null | null | >=3.8 | [] | [] | [] | [
"docutils",
"sphinx",
"lxml",
"pygments>=2.7",
"PyYAML",
"weasyprint; extra == \"cli\"",
"mcp>=1.0; extra == \"mcp\"",
"pytest; extra == \"test\"",
"ruff; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-18T14:11:29.064824 | adi_doctools-0.4.37.tar.gz | 646,142 | b7/c6/b4e383f7f1865d243c492fe1c3068290d94116fce3be1fdf4c13d6a63875/adi_doctools-0.4.37.tar.gz | source | sdist | null | false | 461ad5c1a7a5e159e34cd0b2e03f0dde | 06f3cd94b4e01e1e9d0444d6f5a0b208502a8ec3d75e457ec6a0a62e5c70b827 | b7c6b4e383f7f1865d243c492fe1c3068290d94116fce3be1fdf4c13d6a63875 | null | [
"LICENSE"
] | 391 |
2.1 | pelican-theme-reflex | 3.1.0 | A minimalist Pelican theme, forked from Flex | # Reflex
A minimalist [Pelican](https://getpelican.com/) theme, forked from [Flex](https://github.com/alexandrevicenzi/Flex).
Check out the [live example site](https://haplo.github.io/pelican-theme-reflex/).
Its source code is [in the example directory](https://github.com/haplo/pelican-theme-reflex/tree/main/example).
## Differences with Flex
- [In-repo documentation](https://github.com/haplo/pelican-theme-reflex/tree/main/docs) instead of Github wiki.
- Shynet tracking support.
- [Table of contents](https://github.com/haplo/pelican-theme-reflex/blob/main/docs/toc.md) styling.
- [Figures and captions](https://github.com/haplo/pelican-theme-reflex/blob/main/docs/figures.md) support.
- X social icon.
## Features
- Mobile First
- Responsive
- Semantic
- SEO best practices
- Open Graph
- Rich Snippets (JSON-LD)
- Related Posts (via [plugin](https://github.com/getpelican/pelican-plugins/tree/master/related_posts) or [AddThis](https://en.wikipedia.org/wiki/AddThis) (*defunct*))
- Series (via [plugin](https://github.com/pelican-plugins/series))
- Minute read (via [plugin](https://github.com/getpelican/pelican-plugins/tree/master/post_stats))
- [Multiple code highlight styles](https://github.com/haplo/pelican-theme-reflex/blob/main/docs/code_highlight.md)
- [Translation support](https://github.com/haplo/pelican-theme-reflex/blob/main/docs/translation.md)
- [Dark mode](https://github.com/haplo/pelican-theme-reflex/blob/main/docs/dark_theme.md)
## Integrations
- [AddThis](http://www.addthis.com/)
- [Disqus](https://disqus.com/)
- [Gauges Analytics](http://get.gaug.es/)
- [Google AdSense](https://www.google.com.br/adsense/start/)
- [Google Analytics](https://www.google.com/analytics/web/)
- [Google Tag Manager](https://www.google.com/tagmanager/)
- [Matomo Analytics (formerly Piwik)](https://matomo.org/)
- [StatusCake](https://www.statuscake.com/)
- [Isso](https://posativ.org/isso/)
- [Microsoft Clarity](https://clarity.microsoft.com)
- [Shynet](https://github.com/milesmcc/shynet)
## Plugins Support
- [Github Corners](https://github.com/tholman/github-corners)
- [I18N Sub-sites](https://github.com/getpelican/pelican-plugins/tree/master/i18n_subsites)
- [Minute read](https://github.com/getpelican/pelican-plugins/tree/master/post_stats)
- [Related Posts](https://github.com/getpelican/pelican-plugins/tree/master/related_posts)
- [Series](https://github.com/pelican-plugins/series)
- [Representative image](https://github.com/getpelican/pelican-plugins/tree/master/representative_image)
- [Neighbors](https://github.com/getpelican/pelican-plugins/tree/master/neighbors)
- [Pelican Search](https://github.com/pelican-plugins/search)
- [Tipue Search](https://github.com/getpelican/pelican-plugins/blob/master/tipue_search/) (deprecated)
- [SEO](https://github.com/pelican-plugins/seo)
## Install
The theme can be installed from PyPI:
```
pip install pelican-theme-reflex
```
Then in your *pelicanconf.py*:
```python
from pelican.themes import reflex
THEME = reflex.path()
```
Alternatively, clone this repository.
The `main` branch should be stable and safe to check out.
Then point the `THEME` setting in your Pelican project to its path.
## Settings
Look at [settings documentation](https://github.com/haplo/pelican-theme-reflex/blob/main/docs/settings.md) for details.
## Sites using Reflex
- [https://blog.fidelramos.net/](https://blog.fidelramos.net/) ([source code](https://github.com/haplo/blog.fidelramos.net))
If you have a site using Reflex, feel free to open a PR to add it here.
## Contributing
Always open an issue before sending a PR.
Discuss the problem/feature that you want to code.
After discussing, send a PR with your changes.
See the [development documentation](https://github.com/haplo/pelican-theme-reflex/blob/main/docs/developing.md).
Thank you to all contributors!
- [Loïc Penaud](https://github.com/lpenaud)
## License
MIT ([full license](https://github.com/haplo/pelican-theme-reflex/blob/main/LICENSE))
| text/markdown | Fidel Ramos | null | null | null | MIT | pelican, theme, static site | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: Pelican",
"Framework :: Pelican :: Themes",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Topi... | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/haplo/pelican-theme-reflex",
"Repository, https://github.com/haplo/pelican-theme-reflex",
"Issues, https://github.com/haplo/pelican-theme-reflex/issues",
"Changelog, https://github.com/haplo/pelican-theme-reflex/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:11:13.726447 | pelican_theme_reflex-3.1.0.tar.gz | 1,552,482 | e4/7c/c4abedad54a50441cdfaf5917f319f4c0174d7709447a83f2c9ccb60b377/pelican_theme_reflex-3.1.0.tar.gz | source | sdist | null | false | 7cbeb46ccef2d93e1c6336ebef04fdcc | eee5e433d789732e7ecd106c31cb2b0e0c277dfac27fe486076edc46ad65f572 | e47cc4abedad54a50441cdfaf5917f319f4c0174d7709447a83f2c9ccb60b377 | null | [] | 229 |
2.4 | nonebot-plugin-tieba-monitor | 0.1.8 | NoneBot2 plugin for monitoring Tieba forums | # nonebot-plugin-tieba-monitor
A Tieba thread-monitoring plugin based on [NoneBot2](https://nonebot.dev/): it watches the configured Tieba forums for new threads and pushes them to QQ groups.
## Features
- Monitors multiple Tieba forums for new threads
- Customizable check interval
- Notifications to multiple groups
- Runs one check automatically on first startup, then follows the scheduled task
- Caches thread data in a local directory so the latest records can be read manually
- Provides SUPERUSER-triggered `刷新帖子` (refresh threads) and `检查贴吧` (check forums) commands
- Optional AI content analysis and filtering
## Installation
### With nb-cli
```bash
nb plugin install nonebot-plugin-tieba-monitor
```
## Usage
1. In your NoneBot2 project, make sure the OneBot adapter is configured
2. Add the plugin configuration to your `.env` file
3. Start NoneBot2; the plugin runs automatically and monitors Tieba at the configured interval
## Bot commands
| Command | Permission | Description |
|------|------|------|
| `刷新帖子 <forum name>` | SUPERUSER | Immediately reads the locally cached latest threads of the given forum and pushes them to the current group; the result is marked as a "manual refresh". |
| `检查贴吧` | SUPERUSER | Immediately polls all configured forums, triggering fetching, AI filtering, and group delivery, and reports success/failure statistics. |
> By default these commands require the `to_me` rule (@ the bot); this can be adjusted via NoneBot configuration.
## How it works
1. **Scheduled fetching**: relies on `nonebot_plugin_apscheduler` to check all configured forums every `tieba_check_interval_seconds` seconds.
2. **Check on startup**: the bot runs an initial check 5 seconds after startup, so the latest threads arrive even on a cold start.
3. **Data persistence**: all threads are cached as JSON files under `tieba_output_directory`; the `刷新帖子` command reads these files directly.
4. **AI review**: with `tieba_ai_enabled` on, an OpenAI-compatible API is called to detect sensitive content and ads; a post is filtered out if any field listed in `tieba_ai_filter_keys` is flagged.
5. **Multi-group delivery**: each forum can be bound to multiple group IDs; new threads are sent to each group in turn, with a random delay to reduce risk-control triggers.
6. **Manual control**: the `检查贴吧` command runs the full fetch pipeline synchronously and returns success/failure statistics for troubleshooting.
## Configuration
You can set the following parameters in your `.env` file:
### Basic settings
| Option | Type | Default | Description |
|-------|------|--------|------|
| `tieba_check_interval_seconds` | int | 300 | Interval between checks for new threads (seconds) |
| `tieba_output_directory` | str | "data/tieba_data" | Directory where thread data is saved |
| `tieba_threads_to_retrieve` | int | 5 | Number of latest threads fetched per check |
| `tieba_forum_groups` | Dict[str, List[int]] | {} | Notification groups per forum; must be a valid JSON/dict string, e.g. `{"forum name": [123456789, 987654321]}` |
> When writing complex structures in `.env`, prefer a single-line JSON value with escaped double quotes, or a Python dict string.
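As a concrete illustration, a single-line JSON value for `tieba_forum_groups` can be parsed with nothing beyond the standard library; the forum names and group IDs below are placeholders:

```python
import json

# Hypothetical raw value as it would appear in .env (single-line JSON)
raw = '{"王者荣耀": [123456789, 987654321], "英雄联盟": [123456789]}'

forum_groups = json.loads(raw)  # -> dict mapping forum name to list of group IDs
for forum, groups in forum_groups.items():
    print(forum, groups)
```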
### AI analysis settings (optional)
| Option | Type | Default | Description |
|-------|------|--------|------|
| `tieba_ai_enabled` | bool | false | Whether to enable AI analysis |
| `tieba_ai_apikey` | str | "" | AI API key |
| `tieba_ai_endpoint` | str | "https://api.openai.com/v1" | AI API endpoint |
| `tieba_ai_model` | str | "gpt-3.5-turbo" | AI model name |
| `tieba_ai_max_chars` | int | 100 | Maximum characters sent to the AI; 0 disables truncation |
| `tieba_ai_system_prompt` | str | (see code) | System prompt used for AI analysis; customizes the analysis behavior |
| `tieba_ai_filter_keys` | List[str] | ["是否包含敏感内容", "是否包含广告、营销信息"] | A post is withheld when any listed field comes back `true` |
> Boolean environment variables such as `TIEBA_AI_ENABLED` are case-insensitive; if you customize `tieba_ai_system_prompt`, make sure its output fields match those in `tieba_ai_filter_keys`.
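To illustrate the filtering rule (a post is dropped when any configured key comes back `true` in the AI's JSON verdict), here is a minimal, library-free sketch; the verdict dicts are made up:

```python
# Default filter keys from the plugin's configuration
FILTER_KEYS = ["是否包含敏感内容", "是否包含广告、营销信息"]

def should_send(verdict: dict, filter_keys=FILTER_KEYS) -> bool:
    """Return False if any configured filter key is flagged true by the AI."""
    return not any(verdict.get(key) is True for key in filter_keys)

print(should_send({"是否包含敏感内容": False, "是否包含广告、营销信息": False}))  # True
print(should_send({"是否包含敏感内容": True}))                                    # False
```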
## Example configuration
```dotenv
# Basic settings
tieba_check_interval_seconds=300
tieba_output_directory=data/tieba_data
tieba_threads_to_retrieve=10
# Forum monitoring - format: {'forum name': [group1, group2]}
# e.g. monitor the "王者荣耀" forum and send to groups 123456789 and 987654321
tieba_forum_groups={"王者荣耀": [123456789, 987654321], "英雄联盟": [123456789]}
# AI analysis (optional)
tieba_ai_enabled=false
tieba_ai_apikey=your_api_key
tieba_ai_endpoint=https://api.openai.com/v1
tieba_ai_model=gpt-3.5-turbo
tieba_ai_max_chars=500
tieba_ai_filter_keys=["是否包含敏感内容", "是否包含广告、营销信息"]
tieba_ai_system_prompt="""You are a post-content analysis assistant. Analyze the following content and return the result in JSON format..."""
```
## Notes
- Make sure the `tieba_forum_groups` value in `.env` is formatted correctly
- Provide a valid API key if AI analysis is enabled
- On first startup, the plugin immediately performs one Tieba check
- If you customize the AI prompt, keep the response format consistent with the default prompt
- Manual commands are available to SUPERUSER only and, by default, require @-ing the bot
- Thread cache files are stored under the directory set by `tieba_output_directory`; the `刷新帖子` command depends on these files
## More information
- This plugin uses the `aiotieba` library to fetch Tieba content
- Scheduled tasks are managed with `nonebot-plugin-apscheduler`
- Thread data is saved to the configured output directory
## License
This project is released under the [GNU AGPLv3](https://choosealicense.com/licenses/agpl-3.0/) license.
| text/markdown | null | su-liu-guang <wuhui0404@gmail.com> | null | null | GNU AGPLv3 | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Operating System :: OS Independent",
"Framework :: Robot Framework :: Library"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"nonebot2>=2.3.0",
"nonebot-adapter-onebot>=2.2.3",
"nonebot-plugin-apscheduler>=0.3.0",
"aiotieba>=4.5.3",
"openai>=1.0.0",
"nonebot-plugin-localstore>=0.7.2"
] | [] | [] | [] | [
"Homepage, https://github.com/su-liu-guang/nonebot-plugin-tieba-monitor",
"Bug Tracker, https://github.com/su-liu-guang/nonebot-plugin-tieba-monitor/issues"
] | twine/6.2.0 CPython/3.12.11 | 2026-02-18T14:10:24.943265 | nonebot_plugin_tieba_monitor-0.1.8.tar.gz | 26,255 | 97/21/7f5b3f524c81089008cbcd03351e8b6909ac8fff926a255cb5179504e01a/nonebot_plugin_tieba_monitor-0.1.8.tar.gz | source | sdist | null | false | b6df84c1ffa11475aea068c282b88eb3 | 6d8531732e28d18afc2c640cb7885c7bd861e9dea4f5dd4474558ccc3b6e440f | 97217f5b3f524c81089008cbcd03351e8b6909ac8fff926a255cb5179504e01a | null | [
"LICENSE"
] | 238 |
2.4 | n1luxjwt | 2.1.4 | n1luxjwt | A package to generate a JWT token from a Free Fire access token
| text/markdown | NR CODEX | nilaysingh.official@gmail.com | null | null | null | python, free fire, access token, jwt token, nr codex | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Operating System :: Unix",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows"
] | [] | null | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T14:09:59.830787 | n1luxjwt-2.1.4.tar.gz | 11,105 | af/c9/96c6ac9a71bcd572c7e77ae301b49270440633f4124980c47f9bafb6c7be/n1luxjwt-2.1.4.tar.gz | source | sdist | null | false | 2725848ade59feb4ccbecbf424addeb6 | a0c32c44cc9cf592e54b7d54b2ef6f3fb8f0e4eb281db56c752317fba0cbf4f3 | afc996c6ac9a71bcd572c7e77ae301b49270440633f4124980c47f9bafb6c7be | null | [
"LICENSE"
] | 265 |
2.4 | ieum | 0.0.3 | IEUM (이음) - Integrated Execution & Unified Mediation | # IEUM
IEUM is an open-source coding agent you can use in three places:
- terminal (`TUI`)
- browser (`WEB`)
- VS Code (`Extension`)
It shares one agent core across all three, so your workflow stays consistent.
---
## At A Glance
| Platform | Recommended Use | Runtime |
|---|---|---|
| TUI | Local development on your machine | Local |
| WEB | Centralized team server | Remote |
| VS Code | Both local and remote workflows | Local / Remote |
---
## What You Get
- One agent core across TUI, WEB, and VS Code
- Tool use with approval flow (HITL)
- Session history and resume support
- Multi-provider LLM support (OpenAI, Anthropic, Ollama, OpenRouter, and more)
- Skill system (built-in + user/project skills)
- MCP integration (including GitHub MCP)
- English and Korean UI
---
## Quick Start
### 1) Install
```bash
pip install ieum
```
Or with `uv`:
```bash
uv pip install ieum
```
### 2) Run TUI (local)
```bash
ieum
```
### 3) Run WEB
```bash
ieum --web --host 127.0.0.1 --port 8765
```
Open `http://127.0.0.1:8765` and log in with the token printed in the server console.
---
## Installation
### Prerequisites
- Python 3.11+
- `ripgrep` recommended for fast code search
### From Source
```bash
git clone https://github.com/DDOK-AI/ieum.git
cd ieum
pip install -e .
```
Install WEB dependencies:
```bash
pip install -e ".[server]"
```
Install test/dev dependencies:
```bash
uv sync --group test
```
---
## Configuration
### Core Environment Variables
| Variable | Default | Purpose |
|---|---|---|
| `IEUM_LANG` | `en` | UI language (`en` / `ko`) |
| `IEUM_AUTH_DISABLED` | `false` | Disable auth (local dev only) |
| `IEUM_AUTH_TOKEN_TTL_SECONDS` | `3600` | Auth token TTL (min 60s) |
| `IEUM_CORS_ORIGINS` | localhost-only list | Allowed browser origins |
| `IEUM_REMOTE_ONLY` | `false` | Block loopback-only bind in remote deployments |
| `IEUM_TLS_REQUIRED` | `false` | Require HTTPS/WSS |
| `IEUM_TRUST_PROXY_TLS` | `false` | Trust `X-Forwarded-Proto=https` from proxy |
| `IEUM_SSL_CERTFILE` | unset | TLS cert path (direct TLS termination) |
| `IEUM_SSL_KEYFILE` | unset | TLS key path (direct TLS termination) |
| `IEUM_RATE_LIMIT_ENABLED` | `true` | Enable per-IP rate limiting |
| `IEUM_RATE_LIMIT_MAX_REQUESTS` | `120` | Max requests per window |
| `IEUM_RATE_LIMIT_WINDOW_SECONDS` | `60` | Rate limit window |
| `IEUM_SANDBOX_ROOT` | unset | Root for `<user>/<workspace>` sandboxing |
| `IEUM_REQUIRE_USER_ID` | `false` | Require user identity header/query |
| `IEUM_SAFE_MODE` | `false` | Extra command restrictions |
### LLM Variables (Common)
| Provider | Variables |
|---|---|
| OpenAI | `OPENAI_API_KEY`, `OPENAI_MODEL` |
| Anthropic | `ANTHROPIC_API_KEY`, `ANTHROPIC_MODEL` |
| Ollama | `OLLAMA_BASE_URL`, `OLLAMA_MODEL` |
| OpenRouter | `OPENROUTER_API_KEY`, `OPENROUTER_MODEL` |
| Google | `GOOGLE_API_KEY` |
### `.env` Priority
1. Project `.env` (current directory)
2. User `.env` (`~/.ieum/.env`)
3. System environment
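The priority order above can be sketched as a layered dictionary lookup; the file parsing is omitted and the variable values are illustrative only:

```python
def resolve(key, project_env, user_env, system_env):
    """Return the first value found, honoring project > user > system priority."""
    for layer in (project_env, user_env, system_env):
        if key in layer:
            return layer[key]
    return None

project = {"IEUM_LANG": "ko"}
user = {"IEUM_LANG": "en", "OPENAI_MODEL": "gpt-4o"}
system = {"OPENAI_MODEL": "gpt-3.5-turbo"}

print(resolve("IEUM_LANG", project, user, system))     # ko (project wins)
print(resolve("OPENAI_MODEL", project, user, system))  # gpt-4o (user beats system)
```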
### Config Directory
```text
~/.ieum/
.env
auth_token
<agent>/
AGENTS.md
skills/
```
---
## Usage
### TUI (Local)
```bash
# interactive
ieum
# use a specific model
ieum --model anthropic/claude-sonnet-4-5-20250929
# resume latest thread
ieum -r
# auto-approve tool calls
ieum --auto-approve
```
Useful commands in chat:
- `/help`
- `/env`
- `/model <provider/model>`
- `/skills`
- `/threads`
- `/tokens`
### WEB (Remote-first)
Local check:
```bash
ieum --web --host 127.0.0.1 --port 8765
```
Remote deployment baseline:
```bash
export IEUM_REMOTE_ONLY=true
export IEUM_TLS_REQUIRED=true
export IEUM_TRUST_PROXY_TLS=true
export IEUM_CORS_ORIGINS="https://ieum.example.com"
export IEUM_AUTH_TOKEN_TTL_SECONDS=3600
export IEUM_RATE_LIMIT_ENABLED=true
export IEUM_RATE_LIMIT_MAX_REQUESTS=120
export IEUM_RATE_LIMIT_WINDOW_SECONDS=60
export IEUM_SANDBOX_ROOT="/srv/ieum/sandboxes"
export IEUM_REQUIRE_USER_ID=true
ieum --web --host 0.0.0.0 --port 8765
```
Notes:
- Wildcard CORS (`*`) is rejected.
- In required-user mode, send one of:
- header `x-ieum-user-id`
- header `x-ieum-user`
- query `user_id`
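In required-user mode the identity can arrive via any of the three locations above; a minimal resolution sketch follows (the header/query dicts stand in for a real request, and the precedence order is an assumption based on the listing above):

```python
def resolve_user_id(headers: dict, query: dict):
    """Check x-ieum-user-id, then x-ieum-user headers, then the user_id query param."""
    return (
        headers.get("x-ieum-user-id")
        or headers.get("x-ieum-user")
        or query.get("user_id")
    )

print(resolve_user_id({"x-ieum-user": "alice"}, {}))  # alice
print(resolve_user_id({}, {"user_id": "bob"}))        # bob
print(resolve_user_id({}, {}))                        # None
```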
### VS Code Extension
VS Code supports local, remote, and auto modes.
Key settings:
- `ieum.serverMode`: `local` | `remote` | `auto`
- `ieum.remoteServerUrl`
- `ieum.authToken`
- `ieum.serverPort`
- `ieum.pythonPath`
See the documentation for setup and troubleshooting (coming soon).
---
## Production Checklist (WEB)
Before exposing the server:
1. Set explicit `IEUM_CORS_ORIGINS` (no `*`)
2. Enable `IEUM_TLS_REQUIRED=true`
3. Set token TTL (`IEUM_AUTH_TOKEN_TTL_SECONDS`)
4. Keep rate limit enabled
5. Set `IEUM_SANDBOX_ROOT`
6. Enable `IEUM_REQUIRE_USER_ID=true` for multi-user use
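The per-IP limit configured by `IEUM_RATE_LIMIT_MAX_REQUESTS` / `IEUM_RATE_LIMIT_WINDOW_SECONDS` can be approximated with a fixed-window counter. This is an illustrative sketch of the concept, not IEUM's actual implementation:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    def __init__(self, max_requests=120, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.counters = defaultdict(lambda: [0.0, 0])  # ip -> [window_start, count]

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.counters[ip]
        if now - start >= self.window:   # window expired: start a fresh one
            self.counters[ip] = [now, 1]
            return True
        if count < self.max_requests:    # still inside the window and under budget
            self.counters[ip][1] += 1
            return True
        return False                     # over budget: reject

limiter = FixedWindowLimiter(max_requests=3, window_seconds=60)
print([limiter.allow("1.2.3.4", now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
```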
---
## Architecture
```text
TUI (Textual) WEB (FastAPI/WS) VS Code (Extension/WebView)
\ | /
\ | /
LangGraph Agent Core
|
Tools (file/shell/search/MCP/skills)
```
Main code locations:
| Area | Path |
|---|---|
| Agent core | `ieum/agent.py` |
| TUI app | `ieum/app.py` |
| WEB server | `ieum/server/` |
| Sessions/persistence | `ieum/sessions.py` |
| Skills | `ieum/skills/` |
| VS Code extension | `vscode-ieum/` |
---
## Development
```bash
# tests
python -m pytest -q
# extension build
cd vscode-ieum
npm run compile
```
---
## Documentation
Documentation is being prepared. Stay tuned.
---
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"deepagents<0.5.0,>=0.4.1",
"langchain<2.0.0,>=1.2.3",
"langchain-openai<2.0.0,>=1.1.7",
"langchain-anthropic>=0.3.0",
"langchain-google-genai>=2.0.0",
"langchain-groq>=0.2.0",
"langchain-mistralai>=0.2.0",
"langchain-cohere>=0.3.0",
"langchain-nvidia-ai-endpoints>=0.3.0",
"langchain-upstage>=0.3.... | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.6 | 2026-02-18T14:09:34.334281 | ieum-0.0.3.tar.gz | 9,477,446 | b0/46/2dfe8a69cc0d433a7d5b11e88b1e80aeecafadeb24be07db18a51213ea84/ieum-0.0.3.tar.gz | source | sdist | null | false | 62f2dfa033e43907d4bdb5b98c2d157a | e6e9b0fa72731eb229a090cc4b1b98fcad31fc8582b7da74c0470d45b2c57c27 | b0462dfe8a69cc0d433a7d5b11e88b1e80aeecafadeb24be07db18a51213ea84 | MIT | [] | 220 |
2.4 | multi-agent-rlenv | 3.7.7 | A strongly typed Multi-Agent Reinforcement Learning framework | # `marlenv` - A unified framework for muti-agent reinforcement learning
**Documentation: [https://yamoling.github.io/multi-agent-rlenv](https://yamoling.github.io/multi-agent-rlenv)**
`marlenv` is a strongly typed library for multi-agent and multi-objective reinforcement learning.
Install the library with
```sh
$ pip install multi-agent-rlenv # Basics
$ pip install multi-agent-rlenv[all] # With all optional dependencies
$ pip install multi-agent-rlenv[smac,overcooked] # Only SMAC & Overcooked
```
It aims to provide a simple and consistent interface for reinforcement learning environments through abstraction models such as `Observation`s or `Episode`s. `marlenv` provides adapters for popular libraries such as `gym` or `pettingzoo`, plus utility wrappers that add functionality such as video recording or limiting the number of steps.
Almost every class is a dataclass, enabling seamless serialization with the `orjson` library.
# Fundamentals
## States & Observations
`MARLEnv.reset()` returns a pair of `(Observation, State)` and `MARLEnv.step()` returns a `Step`.
- `Observation` contains:
- `data`: shape `[n_agents, *observation_shape]`
- `available_actions`: boolean mask `[n_agents, n_actions]`
- `extras`: extra features per agent (default shape `(n_agents, 0)`)
- `State` represents the environment state and can also carry `extras`.
- `Step` bundles `obs`, `state`, `reward`, `done`, `truncated`, and `info`.
Rewards are stored as `np.float32` arrays. Multi-objective envs use reward vectors with `reward_space.size > 1`.
## Extras
Extras are auxiliary features appended by wrappers (agent id, last action, time ratio, available actions, ...).
Wrappers that add extras must update both `extras_shape` and `extras_meanings` so downstream users can interpret them.
`State` extras should stay in sync with `Observation` extras when applicable.
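A library-free sketch of how an agent-id wrapper might extend extras, using the shapes described above (the exact wrapper API is not shown here, and the function name is illustrative):

```python
def append_agent_id(extras, n_agents):
    """Append a one-hot agent-id vector to each agent's extras row."""
    out = []
    for agent, row in enumerate(extras):
        one_hot = [1.0 if i == agent else 0.0 for i in range(n_agents)]
        out.append(list(row) + one_hot)
    return out

# Start from empty extras of shape (n_agents, 0), as in the default.
extras = [[], [], []]
print(append_agent_id(extras, 3))
# [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```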
# Environment catalog
`marlenv.catalog` exposes curated environments and lazily imports optional dependencies.
```python
from marlenv import catalog
env1 = catalog.overcooked().from_layout("scenario4")
env2 = catalog.lle().level(6)
env3 = catalog.DeepSea(max_depth=5)
```
Catalog entries require their corresponding extras at install time (e.g., `marlenv[overcooked]`, `marlenv[lle]`).
# Wrappers & builders
Wrappers are composable through `RLEnvWrapper` and can be chained via `Builder` for fluent configuration.
```python
from marlenv import Builder
from marlenv.adapters import SMAC
env = (
Builder(SMAC("3m"))
.agent_id()
.time_limit(20)
.available_actions()
.build()
)
```
Common wrappers include time limits, delayed rewards, masking available actions, and video recording.
# Using the library
## Adapters for existing libraries
Adapters normalize external APIs into `MARLEnv`:
```python
import marlenv
gym_env = marlenv.make("CartPole-v1", seed=25)
from marlenv.adapters import SMAC
smac_env = SMAC("3m", debug=True, difficulty="9")
from pettingzoo.sisl import pursuit_v4
from marlenv.adapters import PettingZoo
env = PettingZoo(pursuit_v4.parallel_env())
```
## Designing a custom environment
Create a custom environment by inheriting from `MARLEnv` and implementing `reset`, `step`, `get_observation`, and `get_state`.
```python
import numpy as np
from marlenv import MARLEnv, DiscreteSpace, Observation, State, Step
class CustomEnv(MARLEnv[DiscreteSpace]):
def __init__(self):
super().__init__(
n_agents=3,
action_space=DiscreteSpace.action(5).repeat(3),
observation_shape=(4,),
state_shape=(2,),
)
self.t = 0
def reset(self):
self.t = 0
return self.get_observation(), self.get_state()
def step(self, action):
self.t += 1
return Step(self.get_observation(), self.get_state(), reward=0.0, done=False)
def get_observation(self):
return Observation(np.zeros((3, 4), dtype=np.float32), self.available_actions())
def get_state(self):
return State(np.array([self.t, 0], dtype=np.float32))
```
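A rollout loop against the reset/step protocol above, using a tiny stand-in env so the sketch stays self-contained (no `marlenv` import; the stub is made up, but its attribute names mirror the `Step` fields described earlier):

```python
class StubStep:
    def __init__(self, obs, state, reward, done):
        self.obs, self.state, self.reward, self.done = obs, state, reward, done

class StubEnv:
    """Counts to 3 then terminates; mirrors reset() -> (obs, state), step() -> Step."""
    def reset(self):
        self.t = 0
        return [0.0], [0.0]

    def step(self, actions):
        self.t += 1
        return StubStep([float(self.t)], [float(self.t)], 1.0, self.t >= 3)

env = StubEnv()
obs, state = env.reset()
total, done = 0.0, False
while not done:
    step = env.step(actions=[0])  # with marlenv, one action per agent
    total, done = total + step.reward, step.done
print(total)  # 3.0
```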
# Related projects
- MARL: Collection of multi-agent reinforcement learning algorithms based on `marlenv` [https://github.com/yamoling/marl](https://github.com/yamoling/marl)
- Laser Learning Environment: a multi-agent gridworld that leverages `marlenv`'s capabilities [https://pypi.org/project/laser-learning-environment/](https://pypi.org/project/laser-learning-environment/) | text/markdown | null | Yannick Molinghen <yannick.molinghen@ulb.be> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | <4,>=3.12 | [] | [] | [] | [
"numpy>=2.0.0",
"opencv-python>=4.0",
"typing-extensions>=4.0",
"gymnasium>0.29.1; extra == \"all\"",
"laser-learning-environment>=2.6.1; extra == \"all\"",
"overcooked>=0.1.0; extra == \"all\"",
"pettingzoo>=1.20; extra == \"all\"",
"pymunk>=6.0; extra == \"all\"",
"scipy>=1.10; extra == \"all\"",
... | [] | [] | [] | [
"repository, https://github.com/yamoling/multi-agent-rlenv"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:09:09.650544 | multi_agent_rlenv-3.7.7.tar.gz | 48,798 | c6/83/2effabf5d49cf665d5909a3f77b148a66d159c0f330cef979e1aa6fc4ffd/multi_agent_rlenv-3.7.7.tar.gz | source | sdist | null | false | a6796319a341b9dcd18d52fd7ac85a45 | 0a1e9c34b11102585d6023c2c331f72f572a9097769fb9046361278f6c57a341 | c6832effabf5d49cf665d5909a3f77b148a66d159c0f330cef979e1aa6fc4ffd | null | [
"LICENSE"
] | 415 |
2.4 | blazely | 0.1.1 | A blazingly fast Rust-based FastAPI alternative | # Blazely
A high-performance web framework combining Rust's speed with Python's simplicity.
**Performance:** 2.7x faster than FastAPI in single-threaded scenarios, with ultra-low latency (0.5ms vs 1.4ms).
## Features
- 🚀 **Rust-Powered**: Hyper HTTP server + Tokio async runtime
- ⚡ **Fast JSON**: Native Rust serde_json serialization (3-5x faster than Python)
- 🔓 **Free-Threading Ready**: PyO3 0.23 with `gil_used = false`
- 🎯 **Multi-Threading**: True concurrent request handling
- 🐍 **Python-Friendly**: FastAPI-like decorator syntax
- 📦 **Easy Install**: Single `pip install` with pre-built wheels (planned)
## Quick Start
### Prerequisites
- Python 3.13+ (or 3.8+, but 3.13 recommended)
- Rust toolchain ([install here](https://rustup.rs/))
### Installation
```bash
# Clone the repository
git clone https://github.com/cakirtaha/blazely.git
cd blazely
# Install dependencies and build
pip install maturin
maturin develop --release
# Or use uv (recommended)
uv pip install maturin
uv run maturin develop --release
```
### Hello World
```python
from blazely import Blazely
app = Blazely()
@app.get("/")
def hello(request=None):
return {"message": "Hello, Blazely!"}
if __name__ == "__main__":
app.run() # Server starts on http://127.0.0.1:8000
```
## Architecture
### Two-Layer Design
```
┌──────────────────────────────┐
│ Python Layer │ FastAPI-like API
│ - Decorators (@app.get) │ Easy to use
│ - Type hints & Pydantic │
└──────────────┬───────────────┘
│ PyO3 0.23 Bridge
┌──────────────▼───────────────┐
│ Rust Layer │ Performance
│ - Hyper HTTP/1.1 server │ Low latency
│ - Tokio multi-threading │ High throughput
│ - serde_json │
└──────────────────────────────┘
```
### Request Flow
1. **HTTP Request** → Hyper receives (Rust, no GIL)
2. **Route Matching** → matchit finds handler (Rust, no GIL)
3. **Python Handler** → Acquire GIL, call your function
4. **JSON Response** → serde_json serializes (Rust)
5. **HTTP Response** → Hyper sends, release GIL
**Key Insight:** GIL is held only during your Python handler execution (~1ms)
## Performance
### Benchmark Results
Test: 150 requests to `GET /` endpoint (simple JSON response)
| Metric | Blazely | FastAPI | Improvement |
|--------|---------|---------|-------------|
| **Single-threaded** | 1,935 req/s | 714 req/s | **2.71x faster** |
| **Multi-threaded (10)** | 2,890 req/s | 2,665 req/s | **1.08x faster** |
| **Latency (avg)** | 0.50ms | 1.38ms | **2.76x lower** |
### Why So Fast?
1. **Rust serde_json** - All JSON in compiled code (not Python)
2. **Hyper** - Low-level HTTP server (minimal overhead)
3. **Minimal GIL** - Python only during handler execution
4. **Multi-threading** - Tokio handles concurrency efficiently
## API Reference
### Creating an Application
```python
from blazely import Blazely
app = Blazely(title="My API", version="1.0.0")
```
### Route Decorators
```python
@app.get("/path") # GET request
@app.post("/path") # POST request
@app.put("/path") # PUT request
@app.delete("/path") # DELETE request
@app.patch("/path") # PATCH request
```
### Path Parameters
```python
@app.get("/users/{user_id}")
def get_user(request=None, user_id: int = 0):
# Extract from request if needed
if request and "path_params" in request:
user_id = int(request["path_params"]["user_id"])
return {"user_id": user_id}
```
### Query Parameters
```python
@app.get("/search")
def search(request=None, q: str = "", limit: int = 10):
# Extract from request if needed
if request and "query_params" in request:
q = request["query_params"].get("q", q)
limit = int(request["query_params"].get("limit", limit))
return {"query": q, "limit": limit}
```
### Request Body (Pydantic)
```python
from pydantic import BaseModel
class Item(BaseModel):
name: str
price: float
@app.post("/items")
def create_item(request=None, item: Item = None):
# Request body is parsed and validated
if request and "body" in request:
item = Item(**request["body"])
return {"item": item.model_dump()}
```
### Const Routes (Ultra-Fast)
```python
@app.get("/health", const=True)
def health(request=None):
return {"status": "ok"}
```
Response is cached in Rust and served without calling Python. Expected: 100k+ req/s.
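Conceptually, a const route serializes the handler's response once at registration and replays the cached bytes on every request. Blazely does this in Rust, so the pure-Python sketch below (class and method names are made up) only illustrates the idea:

```python
import json

class ConstCache:
    def __init__(self):
        self._cache = {}

    def register(self, path, handler):
        # Call the handler once at registration; reuse the serialized bytes after.
        self._cache[path] = json.dumps(handler()).encode()

    def serve(self, path):
        return self._cache[path]  # no handler call, no re-serialization

cache = ConstCache()
cache.register("/health", lambda: {"status": "ok"})
print(cache.serve("/health"))  # b'{"status": "ok"}'
```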
## Project Structure
```
blazely/
├── python/blazely/ # Python API layer
│ ├── app.py # Main Blazely class with decorators
│ ├── response.py # Response types
│ ├── params.py # Parameter extractors
│ └── _internal.py # Imports Rust extension module
│
├── src/ # Rust performance engine
│ ├── lib.rs # PyO3 module entry point
│ ├── server.rs # Hyper HTTP server + Tokio runtime
│ ├── router.rs # Route matching with matchit
│ ├── handler.rs # Python handler bridge (GIL management)
│ ├── request.rs # HTTP request → Python dict
│ ├── response.rs # Python dict → HTTP response (serde_json)
│ └── runtime.rs # Tokio runtime creation
│
├── examples/
│ └── hello_world.py # Simple example application
│
├── Cargo.toml # Rust dependencies
└── pyproject.toml # Python package config (Maturin)
```
## Technology Stack
### Rust (Performance Layer)
- **PyO3 0.23** - Python/Rust bindings with free-threading support
- **Hyper 1.0** - Low-level HTTP server
- **Tokio** - Multi-threaded async runtime
- **serde_json** - Fast JSON serialization
- **matchit** - Fast path routing
- **pythonize** - Python ↔ serde conversion
### Python (API Layer)
- **Pydantic** - Data validation
- **typing-extensions** - Type hints
## Development
### Building
```bash
# Development build (faster compilation)
maturin develop
# Release build (optimized, slower compilation)
maturin develop --release
# With uv (recommended)
uv run maturin develop --release
```
### Running Tests
```bash
# Python tests
pytest tests/python/
# Rust tests
cargo test
```
### Running Examples
```bash
python examples/hello_world.py
# Visit http://127.0.0.1:8000
```
## Current Status
### ✅ Working
- HTTP server (Hyper + Tokio)
- Route decorators (@app.get, @app.post, etc.)
- Path and query parameters
- Request body with Pydantic
- JSON responses (serde_json)
- Const route caching
- Multi-threading support
- Free-threading compatible (Python 3.13t ready)
### ⚠️ Limitations
1. **Handlers must accept a `request` parameter** - even if unused: `def handler(request=None)`
2. **Async handlers are not fully supported** - use sync handlers for now
3. **Parameter extraction is manual** - values must be read from the `request` dict
4. **No middleware** - planned for a future version
5. **No OpenAPI generation** - planned for a future version
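Until automatic injection lands, a small helper can centralize the manual extraction noted in limitations 1 and 3. This is a sketch, not part of Blazely: `extract` is a hypothetical name, and it assumes only the `request`-dict shape shown in the parameter examples above (`path_params` / `query_params` maps of strings).

```python
def extract(request, name, default=None, cast=str):
    """Look up `name` in path params, then query params, then fall back to `default`."""
    if request:
        for key in ("path_params", "query_params"):
            params = request.get(key) or {}
            if name in params:
                return cast(params[name])
    return default

# Usage inside a handler body: user_id = extract(request, "user_id", 0, int)
demo = {"path_params": {"user_id": "42"}, "query_params": {"limit": "5"}}
print(extract(demo, "user_id", 0, int))  # 42
print(extract(demo, "limit", 10, int))   # 5
print(extract(None, "q", "fallback"))    # fallback
```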
### 🎯 Roadmap
- [ ] Automatic parameter injection (remove manual extraction)
- [ ] Full async handler support
- [ ] Middleware system
- [ ] OpenAPI schema generation
- [ ] Request/Response classes (instead of dicts)
- [ ] WebSocket support
- [ ] Static file serving
## Troubleshooting
### Build fails
```bash
# Update Rust
rustup update
# Clean and rebuild
cargo clean
maturin develop --release
```
### Import error: `blazely._internal`
```bash
# Rebuild the extension
maturin develop
```
### Server not responding
Make sure handlers accept the `request` parameter:
```python
# ❌ Wrong
@app.get("/")
def handler():
    return {}

# ✅ Correct
@app.get("/")
def handler(request=None):
    return {}
```
## Contributing
Contributions welcome! Key areas:
1. **Performance** - Optimize hot paths
2. **Features** - Middleware, OpenAPI, async support
3. **Testing** - More integration tests
4. **Documentation** - Examples and guides
## License
MIT License - see LICENSE file
## Acknowledgments
- Built with [PyO3](https://pyo3.rs/) - Amazing Rust/Python interop
- Inspired by [FastAPI](https://fastapi.tiangolo.com/) - Great API design
- Powered by [Hyper](https://hyper.rs/) - Rust's HTTP foundation
---
**Blazely = Rust's speed + Python's simplicity** 🚀
| text/markdown; charset=UTF-8; variant=GFM | Blazely Contributors | null | null | null | MIT | web, framework, fastapi, rust, performance, async, http | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python... | [] | null | null | >=3.9 | [] | [] | [] | [
"pydantic>=2.0",
"typing-extensions>=4.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"httpx>=0.24; extra == \"dev\"",
"maturin>=1.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/cakirtaha/blazely/issues",
"Homepage, https://github.com/cakirtaha/blazely",
"Repository, https://github.com/cakirtaha/blazely"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:08:33.132439 | blazely-0.1.1.tar.gz | 64,842 | 71/e4/89dea470fade60ff7fe529abfcf155fb924e503f5b3cef0a7cd2ef4e5d5b/blazely-0.1.1.tar.gz | source | sdist | null | false | 83548c6ab4322c5df5c6997e78af251e | f37b5c10b7363d1369a79d63129ce8ed66e50a07b8256c926d720505de216757 | 71e489dea470fade60ff7fe529abfcf155fb924e503f5b3cef0a7cd2ef4e5d5b | null | [] | 650 |
2.4 | omnibase_spi | 0.10.0 | ONEX Service Provider Interface - Protocol definitions | # ONEX Service Provider Interface (omnibase_spi)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](https://github.com/astral-sh/ruff)
[](https://mypy.readthedocs.io/)
[](https://github.com/pre-commit/pre-commit)
[](https://github.com/OmniNode-ai/omnibase_spi)
[](https://github.com/OmniNode-ai/omnibase_spi)
[](https://github.com/OmniNode-ai/omnibase_spi/releases)
**Pure protocol interfaces for the ONEX framework with zero implementation dependencies.**
## Table of Contents
- [Quick Start](#quick-start)
- [V0.3.0 Highlights](#v030-highlights)
- [Architecture](#architecture)
- [Repository Structure](#repository-structure)
- [Protocol Overview](#protocol-overview)
- [Key Features](#key-features)
- [Exception Hierarchy](#exception-hierarchy)
- [Protocol Design Guidelines](#protocol-design-guidelines)
- [Development](#development)
- [Namespace Isolation](#namespace-isolation)
- [Contributing](#contributing)
- [Documentation](#documentation)
- [See Also](#see-also)
- [License](#license)
- [Support](#support)
## Quick Start
```bash
# Install with poetry
poetry add omnibase-spi
# Or with pip
pip install omnibase-spi
```
```python
# Import node protocols
from omnibase_spi.protocols.nodes import (
    ProtocolNode,
    ProtocolComputeNode,
    ProtocolEffectNode,
    ProtocolReducerNode,
    ProtocolOrchestratorNode,
)

# Import handler protocol
from omnibase_spi.protocols.handlers import ProtocolHandler

# Import registry protocol
from omnibase_spi.protocols.registry import ProtocolHandlerRegistry

# Import contract compilers
from omnibase_spi.protocols.contracts import (
    ProtocolEffectContractCompiler,
    ProtocolWorkflowContractCompiler,
    ProtocolFSMContractCompiler,
)

# Import exception hierarchy
from omnibase_spi.exceptions import (
    SPIError,
    ProtocolHandlerError,
    ContractCompilerError,
    RegistryError,
)
```
## V0.3.0 Highlights
- **Node Protocols**: Complete node type hierarchy with `ProtocolNode`, `ProtocolComputeNode`, `ProtocolEffectNode`, `ProtocolReducerNode`, and `ProtocolOrchestratorNode`
- **Handler Protocol**: `ProtocolHandler` with full lifecycle management (initialize, execute, shutdown)
- **Contract Compilers**: Effect, Workflow, and FSM contract compilation protocols
- **Handler Registry**: `ProtocolHandlerRegistry` for dependency injection and handler lookup
- **Exception Hierarchy**: Structured `SPIError` base with specialized subclasses
- **180+ Protocols**: Comprehensive coverage across 23 specialized domains
## Architecture
```text
+-----------------------------------------------------------+
|                       Applications                        |
|               (omniagent, omniintelligence)               |
+-----------------------------+-----------------------------+
                              | uses
                              v
+-----------------------------------------------------------+
|                       omnibase_spi                        |
|             (Protocol Contracts, Exceptions)              |
|  - ProtocolNode, ProtocolComputeNode, ProtocolEffectNode  |
|  - ProtocolHandler, ProtocolHandlerRegistry               |
|  - Contract Compilers (Effect, Workflow, FSM)             |
+-----------------------------+-----------------------------+
                              | imports models
                              v
+-----------------------------------------------------------+
|                       omnibase_core                       |
|               (Pydantic Models, Core Types)               |
+-----------------------------+-----------------------------+
                              | implemented by
                              v
+-----------------------------------------------------------+
|                      omnibase_infra                       |
|              (Handler Implementations, I/O)               |
+-----------------------------------------------------------+
```
**Related Repositories**:
- [omnibase_spi](https://github.com/OmniNode-ai/omnibase_spi) - This repository (Protocol contracts)
- [omnibase_core](https://github.com/OmniNode-ai/omnibase_core) - Pydantic models and core types
- [omnibase_infra](https://github.com/OmniNode-ai/omnibase_infra) - Concrete implementations
**Dependency Rules**:
- SPI -> Core: **allowed** (runtime imports of models and contract types)
- Core -> SPI: **forbidden** (no imports)
- SPI -> Infra: **forbidden** (no imports, even transitively)
- Infra -> SPI + Core: **expected** (implements behavior)
## Repository Structure
```text
src/omnibase_spi/
+-- protocols/
|   +-- nodes/                   # Node type protocols
|   |   +-- base.py              # ProtocolNode
|   |   +-- compute.py           # ProtocolComputeNode
|   |   +-- effect.py            # ProtocolEffectNode
|   |   +-- reducer.py           # ProtocolReducerNode
|   |   +-- orchestrator.py      # ProtocolOrchestratorNode
|   |   +-- legacy/              # Deprecated protocols (removal in v0.5.0)
|   +-- handlers/                # Handler protocol
|   |   +-- protocol_handler.py
|   +-- contracts/               # Contract compiler protocols
|   |   +-- effect_compiler.py
|   |   +-- workflow_compiler.py
|   |   +-- fsm_compiler.py
|   +-- registry/                # Handler registry protocol
|   |   +-- handler_registry.py
|   +-- container/               # DI and service registry
|   +-- event_bus/               # Event bus protocols
|   +-- workflow_orchestration/  # Workflow protocols
|   +-- mcp/                     # MCP integration protocols
|   +-- [14 more domains]
+-- exceptions.py                # SPIError hierarchy
+-- py.typed                     # PEP 561 marker
```
## Protocol Overview
The ONEX SPI provides **180+ protocols** across **23 specialized domains**:
| Domain | Protocols | Description |
|--------|-----------|-------------|
| Nodes | 5 | Node type hierarchy (Compute, Effect, Reducer, Orchestrator) |
| Handlers | 1 | Protocol handler with lifecycle management |
| Contracts | 3 | Contract compilers (Effect, Workflow, FSM) |
| Registry | 1 | Handler registry for DI |
| Container | 21 | Dependency injection, lifecycle management |
| Event Bus | 13 | Distributed messaging infrastructure |
| Workflow Orchestration | 14 | Event-driven FSM coordination |
| MCP Integration | 15 | Multi-subsystem tool coordination |
| Memory | 15 | Workflow state persistence |
| Core System | 16 | Logging, health monitoring, error handling |
| Plus 13 more domains | 76+ | Validation, networking, file handling, etc. |
## Key Features
- **Zero Implementation Dependencies** - Pure protocol contracts only
- **Runtime Type Safety** - Full `@runtime_checkable` protocol support
- **Dependency Injection** - Sophisticated service lifecycle management
- **Event-Driven Architecture** - Event sourcing and workflow orchestration
- **Multi-Subsystem Coordination** - MCP integration and distributed tooling
- **Enterprise Features** - Health monitoring, metrics, circuit breakers
## Exception Hierarchy
```python
SPIError                            # Base exception for all SPI errors
+-- ProtocolHandlerError            # Handler execution errors
|   +-- HandlerInitializationError  # Handler failed to initialize
+-- ContractCompilerError           # Contract compilation/validation errors
+-- RegistryError                   # Handler registry operation errors
+-- ProtocolNotImplementedError     # Missing protocol implementation
+-- InvalidProtocolStateError       # Lifecycle state violations
```
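Because every specialized class derives from `SPIError`, callers can catch broadly or narrowly. The snippet below is a toy re-creation of part of the hierarchy for illustration only; in real code the classes come from `omnibase_spi.exceptions`, and `lookup` is a hypothetical helper.

```python
# Toy re-creation of part of the hierarchy above, for illustration only;
# import the real classes from omnibase_spi.exceptions.
class SPIError(Exception): ...
class ProtocolHandlerError(SPIError): ...
class HandlerInitializationError(ProtocolHandlerError): ...
class RegistryError(SPIError): ...

def lookup(registry, name):
    # Hypothetical helper: raises the specialized registry error on a miss.
    if name not in registry:
        raise RegistryError(f"no handler registered for {name!r}")
    return registry[name]

try:
    lookup({}, "http")
except SPIError as exc:  # catching the base class catches every subclass
    caught = type(exc).__name__

print(caught)  # RegistryError
```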
## Protocol Design Guidelines
### Protocol Definition Pattern
```python
from typing import Protocol, runtime_checkable

from omnibase_core.models.compute import ModelComputeInput, ModelComputeOutput

@runtime_checkable
class ProtocolComputeNode(Protocol):
    """Compute node for pure transformations."""

    @property
    def is_deterministic(self) -> bool:
        """Whether this node produces deterministic output."""
        ...

    async def execute(self, input_data: ModelComputeInput) -> ModelComputeOutput:
        """Execute the compute operation."""
        ...
```
### Protocol Requirements
Every protocol must:
1. Inherit from `typing.Protocol`
2. Have `@runtime_checkable` decorator
3. Use `...` (ellipsis) for method bodies
4. Import Core models for type hints (allowed at runtime)
5. Have docstrings with Args/Returns/Raises
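Requirement 2 is what makes structural `isinstance` checks work at runtime. A self-contained illustration with a toy protocol (not one of the shipped 180+); note that `runtime_checkable` checks only for the presence of members, not their signatures:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class ProtocolGreeter(Protocol):
    def greet(self, name: str) -> str: ...

class Greeter:  # no inheritance needed: matching the shape is enough
    def greet(self, name: str) -> str:
        return f"hello {name}"

print(isinstance(Greeter(), ProtocolGreeter))  # True
print(isinstance(object(), ProtocolGreeter))   # False
```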
## Development
```bash
# Install dependencies
poetry install
# Run tests
poetry run pytest
# Run single test file
poetry run pytest tests/path/to/test_file.py
# Run single test
poetry run pytest tests/path/to/test_file.py::test_name -v
# Type checking
poetry run mypy src/
# Strict type checking (target for CI)
poetry run mypy src/ --strict
# Format code
poetry run black src/ tests/
poetry run isort src/ tests/
# Lint
poetry run ruff check src/ tests/
# Build package
poetry build
# Run standalone validators (stdlib only, no dependencies)
python scripts/validation/run_all_validations.py
python scripts/validation/run_all_validations.py --strict --verbose
# Individual validators
python scripts/validation/validate_naming_patterns.py src/
python scripts/validation/validate_namespace_isolation.py
python scripts/validation/validate_architecture.py --verbose
# Pre-commit hooks
pre-commit run --all-files
pre-commit run validate-naming-patterns --all-files
pre-commit run validate-namespace-isolation-new --all-files
```
## Namespace Isolation
This SPI package maintains **complete namespace isolation** to prevent circular dependencies:
| Rule | Status |
|------|--------|
| `from omnibase_spi.protocols.* import ...` | Allowed |
| `from omnibase_core.* import ...` | Allowed |
| `from omnibase_infra.* import ...` | Forbidden |
| Pydantic models in SPI | Forbidden (use Core) |
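The forbidden-import rules can be enforced mechanically. Below is a stdlib-only sketch in the spirit of `validate_namespace_isolation.py`; the real validator's behavior and output may differ:

```python
import ast

FORBIDDEN_PREFIXES = ("omnibase_infra",)

def forbidden_imports(source: str) -> list[str]:
    """Return dotted names of imports that violate the isolation rules."""
    bad = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        bad += [n for n in names if n.startswith(FORBIDDEN_PREFIXES)]
    return bad

snippet = "from omnibase_core.models import X\nimport omnibase_infra.db\n"
print(forbidden_imports(snippet))  # ['omnibase_infra.db']
```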
## Contributing
We welcome contributions! Please see our [Contributing Guide](docs/CONTRIBUTING.md) for development guidelines.
```bash
# Clone the repository
git clone https://github.com/OmniNode-ai/omnibase_spi.git
cd omnibase_spi
# Install dependencies
poetry install
# Run validation
poetry run pre-commit run --all-files
```
## Documentation
- **[Complete Documentation](docs/README.md)** - Comprehensive protocol documentation
- **[API Reference](docs/api-reference/README.md)** - All 180+ protocols across 23 domains
- **[Quick Start Guide](docs/QUICK-START.md)** - Get up and running in minutes
- **[Developer Guide](docs/developer-guide/README.md)** - Development workflow and best practices
- **[Architecture Overview](docs/architecture/README.md)** - Design principles and patterns
- **[Protocol Sequence Diagrams](docs/PROTOCOL_SEQUENCE_DIAGRAMS.md)** - Interaction patterns
- **[Glossary](docs/GLOSSARY.md)** - Terminology and definitions
- **[Changelog](CHANGELOG.md)** - Version history and release notes
See the [Glossary](docs/GLOSSARY.md) for definitions of SPI-specific terms like Protocol, Handler, Node, and Contract.
## See Also
- **[Contributing Guide](docs/CONTRIBUTING.md)** - How to contribute to the project
- **[MVP Plan](docs/MVP_PLAN.md)** - v0.3.0 work breakdown and architecture
- **[CLAUDE.md](CLAUDE.md)** - AI assistant guidance for working with this repository
## License
This project is licensed under the **MIT License** - see the [LICENSE](LICENSE) file for details.
## Support
- **Documentation**: [Complete Documentation](docs/README.md)
- **Issues**: [GitHub Issues](https://github.com/OmniNode-ai/omnibase_spi/issues)
- **Discussions**: [GitHub Discussions](https://github.com/OmniNode-ai/omnibase_spi/discussions)
- **Email**: [team@omninode.ai](mailto:team@omninode.ai)
---
**Made with care by the OmniNode Team**
| text/markdown | OmniNode Team | team@omninode.ai | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"typing-extensions>=4.5.0",
"pydantic>=2.11.7",
"omnibase-core>=0.18.0"
] | [] | [] | [] | [] | poetry/2.2.1 CPython/3.12.12 Darwin/24.6.0 | 2026-02-18T14:07:27.807977 | omnibase_spi-0.10.0.tar.gz | 489,283 | 78/9c/8cb2094abfea4ef30c03dffc5d54987c1e5086ecde34514a0ee73450dec7/omnibase_spi-0.10.0.tar.gz | source | sdist | null | false | 1c9bcf4113e206ed197d3db8ebe09e47 | 124b979163be545f17e5584cf0e1ebe08bf0fef19b20485ca0fb0ac323344b1e | 789c8cb2094abfea4ef30c03dffc5d54987c1e5086ecde34514a0ee73450dec7 | null | [] | 0 |
2.4 | openlayer-guardrails | 0.2.0 | Guardrails that can be used to check inputs and outputs of functions and work well with Openlayer tracing. | # Openlayer Guardrails
Open source guardrail implementations that work with Openlayer tracing.
## Installation
```bash
pip install openlayer-guardrails
```
## Usage
### Standalone Usage
```python
from openlayer_guardrails import PIIGuardrail
# Create guardrail
pii_guard = PIIGuardrail(
    block_entities={"CREDIT_CARD", "US_SSN"},
    redact_entities={"EMAIL_ADDRESS", "PHONE_NUMBER"}
)

# Check inputs manually
data = {"message": "My email is john@example.com and SSN is 123-45-6789"}
result = pii_guard.check_input(data)

if result.action.value == "block":
    print(f"Blocked: {result.reason}")
elif result.action.value == "modify":
    print(f"Modified data: {result.modified_data}")
```
### With Openlayer Tracing
```python
from openlayer_guardrails import PIIGuardrail
from openlayer.lib.tracing import trace
# Create guardrail
pii_guard = PIIGuardrail()
# Apply to traced functions
@trace(guardrails=[pii_guard])
def process_user_data(user_input: str):
    return f"Processed: {user_input}"
# PII is automatically handled
result = process_user_data("My email is john@example.com")
# Output: "Processed: My email is [EMAIL-REDACTED]"
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"openlayer>=0.2.0a89",
"presidio-analyzer>=2.2.0; extra == \"pii\"",
"presidio-anonymizer>=2.2.0; extra == \"pii\"",
"torch>=2.0.0; extra == \"prompt-injection\"",
"transformers>=4.40.0; extra == \"prompt-injection\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T14:06:54.963517 | openlayer_guardrails-0.2.0.tar.gz | 5,986 | f2/7f/c39665095f8488eabfe314da9aead0b25cc4281543fe1759e764336435c3/openlayer_guardrails-0.2.0.tar.gz | source | sdist | null | false | 3488b47e956304516e0f12b8b227e3eb | 95ee252dd7162f3a5bf70d32a5cd094bafc6576679156abcafbbc069d4c5544b | f27fc39665095f8488eabfe314da9aead0b25cc4281543fe1759e764336435c3 | null | [] | 250 |
2.4 | fabric-cicd | 0.2.0 | Microsoft Fabric CI/CD | # Fabric CICD
[](https://www.python.org/)
[](https://pypi.org/project/fabric-cicd)
[](https://pypi.org/project/fabric-cicd)
[](https://github.com/charliermarsh/ruff)
[](https://github.com/microsoft/fabric-cicd/actions/workflows/test.yml)
---
## Project Overview
fabric-cicd is a Python library designed for use with [Microsoft Fabric](https://learn.microsoft.com/en-us/fabric/) workspaces. This library supports code-first Continuous Integration / Continuous Deployment (CI/CD) automations to seamlessly integrate Source Controlled workspaces into a deployment framework. The goal is to assist CI/CD developers who prefer not to interact directly with the Microsoft Fabric APIs.
## Documentation
All documentation is hosted on our [fabric-cicd](https://microsoft.github.io/fabric-cicd/) GitHub Pages
Section Overview:
- [Home](https://microsoft.github.io/fabric-cicd/latest/)
- [How To](https://microsoft.github.io/fabric-cicd/latest/how_to/)
- [Examples](https://microsoft.github.io/fabric-cicd/latest/example/)
- [Contribution](https://microsoft.github.io/fabric-cicd/latest/contribution/)
- [Changelog](https://microsoft.github.io/fabric-cicd/latest/changelog/)
- [About](https://microsoft.github.io/fabric-cicd/latest/help/) - Inclusive of Support & Security Policies
## Installation
To install fabric-cicd, run:
```bash
pip install fabric-cicd
```
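A typical deployment flow looks roughly like the sketch below. The identifiers follow the library's documentation, but treat this as illustrative: verify names and parameters against the How To guide for your installed version, and note that running it requires Azure credentials and access to a real workspace (the GUID and path are placeholders, not values to copy).

```python
from fabric_cicd import FabricWorkspace, publish_all_items

# Point the library at a Fabric workspace and a source-controlled item directory.
workspace = FabricWorkspace(
    workspace_id="<your-workspace-guid>",         # placeholder
    repository_directory="<path-to-repo-items>",  # placeholder
    item_type_in_scope=["Notebook", "DataPipeline"],
)

# Publish every in-scope item from the repository to the workspace.
publish_all_items(workspace)
```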
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
| text/markdown | Microsoft Corporation License-Expression: MIT | null | null | null | null | null | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <3.13,>=3.9 | [] | [] | [] | [
"azure-identity>=1.19.0",
"dpath>=2.2.0",
"filetype>=1.2.0",
"jsonpath-ng>=1.7.0",
"packaging>=24.2",
"pyyaml>=6.0.2",
"requests>=2.32.3"
] | [] | [] | [] | [
"Repository, https://github.com/microsoft/fabric-cicd.git",
"Changelog, https://github.com/microsoft/fabric-cicd/blob/main/docs/changelog.md"
] | RestSharp/106.13.0.0 | 2026-02-18T14:06:04.127509 | fabric_cicd-0.2.0-py3-none-any.whl | 106,616 | a2/4a/db7f1b79da07a035692fe4feb564342e0e07f1eed261412f0db79288e110/fabric_cicd-0.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 598f13b8c473cb41746e4881cdf4786b | 3540382519ccdcbfa9da21558a7e7c155b8f726424878b35d2b77608ec3ac1e5 | a24adb7f1b79da07a035692fe4feb564342e0e07f1eed261412f0db79288e110 | null | [] | 9,342 |
2.4 | confocal | 0.1.6 | Multi-layer configuration management with source tracking | # Confocal
A multi-layer configuration management library built on pydantic with source tracking and profile support.
## Features
- Hierarchical config file discovery (searches parent directories)
- Profile support for different environments
- Full source tracking to explain where each config value came from
- Built on pydantic for robust validation and type safety
- Rich terminal output for config inspection
## Installation
```bash
pip install confocal
```
## Quick Start
```python
from confocal import BaseConfig
from pydantic import Field
class MyConfig(BaseConfig):
    database_url: str
    name: str = Field(default="Anonymous")
    debug: bool = False
# Load config from raiconfig.toml, environment variables, etc.
config = MyConfig.load()
# Show where config values came from
config.explain()
```
## Config Sources (in order of precedence)
1. Initialization arguments
2. Environment variables
3. Active profile from config file
4. Config files (YAML or TOML)
5. Default values
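Conceptually, the precedence list amounts to a left-to-right merge in which earlier sources win. The stdlib sketch below illustrates the resolution order only; it is not confocal's implementation:

```python
def resolve(*layers):
    """Merge config layers; earlier layers take precedence over later ones."""
    merged = {}
    for layer in layers:
        for key, value in layer.items():
            merged.setdefault(key, value)  # keep the first (highest-priority) value
    return merged

init_args = {"debug": True}
env_vars = {"database_url": "postgresql://from-env:5432"}
file_values = {"database_url": "postgresql://prod-db:5432", "name": "Anonymous"}
defaults = {"debug": False, "name": "Anonymous"}

config = resolve(init_args, env_vars, file_values, defaults)
print(config["debug"])         # True  (init args beat the default)
print(config["database_url"])  # postgresql://from-env:5432
```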
## Using Config Files
Confocal supports both TOML and YAML config files. Specify which format to use in your config class:
### Using TOML (default)
```python
from pydantic_settings import SettingsConfigDict

class MyConfig(BaseConfig):
    model_config = SettingsConfigDict(
        toml_file="config.toml",
    )
```
### Using YAML
```python
from pydantic_settings import SettingsConfigDict

class MyConfig(BaseConfig):
    model_config = SettingsConfigDict(
        yaml_file="config.yaml",
    )
```
## Using Profiles
Profile support works with both TOML and YAML files.
**TOML example** (`raiconfig.toml`):
```toml
database_url = "postgresql://prod-db:5432"
[profile.dev]
database_url = "postgresql://localhost:5432"
debug = true
[profile.test]
database_url = "postgresql://test-db:5432"
```
**YAML example** (`config.yaml`):
```yaml
database_url: "postgresql://prod-db:5432"
profile:
  dev:
    database_url: "postgresql://localhost:5432"
    debug: true
  test:
    database_url: "postgresql://test-db:5432"
```
Activate a profile:
```bash
export ACTIVE_PROFILE=dev
```
### YAML Environment Variables
YAML configs support environment variable substitution:
```yaml
database_url: "{{ env_var('DB_URL', 'postgresql://localhost:5432') }}"
api_key: "{{ env_var('API_KEY') }}" # Required, will error if not set
```
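Such `{{ env_var(...) }}` placeholders can be resolved with a small amount of stdlib code. The sketch below is illustrative only; confocal's actual template handling may differ:

```python
import os
import re

# Matches {{ env_var('NAME') }} and {{ env_var('NAME', 'default') }}
PATTERN = re.compile(r"\{\{\s*env_var\(\s*'([^']+)'\s*(?:,\s*'([^']*)')?\s*\)\s*\}\}")

def substitute(value: str) -> str:
    def repl(match):
        name, default = match.group(1), match.group(2)
        if name in os.environ:
            return os.environ[name]
        if default is not None:
            return default
        raise KeyError(f"required environment variable {name!r} is not set")
    return PATTERN.sub(repl, value)

os.environ.pop("DB_URL", None)
print(substitute("{{ env_var('DB_URL', 'postgresql://localhost:5432') }}"))
# falls back to the default when DB_URL is unset
```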
## Advanced Usage
Show full config inheritance chain:
```python
config.explain(verbose=True)
```
Manually find nearest config file in parent directories:
```python
from confocal import find_upwards
config_path = find_upwards("config.toml")
```
## License
MIT
| text/markdown | null | Your Name <your.email@example.com> | null | null | MIT | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"pydantic-settings<2.14.0,>=2.13.0",
"pydantic<2.13.0,>=2.12.0",
"pyyaml>=6.0.0",
"rich>=13.0.0",
"tomli>=2.0.0; python_version < \"3.11\"",
"build>=0.10.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"twine>=4.0.0; extra == \"dev\"",
"types-pyyaml>=6.0.0; extra == \"dev\"",
"pytest-cov>=4... | [] | [] | [] | [
"Homepage, https://github.com/joshuafcole/confocal",
"Repository, https://github.com/joshuafcole/confocal"
] | uv/0.9.9 {"installer":{"name":"uv","version":"0.9.9"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T14:05:37.349416 | confocal-0.1.6.tar.gz | 6,904 | a5/cb/5dd8f0b58c5ee6513938d9d970c2aa51b2f2b8cd1911673f8c6224bc714f/confocal-0.1.6.tar.gz | source | sdist | null | false | 0449e95e68f1235c73bbd06bc84b69aa | 627036ca9ed6ffe8ea351184872d7b9256b271df8a61166ad7c988af21483889 | a5cb5dd8f0b58c5ee6513938d9d970c2aa51b2f2b8cd1911673f8c6224bc714f | null | [] | 1,448 |
2.4 | xskillscore | 0.0.29 | Metrics for verifying forecasts | xskillscore: Metrics for verifying forecasts
============================================
+---------------------------+-------------------------------------------+
| Documentation and Support | |docs| |binder|                           |
+---------------------------+-------------------------------------------+
| Open Source               | |pypi| |conda-forge| |license| |zenodo|   |
+---------------------------+-------------------------------------------+
| Coding Standards          | |codecov| |pre-commit|                    |
+---------------------------+-------------------------------------------+
| Development Status        | |status| |testing| |upstream|             |
+---------------------------+-------------------------------------------+
**xskillscore** is an open source project and Python package that provides verification
metrics for deterministic forecasts (and, via `properscoring`, probabilistic forecasts), built on `xarray`.
Installing
----------
``$ conda install -c conda-forge xskillscore``
or
``$ pip install xskillscore``
or
``$ pip install git+https://github.com/xarray-contrib/xskillscore``
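A minimal usage sketch (``obs`` and ``fcst`` are synthetic, aligned ``xarray`` objects here; see the documentation below for the full list of metrics):

.. code-block:: python

    import numpy as np
    import xarray as xr
    import xskillscore as xs

    obs = xr.DataArray(np.random.rand(3, 4), dims=["time", "x"])
    fcst = xr.DataArray(np.random.rand(3, 4), dims=["time", "x"])

    # Deterministic metrics, reduced over the "time" dimension
    rmse = xs.rmse(obs, fcst, dim="time")
    r = xs.pearson_r(obs, fcst, dim="time")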
Documentation
-------------
Documentation can be found on `readthedocs <https://xskillscore.readthedocs.io/en/latest/>`_.
See also
--------
- If you are interested in using **xskillscore** for data science, where your data is mostly in ``pandas.DataFrame`` objects, check out the `xskillscore-tutorial <https://github.com/raybellwaves/xskillscore-tutorial>`_.
- If you are interested in using **xskillscore** for climate prediction check out `climpred <https://climpred.readthedocs.io/en/stable/>`_.
History
-------
**xskillscore** was originally developed to parallelize forecast metrics of the multi-model-multi-ensemble forecasts associated with the `SubX <https://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-18-0270.1>`_ project.
We are indebted to the **xarray** community for their `advice <https://groups.google.com/forum/#!searchin/xarray/xskillscore%7Csort:date/xarray/z8ue0G-BLc8/Cau-dY_ACAAJ>`_ in getting this package started.
.. |binder| image:: https://mybinder.org/badge_logo.svg
    :target: https://mybinder.org/v2/gh/raybellwaves/xskillscore-tutorial/master?urlpath=lab
    :alt: Binder
.. |codecov| image:: https://codecov.io/gh/xarray-contrib/xskillscore/branch/main/graph/badge.svg
    :target: https://codecov.io/gh/xarray-contrib/xskillscore
    :alt: Codecov
.. |conda-forge| image:: https://img.shields.io/conda/vn/conda-forge/xskillscore.svg
    :target: https://anaconda.org/conda-forge/xskillscore
    :alt: conda-forge
.. |docs| image:: https://img.shields.io/readthedocs/xskillscore/stable.svg?style=flat
    :target: https://xskillscore.readthedocs.io/en/stable/?badge=stable
    :alt: Documentation Status
.. |license| image:: https://img.shields.io/github/license/xarray-contrib/xskillscore.svg
    :target: https://github.com/xarray-contrib/xskillscore/blob/main/LICENSE
    :alt: License
.. |pre-commit| image:: https://results.pre-commit.ci/badge/github/xarray-contrib/xskillscore/main.svg
    :target: https://results.pre-commit.ci/latest/github/xarray-contrib/xskillscore/main
    :alt: Pre-Commit
.. |pypi| image:: https://img.shields.io/pypi/v/xskillscore.svg
    :target: https://pypi.python.org/pypi/xskillscore/
    :alt: PyPI
.. |status| image:: https://www.repostatus.org/badges/latest/active.svg
    :target: https://www.repostatus.org/#active
    :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
.. |testing| image:: https://github.com/xarray-contrib/xskillscore/actions/workflows/xskillscore_testing.yml/badge.svg
    :target: https://github.com/xarray-contrib/xskillscore/actions/workflows/xskillscore_testing.yml
    :alt: Testing
.. |upstream| image:: https://github.com/xarray-contrib/xskillscore/actions/workflows/upstream-dev-ci.yml/badge.svg
    :target: https://github.com/xarray-contrib/xskillscore/actions/workflows/upstream-dev-ci.yml
    :alt: Upstream Testing
.. |zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.5173152.svg
    :target: https://doi.org/10.5281/zenodo.5173152
    :alt: Zenodo DOI
| text/markdown | null | Ray Bell <rayjohnbell0@gmail.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Pr... | [] | null | null | >=3.9 | [] | [] | [] | [
"dask[array]>=2023.4.0",
"numpy>=1.25",
"properscoring",
"scipy>=1.10",
"statsmodels",
"xarray>=2023.4.0",
"xhistogram>=0.3.2",
"bottleneck; extra == \"accel\"",
"numba>=0.57; extra == \"accel\"",
"xskillscore[accel]; extra == \"test\"",
"cftime; extra == \"test\"",
"matplotlib; extra == \"tes... | [] | [] | [] | [
"repository, https://github.com/xarray-contrib/xskillscore",
"documentation, https://xskillscore.readthedocs.io/en/stable/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:05:36.362279 | xskillscore-0.0.29.tar.gz | 219,113 | 31/8a/97da0fda5c642afec1ff5974b16830bc07b3a8eb1ec6e43eabfe88e5c157/xskillscore-0.0.29.tar.gz | source | sdist | null | false | 70c48b11d5169ad68c78cf88bb767813 | 34375e86ea5e2ac710e52b8eb8e77c7acb5b2782aa6c67e863c71e4ae202dd74 | 318a97da0fda5c642afec1ff5974b16830bc07b3a8eb1ec6e43eabfe88e5c157 | null | [
"LICENSE.txt"
] | 1,282 |
2.4 | nicqs | 2026.2.5 | Thermo-electromagnetic modeling tool for No-Insulation HTS magnets | # NICQS
**No-Insulation Coil Quench Simulator**
*Thermo-electromagnetic modeling tool for simulating transients in No-Insulation (NI) HTS magnets using a 2D homogenized approach.*
[](https://www.python.org/)
[](https://opensource.org/licenses/MIT)
## Installation
### From PyPI (Recommended)
```bash
pip install nicqs
```
For development dependencies:
```bash
pip install nicqs[dev]
```
### From Source
**Prerequisites:**
- Python 3.10 or 3.11
- C compiler:
- Windows: Microsoft C++ Build Tools (https://visualstudio.microsoft.com/visual-cpp-build-tools/)
- Linux/Mac: GCC compiler
**Installation steps:**
```bash
# Clone the repository
git clone https://gitlab.cern.ch/steam/ni-coils.git
cd ni-coils
# Install in editable mode
pip install -e .
# Or install with all optional dependencies
pip install -e .[all]
```
**PyTorch Installation (Required):**
NICQS requires PyTorch for the solver. Install the appropriate version for your system:
```bash
# CPU version (recommended for most users)
pip install torch
# CUDA 11.8 (for NVIDIA GPUs - faster computation)
pip install torch --index-url https://download.pytorch.org/whl/cu118
# For other CUDA versions, visit: https://pytorch.org/get-started/locally/
```
**Note:** PyTorch is not included as a dependency because the installation method varies by system (CPU vs GPU). You must install it separately.
## Quick Start
### Running a simulation
1. Create an input YAML file that defines the coil geometry
2. Run the geometry preprocessor:
```bash
nicqs-geometry path/to/geometry.yaml output_directory
```
3. An output folder will be created with all geometry-specific information
4. Edit the generated `solver_input.yaml` file as needed
5. Run the simulation:
```bash
nicqs-solver path/to/solver_input.yaml
```
### Python API Usage
```python
from nicqs.geometry import NI_coil_geometry
from nicqs.solver_torch import NI_Coil_Solver
# Create geometry from input YAML
geom = NI_coil_geometry(
input_yaml_path="path/to/geometry.yaml",
output_path="output_directory"
)
geom.create_geometry()
# Run solver with generated solver_input.yaml
solver = NI_Coil_Solver(input_yaml_path="output_directory/solver_input.yaml")
solver.run()
```
## Example input for geometry creation
```yaml
# Geometry yaml input parameters
inner_radius: [0.03, 0.03] # List of inner radii [m]
outer_radius: [0.06, 0.08] # List of outer radii [m]
thickness: [0.012, 0.012] # List of pancake thicknesses [m] (commonly the width of the HTS tape)
offset: [0.0, 0.0] # List with offsets along the z-axis [m]. OPTIONAL, default=0
repeat: [2, 2] # List of how many of these pancake coils are generated along the z-axis [-]. OPTIONAL, default=1
spacing: [0.002, 0.030] # List of spacing between pancake coils [m], only needed if the coil element will be repeated more than once. OPTIONAL, default=0
parallel: [5, 1] # List of number of parallel elements [-], needed for screening currents. E.g. if parallel is set to 5, the tape will be subdivided into 5 elements over its width. OPTIONAL, default=1. Please keep this 1 for quench-back elements.
serial: [30, 50] # List of number of subdivisions per coil elements [-]. E.g. if a pancake coil has 100 turns and serial is set to 10, it will be subdivided into 10 blocks of 10 tapes. OPTIONAL, default=1
turns: [300, 50] # List of turns per pancake coil [-]. If the material is normal conducting, it is better to have the amount of turns equal to the amount of serial connections.
circuit: [1, 0] # List of circuit numbers of the coils [-]. Circuit 0 is not powered, any number afterwards can be powered.
# Operational parameters:
I_initial: [100.0, 0.0] # List of operating currents [A]. This is just for field-plot generation, can be changed during simulation. OPTIONAL, default=1
T_initial: [20.0, 20.0] # List of operating temperatures [K]. OPTIONAL, default=5
# There are several ways to add a characteristic time to the magnets:
# Please choose the most convenient option; if multiple options are set, the highest-priority one is used: "tau_circuit" > "resistivity_circuit" > "tau" > "resistivity"
tau: [20.0, 0.0] # List of magnet characteristic times [s]. OPTIONAL, default=1
tau_circuit: {1: 20} # Dictionary for the circuit, R_circuit = L_circuit / Tau_circuit
resistivity: 0.00001 # Float or List [0.00001, 0.00001]
resistivity_circuit: {1: 0.00001} # Dictionary for the circuit
T_ref: [20.0, 0.0] # Reference temperature for the characteristic time [K]. OPTIONAL, default=5
B_ref: [5.0, 0.0] # Reference magnetic field for the characteristic time [T]. OPTIONAL, default=5
conductor: ['SC1', 'Cu'] # Conductor name [str]. OPTIONAL, default='Dummy'
# Conductor properties:
Conductor_type : {'SC1': 'Superconductor', 'Cu': 'Normal'}
Conductor_SC_properties : {'SC1': {'fit-name': 'CFUN_HTS_JcFit_Fujikura_v1', 'n-value': 15, 'Ic': 1500.0, 'Tref': 4.5, 'Bref': 10, 'angle': 0.0, 'parallel': 5}}
Conductor_materials : {'SC1': ['Cu100', 'Hastelloy', 'Silver'], 'Cu': ['Cu100']}
Conductor_materials_width : {'SC1': 12.0e-3, 'Cu': 12.0e-3}
Conductor_materials_thickness : {'SC1': [20.0e-6, 50.0e-6, 2.0e-6], 'Cu': [1.0e-3]}
Conductor_resistor : {'SC1': 'Cu100', 'Cu': 'Cu100'}
Conductor_conductivity_materials : {'SC1': ['Cu100', 'Hastelloy'], 'Cu': ['Cu100']}
Conductor_conductivity_thickness : {'SC1': 0.000050, 'Cu': 0.001}
Conductor_conductivity_width : {'SC1': {'Cu100': 0.01e-3, 'Hastelloy': 11.99e-3}, 'Cu': {'Cu100': 12.0e-3}}
# Plotting options:
inductance_calculation: 'analytic' # 'discrete' or 'analytic'; 'analytic' is the default, faster, and more accurate.
rotate_90: false # rotates the plot 90 degrees.
mirror: true # mirrors the plot, otherwise it plots only half the magnet.
plot_lines: 'outer' # 'outer' or 'all', outer plots the outlines of the pancakes, all plots the outlines of each element.
# Other options
overwrite_yaml: true # Overwrites the solver_input and solver_settings yaml files.
inductance_z_divisions: 3 # Improves inductance calculations by spacing the current lines over the conductor area.
# Some solver settings that are passed on to the solver. This can also be changed manually later in the solver yaml file.
atol: 1.0e-6
rtol: 1.0e-7
max_step: 0.01
max_timestep: 0.1
min_step: 1.0e-99
min_timestep: 0.05
thermal_timer: 9999 # Allows temperature changes after this time step.
t0:
- 0.0
- 2.0
# Magnetic field
xmin: 0.0
xmax: 0.1
ymin: -0.1
ymax: 0.1
nx: 100
ny: 100
```
The yaml file above will produce the following geometry:
<p align="center">
<img src="documentation/images/doc_yaml_field.png" width="640">
</p>
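The characteristic-time option priority described in the YAML comments above ("tau_circuit" > "resistivity_circuit" > "tau" > "resistivity") can be sketched in Python. This is an illustrative sketch only; the actual NICQS preprocessor internals may differ:

```python
def resolve_characteristic_time_option(params: dict) -> str:
    """Return which characteristic-time input would win, following the
    documented priority list. Illustrative sketch, not the real NICQS code."""
    priority = ("tau_circuit", "resistivity_circuit", "tau", "resistivity")
    for key in priority:
        if params.get(key) is not None:
            return key
    raise ValueError("no characteristic-time option provided")
```

For example, if both `tau` and `resistivity` are set, `tau` wins; if `tau_circuit` is also set, it takes precedence over both.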
## Solver Configuration
After running `nicqs-geometry`, two configuration files are automatically created in the output folder:
- **`solver_input.yaml`** - Defines initial conditions, circuit data, and simulation parameters
- **`solver_settings.yaml`** - Configures solver tolerances and integration parameters
You should review and modify these files according to your simulation needs. Below is a reference for the key parameters in each file.
### solver_input.yaml Parameters
```yaml
circuit_data: # dictionary with one entry per circuit in the model. For each circuit, define the dI/dt values and the corresponding times t (two lists of equal length).
circuit_name:
dIdt:
- 0.0
- 0.0
t:
- 0
- 1
initial_conditions: # dictionary with one entry per coil in the model. For each entry, define the following keys:
coil_nr:
file_path: '' # the output file with the results we will use
I_R: 0.0 # the initial current for the resistor
I_ind: 0.0 # the initial current of the inductor
Line_index: -1 # which line of the input file it used as input, -1 is the last one.
Load_from_file: False # boolean type, set true to load data from the file with the "file_path" path.
T: 4.5 # Initial temperature, K
path_dict: # This is a dictionary that has some information on which paths, geometry and solver files should be used.
geometry_yaml_path: geometry\output\magnet_folder\geometry.yaml
solver_settings_path: geometry\output\magnet_folder\solver_settings.yaml
specific_output_path: geometry\output\magnet_folder
# Common input parameters:
thermal_timer: 99999 # initial time for the thermal solver to start working, one can set it to a large number to include thermal & loss calculations, but without the temperature actually increasing.
sampling_time_custom: [initial time, end time, step] # time range with a custom time step. There is still a chance the solver steps over this range, so it is better to add some timesteps in the circuit_data dictionary even if you don't want any changes in dIdt.
```
It is important to note that the first time we run the solver script, we have to set `file_path: ''` and `Load_from_file: false`, so that the first simulation file is created. After that, we are free to reuse the results of this simulation file by using its path as the `file_path` and setting `Load_from_file` to `true`.
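For example, a first run versus a restart of the same coil could look like this (illustrative values; the results-file path is hypothetical):

```yaml
# First run: no input file yet, so the solver creates the first results file
initial_conditions:
  coil_nr:
    file_path: ''
    Load_from_file: false
    T: 4.5

# Restart: reuse the previous results as the starting point
initial_conditions:
  coil_nr:
    file_path: 'output/previous_results.csv'  # hypothetical path
    Load_from_file: true
    Line_index: -1  # start from the last saved line
```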
* **thermal_timer** : initial time for the thermal solver to start working
* **sampling_time_custom** : array in a format [initial time, end time, step], for the solver
* **max_cooling** : boolean type. If set to true we have the maximum cooling possible and all excess heat will be cooled away.
* **maximum_cooling_value** : It is the value above which, all excess heat will be cooled down.
* **quench_fraction** : the percentage of the model getting affected
* **heat_nodes** : array or scalar that indicates which nodes will be heated up
* **heater_power** : the power applied to the nodes
* **Cap_discharge** : boolean value indicating whether there is a discharge
* **Cap_discharge_time** : time of discharge in seconds
* **Dis_List_Coils** : array or scalar that indicates which coils are discharging
* **R_multiplier** : Parameter with which we can modify the resistance R = L/tau
### solver_settings.yaml Parameters
* **atol**: integration parameter (absolute tolerance)
* **colormap_name**: coolwarm
* **dt**: time step
* **electrical_part** : option to enable the electrical part
* **keep_smallest_timestep** : option to set as timestep the smallest value
* **max_step**
* **max_timestep** : maximum timestep value which limits the sampling_time_custom attribute
* **min_step** :
* **min_timestep**
* **rtol**
* **sampling_time_dI**
* **sampling_time_dT**
* **solver**
* **stop_criterion_E**
* **stop_criterion_T**
* **stop_on_E**
* **stop_on_T**
* **t0**
* **thermal_part** : option to enable the thermal part
## Contributing
Contributions are welcome! Please feel free to submit issues or pull requests to the [GitLab repository](https://gitlab.cern.ch/steam/ni-coils).
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Citation
If you use NICQS in your research, please cite:
```
NICQS - No-Insulation Coil Quench Simulator
Tim Mulder, CERN
https://gitlab.cern.ch/steam/ni-coils
```
## Contact
**Author:** Tim Mulder
**Email:** tim.mulder@cern.ch
| text/markdown | null | Tim Mulder <tim.mulder@cern.ch> | null | Tim Mulder <tim.mulder@cern.ch> | MIT | superconductor, HTS, electromagnetics, quench, simulation, no-insulation, coils, magnets | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Topic :: Scientific/Engineering",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :... | [] | null | null | <3.12,>=3.10 | [] | [] | [] | [
"STEAM-materials>=2024.4.2",
"numpy>=1.24.2",
"matplotlib>=3.5.1",
"pandas>=1.4.1",
"scipy>=1.8.0",
"ezdxf>=0.17.2",
"PyYAML>=6.0",
"pyvista>=0.45.3",
"vtk>=9.4.2",
"pillow",
"psutil>=6.0.0",
"pytest>=8.1.1; extra == \"dev\"",
"pytest-cov>=6.0.0; extra == \"dev\"",
"build>=1.2.1; extra == ... | [] | [] | [] | [
"Homepage, https://gitlab.cern.ch/steam/ni-coils",
"Repository, https://gitlab.cern.ch/steam/ni-coils",
"Documentation, https://gitlab.cern.ch/steam/ni-coils#readme",
"Issues, https://gitlab.cern.ch/steam/ni-coils/-/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T14:05:19.520596 | nicqs-2026.2.5.tar.gz | 115,181 | 41/84/4123acc3a897d2c406df84dd308bf8b71b038465237f6a5027feacf0fd36/nicqs-2026.2.5.tar.gz | source | sdist | null | false | 3d7cb5c17096fa42eb66f3a7d510ae63 | 4fe09296aaf5b82f6ddff4c0e145f65c370ab183927e7bfe1956ce53ef24317a | 41844123acc3a897d2c406df84dd308bf8b71b038465237f6a5027feacf0fd36 | null | [
"LICENSE"
] | 155 |
2.4 | sites-conformes | 2.5.2rc4 | Content manager for creating and managing a website based on the French State design system (DSFR), accessible and responsive | # sites_conformes
Python package for Sites Conformes, a content manager for creating and managing a website based on the French State design system (DSFR), accessible and responsive.
This package is generated automatically from the official [Sites Faciles](https://github.com/numerique-gouv/sites-faciles) project.
## Installation
```bash
pip install sites_conformes
```
Or with poetry:
```bash
poetry add sites_conformes
```
## Usage
Add the applications to your `INSTALLED_APPS` in `settings.py`:
```python
INSTALLED_APPS = [
# ... your other apps
"dsfr",
"sites_conformes",
"sites_conformes.blog",
"sites_conformes.content_manager",
"sites_conformes.events",
"wagtail.contrib.settings",
"wagtail.contrib.typed_table_block",
"wagtail.contrib.routable_page",
"wagtail_modeladmin",
"wagtailmenus",
"wagtailmarkdown",
]
```
Add the required context processors:
```python
TEMPLATES[0]["OPTIONS"]["context_processors"].extend(
[
"wagtailmenus.context_processors.wagtailmenus",
"sites_conformes.content_manager.context_processors.skiplinks",
"sites_conformes.content_manager.context_processors.mega_menus",
]
)
```
Configure the URLs in your `urls.py`:
```python
# Option 1: use the sites_conformes URL configuration directly (recommended)
from sites_conformes.config.urls import *
# Option 2: custom configuration
# If you need to customize the URLs, you can copy the contents
# of sites_conformes.config.urls and adapt it to your needs
```
## Migrating from Sites Faciles
If you are migrating an existing site from the Sites Faciles repository to this package, you must update the ContentType references in your database.
### Migration steps
1. **Install the package** as described above and add all the applications to `INSTALLED_APPS`
2. **Run the Django migrations** to create the new ContentTypes:
```bash
python manage.py migrate
```
3. **Migrate the existing ContentTypes**:
```bash
python manage.py migrate_contenttype
```
This command will:
- Identify all ContentTypes from the old structure (blog, events, forms, content_manager, config)
- Update all Wagtail pages to point to the new ContentTypes
- Delete the old ContentTypes
4. **Verify the migration** (optional, dry-run mode):
```bash
python manage.py migrate_contenttype --dry-run
```
### Why this migration is necessary
When you rename Django applications (for example from `blog` to `sites_conformes_blog`), Django creates new ContentTypes. Existing Wagtail pages still reference the old ContentTypes, which causes the error:
```
PageClassNotFoundError: The page 'xxx' cannot be edited because the model class
used to create it (blog.blogindexpage) can no longer be found in the codebase.
```
The `migrate_contenttype` command fixes this problem by updating all of the references.
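Conceptually, the remapping that `migrate_contenttype` performs can be sketched as follows. This is a pure-Python illustration only: the app labels in `OLD_TO_NEW` are hypothetical, and the real command updates Django `ContentType` rows in the database rather than dictionaries.

```python
# Hypothetical mapping from old (app_label, model) pairs to the new ones.
OLD_TO_NEW = {
    ("blog", "blogindexpage"): ("sites_conformes_blog", "blogindexpage"),
    ("events", "eventspage"): ("sites_conformes_events", "eventspage"),
}

def remap_content_types(pages: list) -> list:
    """Repoint each page's content_type reference to its new ContentType.
    Illustrative sketch of the idea behind the migration command."""
    for page in pages:
        new = OLD_TO_NEW.get(page["content_type"])
        if new is not None:
            page["content_type"] = new
    return pages
```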
## Documentation
For more information on using Sites Faciles, see the [official documentation](https://github.com/numerique-gouv/sites-faciles).
## License
This project is licensed under the MIT License; see the LICENSE file for details.
## Credits
This package is based on [Sites Faciles](https://github.com/numerique-gouv/sites-faciles), developed by DINUM.
| text/markdown | null | Sébastien Reuiller <sebastien.reuiller@beta.gouv.fr>, Sylvain Boissel <sylvain.boissel@beta.gouv.fr>, Lucien Mollard <lucien.mollard@beta.gouv.fr>, Lucie Laporte <lucie.laporte@beta.gouv.fr> | null | Sylvain Boissel <sylvain.boissel@beta.gouv.fr>, Lucien Mollard <lucien.mollard@beta.gouv.fr>, Lucie Laporte <lucie.laporte@beta.gouv.fr> | null | null | [] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"django-dsfr>=2.4.0",
"django>=5.2.3",
"wagtail>=7.0.1",
"psycopg2-binary>=2.9.10",
"python-dotenv>=1.1.0",
"dj-database-url>=3.0.0",
"gunicorn>=23.0.0",
"dj-static>=0.0.6",
"wagtailmenus>=4.0.3",
"wagtail-modeladmin>=2.2.0",
"wagtail-markdown>=0.12.1",
"unidecode>=1.4.0",
"beautifulsoup4>=4... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:05:06.499655 | sites_conformes-2.5.2rc4.tar.gz | 7,053,743 | 8e/5a/568fc882dd907535f9e3f39da4074447e667057dab34c7bbf2ebe00096f8/sites_conformes-2.5.2rc4.tar.gz | source | sdist | null | false | 6d5789bcc29f9f89d8dd6ce4d9decfa3 | 5ec504eff76c8eb212c43feed0425593ae75cf96991fee48238bbf795dcb5edc | 8e5a568fc882dd907535f9e3f39da4074447e667057dab34c7bbf2ebe00096f8 | null | [] | 262 |
2.4 | aitorrent | 0.0.1 | MCP servers for media downloader with Plex, TMDB, and qBittorrent integration | This is a set of MCPs that can turn any LLM into a movie/TV-downloading media manager. It has access to your Plex library and qBittorrent client, so it knows what you have and what you want.
[](https://asciinema.org/a/lSxwZnB6zOWPHIrn)
## Setup
### Configure credentials:
```sh
cp .env.example .env
# Edit .env with your Plex URL and token
```
**Plex Token**: Sign in at https://app.plex.tv, click any media item, click the three dots, select "Get Info", click "View XML" (bottom-left), and copy the `X-Plex-Token` value from the URL bar.
**TMDB API Key (Optional)**: For finding missing episodes and searching for shows not in your library:
1. Create a free account at https://www.themoviedb.org
2. Go to Settings > API (https://www.themoviedb.org/settings/api)
3. Click "Request an API Key" (if you haven't already)
4. Choose "Developer" and fill out the form (you can use "Personal project" for the application type)
5. Once approved, copy the **API Key (v3 auth)** value (NOT the "API Read Access Token")
6. Add it to `.env` as `TMDB_API_KEY=your_key_here`
**Important**: Use the "API Key (v3 auth)" field, which is a 32-character hexadecimal string. The "API Read Access Token" (JWT format starting with "eyJ") won't work with this tool.
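A quick way to sanity-check which TMDB credential you copied (a small illustrative helper, not part of this package):

```python
import re

def tmdb_key_kind(key: str) -> str:
    """Distinguish the v3 API key (a 32-character hex string) from the v4
    read access token (a JWT starting with 'eyJ'). Only the v3 key works
    with this tool."""
    if re.fullmatch(r"[0-9a-fA-F]{32}", key):
        return "v3 api key"
    if key.startswith("eyJ"):
        return "v4 read access token"
    return "unknown"
```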
**qBittorrent**: Make sure qBittorrent is running with Web UI enabled. Default credentials are admin/adminadmin, configured in Tools > Options > Web UI.
## Tools
### CLI Usage
You can run the CLI tools directly without installation using `uvx`:
```sh
# List your collection names
uvx --from aitorrent aitorrent-plex-cli list
# List all your shows (show/season/episode) in "TV Shows" collection
uvx --from aitorrent aitorrent-plex-cli list "TV Shows"
# List all your movies by collection in "Movies" collection
uvx --from aitorrent aitorrent-plex-cli list "Movies"
# List all your music by artist/album in "Music" collection
uvx --from aitorrent aitorrent-plex-cli list "Music"
# Show what you're currently watching (on-deck and in-progress shows with relative times)
uvx --from aitorrent aitorrent-plex-cli watching
```
The `watching` command shows:
- **On Deck**: Episodes/movies you're currently watching with progress percentage
- **In Progress**: TV shows you're partially through with completion stats and when you last watched (e.g., "2 days ago")
#### TMDB CLI
The `tmdbinfo.py` script searches The Movie Database (requires TMDB API key):
```sh
# Search for TV shows
uvx --from aitorrent aitorrent-tmdb-cli search-shows "Star Trek Lower Decks"
# Get detailed show information (seasons, episodes, air dates)
uvx --from aitorrent aitorrent-tmdb-cli show-details 85948
# Get specific season details
uvx --from aitorrent aitorrent-tmdb-cli season-details 85948 1
# Search for movies
uvx --from aitorrent aitorrent-tmdb-cli search-movies "The Matrix"
# Get detailed movie information
uvx --from aitorrent aitorrent-tmdb-cli movie-details 603
```
This tool is useful for finding shows/movies not in your Plex library and getting episode air dates.
#### qBittorrent CLI
The `qbtinfo.py` script manages torrent downloads and automation:
```sh
# List all torrents
uvx --from aitorrent aitorrent-qbt-cli list
# List only downloading torrents
uvx --from aitorrent aitorrent-qbt-cli list downloading
# Add a torrent by magnet link or URL
uvx --from aitorrent aitorrent-qbt-cli add "magnet:?xt=urn:btih:..." --path "/path/to/save" --category "TV Shows"
# List RSS feeds
uvx --from aitorrent aitorrent-qbt-cli rss list-feeds
# Add an RSS feed
uvx --from aitorrent aitorrent-qbt-cli rss add-feed "https://showrss.info/user/123456.rss?magnets=true&namespaces=true&name=null&quality=1080p" --folder "TV Shows"
# Refresh RSS feeds
uvx --from aitorrent aitorrent-qbt-cli rss refresh
# List RSS auto-download rules
uvx --from aitorrent aitorrent-qbt-cli rss list-rules
# Create RSS rule for a show (auto-downloads new episodes)
uvx --from aitorrent aitorrent-qbt-cli rss add-show "Star Trek: Strange New Worlds" --season 2 --quality 1080p --category "TV Shows" --feeds "TV Shows\ShowRSS"
# Attach an existing rule to specific feeds
uvx --from aitorrent aitorrent-qbt-cli rss attach-rule "Star Trek: Strange New Worlds S02" "TV Shows\ShowRSS,TV Shows\EZTV"
```
The RSS auto-download feature will automatically download new episodes as they appear in your RSS feeds, perfect for keeping up with currently airing shows. **Important**: Rules must be attached to specific feeds to trigger - use the `--feeds` parameter when creating rules or use `attach-rule` to attach existing rules.
### MCP Server Usage
The same functionality is available as an MCP (Model Context Protocol) server for use with LLMs:
Add to your LLM's MCP settings (`~/.claude.json`):
```json
{
"mcpServers": {
"plex-info": {
"command": "uvx",
"args": ["--from", "aitorrent", "aitorrent-plex"],
"env": {
"PLEX_URL": "http://localhost:32400",
"PLEX_TOKEN": "your_plex_token_here",
"TMDB_API_KEY": "your_tmdb_api_key_here"
}
},
"tmdb-info": {
"command": "uvx",
"args": ["--from", "aitorrent", "aitorrent-tmdb"],
"env": {
"TMDB_API_KEY": "your_tmdb_api_key_here"
}
},
"qbt-info": {
"command": "uvx",
"args": ["--from", "aitorrent", "aitorrent-qbt"],
"env": {
"QBT_URL": "http://localhost:8080",
"QBT_USERNAME": "admin",
"QBT_PASSWORD": "adminadmin"
}
}
}
}
```
With Claude Code, you can do this, too:
```sh
claude mcp add plex-info --transport stdio \
--env PLEX_URL=http://localhost:32400 \
--env PLEX_TOKEN=your_plex_token_here \
--env TMDB_API_KEY=your_tmdb_api_key_here \
-- uvx --from aitorrent aitorrent-plex
claude mcp add tmdb-info --transport stdio \
--env TMDB_API_KEY=your_tmdb_api_key_here \
-- uvx --from aitorrent aitorrent-tmdb
claude mcp add qbt-info --transport stdio \
--env QBT_URL=http://localhost:8080 \
--env QBT_USERNAME=admin \
--env QBT_PASSWORD=adminadmin \
-- uvx --from aitorrent aitorrent-qbt
```
The LLM will have access to these tools:
**Library & Content:**
- `plex_list_libraries` - List all Plex libraries with their types
- `plex_list_library_content` - List all content from a specific library (TV shows with seasons/episodes, movies by collection, or music by artist/album)
- `plex_search` - Search for media by title across all libraries or in a specific library
**Detailed Information:**
- `plex_get_show_details` - Get detailed information about a specific TV show including all seasons, episodes, and air dates (useful for finding missing episodes)
- `plex_get_movie_details` - Get detailed information about a specific movie including cast, genres, collections, and ratings
- `plex_get_artist_details` - Get detailed information about a music artist including all albums and tracks
**Viewing Status & Progress:**
- `plex_get_on_deck` - Get items currently "On Deck" (continue watching) - shows what the user is actively watching
- `plex_get_in_progress_shows` - Get TV shows that are partially watched with completion percentage - excellent for finding shows the user is following
- `plex_get_show_watch_status` - Get detailed watch status for a specific show (which episodes are watched/unwatched per season)
- `plex_get_recently_added` - Get recently added items
**High-Level Download Helpers:**
- `plex_get_episodes_to_download` - HIGH-LEVEL: Get all episodes that need downloading with pre-formatted search queries and download paths (minimizes context usage)
- `plex_find_missing_episodes` - Compare TMDB episode list with Plex library to find missing episodes (requires TMDB_API_KEY)
- `plex_get_next_episodes` - Get next unwatched episodes after the last one watched (requires TMDB_API_KEY)
- `plex_format_torrent_query` - Format show info as a torrent search query (e.g., "Star Trek Strange New Worlds 2022 S02E05 1080p")
**TMDB Tools (standalone tmdb-info MCP):**
- `tmdb_search_shows` - Search The Movie Database for TV shows by name
- `tmdb_get_show_details` - Get detailed show info including all seasons and episodes
- `tmdb_get_season_details` - Get detailed information about a specific season
- `tmdb_search_movies` - Search TMDB for movies by title
- `tmdb_get_movie_details` - Get detailed movie information
**Filesystem & Download Path Management:**
- `plex_get_show_path` - Get the actual filesystem path where a show's episodes are stored (e.g., `/media/video/tv/Doctor Who`)
- `plex_suggest_download_path` - Intelligently suggest where to download episodes (uses existing show path or library location for new shows)
- `plex_list_library_subdirs` - List all show folders in a library (useful for fuzzy matching or checking what exists)
**qBittorrent Torrent Management:**
- `qbt_get_torrents` - Get list of torrents with optional status filter (downloading, completed, etc.)
- `qbt_get_torrent_details` - Get detailed information about a specific torrent
- `qbt_add_torrent` - Add a torrent by magnet link or URL with optional save path and category
- `qbt_get_categories` - Get all torrent categories with save paths
- `qbt_create_category` - Create a new category (useful for organizing by show/collection)
- `qbt_get_transfer_info` - Get current download/upload speeds and stats
**qBittorrent Search:**
- `qbt_search_torrents` - Search for torrents using qBittorrent's installed plugins (returns magnet links sorted by seeders)
- `qbt_get_search_plugins` - Get list of installed search plugins
- `qbt_get_downloading_episodes` - HIGH-LEVEL: Parse currently downloading torrents to extract episode info (prevents duplicate downloads)
**qBittorrent RSS Feed Management:**
- `qbt_get_rss_feeds` - Get all RSS feeds and folders (use this to see what feeds are configured)
- `qbt_add_rss_feed` - Add a new RSS feed URL
- `qbt_remove_rss_feed` - Remove an RSS feed or folder
- `qbt_refresh_rss_feed` - Manually refresh RSS feed(s) to check for new items
**qBittorrent RSS Automation:**
- `qbt_get_rss_rules` - Get all RSS auto-download rules (shows which feeds they're attached to)
- `qbt_create_show_rss_rule` - Create RSS rule to auto-download new episodes of a show (can specify feeds to attach)
- `qbt_attach_rule_to_feeds` - Attach an existing rule to specific feeds (CRITICAL - rules won't trigger unless attached!)
- `qbt_delete_rss_rule` - Delete an RSS auto-download rule
#### Example MCP Usage Scenarios
With these tools, the LLM can help you with requests like:
**Basic Library Management:**
- **"I like Breaking Bad"** → Search for the show, get details, see what episodes you have
- **"What shows am I currently watching?"** → Get on-deck and in-progress shows to see what you're actively following
- **"Find movies in the Marvel collection"** → Search libraries and filter by collection
- **"Show me all Tarantino movies"** → Search and get details about movies in that collection
- **"What albums do I have by The Beatles?"** → Search artist and get full discography
- **"What's new in my library?"** → Get recently added content
**Torrent Download Planning (with TMDB):**
- **"Download the next season of shows I'm watching"** → Get in-progress shows → Find missing episodes → Format torrent queries
- **"What episodes of The Office am I missing?"** → Compare TMDB data with Plex → Generate list of missing episodes
- **"I'd like to fill in my Doctor Who collection, but not the old ones"** → Find missing episodes → Filter by year → Format torrent queries
- **"Grab new episodes of Star Trek: Strange New Worlds"** → Get next episodes → Check if aired → Format torrent query with "1080p"
- **"Download S02E05-E10 of Breaking Bad in 1080p"** → Format multiple torrent queries with preferred quality
**Complete Workflow (Plex + qBittorrent + TMDB):**
- **"I'm watching Star Trek: Strange New Worlds, automatically download new episodes when they come out"** →
1. `plex_get_in_progress_shows()` - Verify you're watching SNW
2. `plex_get_next_episodes("Star Trek: Strange New Worlds")` - Find S02E06 aired but missing
3. `plex_get_show_path("Star Trek: Strange New Worlds")` - Get existing path: `/media/video/tv/Star Trek Strange New Worlds`
4. `plex_format_torrent_query("Star Trek Strange New Worlds", 2, 6, 2022, "1080p")` - Create search query
5. _[User would search for torrent and get magnet link]_
6. `qbt_add_torrent(magnet_link, save_path="/media/video/tv/Star Trek Strange New Worlds")` - Download to correct folder
7. `qbt_get_rss_feeds()` - Check which RSS feeds are configured (e.g., "TV Shows\\ShowRSS")
8. `qbt_create_show_rss_rule("Star Trek Strange New Worlds", season=2, quality="1080p", save_path="/media/video/tv/Star Trek Strange New Worlds", feed_paths=["TV Shows\\ShowRSS"])` - Auto-download future episodes from specific feed to same location
- **"Fill in my Doctor Who collection but skip the old episodes"** →
1. `plex_find_missing_episodes("Doctor Who")` - Get all missing episodes
2. `plex_get_show_path("Doctor Who")` - Get existing path: `/media/video/tv/Doctor Who`
3. Filter for episodes with air_date >= 2005 (modern Who)
4. For each missing episode: format torrent query → search → add to qBittorrent with correct save_path
5. Downloads automatically go to the existing Doctor Who folder in Plex
- **"Download new show not in my library yet"** →
1. `plex_search("The Expanse")` - Show not found in Plex
2. `plex_list_library_subdirs("TV Shows")` - Check existing show folders to avoid duplicates
3. `plex_suggest_download_path("The Expanse", "TV Shows")` - Suggests: `/media/video/tv/The Expanse`
4. Download episodes to suggested path
5. Plex automatically picks them up in next library scan
| text/markdown | null | David Konsumer <konsumer@jetboystudio.com> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp>=1.0.0",
"plexapi>=4.15.0",
"python-dotenv>=1.0.0",
"qbittorrent-api>=2024.1.59",
"tmdbv3api>=1.9.0"
] | [] | [] | [] | [
"Homepage, https://github.com/konsumer/aitorrent",
"Repository, https://github.com/konsumer/aitorrent"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:04:33.168826 | aitorrent-0.0.1.tar.gz | 32,865 | fb/2a/42c8cb164953f7792fc6d721ab0482825365aca33678bcbaef1007cd3e22/aitorrent-0.0.1.tar.gz | source | sdist | null | false | 5a9cc74f5dd97d6fdd178c164fc3fbed | 247082256620c93ec7d6140c22de54c1e86aab1f834d1562d9050993d91a7546 | fb2a42c8cb164953f7792fc6d721ab0482825365aca33678bcbaef1007cd3e22 | null | [
"LICENSE"
] | 269 |
2.4 | voicepad | 0.1.3 | Command-line interface for voice recording and GPU-accelerated transcription. | # voicepad CLI
Simple command-line interface for recording audio and managing transcription configuration.
## Install
```bash
pip install voicepad
```
**Requirements:** Python 3.13+
## Quick Start
```bash
# List audio input devices
voicepad config input
# Start recording (press Ctrl+C to stop)
voicepad record start
# Check system capabilities
voicepad config system
```
## Example: Record and Transcribe
```bash
# Record a meeting (will auto-transcribe)
voicepad record start --prefix team_meeting
# Output:
# - data/recordings/team_meeting_20260218_103045.wav
# - data/markdown/team_meeting_20260218_103045.md
```
## Documentation
- [CLI Command Reference](https://voicepad.readthedocs.io/packages/voicepad/) - Full documentation
- [voicepad-core Library](https://voicepad.readthedocs.io/packages/voicepad-core/) - Python API
- [Main README](https://github.com/HYP3R00T/voicepad#readme) - Project overview
## Configuration
Edit `voicepad.yaml` to set defaults:
```yaml
recordings_path: data/recordings
markdown_path: data/markdown
input_device_index: null
transcription_model: tiny
transcription_device: auto
transcription_compute_type: auto
```
See the [full documentation](https://voicepad.readthedocs.io/packages/voicepad/) for all configuration options.
| text/markdown | Rajesh Das | Rajesh Das <rajesh@hyperoot.dev> | null | null | null | cli, voice, audio, transcription, recording, speech-to-text, whisper, command-line, gpu-acceleration | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"... | [] | null | null | >=3.13 | [] | [] | [] | [
"voicepad-core>=0.1.3",
"typer>=0.23.1",
"textual>=7.5.0",
"voicepad-core[gpu]; extra == \"gpu\""
] | [] | [] | [] | [
"HomePage, https://hyp3r00t.github.io/voicepad/",
"Repository, https://github.com/HYP3R00T/voicepad",
"Issues, https://github.com/HYP3R00T/voicepad/issues",
"Documentation, https://hyp3r00t.github.io/voicepad/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T14:04:10.185206 | voicepad-0.1.3.tar.gz | 8,304 | e2/1b/596e2f1ff47d31f1b25449ef20323d41602e4acc57016922bac2a3c5f732/voicepad-0.1.3.tar.gz | source | sdist | null | false | 7ff5a22827e136d819d2662e98366aba | 1eedd1aeecb680ac0c3e3a973486c49d4feb0581156a04e75a199e7cb3c3e423 | e21b596e2f1ff47d31f1b25449ef20323d41602e4acc57016922bac2a3c5f732 | MIT | [] | 225 |
2.4 | utils-flask-sqlalchemy | 0.4.5 | Python lib of tools for Flask and SQLAlchemy | ## Utility library for SQLAlchemy and Flask
[](https://github.com/PnX-SI/Utils-Flask-SQLAlchemy/actions/workflows/pytest.yml)
[](https://codecov.io/gh/PnX-SI/Utils-Flask-SQLAlchemy)
This library provides decorators that make development with Flask and SQLAlchemy easier.
Python package: https://pypi.org/project/utils-flask-sqlalchemy/.
It is made up of three main tools:
### Serializers
The ``@serializable`` class decorator enables JSON serialization of Python objects built from SQLAlchemy classes. It dynamically adds an ``as_dict()`` method to the classes it decorates. This method turns an instance into a dictionary, converting Python types that are not JSON-compatible, based on the column types described in the SQLAlchemy model.
The ``@serializable`` decorator can be used as-is, or called with the following arguments:
- ``exclude`` (iterable, default=()): specifies the columns to exclude from serialization. By default, all columns are serialized.
The ``as_dict()`` method accepts the following parameters:
- ``recursif`` (boolean, default=False): controls whether serialization should recursively serialize child models (relationships)
- ``columns`` (iterable, default=()): specifies the columns to include in the output dictionary. If not given, the decorator's default behavior applies.
- ``relationships`` (iterable, default=()): specifies the relationships to include in the output dictionary. By default, all relationships are included when ``recursif=True``.
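To illustrate the mechanism, here is a simplified, SQLAlchemy-free sketch of such a decorator (not the library's actual implementation, which inspects the model's column types):

```python
import datetime
import json

def serializable(cls=None, *, exclude=()):
    """Simplified sketch: add an as_dict() method that converts
    values that are not JSON-compatible (here, only dates)."""
    def convert(value):
        if isinstance(value, (datetime.date, datetime.datetime)):
            return value.isoformat()
        return value

    def wrap(klass):
        def as_dict(self, columns=()):
            fields = columns or [k for k in vars(self)
                                 if not k.startswith("_") and k not in exclude]
            return {k: convert(getattr(self, k)) for k in fields}
        klass.as_dict = as_dict
        return klass

    # Support both @serializable and @serializable(exclude=...)
    return wrap if cls is None else wrap(cls)

@serializable(exclude=("password",))
class User:
    def __init__(self):
        self.name = "Ada"
        self.password = "secret"
        self.created = datetime.date(2024, 1, 1)

print(json.dumps(User().as_dict()))  # {"name": "Ada", "created": "2024-01-01"}
```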
### Responses
This module provides Flask route decorators:
- The ``@json_resp`` decorator converts the value returned by the function to JSON. It returns a 404 if the function returns None or an empty list
- The ``@json_resp_accept_empty_list`` decorator converts the value returned by the function to JSON. It returns a 404 if the function returns None, and a 200 if it returns an empty list
- The ``@csv_resp`` decorator turns the value returned by the function into a CSV file. The function must return a tuple of the form ``(file_name, data, columns, separator)``
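The None/empty-list semantics can be sketched in plain Python (a simplified illustration; the real decorators wrap Flask routes and return Flask responses):

```python
import json

def json_resp(fn):
    """Sketch of @json_resp semantics: serialize the return value to JSON,
    but answer 404 when it is None or an empty list."""
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        if result is None or result == []:
            return "Not Found", 404
        return json.dumps(result), 200
    return wrapper

@json_resp
def list_species():
    return []  # an empty list triggers the 404

@json_resp
def count_cats():
    return {"cats": 2}

print(list_species())  # ('Not Found', 404)
print(count_cats())    # ('{"cats": 2}', 200)
```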
### On-the-fly mapping
The ``generic`` module contains the ``GenericTable`` and ``GenericQuery`` classes, which let you run queries without defining a model beforehand.
| text/markdown | null | null | Parcs nationaux des Écrins et des Cévennes | geonature@ecrins-parcnational.fr | null | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Operating System :: OS Independent"
] | [] | https://github.com/PnX-SI/Utils-Flask-SQLAlchemy | null | null | [] | [] | [] | [
"flask",
"flask-sqlalchemy",
"flask-migrate",
"marshmallow",
"python-dateutil",
"sqlalchemy<2",
"pytest; extra == \"tests\"",
"geoalchemy2; extra == \"tests\"",
"shapely; extra == \"tests\"",
"jsonschema; extra == \"tests\"",
"flask-marshmallow; extra == \"tests\"",
"marshmallow-sqlalchemy; ex... | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T14:03:39.367540 | utils_flask_sqlalchemy-0.4.5.tar.gz | 43,355 | 47/70/c49843016bbffd2c86c2f227ce8cacc352a02bceab44dbc79816560ba149/utils_flask_sqlalchemy-0.4.5.tar.gz | source | sdist | null | false | d06dcff747e174d08304498794d823d2 | e40ba11e8bca35962d4e4947f72e9c409164fb013e7d8b1a54c71ddf4edade10 | 4770c49843016bbffd2c86c2f227ce8cacc352a02bceab44dbc79816560ba149 | null | [
"LICENSE"
] | 352 |
2.4 | shopware-api-client | 1.1.6 | An api client for the Shopware API | # Shopware API Client
A Django-ORM like, Python 3.12, async Shopware 6 admin and store-front API client.
## Installation
```sh
pip install shopware-api-client
# If you want to use the redis cache
pip install shopware-api-client[redis]
```
## Usage
There are two kinds of clients provided by this library: the `client.AdminClient` for the Admin API and the
`client.StoreClient` for the Store API.
### client.AdminClient
To use the AdminClient you need to create a `config.AdminConfig`. The `AdminConfig` class supports two login methods
(grant-types):
- **client_credentials** (default): lets you log in with a `client_id` and `client_secret`
- **password**: lets you log in using a `username` and `password`
You also need to provide the Base-URL of your shop.
Example:
```python
from shopware_api_client.config import AdminConfig
CLIENT_ID = "MyClientID"
CLIENT_SECRET = "SuperSecretToken"
SHOP_URL = "https://pets24.shop"
config = AdminConfig(url=SHOP_URL, client_id=CLIENT_ID, client_secret=CLIENT_SECRET, grant_type="client_credentials")
# or for "password"
ADMIN_USER = "admin"
ADMIN_PASSWORD = "!MeowMoewMoew~"
config = AdminConfig(url=SHOP_URL, username=ADMIN_USER, password=ADMIN_PASSWORD, grant_type="password")
```
Now you can create the Client. There are two output formats for the client, selected via the `raw` parameter:
- **raw=True** Outputs the result as a plain dict or list of dicts
- **raw=False** (Default) Outputs the result as Pydantic-Models
```python
from shopware_api_client.client import AdminClient
# Model-Mode
client = AdminClient(config=config)
# raw-Mode
client = AdminClient(config=config, raw=True)
```
Client-Connections should be closed after usage: `await client.close()`. The client can also be used in an `async with`
block to be closed automatically.
```python
from shopware_api_client.client import AdminClient
async with AdminClient(config=config) as client:
# do important stuff
pass
```
All registered Endpoints are directly available from the client instance. For example, if you want to query the Customer
Endpoint:
```python
customer = await client.customer.first()
```
All available Endpoint functions can be found in the [AdminEndpoint](#list-of-available-functions) section.
There are two additional ways to use the client: with the Endpoint class directly, or via the
associated Pydantic Model:
```python
from shopware_api_client.endpoints.admin.core.customer import Customer, CustomerEndpoint
# Endpoint
customer_endpoint = CustomerEndpoint(client=client)
customer = await customer_endpoint.first()
# Pydantic Model
customer = await Customer.using(client=client).first()
```
#### Related Objects
If you use the Pydantic-Model approach (`raw=False`) you can also use the returned object to access its related objects:
```python
from shopware_api_client.endpoints.admin import Customer
customer = await Customer.using(client=client).first()
customer_group = await customer.group # Returns a CustomerGroup object
all_the_customers = await customer_group.customers # Returns a list of Customer objects
```
**!! Be careful to not close the client before doing related objects calls, since they use the same Client instance !!**
```python
from shopware_api_client.client import AdminClient
from shopware_api_client.endpoints.admin import Customer
async with AdminClient(config=config) as client:
customer = await Customer.using(client=client).first()
customer_group = await customer.group # This will fail, because the client connection is already closed!
```
#### CustomEntities
Shopware allows you to create custom entities. You can use the `load_custom_entities` function to load them into the client.
```python
from shopware_api_client.client import AdminClient
config = ...
client = AdminClient(config=config)
await client.load_custom_entities()
# Endpoint for the custom entity ce_blog
await client.ce_blog.all()
# Pydantic Model for the custom entity ce_blog
CeBlog = client.ce_blog.model_class
```
Since custom entities are completely dynamic, no IDE autocompletion is available. However, some Pydantic validations are added for the custom entity's field types. Relations are currently not supported, but everything else should work as expected.
### client.StoreClient
To use the StoreClient you need to create a `config.StoreConfig`. The `StoreConfig` needs a Store API access key.
You also need to provide the Base-URL of your shop.
Some endpoints (those tied to a user) require a context token. This parameter is optional.
Example:
```python
from shopware_api_client.config import StoreConfig
ACCESS_KEY = "SJMSAKSOMEKEY"
CONTEXT_TOKEN = "ASKSKJNNMMS"
SHOP_URL = "https://pets24.shop"
config = StoreConfig(url=SHOP_URL, access_key=ACCESS_KEY, context_token=CONTEXT_TOKEN)
```
This config can be used with the `StoreClient`, ~~which works exactly like the `AdminClient`~~.
The `StoreClient` has far fewer endpoints and mostly does not support full model updates; instead it relies on
helper functions.
### Redis Caching for Rate Limits
Both the AdminClient and the StoreClient use a built-in rate limiter. Shopware's rate limits differ based on the endpoints, both for the [SaaS-](https://docs.shopware.com/en/en/shopware-6-en/saas/rate-limits) and the [on-premise-solution](https://developer.shopware.com/docs/guides/hosting/infrastructure/rate-limiter.html).
To be able to respect the rate limit when sending requests from multiple clients, it is possible to use redis as a cache-backend for route-based rate-limit data. If redis is not used, each Client independently keeps track of the rate limit. Please note that the non-Redis cache is not thread-safe.
To use redis, simply hand over a redis-client to the client config:
```py
import redis
from shopware_api_client.config import AdminConfig, StoreConfig
from shopware_api_client.client import AdminClient, StoreClient
redis_client = redis.Redis()
admin_config = AdminConfig(
    url='',
    client_id='...',
    client_secret='...',
    redis_client=redis_client,
)
admin_client = AdminClient(config=admin_config)  # <- This client uses the redis client now
store_config = StoreConfig(
    url='',
    access_key='',
    context_token='',
    redis_client=redis_client,
)
store_client = StoreClient(config=store_config)  # <- Works for the store client as well (Only do this in safe environments)
```
__Note:__ Shopware currently enforces rate limits on a per–public‑IP basis. As a result, you should only share Redis‑backed rate‑limit caching among clients that originate from the same public IP address.
## AdminEndpoint
The `base.AdminEndpoint` class should be used for creating new Admin-Endpoints. It provides some useful functions to call
the Shopware-API.
The base structure of an Endpoint is pretty simple:
```python
from shopware_api_client.base import EndpointMixin, AdminEndpoint
class CustomerGroup(EndpointMixin["CustomerGroupEndpoint"]):
# Model definition
pass
class CustomerGroupEndpoint(AdminEndpoint[CustomerGroup]):
name = "customer_group" # name of the Shopware-Endpoint (snaky)
path = "/customer-group" # path of the Shopware-Endpoint
model_class = CustomerGroup # Pydantic-Model of this Endpoint
```
### List of available Functions
- `all()` return all objects (GET /customer-group or POST /search/customer-group if filter or sort is set)
- `get(pk: str = id)` returns the object with the passed id (GET /customer-group/id)
- `update(pk: str = id, obj: ModelClass | dict[str, Any])` updates an object (PATCH /customer-group/id)
- `create(obj: ModelClass | dict[str, Any])` creates a new object (POST /customer-group)
- `delete(pk: str = id)` deletes an object (DELETE /customer-group/id)
- `filter(name="Cats")` adds a filter to the query. Needs to be called with .all(), .iter() or .first(). More Info: [Filter](#filter)
- `limit(count: int | None)` sets the limit parameter, to limit the amount of results. Needs to be called with .all() or .first()
- `first()` sets the limit to 1 and returns the result (calling .all())
- `order_by(fields: str | tuple[str])` sets the sort parameter. Needs to be called with .all(), .iter() or .first(). Syntax: "name" for ASC, "-name" for DESC
- `select_related(**kwargs: dict[str, Any])` sets the _associations parameter to define which related models to load in the request. Needs to be called with .all(), .iter() or .first().
- `only(**kwargs: list[str])` sets the _includes parameter to define which fields to request. Needs to be called with .all(), .iter() or .first().
- `iter(batch_size: int = 100)` sets the limit-parameter to batch_size and makes use of the pagination of the api. Should be used when requesting a big set of data (GET /customer-group or POST /search/customer-group if filter or sort is set)
- `bulk_upsert(objs: list[ModelClass] | list[dict[str, Any]])` creates/updates multiple objects. Always returns the plain response dict. (POST /_action/sync)
- `bulk_delete(objs: list[ModelClass] | list[dict[str, Any]])` deletes multiple objects. Always returns the plain response dict. (POST /_action/sync)
Not all functions are available on StoreClient endpoints, but some of them provide additional functions.
StoreClient endpoints based on `base.StoreSearchEndpoint` support most of the filter functions, but not create/update/delete.
### Filter
The `filter()` function allows you to filter the result of a query. The parameters are basically the field names.
You can add a suffix to change the filter type. Without one, it looks for direct matches (equals). The following
suffixes are available:
- `__in` expects a list of values, matches if the value is provided in this list (equalsAny)
- `__contains` matches values that contain this value (contains)
- `__gt` greater than (range)
- `__gte` greater than equal (range)
- `__lt` lower than (range)
- `__lte` lower than equal (range)
- `__range` expects a tuple of two items, matches everything in between, inclusive (range)
- `__startswith` matches if the value starts with this (prefix)
- `__endswith` matches if the value ends with this (suffix)
For some fields (those returned as dicts, like custom_fields) it is also possible to filter on the values of their
keys. To do so, append the key separated by "__". For example, if we have a custom field called "preferred_protein"
we can filter on it like this:
```python
customer = await Customer.using(client=client).filter(custom_field__preferred_protein="fish")
# or with a filter-type suffix
customer = await Customer.using(client=client).filter(custom_field__preferred_protein__in=["fish", "chicken"])
```
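Conceptually, each suffix maps a keyword argument onto one of Shopware's criteria filter types. A rough, illustrative sketch of that translation (not the library's actual code; nested custom-field keys are omitted for brevity):

```python
SUFFIX_TYPES = {"in": "equalsAny", "contains": "contains",
                "startswith": "prefix", "endswith": "suffix"}
RANGE_SUFFIXES = ("gt", "gte", "lt", "lte")

def build_filters(**kwargs):
    """Translate Django-style filter kwargs into Shopware criteria filters."""
    filters = []
    for key, value in kwargs.items():
        field, _, suffix = key.rpartition("__")
        if field and suffix in SUFFIX_TYPES:
            filters.append({"type": SUFFIX_TYPES[suffix],
                            "field": field, "value": value})
        elif field and suffix in RANGE_SUFFIXES:
            filters.append({"type": "range", "field": field,
                            "parameters": {suffix: value}})
        else:  # no known suffix: a plain equals match
            filters.append({"type": "equals", "field": key, "value": value})
    return filters

print(build_filters(name__in=["Cats", "Dogs"], price__gte=10))
```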
## ApiModelBase
The `base.ApiModelBase` class is basically a `pydantic.BaseModel` and should be used to create Endpoint-Models.
The base structure of an Endpoint-Model looks like this. Field names are converted to snake_case and aliases are autogenerated:
```python
from pydantic import Field
from typing import Any
from shopware_api_client.base import ApiModelBase, CustomFieldsMixin
class CustomerGroup(ApiModelBase, CustomFieldsMixin):
_identifier = "customer_group" # name of the Shopware-Endpoint (snaky)
name: str # Field with type
display_gross: bool | None = None
# other fields...
```
These base models live in `shopware_api_client.models`.
The `id`, `version_id`, `created_at`, `updated_at` and `translated` attributes are provided by ApiModelBase and
must not be added again. These are default fields of Shopware's `Entity` class, even if they are not always used.
If an entity supports the `EntityCustomFieldsTrait` you can add the `CustomFieldsMixin` to add the custom_fields field.
### List of available Functions
- `save()` executes `Endpoint.update()` if an id is set otherwise `Endpoint.create()`
- `delete()` executes `Endpoint.delete()`
### AdminModel + Relations
To make relations to other models work, we have to define them in the Model. There are two classes to make this work:
`endpoints.relations.ForeignRelation` and `endpoints.relations.ManyRelation`.
- `ForeignRelation[class]` is used when we get the id of the related object in the api response.
- `class`: Class of the related model
- `ManyRelation[class]` is used for the reverse relation. We don't get ids in the api response, but it can be used through
relation links.
- `class`: Class of the related model
Example (Customer):
```python
from pydantic import Field
from shopware_api_client.base import AdminModel
from shopware_api_client.endpoints.base_fields import IdField
from shopware_api_client.endpoints.relations import ForeignRelation, ManyRelation
from shopware_api_client.models.customer import Customer as CustomerBase
"""
// shopware_api_client.models.customer.Customer:
class Customer(ApiModelBase):
# we have an id so we can create a ForeignRelation to it
default_billing_address_id: IdField
"""
# final model containing relations for admin api. Must be AdminModel
class Customer(CustomerBase, AdminModel["CustomerEndpoint"]):
default_billing_address: ForeignRelation["CustomerAddress"]
# We don't have a field for all addresses of a customer, but there is a relation for it!
addresses: ManyRelation["CustomerAddress"]
# model relation classes have to be imported at the end. pydantic needs the full import (not just TYPE_CHECKING)
# and this saves us from circular imports
from shopware_api_client.endpoints.admin import CustomerAddress # noqa: E402
```
## Development
### Testing
You can use `poetry build` and `poetry run pip install -e .` to install the current source.
Then run `poetry run pytest .` to execute the tests.
### Model Creation
Shopware provides API-definitions for their whole API. You can download it from `<shopurl>/api/_info/openapi3.json`
Then you can use tools like `datamodel-code-generator`
```
datamodel-codegen --input openapi3.json --output model_openapi3.py --snake-case-field --use-double-quotes --output-model-type=pydantic_v2.BaseModel --use-standard-collections --use-union-operator
```
The file may look confusing at first, but you can search for the endpoint name + JsonApi (for example, `class CustomerJsonApi`)
to get all returned fields plus the relationships class as an overview of the available relations. The models will
still need some modifications, but it's a good start.
Not all fields returned by the API are writable, and the API will throw an error when you try to set them. These fields
must have an `exclude=True` in their definition. To find out which fields need to be excluded, check the Shopware
endpoint documentation at https://shopware.stoplight.io/. Go to the endpoint your model belongs to and check the
available POST fields.
The newly created Model and its Endpoint must then be imported in `admin/__init__.py` or `store/__init__.py`. The Model must be added to `__all__`,
and the Endpoint must be added to the Endpoints class. The `__all__` statement is necessary so they
don't get cleaned away as unused imports by code formatters/cleaners.
We need to import all related models at the **end** of the file. If we don't add them, Pydantic fails to build the model. If we add them before
our model definition, we run into circular imports.
Every model has a base model that lives in `models/<model_file>.py`. This base model only contains direct fields (no relations), which are used by
both endpoints (store & admin). These base models are extended in the endpoints to add relations (`endpoints/admin/core/<model_file>.py` & `endpoints/store/core/<model_file>.py`).
### Structure
```
> endpoints -- All endpoints live here
> admin -- AdminAPI endpoints
> core -- AdminAPI > Core
customer_address.py -- Every Endpoint has its own file. Model and Endpoint are defined here
> commercial -- AdminAPI > Commercial
> digital_sales_rooms -- AdminAPI > Digital Sales Rooms
> store -- StoreAPI
> core -- StoreAPI > Core
> commercial -- StoreAPI > Commercial
> digital_sales_rooms -- StoreAPI > Digital Sales Rooms
> models -- base models to be extended in admin/store endpoints (shopware => Entity)
> structs -- data structures that do not have a model (shopware => Struct)
base.py -- All the Base Classes (nearly)
fieldsets.py -- FieldSetBase has its own file to prevent pydantic problems
client.py -- Clients & Registry
config.py -- Configs
exceptions.py -- Exceptions
logging.py -- Logging
tests.py -- tests
```
| text/markdown | GWS Gesellschaft für Warenwirtschafts-Systeme mbH | ebusiness@gws.ms | null | null | MIT | shopware, api, client | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"httpx<0.29,>=0.28",
"httpx-auth<0.24,>=0.23",
"pydantic<3.0,>=2.6",
"pytest-random-order<2.0.0,>=1.1.1",
"redis<7.0,>5.0; extra == \"redis\""
] | [] | [] | [] | [
"Bugtracker, https://github.com/GWS-mbH/shopware-api-client/issues",
"Changelog, https://github.com/GWS-mbH/shopware-api-client",
"Documentation, https://github.com/GWS-mbH/shopware-api-client/wiki",
"Homepage, https://github.com/GWS-mbH",
"Repository, https://github.com/GWS-mbH/shopware-api-client"
] | poetry/2.3.2 CPython/3.12.12 Linux/6.14.0-1017-azure | 2026-02-18T14:03:19.524070 | shopware_api_client-1.1.6-py3-none-any.whl | 164,797 | ea/fc/e1ff742c2c0c1f1257875afb9254dc6e904f40985f97eeeebc1ffc2338da/shopware_api_client-1.1.6-py3-none-any.whl | py3 | bdist_wheel | null | false | 35e020133bbdb979ec0f88d92548a2ed | 82b0501096d52c139ea5b94c526a1f978306bea777573d37e1602f5b00a7489d | eafce1ff742c2c0c1f1257875afb9254dc6e904f40985f97eeeebc1ffc2338da | null | [
"LICENSE"
] | 401 |
2.4 | mcp-fuzzer | 0.3.0 | MCP server fuzzer client and utilities | # MCP Server Fuzzer
<div align="center">
<img src="icon.png" alt="MCP Server Fuzzer Icon" width="100" height="100">
**A comprehensive, aggressive CLI-based fuzzing tool for MCP servers**
*Multi-protocol support • Two-phase fuzzing • Built-in safety • Rich reporting • Async runtime and async fuzzing of MCP tools*
[](https://github.com/Agent-Hellboy/mcp-server-fuzzer/actions/workflows/lint.yml)
[](https://codecov.io/gh/Agent-Hellboy/mcp-server-fuzzer)
[](https://pypi.org/project/mcp-fuzzer/)
[](https://pepy.tech/projects/mcp-fuzzer)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[Documentation](https://agent-hellboy.github.io/mcp-server-fuzzer/) • [Quick Start](#quick-start) • [Examples](#examples) • [Configuration](#configuration)
</div>
---
## What is MCP Server Fuzzer?
MCP Server Fuzzer is a comprehensive fuzzing tool designed specifically for testing [Model Context Protocol (MCP)](https://github.com/modelcontextprotocol/modelcontextprotocol) servers. It supports both tool argument fuzzing and protocol type fuzzing across multiple transport protocols.
### Key Promise
If your server conforms to the [MCP schema](https://github.com/modelcontextprotocol/modelcontextprotocol/tree/main/schema), this tool will fuzz it effectively and safely.
### Why Choose MCP Server Fuzzer?
- Safety First: Built-in safety system prevents dangerous operations
- High Performance: Asynchronous execution with configurable concurrency
- Beautiful Output: Rich, colorized terminal output with detailed reporting
- Flexible Configuration: CLI args, YAML configs, environment variables
- Comprehensive Reporting: Multiple output formats (JSON, CSV, HTML, Markdown)
- Production Ready: PATH shims, sandbox defaults, and CI-friendly controls
- Intelligent Testing: Hypothesis-based data generation with custom strategies
- More Than Conformance: Goes beyond the checks in [modelcontextprotocol/conformance](https://github.com/modelcontextprotocol/conformance) with fuzzing, reporting, and safety tooling
### Fuzzing Paradigms
MCP Server Fuzzer combines:
- Grammar/protocol-based fuzzing (schema-driven MCP request generation)
- Black-box fuzzing (no instrumentation; feedback from responses/spec checks)
It does **not** use instrumentation-based fuzzing (no coverage or binary/source instrumentation).
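As a rough sketch of what schema-driven, black-box request generation looks like (illustrative only; the tool's real generators are Hypothesis-based and the `TOOL_SCHEMA`, `random_value`, and `make_request` names here are hypothetical):

```python
import random
import string

# Hypothetical tool description in the JSON Schema subset MCP uses
# for tool input schemas.
TOOL_SCHEMA = {
    "name": "read_file",
    "inputSchema": {
        "type": "object",
        "properties": {
            "path": {"type": "string"},
            "max_bytes": {"type": "integer"},
        },
    },
}


def random_value(prop_schema: dict, rng: random.Random):
    # Map each declared JSON Schema type to a crude random generator.
    if prop_schema.get("type") == "integer":
        return rng.randint(-2**31, 2**31 - 1)
    return "".join(rng.choices(string.printable, k=rng.randint(0, 32)))


def make_request(schema: dict, rng: random.Random, request_id: int = 1) -> dict:
    # Build a JSON-RPC 2.0 "tools/call" request with fuzzed arguments
    # derived from the tool's declared input schema.
    props = schema["inputSchema"]["properties"]
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": schema["name"],
            "arguments": {k: random_value(v, rng) for k, v in props.items()},
        },
    }


req = make_request(TOOL_SCHEMA, random.Random(0))
```

Because generation is driven by the schema rather than by instrumentation feedback, every produced request is structurally valid JSON-RPC while the argument values probe the server's input handling.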
### Basic Fuzzer Flow
```mermaid
flowchart TB
subgraph CLI["CLI + Config"]
A1[parse_arguments]
A2[ValidationManager]
A3[build_cli_config]
A4[ClientSettings]
A1 --> A2 --> A3 --> A4
end
subgraph Runtime["Runtime Orchestration"]
B1[run_with_retry_on_interrupt]
B2[unified_client_main]
B3[RunPlan + Commands]
B4[ClientExecutionPipeline]
B1 --> B2 --> B3 --> B4
end
subgraph Transport["Transport Layer"]
C1[DriverCatalog + build_driver]
C2[TransportDriver]
C3[HttpDriver / SseDriver / StdioDriver / StreamHttpDriver]
C4[JsonRpcAdapter]
C5["RetryingTransport (optional)"]
C1 --> C2 --> C3
C3 --> C4
C3 --> C5
end
subgraph Clients["Client Layer"]
D1[MCPFuzzerClient]
D2[ToolClient]
D3[ProtocolClient]
D1 --> D2
D1 --> D3
end
subgraph Mutators["Mutators + Strategies"]
E1[ToolMutator]
E2[ProtocolMutator]
E3[ToolStrategies / ProtocolStrategies]
E4[SeedPool + mutate_seed_payload]
E1 --> E3
E2 --> E3
E1 --> E4
E2 --> E4
end
subgraph Execution["Execution + Concurrency"]
F1[AsyncFuzzExecutor]
F2[ToolExecutor]
F3[ProtocolExecutor]
F4[ResultBuilder]
F1 --> F2
F1 --> F3
F2 --> F4
F3 --> F4
end
subgraph Safety["Safety System"]
G1[SafetyFilter + DangerDetector]
G2[Filesystem Sandbox]
G3[System Command Blocker]
G4[Network Policy]
G1 --> G2
G1 --> G3
G1 --> G4
end
subgraph RuntimeMgr["Process Runtime"]
H1[ProcessManager]
H2[ProcessWatchdog]
H3[SignalDispatcher]
H4[ProcessSupervisor]
H1 --> H2
H1 --> H3
H4 --> H1
end
subgraph Reporting["Reporting + Output"]
I1[FuzzerReporter]
I2[FormatterRegistry]
I3[OutputProtocol + OutputManager]
I4[Console/JSON/CSV/XML/HTML/MD Formatters]
I1 --> I2 --> I4
I1 --> I3
end
A4 --> B1
B4 --> D1
C1 --> D1
D2 --> E1
D3 --> E2
E1 --> F2
E2 --> F3
D1 --> G1
C3 --> H4
D1 --> I1
```
### Extensibility for Contributors
MCP Server Fuzzer is designed for easy extension while keeping CLI usage simple:
- **Custom Transports**: Add support for new protocols via config or self-registration (see [docs/transport/custom-transports.md](docs/transport/custom-transports.md)).
- **Pluggable Safety**: Swap safety providers for custom filtering rules.
- **Injectable Components**: Advanced users can inject custom clients/reporters for testing or plugins.
The modularity improvements (dependency injection, registries) make it maintainer-friendly without complicating the core CLI experience.
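A minimal sketch of the self-registration pattern described above (illustrative only; the actual extension points and class interfaces are documented in docs/transport/custom-transports.md, and the `TRANSPORT_REGISTRY`, `register_transport`, and `WebSocketTransport` names here are hypothetical):

```python
# Hypothetical registry mapping protocol names to driver classes.
TRANSPORT_REGISTRY: dict[str, type] = {}


def register_transport(name: str):
    # Decorator that self-registers a transport driver under a protocol name.
    def decorator(cls):
        TRANSPORT_REGISTRY[name] = cls
        return cls
    return decorator


@register_transport("websocket")
class WebSocketTransport:
    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    async def send_request(self, payload: dict) -> dict:
        # A real driver would serialize `payload` as JSON-RPC and await
        # the server's response over the open connection.
        raise NotImplementedError


def build_driver(name: str, endpoint: str):
    # Look up the registered class and instantiate it.
    return TRANSPORT_REGISTRY[name](endpoint)


driver = build_driver("websocket", "ws://localhost:9000")
```

The registry keeps the CLI surface unchanged: a new `--protocol` value only needs a registered driver class, not changes to the core pipeline.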
## Quick Start
### Installation
Requires Python 3.10+ (editable installs from source also need a modern `pip`).
```bash
# Install from PyPI
pip install mcp-fuzzer
# Or install from source (includes MCP spec submodule)
git clone --recursive https://github.com/Agent-Hellboy/mcp-server-fuzzer.git
cd mcp-server-fuzzer
# If you already cloned without submodules, run:
git submodule update --init --recursive
pip install -e .
```
### Docker Installation
The easiest way to use MCP Server Fuzzer is via Docker:
```bash
# Build the Docker image
docker build -t mcp-fuzzer:latest .
# Or pull the published image
# docker pull princekrroshan01/mcp-fuzzer:latest
```
The container ships with `mcp-fuzzer` as the entrypoint, so you pass CLI args
after the image name. Use `/output` for reports and mount any server/config
inputs you need.
```bash
# Show CLI help
docker run --rm mcp-fuzzer:latest --help
# Example: store reports on the host
docker run --rm -v $(pwd)/reports:/output mcp-fuzzer:latest \
--mode tools --protocol http --endpoint http://localhost:8000 \
--output-dir /output
```
Required mounts (stdio/config workflows):
- `/output`: writeable reports directory
- `/servers`: read-only server code/executables for stdio
- `/config`: read-only config directory
### Basic Usage
1. **Set up your MCP server** (HTTP, SSE, or Stdio)
2. **Run basic fuzzing:**
**Using Docker:**
```bash
# Fuzz HTTP server (container acts as client)
docker run --rm -it --network host \
-v $(pwd)/reports:/output \
mcp-fuzzer:latest \
--mode tools --protocol http --endpoint http://localhost:8000
# Fuzz stdio server (server runs in containerized environment)
docker run --rm -it \
-v $(pwd)/servers:/servers:ro \
-v $(pwd)/reports:/output \
mcp-fuzzer:latest \
--mode tools --protocol stdio --endpoint "node /servers/my-server.js stdio"
```
**Using Local Installation:**
```bash
# Fuzz tools on an HTTP server
mcp-fuzzer --mode tools --protocol http --endpoint http://localhost:8000
# Fuzz protocol types on an SSE server
mcp-fuzzer --mode protocol --protocol-type InitializeRequest --protocol sse --endpoint http://localhost:8000/sse
```
### Advanced Usage
```bash
# Two-phase fuzzing (realistic + aggressive)
mcp-fuzzer --mode all --phase both --protocol http --endpoint http://localhost:8000
# With safety system enabled
mcp-fuzzer --mode tools --enable-safety-system --safety-report
# Export results to multiple formats
mcp-fuzzer --mode tools --export-csv results.csv --export-html results.html
# Use configuration file
mcp-fuzzer --config my-config.yaml
```
## Examples
### HTTP Server Fuzzing
```bash
# Basic HTTP fuzzing
mcp-fuzzer --mode tools --protocol http --endpoint http://localhost:8000 --runs 50
# With authentication
mcp-fuzzer --mode tools --protocol http --endpoint https://api.example.com \
--auth-config auth.json --runs 100
```
### SSE Server Fuzzing
```bash
# SSE protocol fuzzing
mcp-fuzzer --mode protocol --protocol-type InitializeRequest --protocol sse --endpoint http://localhost:8080/sse \
--runs-per-type 25 --verbose
```
### Stdio Server Fuzzing
**Using Docker (Recommended for Isolation):**
```bash
# Server runs in containerized environment for safety
docker run --rm -it \
-v $(pwd)/servers:/servers:ro \
-v $(pwd)/reports:/output \
mcp-fuzzer:latest \
--mode tools --protocol stdio --endpoint "python /servers/my_server.py" \
--enable-safety-system --fs-root /tmp/safe \
--output-dir /output
# Using docker-compose (easier configuration)
docker-compose run --rm fuzzer \
--mode tools --protocol stdio --endpoint "node /servers/my-server.js stdio" \
--runs 50 --output-dir /output
```
**Using Local Installation:**
```bash
# Local server testing
mcp-fuzzer --mode tools --protocol stdio --endpoint "python my_server.py" \
--enable-safety-system --fs-root /tmp/safe
```
### Configuration File Usage
```yaml
# config.yaml
mode: tools
protocol: stdio
endpoint: "python dev_server.py"
runs: 10
phase: realistic
# Optional output configuration
output:
directory: "reports"
format: "json"
types:
- "fuzzing_results"
- "safety_summary"
```
```bash
mcp-fuzzer --config config.yaml
```
## Docker Usage
MCP Server Fuzzer can be run in a Docker container, providing isolation and easy deployment. This is especially useful for:
- **Stdio Servers**: Run servers in a containerized environment for better isolation and safety
- **HTTP/SSE Servers**: Container acts as the MCP client (server can run anywhere)
- **CI/CD Pipelines**: Consistent testing environment across different systems
### Quick Start with Docker
```bash
# Build the image
docker build -t mcp-fuzzer:latest .
# Fuzz HTTP server (server can be on host or remote)
docker run --rm -it --network host \
-v $(pwd)/reports:/output \
mcp-fuzzer:latest \
--mode tools --protocol http --endpoint http://localhost:8000 --output-dir /output
# Fuzz stdio server (server code mounted from host)
docker run --rm -it \
-v $(pwd)/servers:/servers:ro \
-v $(pwd)/reports:/output \
mcp-fuzzer:latest \
--mode tools --protocol stdio --endpoint "python /servers/my_server.py" --output-dir /output
```
### Docker Releases
Docker images are published automatically on every GitHub Release (tagged `v*`)
via CI. The published image is:
```bash
docker pull princekrroshan01/mcp-fuzzer:latest
```
Note: The runtime image includes `curl` and `ca-certificates` so stdio servers can fetch HTTPS resources (e.g., schemas, tokens, metadata) without bundling extra tools. If your servers never make outbound HTTPS calls, you can remove these packages from the image.
### Using Docker Compose
For easier configuration and management, use `docker-compose.yml`:
```bash
# Set environment variables (optional)
export SERVER_PATH=./servers
export CONFIG_PATH=./examples/config
export MCP_SPEC_SCHEMA_VERSION=2025-06-18
# Run fuzzing (stdio server)
docker-compose run --rm fuzzer \
--mode tools \
--protocol stdio \
--endpoint "node /servers/my-server.js stdio" \
--runs 50 \
--output-dir /output
# For HTTP servers (macOS/Windows - uses host.docker.internal)
docker-compose run --rm fuzzer \
--mode tools \
--protocol http \
--endpoint http://host.docker.internal:8000 \
--runs 50 \
--output-dir /output
# For HTTP servers on Linux (use host network)
docker-compose -f docker-compose.host-network.yml run --rm fuzzer \
--mode tools \
--protocol http \
--endpoint http://localhost:8000 \
--runs 50 \
--output-dir /output
# Production-style (no TTY/stdin)
docker-compose -f docker-compose.prod.yml run --rm fuzzer \
--mode tools \
--protocol stdio \
--endpoint "node /servers/my-server.js stdio" \
--runs 50 \
--output-dir /output
```
### Docker Volume Mounts
- **`/output`**: Mount your reports directory here (e.g., `-v $(pwd)/reports:/output`)
- **`/servers`**: Mount server code/executables for stdio servers (read-only recommended)
- **`/config`**: Mount custom configuration files if needed
### Network Configuration
- **HTTP/SSE Servers**: Network access required. Linux: prefer `--network host` so `localhost` works. Docker Desktop (macOS/Windows): use `host.docker.internal` since host networking is limited. If neither works, use the host IP.
- **Stdio Servers**: No network needed; the server runs as a subprocess inside the container
### Example: Fuzzing a Node.js Stdio Server
```bash
# 1. Prepare your server
mkdir -p servers
cp my-mcp-server.js servers/
# 2. Run fuzzer in Docker
docker run --rm -it \
-v $(pwd)/servers:/servers:ro \
-v $(pwd)/reports:/output \
mcp-fuzzer:latest \
--mode all \
--protocol stdio \
--endpoint "node /servers/my-mcp-server.js stdio" \
--runs 100 \
--enable-safety-system \
--output-dir /output \
--export-json /output/results.json
```
### Example: Fuzzing an HTTP Server
```bash
# Server runs on host at localhost:8000
# Container connects to it as client
docker run --rm -it --network host \
-v $(pwd)/reports:/output \
mcp-fuzzer:latest \
--mode tools \
--protocol http \
--endpoint http://localhost:8000 \
--runs 50 \
--output-dir /output
```
### Security Considerations
- The Docker container runs as non-root user (UID 1000) for improved security
- Stdio servers run in isolated container environment
- Use read-only mounts (`:ro`) for server code when possible
- Reports are written to mounted volume, not inside container
## Configuration
### Configuration Methods (in order of precedence)
1. **Command-line arguments** (highest precedence)
2. **Configuration files** (YAML)
3. **Environment variables** (lowest precedence)
### Environment Variables
```bash
# Core settings
export MCP_FUZZER_TIMEOUT=60.0
export MCP_FUZZER_LOG_LEVEL=DEBUG
# Safety settings
export MCP_FUZZER_SAFETY_ENABLED=true
export MCP_FUZZER_FS_ROOT=/tmp/safe
# Authentication
export MCP_API_KEY="your-api-key"
export MCP_USERNAME="your-username"
export MCP_PASSWORD="your-password"
```
### Performance Tuning
```bash
# High concurrency for fast networks
mcp-fuzzer --process-max-concurrency 20 --watchdog-check-interval 0.5
# Conservative settings for slow/unreliable servers
mcp-fuzzer --timeout 120 --process-retry-count 5 --process-retry-delay 2.0
```
## Key Features
| Feature | Description |
|---------|-------------|
| Two-Phase Fuzzing | Realistic testing + aggressive security testing |
| Multi-Protocol Support | HTTP, SSE, Stdio, and StreamableHTTP transports |
| Built-in Safety | Pattern-based filtering, sandboxing, and PATH shims |
| Intelligent Testing | Hypothesis-based data generation with custom strategies |
| Rich Reporting | Detailed output with exception tracking and safety reports |
| Multiple Output Formats | JSON, CSV, HTML, Markdown, and XML export options |
| Flexible Configuration | CLI args, YAML configs, environment variables |
| Asynchronous Execution | Efficient concurrent fuzzing with configurable limits |
| Comprehensive Monitoring | Process watchdog, timeout handling, and resource management |
| Authentication Support | API keys, basic auth, OAuth, and custom providers |
| Performance Metrics | Built-in benchmarking and performance analysis |
| Schema Validation | Automatic MCP protocol compliance checking |
### Performance
- Concurrent Operations: Up to 20 simultaneous fuzzing tasks
- Memory Efficient: Streaming responses and configurable resource limits
- Fast Execution: Optimized async I/O and connection pooling
- Scalable: Configurable timeouts and retry mechanisms
## Architecture
The system is built with a modular architecture:
- **CLI Layer**: User interface and argument handling
- **Transport Layer**: Protocol abstraction (HTTP/SSE/Stdio)
- **Fuzzing Engine**: Test orchestration and execution
- **Strategy System**: Data generation (realistic + aggressive)
- **Safety System**: Core filter + SystemBlocker PATH shim; safe mock responses
- **Runtime**: Fully async ProcessManager + ProcessWatchdog
- **Authentication**: Multiple auth provider support
- **Reporting**: FuzzerReporter, Console/JSON/Text formatters, SafetyReporter
### Runtime Watchdog Overview
The watchdog supervises processes registered through `ProcessManager`, combining hang detection, signal dispatch, and registry-driven cleanup. For a deeper dive into lifecycle events, custom signal strategies, and registry wiring, see the [runtime management guide](docs/components/runtime-management.md).
### Understanding the Design Patterns
For developers (beginners to intermediate) who want to understand the design patterns used throughout the codebase, please refer to our comprehensive [Design Pattern Review](docs/design-pattern-review.md). This document provides:
- Module-by-module pattern analysis
- Design pattern fit scores and recommendations
- Modularity observations and improvement suggestions
- Complete pattern map for every module in the codebase
This is especially helpful if you're:
- Learning about design patterns in real-world applications
- Planning to contribute to the project
- Wanting to understand the architectural decisions
- Looking for areas to improve or extend
## Troubleshooting
### Common Issues
**Connection Timeout**
```bash
# Increase timeout for slow servers
mcp-fuzzer --timeout 120 --endpoint http://slow-server.com
```
**Authentication Errors**
```bash
# Check auth configuration
mcp-fuzzer --check-env
mcp-fuzzer --validate-config config.yaml
```
**Memory Issues**
```bash
# Reduce concurrency for memory-constrained environments
mcp-fuzzer --process-max-concurrency 2 --runs 25
```
**Permission Errors**
```bash
# Run with appropriate permissions or use safety system
mcp-fuzzer --enable-safety-system --fs-root /tmp/safe
```
### Debug Mode
```bash
# Enable verbose logging
mcp-fuzzer --verbose --log-level DEBUG
# Check environment
mcp-fuzzer --check-env
```
## Community & Support
- Documentation: [Full Documentation](https://agent-hellboy.github.io/mcp-server-fuzzer/)
- Issues: [GitHub Issues](https://github.com/Agent-Hellboy/mcp-server-fuzzer/issues)
- Discussions: [GitHub Discussions](https://github.com/Agent-Hellboy/mcp-server-fuzzer/discussions)
### Contributing
We welcome contributions! Please see our [Contributing Guide](https://agent-hellboy.github.io/mcp-server-fuzzer/development/contributing/) for details.
**Quick Start for Contributors:**
```bash
git clone --recursive https://github.com/Agent-Hellboy/mcp-server-fuzzer.git
cd mcp-server-fuzzer
# If you already cloned without submodules, run:
git submodule update --init --recursive
pip install -e .[dev]
pytest tests/
```
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Disclaimer
This tool is designed for testing and security research purposes only.
- Always use in controlled environments
- Ensure you have explicit permission to test target systems
- The safety system provides protection but should not be relied upon as the sole security measure
- Use at your own risk
## Funding & Support
If you find this project helpful, please consider supporting its development:
[](https://github.com/sponsors/Agent-Hellboy)
**Ways to support:**
- ⭐ **Star the repository** - helps others discover the project
- 🐛 **Report issues** - help improve the tool
- 💡 **Suggest features** - contribute ideas for new functionality
- 💰 **Sponsor on GitHub** - directly support ongoing development
- 📖 **Share the documentation** - help others learn about MCP fuzzing
Your support helps maintain and improve this tool for the MCP community!
---
<div align="center">
Made with love for the MCP community
[Star us on GitHub](https://github.com/Agent-Hellboy/mcp-server-fuzzer) • [Read the Docs](https://agent-hellboy.github.io/mcp-server-fuzzer/)
</div>
| text/markdown | null | Prince Roshan <princekrroshan01@gmail.com> | null | null | MIT License
Copyright (c) 2025 Prince Roshan
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| mcp, fuzzing, testing, json-rpc | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx",
"hypothesis",
"jsonschema>=4.25.1",
"rich",
"pyyaml>=6.0",
"psutil>=5.9.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == ... | [] | [] | [] | [
"Homepage, https://github.com/agent-hellboy/mcp-server-fuzzer",
"Repository, https://github.com/agent-hellboy/mcp-server-fuzzer",
"Issues, https://github.com/agent-hellboy/mcp-server-fuzzer/issues",
"Documentation, https://agent-hellboy.github.io/mcp-server-fuzzer"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T14:02:29.370281 | mcp_fuzzer-0.3.0.tar.gz | 385,315 | 4a/28/4c40f2625b46492bcfeec42dd95282b3705d7a0770f196c049ebfa475608/mcp_fuzzer-0.3.0.tar.gz | source | sdist | null | false | 627064039c1ee0459f1f210479ce0e5f | 40a408d4def966d82fd92f99f8b6aca03833ab45116890869181d57f786c136b | 4a284c40f2625b46492bcfeec42dd95282b3705d7a0770f196c049ebfa475608 | null | [
"LICENSE"
] | 264 |
2.4 | emx-onnx-cgen | 0.5.0 | emmtrix ONNX-to-C Code Generator | # emmtrix ONNX-to-C Code Generator (emx-onnx-cgen)
<p align="center"><img width="50%" src="https://raw.githubusercontent.com/emmtrix/emx-onnx-cgen/main/logo.png" /></p>
[](https://pypi.org/project/emx-onnx-cgen)
[](https://github.com/emmtrix/emx-onnx-cgen/actions/workflows/tests.yml)
`emx-onnx-cgen` compiles ONNX models to portable, deterministic C code for deeply embedded systems. The generated code is designed to run without dynamic memory allocation, operating-system services, or external runtimes, making it suitable for safety-critical and resource-constrained targets.
Key characteristics:
- **No dynamic memory allocation** (`malloc`, `free`, heap usage)
- **Static, compile-time known memory layout** for parameters, activations, and temporaries
- **Deterministic control flow** (explicit loops, no hidden dispatch or callbacks)
- **No OS dependencies**, using only standard C headers (for example, `stdint.h` and `stddef.h`)
- **Single-threaded execution model**
- **Bitwise-stable code generation** for reproducible builds
- **Readable, auditable C code** suitable for certification and code reviews
- **Generated C output format spec:** [`docs/output-format.md`](https://github.com/emmtrix/emx-onnx-cgen/blob/v0.5.0/docs/output-format.md)
- Designed for **bare-metal and RTOS-based systems**
For PyTorch models, see the related project [`emx-pytorch-cgen`](https://github.com/emmtrix/emx-pytorch-cgen).
## Goals
- Correctness-first compilation with outputs comparable to ONNX Runtime.
- Deterministic and reproducible C code generation.
- Clean, pass-based compiler architecture (import → normalize → optimize → lower → emit).
- Minimal C runtime with explicit, predictable data movement.
## Non-goals
- Aggressive performance optimizations in generated C.
- Implicit runtime dependencies or dynamic loading.
- Training/backpropagation support.
## Features
- CLI for ONNX-to-C compilation and verification.
- Deterministic codegen with explicit tensor shapes and loop nests.
- Minimal C runtime templates in `src/emx_onnx_cgen/templates/`.
- ONNX Runtime comparison for end-to-end validation.
- Official ONNX operator coverage tracking.
- Support for a wide range of ONNX operators (see [`SUPPORT_OPS.md`](https://github.com/emmtrix/emx-onnx-cgen/blob/v0.5.0/SUPPORT_OPS.md)).
- Supported data types:
- `bfloat16`, `float16`, `float`, `double`
- `int8`, `uint8`, `int16`, `uint16`, `int32`, `uint32`, `int64`, `uint64`
- `bool`
- `string` (fixed-size `'\0'`-terminated C strings; see [`docs/output-format.md`](https://github.com/emmtrix/emx-onnx-cgen/blob/v0.5.0/docs/output-format.md))
- `optional(<tensor type>)` (optional tensors represented via an extra `_Bool <name>_present` flag; see [`docs/output-format.md`](https://github.com/emmtrix/emx-onnx-cgen/blob/v0.5.0/docs/output-format.md))
- Optional support for dynamic dimensions using C99 variable-length arrays (VLAs), when the target compiler supports them.
## Usage Scenarios
### 1. Fully Embedded, Standalone C Firmware
The generated C code can be embedded directly into a bare-metal C firmware or application where **all model weights and parameters are compiled into the C source**.
Typical characteristics:
* No file system or OS required.
* All weights stored as `static const` arrays in flash/ROM.
* Deterministic memory usage with no runtime allocation.
* Suitable for:
* Microcontrollers
* Safety-critical firmware
* Systems with strict certification requirements
This scenario is enabled via `--large-weight-threshold 0`, forcing all weights to be embedded directly into the generated C code.
### 2. Embedded or Host C/C++ Application with External Weights
The generated C code can be embedded into C or C++ applications where **large model weights are stored externally and loaded from a binary file at runtime**.
Typical characteristics:
* Code and control logic compiled into the application.
* Large constant tensors packed into a separate `.bin` file.
* Explicit, generated loader functions handle weight initialization.
* Suitable for:
* Embedded Linux or RTOS systems
* Applications with limited flash but available external storage
* Larger models where code size must be minimized
This scenario is enabled automatically once the cumulative weight size exceeds `--large-weight-threshold` (default: 102400 bytes).
### 3. Target-Optimized Code Generation via emmtrix Source-to-Source Tooling
In both of the above scenarios, the generated C code can serve as **input to emmtrix source-to-source compilation and optimization tools**, enabling target-specific optimizations while preserving functional correctness.
Examples of applied transformations include:
* Kernel fusion and loop restructuring
* Memory layout optimization and buffer reuse
* Reduction of internal temporary memory
* Utilization of SIMD / vector instruction sets
* Offloading of large weights to external memory
* Dynamic loading of weights or activations via DMA
This workflow allows a clear separation between:
* **Correctness-first, deterministic ONNX lowering**, and
* **Target-specific performance and memory optimization**,
while keeping the generated C code readable, auditable, and traceable.
The generated C code is intentionally structured to make such transformations explicit and analyzable, rather than relying on opaque backend-specific code generation.
## Installation
Install the package directly from PyPI (recommended):
```bash
pip install emx-onnx-cgen
```
Required at runtime (both `compile` and `verify`):
- `onnx`
- `numpy`
- `jinja2`
Optional for verification and tests:
- `onnxruntime`
- A C compiler (`cc`, `gcc`, `clang` or via `--cc`)
## Quickstart
Compile an ONNX model into a C source file:
```bash
emx-onnx-cgen compile path/to/model.onnx build/model.c
```
Verify an ONNX model end-to-end against ONNX Runtime (default):
```bash
emx-onnx-cgen verify path/to/model.onnx
```
## CLI Reference
`emx-onnx-cgen` provides two subcommands: `compile` and `verify`.
### Common options
These options are accepted by both `compile` and `verify`:
- `--model-base-dir`: Base directory for resolving the model path (and related paths).
- `--color`: Colorize CLI output (`auto`, `always`, `never`; default: `auto`).
- `--large-weight-threshold`: Store weights in a binary file once the cumulative byte size exceeds this threshold (default: `102400`; set to `0` to disable).
- `--large-temp-threshold`: Mark temporary buffers larger than this threshold as static (default: `1024`).
- `--fp32-accumulation-strategy`: Accumulation strategy for float32 inputs (`simple` uses float32, `fp64` uses double; default: `fp64`).
- `--fp16-accumulation-strategy`: Accumulation strategy for float16 inputs (`simple` uses float16, `fp32` uses float; default: `fp32`).
### `compile`
```bash
emx-onnx-cgen compile <model.onnx> <output.c> [options]
```
Options:
- `--model-name`: Override the generated model name (default: output file stem).
- `--emit-testbench`: Emit a JSON-producing `main()` testbench for validation.
- `--emit-data-file`: Emit constant data arrays into a companion `_data` C file.
- `--no-restrict-arrays`: Disable `restrict` qualifiers on generated array parameters.
### `verify`
```bash
emx-onnx-cgen verify <model.onnx> [options]
```
Options:
- `--cc`: Explicit C compiler command for building the testbench binary.
- `--max-ulp`: Maximum allowed ULP distance for floating outputs (default: `100`).
- `--atol-eps`: Absolute tolerance as a multiple of machine epsilon for floating outputs (default: `1.0`).
- `--runtime`: Runtime backend for verification (`onnxruntime` or `onnx-reference`, default: `onnxruntime`).
- `--temp-dir-root`: Root directory in which to create a temporary verification directory (default: system temp dir).
- `--temp-dir`: Exact directory to use for temporary verification files (default: create a temporary directory).
- `--keep-temp-dir`: Keep the temporary verification directory instead of deleting it.
How verification works:
1. **Compile with a testbench**: the compiler is invoked with `--emit-testbench`,
generating a C program that runs the model and prints inputs/outputs as JSON.
2. **Build and execute**: the testbench is compiled with the selected C compiler
(`--cc`, `CC`, or a detected `cc/gcc/clang`) and executed in a temporary
directory.
3. **Run runtime backend**: the JSON inputs from the testbench are fed to the
selected runtime (`onnxruntime` or `onnx-reference`) using the same model.
The compiler no longer ships a Python runtime evaluator.
4. **Compare outputs**: floating outputs are compared by maximum ULP distance.
Floating-point verification first ignores very small differences up to
**--atol-eps × [machine epsilon](https://en.wikipedia.org/wiki/Machine_epsilon) of
the evaluated floating-point type**, treating such values as equal. For
values with a larger absolute difference, the ULP distance is computed, and
the maximum ULP distance is reported; non-floating outputs must match
exactly.
Missing outputs or mismatches are treated as failures.
5. **ORT unsupported models**: when using `onnxruntime`, if ORT reports
`NOT_IMPLEMENTED`, verification is skipped with a warning (exit code 0).
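The ULP-distance comparison in step 4 can be sketched as follows for float64 values (an illustrative reimplementation, not the compiler's internal code; it reinterprets the IEEE 754 bit pattern so adjacent representable floats differ by exactly 1):

```python
import math
import struct


def float_to_ordered_int(x: float) -> int:
    # Reinterpret the float64 bits as a signed 64-bit integer, then map
    # negative floats onto negative integers so ordering is monotonic
    # and -0.0 coincides with +0.0.
    (bits,) = struct.unpack("<q", struct.pack("<d", x))
    return bits if bits >= 0 else -(bits & 0x7FFFFFFFFFFFFFFF)


def ulp_distance(a: float, b: float) -> int:
    # Number of representable float64 values between a and b.
    return abs(float_to_ordered_int(a) - float_to_ordered_int(b))


# Adjacent representable doubles are exactly 1 ULP apart.
assert ulp_distance(1.0, math.nextafter(1.0, 2.0)) == 1
```

With this metric, `--max-ulp 100` accepts outputs whose bit patterns differ by at most 100 representable values, after the `--atol-eps` absolute-tolerance pre-filter has discarded negligibly small differences.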
## Official ONNX test coverage
See [`ONNX_SUPPORT.md`](https://github.com/emmtrix/emx-onnx-cgen/blob/v0.5.0/ONNX_SUPPORT.md) for the generated support matrix.
See [`SUPPORT_OPS.md`](https://github.com/emmtrix/emx-onnx-cgen/blob/v0.5.0/SUPPORT_OPS.md) for operator-level support derived from the expectation JSON files.
## Related Projects
- **emx-pytorch-cgen**
A PyTorch-to-C compiler following the same design principles as emx-onnx-cgen, but operating directly on PyTorch models instead of ONNX graphs.
https://github.com/emmtrix/emx-pytorch-cgen
- **onnx2c**
An ONNX-to-C code generator with a different design focus and code generation approach.
https://github.com/kraiskil/onnx2c
## Maintained by
This project is maintained by [emmtrix](https://www.emmtrix.com).
| text/markdown | null | null | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2025-present emmtrix Technologies GmbH
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"jinja2",
"numpy",
"onnx",
"onnxruntime; extra == \"verify\""
] | [] | [] | [] | [
"Homepage, https://www.emmtrix.com/wiki/emmtrix_ONNX-to-C_Code_Generator",
"Repository, https://github.com/emmtrix/emx-onnx-cgen",
"Issues, https://github.com/emmtrix/emx-onnx-cgen/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:02:13.016477 | emx_onnx_cgen-0.5.0.tar.gz | 727,287 | a7/ef/1b95f7ff49d4a87141f07a499edba85ea76d17879c457e8bb132e5d2b567/emx_onnx_cgen-0.5.0.tar.gz | source | sdist | null | false | 4929adf4306732bc88060c9ef75f00f7 | b39781b41e8960c79db1c685a0129303418a6a987521e3340c3825212ae331e5 | a7ef1b95f7ff49d4a87141f07a499edba85ea76d17879c457e8bb132e5d2b567 | null | [
"LICENSE"
] | 243 |
2.3 | rasa-pro | 3.15.11 | State-of-the-art open-core Conversational AI framework for Enterprises that natively leverages generative AI for effortless assistant development. | <h1 align="center">Rasa</h1>
<div align="center">
[](https://sonarcloud.io/summary/new_code?id=RasaHQ_rasa)
[](https://rasa.com/docs/docs/pro/intro)

</div>
<hr />
Rasa is a framework for building scalable, dynamic conversational AI assistants that integrate large language models (LLMs) to enable more contextually aware and agentic interactions. Whether you’re new to conversational AI or an experienced developer, Rasa offers enhanced flexibility, control, and performance for mission-critical applications.
**Key Features:**
- **Flows for Business Logic:** Easily define business logic through Flows, a simplified way to describe how your AI assistant should handle conversations. Flows help streamline the development process, focusing on key tasks and reducing the complexity involved in managing conversations.
- **Automatic Conversation Repair:** Ensure seamless interactions by automatically handling interruptions or unexpected inputs. Developers have full control to customize these repairs based on specific use cases.
- **Customizable and Open:** Fully customizable code that allows developers to modify Rasa to meet specific requirements, ensuring flexibility and adaptability to various conversational AI needs.
- **Robustness and Control:** Maintain strict adherence to business logic, preventing unwanted behaviors like prompt injection and hallucinations, leading to more reliable responses and secure interactions.
- **Built-in Security:** Safeguard sensitive data, control access, and ensure secure deployment, essential for production environments that demand high levels of security and compliance. Secrets are managed through Pulumi's built-in secrets management system and can be integrated with HashiCorp Vault for enterprise-grade secret management.
A [free developer license](https://rasa.com/docs/pro/intro/#who-rasa-pro-is-for) is available so you can explore and get to know Rasa. It allows you to take your assistant live in production in a limited capacity. A paid license is required for larger-scale production use, but all code is visible and can be customized as needed.
To get started right now, you can
`pip install rasa-pro`
Check out our
- [Rasa Quickstart](https://rasa.com/docs/learn/quickstart/pro),
- [Conversational AI with Language Models (CALM) conceptual rundown](https://rasa.com/docs/learn/concepts/calm),
- [Rasa tutorial](https://rasa.com/docs/pro/tutorial), and
- [Changelog](https://rasa.com/docs/reference/changelogs/rasa-pro-changelog)
for more. Also feel free to reach out to us on the [Rasa forum](https://forum.rasa.com/).
## Secrets Management
This project uses a multi-layered approach to secrets management:
- **Pulumi Secrets**: Primary secrets management through Pulumi's built-in configuration system (`pulumi.Config()`)
- **Kubernetes Secrets**: Application secrets are stored as Kubernetes secrets in the cluster
- **Vault Integration**: Optional HashiCorp Vault support for enterprise-grade secret management
- **AWS Secrets Manager**: Used selectively for specific services (e.g., database credentials in integration tests)
For infrastructure deployment, secrets are managed through Pulumi configuration files and environment variables, providing secure and flexible secret management across different deployment environments.
| text/markdown | Rasa Technologies GmbH | hi@rasa.com | Tom Bocklisch | tom@rasa.com | null | nlp, machine-learning, machine-learning-library, bot, bots, botkit, rasa conversational-agents, conversational-ai, chatbot, chatbot-framework, bot-framework | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Sof... | [] | null | null | <3.14,>=3.10.0 | [] | [] | [] | [
"CacheControl<0.15.0,>=0.14.2",
"PyJWT[crypto]<3.0.0,>=2.8.0",
"SQLAlchemy<2.1.0,>=2.0.42",
"a2a-sdk<0.4.0,>=0.3.4",
"absl-py<2.4,>=2.3.1",
"aio-pika<9.4.4,>=8.2.3",
"aiogram<3.24.0,>=3.23.0; extra == \"full\" or extra == \"channels\"",
"aiohttp<3.14,>=3.12",
"aioshutil<1.6,>=1.5",
"apscheduler<3.... | [] | [] | [] | [
"Documentation, https://rasa.com/docs",
"Homepage, https://rasa.com",
"Repository, https://github.com/rasahq/rasa"
] | twine/6.1.0 CPython/3.12.9 | 2026-02-18T14:02:10.207722 | rasa_pro-3.15.11.tar.gz | 4,748,212 | eb/07/23460b9a2049a14e78bffaa7d210b3acbbaae57040c4d188343ca924a339/rasa_pro-3.15.11.tar.gz | source | sdist | null | false | 5d4f7f9ff8eccf137ebbd68e91bcbcf1 | 94a7e2a6081a0040c4fbfc470a9640cbaa8e0baa60557bfcdbd3d1d31943a5d6 | eb0723460b9a2049a14e78bffaa7d210b3acbbaae57040c4d188343ca924a339 | null | [] | 452 |
2.4 | remembril-mcp | 0.1.0 | MCP server for Remembril - Change tracking and impact analysis | # Remembril MCP Server
MCP (Model Context Protocol) server for [Remembril](https://remembril.dev) — change impact tracking, autonomous coding loops, and documentation automation for AI coding assistants.
## What is Remembril?
Remembril tracks changes through your entire documentation hierarchy: Requirements, PRFAQ, HLD, LLD, Tests, Compliance, and Runbooks. When something changes, ripple effects propagate automatically. The autonomous coding loop can then implement those changes via Claude Code, with an AI review committee as a quality gate.
## Installation
```bash
pip install remembril-mcp
```
## Quick Start
### 1. Setup for Claude Code
Run the setup script (if you have the repo):
```bash
cd mcp-server
./setup-claude-code.sh
```
Or configure manually in `~/.claude/settings.json`:
```json
{
"mcpServers": {
"remembril": {
"command": "remembril-mcp",
"args": ["serve"],
"env": {
"REMEMBRIL_URL": "http://localhost:7070",
"REMEMBRIL_TOKEN": "your-token-here"
}
}
}
}
```
### 2. Authenticate
```bash
remembril-mcp login --url http://localhost:7070
```
### 3. Create a project and configure it
From Claude Code, just say:
> "Create a Remembril project for this repo and set up the coding loop"
The AI will call `remembril_project_create` and `remembril_project_configure` with your repo path, test command, etc.
Or via CLI:
```bash
remembril-mcp projects # List projects
remembril-mcp set-project YOUR_PROJECT_ID # Set default
```
## Available MCP Tools
### Project Management
| Tool | Description |
|------|-------------|
| `remembril_projects` | List all projects |
| `remembril_project_create` | Create a new project |
| `remembril_project_configure` | Configure repo path, test command, agent review, coding loop |
| `remembril_project_settings` | View current pipeline settings |
### Issues & Ripples
| Tool | Description |
|------|-------------|
| `remembril_issue_create` | Create a new issue |
| `remembril_issue_create_smart` | Create issue with auto-severity |
| `remembril_issue_list` | List issues with filters |
| `remembril_ripples_generate` | Generate ripple effects for an issue |
| `remembril_ripples_queue` | Get pending ripples for review |
| `remembril_ripple_approve` | Approve a ripple |
| `remembril_ripple_reject` | Reject a ripple with reason |
| `remembril_ripple_update` | Modify a ripple's changes |
| `remembril_ripple_apply` | Apply an approved ripple to its document |
| `remembril_ripple_diff` | View the diff a ripple would produce |
| `remembril_ripple_blind_review` | Get ripples with AI reasoning hidden (blind validation) |
### Pipeline & Documents
| Tool | Description |
|------|-------------|
| `remembril_pipeline_run` | Run the full auto-pipeline (issues -> ripples -> docs) |
| `remembril_pipeline_status` | Check pending work for a project |
| `remembril_pipeline_generate_docs` | Export documents to markdown |
| `remembril_docs` | List project documents |
| `remembril_doc_read` | Read a specific document |
| `remembril_doc_traceability` | View cross-document traceability matrix |
### Autonomous Coding Loop
| Tool | Description |
|------|-------------|
| `remembril_loop_start` | Start a coding loop (Issue -> Code -> Test -> Review -> Merge) |
| `remembril_loop_status` | Check loop progress, waves, and droplets |
| `remembril_loop_stop` | Stop a running loop |
### Agent Review (Auror Committee)
| Tool | Description |
|------|-------------|
| `remembril_review_run` | Run an AI review committee on content |
| `remembril_review_status` | Check review status |
| `remembril_review_list` | List reviews for a project |
| `remembril_review_approve_ripples` | Approve ripples from a review |
| `remembril_debate_create` | Start a debate between agents |
| `remembril_debate_status` | Check debate progress |
| `remembril_watcher_alerts` | View anomaly alerts from the watcher |
| `remembril_alert_handle` | Handle a watcher alert |
| `remembril_personas_list` | List available review personas |
### Code Intelligence
| Tool | Description |
|------|-------------|
| `remembril_scan` | Index a codebase for symbol tracking |
| `remembril_impact` | Analyze file change impact |
| `remembril_trace` | Trace requirement to code |
| `remembril_symbols` | Search code symbols |
### Orchestration
| Tool | Description |
|------|-------------|
| `remembril_orchestrate_status` | Check pending work |
| `remembril_orchestrate_start` | Start an orchestration session |
| `remembril_orchestrate_next` | Get next task in session |
| `remembril_orchestrate_complete` | Complete a task |
| `remembril_orchestrate_sessions` | List sessions |
| `remembril_orchestrate_generate_all` | Generate all pending ripples |
## CLI Commands
```bash
# Authentication
remembril-mcp login # Login via browser
remembril-mcp login -t TOKEN # Login with token
remembril-mcp logout # Remove credentials
remembril-mcp status # Check connection
# Projects
remembril-mcp projects # List projects
remembril-mcp set-project ID # Set default project
# Server
remembril-mcp serve # Start MCP server (stdio)
# Issues
remembril-mcp issue "Title" -d "Description"
remembril-mcp issue "Title" --auto-severity
# Ripple Workflow
remembril-mcp run 123 # Generate ripples for issue
remembril-mcp review 123 # Interactive review
remembril-mcp review 123 --blind # Blind validation
remembril-mcp resume 123 # Resume reviewing
remembril-mcp logs # View history
```
## Typical Workflow
```
1. Create project -> remembril_project_create
2. Configure repo + tests -> remembril_project_configure
3. Create issues -> remembril_issue_create
4. Run pipeline -> remembril_pipeline_run
(generates ripples across Requirements, PRFAQ, HLD, LLD, Tests, Compliance, Runbook)
5. Start coding loop -> remembril_loop_start
(Claude Code implements changes, tests run, Auror committee reviews, auto-merge)
6. Check progress -> remembril_loop_status
```
## Environment Variables
| Variable | Description |
|----------|-------------|
| `REMEMBRIL_URL` | API URL (default: https://remembril-production.up.railway.app) |
| `REMEMBRIL_TOKEN` | Authentication token |
| `REMEMBRIL_PROJECT_ID` | Default project ID |
## Configuration
Credentials stored in `~/.remembril/config.json` (600 permissions).
## License
MIT
| text/markdown | null | Remembril <support@remembril.dev> | null | null | null | ai, change-tracking, impact-analysis, mcp, remembril | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0.0",
"httpx>=0.25.0",
"mcp>=1.0.0",
"pydantic>=2.0.0",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://remembril.dev",
"Documentation, https://docs.remembril.dev",
"Repository, https://github.com/remembril/remembril-mcp"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-18T14:01:57.705182 | remembril_mcp-0.1.0.tar.gz | 51,730 | c5/8d/1804984d9ccd522310bbad9cb66a7b387512d386d9422508cf565094f430/remembril_mcp-0.1.0.tar.gz | source | sdist | null | false | a80bc11a91ae6e14c82609b16a58d8d6 | f801a2e4b657aba3f02e8655efb222abe49a6ed3347835464eb58964c85ff75c | c58d1804984d9ccd522310bbad9cb66a7b387512d386d9422508cf565094f430 | MIT | [
"LICENSE"
] | 256 |
2.4 | voicepad-core | 0.1.3 | Core audio processing for Voicepad. | # voicepad-core
Core Python library for voice recording, GPU-accelerated transcription, and system diagnostics.
## Install
```bash
pip install voicepad-core
```
**For GPU support (4-5x faster):**
```bash
pip install voicepad-core[gpu]
```
**Requirements:** Python 3.13+
## Quick Start
```python
from voicepad_core import AudioRecorder, transcribe_audio, get_config
# Load configuration
config = get_config()
# Record audio
recorder = AudioRecorder(config)
audio_file = recorder.start_recording()
# Press Ctrl+C to stop
# Transcribe the audio file
output_file = config.markdown_path / "transcript.md"
stats = transcribe_audio(audio_file, output_file, config)
print(f"Transcribed: {stats['word_count']} words")
```
## Key Components
### Audio Recording
```python
from voicepad_core import AudioRecorder, get_config
config = get_config()
recorder = AudioRecorder(config)
audio_file = recorder.start_recording() # Press Ctrl+C to stop
```
### Transcription
```python
from pathlib import Path

from voicepad_core import transcribe_audio, get_config
config = get_config()
stats = transcribe_audio(
audio_file=Path("recording.wav"),
output_file=Path("transcript.md"),
config=config
)
```
### System Diagnostics
```python
from voicepad_core import gpu_diagnostics, get_ram_info, get_cpu_info
gpu = gpu_diagnostics()
ram = get_ram_info()
cpu = get_cpu_info()
print(f"GPU available: {gpu.faster_whisper_gpu.success}")
print(f"Available RAM: {ram.available_gb} GB")
```
### Model Recommendations
```python
from voicepad_core import (
get_model_recommendation,
get_available_models,
get_ram_info,
get_cpu_info,
gpu_diagnostics
)
from voicepad_core.diagnostics.models import SystemInfo
system_info = SystemInfo(
ram=get_ram_info(),
cpu=get_cpu_info(),
gpu_diagnostics=gpu_diagnostics()
)
recommendation = get_model_recommendation(system_info, get_available_models())
print(f"Recommended model: {recommendation.recommended_model}")
```
## Configuration
Create `voicepad.yaml`:
```yaml
recordings_path: data/recordings
markdown_path: data/markdown
input_device_index: null
transcription_model: tiny
transcription_device: auto
transcription_compute_type: auto
```
## Documentation
- [API Reference](https://voicepad.readthedocs.io/packages/voicepad-core/) - Complete API docs
- [GPU Acceleration](https://voicepad.readthedocs.io/packages/voicepad-core/gpu-acceleration.md) - GPU setup guide
- [Main README](https://github.com/HYP3R00T/voicepad#readme) - Project overview
## Supported Models
All OpenAI Whisper models are supported:
- **tiny** - Fastest (39M)
- **base** - Fast (74M)
- **small** - Balanced (244M)
- **medium** - Accurate (769M)
- **large-v3** - Most accurate (1.5B)
- **turbo** - Latest generation (809M)
Use `get_available_models()` to list all models.
## Requirements
- **Python** 3.13+
- **Audio device** for recording
- **GPU (optional)** for 4-5x faster transcription
| text/markdown | Rajesh Das | Rajesh Das <rajesh@hyperoot.dev> | null | null | null | audio, transcription, voice, whisper, speech-to-text, recording, faster-whisper, gpu-acceleration, speech-recognition, audio-processing | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Programming Language :: Pyth... | [] | null | null | >=3.13 | [] | [] | [] | [
"faster-whisper>=1.2.1",
"psutil>=7.2.2",
"pydantic>=2.0.0",
"sounddevice>=0.5.5",
"soundfile>=0.13.1",
"utilityhub-config>=0.2.2",
"nvidia-cublas-cu12<13.0.0,>=12.0.0; extra == \"gpu\"",
"nvidia-cudnn-cu12<10.0.0,>=9.0.0; extra == \"gpu\""
] | [] | [] | [] | [
"HomePage, https://hyp3r00t.github.io/voicepad/",
"Repository, https://github.com/HYP3R00T/voicepad",
"Issues, https://github.com/HYP3R00T/voicepad/issues",
"Documentation, https://hyp3r00t.github.io/voicepad/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T14:01:48.052158 | voicepad_core-0.1.3-py3-none-any.whl | 24,080 | 92/8f/e9541a80834aa0e90bb27cd51161e0c3ce39f510bc0fd60718dea78aa43e/voicepad_core-0.1.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 614d618c70d9ef7a0c7d6f9c772fe087 | 489533405103b9d6f20824a7f8b1d2b301b632e60aea0b8f8e5a2a11d2f71644 | 928fe9541a80834aa0e90bb27cd51161e0c3ce39f510bc0fd60718dea78aa43e | MIT | [] | 230 |
2.4 | healpyxel | 0.2.1 | HEALPix-based spatial aggregation for planetary science data | # healpyxel
<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
## What is HEALPix?
**HEALPix** (Hierarchical Equal Area isoLatitude Pixelization) is a
standard for partitioning a sphere (like a planet or the sky) into
pixels of equal surface area.
Unlike traditional “rectangular” map projections (like Equirectangular
or Mercator), HEALPix ensures that:
- **Every pixel is the same size:** Statistical analysis remains valid
across the entire globe, including the poles.
- **It is Hierarchical:** You can easily increase or decrease resolution
($NSIDE$) while maintaining spatial relationships.
- **Fast Computation:** Its structure allows for extremely efficient
neighbor searches and spherical harmonic transforms.
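To make the equal-area and hierarchical properties above concrete, here is a minimal sketch (plain Python, no `healpy` required) of the two invariants every HEALPix grid satisfies; the helper names are illustrative:

```python
import math

def npix(nside: int) -> int:
    # A HEALPix grid always contains exactly 12 * NSIDE^2 pixels;
    # doubling NSIDE splits every pixel into 4 children (the hierarchy).
    return 12 * nside * nside

def pixel_area_sr(nside: int) -> float:
    # Equal-area property: each pixel covers the same solid angle,
    # 4*pi / npix steradians, at every latitude including the poles.
    return 4.0 * math.pi / npix(nside)

print(npix(64))  # 49152 pixels at NSIDE=64
```

Because pixel area depends only on `NSIDE`, per-pixel statistics need no latitude-dependent area weighting.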
Some useful links:
- [HEALPix - Wikipedia](https://en.wikipedia.org/wiki/HEALPix)
- LandscapeGeoinformatics/[Awesome Discrete Global Grid Systems
(DGGS)](https://github.com/LandscapeGeoinformatics/awesome-discrete-global-grid-systems?tab=readme-ov-file)
- pangeo-data/[awesome-HEALPix](https://github.com/pangeo-data/awesome-HEALPix):
A curated list of awesome HEALPix libraries, tools, and resources.
## The Problem: Data Distortion & Scale
In planetary science, data often arrives as **scattered points, tracks,
or footprints** from spectrometers and altimeters. Traditionally,
researchers face two major hurdles:
1. **Projection Bias:** Standard grids distort the poles, making global
surface calculations (like mean chemical abundance or crater
density) mathematically biased.
2. **The Memory Wall:** Modern missions generate billions of points.
Loading an entire global high-resolution map into RAM to update it
is often impossible.
`healpyxel` solves this by treating the sphere as a **modern data
engineering target** rather than just a geometric grid.
## Design Philosophy & Use Cases
`healpyxel` is built on the **Unix Philosophy**: do one thing and do it
well, using a decoupled, chainable structure. It treats HEALPix indexing
as a data-engineering problem rather than just a geometric one.
This package relies heavily on
[healpy](https://healpy.readthedocs.io/en/latest/).
Astropy also has a contributed module for handling these grids, called
[astropy_healpix](https://astropy-healpix.readthedocs.io/en/latest/).
### Who is this for?
This package is ideal for researchers and data engineers working with
**sparse, irregular, or streaming planetary and astronomical datasets.**
- **Remote Sensing & Planetary Science:** Specifically designed for
instruments like 1-point spectrometers (e.g., MESSENGER/MASCS), laser
altimeters, and push-broom spectrometers.
- **The “Sidecar” Workflow:** Index your data without modifying the
original source files. `healpyxel` creates lightweight “sidecar” files
that map your GeoParquet rows to HEALPix cells.
- **Large-Scale Data Engineering:** Process TB-scale datasets using a
**Split-Apply-Combine** approach on GeoParquet.
- **Streaming & Incremental Ingestion:** Update global maps as new data
arrives without reprocessing the entire historical archive.
### <span style="color: red;">🛑 Who is this NOT for?</span>
You might consider alternatives if your use case falls into these
categories:
- **High-Resolution 2D Imagery:** For dense image-to-HEALPix
re-projection (e.g., CCD frames), tools like
[reproject](https://reproject.readthedocs.io/) or
[astropy-healpix](https://astropy-healpix.readthedocs.io/) are more
suitable.
- **Standard Xarray/Dask Unstructured Grids:** For deep integration with
general unstructured meshes beyond HEALPix, use
[UXarray](https://uxarray.readthedocs.io/).
- **Multi-order Coverage (MOC) & LIGO workflows:** For specific
gravitational wave IO formats, check out
[mhealpy](https://mhealpy.readthedocs.io/).
### How it Works: The “Sidecar” Strategy
`healpyxel` implements a **Split-Apply-Combine** pattern tailored for
spherical geometry:
1. **Split (The Sidecar):** Instead of rewriting your heavy raw data,
`healpyxel` generates a small Parquet file containing only the
`index` of the original data and its corresponding `healpix_id`.
2. **Apply (Aggregation):** Join this sidecar with any column in your
original dataset to calculate statistics (Mean, Std Dev, Count) per
cell.
3. **Combine (The Map):** Results are combined into a final HEALPix map
or a streaming accumulator.
**💡 Pro-Tip:** For multi-pixel sensors (e.g., push-broom
spectrometers), flatten your 2D acquisitions into a 1D tabular format
(one row per spatial pixel) before saving to GeoParquet. `healpyxel` is
optimized to ingest these “shredded” lines at high speed.
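The Split-Apply-Combine pattern above can be sketched with plain pandas on toy data (`healpyxel` adds the HEALPix indexing and spherical geometry on top; the column names mirror the sidecar contract described below):

``` python
import pandas as pd

# Toy observations: one row per record, as stored in GeoParquet
obs = pd.DataFrame({
    "source_id": [0, 1, 2, 3],
    "value": [0.05, 0.06, 0.04, 0.07],
})

# Split: a "sidecar" holding only the row index and its HEALPix cell id
sidecar = pd.DataFrame({
    "source_id": [0, 1, 2, 3],
    "healpix_id": [10, 10, 11, 11],
})

# Apply: join the sidecar back to the heavy data and aggregate per cell
joined = sidecar.merge(obs, on="source_id")
per_cell = joined.groupby("healpix_id")["value"].agg(["mean", "std", "count"])

# Combine: per_cell is the (sparse) HEALPix map
print(per_cell)
```

The heavy `obs` table is never rewritten; only the small sidecar and the per-cell result are created.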
## Installation
``` bash
pip install healpyxel
```
### Optional Dependencies
``` bash
# For geospatial operations (sidecar generation)
pip install healpyxel[geospatial]
# For streaming/incremental statistics (accumulator)
pip install healpyxel[streaming]
# For visualization (maps, plots)
pip install healpyxel[viz]
# Development tools (nbdev, testing, linting)
pip install healpyxel[dev]
# All optional dependencies
pip install healpyxel[all]
```
**Extras breakdown:**
- `geospatial`: geopandas, shapely, dask-geopandas, antimeridian (required for `healpyxel_sidecar`)
- `streaming`: tdigest (percentile tracking in `healpyxel_accumulator`)
- `viz`: matplotlib, scikit-image, skyproj (mapping workflows)
- `dev`: All of the above + nbdev, pytest, black, ruff, mypy
- `all`: Installs geospatial + streaming + viz (excludes dev tools)
## Quick Start
The **healpyxel workflow** implements spatial aggregation using three
core steps:
### 1. **Split**: Map observations to HEALPix cells
You start with observation data (GeoParquet): geometries + values per
record. A **sidecar** file links each observation (`source_id`) to
HEALPix cells at your target resolution (`nside`).
**Data contract:**
- Input: `observations.parquet` → columns: `source_id`, `value`,
`geometry`
- Output: `observations-sidecar.parquet` → columns: `source_id`,
`healpix_id`, `weight` (fuzzy mode only)
**CLI:**
`healpyxel_sidecar --input observations.parquet --nside 64 128 --mode fuzzy`

### 2. **Apply**: Aggregate values per HEALPix cell
Group all observations assigned to the same cell and compute statistics
(median, mean, MAD, robust_std, etc.).
**Data contract:**
- Input: `observations.parquet` + sidecar file
- Output: `observations-aggregated.parquet` → columns: `healpix_id`,
`value_median`, `value_robust_std`, …
**CLI:**
`healpyxel_aggregate --input observations.parquet --sidecar-dir output/ --columns value --aggs median robust_std`

### 3. **Combine**: Attach HEALPix cell geometry
Add polygon boundaries to aggregated cells (computed from `healpix_id`
via `healpy`).
**Data contract:**
- Input: `observations-aggregated.parquet`
- Output: `observations-aggregated.geo.parquet` → adds column:
`geometry` (HEALPix cell polygon)
**CLI:**
`healpyxel_to_geoparquet --aggregate-path observations-aggregated.parquet --output-dir output/`

## Optional: Cache geometries
Pre-compute HEALPix cell boundaries for faster repeated use (especially
for high `nside`). This example create the 8,16 and 36 grid and convert
the cached files to geoparquet that geopandas can directly read and
visualize.
**CLI:**
``` bash
# create the grids
healpyxel_cache --nside 8 16 32 --order nested --lon-convention 0_360
# list them
healpyxel_cache --list
Cache directory: $XDG_HOME/.cache/healpyxel/healpix_grids
Cached grids (7):
nside_008_nest_spherical.parquet 768 cells 0.0 MB
nside_016_nest_spherical.parquet 3072 cells 0.1 MB
nside_032_nest_spherical.parquet 12288 cells 0.2 MB
# create geoparquet versions, store in tmp
for grid in $HOME/.cache/healpyxel/healpix_grids/*
do
echo "processing $grid file"
healpyxel_to_geoparquet -a $grid -d /tmp/ -l -180_180 -f
done
```
A minimal Python example to read and plot one of those grids:
``` python
import geopandas as gpd
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

# Path is an example; use one of the GeoParquet files created above
gdf = gpd.read_parquet("/tmp/nside_008_nest_spherical.geo.parquet")

projection = ccrs.Orthographic(central_longitude=0, central_latitude=0)
fig, ax = plt.subplots(figsize=(10, 10), subplot_kw={"projection": projection})
gdf.plot(
    column=gdf.index,  # Color by healpix_id
    cmap='Spectral_r',
    legend=False,
    edgecolor='black',
    linewidth=0.8,
    ax=ax,
)
ax.set_aspect('equal')
```

### Batch Processing
see [below](#cli-workflow)
``` bash
# 1. Generate HEALPix sidecar (SPLIT)
healpyxel_sidecar \
--input observations.parquet \
--nside 64 128 \
--mode fuzzy \
--output-dir output/
# 2. Aggregate by HEALPix cells (APPLY)
healpyxel_aggregate \
--input observations.parquet \
--sidecar-dir output/ \
--sidecar-index 0 \
--aggregate \
--columns r750 r950 \
--aggs median robust_std \
--min-count 3
# 3. Convert to GeoParquet (for visualization)
healpyxel_to_geoparquet \
--aggregate-path output/observations-aggregated.*.parquet \
--output-dir output/ \
--lon-convention -180_180
# 4. Cache HEALPix geometry (optional, speeds up visualization)
healpyxel_cache --nside 64 128 --order nested --lon-convention 0_360
```
### Streaming Processing - WORK IN PROGRESS
``` bash
# Day 1: Initialize accumulator
healpyxel_accumulate --input day001.parquet \
--columns r750 r950 --state-output state_v001.parquet
# Day 2+: Incremental updates
healpyxel_accumulate --input day002.parquet \
--columns r750 r950 \
--state-input state_v001.parquet --state-output state_v002.parquet
# Finalize to statistics
healpyxel_finalize --state state_v030.parquet --output mosaic.parquet \
--percentiles 25 50 75 --densify --nside 512
```
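The accumulator keeps per-cell running statistics (count/mean/m2) that can be updated batch by batch. A simplified single-cell sketch using Welford's algorithm — an illustration of the streaming pattern, not the package's actual implementation:

``` python
import math

def update_state(state, x):
    """One Welford update on a (count, mean, M2) state."""
    count, mean, m2 = state
    count += 1
    delta = x - mean
    mean += delta / count
    m2 += delta * (x - mean)
    return count, mean, m2

def finalize(state):
    """Turn the accumulated state into (mean, population std)."""
    count, mean, m2 = state
    return mean, (math.sqrt(m2 / count) if count else float("nan"))

state = (0, 0.0, 0.0)
for x in [0.05, 0.06, 0.04]:   # e.g. one value per daily batch
    state = update_state(state, x)
mean, std = finalize(state)
print(mean, std)
```

Because the state is tiny and mergeable, day N+1 only touches the new batch, never the historical archive.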
## CLI Workflow
This section explains a full CLI workflow on a 50k-record test sample,
including the outputs produced at each stage.
The same workflow is done completely in Python with the healpyxel API in
the [Examples\>Visualization](example_visualization_workflow.html) section.
All inputs/outputs are in this repository:
- the script is at
[examples/cli_regrid_sample_50k.sh](examples/cli_regrid_sample_50k.sh)
- the input is at
[test_data/samples/sample_50k.parquet](test_data/samples/sample_50k.parquet)
- the outputs are in
[test_data/derived/cli_quickstart](test_data/derived/cli_quickstart)
<!-- -->
Original files excerpt (transposed for clarity):
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
| | lat_center | lon_center | surface | width | length | ang_incidence | ang_emission | ang_phase | azimuth | geometry |
|-----|------------|------------|------------|------------|-----------|---------------|--------------|-----------|------------|---------------------------------------------------|
| 0 | 5.186568 | 272.40450 | 1567133.4 | 1006.63727 | 1982.1799 | 43.049232 | 34.814793 | 77.85916 | 109.019295 | POLYGON ((272.39758 5.16433, 272.41583 5.18307... |
| 1 | -60.939438 | 71.77686 | 13564574.0 | 4064.49850 | 4249.2210 | 64.178116 | 37.690910 | 101.84035 | 111.930336 | POLYGON ((71.72596 -60.89612, 71.69186 -60.963... |
| 2 | 5.613894 | 54.23045 | 1755143.5 | 1013.51886 | 2204.9104 | 53.815990 | 24.053764 | 77.86254 | 99.559425 | POLYGON ((54.24406 5.63592, 54.22025 5.62014, ... |
| 3 | -41.672714 | 324.49740 | 23309360.0 | 6511.20950 | 4558.0470 | 52.841824 | 46.625698 | 99.40995 | 121.833626 | POLYGON ((324.54932 -41.70964, 324.56927 -41.6... |
</div>

``` python
import pandas as pd
from IPython.display import display

# sidecar_path / sidecar_meta_path are pathlib.Path objects defined earlier
# Load the sidecar parquet file using its metadata
if sidecar_meta_path.exists():
    if sidecar_path.exists():
        sidecar_df = pd.read_parquet(sidecar_path)
        print("Sidecar Metadata:")
        print(f"  Unique sources: {sidecar_df['source_id'].nunique()}")
        print(f"  Unique HEALPix cells: {sidecar_df['healpix_id'].nunique()}")
        print(f"  Total assignments: {len(sidecar_df)}")
        print("\nSidecar Data:")
        display(sidecar_df.head(10))
    else:
        print(f"Sidecar file not found: {sidecar_path}")
else:
    print(f"Sidecar metadata not found: {sidecar_meta_path}")
    print("Run the CLI script first: bash examples/cli_regrid_sample_50k.sh")
```
Sidecar Metadata:
Unique sources: 49988
Unique HEALPix cells: 10860
Total assignments: 54931
Sidecar Data:
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
| | source_id | healpix_id | weight |
|-----|-----------|------------|--------|
| 0 | 0 | 7943 | 1.0 |
| 1 | 1 | 8287 | 1.0 |
| 2 | 2 | 5819 | 1.0 |
| 3 | 3 | 11685 | 1.0 |
| 4 | 4 | 3618 | 1.0 |
| 5 | 5 | 3805 | 1.0 |
| 6 | 6 | 9522 | 1.0 |
| 7 | 7 | 10975 | 1.0 |
| 8 | 8 | 1820 | 1.0 |
| 9 | 9 | 3710 | 1.0 |
</div>
### 1) Create HEALPix sidecar(s)
Those files link each row of the input parquet file to the HEALPix cells
at the requested **nside** resolution; see [Useful Healpix data for Moon
Venus Mercury](#useful-healpix-data-for-moon-venus-mercury) for some
cell data. Refer to `healpyxel_sidecar --help` for the full options. The
`--mode` flag is especially important:
- `fuzzy`: assign each input record to every cell it touches
- `strict`: assign only records fully contained within a cell
``` bash
healpyxel_sidecar \
--input "test_data/samples/sample_50k.parquet" \
--nside 32 64 \
--mode fuzzy \
--lon-convention 0_360 \
--output-dir "test_data/derived/cli_quickstart"
```
**Outputs**
- sample_50k.cell-healpix_assignment-fuzzy_nside-32_order-nested.parquet
- sample_50k.cell-healpix_assignment-fuzzy_nside-32_order-nested.meta.json
- sample_50k.cell-healpix_assignment-fuzzy_nside-64_order-nested.parquet
- sample_50k.cell-healpix_assignment-fuzzy_nside-64_order-nested.meta.json
<!-- -->
Nside 32: 54931 assignments, 10860 unique cells
| | source_id | healpix_id | weight |
|---:|------------:|-------------:|---------:|
| 0 | 0 | 7943 | 1 |
| 1 | 1 | 8287 | 1 |
| 2 | 2 | 5819 | 1 |
| 3 | 3 | 11685 | 1 |
| 4 | 4 | 3618 | 1 |

### 2) Aggregate sparse regridded map(s)
Now we need to aggregate the initial data on the cells; refer to
`healpyxel_aggregate --help` for all the options. Some flags are
particularly useful:
- `--schema`: show the input parquet schema, useful to see which data
  are there to aggregate.
- `--list-sidecars`: list the available sidecars for an input file;
  they are addressed by index.
- `--sidecar-schema INDEX`: show the schema of a specific sidecar file.
- `--aggs mean`: aggregation functions (choices: mean, median, std,
  min, max, mad, robust_std).

Example:
- the input file contains a column A (you can check with
  `healpyxel_aggregate -i input --schema`)
- `--aggs mean median std`
- this produces as output the columns `A_mean`, `A_median` and `A_std`,
  created by applying those functions to all input rows listed in the
  sidecar file for a single HEALPix cell
``` bash
healpyxel_aggregate \
--input "test_data/samples/sample_50k.parquet" \
--sidecar-dir "test_data/derived/cli_quickstart" \
--sidecar-index all \
--aggregate \
--columns r1050 \
--aggs mean median std mad robust_std
```
This produces sparse output: only cells with actual values are written
to the output.
**Outputs**
- sample_50k-aggregated.cell-healpix_assignment-fuzzy_nside-32_order-nested.parquet
- sample_50k-aggregated.cell-healpix_assignment-fuzzy_nside-32_order-nested.meta.json
- sample_50k-aggregated.cell-healpix_assignment-fuzzy_nside-64_order-nested.parquet
- sample_50k-aggregated.cell-healpix_assignment-fuzzy_nside-64_order-nested.meta.json
Nside 32: 10860 unique cells
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
| | r1050_mean | r1050_median | r1050_std | r1050_mad | r1050_robust_std | n_sources |
|------------|------------|--------------|-----------|-----------|------------------|-----------|
| healpix_id | | | | | | |
| 0 | 0.048616 | 0.047857 | 0.003759 | 0.002672 | 0.003962 | 4 |
| 1 | 0.051467 | 0.052283 | 0.002976 | 0.001888 | 0.002799 | 6 |
| 2 | 0.049697 | 0.049118 | 0.003637 | 0.002289 | 0.003394 | 6 |
| 3 | 0.059066 | 0.063241 | 0.007149 | 0.001711 | 0.002537 | 3 |
| 4 | 0.051262 | 0.051523 | 0.006552 | 0.002510 | 0.003721 | 9 |
</div>
### 3) Aggregate densified regridded map(s)
``` bash
healpyxel_aggregate \
--input "test_data/samples/sample_50k.parquet" \
--sidecar-dir "test_data/derived/cli_quickstart" \
--sidecar-index all \
--aggregate \
--columns r1050 \
--aggs mean median std mad robust_std \
--densify
```
This produces dense output: all HEALPix cells are written to the output;
empty ones are filled with NaN.
**Outputs**
- sample_50k-aggregated-densified.cell-healpix_assignment-fuzzy_nside-32_order-nested.parquet
- sample_50k-aggregated-densified.cell-healpix_assignment-fuzzy_nside-32_order-nested.meta.json
- sample_50k-aggregated-densified.cell-healpix_assignment-fuzzy_nside-64_order-nested.parquet
- sample_50k-aggregated-densified.cell-healpix_assignment-fuzzy_nside-64_order-nested.meta.json

Nside 32: 12288 unique cells (densified; 1428 additional empty cells filled in by densification)
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
| | r1050_mean | r1050_median | r1050_std | r1050_mad | r1050_robust_std | n_sources |
|------------|------------|--------------|-----------|-----------|------------------|-----------|
| healpix_id | | | | | | |
| 29 | 0.046644 | 0.046644 | 0.000000 | 0.000000 | 0.000000 | 1.0 |
| 30 | NaN | NaN | NaN | NaN | NaN | NaN |
| 31 | 0.040205 | 0.040986 | 0.009636 | 0.007137 | 0.010581 | 4.0 |
| 32 | 0.054966 | 0.054413 | 0.003162 | 0.002148 | 0.003184 | 8.0 |
| 33 | 0.054424 | 0.055591 | 0.004131 | 0.003358 | 0.004979 | 8.0 |
| 34 | 0.057463 | 0.057463 | 0.001704 | 0.001704 | 0.002526 | 2.0 |
| 35 | 0.050470 | 0.057635 | 0.017546 | 0.004688 | 0.006951 | 4.0 |
| 36 | 0.054052 | 0.053640 | 0.004915 | 0.002833 | 0.004200 | 6.0 |
| 37 | 0.056132 | 0.056019 | 0.002281 | 0.002128 | 0.003155 | 4.0 |
| 38 | 0.060452 | 0.060592 | 0.002127 | 0.001878 | 0.002784 | 4.0 |
| 39 | 0.060708 | 0.070030 | 0.014303 | 0.001562 | 0.002316 | 3.0 |
| 40 | 0.041480 | 0.041480 | 0.000000 | 0.000000 | 0.000000 | 1.0 |
| 41 | 0.028736 | 0.028736 | 0.000000 | 0.000000 | 0.000000 | 1.0 |
| 42 | 0.070738 | 0.070655 | 0.009835 | 0.011921 | 0.017674 | 3.0 |
| 43 | 0.062058 | 0.061658 | 0.009862 | 0.008409 | 0.012467 | 8.0 |
| 44 | NaN | NaN | NaN | NaN | NaN | NaN |
| 45 | 0.053895 | 0.054106 | 0.001107 | 0.001026 | 0.001521 | 3.0 |
</div>
### 4) Convert aggregated maps to GeoParquet
This converts each aggregated file to GeoParquet.
``` bash
for f in "test_data/derived/cli_quickstart"/*-aggregated*parquet; do
healpyxel_to_geoparquet -a "$f" -d "test_data/derived/cli_quickstart" -l -180_180 -f
done
```
**Outputs**
- sample_50k-aggregated-densified.cell-healpix_assignment-fuzzy_nside-32_order-nested.geo.parquet
- sample_50k-aggregated-densified.cell-healpix_assignment-fuzzy_nside-64_order-nested.geo.parquet
- sample_50k-aggregated.cell-healpix_assignment-fuzzy_nside-32_order-nested.geo.parquet
- sample_50k-aggregated.cell-healpix_assignment-fuzzy_nside-64_order-nested.geo.parquet
Nside 32: 10860 unique cells
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
| | geometry | r1050_mean | r1050_median | r1050_std | r1050_mad | r1050_robust_std | n_sources |
|------------|---------------------------------------------------|------------|--------------|-----------|-----------|------------------|-----------|
| healpix_id | | | | | | | |
| 0 | POLYGON ((45 2.38802, 43.59375 1.19375, 45 0, ... | 0.048616 | 0.047857 | 0.003759 | 0.002672 | 0.003962 | 4 |
| 1 | POLYGON ((46.40625 3.58332, 45 2.38802, 46.406... | 0.051467 | 0.052283 | 0.002976 | 0.001888 | 0.002799 | 6 |
| 2 | POLYGON ((43.59375 3.58332, 42.1875 2.38802, 4... | 0.049697 | 0.049118 | 0.003637 | 0.002289 | 0.003394 | 6 |
| 3 | POLYGON ((45 4.78019, 43.59375 3.58332, 45 2.3... | 0.059066 | 0.063241 | 0.007149 | 0.001711 | 0.002537 | 3 |
| 4 | POLYGON ((47.8125 4.78019, 46.40625 3.58332, 4... | 0.051262 | 0.051523 | 0.006552 | 0.002510 | 0.003721 | 9 |
</div>
Each cell is linked to some initial observations via the sidecar file;
here we can see the distribution of one value across all the cells.

We can visualize each pixel with any of the aggregation function outputs
available in `healpyxel_aggregate`:
- **`mean`**: Arithmetic mean
- **`median`**: Median (50th percentile)
- **`std`**: Standard deviation
- **`min`**: Minimum value
- **`max`**: Maximum value
- **`mad`**: Median Absolute Deviation (robust to outliers)
- **`robust_std`**: MAD × 1.4826 (equivalent to standard deviation for
normal distributions, robust to outliers)
Each function generates one output column per input value column, named
`<column>_<agg>` (e.g., `r1050_mean`, `r1050_median`, `r1050_mad`).
Robust statistics (`mad`, `robust_std`) are recommended for
outlier-prone datasets.

## Python API
A minimal end-to-end Python API example; each level works on the
previous one's output:
- `initial data`: raw observations (GeoDataFrame/DataFrame)
- `sidecar`: maps `source_id` to `healpix_id` (data to HEALPix grid connections)
- `aggregate`: per-cell statistics on value columns
- `attach geometry`: add HEALPix cell polygons
- `accumulate`: streaming state update (count/mean/m2/tdigest)
- `finalize`: final statistics from the state

Minimal code; a more detailed explanation is in the
[Examples\>Visualization](example_visualization_workflow.html) section.
------------------------------------------------------------------------
``` python
from healpyxel import sidecar, aggregate, accumulator, finalize
from healpyxel.geospatial import healpix_to_geodataframe

# Minimal API sanity checks (nbdev-friendly)
assert hasattr(sidecar, "generate")
assert hasattr(aggregate, "by_sidecar")
assert hasattr(accumulator, "update_state")
assert hasattr(finalize, "from_state")
assert callable(healpix_to_geodataframe)

# 1) Sidecar (split)
sidecar_df = sidecar.generate(
    gdf,
    nside=64,
    mode="fuzzy",
    order="nested",
    lon_convention="0_360",
)

# 2) Aggregate (apply)
agg_df = aggregate.by_sidecar(
    original=df,
    sidecar=sidecar_df,
    value_columns=["r750", "r950"],
    aggs=["median", "robust_std"],
    min_count=3,
)

# 2b) Attach geometry to step-2 products (geospatial)
cells_gdf = healpix_to_geodataframe(
    nside=64,
    order="nested",
    lon_convention="0_360",
    pixels=agg_df["healpix_id"].to_numpy(),
    fix_antimeridian=True,
    cache_mode="use",
).reset_index(drop=False)
agg_geo_gdf = cells_gdf.merge(agg_df, on="healpix_id", how="left")

# 3) Accumulator (streaming apply)
state_df = accumulator.update_state(
    batch=df,
    sidecar=sidecar_df,
    value_columns=["r750", "r950"],
    state=None,
)

# 4) Finalize (combine)
final_df = finalize.from_state(
    state=state_df,
    aggs=["mean", "std", "median", "robust_std"],
)
```
## Developed for MESSENGER/MASCS
This package was developed to process spectral observations from the
MESSENGER/MASCS instrument studying Mercury’s surface. The workflow
handles:
- Millions of observations with complex footprint geometries
- Multi-spectral reflectance data (VIS + NIR)
- Streaming data from ongoing missions
- Native resolution mosaics (sub-footprint sampling)
While designed for MASCS, healpyxel is general-purpose and works with
any planetary science dataset in GeoParquet format.
### Useful Healpix data for Moon Venus Mercury
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
| | Number of Cells | Cell Angular Size (deg) | Mercury Cell Size (km) | Moon Cell Size (km) | Venus Cell Size (km) |
|-------|-----------------|-------------------------|------------------------|---------------------|----------------------|
| nside | | | | | |
| 1 | 12 | 58.632 | 2496.610 | 1777.928 | 6192.969 |
| 2 | 48 | 29.316 | 1248.305 | 888.964 | 3096.484 |
| 4 | 192 | 14.658 | 624.153 | 444.482 | 1548.242 |
| 8 | 768 | 7.329 | 312.076 | 222.241 | 774.121 |
| 16 | 3,072 | 3.665 | 156.038 | 111.120 | 387.061 |
| 32 | 12,288 | 1.832 | 78.019 | 55.560 | 193.530 |
| 64 | 49,152 | 0.916 | 39.010 | 27.780 | 96.765 |
| 128 | 196,608 | 0.458 | 19.505 | 13.890 | 48.383 |
| 256 | 786,432 | 0.229 | 9.752 | 6.945 | 24.191 |
| 512 | 3,145,728 | 0.115 | 4.876 | 3.473 | 12.096 |
| 1,024 | 12,582,912 | 0.057 | 2.438 | 1.736 | 6.048 |
| 2,048 | 50,331,648 | 0.029 | 1.219 | 0.868 | 3.024 |
| 4,096 | 201,326,592 | 0.014 | 0.610 | 0.434 | 1.512 |
| 8,192 | 805,306,368 | 0.007 | 0.305 | 0.217 | 0.756 |
</div>
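The table values follow from simple HEALPix relations: a grid has 12·nside² cells, the mean cell solid angle is 4π/(12·nside²), and the linear cell size on a body is the angular scale times the body radius. A quick check (assumed radii: Mercury 2439.7 km, Moon 1737.4 km, Venus 6051.8 km):

``` python
import math

def healpix_cell_stats(nside, radius_km):
    """Cell count, approximate angular size (deg), and linear size (km)."""
    n_cells = 12 * nside**2
    ang_rad = math.sqrt(4 * math.pi / n_cells)  # sqrt of the mean solid angle
    return n_cells, math.degrees(ang_rad), ang_rad * radius_km

# nside 64 on Mercury: reproduces the corresponding table row
n_cells, ang_deg, size_km = healpix_cell_stats(64, 2439.7)
print(n_cells, round(ang_deg, 3), round(size_km, 2))
```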
## License
Apache 2.0
| text/markdown | null | Mario D'Amore <mario.damore@dlr.de> | null | null | null | healpix, planetary-science, spatial-aggregation, streaming | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Langua... | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas>=2.0",
"numpy>=1.24",
"pyarrow>=12.0",
"healpy>=1.16",
"click>=8.0",
"tqdm",
"geopandas>=0.14; extra == \"geospatial\"",
"shapely>=2.0; extra == \"geospatial\"",
"dask-geopandas>=0.3; extra == \"geospatial\"",
"antimeridian; extra == \"geospatial\"",
"tdigest; extra == \"streaming\"",
... | [] | [] | [] | [
"Homepage, https://github.com/mariodamore/healpyxel",
"Documentation, https://mariodamore.github.io/healpyxel",
"Repository, https://github.com/mariodamore/healpyxel",
"Bug Tracker, https://github.com/mariodamore/healpyxel/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T14:01:44.336558 | healpyxel-0.2.1.tar.gz | 111,653 | 27/7a/57ade147a8e76cdb6e4e1a395e7934018503e771912039d3ffb61b2666f4/healpyxel-0.2.1.tar.gz | source | sdist | null | false | 9ff03524ce023469742042b4736d0a1f | 55aba84b78a3ae3ed2736efddb4a3d4610b75c38762274aec8ce3a58a51343e2 | 277a57ade147a8e76cdb6e4e1a395e7934018503e771912039d3ffb61b2666f4 | Apache-2.0 | [
"LICENSE"
] | 239 |
2.4 | igapi-rs | 0.0.2.dev0 | Python bindings for Instagram Private API | # igapi-rs
[](https://pypi.org/project/igapi-rs/)
[](https://pypi.org/project/igapi-rs/)
[](https://github.com/open-luban/igapi-rs)
Python bindings for the Instagram private API, built with Rust + PyO3. High-performance and async-first.
Supports emulation of three platforms: Android / iOS / Web.
- **Documentation**: [igapi-rs.es007.com](https://igapi-rs.es007.com)
- **Source**: [github.com/open-luban/igapi-rs](https://github.com/open-luban/igapi-rs)
## Installation
```bash
pip install igapi-rs
```
### Requirements
- Python 3.10+
## Quick Start
```python
import igapi

# Create a client
client = igapi.Client()

# Log in
client.login("username", "password")

# Fetch user info
user = client.user_info(12345678)
print(f"User: @{user.username}")
print(f"Followers: {user.follower_count}")

# Search users
results = client.search_users("instagram")
for user in results:
    print(f"@{user.username}")

# Fetch a user's media feed
feed = client.user_feed(12345678)
for media in feed.items:
    print(f"Media: {media.id}, likes: {media.like_count}")
```
## API Reference
### Client
```python
Client(proxy: str | None = None, platform: str = "android")
```
Creates a new Instagram API client.
**Parameters:**
- `proxy` (optional): proxy URL (e.g. "http://localhost:8080")
- `platform` (optional): platform to emulate ("android" or "web")
**Methods:**
#### `login(username: str, password: str) -> None`
Log in to Instagram.
**Raises:**
- `PermissionError`: login failed (wrong password, verification required, etc.)
- `ValueError`: invalid credentials
- `RuntimeError`: network or API error
#### `is_logged_in() -> bool`
Check whether the client is currently logged in.
#### `user_info(user_id: int) -> User`
Fetch user information by ID.
**Raises:**
- `KeyError`: user not found
- `PermissionError`: login required
#### `search_users(query: str) -> list[User]`
Search users by username or name.
#### `user_feed(user_id: int, max_id: str | None = None) -> Feed`
Fetch a user's media feed.
**Parameters:**
- `user_id`: ID of the user whose feed to fetch
- `max_id` (optional): pagination cursor
### Type Definitions
#### User
```python
class User:
    pk: int               # user ID
    username: str         # username
    full_name: str        # full name
    is_private: bool      # whether the account is private
    profile_pic_url: str  # avatar URL
    follower_count: int   # number of followers
    following_count: int  # number of accounts followed
```
#### Media
```python
class Media:
    id: str             # media ID
    media_type: int     # type (1=photo, 2=video, 8=carousel)
    caption_text: str   # caption text
    like_count: int     # number of likes
    comment_count: int  # number of comments
```
#### Feed
```python
class Feed:
    items: list[Media]       # list of media items
    has_more: bool           # whether more items are available
    next_cursor: str | None  # cursor for the next page
```
## Error Handling
```python
try:
    client.login("user", "pass")
except PermissionError as e:
    # Login failed, verification required, two-factor auth, etc.
    print(f"Login error: {e}")
except ValueError as e:
    # Invalid credentials
    print(f"Invalid input: {e}")
except RuntimeError as e:
    # API or network error
    print(f"Error: {e}")
```
## Using a Proxy
```python
# HTTP proxy
client = igapi.Client(proxy="http://localhost:8080")
# HTTPS proxy
client = igapi.Client(proxy="https://proxy.example.com:8080")
# SOCKS proxy
client = igapi.Client(proxy="socks5://localhost:1080")
```
## Examples
See `examples/python_example.py` for complete usage examples.
## Development
### Build
```bash
# Debug build
maturin develop
# Release build
maturin develop --release
# Build a wheel
maturin build --release
```
### Testing
```bash
# Run the tests
python -m pytest tests/
# Type checking
mypy --strict examples/
```
## Performance
The Python bindings use PyO3 and keep close-to-native Rust performance:
- API calls are handled asynchronously via Tokio
- Minimal serialization overhead
- Zero-copy data access where possible
## License
MIT OR Apache-2.0
## Links
- [Documentation](https://igapi-rs.es007.com)
- [GitHub repository](https://github.com/open-luban/igapi-rs)
- [PyPI](https://pypi.org/project/igapi-rs/)
| text/markdown; charset=UTF-8; variant=GFM | null | Your Name <your.email@example.com> | null | null | MIT OR Apache-2.0 | instagram, api, social-media | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Rust",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :... | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest>=7.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\""
] | [] | [] | [] | [
"Documentation, https://igapi-rs.es007.com",
"Homepage, https://github.com/open-luban/igapi-rs",
"Repository, https://github.com/open-luban/igapi-rs"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:01:32.111675 | igapi_rs-0.0.2.dev0-cp312-cp312-manylinux_2_28_x86_64.whl | 3,053,687 | 4a/c7/094f4696d56bbc14cde451a81a85d390cf58f2b22c5bea3defa6891631ac/igapi_rs-0.0.2.dev0-cp312-cp312-manylinux_2_28_x86_64.whl | cp312 | bdist_wheel | null | false | 8c6d1e842a5f8e2c191c7557e2f12239 | 1cab4c5982dc661b0a64538f9ec34b9fba3eb13c4a071523fb5b12fe6da21c29 | 4ac7094f4696d56bbc14cde451a81a85d390cf58f2b22c5bea3defa6891631ac | null | [] | 209 |
2.4 | sparkpool-mcp | 0.1.1 | MCP server for SparkPool | # SparkPool MCP Server
This is an MCP server for the SparkPool application.
It allows AI agents to submit Roasts and Ideas directly to the SparkPool backend.
## Deployment
### Local Development (stdio)
You can run this server using `uvx`:
```bash
export SPARKPOOL_API_TOKEN="your_token_here"
export SPARKPOOL_API_URL="http://localhost:8000"
uvx --from . sparkpool-mcp
```
### PyPI
To publish to PyPI:
1. Build: `hatch build`
2. Publish: `hatch publish`
## Configuration
- `SPARKPOOL_API_TOKEN`: The MCP API Token from your SparkPool profile page.
- `SPARKPOOL_API_URL`: The URL of the SparkPool backend (default: http://localhost:8000).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.20.0",
"mcp[cli]>=1.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T14:01:07.105421 | sparkpool_mcp-0.1.1.tar.gz | 2,187 | 05/cb/7dae06fbce5920117d91cff3f47c1b5ec5b77051d74fbaf55e95b9027444/sparkpool_mcp-0.1.1.tar.gz | source | sdist | null | false | 7ee9f9e97ab014b33ba70a38d64f12ba | f359880025fce31f176b81dabf3efd223a6da25a7236acb6f50a2d5861f6a382 | 05cb7dae06fbce5920117d91cff3f47c1b5ec5b77051d74fbaf55e95b9027444 | null | [] | 253 |
2.1 | dynamic-diffraction-module | 0.1.4 | A python based framework for dynamic diffraction calculations of crystals. | # Dynamic Diffraction Module
[](https://pypi.org/project/dynamic-diffraction-module/)
[](https://pypi.org/project/dynamic-diffraction-module/)
A repository meant for (python based) functions on the dynamic diffraction theory. It is closely related to the Matlab written <https://gitlab.desy.de/patrick.rauer/MatlabDiffractionStuff>.
The structure, however, is based on the Dynamic Diffraction submodule of the [pXCP](https://gitlab.desy.de/patrick.rauer/Xgeno_mpi) framework.
* Documentation: <https://patrick.rauer.pages.desy.de/dynamic-diffraction-module>
* GitLab: <https://gitlab.desy.de/patrick.rauer/dynamic-diffraction-module>
* PyPI: <https://pypi.org/project/dynamic-diffraction-module/>
## Features
Currently, the scope of the package is rather rudimentary.
It includes:
* computing the (modified) Bragg energy for any given plane H for a specific micro- and macroscopic crystal orientation
* computing the (approximative) energy width for any given plane H for a specific micro- and macroscopic crystal orientation in the two beam approximation
* Selecting the number of reflecting planes in the vicinity of a given photon energy + crystal orientation configuration
* computing reflectivity/transmissivity vs energy for a specified crystal plane H0 can be computed in the two beam approximation
* Rocking curve scans in the two beam approximation
* diffraction at strained crystals (however, only symmetric close to backscattering for the moment)
* Impulse response in time-domain
However, further functionality is to follow soon:
* n-beam diffraction
* asymmetric, 3D diffraction at strained crystals
* ...
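The first feature above, the Bragg energy for a plane H, boils down to Bragg's law, E = hc / (2 d sin θ). A plain-Python illustration, independent of the package's actual API (the silicon lattice constant 5.4309 Å is an assumed example value):

```python
import math

def bragg_energy_kev(a_angstrom, hkl, theta_deg):
    """Photon energy (keV) satisfying Bragg's law for a cubic lattice plane."""
    h, k, l = hkl
    d = a_angstrom / math.sqrt(h**2 + k**2 + l**2)  # interplanar spacing (Angstrom)
    hc = 12.3984198  # keV * Angstrom
    return hc / (2 * d * math.sin(math.radians(theta_deg)))

# Si(111) at exact backscattering (theta = 90 deg): about 1.977 keV
print(round(bragg_energy_kev(5.4309, (1, 1, 1), 90.0), 4))
```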
## External packages
* numpy (required)
* [xraylib](https://github.com/tschoonj/xraylib) (prospectively optional, currently required)
* matplotlib
* pandas
## Usage
There is no documentation for the API yet. However, you can find some tutorial *.ipynb scripts in the [playgrounds folder](https://gitlab.desy.de/patrick.rauer/dynamic-diffraction-module/-/tree/main/playgrounds/README.md) on [gitlab](https://gitlab.desy.de/patrick.rauer/dynamic-diffraction-module).
## LICENSE
* Free software: GNU GENERAL PUBLIC LICENSE Version 3
## Credits
This package was created with [Cookiecutter](https://github.com/audreyr/cookiecutter) and the [waynerv/cookiecutter-pypackage](https://github.com/waynerv/cookiecutter-pypackage) project template.
| text/markdown | Patrick Rauer | patrick.rauer@desy.de | null | null | GPL-3.0-or-later | null | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming L... | [] | https://gitlab.desy.de/patrick.rauer/dynamic-diffraction-module | null | <4.0,>=3.8 | [] | [] | [] | [
"numpy<2.0.0,>=1.23.2",
"matplotlib<4.0.0,>=3.6.1",
"xraylib<5.0.0,>=4.0.1",
"pandas<3.0,>=2.0",
"scipy<2.0,>=1.8",
"importlib-resources>=5.0; python_version < \"3.9\""
] | [] | [] | [] | [] | poetry/1.8.2 CPython/3.12.3 Linux/6.14.0-37-generic | 2026-02-18T14:00:53.473630 | dynamic_diffraction_module-0.1.4.tar.gz | 61,306 | 1a/91/9bff3725ab083d444b20490c6f6f33be3c65712839a69de1a1fff882b889/dynamic_diffraction_module-0.1.4.tar.gz | source | sdist | null | false | ce860410c5f888d016a8e43e8df21914 | 08712854576b67eb419fabf22591cc801db5af9b9847d7061b3b0bb7174535bf | 1a919bff3725ab083d444b20490c6f6f33be3c65712839a69de1a1fff882b889 | null | [] | 250 |
2.4 | sesamo | 1.0.0 | SESaMo provides an extension to Normalizing Flows that enforces symmetries to the output distribution. | [](https://arxiv.org/abs/2505.19619)
[](LICENSE)
# SESaMo: Symmetry-Enforcing Stochastic Modulation for Normalizing Flows
## Quick start
Install the package with pip:
```bash
pip install sesamo
```
Here is a quick example of how to use SESaMo to build a normalizing flow with stochastic modulation:
```python
import torch
from sesamo import Sesamo
from sesamo.models import GaussianPrior, RealNVP, Z2Modulation, Z2Regularization
from sesamo.loss import StochmodLoss

# Initialize SESaMo
sesamo = Sesamo(
    prior=GaussianPrior(
        var=1,
        lat_shape=[1, 2]
    ),
    flow=RealNVP(
        lat_shape=[1, 2],
        num_coupling_layers=10,
        num_hidden_layers=2,
        num_hidden_features=40
    ),
    stochastic_modulation=Z2Modulation(),
    regularization=Z2Regularization(),
).to("cuda")

action = ...  # define the action for the target distribution p(x) = exp(-action(x)) / Z
loss_fn = StochmodLoss()
optimizer = torch.optim.Adam(sesamo.parameters(), lr=5e-4)

# Training loop
for _ in range(10_000):
    # reset gradients
    optimizer.zero_grad()
    # sample from sesamo
    samples, log_prob, log_prob_stochmod, penalty = sesamo.sample_for_training(8_000)
    # compute the action and the loss
    action_samples = action(samples)
    loss = loss_fn(action_samples, log_prob, log_prob_stochmod, penalty).mean()
    # backpropagate and update the flow parameters
    loss.backward()
    optimizer.step()
```
### Examples
For more examples, see the `SESaMo/examples` folder, which contains Jupyter notebooks for the Hubbard model and the Gaussian mixture model.
## Run experiments
To run the experiments from the paper, follow the instructions below.
Clone the repository and move into the directory:
```
git clone https://github.com/janikkreit/SESaMo.git
cd SESaMo
```
Create a python virtual environment and install the package:
```
python -m venv .venv
source .venv/bin/activate
pip install -e .
```
Run experiments with
```
cd experiments
python train.py -cp configs/<experiment> -cn <model>
```
Available `<experiment>`s are:
```
hubbard2x1
hubbard18x100
gaussian-mixture
broken-gaussian-mixture
complex-phi4
broken-complex-phi4
broken-scalar-phi4
```
Available `<model>`s are:
```
realnvp
vmonf
canonicalization
sesamo
```
The checkpoint, tensorboard, config, and stats files are stored in the `SESaMo/scripts/runs` folder.
After training completes or is interrupted, the distribution is plotted and saved as `SESaMo/scripts/runs/.../samples.png`.
## Citation
If you use SESaMo in your research, please consider citing our paper:
```
@article{kreit2025sesamo,
title={SESaMo: Symmetry-Enforcing Stochastic Modulation for Normalizing Flows},
author={Janik Kreit and Dominic Schuh and Kim A. Nicoli and Lena Funcke},
year={2025},
eprint={2505.19619},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2505.19619},
}
```
| text/markdown | null | Janik Kreit <jkreit@uni-bonn.de> | null | null | MIT License
Copyright (c) 2025 Janik Kreit
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [] | [] | null | null | <3.13,>=3.9 | [] | [] | [] | [
"torch>=2.2",
"numpy<2,>=1.24",
"tqdm",
"matplotlib",
"tensorboard",
"hydra-core",
"hydra-submitit-launcher",
"nflows",
"setproctitle"
] | [] | [] | [] | [
"Homepage, https://github.com/sesamo-project/SESaMo"
] | twine/6.2.0 CPython/3.9.21 | 2026-02-18T14:00:40.801754 | sesamo-1.0.0.tar.gz | 23,613 | c1/8c/8b216b04d38a22b14a79c6293efab4e436fdf0f6ece2ffdb2b45e8a20108/sesamo-1.0.0.tar.gz | source | sdist | null | false | bd8874acd8b3c3ce74a25677938e2f81 | 47f6a7884e5a13162bc8adad92f37b3d640387c5b1a25d18ecb651b4501c3459 | c18c8b216b04d38a22b14a79c6293efab4e436fdf0f6ece2ffdb2b45e8a20108 | null | [
"LICENSE"
] | 245 |
2.1 | musicpy | 7.12 | Musicpy is a music programming language in Python designed to write music in very handy syntax through music theory and algorithms. | musicpy
=======
[中文](https://github.com/Rainbow-Dreamer/musicpy/blob/master/README_cn.md)
Have you ever thought about writing music with codes in a very concise, human-readable syntax?
Musicpy is a music programming language in Python designed to write music in very handy syntax through music theory and algorithms. It is easy to learn and write, easy to read, and incorporates a fully computerized music theory system.
Musicpy can do way more than just writing music. This package can also be used to analyze music through music theory logic, and you can design algorithms to explore the endless possibilities of music, all with musicpy.
With musicpy, you can express notes, chords, melodies, rhythms, volumes and other information of a piece of music in a very concise syntax. It can generate music through music theory logic and perform advanced music theory operations. You can easily output musicpy code to MIDI files, and you can just as easily load any MIDI file, convert it to musicpy's data structures, and perform advanced music theory operations on it. The syntax of musicpy is concise and flexible, which makes musicpy code very human-readable, and musicpy is fully compatible with Python, so you can write Python code that interacts with musicpy. Because musicpy touches everything in music theory, I recommend learning at least some fundamentals of music theory before using this package, so you can use it more clearly and effectively. On the other hand, if you are already familiar with music theory, you should be able to play around with it after a look at the documentation I wrote.
Documentation
-------------
See [musicpy wiki](https://github.com/Rainbow-Dreamer/musicpy/wiki) or [Read the Docs documentation](https://musicpy.readthedocs.io/en/latest/) for complete and detailed tutorials about syntax, data structures and usages of musicpy.
This wiki is updated frequently, since new functions and abilities are added to musicpy regularly. The syntax and abilities described in the wiki are synchronized with the latest released version of musicpy.
You can click [here](https://www.jianguoyun.com/p/Ddiv4ykQt43aDBjTu8kFIAA) to download the entire musicpy wiki I wrote in PDF and markdown format, which is updated continuously.
Installation
-------------
Make sure you have Python (version >= 3.7) installed on your machine first.
Run the following line in the terminal to install musicpy with pip:
```shell
pip install musicpy
```
If you need to read and write musicxml files, you can install the relevant dependencies by adding `[musicxml]` after the above command. If you need to use the daw module (`musicpy.daw`), you can install the relevant dependencies by adding `[daw]`. Like this:
```shell
pip install musicpy[musicxml]
pip install musicpy[daw]
pip install musicpy[daw, musicxml]
```
**Note 1: On Linux, you need to make sure the installed pygame version is older than 2.0.3, otherwise the play function of musicpy won't work properly; this is due to an existing bug in newer versions of pygame. You can run `pip install pygame==2.0.2` in the terminal to install pygame 2.0.2 or any version older than 2.0.3. You also need to install freepats to make the play function work on Linux; you can run `sudo apt-get install freepats` (on Ubuntu).**
**Note 2: If you cannot hear any sound when running the play function, this is because some IDEs won't wait for pygame's playback to end; they stop the whole process once all of the code has executed, without waiting for the playback. You can set `wait=True` in the parameters of the play function, which blocks the function until the playback ends, so you can hear the sounds.**
**Note 3: If you are using Linux or macOS, one of the dependency libraries of the daw module, sf2_loader, has some necessary configuration steps, you can refer to [here](https://github.com/Rainbow-Dreamer/sf2_loader#installation) for details.**
In addition, I also wrote a musicpy editor for writing and compiling musicpy code more easily than in a regular Python IDE, with real-time automatic compilation and execution. It offers some syntactic sugar, and you can listen to the music generated from your musicpy code on the fly, which makes it more convenient and interactive. I strongly recommend using this editor to write musicpy code. You can download it from the repository [musicpy_editor](https://github.com/Rainbow-Dreamer/musicpy_editor); the preparation steps are in its README.
Musicpy is fully compatible with Windows, macOS and Linux.
Musicpy now also supports reading and writing musicxml files; note that you need to install partitura (`pip install partitura`) to use this functionality.
Importing
-------------
Place this line at the top of each file in which you want to use musicpy:
```python
from musicpy import *
```
or
```python
import musicpy as mp
```
to avoid possible conflicts with the function names and variable names of other modules.
Composition Examples
-------------
Musicpy has too many features to introduce them all here, so I will just give a simple example of music programming in musicpy:
```python
# a nylon string guitar plays broken chords on a chord progression
guitar = (C('CM7', 3, 1/4, 1/8)^2 |
C('G7sus', 2, 1/4, 1/8)^2 |
C('A7sus', 2, 1/4, 1/8)^2 |
C('Em7', 2, 1/4, 1/8)^2 |
C('FM7', 2, 1/4, 1/8)^2 |
C('CM7', 3, 1/4, 1/8)@1 |
C('AbM7', 2, 1/4, 1/8)^2 |
C('G7sus', 2, 1/4, 1/8)^2) * 2
play(guitar, bpm=100, instrument=25)
```
[Click here to hear what this sounds like (Microsoft GS Wavetable Synth)](https://drive.google.com/file/d/104QnivVmBH395dLaUKnvEXSC5ZBDBt2E/view?usp=sharing)
If you think this is too simple, musicpy can also produce music like [this](https://drive.google.com/file/d/1j66Ux0KYMiOW6yHGBidIhwF9zcbDG5W0/view?usp=sharing) within 30 lines of code (even shorter if you don't care about readability). Anyway, this is just an example of a very short piece of electronic dance music, not a showcase of complexity.
For more musicpy composition examples, please refer to the musicpy composition examples chapters in wiki.
Brief Introduction of Data Structures
-------------
`note`, `chord`, and `scale` are the basic classes in musicpy that build up the base of music programming, and there are many more musical classes besides.
Because of musicpy's data structure design, the `note` class is congruent to the integers, which means a note can be used directly as an int.
The `chord` class is a set of notes, so it can be seen as a set of integers, a vector, or even a matrix (e.g. a chord progression can be seen as a combination of multiple vectors, which results in a matrix with rows and columns).
Because of that, the `note`, `chord` and `scale` classes can all be used in arithmetic, for example with linear algebra and discrete mathematics. It is also possible to write algorithms following music theory logic using musicpy's data structures, or to perform experiments on music with the help of pure mathematics.
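As a standalone illustration of the note-as-integer idea (plain Python, not the musicpy API; the helper names below are made up for this sketch), transposing a chord becomes simple integer arithmetic:

```python
# Standalone sketch (not the musicpy API): treating notes as integers
# makes transposition plain integer addition.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_to_int(name: str, octave: int) -> int:
    """Map a note like ('E', 4) to a single integer (semitones above C0)."""
    return octave * 12 + NAMES.index(name)

def int_to_note(n: int) -> tuple[str, int]:
    """Inverse mapping: integer back to (name, octave)."""
    return NAMES[n % 12], n // 12

def transpose(chord: list[int], semitones: int) -> list[int]:
    """A chord is just a list of integers, so transposition is vector addition."""
    return [n + semitones for n in chord]

c_major = [note_to_int("C", 4), note_to_int("E", 4), note_to_int("G", 4)]
d_major = transpose(c_major, 2)  # up a major second
print([int_to_note(n) for n in d_major])  # [('D', 4), ('F#', 4), ('A', 4)]
```

This is the same congruence musicpy's classes build on, just stripped down to its arithmetic core.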
Many experimental music styles of today, such as serialism, aleatoric music, and postmodern music (like minimalism), can in principle be built on top of the arithmetically manipulable data structures musicpy provides. Of course, musicpy can also be used to write any kind of classical music, jazz, or pop music.
For more detailed descriptions of data structures of musicpy, please refer to wiki.
Summary
-------------
I started developing musicpy in October 2019. Currently musicpy has a complete set of music theory logic syntax, along with many composing and arranging functions as well as advanced music theory operations. For details, please refer to the wiki. I will continue to update musicpy's video tutorials and wiki.
I'm working on musicpy continuously and updating it frequently; more and more musical features will be added so that musicpy can do more with music.
Thank you for your support~
If you are interested in the latest progress and development plans for musicpy, you can take a look at the repository [musicpy_dev](https://github.com/Rainbow-Dreamer/musicpy_dev)
Contact
-------------
Discord: Rainbow Dreamer#7122
qq: 2180502841
Bilibili account: Rainbow_Dreamer
email: 2180502841@qq.com / q1036889495@gmail.com
Discussion group:
[](http://discord.gg/m4xEzPQ76V)
QQ discussion group: 364834221
## Donation
This project is developed by Rainbow Dreamer in his spare time to create an interesting music composition library and a high-level MIDI toolkit. If you find this project useful and want to support it and its future development, please consider sponsoring it by clicking the sponsor button; it would help me out a lot.
[](https://patreon.com/rainbow_dreamer)
Reasons Why I Develop This Language and Keep Working on This Project
-------------
There are two main reasons why I developed this language. First, compared with project files and MIDI files, which simply store unitary information such as notes, intensity, and tempo, it is more meaningful to represent how a piece of music is realized from a compositional point of view, in terms of music theory. Most music is extremely regular in music theory terms, as long as it is not modernist atonal music, and these rules can be greatly simplified by abstracting them into logical statements of music theory. (A MIDI file with 1000 notes, for example, can often be reduced to a few lines of code from a music theory perspective.) Second, the language was developed so that a composing AI could compose with a real understanding of music theory (instead of deep learning fed with a lot of data); the language is also an interface that allows an AI to compose with a human-like mind once it understands the syntax of music theory. We can tell the AI the rules of music theory, what is good to do and what is not, and since these things can be quantified, this music theory library can also serve as an interface to communicate music between people and AI. For example, if you want an AI to learn someone's composing style, you can quantify that person's style in music theory terms; each style corresponds to a different set of music theory logic rules, which can be written into the AI, and the AI can then imitate that person's style. If it is the AI's own original style, then it is looking for possibilities among various composition rules.
I think that, without deep learning or neural networks, teaching an AI music theory and someone's stylized music theory rules may work better than deep learning and big-data training. That is why I want to use this library to teach an AI human music theory, so the AI can understand music theory in a real sense and composing will not be hard and random; avoiding deep learning was one of my original reasons for writing this library. Abstracting the music theory rules of different musicians is genuinely difficult, but I will keep working on this algorithm qwq. In fact, a musician can also tell the AI directly how he likes to write music (that is, his own unique music theory preference rules), and the AI will imitate it very well, because at that point the AI really knows music theory and the composition is unlikely to feel mechanical or random. At that point, what the AI is thinking is exactly what the musician is thinking.
The AI does not have to follow the music theory rules we give it; we can instead set a concept of "preference" for the AI. The AI will have a certain degree of preference for a certain style, but beyond that it will have its own unique style found within the rules of "correct music theory", so the AI can be said to "have been influenced by some musicians while composing in its own original style". When this preference is 0, the AI's composition will be exactly the style it found through music theory, like a person who learns music theory on his own and works out his own composition style. An AI that knows music theory can easily find its own unique style to compose in, and we do not even need to give it data to train on; we just teach it music theory.
So how do we teach music theory to an AI? Ignoring the category of modernist music for the moment, most music follows some very basic rules of music theory. The rules here refer to what is correct to write and what is a mistake. For example, when writing harmonies, four-part homophony is often to be avoided, especially when writing orchestral parts in arrangements. For example, a chord containing a minor second (or minor ninth) between its tones will sound more clashing. For example, when the AI decides to write a piece starting in A major, it should pick chords from the A major scale step by step, possibly going off-key, add a few secondary chords, and after writing the verse it may modulate by the circle of fifths, by major or minor thirds, to the parallel major or minor, and so on. What we need to do is tell the AI how to write music correctly and, furthermore, how to write it in a way that sounds good. The AI will then learn music theory well, will not forget it, and will be less likely to make mistakes, so it can write music that is truly its own; it will really know what music and music theory are. Because what this library's language does is abstract music theory into logical statements, every time we give "lessons" to the AI we are expressing a person's own music theory concepts in the language of this library and writing them into the AI's database. In this way, the AI really learns music theory. A composing AI built this way needs no deep learning, training set, or big data; by contrast, a composing AI trained by deep learning does not actually know what music theory is and has no concept of music, but just draws from a huge amount of training data. Another point is that when things can be described by concrete logic, there is no need for machine learning.
Text recognition or image classification, which are much harder to describe with abstract logic, are where deep learning is genuinely useful. | text/markdown | Rainbow-Dreamer | 1036889495@qq.com | null | null | LGPLv2.1 | music language, use codes to write music, music language for AI | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"License :: OSI Approved :: GNU Lesser General Public License v2 or later (LGPLv2+)",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Langu... | [] | https://github.com/Rainbow-Dreamer/musicpy.git | https://github.com/Rainbow-Dreamer/musicpy/archive/7.12.tar.gz | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/4.0.2 CPython/3.7.9 | 2026-02-18T13:59:48.198314 | musicpy-7.12.tar.gz | 116,810 | 12/fa/b91baef2d63947a7f650ea141d7362408f5cc37065a76348ec033adbf670/musicpy-7.12.tar.gz | source | sdist | null | false | 9c2b1671d23f68ba2933887ab7663c65 | de28278f8ab231f44ee560781ecac0187a42a5aaa9f94c2709e19407cc1c35cd | 12fab91baef2d63947a7f650ea141d7362408f5cc37065a76348ec033adbf670 | null | [] | 921 |
2.4 | pav3 | 3.0.0.dev18 | Assembly-based variant discovery | <h1 align="center"><img width="300px" src="img/logo/PAVLogo_Full.png"/></h1>
<p align="center">Phased Assembly Variant Caller</p>
***
<!-- Templated header from the pbsv github page: https://github.com/PacificBiosciences/pbsv -->
Variant caller for assembled genomes.
## Development release (Please read)
PAV 3 is currently a development release and may contain bugs. Please report problems you encounter.
PAV now uses Polars for fast data manipulation. If your job fails with strange Polars errors, such as "capacity
overflow" or "PanicException", this is likely from Polars running out of memory.
### Notes for early adopters
If you have been using development versions, please read about these changes.
**The call subcommand is deprecated.** If you have been running pav3 with the `call` subcommand, switch to `batch`
(i.e. `pav3 batch ...`). A future version will use `call` for single assemblies (with multiple haplotypes) defined on
the command line and `batch` to run all assemblies from an assembly table.
**Configuration is moving to pav.json.** The configuration file `config.json` is being renamed to `pav.json`. A future
version will ignore `config.json`.
## Install and run
### Install
```
pip install pav3
```
### Run
To run PAV, use the `pav3 batch` command after setting up configuration files (see below). A future version will add
`pav3 call` for single assemblies without requiring configuration files.
Some Python environments may require you to run `pav3` through the `python` command:
```
python -m pav3 batch
```
### Dependencies
Currently, PAV needs `minimap2` in the environment where it is run. This may change in future releases. All other
dependencies are handled by the installer.
### Output
PAV will output a VCF file for each sample called `NAME.vcf.gz`.
* `results/NAME/call_hap`: Unmerged variant calls.
* Includes tables of callable regions in reference ("callable_ref") and query ("callable_qry") coordinates.
* `results/NAME/call`: Variant calls merged across haplotypes.
All tables are in parquet format.
## Configuring PAV for batch runs
To run assemblies in batches ("pav3 batch" command), PAV reads two configuration files:
* `pav.json`: Points to the reference genome and can be used to set optional parameters.
* `assemblies.tsv`: A table of input assemblies.
### Base config: pav.json
A JSON configuration file, `pav.json`, configures PAV. Default options are built-in, and the only required option is
`reference` pointing to a reference FASTA file.
Example:
```
{
"reference": "/path/to/hg38.no_alt.fa.gz"
}
```
### Assembly table
The assembly table points PAV to input assemblies. It may be in TSV, CSV, Excel, or parquet formats (TSV and CSV may
optionally be gzipped). Each assembly has one row in the table.
Columns:
* NAME: Assembly or sample name.
* HAP_\*: One column for each assembled haplotype.
#### Name column
The `NAME` column contains the assembly name (or sample name). This column must be present and each row must have a
unique value.
#### Haplotype columns
PAV accepts one or more assembled haplotypes per assembly, each with a column in the table starting with "HAP_". Each
is a path to an input file for one assembly haplotype.
Common column names are "HAP_h1" for haplotype "h1" and "HAP_h2" for haplotype "h2". For some assemblies with known
parental origins, "HAP_mat" and "HAP_pat" are commonly used.
There must be at least one haplotype per assembly, and PAV has no limits on the number of haplotypes (i.e. 3 or more
are acceptable).
Not all assemblies need to have the same haplotypes. PAV will ignore empty "HAP_" values for each assembly. For
example, if some assemblies have an "unphased" haplotype and others do not, include "HAP_unphased" and leave it blank
for assemblies that do not have it.
Note that genotypes in the VCF file will have one allele for each haplotype defined for the assembly. For an assembly
with haplotypes "h1", "h2", and "unphased", three genotype alleles will be possible (e.g. "1|0|." for a heterozygous
variant present in "h1", not present in "h2", and uncallable in "unphased"). The order of genotypes is determined by
the order of haplotype columns in the assembly table.
Each "HAP_" column contains paths to input files in FASTA, FASTQ, GFA, or FOFN format. FOFN may contain paths to these
same file types including other FOFNs (recursive FOFNs are not recommended, but PAV will detect cycles). Multiple files
can be input by separating them by semi-colons (i.e. "path/to/file1.fasta.gz;path/to/file2.fasta.gz") and a mix of
types is possible. PAV will be fastest if the input is a single bgzipped and indexed FASTA file; in all other cases it
will build its own FASTA file.
#### Configuration column for global overrides
An optional "CONFIG" column can override global configuration parameters per assembly. Global configuration parameters
are defined in `pav.json` or are PAV default values if not defined. Values in this column are semicolon-separated lists
of key-value pairs (i.e. "key1=val1;key2=val2"). The "reference" parameter cannot be overridden per assembly.
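As a concrete illustration, a minimal assembly table might look like this (sample names and paths below are hypothetical; columns are tab-separated, and the CONFIG value reuses the "key1=val1;key2=val2" form described above):

```
NAME     HAP_h1                   HAP_h2                   CONFIG
SAMPLE1  /path/to/sample1.h1.fa   /path/to/sample1.h2.fa
SAMPLE2  /path/to/sample2.h1.fa   /path/to/sample2.h2.fa   key1=val1;key2=val2
```

SAMPLE1 uses the global configuration unchanged, while SAMPLE2 overrides two parameters for its run only.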
### A note on references
Do not use references with ALT, PATCH, or DECOY scaffolds for PAV, or generally, any assembly-based or long-read
variant calling tool. Reference redundancy may increase callset errors.
The GRCh38 HGSVC no-ALT reference for long reads can be found here:
ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/data_collections/HGSVC2/technical/reference/20200513_hg38_NoALT/
The T2T-CHM13v2.0 (hs1 on UCSC) is suitable without alteration. Custom per-sample assemblies containing a
single-haplotype or an unphased ("squashed") assembly typically also make a suitable reference as long as they are
free of large structural misassemblies and especially large false duplications.
## PAV versions
PAV uses Python package versioning with three fields:
* Major: Major changes or new features.
* Minor: Small changes, but may affect PAV's API or command-line interfaces.
* Patch: Small changes and minor new features. Patch versions do not break API or command-line compatibility, but may
  add minor features or options to the API that were not previously supported.
PAV follows Python's packaging versioning scheme (https://packaging.python.org/en/latest/discussions/versioning/).
PAV may use pre-release versions with a suffix for development releases (".devN"), alpha ("aN"), beta ("bN"), or
release candidates ("rcN"), where "N" is an integer greater than 0. For example, "3.0.0.dev1" is a development version,
"3.0.0a1" is an early alpha version, and "3.0.0rc1" is a release candidate, all of which precede the "3.0.0"
release and should not be considered production-ready.
## Cite PAV
PAV 3 does not yet have a citation. For now, use the citation for previous PAV versions, but check back for updates.
Ebert et al., “Haplotype-Resolved Diverse Human Genomes and Integrated Analysis of Structural Variation,”
Science, February 25, 2021, eabf7117, https://doi.org/10.1126/science.abf7117 (PMID: 33632895).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"agglovar>=0.0.1.dev9",
"biopython>=1.85",
"fastexcel>=0.14.0",
"frozendict>=2.4.6",
"hmmlearn>=0.3.3",
"intervaltree>=3.1.0",
"numpy>=2.3.2",
"polars[numpy,pandas,pyarrow]>=1.35.2",
"pyarrow>=21.0.0",
"pysam>=0.23.3",
"scipy>=1.16.1",
"scipy-stubs==1.16.1.1",
"snakemake>=9.9.0",
"matplotl... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.11 | 2026-02-18T13:59:35.549143 | pav3-3.0.0.dev18.tar.gz | 2,972,008 | 98/09/c017d2c7f201199e3f21aa27e785890191b47a0ef1951280fa427abf383d/pav3-3.0.0.dev18.tar.gz | source | sdist | null | false | a45e8eb37f14482fe2d3ab19a546e545 | 595d8dc497fac044cef60100ce471f20f9b039b2042293fa4964fa11d2c4855e | 9809c017d2c7f201199e3f21aa27e785890191b47a0ef1951280fa427abf383d | MIT | [
"LICENSE"
] | 221 |
2.4 | flowbio | 0.3.6 | A client for the Flow API. | # flowbio
A client for the Flow API.
```python
import flowbio
client = flowbio.Client()
client.login("your_username", "your_password")
# Upload standard data
data = client.upload_data("/path/to/file.fa", progress=True, retries=5)
print(data)
# Upload sample
sample = client.upload_sample(
"My Sample Name",
"/path/to/reads1.fastq.gz",
"/path/to/reads2.fastq.gz", # optional
progress=True,
retries=5,
metadata={
"sample_type": "RNA-Seq",
"scientist": "Charles Darwin",
"type_specific_metadata": '{"strandedness": "reverse"}',
}
)
print(sample)
# Upload multiplexed
multiplexed = client.upload_multiplexed(
"/path/to/reads.fastq.gz",
progress=True,
retries=5,
)
print(multiplexed)
# Upload annotation
annotation = client.upload_annotation(
"/path/to/annotation.csv",
progress=True,
retries=5,
)
print(annotation)
# Run pipeline
execution = client.run_pipeline(
"RNA-Seq",
"3.8.1",
"23.04.3",
params={"param1": "param2"},
data_params={"fasta": 123456789},
)
```
| text/markdown | Sam Ireland | sam@goodwright.com | null | null | MIT | nextflow bioinformatics pipeline | [
"Development Status :: 4 - Beta",
"Topic :: Internet :: WWW/HTTP",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language ::... | [] | https://github.com/goodwright/flowbio | null | !=2.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.* | [] | [] | [] | [
"tqdm",
"kirjava",
"requests"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.1 | 2026-02-18T13:59:26.744467 | flowbio-0.3.6.tar.gz | 7,588 | 33/e5/bb7da0f62e8886f38ee3dc6c2d165f2658c8f2461eb4060a6f1b43a502b9/flowbio-0.3.6.tar.gz | source | sdist | null | false | bf0cfed9a030edb6d91c8b79956545b7 | 5f229518bf3dda26b4b71b969f850146ee131081e26b66b534d015fb85eccb4f | 33e5bb7da0f62e8886f38ee3dc6c2d165f2658c8f2461eb4060a6f1b43a502b9 | null | [
"LICENSE"
] | 255 |
2.4 | datamazing | 7.0.0 | Package for working with pandas Dataset, but with specialized functions used for Energinet | # Datamazing
The Datamazing package provides an interface for various transformations of data (filtering, aggregation, merging, etc.)
## Interface
The interface is very similar to those of most DataFrame libraries (pandas, pyspark, SQL, etc.). For example, a group-by is implemented as `group(df, by=["..."])`, and a merge is implemented as `merge([df1, df2], on=["..."], how="inner")`. So, why not just use native pandas, pyspark, etc.?
1. The native libraries have some parts with a slightly annoying interface (such as pandas' inconsistent use of indexing)
2. Ability to add custom operations, used specifically for the Energinet domain.
## Backends
The package contains methods with the same interface, but for different backends. Currently, 2 backends are supported: `pandas` and `pyspark` (though not all methods are available for both). For example, when working with `pandas` DataFrames, one would use
```python
import pandas as pd
import datamazing.pandas as pdz
df = pd.DataFrame([
{"animal": "cat", "time": pd.Timestamp("2020-01-01"), "age": 1.0},
{"animal": "cat", "time": pd.Timestamp("2020-01-02"), "age": 3.0},
{"animal": "dog", "time": pd.Timestamp("2020-01-01"), "age": 5.0},
])
pdz.group(df, by="animal") \
.resample(on="time", resolution=pd.Timedelta(hours=12)) \
.agg("interpolate")
```
whereas, when working with `pyspark` DataFrame, one would instead use
```python
import datetime as dt
import pandas as pd
import pyspark.sql as ps
import datamazing.pyspark as psz
spark = ps.SparkSession.getActiveSession()
df = spark.createDataFrame([
{"animal": "cat", "time": dt.datetime(2020, 1, 1), "age": 1.0},
{"animal": "cat", "time": dt.datetime(2020, 1, 2), "age": 3.0},
{"animal": "dog", "time": dt.datetime(2020, 1, 1), "age": 5.0},
])
psz.group(df, by="animal") \
.resample(on="time", resolution=pd.Timedelta(hours=12)) \
.agg("interpolate")
```
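For intuition, here is roughly what the `group(...).resample(...).agg("interpolate")` chain computes, expressed in native pandas (a sketch of the semantics only, not datamazing's actual implementation):

```python
import pandas as pd

df = pd.DataFrame([
    {"animal": "cat", "time": pd.Timestamp("2020-01-01"), "age": 1.0},
    {"animal": "cat", "time": pd.Timestamp("2020-01-02"), "age": 3.0},
    {"animal": "dog", "time": pd.Timestamp("2020-01-01"), "age": 5.0},
])

# Per animal: resample the time series to 12-hour steps and linearly
# interpolate the missing ages at the new timestamps.
result = (
    df.set_index("time")
      .groupby("animal")["age"]
      .apply(lambda s: s.resample("12h").interpolate())
)
# cat gets an interpolated midpoint: ages 1.0, 2.0, 3.0 at 12h intervals
```

This also shows why the wrapper exists: the native version needs an index change, a groupby, and a per-group lambda, while datamazing exposes the same operation as one fluent chain.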
## Development
To setup the Python environment, run
```bash
$ pip install poetry
$ poetry install
```
To run tests locally, you need Java. It can be installed as follows:
```bash
$ sudo apt install default-jdk
```
To execute unit tests, run
```bash
$ pytest .
```
| text/markdown | Team Enigma | enigma@energinet.dk | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.10.19 Linux/6.14.0-1017-azure | 2026-02-18T13:58:56.796833 | datamazing-7.0.0-py3-none-any.whl | 25,784 | d1/b2/f703cec5d42d41542571d1ecacc27559d085078c221b1edaf88a17089a82/datamazing-7.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 25dd9f97d5648b8d26a08da613e51c03 | 5abc8510ced27051e8602f5fe5ce3ace551198e52e7f7106221ea6ab41e4c504 | d1b2f703cec5d42d41542571d1ecacc27559d085078c221b1edaf88a17089a82 | null | [] | 304 |
2.4 | oslo.log | 8.1.0 | oslo.log library | ================================
oslo.log -- Oslo Logging Library
================================
.. image:: https://governance.openstack.org/tc/badges/oslo.log.svg
.. image:: https://img.shields.io/pypi/v/oslo.log.svg
:target: https://pypi.org/project/oslo.log/
:alt: Latest Version
.. image:: https://img.shields.io/pypi/dm/oslo.log.svg
:target: https://pypi.org/project/oslo.log/
:alt: Downloads
The oslo.log (logging) configuration library provides standardized
configuration for all OpenStack projects. It also provides custom
formatters, handlers, and support for context-specific
logging (such as resource IDs).
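The context-specific logging idea can be illustrated with the standard library alone. The sketch below is not oslo.log's actual API (which builds on `oslo.config` and `oslo.context`); `ContextFilter` and the resource id field are hypothetical stand-ins:

```python
import logging

class ContextFilter(logging.Filter):
    """Attach a resource id to every record, similar in spirit to
    oslo.log's context-aware formatters (illustrative sketch only)."""

    def __init__(self, resource_id):
        super().__init__()
        self.resource_id = resource_id

    def filter(self, record):
        record.resource_id = self.resource_id
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("[%(resource_id)s] %(levelname)s %(message)s"))
log = logging.getLogger("demo")
log.addHandler(handler)
log.addFilter(ContextFilter("instance-42"))
log.setLevel(logging.INFO)
log.info("booting")  # logs: [instance-42] INFO booting
```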
* Free software: Apache license
* Documentation: https://docs.openstack.org/oslo.log/latest/
* Source: https://opendev.org/openstack/oslo.log
* Bugs: https://bugs.launchpad.net/oslo.log
* Release notes: https://docs.openstack.org/releasenotes/oslo.log/
| text/x-rst | null | OpenStack <openstack-discuss@lists.openstack.org> | null | null | null | null | [
"Environment :: OpenStack",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Lan... | [] | null | null | >=3.10 | [] | [] | [] | [
"pbr>=3.1.1",
"oslo.config>=5.2.0",
"oslo.context>=2.21.0",
"oslo.i18n>=3.20.0",
"oslo.utils>=3.36.0",
"oslo.serialization>=2.25.0",
"python-dateutil>=2.7.0",
"debtcollector>=3.0.0",
"fixtures>=3.0.0; extra == \"fixtures\"",
"systemd-python>=234; extra == \"systemd\""
] | [] | [] | [] | [
"Homepage, https://docs.openstack.org/oslo.log",
"Repository, https://opendev.org/openstack/oslo.log"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T13:55:55.475670 | oslo_log-8.1.0.tar.gz | 100,949 | 12/2e/d4f083ddf4fda98c2c5bd3fa2814f22fed59039bfdba73b7240fd332a798/oslo_log-8.1.0.tar.gz | source | sdist | null | false | a6a9cc7e292b98f9c530e1f5aaf58334 | 4b7a2c869474a1d57f84ea5a4d03d8578b04a8023c0fa5511663e77a936a0f7b | 122ed4f083ddf4fda98c2c5bd3fa2814f22fed59039bfdba73b7240fd332a798 | null | [
"LICENSE"
] | 0 |
2.4 | agentic-ai-engineering-course | 0.4.7 | Python modules for the Agentic AI Engineering course by Towards AI and Paul Iusztin. | # Agentic AI Engineering Course
Python modules for the Agentic AI Engineering course by Towards AI and Paul Iusztin. [Find more about the course](https://academy.towardsai.net/courses/agent-engineering).
## Installation
```bash
pip install agentic-ai-engineering-course
```
## Usage
### Environment Utilities
```python
from utils import load
# Load environment variables from .env file
load()
# Load with custom path and required variables
load(dotenv_path="path/to/.env", required_env_vars=["API_KEY", "SECRET"])
```
### Pretty Print Utilities
```python
from utils import wrapped, function_call, Color
# Pretty print text with custom formatting
wrapped("Hello World", title="My Message", header_color=Color.BLUE)
# Pretty print function calls
function_call(
function_call=my_function_call,
title="Function Execution",
header_color=Color.GREEN
)
```
## Development
This package is part of the Agentic AI Engineering course materials. For the full course experience, visit the main repository.
## Authors
- Paul Iusztin (p.b.iusztin@gmail.com)
- Fabio Chiusano (chiusanofabio94@gmail.com)
- Omar Solano (omarsolano27@gmail.com)
## License
Apache License 2.0 - see LICENSE file for details. | text/markdown | null | Paul Iusztin <p.b.iusztin@gmail.com>, Fabio chiusano <chiusanofabio94@gmail.com>, Omar solano <omarsolano27@gmail.com> | null | null | Apache-2.0 | agents, ai, llms, workflows | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12.11 | [] | [] | [] | [
"aiosqlite>=0.21.0",
"chromadb>=1.0.20",
"click>=8.1.8",
"fastmcp==2.12.0",
"firecrawl-py>=2.7.1",
"gitingest>=0.1.5",
"google-genai>=1.24.0",
"ipywidgets>=8.1.8",
"langchain-community>=0.4.1",
"langchain-google-genai>=3.0.0",
"langchain-perplexity>=0.1.2",
"langchain>=1.0.3",
"langgraph-che... | [] | [] | [] | [
"Homepage, https://github.com/towardsai/agentic-ai-engineering-course",
"Repository, https://github.com/towardsai/agentic-ai-engineering-course",
"Issues, https://github.com/towardsai/agentic-ai-engineering-course/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T13:55:53.934072 | agentic_ai_engineering_course-0.4.7-py3-none-any.whl | 3,538,620 | d0/99/44239d55c000d12ff2708c131143d8f5efc29d56240495a600ef57d32c6d/agentic_ai_engineering_course-0.4.7-py3-none-any.whl | py3 | bdist_wheel | null | false | 17435ad4c80c9ef6f2e8ffff64491294 | cecb1b1f9b882fb4146b7380a7608983f5c9c38c739557d6d731f0afbdd43ac7 | d09944239d55c000d12ff2708c131143d8f5efc29d56240495a600ef57d32c6d | null | [
"LICENSE"
] | 357 |
2.4 | teeclip | 0.2.2 | Cross-platform tee-to-clipboard CLI with history management | # teeclip
[](https://pypi.org/project/teeclip/)
[](https://github.com/DazzleTools/teeclip/releases)
[](https://www.python.org/downloads/)
[](https://www.gnu.org/licenses/gpl-3.0.html)
[](https://github.com/DazzleTools/teeclip/discussions)
[](docs/platform-support.md)
Like Unix `tee`, but for your clipboard — with history and encryption. One command, every platform.
## Why teeclip?
Platform clipboard tools (`clip.exe`, `pbcopy`, `xclip`) are **sinks** — they consume stdin and produce no output. That means `cmd | clip | tee` doesn't work the way you'd expect: `clip` eats the data and nothing reaches `tee`.
```bash
# What you'd expect to work (but doesn't):
echo hello | clip | tee output.txt # output.txt is EMPTY — clip ate stdin
# The workaround (but you lose stdout):
echo hello | tee output.txt | clip # works, but you can't see the output
# With teeclip — stdout + clipboard + file, one command:
echo hello | teeclip output.txt
```
teeclip is a **filter**, not a sink. Data flows through it to stdout while being copied to the clipboard. It also keeps an encrypted local history so you can recall past clips, and it works identically on Windows, macOS, Linux, and WSL — your scripts stay portable.
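The filter behaviour can be sketched in a few lines of plain Python: stream each chunk from the source to every sink instead of swallowing it. This is an illustrative sketch, not teeclip's actual implementation; `tee_filter` is a hypothetical helper:

```python
import io

def tee_filter(src, sinks, chunk_size=65536):
    """Stream every chunk from src to all sinks, so data passes
    through rather than being consumed (sketch, not teeclip's code)."""
    while chunk := src.read(chunk_size):
        for sink in sinks:
            sink.write(chunk)
    for sink in sinks:
        sink.flush()

# Mirror one input to two in-memory sinks (stand-ins for stdout and a file)
stdout_like, file_like = io.BytesIO(), io.BytesIO()
tee_filter(io.BytesIO(b"hello\n"), [stdout_like, file_like])
assert stdout_like.getvalue() == file_like.getvalue() == b"hello\n"
```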
| Task | Without teeclip | With teeclip |
|------|----------------|--------------|
| Copy + see output | `cmd \| tee /dev/tty \| clip` | `cmd \| teeclip` |
| Copy + file + stdout | `cmd \| tee file \| tee /dev/tty \| clip` | `cmd \| teeclip file` |
| Recall a previous copy | Not possible | `teeclip --get 2` |
| Encrypted history at rest | Not possible | Automatic with config |
| Same script, any OS | Requires platform detection | Just works |
## Features
- **Tee-style pass-through**: stdin flows to stdout unmodified while being copied to clipboard
- **Clipboard history**: Automatically saves piped content to a local SQLite database
- **History recall**: Browse (`--list`), retrieve (`--get N`), and manage (`--clear`) past clips
- **Encrypted storage**: AES-256-GCM encryption with OS-integrated key management (DPAPI, Keychain, Secret Service)
- **Cross-platform**: Windows, macOS, Linux (X11 + Wayland), and WSL — auto-detected, one command everywhere
- **Configurable**: `~/.teeclip/config.toml` for persistent settings (history size, encryption, backend)
- **Zero core dependencies**: Uses only Python stdlib and native OS clipboard commands
- **File output**: Supports writing to files just like standard `tee`
- **Paste mode**: Read clipboard contents back to stdout with `--paste`
## Installation
```bash
pip install teeclip
```
For encrypted clipboard history:
```bash
pip install teeclip[secure]
```
Or install from source:
```bash
git clone https://github.com/DazzleTools/teeclip.git
cd teeclip
pip install -e ".[secure]"
```
## Usage
```bash
# Copy command output to clipboard (and still see it)
echo "hello world" | teeclip
# Pipe a diff to clipboard for pasting into a PR comment
git diff | teeclip
# Copy to clipboard AND write to a file
cat data.csv | teeclip output.csv
# Append to a log file while copying to clipboard
make build 2>&1 | teeclip -a build.log
# Print current clipboard contents
teeclip --paste
# Pipe clipboard into another command
teeclip --paste | grep "error"
# Skip clipboard (act as plain tee)
echo test | teeclip --no-clipboard output.txt
# Browse clipboard history
teeclip --list
# Retrieve the 2nd most recent clip
teeclip --get 2
# Save clipboard to history (for content copied outside teeclip)
teeclip --save
# Show current config
teeclip --config
# Encrypt all stored clips (requires teeclip[secure])
teeclip --encrypt
```
## Platform Support
| Platform | Clipboard Tool | Notes |
|----------|---------------|-------|
| **Windows** | `clip.exe` / PowerShell | Built-in, no setup needed |
| **macOS** | `pbcopy` / `pbpaste` | Built-in, no setup needed |
| **Linux (X11)** | `xclip` or `xsel` | Install: `sudo apt install xclip` |
| **Linux (Wayland)** | `wl-copy` / `wl-paste` | Install: `sudo apt install wl-clipboard` |
| **WSL** | Windows clipboard via `/mnt/c/` | Auto-detected, no setup needed |
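The auto-detection can be approximated as follows. This is an illustrative sketch, not teeclip's real detection code; the backend names and the WSL environment-variable check are assumptions:

```python
import os
import shutil
import sys

def detect_backend():
    """Guess a clipboard backend from the platform
    (illustrative sketch, not teeclip's actual logic)."""
    if sys.platform == "win32":
        return "clip.exe"
    if sys.platform == "darwin":
        return "pbcopy"
    if "WSL_DISTRO_NAME" in os.environ:
        return "wsl"
    if os.environ.get("WAYLAND_DISPLAY") and shutil.which("wl-copy"):
        return "wl-copy"
    for tool in ("xclip", "xsel"):
        if shutil.which(tool):
            return tool
    return None  # no backend available

print(detect_backend())
```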
## Options
```
usage: teeclip [-h] [-a] [--paste] [--backend NAME] [--no-clipboard] [-q]
[--list [N]] [--get N] [--clear [SELECTOR]]
[--save] [--config] [--no-history] [--encrypt] [--decrypt]
[-V] [FILE ...]
positional arguments:
FILE also write to FILE(s), like standard tee
options:
-a, --append append to files instead of overwriting
--paste, -p print current clipboard contents to stdout
--backend NAME force clipboard backend
--no-clipboard, -nc
skip clipboard (act as plain tee)
-q, --quiet suppress warning messages
--list [N], -l [N]
show recent clipboard history (default: 10)
--get N, -g N retrieve Nth clip from history (1 = most recent)
--clear [SELECTOR]
delete history entries (all, or by index/range/combo)
--save, -s save current clipboard contents to history
--config show current configuration
--no-history skip history save for this invocation
--encrypt enable AES-256-GCM encryption (requires teeclip[secure])
--decrypt decrypt all stored clips
-V, --version show version and exit
```
For detailed documentation on all options and the config file, see [docs/configuration.md](docs/configuration.md). For database internals and encryption details, see [docs/database.md](docs/database.md).
## Contributions
Contributions are welcome! Please read our [Contributing Guide](CONTRIBUTING.md) for details.
Like the project?
[](https://www.buymeacoffee.com/djdarcy)
## License
teeclip, Copyright (C) 2025 Dustin Darcy
This project is licensed under the GNU General Public License v3.0 — see [LICENSE](LICENSE) for details.
| text/markdown | null | djdarcy <6962246+djdarcy@users.noreply.github.com> | null | null | GPL-3.0-or-later | tee, clipboard, cli, pipe, copy, paste, clipboard-history, cross-platform | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Prog... | [] | null | null | >=3.8 | [] | [] | [] | [
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"cryptography>=41.0; extra == \"secure\""
] | [] | [] | [] | [
"Homepage, https://github.com/DazzleTools/teeclip",
"Repository, https://github.com/DazzleTools/teeclip",
"Issues, https://github.com/DazzleTools/teeclip/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:54:35.670991 | teeclip-0.2.2.tar.gz | 52,115 | 1f/4f/d2b0d10a4fc7949f5541e93d91f2dab711c5ece7fec1dca8f0bc2231d2b6/teeclip-0.2.2.tar.gz | source | sdist | null | false | 556213eb55d76afc0d8a31db5d685de5 | 237b74c87d7a4c86cef4a754704ce314bfe029555226da3954e378552d2ff46a | 1f4fd2b0d10a4fc7949f5541e93d91f2dab711c5ece7fec1dca8f0bc2231d2b6 | null | [
"LICENSE"
] | 238 |
2.4 | python-entsoe | 0.3.1 | Python client for the ENTSO-E Transparency Platform API | # python-entsoe
Python client for the [ENTSO-E Transparency Platform](https://transparency.entsoe.eu/) API.
Typed, namespace-organized access to European electricity market data — load, prices, generation, transmission, and balancing.
```bash
pip install python-entsoe
```
## Quick Start
```python
from entsoe import Client
client = Client() # reads ENTSOE_API_KEY from environment
df = client.load.actual("2024-06-01", "2024-06-08", country="FR")
```
Strings are interpreted as timestamps in `Europe/Brussels` (CET — the ENTSO-E standard). You can override this per-client or pass `pd.Timestamp` objects directly:
```python
client = Client(tz="UTC") # override default timezone
# pd.Timestamp still works — its timezone takes priority
import pandas as pd
start = pd.Timestamp("2024-06-01", tz="Europe/Paris")
df = client.load.actual(start, "2024-06-08", country="FR")
```
Every method returns a `pandas.DataFrame` with a `timestamp` column (UTC) and a `value` column.
## Authentication
Get a free API key at https://transparency.entsoe.eu/ (register, then request a token via email).
Set it as an environment variable:
```bash
export ENTSOE_API_KEY=your-token-here
```
Or pass it directly:
```python
client = Client(api_key="your-token-here")
```
## API Reference
### `client.load`
| Method | Description | Parameters |
|--------|-------------|------------|
| `actual(start, end, country)` | Actual total system load | `country`: ISO code (e.g., `"FR"`) |
| `forecast(start, end, country)` | Day-ahead load forecast | `country`: ISO code |
```python
df = client.load.actual(start, end, country="FR")
df = client.load.forecast(start, end, country="FR")
```
### `client.prices`
| Method | Description | Parameters |
|--------|-------------|------------|
| `day_ahead(start, end, country)` | Day-ahead market prices (EUR/MWh) | `country`: ISO code |
```python
df = client.prices.day_ahead(start, end, country="FR")
# Returns: timestamp, value, currency, price_unit
```
### `client.generation`
| Method | Description | Parameters |
|--------|-------------|------------|
| `actual(start, end, country, psr_type=None)` | Actual generation per type | `psr_type`: filter by fuel (optional) |
| `forecast(start, end, country, psr_type=None)` | Wind & solar forecast | `psr_type`: filter by fuel (optional) |
| `installed_capacity(start, end, country, psr_type=None)` | Installed capacity per type | `psr_type`: filter by fuel (optional) |
| `per_plant(start, end, country, psr_type=None)` | Generation per production unit | `psr_type`: filter by fuel (optional) |
```python
# All generation types
df = client.generation.actual(start, end, country="FR")
# Solar only
df = client.generation.actual(start, end, country="FR", psr_type="B16")
# Installed capacity
df = client.generation.installed_capacity(start, end, country="FR")
```
### `client.transmission`
| Method | Description | Parameters |
|--------|-------------|------------|
| `crossborder_flows(start, end, country_from, country_to)` | Physical cross-border flows | Two country codes |
| `scheduled_exchanges(start, end, country_from, country_to)` | Scheduled commercial exchanges | Two country codes |
| `net_transfer_capacity(start, end, country_from, country_to)` | Day-ahead NTC | Two country codes |
```python
df = client.transmission.crossborder_flows(
start, end, country_from="FR", country_to="ES"
)
```
### `client.balancing`
| Method | Description | Parameters |
|--------|-------------|------------|
| `imbalance_prices(start, end, country)` | System imbalance prices | `country`: ISO code |
| `imbalance_volumes(start, end, country)` | System imbalance volumes | `country`: ISO code |
```python
df = client.balancing.imbalance_prices(start, end, country="FR")
```
## Codes & Dimensions
All ENTSO-E codes (areas, PSR types, process types, business types, price categories, etc.) are defined as structured registries in [`_mappings.py`](https://github.com/datons/python-entsoe/blob/main/src/entsoe/_mappings.py). Each entry has a `name` (DataFrame output), `slug` (programmatic identifier), and `description`.
**Country codes** — use standard ISO codes. Some bidding zones have specific codes (`DE_LU`, `IT_NORTH`, `NO_1`–`NO_5`, `SE_1`–`SE_4`, `DK_1`/`DK_2`).
> **Note:** For day-ahead prices and balancing data, use `DE_LU` instead of `DE`. See [data availability notes](docs/data-availability.md) for details.
**PSR types** — filter generation by fuel type with codes like `B16` (Solar) or slugs like `"solar"`:
```python
from entsoe import PSR_TYPES
PSR_TYPES["B16"] # {'name': 'Solar', 'slug': 'solar', 'description': 'Solar'}
```
## Timestamps
All `start` and `end` parameters accept **date strings** or **tz-aware `pd.Timestamp`** objects:
```python
# Simple — just strings (uses client's default tz: Europe/Brussels)
df = client.load.actual("2024-01-01", "2024-01-07", country="FR")
# pd.Timestamp with explicit timezone — takes priority over default
df = client.load.actual(
pd.Timestamp("2024-01-01", tz="Europe/Paris"),
pd.Timestamp("2024-01-07", tz="Europe/Paris"),
country="FR",
)
# Mixing is fine
df = client.load.actual("2024-01-01", pd.Timestamp("2024-01-07", tz="UTC"), country="FR")
# Naive pd.Timestamp (no tz) — raises InvalidParameterError
start = pd.Timestamp("2024-01-01") # ← no tz, will error
```
Returned timestamps are always in **UTC**.
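The resolution rule can be mimicked with the standard library. `resolve` below is a hypothetical helper (the real client works with `pd.Timestamp` objects), shown only to make the precedence explicit:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

DEFAULT_TZ = ZoneInfo("Europe/Brussels")

def resolve(ts, default_tz=DEFAULT_TZ):
    """Strings get the client's default tz; tz-aware datetimes pass
    through unchanged; naive datetimes are rejected (sketch only)."""
    if isinstance(ts, str):
        return datetime.fromisoformat(ts).replace(tzinfo=default_tz)
    if ts.tzinfo is None:
        raise ValueError("naive timestamps are rejected")
    return ts

# "2024-06-01" in Europe/Brussels (CEST, UTC+2) is 22:00 UTC the day before
print(resolve("2024-06-01").astimezone(ZoneInfo("UTC")).isoformat())
# 2024-05-31T22:00:00+00:00
```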
## Features
- **Autocomplete-friendly** — type `client.` and see all domains, then drill into methods
- **Automatic year-splitting** — requests spanning more than 1 year are split transparently
- **ZIP handling** — endpoints returning compressed responses are decompressed automatically
- **Retry with backoff** — rate-limited requests (HTTP 429) are retried with exponential backoff
- **Clear errors** — `NoDataError`, `InvalidParameterError`, `RateLimitError` with descriptive messages
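The year-splitting behaviour can be sketched in plain Python. `split_windows` is a hypothetical helper, not the client's internal function, written under the assumption that long ranges are split into consecutive windows of at most one year:

```python
from datetime import datetime, timedelta

def split_windows(start, end, max_days=365):
    """Split [start, end) into consecutive windows no longer than
    max_days (sketch of the client's transparent year-splitting)."""
    windows = []
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=max_days), end)
        windows.append((cur, nxt))
        cur = nxt
    return windows

for lo, hi in split_windows(datetime(2022, 1, 1), datetime(2024, 6, 1)):
    print(lo.date(), "->", hi.date())
# 2022-01-01 -> 2023-01-01
# 2023-01-01 -> 2024-01-01
# 2024-01-01 -> 2024-06-01
```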
## Error Handling
```python
from entsoe import Client, NoDataError, InvalidParameterError
client = Client()
try:
df = client.prices.day_ahead(start, end, country="FR")
except NoDataError:
print("No data available for this period")
except InvalidParameterError as e:
print(f"Bad parameters: {e}")
```
## Examples
See the [`examples/`](examples/) directory for Jupyter notebooks with plotly visualizations:
- [`load.ipynb`](examples/load.ipynb) — actual load, forecast comparison, multi-country profiles
- [`prices.ipynb`](examples/prices.ipynb) — day-ahead prices, distributions, hourly heatmap
- [`generation.ipynb`](examples/generation.ipynb) — generation mix, solar vs wind, installed capacity
- [`transmission.ipynb`](examples/transmission.ipynb) — cross-border flows, bidirectional charts, NTC
- [`balancing.ipynb`](examples/balancing.ipynb) — imbalance prices, multi-country, distribution
## Documentation
- [**Code Registries**](https://github.com/datons/python-entsoe/blob/main/src/entsoe/_mappings.py) — All ENTSO-E codes and their meanings (areas, PSR types, process types, business types, price categories)
- [**Data Availability**](docs/data-availability.md) — Known issues, geographic coverage, and API quirks
## Development
```bash
git clone https://github.com/datons/python-entsoe.git
cd python-entsoe
uv sync
# Run tests (requires ENTSOE_API_KEY in .env)
uv run pytest tests/ -v
# Regenerate example notebooks
uv run python scripts/generate_notebooks.py
```
## License
MIT
| text/markdown | null | jsulopzs <jesus.lopez@datons.com> | null | null | null | api, electricity, energy, entsoe, transparency | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas>=2.0",
"requests>=2.28"
] | [] | [] | [] | [
"Repository, https://github.com/datons/python-entsoe"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:54:27.126232 | python_entsoe-0.3.1.tar.gz | 477,614 | a0/4c/71eae9be66d1cd00652b094a47baa101ae5962f60b03aaff71456fb7fc27/python_entsoe-0.3.1.tar.gz | source | sdist | null | false | 5c9a12e7518ee8aa127f67682bfc5ee6 | 26eb5cd4d71d19a04ec4cfbdfadad18ecf79a5196afe3ec947416d8d306e2a41 | a04c71eae9be66d1cd00652b094a47baa101ae5962f60b03aaff71456fb7fc27 | MIT | [] | 271 |
2.4 | chift | 0.5.9 | Chift API client | # Chift Python Library
[](https://pypi.python.org/pypi/chift)
[](https://github.com/chift-oneapi/chift-python-sdk/actions?query=branch:main)
[](https://coveralls.io/github/chift-oneapi/chift-python-sdk?branch=master)
The Chift Python library provides convenient access to the Chift API from
applications written in the Python language.
## Documentation
See the [API docs](https://chift.stoplight.io/docs/chift-api/intro).
## Installation
You don't need this source code unless you want to modify the package. If you
just want to use the package, run:
```sh
pip install --upgrade chift
```
Install from source with:
```sh
python setup.py install
```
### Requirements
- Python 3.9+
## Usage
```python
import chift
chift.client_secret = "Spht8g8zMYWHTRaT1Qwy"
chift.client_id = "pZMQxOJJ6tl1716"
chift.account_id = "a8bfa890-e7ab-480f-9ae1-4c685f2a2a76"
chift.url_base = "http://chift.localhost:8000" # for development
# get a consumer
consumer = chift.Consumer.get("0e260397-997e-4791-a674-90ff6dab7caa")
# get all products
products = consumer.invoicing.Product.all(limit=2)
# get one product
product = consumer.invoicing.Product.get("PRD_3789488")
# print the product name
print(product.name)
```
## Development
Set up the development env:
```sh
make
```
Run all tests:
```sh
make test
```
Run the formatter:
```sh
make fmt
```
| text/markdown | Henry Hertoghe | henry.hertoghe@chift.eu | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pydantic<3.0.0,>=2.7.2",
"requests>=2.20"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.10.19 | 2026-02-18T13:53:44.412952 | chift-0.5.9.tar.gz | 120,396 | 6b/82/8c86b1c5d42a94db40765301b459d46a2b2cfbc25ccaf48c69229d1775c4/chift-0.5.9.tar.gz | source | sdist | null | false | ec22d9bbe8dc438eeeea0edf71534cc2 | 5acde6317c973c862b1054c4873267e39ea3c166648195c3b1250151fb2be9b2 | 6b828c86b1c5d42a94db40765301b459d46a2b2cfbc25ccaf48c69229d1775c4 | null | [] | 373 |
2.2 | zabel-commons | 1.10.0 | The Zabel transverse **commons** library | # zabel-commons
## Overview
This is part of the Zabel platform. The **zabel-commons** package contains
interfaces, exceptions, and helpers that are used throughout the platform.
It is not typically installed as a standalone package but comes in as a
dependency from other packages.
If you want to develop a package that offers new _elements_ for Zabel, or if
you want to create an application that will be deployed using **zabel-fabric**,
you will probably have to add this package as a dependency.
It provides six modules:
- _zabel.commons.exceptions_
- _zabel.commons.interfaces_
- _zabel.commons.selectors_
- _zabel.commons.sessions_
- _zabel.commons.servers_
- _zabel.commons.utils_
This package makes use of the **requests** library. It has no other external
dependencies.
## License
```text
Copyright (c) 2019 Martin Lafaix (martin.lafaix@external.engie.com) and others
This program and the accompanying materials are made
available under the terms of the Eclipse Public License 2.0
which is available at https://www.eclipse.org/legal/epl-2.0/
SPDX-License-Identifier: EPL-2.0
```
| text/markdown | Martin Lafaix | martin.lafaix@external.engie.com | null | null | Eclipse Public License 2.0 | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Eclipse Public License 2.0 (EPL-2.0)",
"Operating System :: OS Independent",
"Topic :: Utilities"
] | [] | https://github.com/engie-group/zabel | null | >=3.10.0 | [] | [] | [] | [
"requests>=2.32",
"bottle>=0.12.25; extra == \"bottle\"",
"bottle>=0.12.25; extra == \"all\""
] | [] | [] | [] | [] | twine/5.0.0 CPython/3.12.3 | 2026-02-18T13:53:03.413297 | zabel_commons-1.10.0-py3-none-any.whl | 32,343 | ef/cb/efac4be25ed853870a13f3a3284d0ff403c9581f2da9650b83803f32748f/zabel_commons-1.10.0-py3-none-any.whl | py3 | bdist_wheel | null | false | b14fb989e49a4a52dccfe945e4743bd5 | 809591416e20b42985f90d34af35b1809e8629bfeb24ee78956808cf04477fec | efcbefac4be25ed853870a13f3a3284d0ff403c9581f2da9650b83803f32748f | null | [] | 284 |
2.4 | dinkleberg | 1.2.2 | Your friendly neighbour when it comes to dependency management. | # dinkleberg
> "And this is where I'd put my working dependencies... IF I HAD ANY!"
**dinkleberg** is a lightweight Python utility designed to make dependency management less of a neighborhood feud. Built
to work seamlessly in any project, it ensures your environment stays green—unlike the guy next door's.
## Installation
```bash
pip install dinkleberg  # with pip
uv add dinkleberg       # or with uv
```
## Documentation
Visit the GitHub repository for detailed documentation and
examples: [dinkleberg on GitHub](https://github.com/DavidVollmers/dinkleberg/blob/main/README.md)
| text/markdown | null | null | null | null | null | dependency, package-management, automation | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent",
"Topic :: Software Developmen... | [] | null | null | >=3.12 | [] | [] | [] | [
"dinkleberg-abc>=1.1.1",
"dinkleberg-fastapi>=1.1.0; extra == \"fastapi\""
] | [] | [] | [] | [
"Homepage, https://github.com/DavidVollmers/dinkleberg",
"Documentation, https://github.com/DavidVollmers/dinkleberg/blob/main/README.md",
"Repository, https://github.com/DavidVollmers/dinkleberg.git",
"Issues, https://github.com/DavidVollmers/dinkleberg/issues",
"Changelog, https://github.com/DavidVollmers... | uv/0.8.8 | 2026-02-18T13:51:53.384230 | dinkleberg-1.2.2-py3-none-any.whl | 9,989 | f7/36/1f4ab87752e8900c98ac6d48b90d9eec74a14cccbb0aecd3986ee567fc2a/dinkleberg-1.2.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 86e439043ccba2b28f5a72bb7b53d56d | b78c95c49a96268d844988f08ce602e909ae7cd158a57aca62ba0663e9687785 | f7361f4ab87752e8900c98ac6d48b90d9eec74a14cccbb0aecd3986ee567fc2a | null | [
"LICENSE.txt"
] | 268 |
2.4 | progtc | 0.1.13 | Programmatic tool calling for your agent. | <div align="center">
<pre>
╔══════════════════════════════════════════════════════════╗
║ ██████╗ ██████╗ ██████╗ ██████╗ ████████╗ ██████╗ ║
║ ██╔══██╗██╔══██╗██╔═══██╗██╔════╝ ╚══██╔══╝██╔════╝ ║
║ ██████╔╝██████╔╝██║ ██║██║ ███╗ ██║ ██║ ║
║ ██╔═══╝ ██╔══██╗██║ ██║██║ ██║ ██║ ██║ ║
║ ██║ ██║ ██║╚██████╔╝╚██████╔╝ ██║ ╚██████╗ ║
║ ╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═════╝ ║
║ by capsa.ai ║
╚══════════════════════════════════════════════════════════╝
</pre>
Programmatic tool calling for your agent.
[](https://github.com/capsa-ai/progtc/actions/workflows/ci.yml)


</div>
---
## What is Programmatic Tool Calling?
Programmatic Tool Calling is a strategy used to orchestrate an agent's tools through code rather than through individual API round-trips. Instead of your agent requesting tools one at a time with each result being returned to its context, your agent can write code that calls multiple tools, processes their outputs, and controls what information actually enters its context window.
Programmatic Tool Calling was popularised by the likes of smolagents and Claude. `progtc` is a framework-agnostic implementation.
The challenge that `progtc` solves is that, for security, your agent's code must run in a sandboxed environment, but your tools typically run locally. You therefore need a mechanism to communicate tool-call requests and results to and from your sandbox.
## Installation
```bash
pip install progtc # client only
pip install "progtc[server]" # with server
```
Or with [uv](https://docs.astral.sh/uv/):
```bash
uv add progtc # client only
uv add "progtc[server]" # with server
```
## Quick Start
### 1. Start the Server (inside your sandbox)
```bash
progtc serve --host 0.0.0.0 --port 8000 --api-key your-secret-key
```
### 2. Execute Code from Your Client
```python
from progtc import AsyncProgtcClient
client = AsyncProgtcClient(
base_url="https://your-sandbox-url:8000",
api_key="your-secret-key",
)
# Define your tools as async functions
async def get_weather(city: str, country: str) -> str:
# Your actual implementation
return f"Weather in {city}, {country}: Sunny, 22°C"
async def search_database(query: str) -> list[dict]:
# Your actual implementation
return [{"id": 1, "name": "Result"}]
# Execute LLM-generated code that uses your tools
code = """
from tools import get_weather
weather = await get_weather("London", "UK")
print(f"The weather is: {weather}")
"""
result = await client.execute_code(
code=code,
tools={
"get_weather": get_weather,
"search_database": search_database,
},
)
print(result.stdout) # "The weather is: Weather in London, UK: Sunny, 22°C"
print(result.stderr) # ""
```
## How It Works
```mermaid
sequenceDiagram
box rgba(100, 100, 255, 0.2) Your App
participant Client as Progtc Client
end
box rgba(100, 200, 100, 0.2) Code Sandbox
participant Server as Progtc Server
participant Process as Sub-Process
end
Client->>Server: POST /execute-code
Server->>Process: code
Note over Process: execute code
Process->>Server: tool call
Server->>Client: SSE: tool call
activate Process
Note over Process: paused
Note over Client: execute tool locally
Client->>Server: POST /tool-result
deactivate Process
Server->>Process: tool result
Note over Process: continue execution...
Process->>Server: stdout, stderr
Server->>Client: SSE: stdout, stderr
```
1. **Your client** sends code + a list of available tool names to the progtc server
2. **The server** executes the code in an isolated process, injecting a `tools` module
3. **When code calls a tool**, the server streams the call back to your client via SSE
4. **Your client** executes the tool locally and sends the result back
5. **The server** resumes code execution with the result
6. **Stdout/stderr** are captured and streamed back when execution completes
## Code Guidelines
To use tools, your code should import them from the `tools` module:
```python
from tools import my_tool
```
Tools are exposed as async functions, so they must be awaited:
```python
from tools import my_tool
await my_tool()
```
You will receive stdout and stderr, so print the variables you want to see:
```python
from tools import tool_a, tool_b
a = await tool_a()
b = await tool_b(a)
print(b)
```
You can perform multiple tool calls concurrently using `asyncio.gather`:
```python
from tools import get_weather, search_database
import asyncio
# Call tools like regular async functions
weather, results = await asyncio.gather(
get_weather("Tokyo", "Japan"),
search_database("hotels"),
)
print(f"Weather: {weather}")
print(f"Results: {results}")
```
> **Note:** The code runs in a top-level async context, so you can use `await` directly without defining an async function.
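The injected `tools` module and the top-level `await` support can be sketched with the standard library. This is a local illustrative sketch, not progtc's server (which runs the code in a separate sandboxed process and streams tool calls back over SSE); `run_snippet` is a hypothetical helper:

```python
import ast
import asyncio
import sys
import types

async def run_snippet(code, tools):
    """Execute a snippet with top-level await and a fake 'tools' module
    (local sketch; progtc does this inside a sandboxed sub-process)."""
    mod = types.ModuleType("tools")
    for name, fn in tools.items():
        setattr(mod, name, fn)
    sys.modules["tools"] = mod  # make `from tools import x` work
    try:
        compiled = compile(code, "<snippet>", "exec",
                           flags=ast.PyCF_ALLOW_TOP_LEVEL_AWAIT)
        result = eval(compiled, {})
        if asyncio.iscoroutine(result):  # present when the snippet awaits
            await result
    finally:
        sys.modules.pop("tools", None)

async def greet(name):
    return f"hello {name}"

asyncio.run(run_snippet(
    "from tools import greet\nprint(await greet('world'))",
    {"greet": greet},
))
# prints: hello world
```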
## Server CLI Options
```bash
progtc serve [OPTIONS]
```
| Option | Default | Description |
| -------------------------- | ----------------------- | ------------------------------------------- |
| `--host` | `127.0.0.1` | Host to bind to |
| `--port` | `8000` | Port to bind to |
| `--api-key` | (env: `PROGTC_API_KEY`) | API key for authentication |
| `--tool-call-timeout` | `10.0` | Timeout for individual tool calls (seconds) |
| `--code-execution-timeout` | `30.0` | Total timeout for code execution (seconds) |
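The two timeout options can be thought of as `asyncio.wait_for` wrappers at different granularities. A hypothetical sketch of how a code-execution timeout could be enforced (the helper names here are illustrative, not progtc's API):

```python
import asyncio

async def run_user_code(seconds: float) -> str:
    # Stands in for the real code-execution coroutine
    await asyncio.sleep(seconds)
    return "success"

async def execute_with_timeout(seconds: float, timeout: float) -> str:
    # Map an exceeded --code-execution-timeout to an error result
    try:
        return await asyncio.wait_for(run_user_code(seconds), timeout=timeout)
    except asyncio.TimeoutError:
        return "timeout_error"

print(asyncio.run(execute_with_timeout(0.0, timeout=1.0)))   # success
print(asyncio.run(execute_with_timeout(0.5, timeout=0.01)))  # timeout_error
```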
## Error Handling
The client returns a discriminated union, either success or one of several error types:
```python
from progtc.types import MessageType
result = await client.execute_code(code, tools)
match result.type:
    case MessageType.SUCCESS:
        print(f"Stdout: {result.stdout}")
    case MessageType.SYNTAX_ERROR:
        print(f"Syntax error: {result.stderr}")
    case MessageType.RUNTIME_ERROR:
        print(f"Runtime error: {result.stderr}")
    case MessageType.TIMEOUT_ERROR:
        print(f"Timeout: {result.stderr}")
```
## Example: Pydantic AI + E2B
See [`examples/e2b-example/`](examples/e2b-example/) for a complete example using progtc with a [pydantic-ai](https://ai.pydantic.dev) agent and an [E2B](https://e2b.dev) sandbox.
---
<p align="center">
<b>Building AI agents?</b> We're hiring: <a href="https://capsa.ai/careers">capsa.ai/careers</a>
</p>
| text/markdown | null | Callum Downie <70471360+calmdown13@users.noreply.github.com> | null | null | null | null | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx-sse>=0.4.3",
"httpx>=0.28.1",
"pydantic>=2.12.5",
"fastapi>=0.123.10; extra == \"server\"",
"rich>=14.2.0; extra == \"server\"",
"typer>=0.20.0; extra == \"server\"",
"uvicorn[standard]>=0.38.0; extra == \"server\"",
"sentry-sdk>=2.52.0; extra == \"server-sentry\""
] | [] | [] | [] | [] | uv/0.9.16 {"installer":{"name":"uv","version":"0.9.16","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T13:51:38.057354 | progtc-0.1.13.tar.gz | 102,489 | 56/cf/e23a95058878e40349e473883ce9c9eb9bedcac4fa1240a252a01ef8945a/progtc-0.1.13.tar.gz | source | sdist | null | false | 4ac828f0551a636c154115c8afe10552 | 9d13cf590f70421d873866c406acb7c79d093ff4c5c0c154c67516f4a9f42011 | 56cfe23a95058878e40349e473883ce9c9eb9bedcac4fa1240a252a01ef8945a | null | [] | 545 |
2.1 | sf2-loader | 1.26 | This is an easy-to-use soundfonts loader, player and audio renderer in python | # sf2_loader
This is an easy-to-use soundfonts loader, player and audio renderer in python.
This is probably the handiest soundfont loader, player and renderer available via pure programming at the time of writing (2021/8/29). It is a Python package for handling SoundFont files with the following functionality:
* Load any soundfont files, including sf2 and sf3
* Listen to every preset in every bank in the soundfont files that are loaded using very simple syntax
* Play or export audio files for each note in a pitch range for any instruments in the soundfont files
* Play or export the whole piece of music as audio files with custom audio effects
* Load as many soundfont files as you want
* For the rest of the functionality this soundfont loader provides, please continue reading
## Contents
* [Introduction](#Introduction)
* [Installation](#Installation)
- [Windows](#Windows)
- [Linux](#Linux)
- [macOS](#macOS)
* [Usage](#Usage)
- [Initialize a soundfont loader](#Initialize-a-soundfont-loader)
  - [Load soundfont files](#Load-soundfont-files)
- [The representation of the soundfont loader](#The-representation-of-the-soundfont-loader)
- [Change current channel, soundfont id, bank number and preset number](#Change-current-channel-soundfont-id-bank-number-and-preset-number)
- [About channel initialization](#About-channel-initialization)
- [Get the instrument names](#Get-the-instrument-names)
- [Play notes, chords, pieces and MIDI files](#Play-notes-chords-pieces-and-MIDI-files)
- [Export notes, chords, pieces and MIDI files](#Export-notes-chords-pieces-and-MIDI-files)
- [Export instruments](#Export-instruments)
- [Audio effects](#Audio-effects)
- [Pause, unpause and stop current playing sounds](#pause-unpause-and-stop-current-playing-sounds)
## Introduction
This sf2 loader is heavily integrated with [musicpy](https://github.com/Rainbow-Dreamer/musicpy), which is one of my most popular projects, focusing on music programming, analysis and composition. If you have already learned how to use musicpy to build notes, chords and pieces, you can pass them directly to the sf2 loader and let it play what you write. Besides playing music with the loaded soundfont files, I also wrote an audio renderer for the sf2 loader, which can render audio from the loaded soundfont files using the input musicpy data structures and export it as audio files; you can choose the output format (such as wav, mp3 or ogg), the output file names, sample width, frame rate, channels and so on. In fact, this project was born from my attempt at making musicpy's daw module able to load soundfont files to play and export audio files.
If you are not familiar with musicpy's data structures and are not willing to learn them for now, you can also use MIDI files directly as input to the sf2 loader and export the rendered audio files using the loaded soundfont files. However, I still recommend learning musicpy; even setting aside music programming and analysis, it can be very useful for editing and reconstructing MIDI files.
This sf2 loader is compatible with both 32-bit and 64-bit Python versions, for Python >= 3.6, so make sure your installed Python version matches the requirements of this package.
This package is currently being tested on Windows, Linux and macOS. For Windows version, this package is tested on Windows 10.
Update: (2021/12/3) After many updates, the latest version is now compatible with Windows, Linux and macOS, so no separate compatible version is needed anymore. On Linux and macOS you just need to pip install sf2_loader the same as on Windows, and then install fluidsynth and ffmpeg separately; the installation instructions have been updated.
Update: (2021/9/5) The macOS compatible version is ready; its installation and configuration are covered in the installation section of this readme. This macOS compatible version of sf2_loader was tested on Catalina 10.15.5.
Update: (2021/9/5) The Linux compatible version is ready; its installation and configuration are covered in the installation section of this readme. This Linux compatible version of sf2_loader was tested on Ubuntu 18.04.5.
**Important note 1: the required python package musicpy is updated very frequently, so please regularly update musicpy by running**
```
pip install --upgrade musicpy
```
**in cmd/terminal.**
**Important note 2: If you cannot hear any sound when running the play functions, this is because some IDEs won't wait until pygame's playback ends; they stop the whole process after all of the code has executed, without waiting for the playback. You can set `wait=True` in the parameters of the play functions, which will block the function until the playback ends, so you can hear the sounds.**
## Installation
### Windows
You can use pip to install this sf2 loader.
Run this line in cmd/terminal to install.
```
pip install sf2_loader
```
Note: This package uses pydub as a required Python package, which in turn requires ffmpeg or libav to be installed in order to handle non-wav files (like mp3 and ogg files), so I strongly recommend installing ffmpeg/libav and configuring it correctly to make pydub work properly. You can refer to this [link](https://github.com/jiaaro/pydub#getting-ffmpeg-set-up), pydub's GitHub readme, to see how to do it, or you can follow the steps I provide here, which are easier and faster.
Firstly, download the ffmpeg zip file from this [link](https://github.com/Rainbow-Dreamer/musicpy/releases/latest), this is from the release page of musicpy which requires ffmpeg for the musicpy's daw module.
Then, unzip the folder `ffmpeg` from the zip file, put the folder in `C:\`
Then, add the path `C:\ffmpeg\bin` to the system environment variable PATH.
Finally, restart the computer.
Now you are all done with the set up of ffmpeg for pydub. If there are no warnings about ffmpeg from pydub pop up after you import this package, then you are good to go.
### Linux
You can use pip to install this sf2 loader, which is the same as in Windows.
Then, there are some important and necessary steps to configure this package in order to use it on Linux:
Firstly, you need to install fluidsynth on Linux, you can refer to this [link](https://github.com/FluidSynth/fluidsynth/wiki/Download#distributions) to see how to install fluidsynth on different Linux systems. Here I will put the install command for Ubuntu and Debian:
```
sudo apt-get install fluidsynth
```
Run this command in a terminal on Ubuntu or Debian, and wait for fluidsynth to finish installing.
Secondly, you need to install ffmpeg on Linux (for the same reason as on Windows); you can just run this command in a terminal to install ffmpeg on Linux:
```
sudo apt-get install ffmpeg libavcodec-extra
```
### macOS
You can use pip to install this sf2 loader, which is the same as in Windows.
Then, there are some important and necessary steps to configure this package in order to use it on macOS:
Firstly, you need to install fluidsynth on macOS; the easiest way to install fluidsynth on macOS is using homebrew. Make sure you have installed homebrew on macOS first, then run `brew install fluidsynth` in a terminal and wait for fluidsynth to be installed.
If you haven't installed homebrew before and cannot find a good way to install it, here is a very easy way to install homebrew on macOS, thanks to Ferenc Yim's answer to this [Stack Overflow question](https://stackoverflow.com/questions/29910217/homebrew-installation-on-mac-os-x-failed-to-connect-to-raw-githubusercontent-com):
open this [link](https://raw.githubusercontent.com/Homebrew/install/master/install.sh) in your browser, right-click and save it to your computer, then open a terminal and run it with
`/bin/bash path-to/install.sh`, and wait for homebrew to be installed.
Secondly, you need to install ffmpeg on macOS (for the same reason as on Windows); you can just run this command in a terminal to install ffmpeg on macOS using homebrew:
```
brew install ffmpeg
```
### Install audioop-lts (Python >=3.13)
The `audioop` module was removed in Python 3.13 ([doc](https://docs.python.org/3/library/audioop.html)), so it is necessary to also install [a third-party port](https://pypi.org/project/audioop-lts/):
```
pip install audioop-lts
```
## Usage
Here is the syntax for the most important functionality of this sf2 loader.
Firstly, you can import this sf2 loader using this line:
```python
import sf2_loader as sf
```
or you can use this line, which imports without a namespace, but this is not recommended in big projects because of potential naming conflicts with other Python packages you import.
```python
from sf2_loader import *
```
Here we will use the first way of import as the standard.
When you install sf2_loader, the musicpy package is installed at the same time. The musicpy package is imported in sf2_loader as `mp`, so you can use the musicpy package directly via `sf.mp`.
### Initialize a soundfont loader
To initialize a soundfont loader, you can pass a soundfont file path to the class `sf2_loader`, or leave it as empty, and use `load` function of `sf2_loader` to load soundfont files later.
```python
loader = sf.sf2_loader(soundfont_file_path)
# or
loader = sf.sf2_loader()
# examples
loader = sf.sf2_loader(r'C:\Users\Administrator\Desktop\celeste.sf2')
# or
loader = sf.sf2_loader('C:/Users/Administrator/Desktop/celeste.sf2')
```
### Load soundfont files
You can load a soundfont file when you initialize an `sf2_loader` by passing a soundfont file path to the initialization function, or use the `load` function of the sf2 loader to load new soundfont files into it.
Each time you load a soundfont file, the sf2 loader saves the soundfont file name and the soundfont id; you can access them via the attributes `file` and `sfid_list` of the sf2 loader.
You can unload a loaded soundfont file by index (1-based) using the `unload` function of the sf2 loader.
```python
loader = sf.sf2_loader(soundfont_file_path)
loader.load(soundfont_file_path2)
# examples
loader = sf.sf2_loader(r'C:\Users\Administrator\Desktop\celeste.sf2')
loader.load(r'C:\Users\Administrator\Desktop\celeste2.sf2')
>>> loader.file
['C:\\Users\\Administrator\\Desktop\\celeste.sf2', 'C:\\Users\\Administrator\\Desktop\\celeste2.sf2']
# if the soundfont file does not exist in the given file path, the soundfont id will be -1
>>> loader.sfid_list
[1, 2]
loader.unload(2) # unload the second loaded soundfont files of the sf2 loader
>>> loader.file
['C:\\Users\\Administrator\\Desktop\\celeste.sf2']
```
### The representation of the soundfont loader
You can print the sf2 loader and get the information that the sf2 loader currently has.
The channel number, bank number and preset number are 0-based, the soundfont id is 1-based.
```python
>>> loader
[soundfont loader]
loaded soundfonts: ['C:\\Users\\Administrator\\Desktop\\celeste.sf2', 'C:\\Users\\Administrator\\Desktop\\celeste2.sf2']
soundfonts id: [1, 2]
current channel: 0
current soundfont id: 1
current soundfont name: celeste.sf2
current bank number: 0
current preset number: 0
current preset name: Stereo Grand
```
### Change current channel, soundfont id, bank number and preset number
Each channel of the sf2 loader has 3 attributes: SoundFont id, bank number and preset number. The sf2 loader has an attribute `current_channel`, which is used when displaying information about the current channel.
You can use the `change` function of the sf2 loader to change either one or some or all of the current channel, soundfont id, bank number and preset number of the sf2 loader. You can use either preset number or preset name to change current preset of a channel.
There is also some syntactic sugar I added to this sf2 loader, which is very convenient in many cases.
For example, you can use `loader < preset` to change the current preset number of the sf2 loader, changing the instrument that the sf2 loader will use to play and export, while the current channel, soundfont id and bank number remain unchanged. This syntactic sugar also accepts a second parameter as the bank number, used as `loader < (preset, bank)`.
You can also use `loader % channel` to change current channel.
There is also a change function for each of the attributes current channel, soundfont id, bank number and preset number, namely `change_channel`, `change_sfid`, `change_bank` and `change_preset`.
Each change function except `change_channel` has an optional argument `channel` to specify which channel's attribute to change; if not specified, the current channel's attribute is changed by default.
```python
loader.change(channel=None,
              sfid=None,
              bank=None,
              preset=None,
              correct=True,
              hide_warnings=True,
              mode=0)
# If you only need to change one or some of the attributes,
# you can just specify the parameters you want to change,
# the unspecified parameters will remain unchanged.
# correct: if you set it to True,
# when the given parameters cannot find any valid instruments,
# the sf2 loader will go back to the program before the change,
# if you set it to False, the program will be forced to change to the
# given parameters regardless of whether the sf2 loader can find any valid
# instruments or not
# hide_warnings: prevent warning messages from external C/C++ libraries printed to the terminal or not
# mode: if set to 0, then when channel is specified, the current channel of the sf2 loader will be changed
# to that channel, and then change other specified attributes, otherwise,
# change other attributes within the specified channel,
# but current channel of the sf2 loader remain unchanged
# examples
loader.change(preset=2) # change current preset number to 2
loader.change(preset='Strings') # change current preset to Strings
loader.change(bank=9, preset=3) # change current bank number to 9 and current preset number to 3
loader.change_preset(2) # change current preset number to 2
loader.change_preset('Strings') # change current preset to Strings
loader.change_bank(9) # change current bank number to 9
loader.change_bank(9, channel=1) # change current bank number of channel 1 to 9
loader.change_channel(2) # change current channel to 2
loader.change_sfid(2) # change current soundfont id to 2
loader.change_soundfont('celeste2.sf2')
# change current soundfont file to celeste2.sf2, the parameter could be full path or
# just the file name of the soundfont file, but it must be loaded in
# the sf2 loader already
loader < 2 # change current preset number to 2
loader < 'Strings' # change current preset to Strings
loader < (3, 9) # change current bank number to 9, current preset number to 3
loader < ('Strings', 9) # change current bank number to 9, current preset to Strings
loader % 1 # change current channel to 1
```
### About channel initialization
Note that when an sf2 loader is initialized, channel 0 is automatically initialized, but the other channels are not. If you use an uninitialized channel to play a sound or render audio, you will get no sound; the same is true for a channel that is initialized but has an invalid preset. However, channels are initialized automatically in this sf2 loader to make things easier.
When you change to a channel that is not initialized, the program automatically initializes it by selecting the first loaded SoundFont id for that channel and the first valid instrument in bank 0 (or bank 128 for channel 9). If no valid instrument is found in bank 0 (or bank 128 for channel 9), that channel is initialized with no valid current preset, and you will need to select a valid preset for it by looking at the result of the function `all_instruments`, which returns all of the available banks and presets in the current SoundFont file; I will talk about this function later.
The automatic initialization of uninitialized channels also takes place when you try to play or export a musicpy piece instance or MIDI files using uninitialized channels (but not when playing or exporting a note or chord). However, there is at least one situation where you will still get no sound with uninitialized channels: when the automatic initialization of a channel cannot find a valid preset for the initial bank number, the channel remains initialized but with no valid current preset. In this case, you will need to select a valid preset for that channel yourself in order to get sound when playing or exporting using that channel.
If you want to manually initialize a channel, you can use the `init_channel` function of the sf2 loader, which takes a parameter `channel`.
The initial bank number is 0 for each channel except channel 9, which is 128, since channel 9 is for percussion by default. The initial preset number for each channel is 0, but this might not be the first valid preset number for the current SoundFont id and bank number. The SoundFont id of a channel that is not initialized is 0; once a channel is initialized, it has a SoundFont id >= 1. You can use this information to check whether a channel is already initialized. To get the current SoundFont id of a channel, use the `get_sfid` function of the sf2 loader, which I will talk about later, or use the `valid_channel` function of the sf2 loader to check whether a channel is already initialized.
You can initialize as many channels as you want in this sf2 loader, which can be more than 16 channels (the restriction of MIDI 1.0). But since most MIDI files out there have at most 16 channels, this advantage does not help much if you use MIDI files directly with this sf2 loader. If you use the `export_piece` function to export a musicpy piece instance to audio files, the number of channels and tracks of a piece instance can exceed 16, and they will be successfully rendered to audio. If you are interested in this, check out the piece data structure in the musicpy wiki.
To reset the configuration of all channels, you can use the `reset_all_channels` function of the sf2 loader, which resets all channels to the uninitialized state.
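The initialization rules above can be sketched as plain bookkeeping. This is a hypothetical helper, independent of fluidsynth and sf2_loader (the real library tracks this state internally):

```python
def initial_channel_state(channel: int, first_sfid: int = 1) -> dict:
    """Default state of a freshly initialized channel, per the rules above:
    the first loaded SoundFont id, bank 128 for channel 9 (percussion),
    bank 0 otherwise, and preset 0."""
    bank = 128 if channel == 9 else 0
    return {"sfid": first_sfid, "bank": bank, "preset": 0}

def is_initialized(state: dict) -> bool:
    # An uninitialized channel has SoundFont id 0; initialized ones are >= 1
    return state.get("sfid", 0) >= 1

print(initial_channel_state(9))  # {'sfid': 1, 'bank': 128, 'preset': 0}
print(initial_channel_state(0))  # {'sfid': 1, 'bank': 0, 'preset': 0}
```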
### Get the instrument names
If you want to get the instrument names of the soundfont files loaded in the sf2 loader, you can use the `get_all_instrument_names` function, which gives you a list of the instrument names in the current soundfont file's current bank (or the ones you specify), trying preset numbers starting from 0 up to a given maximum. By default, the maximum number of preset numbers to try is 128, i.e. presets 0 to 127. If you want the exact preset numbers for all of the instrument names in the current bank, set the parameter `get_ind` to `True`.
```python
loader.get_all_instrument_names(sfid=None,
                                bank=None,
                                max_num=128,
                                get_ind=False,
                                mode=0,
                                return_mode=0,
                                hide_warnings=True)
# mode: when get_ind is True, if mode is 1, the current preset number will be set to the first available
# instrument in the current bank number
# return_mode: if it is 0, then when get_ind is set to True, this function
# will return a dictionary which key is the preset number, value is the
# corresponding instrument name; if it is 1, then when get_ind is set to True,
# this function will return a tuple of 2 elements, which first element is
# a list of instrument names and second element is a list of the
# corresponding preset numbers
```
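To illustrate the two `return_mode` shapes, here is a hypothetical converter from the `return_mode=0` dictionary to the `return_mode=1` tuple (the sample data below is made up):

```python
def to_return_mode_1(preset_to_name: dict) -> tuple:
    """Convert the return_mode=0 shape (preset number -> instrument name)
    into the return_mode=1 shape: (names, preset numbers)."""
    presets = sorted(preset_to_name)
    names = [preset_to_name[p] for p in presets]
    return names, presets

# Made-up sample data in the return_mode=0 shape
sample = {0: "Stereo Grand", 1: "Bright Yamaha Grand", 9: "Glockenspiel"}
names, presets = to_return_mode_1(sample)
print(names)    # ['Stereo Grand', 'Bright Yamaha Grand', 'Glockenspiel']
print(presets)  # [0, 1, 9]
```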
If you want to get all of the instrument names in all of the available banks of the soundfont files loaded in the sf2 loader, you can use the `all_instruments` function, which gives you a dictionary whose keys are the available bank numbers and whose values are dictionaries mapping preset numbers to instrument names. You can specify the maximum bank number and preset number to try; the default maximum bank number is 129 (banks 0 to 128) and the default maximum preset number per bank is 128 (presets 0 to 127). You can also specify the soundfont id to get the instrument names of a specific soundfont file, in case you have loaded multiple soundfont files into the sf2 loader.
```python
loader.all_instruments(max_bank=129, max_preset=128, sfid=None, hide_warnings=True)
# max_bank: the maximum bank number to try,
# the default value is 129, which is from 0 to 128
# max_preset: the maximum preset number to try,
# the default value is 128, which is from 0 to 127
# sfid: you can specify the soundfont id to get the instrument names
# of the soundfont file with the soundfont id
```
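A hypothetical walk over the nested `{bank: {preset: name}}` dictionary shape returned by `all_instruments`, with made-up sample data:

```python
def flatten_instruments(banks: dict) -> list:
    """Turn {bank: {preset: name}} into sorted (bank, preset, name) triples."""
    return [(bank, preset, names[preset])
            for bank, names in sorted(banks.items())
            for preset in sorted(names)]

# Made-up sample data in the all_instruments shape
sample = {0: {0: "Grand Piano", 40: "Violin"}, 128: {0: "Standard Drum Kit"}}
for bank, preset, name in flatten_instruments(sample):
    print(bank, preset, name)
```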
To get the instrument name of a given soundfont id, bank number and preset number, you can use `get_instrument_name` function.
```python
loader.get_instrument_name(sfid=None,
                           bank=None,
                           preset=None,
                           hide_warnings=True)
```
To get current instrument name, you can use `get_current_instrument` function.
```python
loader.get_current_instrument()
```
To get current soundfont id, bank number and preset number of a given channel, you can use `channel_info` function, which returns a tuple `(sfid, bank, preset)`.
To get one of the current SoundFont id, bank number, preset number or preset name of a channel, you can use the functions `get_sfid`, `get_bank`, `get_preset` and `get_preset_name`, which all take a parameter `channel` with a default value of `None`; if the channel parameter is None, the current channel is used. The function `get_preset_name` returns None if the channel has not yet been initialized.
```python
loader.channel_info(channel=None)
# channel: if channel is None, then returns the channel info of current channel
loader.get_sfid() # get current channel's SoundFont id
loader.get_bank(1) # get current bank number of channel 1
loader.get_preset(1) # get current preset number of channel 1
loader.get_preset_name(1) # get current preset name of channel 1
```
Here is an example of getting all of the instrument names in current bank number.
```python
>>> loader.get_all_instrument_names()
['Stereo Grand', 'Bright Yamaha Grand', 'Electric Piano', 'Honky-tonk EMU2', 'Electric Piano 1', 'Legend EP 2', 'St.Harpsichd_Lite', 'Clavinet', 'Celesta', 'Glockenspiel', 'Music Box', 'VivesPS06', 'prc:Marimba', 'Xylophone', 'Tubular Bells', 'Dulcimer', 'DrawbarOrgan', 'PercOrganSinkla', 'Rock Organ', 'Church Organ', 'Reed Organ', 'Accordian', 'Harmonica', 'Bandoneon', 'TyrosNylonLight', 'Steel Guitar', 'Jazz Gt', 'Dry Clean Guitar', 'Palm Muted Guitar', 'Garcia _0_29', 'Les Sus_0_30', 'Guitar Harmonic', 'Acoustic Bass', 'MM JZ.F_0_33', 'BassPick&Mutes', 'Fretless Bass', 'Slap Bass 1', 'Slap Bass 2', 'Synth Bass 1', 'Synth Bass 2', 'Violin', 'Viola', 'Cello', 'Contrabass', 'Tremolo', 'Pizzicato Section', 'ClavinovaHarp', 'Timpani', 'Strings Orchest', 'Slow Strings', 'Synth Strings 1', 'Synth Strings 2', 'Ahh Choir', 'Ohh Voices', 'SynVoxUT', 'Orchestra Hit', 'Romantic Tp', 'Solo Bo_0_57', 'Tuba', 'Sweet Muted', 'FH LONG_0_60', 'BRASS', 'AccesVirusBrass', 'Synth B_0_63', 'Soprano Sax', 'Altsoft vib', 'Blow Tenor', 'Bari Sax', 'Oboe', 'English Horn', 'Bassoon', 'Clarinet', 'Piccolo', 'Flute', 'Recorder', 'ClavinovaPanFlu', 'Bottle Blow', 'Shakuhachi', 'Whistle', 'Ocarina', 'Square Wave', 'Saw Wave', 'Calliope Lead', 'Chiffer Lead', 'Charang', 'Solo Vox', 'Fifth Sawtooth Wave', 'Bass & Lead', 'Fantasia', 'Warm Pad', 'Polysynth', 'Space Voice', 'Bowed Glass', 'Metal Pad', 'Halo Pad', 'Sweep Pad', 'Ice Rain', 'Soundtrack', 'Crystal', 'Atmosphere', 'Brightness', 'Goblin', 'Echo Drops', 'Star Theme', 'Sitar', 'Banjo', 'Shamisen', 'Koto', 'Kalimba', 'BagPipe', 'Fiddle', 'Shenai', 'Tinker Bell', 'Agogo', 'Steel Drums', 'Woodblock', 'Taiko Drum', 'Melodic Tom', 'Synth Drum', 'Reverse Cymbal', 'Fret Noise', 'Breath Noise', 'Sea Shore', 'Bird Tweet', 'Telephone', 'Helicopter', 'Applause', 'Gun Shot']
```
Here is an example of getting all of the instrument names of all of the available banks.
```python
>>> loader.all_instruments()
{0: {0: 'Grand Piano', 1: 'Bright Piano', 2: 'Rock Piano', 3: 'Honky-Tonk Piano', 4: 'Electric Piano', 5: 'Crystal Piano', 6: 'Harpsichord', 7: 'Clavinet', 8: 'Celesta', 9: 'Glockenspiel', 10: 'Music Box', 11: 'Vibraphone', 12: 'Marimba', 13: 'Xylophone', 14: 'Tubular Bells', 15: 'Dulcimer (Santur)', 16: 'DrawBar Organ', 17: 'Percussive Organ', 18: 'Rock Organ', 19: 'Church Organ', 20: 'Reed Organ', 21: 'Accordion', 22: 'Harmonica', 23: 'Bandoneon', 24: 'Nylon Guitar', 25: 'Steel String Guitar', 26: 'Jazz Guitar', 27: 'Clean Guitar', 28: 'Muted Guitar', 29: 'Overdrive Guitar', 30: 'Distortion Guitar', 31: 'Guitar Harmonics', 32: 'Acoustic Bass', 33: 'Fingered Bass', 34: 'Picked Bass', 35: 'Fretless Bass', 36: 'Slap Bass 1', 37: 'Slap Bass 2', 38: 'Synth Bass 1', 39: 'Synth Bass 2', 40: 'Violin', 41: 'Viola', 42: 'Cello', 43: 'ContraBass', 44: 'Tremolo Strings', 45: 'Pizzicato Strings', 46: 'Orchestral Harp', 47: 'Timpani', 48: 'Strings Ensemble 1', 49: 'Strings Ensemble 2', 50: 'Synth Strings 1', 51: 'Synth Strings 2', 52: 'Choir Aahs', 53: 'Voice Oohs', 54: 'Synth Voice', 55: 'Orchestra Hit', 56: 'Trumpet', 57: 'Trombone', 58: 'Tuba', 59: 'Muted Trumpet', 60: 'French Horns', 61: 'Brass Section', 62: 'Synth Brass 1', 63: 'Synth Brass 2', 64: 'Soprano Sax', 65: 'Alto Sax', 66: 'Tenor Sax', 67: 'Baritone Sax', 68: 'Oboe', 69: 'English Horns', 70: 'Bassoon', 71: 'Clarinet', 72: 'Piccolo', 73: 'Flute', 74: 'Recorder', 75: 'Pan Flute', 76: 'Blown Bottle', 77: 'Shakuhachi', 78: 'Whistle', 79: 'Ocarina', 80: 'Square Wave', 81: 'Saw Wave', 82: 'Synth Calliope', 83: 'Chiffer Lead', 84: 'Charang', 85: 'Solo Voice', 86: '5th Saw Wave', 87: 'Bass & Lead', 88: 'Fantasia (New Age)', 89: 'Warm Pad', 90: 'Poly Synth', 91: 'Space Voice', 92: 'Bowed Glass', 93: 'Metal Pad', 94: 'Halo Pad', 95: 'Sweep Pad', 96: 'Ice Rain', 97: 'Sound Track', 98: 'Crystal', 99: 'Atmosphere', 100: 'Brightness', 101: 'Goblin', 102: 'Echo Drops', 103: 'Star Theme', 104: 'Sitar', 105: 'Banjo', 106: 
'Shamisen', 107: 'Koto', 108: 'Kalimba', 109: 'Bag Pipe', 110: 'Fiddle', 111: 'Shannai', 112: 'Tinkle Bell', 113: 'Agogo', 114: 'Steel Drums', 115: 'Wood Block', 116: 'Taiko Drum', 117: 'Melodic Tom', 118: 'Synth Drum', 119: 'Reverse Cymbal', 120: 'Guitar Fret Noise', 121: 'Breath Noise', 122: 'Sea Shore', 123: 'Bird Tweets', 124: 'Telephone', 125: 'Helicopter', 126: 'Applause', 127: 'Gun Shot'}, 128: {0: 'Standard Drum Kit', 8: 'Room Drum Kit', 16: 'Power Drum Kit', 24: 'Electronic Drum Kit', 25: 'TR-808/909 Drum Kit', 32: 'Jazz Drum Kit', 40: 'Brush Drum Kit', 48: 'Orchestral Drum Kit', 49: 'Fix Room Drum Kit', 127: 'MT-32 Drum Kit'}}
```
### Play notes, chords, pieces and MIDI files
You can use the `play_note` function of the sf2 loader to play a note with a specified pitch using the current channel, soundfont id, bank number and preset number. The note can be a string representing a pitch (for example, `C5`) or a musicpy note instance. If you want to play the note with another instrument, you need to change the current preset (and other program parameters if needed) before you call `play_note`; the same goes for the other play functions.
```python
loader.play_note(note_name,
                 duration=2,
                 decay=1,
                 volume=100,
                 channel=0,
                 start_time=0,
                 sample_width=2,
                 channels=2,
                 frame_rate=44100,
                 name=None,
                 format='wav',
                 effects=None,
                 bpm=80,
                 export_args={},
                 wait=False)
# note_name: the name of the note, i.e. C5, D5, C (if the octave number
# is not specified, then the default octave number is 4), or musicpy note instance
# duration: the duration of the note in seconds
# decay: the decay time of the note in seconds
# volume: the volume of the note in MIDI velocity from 0 - 127
# channel: the channel to play the note
# start_time: the start time of the note in seconds
# sample_width: the sample width of the rendered audio
# channels: the number of channels of the rendered audio
# frame_rate: the frame rate of the rendered audio
# name: the file name of the exported audio file, this is only used in export_note function
# format: the audio file format of the exported audio file, this is only used in export_note function
# effects: audio effects you want to add to the rendered audio
# bpm: the BPM of the note
# export_args: a keyword dictionary of additional keyword arguments for exporting;
# you can refer to the keyword parameters of pydub's AudioSegment.export function;
# a useful case is specifying the bitrate of the exported mp3 file
# (when you set the format parameter to 'mp3'); for example,
# to set the bitrate to 320Kbps,
# this parameter could be {'bitrate': '320k'}
# wait: if set to True, wait till the playback ends
# examples
loader.play_note('C5') # play a note C5 using current instrument
# you will hear a note C5 playing using current instrument
loader < 25 # change to another instrument at preset number 25
loader.play_note('C5') # play a note C5 using the instrument we have changed to
# you will hear a note C5 playing using a new instrument
loader.play_note('D') # play a note without octave number specified, will play the note D4
loader.play_note(sf.mp.N('C5')) # play a note using musicpy note structure
loader.play_note('C5', duration=3) # play a note C5 for 3 seconds
```
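The default-octave rule for `note_name` can be illustrated with a hypothetical parser (sf2_loader and musicpy have their own parsing; this is only a sketch of the rule described above):

```python
def parse_note_name(note_name: str, default_octave: int = 4) -> tuple:
    """Split a note name like 'C5' or 'D#3' into (pitch, octave);
    names without an octave number ('D') default to octave 4."""
    i = len(note_name)
    while i > 0 and note_name[i - 1].isdigit():
        i -= 1  # walk back over the trailing octave digits
    pitch = note_name[:i]
    octave = int(note_name[i:]) if i < len(note_name) else default_octave
    return pitch, octave

print(parse_note_name("C5"))  # ('C', 5)
print(parse_note_name("D"))   # ('D', 4)
```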
You can use `play_chord` function of the sf2 loader to play a chord using current channel, soundfont id, bank number and preset number. The chord must be a musicpy chord instance.
```python
loader.play_chord(current_chord,
                  decay=0.5,
                  channel=0,
                  start_time=0,
                  piece_start_time=0,
                  sample_width=2,
                  channels=2,
                  frame_rate=44100,
                  name=None,
                  format='wav',
                  bpm=80,
                  fixed_decay=True,
                  effects=None,
                  pan=None,
                  volume=None,
                  length=None,
                  extra_length=None,
                  export_args={},
                  wait=False)
# current_chord: musicpy chord instance
# decay: the decay time unit in seconds, each note's decay time will be calculated
# with decay * duration of the note, or if fixed_decay is True, this decay time
# will be applied to every note, this decay time could also be a list of each note's
# decay time
# channel - bpm: same as play_note
# piece_start_time: this is used when dealing with a musicpy piece instance, you won't need to set this generally
# fixed_decay: if this is set to True, the decay time will be applied to every note
# effects: same as play_note
# pan: the pan effects you want to add to the rendered audio
# volume: the volume effects you want to add to the rendered audio
# the pan and volume effects are corresponding to the MIDI CC messages
# length: you can specify the whole length of the rendered audio in seconds (used in case of audio effects)
# extra_length: you can specify the extra length of the rendered audio in seconds (used in case of audio effects)
# export_args: same as play_note
# wait: same as play_note
# examples
loader.play_chord(sf.mp.C('C')) # play a C major chord starts at C4 (default when
# no octave number is specified)
loader.play_chord(sf.mp.C('Cmaj7', 5)) # play a Cmaj7 chord starts at C5
```
You can use the `play_piece` function of the sf2 loader to play a piece using the current channel and soundfont id. The piece must be a musicpy piece instance. Here a piece means a piece of music with multiple individual tracks, each with its own instrument (it is also fine if some or all of the tracks share the same instrument). You can customize which instrument the soundfont plays for each track by setting the `instruments` attribute of the piece instance; the instrument of a track could be a preset number or [preset, bank, (sfid)].
You can learn more about piece data structure [here](https://github.com/Rainbow-Dreamer/musicpy/wiki/Basic-syntax-of-piece-type) at musicpy wiki.
```python
loader.play_piece(current_chord,
decay=0.5,
sample_width=2,
channels=2,
frame_rate=44100,
name=None,
format='wav',
fixed_decay=True,
effects=None,
clear_program_change=False,
length=None,
extra_length=None,
track_lengths=None,
track_extra_lengths=None,
export_args={},
show_msg=False,
wait=False)
# current_chord: musicpy piece instance
# decay: the decay time for the tracks of the piece instance (which is musicpy chord
# instance), note that if this decay time is a list,
# then it will be treated as the decay time for each track separately,
# otherwise it will be applied to each track. If you want to pass the same list
# to each track, you need to pass a list of lists whose elements are identical.
# sample_width - effects: same as play_chord
# clear_program_change: when there are program change messages in the piece instance,
# the instruments are forced to change during rendering, so you cannot use the
# instrument you want to play, if you clear these messages, then you can specify
# which instruments you want to play
# length: you can specify the whole length of the rendered audio in seconds (used in case of audio effects)
# extra_length: you can specify the extra length of the rendered audio in seconds (used in case of audio effects)
# track_lengths: the length settings list of each track
# track_extra_lengths: the extra length settings list of each track
# export_args: same as play_note
# show_msg: if it is set to True, then when the sf2 loader is rendering a piece instance to audio, it will print some messages showing current process, such as `rendering track 1/16 ...` (rendering the first track of the total 16 tracks), the default value is False
# wait: same as play_note
# examples
# construct a musicpy piece instance and play it using the sf2 loader,
# here we have a chord progression from Cmaj7 to Fmaj7, with different instruments of each chord
current_piece = sf.mp.P(
tracks=[
# C function is to translate human-readable chord name to chord
sf.mp.C('Cmaj7') % (1, 1 / 8) * 4,
sf.mp.C('Fmaj7') % (1, 1 / 8) * 4,
# The code below does exactly the same job
# sf.mp.chord('C, E, G, B') % (1, 1 / 8) * 4,
# sf.mp.chord('F, A, C, E') % (1, 1 / 8) * 4
],
instruments=[1, 47],
start_times=[0, 2],
bpm=150)
loader.play_piece(current_piece)
# read a MIDI file to a musicpy piece instance and play it using the sf2 loader
current_midi_file = sf.mp.read(midi_file_path)
loader.play_piece(current_midi_file)
```
You can use the `play_midi_file` function of the sf2 loader to play a MIDI file using the current channel and soundfont id. Set the first parameter to the MIDI file path; the sf2 loader will read the MIDI file, analyze it into a musicpy piece instance, and then render it to audio data.
```python
loader.play_midi_file(current_chord,
decay=0.5,
sample_width=2,
channels=2,
frame_rate=44100,
name=None,
format='wav',
fixed_decay=True,
effects=None,
clear_program_change=False,
instruments=None,
length=None,
extra_length=None,
track_lengths=None,
track_extra_lengths=None,
export_args={},
show_msg=False,
wait=False,
**read_args)
# current_chord: the MIDI file path
# decay - clear_program_change: same as play_piece
# instruments: the list of the instruments you want to play, the sf2 loader
# will use this instrument list instead of the instrument settings in the MIDI file,
# note that this instruments list must be the same length as the number of tracks
# of the MIDI file
# length - show_msg: same as play_piece
# wait: same as play_note
# **read_args: this is the keyword arguments for the musicpy read function
# examples
# play a MIDI file given a file path using current soundfont file
loader.play_midi_file(r'C:\Users\Administrator\Desktop\test.mid')
loader.change_soundfont('celeste2.sf2') # change to another loaded soundfont file
# play a MIDI file given a file path using another soundfont file
loader.play_midi_file(r'C:\Users\Administrator\Desktop\test.mid')
# you can also specify which channel uses which soundfont file in the instruments
# parameter by specifying the soundfont id
```
You can specify which bank and preset (including channel and sfid) that each track of the MIDI file uses by setting the `instruments` parameter of the `play_midi_file` function.
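Following the per-track instrument format described above (a bare preset number, or [preset, bank, (sfid)]), an `instruments` list for a three-track MIDI file might look like the sketch below. The preset, bank and soundfont id values are illustrative, not tied to any particular soundfont file:

```python
# Each track entry uses the format described above:
#   preset               -> a bare preset number
#   [preset, bank]       -> a preset in a specific bank
#   [preset, bank, sfid] -> a preset in a bank of a specific loaded soundfont
instruments = [
    25,           # track 1: preset 25 in the default bank (illustrative numbers)
    [48, 0],      # track 2: preset 48 in bank 0
    [1, 128, 1],  # track 3: preset 1 in bank 128 of soundfont id 1
]
# the list must have one entry per MIDI track
# loader.play_midi_file(midi_file_path, instruments=instruments)  # requires a loaded soundfont
```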
### Export notes, chords, pieces and MIDI files
You can export notes, chords, pieces and MIDI files using loaded soundfont files with the `export_note`, `export_chord`, `export_piece` and `export_midi_file` functions of the sf2 loader.
The parameters of these export functions mirror their corresponding play functions, with one addition: `get_audio`. If `get_audio` is set to True, the export functions return an AudioSegment instance (pydub's audio class) containing the raw audio data for further processing. If it is set to False (the default), they export the rendered audio to a file with the name and audio file format you specify.
```python
# examples
# render a MIDI file with current soundfont files and export as a mp3 file 'test.mp3'
loader.export_midi_file(r'C:\Users\Administrator\Desktop\test.mid', name='test.mp3', format='mp3')
# if you want to specify the bitrate of the exported mp3 file to be 320Kbps
loader.export_midi_file(r'C:\Users\Administrator\Desktop\test.mid', name='test.mp3', format='mp3', export_args={'bitrate': '320k'})
```
### Export instruments
You can export each note of a specified instrument of the loaded soundfont files as audio files using the `export_instruments` function of the sf2 loader.
You can specify the pitch range of the notes; the default is from A0 to C8, which covers the most common 88 piano keys.
The duration of the notes is 6 seconds by default; you can change it in the function.
The exported audio file format is wav by default; you can change it in the function.
The exported audio file name of each note will be in the format `pitch.format` by default, where pitch is the note name such as `C5` and format is the audio file format you specify such as `wav`, so the exported file names will look like `C5.wav`, `C#5.wav`, `D5.wav`. You can customize the name format of each exported file.
---
# RaptorBT
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/raptorbt/)
[](https://www.python.org/downloads/)
[](https://www.rust-lang.org/)
[](https://pepy.tech/projects/raptorbt)
**Blazing-fast backtesting for the modern quant.**
RaptorBT is a high-performance backtesting engine written in Rust with Python bindings via PyO3. It serves as a drop-in replacement for VectorBT — delivering **HFT-grade compute efficiency** with full metric parity.
<p align="center">
<strong>5,800x faster</strong> · <strong>45x smaller</strong> · <strong>100% deterministic</strong>
</p>
---
### Quick Install
```bash
pip install raptorbt
```
### 30-Second Example
```python
import numpy as np
import raptorbt
# Configure
config = raptorbt.PyBacktestConfig(initial_capital=100000, fees=0.001)
# Run backtest (timestamps, OHLCV arrays, and boolean entry/exit signals prepared as numpy arrays)
result = raptorbt.run_single_backtest(
timestamps=timestamps, open=open, high=high, low=low, close=close,
volume=volume, entries=entries, exits=exits,
direction=1, weight=1.0, symbol="AAPL", config=config,
)
# Results
print(f"Return: {result.metrics.total_return_pct:.2f}%")
print(f"Sharpe: {result.metrics.sharpe_ratio:.2f}")
```
---
Developed and maintained by the [Alphabench](https://alphabench.in) team.
## Table of Contents
- [Overview](#overview)
- [Performance](#performance)
- [Architecture](#architecture)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Strategy Types](#strategy-types)
- [Metrics](#metrics)
- [Indicators](#indicators)
- [Stop-Loss & Take-Profit](#stop-loss--take-profit)
- [VectorBT Comparison](#vectorbt-comparison)
- [API Reference](#api-reference)
- [Building from Source](#building-from-source)
- [Testing](#testing)
---
## Overview
RaptorBT was built to address the performance limitations of VectorBT. Benchmarked by the Alphabench team:
| Metric | VectorBT | RaptorBT | Improvement |
| ----------------------------- | ------------------- | ------------ | ------------------------- |
| **Disk Footprint** | ~450MB | <10MB | **45x smaller** |
| **Startup Latency** | 200-600ms | <10ms | **20-60x faster** |
| **Backtest Speed (1K bars)** | 1460ms | 0.25ms | **5,800x faster** |
| **Backtest Speed (50K bars)** | 43ms | 1.7ms | **25x faster** |
| **Memory Usage** | High (JIT + pandas) | Low (native) | **Significant reduction** |
### Key Features
- **6 Strategy Types**: Single instrument, basket/collective, pairs trading, options, spreads, and multi-strategy
- **Monte Carlo Simulation**: Correlated multi-asset forward projection via GBM + Cholesky decomposition
- **33 Metrics**: Full parity with VectorBT including Sharpe, Sortino, Calmar, Omega, SQN, Payoff Ratio, Recovery Factor, and more
- **12 Technical Indicators**: SMA, EMA, RSI, MACD, Stochastic, ATR, Bollinger Bands, ADX, VWAP, Supertrend, Rolling Min, Rolling Max
- **Stop/Target Management**: Fixed, ATR-based, and trailing stops with risk-reward targets
- **100% Deterministic**: No JIT compilation variance between runs
- **Native Parallelism**: Rayon-based parallel processing with explicit SIMD optimizations
---
## Performance
### Benchmark Results
Tested on Apple Silicon M-series with random walk price data and SMA crossover strategy:
```
┌─────────────┬────────────┬───────────┬──────────┐
│ Data Size │ VectorBT │ RaptorBT │ Speedup │
├─────────────┼────────────┼───────────┼──────────┤
│ 1,000 bars │ 1,460 ms │ 0.25 ms │ 5,827x │
│ 5,000 bars │ 36 ms │ 0.24 ms │ 153x │
│ 10,000 bars │ 37 ms │ 0.46 ms │ 80x │
│ 50,000 bars │ 43 ms │ 1.68 ms │ 26x │
└─────────────┴────────────┴───────────┴──────────┘
```
> **Note**: First VectorBT run includes Numba JIT compilation overhead. Subsequent runs are faster but still significantly slower than RaptorBT.
### Metric Accuracy
RaptorBT produces **identical results** to VectorBT:
```
VectorBT Total Return: 7.2764%
RaptorBT Total Return: 7.2764%
Difference: 0.0000% ✓
```
---
## Architecture
```
raptorbt/
├── src/
│ ├── core/ # Core types and error handling
│ │ ├── types.rs # BacktestConfig, BacktestResult, Trade, Metrics
│ │ ├── error.rs # RaptorError enum
│ │ ├── session.rs # SessionTracker, SessionConfig (intraday sessions)
│ │ └── timeseries.rs # Time series utilities
│ │
│ ├── strategies/ # Strategy implementations
│ │ ├── single.rs # Single instrument backtest
│ │ ├── basket.rs # Basket/collective strategies
│ │ ├── pairs.rs # Pairs trading
│ │ ├── options.rs # Options strategies
│ │ ├── spreads.rs # Multi-leg spread strategies
│ │ └── multi.rs # Multi-strategy combining
│ │
│ ├── indicators/ # Technical indicators
│ │ ├── trend.rs # SMA, EMA, Supertrend
│ │ ├── momentum.rs # RSI, MACD, Stochastic
│ │ ├── volatility.rs # ATR, Bollinger Bands
│ │ ├── strength.rs # ADX
│ │ ├── volume.rs # VWAP
│ │ └── rolling.rs # Rolling Min/Max (LLV/HHV)
│ │
│ ├── metrics/ # Performance metrics
│ │ ├── streaming.rs # Streaming metric calculations
│ │ ├── drawdown.rs # Drawdown analysis
│ │ └── trade_stats.rs # Trade statistics
│ │
│ ├── signals/ # Signal processing
│ │ ├── processor.rs # Entry/exit signal processing
│ │ ├── synchronizer.rs # Multi-instrument sync
│ │ └── expression.rs # Signal expressions
│ │
│ ├── stops/ # Stop-loss implementations
│ │ ├── fixed.rs # Fixed percentage stops
│ │ ├── atr.rs # ATR-based stops
│ │ └── trailing.rs # Trailing stops
│ │
│ ├── portfolio/ # Portfolio-level analysis
│ │ ├── monte_carlo.rs # Monte Carlo forward simulation (GBM + Cholesky)
│ │ ├── allocation.rs # Capital allocation
│ │ ├── engine.rs # Portfolio engine
│ │ └── position.rs # Position management
│ │
│ ├── python/ # PyO3 bindings
│ │ ├── bindings.rs # Python function exports
│ │ └── numpy_bridge.rs # NumPy array conversion
│ │
│ └── lib.rs # Library entry point
│
├── Cargo.toml # Rust dependencies
└── pyproject.toml # Python package config
```
---
## Installation
### From Pre-built Wheel
```bash
pip install raptorbt
```
### From Source
```bash
cd raptorbt
maturin develop --release
```
### Verify Installation
```python
import raptorbt
print("RaptorBT installed successfully!")
```
---
## Quick Start
### Basic Single Instrument Backtest
```python
import numpy as np
import pandas as pd
import raptorbt
# Prepare data
df = pd.read_csv("your_data.csv", index_col=0, parse_dates=True)
# Generate signals (SMA crossover example)
sma_fast = df['close'].rolling(10).mean()
sma_slow = df['close'].rolling(20).mean()
entries = (sma_fast > sma_slow) & (sma_fast.shift(1) <= sma_slow.shift(1))
exits = (sma_fast < sma_slow) & (sma_fast.shift(1) >= sma_slow.shift(1))
# Configure backtest
config = raptorbt.PyBacktestConfig(
initial_capital=100000,
fees=0.001, # 0.1% per trade
slippage=0.0005, # 0.05% slippage
upon_bar_close=True
)
# Optional: Add stop-loss
config.set_fixed_stop(0.02) # 2% stop-loss
# Optional: Add take-profit
config.set_fixed_target(0.04) # 4% take-profit
# Run backtest
result = raptorbt.run_single_backtest(
timestamps=df.index.astype('int64').values,
open=df['open'].values,
high=df['high'].values,
low=df['low'].values,
close=df['close'].values,
volume=df['volume'].values,
entries=entries.values,
exits=exits.values,
direction=1, # 1 = Long, -1 = Short
weight=1.0,
symbol="AAPL",
config=config,
)
# Access results
print(f"Total Return: {result.metrics.total_return_pct:.2f}%")
print(f"Sharpe Ratio: {result.metrics.sharpe_ratio:.2f}")
print(f"Max Drawdown: {result.metrics.max_drawdown_pct:.2f}%")
print(f"Win Rate: {result.metrics.win_rate_pct:.2f}%")
print(f"Total Trades: {result.metrics.total_trades}")
# Get equity curve
equity = result.equity_curve() # Returns numpy array
# Get trades
trades = result.trades() # Returns list of PyTrade objects
```
---
## Strategy Types
### 1. Single Instrument
Basic long or short strategy on a single instrument.
```python
# Optional: Instrument-specific configuration
inst_config = raptorbt.PyInstrumentConfig(lot_size=1.0)
result = raptorbt.run_single_backtest(
timestamps=timestamps,
open=open_prices, high=high_prices, low=low_prices,
close=close_prices, volume=volume,
entries=entries, exits=exits,
direction=1, # 1=Long, -1=Short
weight=1.0,
symbol="SYMBOL",
config=config,
instrument_config=inst_config, # Optional: lot_size rounding, capital caps
)
```
### 2. Basket/Collective
Trade multiple instruments with synchronized signals.
```python
instruments = [
(timestamps, open1, high1, low1, close1, volume1, entries1, exits1, 1, 0.33, "AAPL"),
(timestamps, open2, high2, low2, close2, volume2, entries2, exits2, 1, 0.33, "GOOGL"),
(timestamps, open3, high3, low3, close3, volume3, entries3, exits3, 1, 0.34, "MSFT"),
]
# Optional: Per-instrument configs for lot_size and capital allocation
instrument_configs = {
"AAPL": raptorbt.PyInstrumentConfig(lot_size=1.0, alloted_capital=33000),
"GOOGL": raptorbt.PyInstrumentConfig(lot_size=1.0, alloted_capital=33000),
"MSFT": raptorbt.PyInstrumentConfig(lot_size=1.0, alloted_capital=34000),
}
result = raptorbt.run_basket_backtest(
instruments=instruments,
config=config,
sync_mode="all", # "all", "any", "majority", "master"
instrument_configs=instrument_configs, # Optional
)
```
**Sync Modes:**
- `all`: Enter only when ALL instruments signal
- `any`: Enter when ANY instrument signals
- `majority`: Enter when >50% of instruments signal
- `master`: Follow the first instrument's signals
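The four sync modes above amount to boolean reductions over the per-instrument entry signals. A minimal NumPy sketch of the idea (an illustration, not RaptorBT's internal Rust implementation):

```python
import numpy as np

# entry signals for 3 instruments over 5 bars (rows = instruments, columns = bars)
signals = np.array([
    [True,  False, True,  False, True],
    [True,  True,  False, False, True],
    [False, True,  True,  False, True],
])

all_mode      = signals.all(axis=0)                        # ALL instruments agree
any_mode      = signals.any(axis=0)                        # ANY instrument signals
majority_mode = signals.sum(axis=0) > signals.shape[0] / 2 # >50% of instruments
master_mode   = signals[0]                                 # follow the first instrument

print(all_mode)  # [False False False False  True]
```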
### 3. Pairs Trading
Long one instrument, short another with optional hedge ratio.
```python
result = raptorbt.run_pairs_backtest(
# Long leg
leg1_timestamps=timestamps,
leg1_open=long_open, leg1_high=long_high,
leg1_low=long_low, leg1_close=long_close,
leg1_volume=long_volume,
# Short leg
leg2_timestamps=timestamps,
leg2_open=short_open, leg2_high=short_high,
leg2_low=short_low, leg2_close=short_close,
leg2_volume=short_volume,
# Signals
entries=entries, exits=exits,
direction=1,
symbol="TCS_INFY",
config=config,
hedge_ratio=1.5, # Short 1.5x the long position
dynamic_hedge=False, # Use rolling hedge ratio
)
```
### 4. Options
Backtest options strategies with strike selection.
```python
result = raptorbt.run_options_backtest(
timestamps=timestamps,
open=underlying_open, high=underlying_high,
low=underlying_low, close=underlying_close,
volume=volume,
option_prices=option_prices, # Option premium series
entries=entries, exits=exits,
direction=1,
symbol="NIFTY_CE",
config=config,
option_type="call", # "call" or "put"
strike_selection="atm", # "atm", "otm1", "otm2", "itm1", "itm2"
size_type="percent", # "percent", "contracts", "notional", "risk"
size_value=0.1, # 10% of capital
lot_size=50, # Options lot size
strike_interval=50.0, # Strike interval (e.g., 50 for NIFTY)
)
```
### 5. Multi-Strategy
Combine multiple strategies on the same instrument.
```python
strategies = [
(entries_sma, exits_sma, 1, 0.4, "SMA_Crossover"), # 40% weight
(entries_rsi, exits_rsi, 1, 0.35, "RSI_MeanRev"), # 35% weight
(entries_bb, exits_bb, 1, 0.25, "BB_Breakout"), # 25% weight
]
result = raptorbt.run_multi_backtest(
timestamps=timestamps,
open=open_prices, high=high_prices,
low=low_prices, close=close_prices,
volume=volume,
strategies=strategies,
config=config,
combine_mode="any", # "any", "all", "majority", "weighted", "independent"
)
```
**Combine Modes:**
- `any`: Enter when any strategy signals
- `all`: Enter only when all strategies signal
- `majority`: Enter when >50% of strategies signal
- `weighted`: Weight signals by strategy weight
- `independent`: Run strategies independently (aggregate PnL)
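The `weighted` mode can be pictured the same way: weight each strategy's signal and threshold the combined score. A simplified illustration (the 0.5 threshold is an assumption for the sketch, not necessarily the engine's exact rule):

```python
import numpy as np

weights = np.array([0.4, 0.35, 0.25])  # strategy weights from the example above
entries = np.array([
    [True,  False, True ],
    [True,  False, False],
    [False, True,  True ],
])  # rows = strategies, columns = bars

# weighted vote: enter when the combined weight of signaling strategies exceeds 0.5
score = (entries * weights[:, None]).sum(axis=0)
weighted_entry = score > 0.5

print(score)           # [0.75 0.25 0.65]
print(weighted_entry)  # [ True False  True]
```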
---
## Metrics
RaptorBT calculates 33 performance metrics:
### Core Performance
| Metric | Description |
| ------------------ | --------------------------------- |
| `total_return_pct` | Total return as percentage |
| `sharpe_ratio` | Risk-adjusted return (annualized) |
| `sortino_ratio` | Downside risk-adjusted return |
| `calmar_ratio` | Return / Max Drawdown |
| `omega_ratio` | Probability-weighted gains/losses |
### Drawdown
| Metric | Description |
| ----------------------- | ------------------------------ |
| `max_drawdown_pct` | Maximum peak-to-trough decline |
| `max_drawdown_duration` | Longest drawdown period (bars) |
### Trade Statistics
| Metric | Description |
| --------------------- | ---------------------------- |
| `total_trades` | Total number of trades |
| `total_closed_trades` | Number of closed trades |
| `total_open_trades` | Number of open positions |
| `winning_trades` | Number of profitable trades |
| `losing_trades` | Number of losing trades |
| `win_rate_pct` | Percentage of winning trades |
### Trade Performance
| Metric | Description |
| ---------------------- | --------------------------------- |
| `profit_factor` | Gross profit / Gross loss |
| `expectancy` | Average expected profit per trade |
| `sqn` | System Quality Number |
| `avg_trade_return_pct` | Average trade return |
| `avg_win_pct` | Average winning trade return |
| `avg_loss_pct` | Average losing trade return |
| `best_trade_pct` | Best single trade return |
| `worst_trade_pct` | Worst single trade return |
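These trade-performance metrics follow their textbook definitions. A small NumPy sketch over illustrative per-trade PnL values (not RaptorBT internals):

```python
import numpy as np

trade_pnl = np.array([120.0, -60.0, 200.0, -40.0, 80.0])  # per-trade profit/loss

wins = trade_pnl[trade_pnl > 0]
losses = trade_pnl[trade_pnl < 0]

profit_factor = wins.sum() / -losses.sum()   # gross profit / gross loss
expectancy    = trade_pnl.mean()             # average profit per trade
payoff_ratio  = wins.mean() / -losses.mean() # avg win / avg loss
# System Quality Number: sqrt(N) * mean / stddev of trade PnL
sqn = np.sqrt(len(trade_pnl)) * trade_pnl.mean() / trade_pnl.std(ddof=1)

print(profit_factor)  # 4.0  (400 / 100)
print(expectancy)     # 60.0
```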
### Duration
| Metric | Description |
| ---------------------- | ------------------------------ |
| `avg_holding_period` | Average trade duration (bars) |
| `avg_winning_duration` | Average winning trade duration |
| `avg_losing_duration` | Average losing trade duration |
### Streaks
| Metric | Description |
| ------------------------ | ---------------------- |
| `max_consecutive_wins` | Longest winning streak |
| `max_consecutive_losses` | Longest losing streak |
### Other
| Metric | Description |
| ----------------- | ---------------------------------- |
| `start_value` | Initial portfolio value |
| `end_value` | Final portfolio value |
| `total_fees_paid` | Total transaction costs |
| `open_trade_pnl` | Unrealized PnL from open positions |
| `exposure_pct` | Percentage of time in market |
---
## Indicators
RaptorBT includes optimized technical indicators:
```python
import raptorbt
# Trend indicators
sma = raptorbt.sma(close, period=20)
ema = raptorbt.ema(close, period=20)
supertrend, direction = raptorbt.supertrend(high, low, close, period=10, multiplier=3.0)
# Momentum indicators
rsi = raptorbt.rsi(close, period=14)
macd_line, signal_line, histogram = raptorbt.macd(close, fast=12, slow=26, signal=9)
stoch_k, stoch_d = raptorbt.stochastic(high, low, close, k_period=14, d_period=3)
# Volatility indicators
atr = raptorbt.atr(high, low, close, period=14)
upper, middle, lower = raptorbt.bollinger_bands(close, period=20, std_dev=2.0)
# Strength indicators
adx = raptorbt.adx(high, low, close, period=14)
# Volume indicators
vwap = raptorbt.vwap(high, low, close, volume)
```
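For reference, `sma` is a plain rolling mean. A quick pure-NumPy cross-check of that definition (the cumulative-sum formulation is just one way to compute it; the first `period - 1` values are undefined):

```python
import numpy as np

close = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
period = 3

# rolling mean via cumulative sums; leading values stay NaN
sma = np.full_like(close, np.nan)
csum = np.cumsum(close)
sma[period - 1:] = (csum[period - 1:] - np.concatenate(([0.0], csum[:-period]))) / period

print(sma)  # [nan nan 2. 3. 4. 5.]
```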
---
## Stop-Loss & Take-Profit
### Fixed Percentage
```python
config = raptorbt.PyBacktestConfig(initial_capital=100000, fees=0.001)
config.set_fixed_stop(0.02) # 2% stop-loss
config.set_fixed_target(0.04) # 4% take-profit
```
### ATR-Based
```python
config.set_atr_stop(multiplier=2.0, period=14) # 2x ATR stop
config.set_atr_target(multiplier=3.0, period=14) # 3x ATR target
```
### Trailing Stop
```python
config.set_trailing_stop(0.02) # 2% trailing stop
```
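A trailing stop ratchets the exit level upward as price makes new highs (for a long position). A minimal sketch of the mechanics with made-up prices (an illustration of the behavior, not the engine's code):

```python
# 2% trailing stop on a long position, checked bar by bar
prices = [100.0, 104.0, 103.0, 108.0, 105.5, 105.0]
trail_pct = 0.02

highest = prices[0]
exit_bar = None
for i, price in enumerate(prices):
    highest = max(highest, price)           # ratchet the peak upward
    stop_level = highest * (1 - trail_pct)  # stop trails 2% below the peak
    if price <= stop_level:
        exit_bar = i
        break

print(exit_bar)  # exits at bar 4: 105.5 <= 108 * 0.98 = 105.84
```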
### Risk-Reward Target
```python
config.set_risk_reward_target(ratio=2.0) # 2:1 risk-reward ratio
```
---
## Monte Carlo Portfolio Simulation
RaptorBT includes a high-performance Monte Carlo forward simulation engine for portfolio risk analysis. It uses Geometric Brownian Motion (GBM) with Cholesky decomposition for correlated multi-asset simulation, parallelized via Rayon.
```python
import numpy as np
import raptorbt
# Historical daily returns per strategy/asset (numpy arrays)
returns = [
np.array([0.001, -0.002, 0.003, ...]), # Strategy 1 returns
np.array([0.002, 0.001, -0.001, ...]), # Strategy 2 returns
]
# Portfolio weights (must sum to 1.0)
weights = np.array([0.6, 0.4])
# Correlation matrix (N x N)
correlation_matrix = [
np.array([1.0, 0.3]),
np.array([0.3, 1.0]),
]
# Run simulation
result = raptorbt.simulate_portfolio_mc(
returns=returns,
weights=weights,
correlation_matrix=correlation_matrix,
initial_value=100000.0,
n_simulations=10000, # Number of Monte Carlo paths (default: 10,000)
horizon_days=252, # Forward projection horizon (default: 252)
seed=42, # Random seed for reproducibility (default: 42)
)
# Results
print(f"Expected Return: {result['expected_return']:.2f}%")
print(f"Probability of Loss: {result['probability_of_loss']:.2%}")
print(f"VaR (95%): {result['var_95']:.2f}%")
print(f"CVaR (95%): {result['cvar_95']:.2f}%")
# Percentile paths: list of (percentile, path_values)
# Percentiles: 5th, 25th, 50th, 75th, 95th
for pct, path in result['percentile_paths']:
print(f" P{pct:.0f} final value: {path[-1]:.2f}")
# Final values: numpy array of terminal values for all simulations
final_values = result['final_values'] # numpy array, length = n_simulations
```
### Result Fields
| Field | Type | Description |
| --------------------- | -------------------------- | ---------------------------------------------------------- |
| `expected_return` | `float` | Expected return as percentage over the horizon |
| `probability_of_loss` | `float` | Probability that final value < initial value (0.0 to 1.0) |
| `var_95` | `float` | Value at Risk at 95% confidence (percentage) |
| `cvar_95` | `float` | Conditional VaR at 95% confidence (percentage) |
| `percentile_paths` | `List[Tuple[float, List]]` | Portfolio paths at 5th, 25th, 50th, 75th, 95th percentiles |
| `final_values` | `numpy.ndarray` | Terminal portfolio values for all simulations |
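The GBM-plus-Cholesky approach described above can be sketched in pure NumPy: draw independent normal shocks, correlate them through the Cholesky factor of the correlation matrix, and compound log-returns over the horizon. This is a simplified buy-and-hold illustration with made-up drift and volatility numbers, not RaptorBT's Rust implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
mu    = np.array([0.0004, 0.0002])   # per-asset daily drift (illustrative)
sigma = np.array([0.01, 0.008])      # per-asset daily volatility (illustrative)
corr  = np.array([[1.0, 0.3], [0.3, 1.0]])
weights = np.array([0.6, 0.4])
horizon, n_sims, v0 = 252, 2000, 100_000.0

L = np.linalg.cholesky(corr)                        # correlate the shocks
z = rng.standard_normal((n_sims, horizon, 2)) @ L.T
log_ret = (mu - 0.5 * sigma**2) + sigma * z         # GBM log-returns per asset
asset_growth = np.exp(log_ret.sum(axis=1))          # compound over the horizon
final_values = v0 * (weights * asset_growth).sum(axis=1)

prob_loss = (final_values < v0).mean()                     # cf. probability_of_loss
var_95 = np.percentile(v0 - final_values, 95) / v0 * 100.0 # VaR as % of capital
print(final_values.shape)  # (2000,)
```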
---
## VectorBT Comparison
RaptorBT is designed as a drop-in replacement for VectorBT. Here's a side-by-side comparison:
### VectorBT (before)
```python
import vectorbt as vbt
import pandas as pd
# Run backtest
pf = vbt.Portfolio.from_signals(
close=close_series,
entries=entries,
exits=exits,
init_cash=100000,
fees=0.001,
)
# Get metrics
print(pf.stats()["Total Return [%]"])
print(pf.stats()["Sharpe Ratio"])
print(pf.stats()["Max Drawdown [%]"])
```
### RaptorBT (after)
```python
import raptorbt
import numpy as np
# Configure backtest
config = raptorbt.PyBacktestConfig(
initial_capital=100000,
fees=0.001,
)
# Run backtest
result = raptorbt.run_single_backtest(
timestamps=timestamps,
open=open_prices, high=high_prices,
low=low_prices, close=close_prices,
volume=volume,
entries=entries, exits=exits,
direction=1, weight=1.0,
symbol="SYMBOL",
config=config,
)
# Get metrics
print(f"Total Return: {result.metrics.total_return_pct}%")
print(f"Sharpe Ratio: {result.metrics.sharpe_ratio}")
print(f"Max Drawdown: {result.metrics.max_drawdown_pct}%")
```
### Metric Mapping
| VectorBT Key | RaptorBT Attribute |
| ------------------ | -------------------------- |
| `Total Return [%]` | `metrics.total_return_pct` |
| `Sharpe Ratio` | `metrics.sharpe_ratio` |
| `Sortino Ratio` | `metrics.sortino_ratio` |
| `Max Drawdown [%]` | `metrics.max_drawdown_pct` |
| `Win Rate [%]` | `metrics.win_rate_pct` |
| `Profit Factor` | `metrics.profit_factor` |
| `SQN` | `metrics.sqn` |
| `Omega Ratio` | `metrics.omega_ratio` |
| `Total Trades` | `metrics.total_trades` |
| `Expectancy` | `metrics.expectancy` |
---
## API Reference
### PyBacktestConfig
```python
config = raptorbt.PyBacktestConfig(
initial_capital: float = 100000.0,
fees: float = 0.001,
slippage: float = 0.0,
upon_bar_close: bool = True,
)
# Stop methods
config.set_fixed_stop(percent: float)
config.set_atr_stop(multiplier: float, period: int)
config.set_trailing_stop(percent: float)
# Target methods
config.set_fixed_target(percent: float)
config.set_atr_target(multiplier: float, period: int)
config.set_risk_reward_target(ratio: float)
```
### PyInstrumentConfig
Per-instrument configuration for position sizing and risk management.
```python
inst_config = raptorbt.PyInstrumentConfig(
lot_size=1.0, # Min tradeable quantity (1 for equity, 50 for NIFTY F&O)
alloted_capital=50000.0, # Capital allocated to this instrument (optional)
existing_qty=None, # Existing position quantity (future use)
avg_price=None, # Existing position avg price (future use)
)
# Optional: per-instrument stop/target overrides
inst_config.set_fixed_stop(0.02)
inst_config.set_trailing_stop(0.03)
inst_config.set_fixed_target(0.05)
```
**Fields:**
- `lot_size` - Minimum tradeable quantity. Position sizes are rounded down to nearest lot_size multiple. Use `1.0` for equities, `50.0` for NIFTY F&O, `0.01` for forex.
- `alloted_capital` - Per-instrument capital cap (capped at available cash).
- `existing_qty` / `avg_price` - Reserved for future live-to-backtest transitions.
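The lot_size rounding described above amounts to flooring to the nearest multiple. A tiny sketch of that behavior (`round_to_lot` is a hypothetical helper for illustration, not part of the raptorbt API):

```python
import math

def round_to_lot(quantity: float, lot_size: float) -> float:
    """Round a raw position size down to the nearest lot_size multiple."""
    return math.floor(quantity / lot_size) * lot_size

print(round_to_lot(137.0, 50.0))  # NIFTY F&O: 137 -> 100.0 (2 lots of 50)
print(round_to_lot(7.39, 1.0))    # equity: 7.39 -> 7.0 shares
print(round_to_lot(0.057, 0.01))  # forex: 0.057 -> 0.05
```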
### simulate_portfolio_mc
```python
result = raptorbt.simulate_portfolio_mc(
returns: List[np.ndarray], # Per-asset daily returns (N arrays)
weights: np.ndarray, # Portfolio weights (length N, sum to 1)
correlation_matrix: List[np.ndarray], # N x N correlation matrix
initial_value: float, # Starting portfolio value
n_simulations: int = 10000, # Number of Monte Carlo paths
horizon_days: int = 252, # Forward projection horizon in days
seed: int = 42, # Random seed for reproducibility
) -> dict
```
Returns a dictionary with keys: `expected_return`, `probability_of_loss`, `var_95`, `cvar_95`, `percentile_paths`, `final_values`.
### PyBacktestResult
```python
result = raptorbt.run_single_backtest(...)
# Attributes
result.metrics # PyBacktestMetrics object
# Methods
result.equity_curve() # numpy.ndarray
result.drawdown_curve() # numpy.ndarray
result.returns() # numpy.ndarray
result.trades() # List[PyTrade]
```
### PyBacktestMetrics
```python
metrics = result.metrics
# All available metrics
metrics.total_return_pct
metrics.sharpe_ratio
metrics.sortino_ratio
metrics.calmar_ratio
metrics.omega_ratio
metrics.max_drawdown_pct
metrics.max_drawdown_duration
metrics.win_rate_pct
metrics.profit_factor
metrics.expectancy
metrics.sqn
metrics.total_trades
metrics.total_closed_trades
metrics.total_open_trades
metrics.winning_trades
metrics.losing_trades
metrics.start_value
metrics.end_value
metrics.total_fees_paid
metrics.best_trade_pct
metrics.worst_trade_pct
metrics.avg_trade_return_pct
metrics.avg_win_pct
metrics.avg_loss_pct
metrics.avg_holding_period
metrics.avg_winning_duration
metrics.avg_losing_duration
metrics.max_consecutive_wins
metrics.max_consecutive_losses
metrics.exposure_pct
metrics.open_trade_pnl
metrics.payoff_ratio # avg win / avg loss (risk/reward per trade)
metrics.recovery_factor # net profit / max drawdown (resilience)
# Convert to dictionary (VectorBT format)
stats_dict = metrics.to_dict()
```
### PyTrade
```python
for trade in result.trades():
print(trade.id) # Trade ID
print(trade.symbol) # Symbol
print(trade.entry_idx) # Entry bar index
print(trade.exit_idx) # Exit bar index
print(trade.entry_price) # Entry price
print(trade.exit_price) # Exit price
print(trade.size) # Position size
print(trade.direction) # 1=Long, -1=Short
print(trade.pnl) # Profit/Loss
print(trade.return_pct) # Return percentage
print(trade.fees) # Fees paid
print(trade.exit_reason) # "Signal", "StopLoss", "TakeProfit"
```
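Several of the headline metrics can be recomputed from a trade list like the one above; a hedged pure-Python sketch of their usual definitions (illustrative only, not RaptorBT's internals):

```python
def trade_stats(pnls):
    """Recompute a few headline metrics from per-trade P&L values."""
    wins = [p for p in pnls if p > 0]
    losses = [p for p in pnls if p < 0]
    gross_profit = sum(wins)
    gross_loss = -sum(losses)   # positive number
    return {
        "win_rate_pct": 100.0 * len(wins) / len(pnls) if pnls else 0.0,
        "profit_factor": (gross_profit / gross_loss) if gross_loss
                         else (float("inf") if gross_profit else 0.0),
        "expectancy": sum(pnls) / len(pnls) if pnls else 0.0,
    }

stats = trade_stats([50.0, -20.0, 30.0, -10.0])
# win rate 50%, profit factor 80/30, expectancy 12.5 per trade
```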
---
## Building from Source
### Prerequisites
- Rust 1.70+ (install via [rustup](https://rustup.rs/))
- Python 3.10+
- maturin (`pip install maturin`)
### Development Build
```bash
cd raptorbt
maturin develop --release
```
### Production Build
```bash
cd raptorbt
maturin build --release
pip install target/wheels/raptorbt-*.whl
```
---
## Testing
### Rust Unit Tests
```bash
cd raptorbt
cargo test
```
### Python Integration Tests
```python
import raptorbt
import numpy as np
config = raptorbt.PyBacktestConfig(initial_capital=100000, fees=0.001)
result = raptorbt.run_single_backtest(
timestamps=np.arange(100, dtype=np.int64),
open=np.random.randn(100).cumsum() + 100,
high=np.random.randn(100).cumsum() + 101,
low=np.random.randn(100).cumsum() + 99,
close=np.random.randn(100).cumsum() + 100,
volume=np.ones(100),
entries=np.array([i % 20 == 0 for i in range(100)]),
exits=np.array([i % 20 == 10 for i in range(100)]),
direction=1,
weight=1.0,
symbol='TEST',
config=config,
)
print(f'Total Return: {result.metrics.total_return_pct:.2f}%')
print('RaptorBT is working correctly!')
```
### Comparison Test (VectorBT vs RaptorBT)
```python
import numpy as np
import pandas as pd
import vectorbt as vbt
import raptorbt
# Create test data
np.random.seed(42)
n = 500
dates = pd.date_range('2023-01-01', periods=n, freq='D')
close = np.cumprod(1 + np.random.randn(n) * 0.02) * 100
entries = np.zeros(n, dtype=bool)
exits = np.zeros(n, dtype=bool)
entries[::20] = True
exits[10::20] = True
# VectorBT
pf = vbt.Portfolio.from_signals(
close=pd.Series(close, index=dates),
entries=pd.Series(entries, index=dates),
exits=pd.Series(exits, index=dates),
init_cash=100000, fees=0.001
)
# RaptorBT
config = raptorbt.PyBacktestConfig(initial_capital=100000, fees=0.001)
result = raptorbt.run_single_backtest(
timestamps=dates.astype('int64').values,
open=close, high=close, low=close, close=close,
volume=np.ones(n), entries=entries, exits=exits,
direction=1, weight=1.0, symbol="TEST", config=config
)
print(f"VectorBT: {pf.stats()['Total Return [%]']:.4f}%")
print(f"RaptorBT: {result.metrics.total_return_pct:.4f}%")
# Results should match within 0.01%
```
---
## License
MIT License - see [LICENSE](LICENSE) for details.
---
## Changelog
### v0.3.2
- Add `payoff_ratio` metric to `BacktestMetrics` — average winning trade return divided by average losing trade return (absolute), measures risk/reward per trade
- Add `recovery_factor` metric to `BacktestMetrics` — net profit divided by maximum drawdown in absolute terms, measures how many times over the strategy recovered from its worst drawdown
- Both metrics computed in `StreamingMetrics::finalize()` (single-instrument backtest) and `PortfolioEngine` (multi-strategy aggregation)
- Both metrics exposed via PyO3 as `#[pyo3(get)]` attributes on `PyBacktestMetrics`
- Handles edge cases: returns `f64::INFINITY` when denominator is zero with positive numerator, `0.0` otherwise
### v0.3.1
- Add Monte Carlo portfolio simulation (`simulate_portfolio_mc`) for forward risk projection
- Geometric Brownian Motion (GBM) with Cholesky decomposition for correlated multi-asset simulation
- Rayon-parallelized simulation paths with deterministic seeding (xoshiro256\*\*)
- Returns percentile paths (P5/P25/P50/P75/P95), VaR, CVaR, expected return, and probability of loss
- GIL released during simulation for maximum Python concurrency
### v0.3.0
- Per-instrument configuration via `PyInstrumentConfig` (lot_size, alloted_capital, stop/target overrides)
- Position sizes now correctly rounded to lot_size multiples
- Support for per-instrument capital allocation in basket backtests
- Future-ready fields: existing_qty, avg_price for live-to-backtest transitions
### v0.2.2
- Export `run_spread_backtest` Python binding for multi-leg options spread strategies
- Export `rolling_min` and `rolling_max` indicator functions to Python
### v0.2.1
- Add `rolling_min` and `rolling_max` indicators for LLV (Lowest Low Value) and HHV (Highest High Value) support
- NaN handling for warmup period
### v0.2.0
- Add multi-leg spread backtesting (`run_spread_backtest`) supporting straddles, strangles, vertical spreads, iron condors, iron butterflies, butterfly spreads, calendar spreads, and diagonal spreads
- Coordinated entry/exit across all legs with net premium P&L calculation
- Max loss and target profit exit thresholds for spreads
- Add `SessionTracker` for intraday session management: market hours detection, square-off time enforcement, session high/low/open tracking
- Pre-built session configs for NSE equity (9:15-15:30), MCX commodity (9:00-23:30), and CDS currency (9:00-17:00)
- Extend `StreamingMetrics` with equity/drawdown tracking, trade recording, and `finalize()` method
### v0.1.0
- Initial release
- 5 strategy types: single, basket, pairs, options, multi
- 30+ performance metrics with full VectorBT parity
- 10 technical indicators (SMA, EMA, RSI, MACD, Stochastic, ATR, Bollinger Bands, ADX, VWAP, Supertrend)
- Stop-loss management: fixed, ATR-based, and trailing stops
- Take-profit management: fixed, ATR-based, and risk-reward targets
- PyO3 Python bindings for seamless Python integration
| text/markdown; charset=UTF-8; variant=GFM | null | Alphabench <contact@alphabench.in> | null | null | null | backtesting, trading, quantitative-finance, algorithmic-trading, rust, high-performance | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Programming Language :... | [] | https://www.alphabench.in/raptorbt | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Bug Tracker, https://github.com/alphabench/raptorbt/issues",
"Documentation, https://www.alphabench.in/raptorbt",
"Homepage, https://www.alphabench.in/raptorbt",
"Repository, https://github.com/alphabench/raptorbt"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:50:22.597897 | raptorbt-0.3.2.post1.tar.gz | 109,261 | 8f/68/bcafc6f3a68949f54afe56689944b6f5b87e61cf9e6e8599587c8439135d/raptorbt-0.3.2.post1.tar.gz | source | sdist | null | false | 24e1dfeff7d62c2c3ff83ce986a141f5 | 91b015d769a7d49e08104a4952578629902fcf902f4914b604cf5c41b0f50aa3 | 8f68bcafc6f3a68949f54afe56689944b6f5b87e61cf9e6e8599587c8439135d | null | [
"LICENSE"
] | 815 |
2.4 | PGLW | 1.3.3 | Parametric Geometry Library for Wind Turbines | # PGLW - Parametric Geometry Library for Wind
[](https://gitlab.windenergy.dtu.dk/frza/PGL/commits/master)
[](https://gitlab.windenergy.dtu.dk/frza/PGL/commits/master)
PGLW is a Python-based tool, developed at the Department of Wind and Energy Systems of the Technical University of Denmark, for creating surface geometries from simple parametric inputs.
PGLW is tailored towards wind-turbine-related geometries, but its base classes can be used for any purpose.
The package contains a series of classes for generating geometric primitives such as airfoils, blade surfaces, nacelles, and towers,
which can be run as scripts with input files.
## Installation and requirements
PGLW installs as a standard Python distribution and requires Python >=3.8.
To install PGLW in developer mode simply run
$ pip install -e .[test,docs]
Or install the latest tagged release wheel from PyPI:
pip install PGLW
## Documentation
Documentation is available here: https://frza.pages.windenergy.dtu.dk/PGL.
PGLW is documented using Sphinx, and you can build the docs locally by navigating to the ``docs`` directory and issuing the command:
$ make html
To view the docs, open _build/html/index.html in a browser.
## Examples
A number of examples of how to use PGLW are located in ``PGLW/examples``.
| text/markdown | null | "Department of Wind and Energy Systems, DTU" <frza@dtu.dk> | null | Frederik Zahle <frza@dtu.dk> | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"scipy",
"black==24.8.0",
"isort==5.6.4",
"flake8==5.0.4",
"ruamel.yaml",
"coverage; extra == \"test\"",
"sphinx; extra == \"docs\"",
"numpydoc; extra == \"docs\"",
"nbsphinx; extra == \"docs\"",
"sphinx_rtd_theme; extra == \"docs\"",
"sphinxcontrib-napoleon; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://gitlab.windenergy.dtu.dk/frza/PGL",
"Bug Tracker, https://gitlab.windenergy.dtu.dk/frza/PGL/-/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T13:49:58.050631 | pglw-1.3.3-py3-none-any.whl | 7,159,624 | b0/d8/7dc3e0711c905b442239c7042c265b947f2ec15b797928d2d76e1b4ced3c/pglw-1.3.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 032aacdd4a914fca09c5577a73ed6418 | 686f7805b1ebc4ee7d6484512acc1972281a4664b63fdd0418cc51a73c967cc2 | b0d87dc3e0711c905b442239c7042c265b947f2ec15b797928d2d76e1b4ced3c | null | [
"LICENSE"
] | 0 |
2.4 | wheezy.http | 3.2.3 | A lightweight http request-response library | # wheezy.http
[](https://github.com/akornatskyy/wheezy.http/actions/workflows/tests.yml)
[](https://coveralls.io/github/akornatskyy/wheezy.http?branch=master)
[](https://wheezyhttp.readthedocs.io/en/latest/?badge=latest)
[](https://badge.fury.io/py/wheezy.http)
[wheezy.http](https://pypi.org/project/wheezy.http/) is a
[python](http://www.python.org) package written in pure Python. It
is a lightweight HTTP library for working with requests, responses,
headers, cookies, and more. It is a wrapper around the
[WSGI](http://www.python.org/dev/peps/pep-3333) request environment.
It is optimized for performance, well tested and documented.
Resources:
- [source code](https://github.com/akornatskyy/wheezy.http),
[examples](https://github.com/akornatskyy/wheezy.http/tree/master/demos)
and [issues](https://github.com/akornatskyy/wheezy.http/issues)
tracker are available on
[github](https://github.com/akornatskyy/wheezy.http)
- [documentation](https://wheezyhttp.readthedocs.io/en/latest/)
## Install
[wheezy.http](https://pypi.org/project/wheezy.http/) requires
[python](https://www.python.org) version 3.10+. It is independent of operating
system. You can install it from [pypi](https://pypi.org/project/wheezy.http/)
site:
```sh
pip install -U wheezy.http
```
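For context on what the package abstracts away, here is the raw [WSGI](http://www.python.org/dev/peps/pep-3333) interface that wheezy.http wraps — a stdlib-only sketch of plain PEP 3333 code, not wheezy.http's own API:

```python
from wsgiref.util import setup_testing_defaults

def raw_app(environ, start_response):
    """A bare WSGI app: status line, headers, and body framing are all
    manual here -- this is the plumbing that request/response wrappers
    like wheezy.http's handle for you."""
    body = "Hello from {}".format(environ["PATH_INFO"]).encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

def call(app, path="/"):
    """Tiny harness: invoke a WSGI app in-process and capture its output."""
    environ = {}
    setup_testing_defaults(environ)   # fill in a minimal valid environ
    environ["PATH_INFO"] = path
    captured = {}
    def start_response(status, headers):
        captured["status"], captured["headers"] = status, headers
    body = b"".join(app(environ, start_response))
    return captured["status"], body

print(call(raw_app, "/demo"))  # ('200 OK', b'Hello from /demo')
```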
If you run into any issues or have comments, please report them on
[github](https://github.com/akornatskyy/wheezy.http).
| text/markdown | null | Andriy Kornatskyy <andriy.kornatskyy@live.com> | null | null | null | wsgi, http, request, response, cache, cachepolicy, cookie, functional, middleware, transforms | [
"Environment :: Web Environment",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"P... | [] | null | null | >=3.10 | [] | [] | [] | [
"wheezy.core>=3.2.3",
"Cython>=3.0; extra == \"cython\"",
"setuptools>=61.0; extra == \"cython\""
] | [] | [] | [] | [
"Homepage, https://github.com/akornatskyy/wheezy.http",
"Source, https://github.com/akornatskyy/wheezy.http",
"Issues, https://github.com/akornatskyy/wheezy.http/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T13:48:47.226306 | wheezy_http-3.2.3.tar.gz | 30,520 | c1/5a/371a0fcd256bc2127b6098ce1b613989aaa52fd9a3b93cb763bc5e72ede0/wheezy_http-3.2.3.tar.gz | source | sdist | null | false | ed5131f7299ae0553e8791b9f87e6445 | a1fd2e8ea79fb571ce69878a67d8d81d31fa6c0183bf36acb89e4a499fd55eb3 | c15a371a0fcd256bc2127b6098ce1b613989aaa52fd9a3b93cb763bc5e72ede0 | MIT | [
"LICENSE"
] | 0 |
2.4 | causaliq-workflow | 0.2.0 | Workflow engine for causal discovery and inference |
# causaliq-workflow

[](https://opensource.org/licenses/MIT)

**GitHub Actions-inspired workflow orchestration for causal discovery experiments** within the [CausalIQ ecosystem](https://github.com/causaliq/causaliq). Execute causal discovery workflows using familiar CI/CD patterns with conservative execution and comprehensive action framework.
## Status
🚧 **Active Development** - This repository is currently in active development, which involves:
- migrating functionality from the legacy monolithic [discovery repo](https://github.com/causaliq/discovery) to support legacy experiments and analysis
- ensuring CausalIQ development standards are met
- adding new features to provide a comprehensive, open, causal discovery workflow.
## Features
✅ **Implemented Releases**
- **Release v0.1.0 - Workflow Foundations**: Plug-in actions, basic workflow
and CLI support, 100% test coverage
- **Release v0.2.0 - Knowledge Workflows**: Integrate with causaliq-knowledge
generate_graph action and write results to workflow caches.
*See Git commit history for detailed implementation progress*
🛣️ Upcoming Releases
- **Release v0.3.0 - Analysis Workflows**: Graph averaging and structural
analysis workflows.
- **Release v0.4.0 - Enhanced Workflow**: Dry and comparison runs, runtime
estimation and processing summary
- **Release v0.5.0 - Discovery Workflows**: Structure learning algorithms
integrated
## causaliq-core Integration
causaliq-workflow builds on causaliq-core for its action framework and caching
infrastructure:
- **CausalIQActionProvider** - Base class for all action providers
- **ActionInput/ActionResult** - Type-safe action interfaces
- **ActionValidationError/ActionExecutionError** - Exception handling
- **TokenCache/JsonCompressor** - SQLite-based caching with JSON tokenisation
## Brief Example Usage
**Example Workflow Definition**, experiment.yml:
```yaml
description: "Causal Discovery Experiment"
id: "experiment-001"
workflow_cache: "results/{{id}}_cache.db" # All results stored here
matrix:
network: ["asia", "cancer"]
algorithm: ["pc", "ges"]
sample_size: ["100", "1K"]
steps:
- name: "Structure Learning"
uses: "causaliq-discovery"
with:
algorithm: "{{algorithm}}"
sample_size: "{{sample_size}}"
dataset: "data/{{network}}"
# Results cached with key: {network, algorithm, sample_size}
```
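The `matrix` block expands to the cross product of its values, with `{{...}}` placeholders filled in for each combination. A hedged sketch of that expansion (illustration only, not the package's actual implementation):

```python
import re
from itertools import product

def expand_matrix(matrix, template):
    """Expand a GitHub-Actions-style matrix into concrete step inputs."""
    keys = list(matrix)
    combos = []
    for values in product(*(matrix[k] for k in keys)):
        ctx = dict(zip(keys, values))
        # substitute {{name}} placeholders in every template value
        combos.append({
            field: re.sub(r"\{\{(\w+)\}\}", lambda m: ctx[m.group(1)], text)
            for field, text in template.items()
        })
    return combos

matrix = {"network": ["asia", "cancer"],
          "algorithm": ["pc", "ges"],
          "sample_size": ["100", "1K"]}
step = {"algorithm": "{{algorithm}}",
        "sample_size": "{{sample_size}}",
        "dataset": "data/{{network}}"}
runs = expand_matrix(matrix, step)
print(len(runs))  # 2 * 2 * 2 = 8 combinations
```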
**Execute with modes:**
```bash
cqflow experiment.yml --mode=dry-run # Validate and preview (default)
cqflow experiment.yml --mode=run # Execute (skip if outputs exist)
cqflow experiment.yml --mode=compare # Re-execute and compare outputs
```
Note that **cqflow** is a short synonym for the **causaliq-workflow** command; either can be used.
## Upcoming Key Innovations
### 🔄 Workflow Orchestration
- Continuous Integration (CI) testing: Workflow specification syntax
- Dask distributed computing: Scalable parallel processing
- Dependency management: Automatic handling of data and processing dependencies
- Error recovery: Robust handling of failures and restarts
### 📊 Experiment Management
- Configuration management: YAML-based experiment specifications
- Parameter sweeps: Systematic exploration of algorithm parameters
- Version control: Git-based tracking of experiments and results
- Reproducibility: Deterministic execution with seed management
## Integration with CausalIQ Ecosystem
- 🔍 **CausalIQ Discovery** is called by this package to perform structure learning.
- 📊 **CausalIQ Analysis** is called by this package to perform results analysis and generate assets for research papers.
- 🔮 **CausalIQ Predict** is called by this package to perform causal prediction.
- 🔄 **Zenodo Synchronisation** is used by this package to download datasets and upload results.
- 🧪 **CausalIQ Papers** are defined in terms of CausalIQ Workflows allowing the reproduction of experiments, results and published paper assets created by the CausalIQ ecosystem.
## LLM Support
The following provides project-specific context for this repo which should be provided after the [personal and ecosystem context](https://github.com/causaliq/causaliq/blob/main/LLM_DEVELOPMENT_GUIDE.md):
```text
tbc
```
### Prerequisites
- Python 3.9-3.13
- Git
- R with bnlearn (optional, for external integration)
### Installation
```bash
git clone https://github.com/causaliq/causaliq-workflow.git
cd causaliq-workflow
# Set up development environment
scripts/setup-env.ps1 -Install
scripts/activate.ps1
```
**Example workflows**: [docs/example_workflows.md](docs/example_workflows.md)
## Research Context
Supporting research for May 2026 paper on LLM integration for intelligent model averaging. The CI workflow architecture enables sophisticated experimental designs while maintaining familiar syntax for the research community.
**Migration target**: Existing workflows from monolithic discovery repo by end 2026.
## Quick Start
```python
# to be completed
```
## Getting started
### Prerequisites
- Git
- Latest stable versions of Python 3.9, 3.10, 3.11 and 3.12
### Clone the new repo locally and check that it works
Clone the causaliq-workflow repo locally as normal
```bash
git clone https://github.com/causaliq/causaliq-workflow.git
```
Set up the Python virtual environments and activate the default one. You may see
messages from VSCode (if you are using it as your IDE) that new Python environments are being created
while scripts/setup-env runs; these messages can safely be ignored at this stage.
```text
scripts/setup-env -Install
scripts/activate
```
Check that the causaliq-workflow CLI is working, check that all CI tests pass, and start up the local mkdocs webserver. There should be no errors reported in any of these.
```text
causaliq-workflow --help
scripts/check_ci
mkdocs serve
```
Enter **http://127.0.0.1:8000/** in a browser and check that the
causaliq-workflow documentation is visible.
If all of the above works, this confirms that the code is working successfully on your system.
## Documentation
Full API documentation is available at: **http://127.0.0.1:8000/** (when running `mkdocs serve`)
## Contributing
This repository is part of the CausalIQ ecosystem. For development setup:
1. Clone the repository
2. Run `scripts/setup-env -Install` to set up environments
3. Run `scripts/check_ci` to verify all tests pass
4. Start documentation server with `mkdocs serve`
---
**Supported Python Versions**: 3.9, 3.10, 3.11, 3.12, 3.13
**Default Python Version**: 3.11
**License**: MIT
| text/markdown | null | CausalIQ <info@causaliq.com> | null | CausalIQ <info@causaliq.com> | null | causaliq | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language ::... | [] | null | null | >=3.9 | [] | [] | [] | [
"causaliq-core>=0.4.0",
"click>=8.0.0",
"jsonschema>=4.0.0",
"PyYAML>=6.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-mock>=3.10.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"isort>=5.10.0; extra == \"dev\"",
"flake8>=5.0.0; extra == \"dev\"",
... | [] | [] | [] | [
"Homepage, https://github.com/causaliq/causaliq-workflow",
"Documentation, https://github.com/causaliq/causaliq-workflow#readme",
"Repository, https://github.com/causaliq/causaliq-workflow",
"Bug Tracker, https://github.com/causaliq/causaliq-workflow/issues"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-18T13:48:35.768062 | causaliq_workflow-0.2.0.tar.gz | 34,917 | a3/e7/29b92e3b8fc7c4fc159a10c3cba9687b02492db4e5e118f2ae3cb8d14cae/causaliq_workflow-0.2.0.tar.gz | source | sdist | null | false | d6665321c94838edd90c275963b2175c | 62df1acdbaf54a1242b6c5c630c83dc5c4a6c2c7807eef11cc7a11bfc793bd8f | a3e729b92e3b8fc7c4fc159a10c3cba9687b02492db4e5e118f2ae3cb8d14cae | MIT | [
"LICENSE"
] | 303 |
2.4 | thyra | 1.14.1 | A modern Python library for converting Mass Spectrometry Imaging (MSI) data into SpatialData/Zarr format - your portal to spatial omics | # Thyra
[](https://github.com/Tomatokeftes/thyra/actions/workflows/tests.yml)
[](https://pypi.org/project/thyra/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/psf/black)
**Thyra** (from Greek θύρα, meaning "door" or "portal") - A modern Python library for converting Mass Spectrometry Imaging (MSI) data into the standardized **SpatialData/Zarr format**, serving as your portal to spatial omics analysis workflows.
## Features
- **Multiple Input Formats**: ImzML, Bruker (.d directories), Waters (.raw directories)
- **SpatialData Output**: Modern, cloud-ready format with Zarr backend
- **Memory Efficient**: Handles large datasets (100+ GB) through streaming processing
- **Metadata Preservation**: Extracts and maintains all acquisition parameters
- **3D Support**: Process volume data or treat as 2D slices
- **Cross-Platform**: Windows, macOS, and Linux support
## Installation
### Via pip (Recommended)
```bash
pip install thyra
```
### Via conda
```bash
conda install -c conda-forge thyra
```
### From source
```bash
git clone https://github.com/Tomatokeftes/thyra.git
cd thyra
poetry install
```
## Quick Start
### Command Line Interface
```bash
# Basic conversion
thyra input.imzML output.zarr
# Bruker data with custom parameters
thyra data.d output.zarr --pixel-size 50 --dataset-id "experiment_001"
# Waters data
thyra data.raw output.zarr
# 3D volume processing
thyra volume.imzML output.zarr --handle-3d
```
### Python API
```python
from thyra import convert_msi
# Simple conversion
success = convert_msi(
input_path="data/sample.imzML",
output_path="output/sample.zarr",
pixel_size_um=25.0
)
# Advanced usage with custom parameters
success = convert_msi(
input_path="data/experiment.d",
output_path="output/experiment.zarr",
dataset_id="exp_001",
pixel_size_um=10.0,
handle_3d=True
)
```
## Supported Formats
### Input Formats
| Format | Extension | Description | Status |
|--------|-----------|-------------|--------|
| ImzML | `.imzML` | Open standard for MS imaging | Full support |
| Bruker | `.d` | Bruker proprietary format | Full support |
| Waters | `.raw` | Waters MassLynx imaging format | Full support |
### Output Formats
| Format | Description | Benefits |
|--------|-------------|----------|
| SpatialData/Zarr | Modern spatial omics standard | Cloud-ready, efficient, standardized |
## Advanced Usage
### Configuration Options
```bash
# All available options
thyra input.imzML output.zarr \
--pixel-size 25 \
--dataset-id "my_experiment" \
--handle-3d \
--optimize-chunks \
--log-level DEBUG \
--log-file conversion.log
```
### Batch Processing
```python
import glob
from thyra import convert_msi
# Process multiple files
for input_file in glob.glob("data/*.imzML"):
output_file = input_file.replace(".imzML", ".zarr")
convert_msi(input_file, output_file)
```
### Working with SpatialData
```python
import spatialdata as sd
# Load converted data
sdata = sd.read_zarr("output/sample.zarr")
# Access the MSI data
msi_data = sdata.tables["msi_dataset"]
print(f"Shape: {msi_data.shape}")
print(f"Mass channels: {msi_data.var.index}")
```
## Development
### Setup Development Environment
```bash
# Clone repository
git clone https://github.com/Tomatokeftes/thyra.git
cd thyra
# Install with development dependencies
poetry install
# Install pre-commit hooks
poetry run pre-commit install
```
### Running Tests
```bash
# Unit tests only
poetry run pytest -m "not integration"
# All tests
poetry run pytest
# With coverage
poetry run pytest --cov=thyra
```
### Code Quality
```bash
# Format code
poetry run black .
poetry run isort .
# Run linting
poetry run flake8
# Run all checks
poetry run pre-commit run --all-files
```
## Documentation
- **API Documentation**: [Auto-generated docs](https://github.com/Tomatokeftes/thyra#readme)
- **Contributing Guide**: [CONTRIBUTING.md](CONTRIBUTING.md)
- **Architecture Overview**: [docs/architecture.md](docs/architecture.md)
- **Changelog**: [CHANGELOG.md](CHANGELOG.md)
## Contributing
We welcome contributions! Please see our [Contributing Guidelines](CONTRIBUTING.md) for details.
### Quick Contribution Steps
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes and add tests
4. Run the test suite (`poetry run pytest`)
5. Commit your changes (`git commit -m 'Add amazing feature'`)
6. Push to your branch (`git push origin feature/amazing-feature`)
7. Open a Pull Request
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Support
- **Issues**: [GitHub Issues](https://github.com/Tomatokeftes/thyra/issues)
- **Discussions**: [GitHub Discussions](https://github.com/Tomatokeftes/thyra/discussions)
- **Email**: t.visvikis@maastrichtuniversity.nl
## Citation
If you use Thyra in your research, please cite:
```bibtex
@software{thyra2024,
title = {Thyra: Modern Mass Spectrometry Imaging Data Conversion - Portal to Spatial Omics},
author = {Visvikis, Theodoros},
year = {2024},
url = {https://github.com/Tomatokeftes/thyra}
}
```
## Acknowledgments
- Built with [SpatialData](https://spatialdata.scverse.org/) ecosystem
- Powered by [Zarr](https://zarr.readthedocs.io/) for efficient storage
- Uses [pyimzML](https://github.com/alexandrovteam/pyimzML) for ImzML parsing
---
**Thyra** - Your portal from traditional MSI formats to modern spatial omics workflows
| text/markdown | Theodoros Visvikis | t.visvikis@maastrichtuniversity.nl | Theodoros Visvikis | t.visvikis@maastrichtuniversity.nl | MIT | mass-spectrometry, imaging, spatialdata, zarr, omics, bioinformatics, msi, imzml, bruker, spatial-omics, data-conversion, scientific-computing | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"... | [] | null | null | <3.13,>=3.11 | [] | [] | [] | [
"Shapely>=1.8.0",
"anndata>=0.11.0",
"cryptography<46.0.0,>=45.0.5",
"dask>=2023.0.0",
"geopandas>=0.9.0",
"imagecodecs>=2024.1.1",
"lxml>=4.6.0",
"matplotlib<4.0.0,>=3.10.6",
"numpy>=2.0.0",
"pandas>=2.0.0",
"psutil<8.0.0,>=7.2.1",
"pyimzML>=1.4.0",
"scipy>=1.7.0",
"spatialdata>=0.6.0",
... | [] | [] | [] | [
"Bug Tracker, https://github.com/Tomatokeftes/thyra/issues",
"Changelog, https://github.com/Tomatokeftes/thyra/blob/main/CHANGELOG.md",
"Contributing, https://github.com/Tomatokeftes/thyra/blob/main/CONTRIBUTING.md",
"Documentation, https://github.com/Tomatokeftes/thyra#readme",
"Discussions, https://github... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:48:08.304900 | thyra-1.14.1.tar.gz | 10,402,982 | 90/7a/d8ba33adc688ce1bcd5853cd2f15884a1cb01de2fee60952f93fe84717ec/thyra-1.14.1.tar.gz | source | sdist | null | false | faddf76922274486761c81267a75928d | 9e2a1e2aaa89567bf4ee1c7ee01c573bb68f82e11ad80d3512455579e62afab3 | 907ad8ba33adc688ce1bcd5853cd2f15884a1cb01de2fee60952f93fe84717ec | null | [
"LICENSE"
] | 254 |
2.4 | algorhino-anemone | 0.1.21 | anemone searches trees | # anemone
`anemone` is a Python library for tree search over `valanga` game states. It builds a
shared tree graph and layers algorithm-specific wrappers on top so you can plug in
node evaluation, exploration indices, and selection policies for "tree and value"
searches.
## Highlights
- Tree-and-value exploration pipeline driven by `TreeAndValueBranchSelector`.
- Modular factories for node evaluation, selection, index computation, and tree
management.
- Pluggable stopping criteria and recommender rules for final branch selection.
- Optional torch-based evaluator for batched neural evaluations.
## Installation
```bash
pip install anemone
```
Optional torch integration:
```bash
pip install anemone[nn]
```
## Quick start
`anemone` exposes factory helpers to build a branch selector configured with your
node selector, evaluation, and stopping-criterion choices. At runtime you feed it a
`valanga` state and a seed to get back a branch recommendation.
```python
from random import Random
from anemone import TreeAndValuePlayerArgs, create_tree_and_value_branch_selector
from anemone.node_selector.factory import UniformArgs
from anemone.node_selector.node_selector_types import NodeSelectorType
from anemone.progress_monitor.progress_monitor import (
StoppingCriterionTypes,
TreeBranchLimitArgs,
)
from anemone.recommender_rule.recommender_rule import SoftmaxRule
# Populate the pieces specific to your game domain.
args = TreeAndValuePlayerArgs(
node_selector=UniformArgs(type=NodeSelectorType.UNIFORM),
opening_type=None,
stopping_criterion=TreeBranchLimitArgs(
type=StoppingCriterionTypes.TREE_BRANCH_LIMIT,
tree_branch_limit=100,
),
recommender_rule=SoftmaxRule(type="softmax", temperature=1.0),
)
selector = create_tree_and_value_branch_selector(
state_type=YourStateType,
args=args,
random_generator=Random(0),
master_state_evaluator=your_state_evaluator,
state_representation_factory=None,
queue_progress_player=None,
)
recommendation = selector.select_branch(state=current_state, selection_seed=0)
print(recommendation.branch_key)
```
## Design
This codebase follows a “core node + wrappers” pattern.
- **`TreeNode` (core)**
- `TreeNode` is the canonical, shared data structure.
- It stores the graph structure: `branches_children` and `parent_nodes`.
- There is conceptually a single tree/graph of `TreeNode`s.
- **Wrappers implement `ITreeNode`**
- Higher-level nodes (e.g. `AlgorithmNode`) wrap a `TreeNode` and add algorithm-specific state:
evaluation, indices, representations, etc.
- Wrappers expose navigation by delegating to the underlying `TreeNode`.
- **Homogeneity at the wrapper level**
- Even though `TreeNode` is the core place where connections are stored, each wrapper is intended to be
*closed under parent/child links*:
- a wrapper’s `branches_children` and `parent_nodes` contain that same wrapper type.
- today this is typically either “all `TreeNode`” or “all `AlgorithmNode`”.
- in the future, another wrapper can exist (still implementing `ITreeNode`), and it should also be
homogeneous within itself.
The practical motivation is:
- algorithms can be written against `ITreeNode` (for navigation) and against wrappers like `AlgorithmNode`
(for algorithm-specific fields),
- while keeping a single shared underlying structure that can be accessed consistently from any wrapper.
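The pattern above can be sketched in a few lines of plain Python — an illustration of the design only, not anemone's actual classes (a real wrapper would cache rather than rebuild child wrappers on each access):

```python
class TreeNode:
    """Core node: owns the shared graph structure."""
    def __init__(self, key):
        self.key = key
        self.branches_children = {}   # branch label -> child TreeNode
        self.parent_nodes = set()

    def add_child(self, branch, child):
        self.branches_children[branch] = child
        child.parent_nodes.add(self)

class AlgorithmNode:
    """Wrapper: adds algorithm-specific state, delegates navigation to the
    underlying TreeNode, and stays homogeneous -- the children it exposes
    are wrapped as AlgorithmNode too."""
    def __init__(self, tree_node, evaluation=None):
        self.tree_node = tree_node
        self.evaluation = evaluation

    @property
    def branches_children(self):
        # delegate to the shared core structure, re-wrapping each child
        return {branch: AlgorithmNode(child)
                for branch, child in self.tree_node.branches_children.items()}

root = TreeNode("root")
root.add_child("a", TreeNode("child_a"))
algo_root = AlgorithmNode(root, evaluation=0.5)
child = algo_root.branches_children["a"]
print(type(child).__name__, child.tree_node.key)  # AlgorithmNode child_a
```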
## Repository layout
Each important package folder includes a local README with details. Start with:
- `src/anemone/` for the main search pipeline and public entry points.
- `src/anemone/node_selector/` for selection strategies (Uniform, RecurZipf, Sequool).
- `src/anemone/node_evaluation/` for direct evaluation and minmax tree evaluation.
- `src/anemone/tree_manager/`, `src/anemone/trees/`, and `src/anemone/updates/` for tree construction,
expansion, and backpropagation.
- `src/anemone/indices/` for exploration index computation and updates.
- `tests/` for index and tree-building fixtures.
| text/markdown | null | Victor Gabillon <victorgabillon@gmail.com> | null | null | null | tree, search | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"valanga>=0.2.0",
"atomheart>=0.1.9",
"rich",
"sortedcollections>=2.1.0",
"graphviz",
"pytest>=9.0.2; extra == \"test\"",
"coverage; extra == \"test\"",
"pytest-cov>=6.0.0; extra == \"test\"",
"ruff>=0.15.0; extra == \"lint\"",
"pylint>=4.0.4; extra == \"lint\"",
"mypy>=1.18.2; extra == \"typech... | [] | [] | [] | [
"Homepage, https://github.com/victorgabillon/anemone",
"Bug Tracker, https://github.com/victorgabillon/anemone/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:47:56.844045 | algorhino_anemone-0.1.21.tar.gz | 88,666 | 57/ae/28158cea5f2cb82ac309d24b0124b3c2f785992fce892bdee21ac29eba74/algorhino_anemone-0.1.21.tar.gz | source | sdist | null | false | c921f2dd3a8c7ecc640198394ee20e20 | 31e1cc1b95529c94bbcc3510752a216a377c122ef4c26b65d610b1d0da2b25a3 | 57ae28158cea5f2cb82ac309d24b0124b3c2f785992fce892bdee21ac29eba74 | GPL-3.0-only | [
"LICENSE"
] | 364 |
2.4 | mamba-ssm-macos | 1.0.1 | Mamba SSM - State Space Models optimized for Apple Silicon | # Mamba SSM for macOS Apple Silicon
**[Mamba 1](https://arxiv.org/abs/2312.00752) and [Mamba 2](https://arxiv.org/abs/2405.21060) State Space Models for Apple Silicon**
[](https://developer.apple.com/mac/)
[](https://python.org)
[](https://pytorch.org)
[](https://pypi.org/project/mamba-ssm-macos/)
[](LICENSE)
Training and inference of Mamba 1 & 2 on Apple Silicon with MPS acceleration. Works without CUDA/Triton. Supports CLI, Python API, and interactive demos.
## Installation
```bash
pip install mamba-ssm-macos
```
Or install from source:
```bash
git clone https://github.com/purohit10saurabh/mamba-ssm-macos.git
cd mamba-ssm-macos
uv sync # or: pip install -r requirements.txt
```
## Quick Start
```bash
python -m scripts.download_models mamba1 # Mamba 1 (493MB)
python -m scripts.download_models mamba2 # Mamba 2 (493MB)
make run-mamba1 # Quick Mamba 1 demo
make run-mamba2 # Quick Mamba 2 demo
```
**Prerequisites:** macOS 12.3+ with Apple Silicon, Python 3.10+, 8GB+ RAM recommended.
## Usage
### Text Generation
```bash
python -m scripts.run_models mamba1 --prompt "The future of AI" --max-length 50
python -m scripts.run_models mamba2 --prompt "The future of AI" --max-length 30
python -m scripts.run_models mamba1 --prompt "Once upon a time" --temperature 0.8
python -m examples.02_text_generation --interactive
```
### Examples
```bash
python -m examples.01_core_modules # Core modules usage
python -m examples.02_text_generation # Text generation demo
python -m examples.03_training # Training example
```
### Makefile Commands
```bash
make download-models # Download both models
make run-mamba1 # Quick Mamba 1 demo
make run-mamba2 # Quick Mamba 2 demo
make test-quick # Fast integration test
make test # Full test suite
```
## Training
See `examples/03_training.py` for a full example. Snippet:
```python
import torch
from torch import nn
from mamba_ssm.modules.mamba2 import Mamba2
model = nn.Sequential(
    nn.Embedding(1000, 128),
    *[
        Mamba2(d_model=128, d_state=64, d_conv=4, expand=2,
               headdim=64, ngroups=1, chunk_size=256, device='mps')
        for _ in range(2)
    ],
    nn.LayerNorm(128),
    nn.Linear(128, 1000, bias=False),
).to('mps')

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for input_ids, labels in dataloader:  # dataloader as defined in examples/03_training.py
    optimizer.zero_grad()
    logits = model(input_ids)
    loss = criterion(logits.view(-1, logits.size(-1)), labels.view(-1))
    loss.backward()
    optimizer.step()
```
## Repository Structure
```
mamba-ssm-macos/
├── mamba_ssm/ # Core library (models, modules, ops, utils)
├── scripts/ # download_models.py, run_models.py
├── tests/ # unit/, integration/, run_unit_tests.py
├── examples/ # 01_core_modules, 02_text_generation, 03_training
├── Makefile
└── pyproject.toml
```
## Troubleshooting
**"Model files not found"** — Run `make download-models` or `python -m scripts.download_models mamba1|mamba2`.
**"MPS not available"** — Check with `python -c "import torch; print(torch.backends.mps.is_available())"`. Falls back to CPU automatically.
**Import errors** — Use module syntax: `python -m examples.02_text_generation`.
## Citation
Also available via GitHub's "Cite this repository" button ([CITATION.cff](CITATION.cff)).
```bibtex
@software{purohit2026mamba_ssm_macos,
title={Mamba SSM for macOS Apple Silicon},
author={Purohit, Saurabh},
year={2026},
url={https://github.com/purohit10saurabh/mamba-ssm-macos}
}
```
<details>
<summary>Original Mamba papers</summary>
```bibtex
@article{gu2023mamba,
title={Mamba: Linear-Time Sequence Modeling with Selective State Spaces},
author={Gu, Albert and Dao, Tri},
journal={arXiv preprint arXiv:2312.00752},
year={2023}
}
@article{dao2024transformers,
title={Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality},
author={Dao, Tri and Gu, Albert},
journal={arXiv preprint arXiv:2405.21060},
year={2024}
}
```
</details>
## References
- [state-spaces/mamba](https://github.com/state-spaces/mamba) — Original implementation
- [state-spaces/mamba1-130m](https://huggingface.co/state-spaces/mamba1-130m) — Mamba 1 130M Pre-trained model
- [state-spaces/mamba2-130m](https://huggingface.co/state-spaces/mamba2-130m) — Mamba 2 130M Pre-trained model
## Contributing
Contributions are welcome — bug fixes, performance improvements, docs, and new features. Open an issue or submit a PR.
```bash
git clone https://github.com/purohit10saurabh/mamba-ssm-macos.git
cd mamba-ssm-macos
uv sync --extra dev
make test
```
## License
Apache 2.0 — see [LICENSE](LICENSE).
| text/markdown | null | Saurabh Purohit <saurabh97purohit@gmail.com> | null | null | null | mamba, apple-silicon, macos, mps, state-space-model, deep-learning | [
"Programming Language :: Python :: 3",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Unix",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"torch>=2.0.0",
"transformers>=4.41.0",
"numpy>=1.21.0",
"einops>=0.7.0",
"huggingface-hub>=0.16.0",
"hydra-core>=1.3.0",
"omegaconf>=2.3.0",
"tqdm>=4.65.0",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/purohit10saurabh/mamba-ssm-macos",
"Repository, https://github.com/purohit10saurabh/mamba-ssm-macos"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:47:16.788197 | mamba_ssm_macos-1.0.1.tar.gz | 30,525 | 8c/9f/875be208fe3079e2d5f5d56e3051b61f08d66799640acfc305c14e4905e7/mamba_ssm_macos-1.0.1.tar.gz | source | sdist | null | false | 3795aa373fb4d196e071efbcc5eebccd | f84eadfea95647ba69eed361a744df41a98cb84c69bdf6e1708cc3f21615be7b | 8c9f875be208fe3079e2d5f5d56e3051b61f08d66799640acfc305c14e4905e7 | Apache-2.0 | [
"LICENSE",
"AUTHORS"
] | 267 |
2.4 | offlinenet | 1.0.2 | Browse the web without internet. | # OfflineNet
Browse the web without internet
---
## Tech Stack
Python with Requests, Beautiful Soup, Typer, and Rich
---
## 📦 Installation & Setup
Clone the repository:
```bash
git clone https://github.com/ntcofficial/offlinenet.git
cd offlinenet
```
Install dependencies:
```bash
pip install -r requirements.txt
```
## Usage
See the list of available commands:
```bash
python main.py --help
```
Verify download:
```bash
python main.py hello --name <your_name>
```
Download a webpage and save locally:
```bash
python main.py save <url>
```
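Under the hood, a `save` command has to map each URL to a local file name. A minimal sketch of one such mapping, using only the standard library (illustrative only; not OfflineNet's actual implementation):

```python
from urllib.parse import urlparse


def local_filename(url: str) -> str:
    """Map a URL to a filesystem-safe HTML filename (hypothetical helper)."""
    parsed = urlparse(url)
    # Flatten the path into a single safe component; bare domains become "index".
    path = parsed.path.strip("/").replace("/", "_") or "index"
    return f"{parsed.netloc}_{path}.html"


print(local_filename("https://example.com/docs/intro"))
# example.com_docs_intro.html
```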
See the parameters of a particular command:
```bash
python main.py <command> --help
```
---
## 📝 Changelog
### 1.0.2
- Added offline page downloading
---
## 🤝 Contributing
Currently in early development.
Contributions may be opened in the future.
Suggestions and feedback are welcome.
---
## Author
### Jasper
Founder - Next Tech Creations
---
## License
This project is licensed under the **MIT License**.
Full license text applies.
---
## 📜 License Text
```
Copyright 2026 Next Tech Creations
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```
---
## 📬 Contact
For questions, feedback, or collaboration:
* 📧 Email: [nexttechcreations@gmail.com](mailto:nexttechcreations@gmail.com)
* 🌐 Website: Coming Soon (pages.dev)
---
## ⭐ Support
If you like this project:
* Star the repository
* Share with others
* Give feedback
Your support helps the project grow.
| text/markdown | Next Tech Creations | null | null | null | MIT | null | [] | [] | null | null | null | [] | [] | [] | [
"beautifulsoup4",
"certifi",
"charset-normalizer",
"click",
"idna",
"markdown-it-py",
"mdurl",
"Pygments",
"requests",
"rich",
"shellingham",
"soupsieve",
"typer",
"typing_extensions",
"urllib3"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T13:47:03.136675 | offlinenet-1.0.2.tar.gz | 2,732 | 60/e0/3b0bd8e73d44cda15434a63fd7337b5261c539b4d1f42d19b3492defe620/offlinenet-1.0.2.tar.gz | source | sdist | null | false | 288575294b99fecfb1c4ef4c9803bcb3 | 056cad4dd7799f404a0e957f38335dd3f9d162cb6d5ba6378ca75d8526628567 | 60e03b0bd8e73d44cda15434a63fd7337b5261c539b4d1f42d19b3492defe620 | null | [] | 48 |
2.4 | oslo.messaging | 17.3.0 | Oslo Messaging API | ======================
Oslo Messaging Library
======================
.. image:: https://governance.openstack.org/tc/badges/oslo.messaging.svg
.. image:: https://img.shields.io/pypi/v/oslo.messaging.svg
:target: https://pypi.org/project/oslo.messaging/
:alt: Latest Version
.. image:: https://img.shields.io/pypi/dm/oslo.messaging.svg
:target: https://pypi.org/project/oslo.messaging/
:alt: Downloads
The Oslo messaging API supports RPC and notifications over a number of
different messaging transports.
* License: Apache License, Version 2.0
* Documentation: https://docs.openstack.org/oslo.messaging/latest/
* Source: https://opendev.org/openstack/oslo.messaging
* Bugs: https://bugs.launchpad.net/oslo.messaging
* Release notes: https://docs.openstack.org/releasenotes/oslo.messaging/
| text/x-rst | null | OpenStack <openstack-discuss@lists.openstack.org> | null | null | null | null | [
"Environment :: OpenStack",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Py... | [] | null | null | >=3.10 | [] | [] | [] | [
"pbr>=2.0.0",
"futurist>=1.2.0",
"oslo.config>=5.2.0",
"oslo.context>=5.3.0",
"oslo.log>=3.36.0",
"oslo.utils>=3.37.0",
"oslo.serialization>=2.18.0",
"oslo.service>=1.24.0",
"stevedore>=1.20.0",
"debtcollector>=1.2.0",
"cachetools>=2.0.0",
"WebOb>=1.7.1",
"PyYAML>=3.13",
"amqp>=2.5.2",
"... | [] | [] | [] | [
"Homepage, https://docs.openstack.org/oslo.messaging",
"Repository, https://opendev.org/openstack/oslo.messaging"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T13:46:57.295440 | oslo_messaging-17.3.0.tar.gz | 230,799 | 66/9f/600add63ef06507cfdf193e9d34835ef07a52e34f1d2af80603ef6b6bb83/oslo_messaging-17.3.0.tar.gz | source | sdist | null | false | 5850b145510df63744ee871ac49e40cd | 7381a08f31091f28b82cb93c0a04ed5bb2a4487dab1b873d1fe6ad6553948777 | 669f600add63ef06507cfdf193e9d34835ef07a52e34f1d2af80603ef6b6bb83 | null | [
"LICENSE"
] | 0 |
2.4 | midea-local | 6.6.0 | Control your Midea M-Smart appliances via local area network | # Midea-local python lib
[](https://github.com/rokam/midea-local/actions/workflows/python-build.yml)
[](https://codecov.io/github/rokam/midea-local)
Control your Midea M-Smart appliances via local area network.
This library originated in the https://github.com/georgezhao2010/midea_ac_lan codebase and was split out to separate responsibilities.
⭐ If this component is helpful for you, please star it; it encourages me a lot.
## Getting started
### Finding your device
```python3
from midealocal.discover import discover
# Without knowing the ip address
discover()
# If you know the ip address
discover(ip_address="203.0.113.11")
# The device type is in hexadecimal as in midealocal/devices/TYPE
type_code = hex(list(discover().values())[0]['type'])[2:]
```
### Getting data from device
```python3
from midealocal.discover import discover
from midealocal.devices import device_selector
token = '...'
key = '...'
# Get the first device
d = list(discover().values())[0]
# Select the device
ac = device_selector(
name="AC",
device_id=d['device_id'],
device_type=d['type'],
ip_address=d['ip_address'],
port=d['port'],
token=token,
key=key,
device_protocol=d['protocol'],
model=d['model'],
subtype=0,
customize="",
)
# Connect and authenticate
ac.connect()
# Getting the attributes
print(ac.attributes)
# Setting the temperature
ac.set_target_temperature(23.0, None)
# Setting the swing
ac.set_swing(False, False)
```
### Command line tool
```bash
python3 -m midealocal.cli -h
```
## Contributing Guide
[CONTRIBUTING](.github/CONTRIBUTING.md)
[中文版CONTRIBUTING](.github/CONTRIBUTING.zh.md)
| text/markdown | rokam | lucas@mindello.com.br | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/rokam/midea-local | null | >=3.11 | [] | [] | [] | [
"aiofiles",
"aiohttp",
"colorlog",
"commonregex",
"defusedxml",
"deprecated",
"ifaddr",
"pycryptodome",
"platformdirs"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:46:49.895630 | midea_local-6.6.0.tar.gz | 107,660 | 10/4a/58f8475b3ac1cfe84e0bdf5cb70a787360acf5bc0e0750040449ae255037/midea_local-6.6.0.tar.gz | source | sdist | null | false | 72b33f06e0ff2f0a59112aa0df1d0f14 | 1eea04b50060a5cee885302c97fb54e02ea299eecba131e07057592eabaa814c | 104a58f8475b3ac1cfe84e0bdf5cb70a787360acf5bc0e0750040449ae255037 | null | [
"LICENSE"
] | 422 |
2.4 | PathBridge | 0.5.1 | Translate validator error locations back to your application's schema paths and emit structured errors | # PathBridge
> Bridge validator locations (XPath/JSONPath/JSON Pointer) back to your application model paths, and emit structured errors (Marshmallow-ready).
[](https://pypi.org/project/pathbridge/)
[](LICENSE)
[](https://pypi.org/project/pathbridge/)
[](https://pathbridge.readthedocs.io/en/latest/)
## Why
Validators (XSD/Schematron, JSON Schema) report failures at **document locations** (XPath/JSONPath).
Your users need errors on **your model** (Pydantic/Marshmallow/dataclasses). PathBridge converts between the two.
- Prefix & case tolerant (e.g., `hd:`, `MTR:`).
- Fixes 1-based indices to Python 0-based.
- Works with plain mappings or an optional tracer (add-on) that learns rules from your converter.
- Includes `make_shape` (shaper) and `build_rules` (tracer) to generate rules
from destination classes and your converter.
## Install
```bash
pip install pathbridge
```
## Quick start
```python
from pathbridge import compile_rules, translate_location, to_marshmallow
# 1. Provide or load rules: destination path -> facade (your app models) path
rules = {
"Return[1]/Contact[1]/Phone[1]": "person/phones[0]",
"Return[1]/Contact[1]/Phone[2]": "person/phones[1]",
}
compiled = compile_rules(rules)
# 2. Translate validator location (e.g. from Schematron SVRL)
loc = "/Return[1]/Contact[1]/Phone[2]"
print(translate_location(loc, compiled))
# "person/phones[1]"
# 3. Transform error location into a Marshmallow-style error dict
errors = to_marshmallow([(loc, "Invalid phone")], compiled)
# {'person': {'phones': {1: ['Invalid phone']}}}
```
## Extras
`pathbridge.extras` provides helper utilities for generating rules from your
converter:
- `make_shape(...)`: build a populated sample facade object.
- `build_rules(...)`: trace a sample conversion and produce `Destination -> Facade`
mapping rules.
### Extras example
```python
import dataclasses
import types
from pathbridge import compile_rules, to_marshmallow
from pathbridge.extras import build_rules, make_shape
@dataclasses.dataclass
class FacadeName:
first: str
last: str
@dataclasses.dataclass
class Facade:
name: FacadeName
phones: list[str]
@dataclasses.dataclass
class NameXml:
first_name: str = dataclasses.field(metadata={"name": "FirstName"})
surname: str = dataclasses.field(metadata={"name": "Surname"})
@dataclasses.dataclass
class ReturnXml:
name: NameXml = dataclasses.field(metadata={"name": "YourName"})
phones: list[str] = dataclasses.field(metadata={"name": "Phone"})
class Meta:
name = "Return"
def convert(src: Facade) -> ReturnXml:
return ReturnXml(
name=NameXml(first_name=src.name.first, surname=src.name.last),
phones=src.phones,
)
shape = make_shape(Facade, list_len=2)
rules = build_rules(
destination_module=types.SimpleNamespace(ReturnXml=ReturnXml, NameXml=NameXml),
facade_to_destination=convert,
facade_shape=shape,
facade_root_tag="facade",
)
compiled = compile_rules(rules)
errors = to_marshmallow(
[
("/Return[1]/NameXml[1]/FirstName[1]", "Required field"),
("/Return[1]/Phone[2]/Phone[1]", "Invalid phone"),
],
compiled,
)
print(rules)
# {
# 'Return[1]/NameXml[1]/FirstName[1]': 'facade/name/first',
# 'Return[1]/Phone[2]/Phone[1]': 'facade/phones[1]',
# ...
# }
print(errors)
# {
# 'facade': {
# 'name': {'first': ['Required field']},
# 'phones': {1: ['Invalid phone']},
# }
# }
```
### Custom shape defaults
`make_shape(...)` accepts `type_defaults` so you can override generated defaults
for specific types:
```python
from decimal import Decimal
shape = make_shape(
Facade,
list_len=2,
type_defaults={
str: "sample",
int: 42,
Decimal: Decimal("1.23"),
},
)
```
## CLI
PathBridge provides a `pathbridge` CLI with a `compile` command that runs:
1. `make_shape(...)`
2. `build_rules(...)`
3. `compile_rules(...)` (when `--emit` includes compiled output)
4. Python module generation
### CLI example
Run from the repository root:
```bash
pathbridge compile \
--output-dir . \
--output-package mtr.translation_rules \
--output-module compiled \
--facade-class ./tests/integration/hmrc_main_tax_return/facade/mtr_facade.py:MTR \
--destination-module ./tests/integration/hmrc_main_tax_return/destination/mtr_v1_1.py \
--facade-to-destination ./tests/integration/hmrc_main_tax_return/converter/mtr_converter.py:to_mtr_v1_1 \
--shape-list-len 10 \
--facade-root-tag mtr \
--lift-functions _yes \
--lift-functions _yes_no \
--lift-functions _tax_payer_status \
--lift-functions _student_loan_plan \
--lift-functions _postgraduate_loan_plan \
--lift-functions _attachment_file_format \
--lift-functions decimal_str_or_none \
--lift-functions xml_date_or_none \
--lift-functions decode_attachment
```
## Help
See [documentation](https://pathbridge.readthedocs.io/) for more details.
## Real-world examples
Real-life PathBridge integrations:
- [HMRC Main Tax Return integration](https://github.com/pilosus/pathbridge/tree/main/tests/integration/hmrc_main_tax_return)
- [OpenAPI JSON Schema integration](https://github.com/pilosus/pathbridge/tree/main/tests/integration/openapi_json_schema)
- [ISO 20022 payments integration](https://github.com/pilosus/pathbridge/tree/main/tests/integration/iso20022_payments)
| text/markdown | null | Vitaly Samigullin <vrs@pilosus.org> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pyth... | [] | null | null | >=3.10 | [] | [] | [] | [
"mypy>=1.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"xsdata[cli]>=26.0; extra == \"dev\"",
"mkdocs>=1.6; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/pilosus/pathbridge",
"Repository, https://github.com/pilosus/pathbridge",
"Issues, https://github.com/pilosus/pathbridge/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T13:46:40.350728 | pathbridge-0.5.1-py3-none-any.whl | 22,900 | 39/28/fd4812f3f03f4831f3717929cb17c65903ea8b5a463b4e5953162a812caa/pathbridge-0.5.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 65a141069db12845a0160094e850e323 | 4cea237de8dcd840c3824ab59de8c7fe5d7ea97b063077dfd9af553f11dd60df | 3928fd4812f3f03f4831f3717929cb17c65903ea8b5a463b4e5953162a812caa | MIT | [
"LICENSE"
] | 0 |
2.4 | lexmark-security-auditor | 0.1.5 | Lexmark MX710 (and compatible models) EWS security auditor & hardening automation (Basic Security + disable HTTP) via Playwright. | # Lexmark Security Auditor (EWS)
<p align="center">
<a href="https://pypi.org/project/lexmark-security-auditor/">
<img src="https://img.shields.io/pypi/v/lexmark-security-auditor.svg?cacheSeconds=300" alt="PyPI Version">
</a>
<a href="https://pypi.org/project/lexmark-security-auditor/">
<img src="https://img.shields.io/pypi/pyversions/lexmark-security-auditor.svg?cacheSeconds=300" alt="Python Versions">
</a>
<a href="https://github.com/hacktivism-github/netauto/blob/development/LICENSE">
<img src="https://img.shields.io/github/license/hacktivism-github/netauto.svg" alt="MIT License">
</a>
</p>
Enterprise-grade security auditing and hardening tool for Lexmark MX710 (and compatible models) via Embedded Web Server (EWS).
Built with Playwright + Python, this tool enables controlled, automated security enforcement at scale.
## Overview
The __Lexmark Security Auditor__ was developed to:
- Audit administrative exposure on Lexmark printers
- Enforce Basic Security (username/password protection)
- Disable insecure services (e.g., TCP 80 – HTTP)
- Operate at scale across multiple devices
- Provide CSV/JSON reporting for governance & compliance
Designed with a __modular architecture__, the tool separates:
- Authentication logic
- Port configuration logic
- Security workflows
- Runner orchestration
- CLI interface
## Key Features
### Security Audit
- Detects if admin/security pages are:
- OPEN
- AUTH required
- UNKNOWN
- Identifies exposure via:
- /auth/manageusers.html
- login redirects
- HTTP status codes
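A hedged sketch of what such a classifier might look like (hypothetical helper for illustration; the tool's real probe logic lives in `probe.py`):

```python
from typing import Optional


def classify_exposure(status: int, location: Optional[str] = None) -> str:
    """Classify admin-page exposure from an HTTP status and redirect target.

    Illustrative only: OPEN = page served without auth, AUTH = login required,
    UNKNOWN = anything else (timeouts, server errors, unexpected codes).
    """
    if status == 200 and not (location and "login" in location.lower()):
        return "OPEN"
    if status in (301, 302, 303, 307, 308) and location and "login" in location.lower():
        return "AUTH"
    if status in (401, 403):
        return "AUTH"
    return "UNKNOWN"


print(classify_exposure(200))                         # OPEN
print(classify_exposure(302, "/cgi-bin/login.html"))  # AUTH
print(classify_exposure(500))                         # UNKNOWN
```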
### Basic Security Enforcement
Automates:
1. Navigate to:
```
/cgi-bin/dynamic/config/config.html
```
2. Access
```
Configurações → Segurança → Configuração de segurança (Settings → Security → Security Setup)
```
3. Configure:
- Authentication Type: ```UsernamePassword```
- Admin ID
- Password
4. Apply configuration
---
### HTTP Hardening (TCP 80 Disable)
- Authenticates via form-based login
- Navigates to:
```
/cgi-bin/dynamic/config/secure/ports.html
```
- Unchecks
```
TCP 80 (HTTP)
```
- Submits configuration
- Verifies idempotently
- Performs logout
✔ Idempotent (safe to run multiple times)
✔ Safe retry logic
✔ Session-aware
---
## Architecture
```
lexmark_security_auditor/
│
├── cli.py
├── runner.py
│
├── models.py
├── ews_client.py
│
└── workflows/
├── auth.py
├── basic_security.py
├── probe.py
└── ports.py
```
---
| Module | Responsibility |
| ------------------- | ------------------------------ |
| `runner.py` | Orchestration & decision logic |
| `auth.py` | Session handling & login |
| `ports.py` | TCP 80 disable logic |
| `basic_security.py` | Admin security enforcement |
| `probe.py` | Exposure detection |
| `ews_client.py` | EWS navigation abstraction |
---
### Architecture Diagram

### Execution Flow (w/ login + disable)

---
## Installation (Development Mode)
From project root:
```
pip install -e .
```
This enables:
```
lexmark-audit ...
```
Or:
```
python -m lexmark_security_auditor.cli ...
```
---
## Usage Examples
__Note:__ If you're on PowerShell, replace the ``` \ ``` with ``` ` ```
### Audit Only
```
lexmark-audit \
--hosts printers.txt \
--https
```
### Apply Basic Security
```
lexmark-audit \
--hosts printers.txt \
--https \
--apply-basic-security \
--new-admin-user <admin user ID> \
--new-admin-pass "Password"
```
### Disable HTTP (Authenticated)
```
lexmark-audit \
--hosts printers.txt \
--https \
--disable-http \
--auth-user <admin user ID> \
--auth-pass "Password"
```
### With Reporting
```
lexmark-audit \
--hosts printers.txt \
--https \
--disable-http \
--auth-user <admin user ID> \
--auth-pass "Password" \
--report-csv report.csv
```
---
## Output Fields (CSV/JSON)
| Field | Description |
| ---------------------- | --------------------- |
| host | Printer IP |
| probe_result | OPEN / AUTH / UNKNOWN |
| evidence | Detection details |
| basic_security_applied | Boolean |
| http_disabled | Boolean |
| status | ok / timeout / error |
| error | Error message |
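A row with these fields can be serialized with the standard library's `csv` module; a minimal illustrative sketch (the sample values below are made up, and this is not the tool's internal reporting code):

```python
import csv
import io

# Field names taken from the table above.
FIELDS = ["host", "probe_result", "evidence", "basic_security_applied",
          "http_disabled", "status", "error"]

row = {
    "host": "192.0.2.10",
    "probe_result": "AUTH",
    "evidence": "redirect to /cgi-bin/login",
    "basic_security_applied": True,
    "http_disabled": True,
    "status": "ok",
    "error": "",
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```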
---
## Security Considerations
- Credentials are passed via CLI (consider secure vault integration)
- HTTPS recommended
- Designed for internal network use
- Session cookies handled via Playwright context
- Idempotent operations to avoid configuration drift
## Design Principles
- Modular
- Idempotent
- Stateless between hosts
- Session-aware
- Explicit authentication
- Clear separation of concerns
- Enterprise reporting ready
## Requirements
- Python 3.9+
- Playwright
- Chromium (installed via playwright install)
## Roadmap (Future Enhancements)
- Vault integration (HashiCorp)
- SNMP configuration hardening
- Parallel host execution
- Compliance summary dashboard
- Unit test coverage
- Docker container image
## Disclaimer
This tool is provided __"as is"__, without warranty of any kind, express or implied.
```lexmark-security-auditor``` performs automated configuration changes on network-connected devices (e.g., enabling Basic Security, modifying TCP/IP port access settings). Improper use may result in:
- Loss of remote access to devices
- Service disruption
- Configuration lockout
- Network communication impact
The author assumes __no liability__ for any damage, data loss, service interruption, or operational impact resulting from the use of this software.
### Intended Use
This tool is intended for:
- Authorized administrators
- Controlled environments
- Lab validation prior to production rollout
- Security hardening under change-management processes
You are solely responsible for:
- Ensuring proper authorization before accessing devices
- Validating configuration changes in a test environment
- Backing up device configurations prior to execution
- Following your organization's change control policies
### Security Responsibility
Disabling TCP 80 (HTTP) and enforcing authentication may restrict access methods. Ensure that:
- HTTPS (TCP 443) remains enabled
- Valid administrative credentials are known
- Recovery procedures are documented
### No Vendor Affiliation
This project is not affiliated with, endorsed by, or supported by Lexmark International, Inc.
---
## License
This project is licensed under the **MIT License**.
See [`LICENSE`](https://github.com/hacktivism-github/netauto/blob/development/LICENSE) for details.
---
## Contributions
Pull requests, issues, and feature requests are welcome!
---
## Author
Bruno Teixeira
Network & Security Automation — Angola
| text/markdown | Bruno Teixeira | null | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"playwright>=1.40"
] | [] | [] | [] | [
"Homepage, https://github.com/hacktivism-github/netauto/tree/development/lexmark-security-auditor",
"Repository, https://github.com/hacktivism-github/netauto"
] | twine/6.2.0 CPython/3.13.4 | 2026-02-18T13:46:26.560070 | lexmark_security_auditor-0.1.5.tar.gz | 14,639 | 10/48/8aba53d1e631101b5ae929bec67f606571b8594187d938591d5a97794766/lexmark_security_auditor-0.1.5.tar.gz | source | sdist | null | false | 7031145dbdbe4c7aaddec6a523fad7ab | 0c396eb7585dc169a8e4320391170f08a195b57b2d656914a79ba73f99b813b3 | 10488aba53d1e631101b5ae929bec67f606571b8594187d938591d5a97794766 | null | [] | 251 |
2.4 | copaw | 0.0.2 | CoPaw is a **personal assistant** that runs in your own environment. It talks to you over multiple channels (DingTalk, Feishu, QQ, Discord, iMessage, etc.) and runs scheduled tasks according to your configuration. **What it can do is driven by Skills — the possibilities are open-ended.** Built-in skills include cron, PDF/Office handling, news digest, file reading, and more; you can add custom skills. All data and tasks run on your machine; no third-party hosting. | <div align="center">
# CoPaw
[](http://copaw.agentscope.com/)
[](https://www.python.org/downloads/)
[](https://modelscope.cn/studios/fork?target=AgentScope/CoPaw)
[](LICENSE)
[[Documentation](http://copaw.agentscope.com/)] [[Try ModelScope](https://modelscope.cn/studios/fork?target=AgentScope/CoPaw)] [[中文 README](README_zh.md)]
<p align="center">
<img src="https://img.alicdn.com/imgextra/i1/O1CN01tvT5rg1JHQNRP8tXR_!!6000000001003-2-tps-1632-384.png" alt="CoPaw Logo" width="120">
</p>
<p align="center"><b>Works for you, grows with you.</b></p>
Your personal AI assistant: easy to install, deployable on your own machine or in the cloud, with support for multiple chat apps and easily extensible capabilities.
> **Core capabilities:**
>
> **Every channel** — DingTalk, Feishu, QQ, Discord, iMessage, and more. One assistant, connect as you need.
>
> **Under your control** — Memory and personalization under your control. Deploy locally or in the cloud; scheduled reminders to any channel.
>
> **Skills** — Built-in cron; custom skills in your workspace, auto-loaded. No lock-in.
>
> <details>
> <summary><b>What you can do</b></summary>
>
> <br>
>
> Social: daily digest of hot posts (Xiaohongshu, Zhihu, Reddit), Bilibili/YouTube summaries.
> Productivity: newsletter digests to DingTalk/Feishu/QQ, contacts from email/calendar.
> Creative: describe your goal, run overnight, get a draft next day.
> Research: track tech/AI news, personal knowledge base.
> Desktop: organize files, read/summarize docs, request files in chat.
> Explore: combine Skills and cron into your own agentic app.
>
> </details>
</div>
---
## Table of Contents
> **Recommended reading:**
>
> - **I want to run CoPaw in 3 commands**: [Quick Start](#-quick-start) → open Console in browser.
> - **I want to chat in DingTalk / Feishu / QQ**: [Quick Start](#-quick-start) → [Channels](http://copaw.agentscope.com/docs/channels).
> - **I don’t want to install Python**: [ModelScope one-click](https://modelscope.cn/studios/fork?target=AgentScope/CoPaw).
- [Quick Start](#-quick-start)
- [Documentation](#-documentation)
- [Install from source](#-install-from-source)
- [Why CoPaw?](#-why-copaw)
- [Built by](#-built-by)
- [License](#-license)
---
## Quick Start
### Prerequisites
- Python 3.10 – 3.13
- pip
### Installation
```bash
pip install copaw
copaw init --defaults # or: copaw init (interactive)
copaw app
```
Then open **http://127.0.0.1:8088/** in your browser for the Console (chat with CoPaw, configure the agent). To talk in DingTalk, Feishu, QQ, etc., add a channel in the [docs](http://copaw.agentscope.com/docs/channels).

**No Python?** [ModelScope Studio](https://modelscope.cn/studios/fork?target=AgentScope/CoPaw) one-click setup (no local install). Set your Studio to **non-public** so others cannot control your CoPaw.
---
## Documentation
| Topic | Description |
|-------|-------------|
| [Introduction](http://copaw.agentscope.com/docs/intro) | What CoPaw is and how you use it |
| [Quick start](http://copaw.agentscope.com/docs/quickstart) | Install and run (local or ModelScope Studio) |
| [Console](http://copaw.agentscope.com/docs/console) | Web UI for chat and agent config |
| [Channels](http://copaw.agentscope.com/docs/channels) | DingTalk, Feishu, QQ, Discord, iMessage, and more |
| [Heartbeat](http://copaw.agentscope.com/docs/heartbeat) | Scheduled check-in or digest |
| [CLI](http://copaw.agentscope.com/docs/cli) | Init, cron jobs, skills, clean |
| [Skills](http://copaw.agentscope.com/docs/skills) | Extend and customize capabilities |
| [Config](http://copaw.agentscope.com/docs/config) | Working directory and config file |
Full docs in this repo: [website/public/docs/](website/public/docs/).
---
## Install from source
```bash
git clone https://github.com/agentscope-ai/CoPaw.git
cd CoPaw
pip install -e .
```
- **Dev** (tests, formatting): `pip install -e ".[dev]"`
- **Console** (build frontend): `cd console && npm ci && npm run build`, then `copaw app` from project root.
---
## Why CoPaw?
CoPaw stands for both **Co Personal Agent Workstation** and "co-paw", a partner always by your side. More than a cold tool, CoPaw is a warm "little paw" always ready to lend a hand (or a paw!). It is the ultimate teammate for your digital life.
---
## Built by
[AgentScope team](https://github.com/agentscope-ai) · [AgentScope](https://github.com/agentscope-ai/agentscope) · [AgentScope Runtime](https://github.com/agentscope-ai/agentscope-runtime) · [ReMe](https://github.com/agentscope-ai/ReMe)
---
## License
CoPaw is released under the [Apache License 2.0](LICENSE).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <=3.13,>=3.10 | [] | [] | [] | [
"agentscope==1.0.16.dev0",
"agentscope-runtime==1.1.0b2",
"discord-py>=2.3",
"dingtalk-stream>=0.24.3",
"uvicorn>=0.40.0",
"apscheduler>=3.11.2",
"playwright>=1.49.0",
"questionary>=2.1.1",
"mss>=9.0.0",
"reme-ai==0.3.0.0a9",
"transformers>=4.30.0",
"python-dotenv>=1.0.0",
"onnxruntime<1.24"... | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T13:44:45.452347 | copaw-0.0.2.tar.gz | 7,435,264 | ad/47/b13326eab4196323f1f456ddc51fc58d383f18c2076ebb58314c2a51706c/copaw-0.0.2.tar.gz | source | sdist | null | false | f673a73c5023c9b78e5bb3fa54f6354c | 3c9e496a5900e19cf9d095717b564519a85bac88f8f55c85362814a2968b5ec4 | ad47b13326eab4196323f1f456ddc51fc58d383f18c2076ebb58314c2a51706c | null | [
"LICENSE"
] | 1,195 |
2.4 | plopp | 26.2.1 | Visualization library for Scipp | <img src="docs/_static/logo.svg" width="50%" />
[](CODE_OF_CONDUCT.md)
[](https://pypi.python.org/pypi/plopp)
[](https://anaconda.org/conda-forge/plopp)
[](https://scipp.github.io/plopp/)
[](LICENSE)
[](https://zenodo.org/badge/latestdoi/528859752)
# Plopp
## About
Visualization library for Scipp
## Installation
```sh
python -m pip install plopp
```
| text/markdown | Scipp contributors | null | null | null | null | null | [
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language ... | [] | null | null | >=3.11 | [] | [] | [] | [
"lazy-loader>=0.4",
"matplotlib>=3.8",
"scipp>=25.5.0; extra == \"scipp\"",
"scipp>=25.5.0; extra == \"all\"",
"ipympl>0.8.4; extra == \"all\"",
"pythreejs>=2.4.1; extra == \"all\"",
"mpltoolbox>=24.6.0; extra == \"all\"",
"ipywidgets>=8.1.0; extra == \"all\"",
"graphviz>=0.20.3; extra == \"all\"",
... | [] | [] | [] | [
"Bug Tracker, https://github.com/scipp/plopp/issues",
"Documentation, https://scipp.github.io/plopp",
"Source, https://github.com/scipp/plopp"
] | twine/6.1.0 CPython/3.12.8 | 2026-02-18T13:44:20.248418 | plopp-26.2.1.tar.gz | 1,309,472 | f8/db/ceaa6fb49095c1bdec7af4469ed1943213451bf415b170426db9249a38d1/plopp-26.2.1.tar.gz | source | sdist | null | false | 5977ce17d25d0ce68d65ac3225ae0449 | f559a246535baa205910d9e370d16c3d70467911454de5e5052fce50b0c1e1be | f8dbceaa6fb49095c1bdec7af4469ed1943213451bf415b170426db9249a38d1 | BSD-3-Clause | [
"LICENSE"
] | 727 |
2.4 | magicmoon | 0.0.1 | MOON (Magic Oriented Object Notation) library | # MOON (Magic Oriented Object Notation) [Under development!]
[](https://github.com/magicaleks/moon/actions/workflows/ci.yml)
[](https://pypi.org/project/magicmoon/)
[](https://www.python.org/downloads/)
[](LICENSE)
A framework for working with MOON in Python.
MOON is a hybrid of YAML and TOON. Its distinguishing features are
multi-model data support and unlimited extensibility.
## Why MOON?
You can define your own tags and types and implement
whatever logic they require.
A tag is a directive marking the start of a particular data structure,
for example `@object` or `@array`. Custom tags can be defined
through the hook system.
The primary intended use case is application configuration.
## Quick start
Installation:
```shell
pip install magicmoon
```
Usage:
```python
import moon
obj = moon.load("./example.moon")
print(obj["context"])
```
## Reference Python implementation (v3.12.6)
The public interface is implemented in [_api.py](/moon/_api.py). It is imported not directly, but from moon (`__init__.py`).
`load(file_path)` - parses a file into an `object`. Both input kinds are supported: a string (buffer) as well as a file.
`dump(magicked_data, file_path)` - saves an object to a file.
The framework core, [lib](/moon/core), defines the internal machinery.
How it works:
From MOON to a Python object:
1. Tokenization: [tokenizer.py](/moon/core/tokenizer.py) walks the whole file and splits
it into tokens - the lexical parts of the file, such as `word`, `colon`, `tab`.
2. Parsing: [parser.py](/moon/core/parser.py) parses the tokens into an event stream.
Events are declarative descriptions, e.g. `tag_start` - a structure for a given tag has begun.
3. Composition: [composer.py](/moon/core/composer.py) builds an AST (Abstract Syntax Tree) from the event stream.
4. Construction: [constructor.py](/moon/core/constructor.py) assembles the final Python `dict[str, Any]` from the AST.
At this same stage, all values marked as `ScalarNode` in the AST go through type resolution. The supported types
can be extended via `TypeHook`.
From a Python object to MOON:
1. Representation as an AST: [representer.py](/moon/core/representer.py) converts a Python `dict[str, Any]` into an AST, using
the inverse representation methods of `TypeHook`.
2. Serialization: [serializer.py](/moon/core/serializer.py) converts the AST into an event stream.
3. Emission: [emitter.py](/moon/core/emitter.py) generates the final MOON text from the event stream.
The [schemas](/moon/schemas) module holds all the type definitions:
1. The set of exceptions: [errors.py](/moon/schemas/errors.py)
2. The model and types of tokens, the lexemes the source MOON is split into: [tokens.py](/moon/schemas/tokens.py)
3. The model and types of events, declarative descriptions of what happens in the MOON document: [events.py](/moon/schemas/events.py)
4. The base AST nodes, including `TagNode`, the base node for every tag: [nodes.py](/moon/schemas/nodes.py)
The pipelines are built around two core abstractions:
1. `StatefulStreamer` - an abstract class implementing a stream-to-stream (iterable-to-iterable) interface.
2. `TagHook` - a per-tag hook. The pipeline contains no hard-coded parsing of tag structures; at every
stage, the corresponding method of the hook defined for that tag is invoked.
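The stream-to-stream principle can be sketched in plain Python. The names `StreamStage`, `Tokenize`, `Upper`, and `run_pipeline` below are illustrative assumptions, not the library's actual `StatefulStreamer` API:

```python
from abc import ABC, abstractmethod
from typing import Iterable, Iterator

class StreamStage(ABC):
    """Illustrative stream-to-stream stage (hypothetical; not the real StatefulStreamer)."""

    @abstractmethod
    def process(self, items: Iterable) -> Iterator:
        ...

class Tokenize(StreamStage):
    # Splits each input line into word tokens.
    def process(self, lines: Iterable[str]) -> Iterator[str]:
        for line in lines:
            yield from line.split()

class Upper(StreamStage):
    # Uppercases each token - a stand-in for a later pipeline stage.
    def process(self, tokens: Iterable[str]) -> Iterator[str]:
        for tok in tokens:
            yield tok.upper()

def run_pipeline(stages, source):
    stream = source
    for stage in stages:
        stream = stage.process(stream)  # stages are lazily chained
    return list(stream)

print(run_pipeline([Tokenize(), Upper()], ["user: Alex", "context:"]))
# -> ['USER:', 'ALEX', 'CONTEXT:']
```

Each stage consumes an iterable and yields lazily, so the whole chain streams without materializing intermediate lists.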
## Specification
The original framework ships with the following tags:
1. `@object` - named objects with nesting support and YAML-like syntax.
The `@object` tag is followed by a name, then `key: value` pairs.
```moon
// Can be commented
@object Config
user: Alex
password: something-secret
context:
  revision: 177
friends_list: danil,vladimir
```
In the Python representation:
```python
magicked_data = {
    "Config": {
        "user": "Alex",
        "password": "something-secret",
        "context": {
            "revision": 177
        },
        "friends_list": ["danil", "vladimir"]
    }
}
```
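The scalar typing shown above (`177` becomes an `int`, `danil,vladimir` becomes a list) can be pictured as a chain of attempts. The `resolve_scalar` function is a hypothetical illustration, not the library's actual `TypeHook` mechanism:

```python
def resolve_scalar(raw: str):
    """Hypothetical scalar type resolution, mimicking what a TypeHook chain might do."""
    raw = raw.strip()
    try:
        return int(raw)          # "177" -> 177
    except ValueError:
        pass
    if "," in raw:               # "danil,vladimir" -> list of strings
        return [part.strip() for part in raw.split(",")]
    return raw                   # fall back to a plain string

assert resolve_scalar("177") == 177
assert resolve_scalar("danil,vladimir") == ["danil", "vladimir"]
assert resolve_scalar("Alex") == "Alex"
```

Each hook gets a chance to claim the value; the first successful conversion wins.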
## License
The project was initiated, developed, and is maintained by Aleksandr @magicaleks.
Distributed under the Apache-2.0 license.
| text/markdown | null | "Aleksandr @magicaleks" <aleksandr@magicaleks.me> | null | null | Apache-2.0 | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest>=8; extra == \"dev\"",
"pytest-dependency>=0.6; extra == \"dev\"",
"pytest-order>=1.3; extra == \"dev\"",
"ruff>=0.2; extra == \"dev\"",
"mypy>=1.8; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/magicaleks/moon",
"Issues, https://github.com/magicaleks/moon/issues"
] | twine/6.2.0 CPython/3.12.6 | 2026-02-18T13:44:05.188308 | magicmoon-0.0.1.tar.gz | 17,540 | a4/73/2700061b6572209b34594c6b9b0ad4414f346b5bc54a4d04352cf527ed51/magicmoon-0.0.1.tar.gz | source | sdist | null | false | fcb9a1e8a4fd9762966205528766df26 | b4dc9ed22942bd588b0414d2f557b63f9bdd53ea4c1ea160a4f0d91e4a979c7e | a4732700061b6572209b34594c6b9b0ad4414f346b5bc54a4d04352cf527ed51 | null | [
"LICENSE"
] | 255 |
2.4 | polars-pf | 1.1.7 | PFrames - Python Polars extensions | # PFrames - Polars extension
This package contains:
- An IO source for Polars that allows opening a PFrame as a LazyFrame;
- Filters that are missing from Polars and are required for PFrame processing.
This package is used solely in [Ptabler](https://github.com/milaboratory/platforma/tree/ef22c4968b897d7f6a71b6e359d2f394100ac732/lib/ptabler/software).
| text/markdown; charset=UTF-8; variant=GFM | null | MiLaLaboratories <support@milaboratories.com> | null | MiLaLaboratories <support@milaboratories.com> | null | polars-extension | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"msgspec<0.21.0,>=0.19.0",
"polars-lts-cpu<2.0.0,>=1.33.0",
"typing-extensions<5.0.0,>=4.15.0"
] | [] | [] | [] | [
"Homepage, https://github.com/milaboratory/pframes-rs",
"Repository, https://github.com/milaboratory/pframes-rs",
"Documentation, https://github.com/milaboratory/pframes-rs#readme",
"Bug Tracker, https://github.com/milaboratory/platforma/issues"
] | maturin/1.12.2 | 2026-02-18T13:42:50.990093 | polars_pf-1.1.7-cp310-abi3-win_amd64.whl | 30,140,650 | f2/eb/be8be499b096ac9cebf1d28b9d814885a9a7b1348e1912275d89891745c8/polars_pf-1.1.7-cp310-abi3-win_amd64.whl | cp310 | bdist_wheel | null | false | 76d139a1dfbae5b328fc9914154b2878 | 3c79afc3954d22e3b54e77f1dad3d290818206c3b1cf75466636bad55471587f | f2ebbe8be499b096ac9cebf1d28b9d814885a9a7b1348e1912275d89891745c8 | null | [] | 384 |
2.4 | llms-py | 3.0.34 | A lightweight CLI tool and OpenAI-compatible server for querying multiple Large Language Model (LLM) providers | # llms.py
Lightweight CLI, API and ChatGPT-like alternative to Open WebUI for accessing multiple LLMs, entirely offline, with all data kept private in browser storage.
[llmspy.org](https://llmspy.org)
[](https://llmspy.org)
GitHub: [llmspy.org](https://github.com/ServiceStack/llmspy.org)
| text/markdown | ServiceStack | ServiceStack <team@servicestack.net> | null | ServiceStack <team@servicestack.net> | null | llm, ai, openai, anthropic, google, gemini, groq, mistral, ollama, cli, server, chat, completion | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming La... | [] | https://github.com/ServiceStack/llms | null | >=3.7 | [] | [] | [] | [
"aiohttp"
] | [] | [] | [] | [
"Homepage, https://github.com/ServiceStack/llms",
"Documentation, https://github.com/ServiceStack/llms#readme",
"Repository, https://github.com/ServiceStack/llms",
"Bug Reports, https://github.com/ServiceStack/llms/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:42:41.442722 | llms_py-3.0.34.tar.gz | 2,534,807 | 2c/c8/90019decb2d5bb0de1fa661214a6b6a12baa44a9871308c6871bbe5f9d36/llms_py-3.0.34.tar.gz | source | sdist | null | false | bf77f6348e2d217cbfbc97487c2e9895 | 42f551e6fc8dac387521675b8b405a0c22f5035046a84f0d30faac438aac91dd | 2cc890019decb2d5bb0de1fa661214a6b6a12baa44a9871308c6871bbe5f9d36 | BSD-3-Clause | [
"LICENSE"
] | 286 |
2.4 | aphub | 0.1.0 | AI Agent Hub CLI | # aphub
Command-line tool for aphub - The Docker Hub for AI Agents
## Installation
```bash
pip install aphub
```
## Quick Start
### Login
```bash
aphub login
# or
aphub login --username myuser --password mypass
```
### Push Agent
```bash
aphub push ./agent.yaml --tag latest --files ./agent-files/
```
### Pull Agent
```bash
aphub pull customer-service-agent
aphub pull customer-service-agent --tag 1.0.0 --output ./agents/
```
### Search Agents
```bash
aphub search "customer service"
aphub search chatbot --framework aipartnerupflow --limit 50
```
### Get Agent Info
```bash
aphub info customer-service-agent
aphub info myorg/my-agent
```
### List Tags
```bash
aphub tags customer-service-agent
```
### Get Manifest
```bash
aphub manifest customer-service-agent --tag 1.0.0
aphub manifest customer-service-agent --tag 1.0.0 --output manifest.yaml
```
## Commands
| Command | Description |
|---------|-------------|
| `aphub login` | Login to hub.aipartnerup.com |
| `aphub logout` | Logout and clear saved credentials |
| `aphub push <manifest>` | Push an Agent to the registry |
| `aphub pull <name>` | Pull an Agent from the registry |
| `aphub search <query>` | Search for Agents |
| `aphub info <name>` | Get detailed agent information |
| `aphub tags <name>` | List all tags for an agent |
| `aphub manifest <name>` | Get agent manifest |
| `aphub version` | Show version information |
## Features
### Progress Bars
Upload and download progress is automatically displayed for large files:
```bash
aphub pull my-agent # Shows progress bar
aphub pull my-agent --no-progress # Disable progress bar
```
### Token Auto-Refresh
Tokens are automatically refreshed when expired. No manual intervention needed!
### Configuration
#### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `APHUB_URL` | Hub API URL | `https://hub.aipartnerup.com` |
| `APHUB_API_KEY` | API key for authentication | - |
#### Config File
Configuration is stored in:
- macOS/Linux: `~/.config/aphub/config.json`
- Windows: `%APPDATA%/aphub/config.json`
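As a rough sketch, the per-OS path above could be resolved with the standard library like this (illustrative only; the CLI actually depends on the `appdirs` package, so its real resolution may differ):

```python
import os
import sys
from pathlib import Path

def config_path() -> Path:
    """Hypothetical resolution of the aphub config file location."""
    if sys.platform == "win32":
        # Windows: %APPDATA%/aphub/config.json
        base = Path(os.environ.get("APPDATA", str(Path.home() / "AppData" / "Roaming")))
    else:
        # macOS/Linux: ~/.config/aphub/config.json
        base = Path.home() / ".config"
    return base / "aphub" / "config.json"

print(config_path())
```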
## Examples
### Complete Workflow
```bash
# 1. Login
aphub login
# 2. Push your agent
aphub push ./my-agent/agent.yaml --tag 1.0.0 --files ./my-agent/
# 3. Search for agents
aphub search "customer service" --framework aipartnerupflow
# 4. Pull an agent
aphub pull customer-service-agent --tag latest --output ./agents/
# 5. Get agent details
aphub info customer-service-agent
# 6. List all versions
aphub tags customer-service-agent
```
## License
Apache-2.0
| text/markdown | null | aipartnerup <tercel.yi@gmail.com> | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aphub-sdk>=0.1.0",
"typer>=0.9.0",
"rich>=13.7.0",
"pyyaml>=6.0.1",
"appdirs>=1.4.4",
"httpx>=0.25.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"twi... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-18T13:41:30.729878 | aphub-0.1.0.tar.gz | 10,096 | da/b8/b4fc90b9d10a40e4546aafe8b4a2dde32f730ac708359f00b79e243ccd28/aphub-0.1.0.tar.gz | source | sdist | null | false | f64504d9149375aca14281933b851c19 | 7df0da0d30d952e919b28558485ed9b384285c5b967c04e4a8060f8937dafbb1 | dab8b4fc90b9d10a40e4546aafe8b4a2dde32f730ac708359f00b79e243ccd28 | null | [] | 266 |
2.4 | earthkit-data | 0.18.6 | A format-agnostic Python interface for geospatial data | <p align="center">
<picture>
<source srcset="https://github.com/ecmwf/logos/raw/refs/heads/main/logos/earthkit/earthkit-data-dark.svg" media="(prefers-color-scheme: dark)">
<img src="https://github.com/ecmwf/logos/raw/refs/heads/main/logos/earthkit/earthkit-data-light.svg" height="120">
</picture>
</p>
<p align="center">
<a href="https://github.com/ecmwf/codex/raw/refs/heads/main/ESEE">
<img src="https://github.com/ecmwf/codex/raw/refs/heads/main/ESEE/foundation_badge.svg" alt="ECMWF Software EnginE">
</a>
<a href="https://github.com/ecmwf/codex/raw/refs/heads/main/Project Maturity">
<img src="https://github.com/ecmwf/codex/raw/refs/heads/main/Project Maturity/incubating_badge.svg" alt="Maturity Level">
</a>
<!-- <a href="https://codecov.io/gh/ecmwf/earthkit-data">
<img src="https://codecov.io/gh/ecmwf/earthkit-data/branch/main/graph/badge.svg" alt="Code Coverage">
</a> -->
<a href="https://opensource.org/licenses/apache-2-0">
<img src="https://img.shields.io/badge/Licence-Apache 2.0-blue.svg" alt="Licence">
</a>
<a href="https://github.com/ecmwf/earthkit-data/releases">
<img src="https://img.shields.io/github/v/release/ecmwf/earthkit-data?color=purple&label=Release" alt="Latest Release">
</a>
<!-- <a href="https://earthkit-data.readthedocs.io/en/latest/?badge=latest">
<img src="https://readthedocs.org/projects/earthkit-data/badge/?version=latest" alt="Documentation Status">
</a> -->
</p>
<p align="center">
<a href="#quick-start">Quick Start</a>
•
<a href="#installation">Installation</a>
•
<a href="https://earthkit-data.readthedocs.io/en/latest/">Documentation</a>
</p>
> [!IMPORTANT]
> This software is **Incubating** and subject to ECMWF's guidelines on [Software Maturity](https://github.com/ecmwf/codex/raw/refs/heads/main/Project%20Maturity).
**earthkit-data** is a format-agnostic interface for geospatial data with a focus on meteorology and
climate science. It is the data handling component of [earthkit](https://github.com/ecmwf/earthkit).
## Quick Start
```python
import earthkit.data as ekd
data = ekd.from_source("sample", "test.grib")
arr = data.to_numpy()
df = data.to_pandas()
dataset = data.to_xarray()
```
## Installation
Install from PyPI:
```
pip install earthkit-data
```
More details, such as optional dependencies, can be found at https://earthkit-data.readthedocs.io/en/latest/install.html.
Alternatively, install via `conda` with:
```
$ conda install earthkit-data -c conda-forge
```
## Licence
```
Copyright 2022, European Centre for Medium Range Weather Forecasts.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
In applying this licence, ECMWF does not waive the privileges and immunities
granted to it by virtue of its status as an intergovernmental organisation
nor does it submit to any jurisdiction.
```
| text/markdown | null | "European Centre for Medium-Range Weather Forecasts (ECMWF)" <software.support@ecmwf.int> | null | null | Apache License Version 2.0 | null | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Progr... | [] | null | null | >=3.9 | [] | [] | [] | [
"cfgrib>=0.9.10.1",
"dask",
"deprecation",
"earthkit-meteo<0.6",
"earthkit-utils<0.2",
"eccodes>=1.7",
"entrypoints",
"filelock",
"jinja2",
"jsonschema",
"lru-dict",
"markdown",
"multiurl>=0.3.3",
"netcdf4",
"pandas",
"pdbufr>=0.11",
"pyyaml",
"tqdm>=4.63",
"xarray>=0.19",
"ear... | [] | [] | [] | [
"Documentation, https://earthkit-data.readthedocs.io/",
"Homepage, https://github.com/ecmwf/earthkit-data/",
"Issues, https://github.com/ecmwf/earthkit-data.issues",
"Repository, https://github.com/ecmwf/earthkit-data/"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T13:41:12.501428 | earthkit_data-0.18.6.tar.gz | 5,551,713 | 95/20/10dacc49aecd260ea927de11a590eaf46af92b8563eb6bebbf0732df4fc5/earthkit_data-0.18.6.tar.gz | source | sdist | null | false | 256a365da23d5c92b654cfe9dedc0710 | 08a92c41aacdb78559ca793bb38f05294aa52be7818322d7bb2f825f0d6c755a | 952010dacc49aecd260ea927de11a590eaf46af92b8563eb6bebbf0732df4fc5 | null | [
"LICENCE"
] | 387 |
2.4 | quarchpy | 2.2.17 | This packpage offers Python support for Quarch Technology modules. | ====================
Changelog (Quarchpy)
====================
Quarchpy
--------
*QuarchPy is a python package designed to provide an easy-to-use API which will work seamlessly over any connection option: USB, Serial and LAN. With it, you can create your own scripts for controlling Quarch devices - without having to worry about the low-level code involved with communication.*
*The package contains all prerequisites needed as a single PyPI project, which can be installed from the python online repository using PIP. This makes it easy to install and update, while also providing you with full access to the source code if you want to make changes or additions.*
*QuarchPy can also communicate to the device via QIS (Quarch Instrument Server), QPS (Quarch Power Studio), or simply using Python scripts. Both QIS and QPS are included with the QuarchPy Package - they are ready to use (Java and drivers may also be needed).*
Change Log
----------
2.2.17
------
- New QPS 1.51 and QIS 1.53
- Minor bug fixes
2.2.16
------
- New QPS 1.50 and QIS 1.52
2.2.15
------
- Bugfix for ">" appearing in qis output when streaming at max speed
2.2.14
------
- New QPS 1.49 and QIS 1.51
2.2.13
------
- New QPS 1.48 and QIS 1.50
- Lightweight Quarchpy, QPS and QIS now downloaded on first use after install.
- Minor bug fixes
2.2.12
------
- Minor bug fix
2.2.11
------
- Minor bug fix
2.2.10
------
- Fix for new HDPPM FW discovery over Ethernet
- Minor bug fix
2.2.9
-----
- New QPS 1.47 and QIS 1.49
2.2.8
-----
- New QPS 1.46 and QIS 1.48
2.2.7
-----
- Bug fix for QIS 1.47 missing lib for linux only
2.2.6
-----
- New QPS 1.45 and QIS 1.47
2.2.5
-----
- Minor Bug fix and removal of redundant jar
2.2.4
-----
- Update to Java libraries to run QPS
- Removal of deprecated libs, saving space
2.2.3
-----
- Minor bug fix
2.2.2
-----
- New QPS 1.44 and QIS 1.46
- Added support for automatic creation of default synthetic channels when connecting to module via QIS
- Minor bug fixes
2.2.1
-----
- New QPS v1.43 and QIS v1.45 packaged with java 21 with no need for installed java.
- Minor bug fixes
2.2.0
-----
- New QPS v1.42 and QIS v1.44 packaged with java 21 with no need for installed java.
- Minor bug fixes
2.1.26
------
- minor bugfix
2.1.25
------
- New QPS 1.40 and Qis 1.43
- mdns scanning added to quarchpy
2.1.24
------
- Yanked
2.1.23
------
- QIS and QPS devices and interfaces can use sendCommand to send commands to the modules and to the applications uniformly
- Tidy up of print statements and comments.
2.1.22
------
- QIS and QPS patch containing mDNS removal
2.1.21
------
- New QPS v1.38 and QIS 1.41
- Minor bug fixes
2.1.20
------
- Improved direct IP scanning for quarch modules
- New QPS v1.37 and QIS v1.40
2.1.19
------
- Improved QIS streaming
- Bug fixes
- Added zeroconf, numpy and pandas as requirements
2.1.18
------
- Minor bug fix
2.1.17
------
- Improved QIS and QPS launching on Linux systems
- System debug for Linux systems
2.1.16
------
- FIO mb/s parsing
- Improved QIS QPS launching
2.1.15
------
- minor bug fix
2.1.14
------
- minor bug fixes and logging improvements
2.1.13
------
- New QPS v1.36
- New QIS v1.39
- minor bug fixes and logging improvements
2.1.12
------
- New QPS v1.35
- New QIS v1.38
- minor bug fixes and removal of deprecated code
2.1.11
------
- New QPS v1.32
- New QIS v1.37
- quarchpy.run module_debug added for checking state of module and DUT
2.1.10
------
- New QPS v1.29
- New QIS v1.33
2.1.8
-----
- New QPS v1.28
2.1.7
-----
- New QPS v1.27
- New QIS v1.32
2.1.6
-----
- New QPS v1.26
- New QIS v1.31
2.1.5
-----
- New QPS v1.24
2.1.4
-----
- New QPS v1.23
- New QIS v1.29
2.1.3
-----
- New QPS v1.22
- modules on the network can now be connected to using conType:QTLNumber, e.g. TCP:QTL1999-02-001
- fixed QIS not closing with QPS when launched by QPS
- closeConnection added to QIS api
- display table formats multiline items and handles empty cells
2.1.2
-----
- QPS v1.20
- QIS v1.19
2.1.1
-----
- Separation of QIS module scan and QIS select device
- Added getQuarchDevice which is a wrapper around quarchDevice that allows connections to sub-devices in array controllers over all connection types
- Version compare updated to use __version__ rather than pkg_resources
- Separated the SystemTest (debug_info) into separate parts, with flags that allow the user to skip certain parts. This allows the test to run without the user interaction of selecting a module.
2.1.0
-----
- logging improvements
- usb locked devices fix for CentOS, Ubuntu, and Fedora
2.0.22
------
- Calibration and QCS removed from quarchpy and are now in their own packages
- New command "python -m quarchpy.run debug -usbfix" sets USB rules to fix quarch modules appearing as locked devices on Linux OS
2.0.21
------
- new QIS v1.23
2.0.20
------
- New modules added to calibration, wiring prompt added, logging improvements
- Fixes for PAM streaming using QIS
- Added Quarchpy.run list_drives
- Improved communication for connection_QPS
- Improved QCS debugging
- Reworked QCS drive detection for upcoming custom drive detection
- "quarchpy.run list_drives" command added
2.0.19
------
- QPS v1.17
- Quarchpy run terminal runs the simple python terminal to talk to modules
- Scan Specific IP address for Quarch module via QIS/QPS added
- Updated performance class for new QCS tests
- Fixed Centos QCS drive selection bug
- Improved QCS connection classes
- Improved features for QCS
- Minor bug fixes
2.0.18
------
- QPS 1.13
- Iometer drive location bugfix
- Units added to stats export from QPS
- Changed QCS tests to work off of a python format
- Updated drive detection in QCS
- Updated communication to TLS
2.0.16
------
- QPS 1.11
2.0.15
------
- QIS v1.19.03 and QPS 1.10.12
- Updated debug info test
- Snapshots and stats from QPS functions added
- Calibration updates
2.0.14
------
- QPS annotations through quarchpy improvements
2.0.13
------
- Python2 bug fixes
- UI tidy up
- New custom annotations and comments QPS API
2.0.12
------
- Fixed issue with array module scan over UDP outside of subnet
- Bug fix for HD connection via USB in linux
- Added headless launch of QIS
- Added Shinx auto documentation
- Fixed issue with USB command response timeout in longer QCS tests
- Fixed issue where UDP locate parser was using the legacy header, not the quarch fields
- Improved quarchpy.run parsing and help generation
- Fixed syntax warnings for string literal comparisons
- Calibration wait for specific module uptime and report file updates
2.0.11
------
- Improved list selection for devices
- Fixed bug when scanning for devices within an Array
- Module detection fixes for QCS and PAM/Rev-B HD
- Clean up of calibration switchbox code and user logging
2.0.10
------
- QCS server logging cleaned up
- Additional platform tests added to debug_info test
- Cleaned up print() statements and replaced with logging calls
- Help message added to quarchpy.run command
- Module detection fixes for QCS
- Improved calibration prompts
- Added initial calibration stubs for the PAM
- QCS improvements to linux drive enumeration tests
2.0.9
-----
- Significant QCS additions including power testing
- Added remote switchbox to calibration utility
- Various minor bug fixes and improvements to calibration utility
2.0.8
-----
- Added readme.md for PyPi description
- Fixed bug in QIS when checking if QIS is running
- Various minor additions for QCS
2.0.7
-----
- Changes since 2.0.2
- Minor bug fixes
- Calibration Changes
- QIS folder gone, QIS now in QPS only
- Run package added
- Update quarchpy added
- SystemTest improvements
- UI changes, input validation, smart port select
2.0.2
-----
- UI Package added
- Connection over TCP for python added
- Logging on devices
- Drive test core added
2.0.0
-----
- Major folder restructure
- Added test center support
- Detected streaming devices
- Added latest qps1.09 and qis
- Minor bug fixes
1.8.0
-----
- Tab to white space convert
- Updated __init__ file to align with Python practices
- Updated project structure
- Added documents for changes and Script Locations
- Disk selection update
- Compatibility with Python 3 and Linux Improved!
1.7.6
-----
- Fixes bug with usb connection
1.7.5
-----
- Fixed USB DLL Compatibility
- Fixed potential path issues with Qis and Qps open
1.7.4
-----
- Updated to QPS 1.08
1.7.3
-----
- Additional Bug Fixes
1.7.2
-----
- Bug fixing timings for QIS (LINUX + WINDOWS)
1.7.1
-----
- Updated FIO for use with Linux and to allow arguments without values
- Fixes path problem on Linux
- Fixes FIO on Linux
1.7.0
-----
- Improved compatibility with Windows and Ubuntu
1.6.1
------
- Updating USB Scan
- Adding functionality to specify OS bit architecture (windows)
1.6.0
-----
- custom $scan IP
- fixes QIS detection
- implements custom separator for stream files
- Bug fix - QIS Load
1.5.4
-----
- Updating README and LICENSE
1.5.2
-----
- Bug Fix - Case sensitivity issue with devices
1.5.1
-----
- Additional Bug Fixes
1.5.0
-----
- Integration with FIO
- Additional QPS functionality
- Added device search timeout
1.4.1
-----
- Fixed the wmi error when importing quarchpy.
1.4.0
-----
- Integration with QPS
- supports Iometer testing
- Additional fixes for wait times
1.3.4
-----
- Implemented resampling and a better way to launch QIS from the script.
1.3.3
-----
- Implements isQisRunning
- Implements qisInterface
- Changes startLocalQIS to startLocalQis
- Fixes a bug in QIS interface listDevices that didn't allow it to work with Python 3
1.3.2
-----
- Bug Fix running QIS locally
1.3.1
-----
- Implements startLocalQIS
- Packs QIS v1.6 - fixes the bugs with QIS >v1.6 and multiple modules
- Updates quarchPPM (connection_specific)
- Compatible with x6 PPM QIS stream.
1.2.0
-----
- Changes to object model
| text/x-rst | Quarch Technology ltd | support@quarch.com | null | null | Quarch Technology ltd | quarch quarchpy torridon | [
"Intended Audience :: Information Technology",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Programming Language :: Python",
"Topic :: Scientific/Engineering :: Information Analy... | [] | null | null | >=3.7 | [] | [] | [] | [
"zeroconf>=0.23.0",
"numpy",
"pandas",
"requests",
"packaging",
"quarchpy-binaries",
"typing-extensions",
"libusb-package",
"telnetlib-313-and-up; python_version >= \"3.13\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T13:41:02.393916 | quarchpy-2.2.17.tar.gz | 38,537,386 | 27/a5/e090fc1b7b42fa946387943e7d5a194cf9c326f31f44e5b7f1b1f5503ba4/quarchpy-2.2.17.tar.gz | source | sdist | null | false | ce4cf101d8eea741ae00c54455119e2c | c582820b723ca2c16e60a865ed4f691d57a954adf8e3d2da0c860363afae4215 | 27a5e090fc1b7b42fa946387943e7d5a194cf9c326f31f44e5b7f1b1f5503ba4 | null | [
"LICENSE.txt"
] | 1,036 |
2.4 | fraiseql-confiture | 0.5.2 | PostgreSQL schema evolution with built-in multi-agent coordination 🍓 | # Confiture 🍓
**PostgreSQL migrations with multi-agent coordination and 4 flexible strategies**
Build fresh databases in <1 second. Zero-downtime migrations. Multi-agent conflict detection. Production data sync with PII anonymization.
[](https://pypi.org/project/fraiseql-confiture/)
[](https://github.com/fraiseql/confiture)
[](https://www.python.org/)
[](https://www.postgresql.org/)
[](LICENSE)
---
## Why Confiture?
**Problem**: Traditional migration tools replay every migration on every build (slow, brittle, maintains technical debt).
**Solution**: DDL files are the single source of truth. Just execute your schema once. Fresh databases in <1 second.
**Multi-Agent Safe**: Automatic conflict detection prevents teams and agents from stepping on each other.
---
## Quick Start
### Installation
```bash
pip install fraiseql-confiture
```
### Basic Usage
```bash
# Initialize project
confiture init
# Write schema DDL files
vim db/schema/10_tables/users.sql
# Build database (<1 second)
confiture build --env local
# Generate and apply migrations
confiture migrate generate --name "add_bio"
confiture migrate up
```
### Team Workflow (Multi-Agent)
```bash
# Register intention before making changes
confiture coordinate register --agent-id alice --tables-affected users
# Check for conflicts (as another agent)
confiture coordinate check --agent-id bob --tables-affected users
# ⚠️ Conflict: alice is working on 'users'
# Complete when done
confiture coordinate complete --intent-id int_abc123
```
---
## Core Features
### 🛠️ Four Migration Strategies
| Strategy | Use Case | Command |
|----------|----------|---------|
| **Build from DDL** | Fresh DBs, testing | `confiture build --env local` |
| **Incremental** | Existing databases | `confiture migrate up` |
| **Production Sync** | Copy prod data (with anonymization) | `confiture sync --from production --anonymize users.email` |
| **Zero-Downtime** | Complex migrations via FDW | `confiture migrate schema-to-schema` |
### 🤝 Multi-Agent Coordination
- ✅ Automatic conflict detection
- ✅ Intent registration and tracking
- ✅ JSON output for CI/CD
- ✅ <10ms per operation
### 🌱 Seed Data Management
- ✅ Sequential execution (solves PostgreSQL parser limits on 650+ row files)
- ✅ Per-file savepoint isolation for error recovery
- ✅ Continue-on-error mode (skip failed files)
- ✅ Prep-seed validation (5-level orchestrator)
- ✅ 5-level validation (static → full execution)
- ✅ Catch NULL FKs before production
- ✅ Pre-commit safe (Levels 1-3)
- ✅ Database validation with SAVEPOINT safety
### 🔍 Git-Aware Validation
- ✅ Detect schema drift vs. main branch
- ✅ Enforce migrations for DDL changes
- ✅ Pre-commit hook support
### 🔧 Developer Experience
- ✅ Dry-run mode (analyze before applying)
- ✅ Migration hooks (pre/post)
- ✅ Schema linting
- ✅ PII anonymization
- ✅ Optional Rust extension
- ✅ Python 3.11, 3.12, 3.13
---
## Documentation
**Getting Started**: [docs/getting-started.md](docs/getting-started.md)
**Guides**:
- [Build from DDL](docs/guides/01-build-from-ddl.md)
- [Incremental Migrations](docs/guides/02-incremental-migrations.md)
- [Production Data Sync](docs/guides/03-production-sync.md)
- [Zero-Downtime Migrations](docs/guides/04-schema-to-schema.md)
- [Sequential Seed Execution](docs/guides/sequential-seed-execution.md) ⭐ NEW
- [Multi-Agent Coordination](docs/guides/multi-agent-coordination.md)
- [Prep-Seed Validation](docs/guides/prep-seed-validation.md)
- [Migration Decision Tree](docs/guides/migration-decision-tree.md)
- [Dry-Run Mode](docs/guides/dry-run.md)
**API Reference**: [docs/reference/](docs/reference/)
**Examples**: [examples/](examples/)
---
## Project Status
✅ **v0.4.0** (February 4, 2026) - RELEASED
**Phase 9 Addition (v0.4.0)**:
- ✅ Sequential seed execution (solves PostgreSQL parser limits on 650+ row files)
- ✅ Per-file savepoint isolation for error recovery
- ✅ Continue-on-error mode for partial seeding
- ✅ 29 new tests for seed workflow
- ✅ Comprehensive documentation with 8 examples
- ✅ Real database integration testing
**What's Implemented**:
- ✅ All 4 migration strategies
- ✅ Sequential seed execution with savepoints (NEW in v0.4.0)
- ✅ Multi-agent coordination (production-ready, 123+ tests)
- ✅ Prep-seed validation (5 levels, 98+ tests)
- ✅ Git-aware schema validation
- ✅ Schema diff detection
- ✅ CLI with rich output
- ✅ Comprehensive tests (4,100+)
- ✅ Complete documentation
**⚠️ Beta Software**: All features implemented and tested, but not yet used in production. Use in staging/development first.
---
## Contributing
```bash
git clone https://github.com/fraiseql/confiture.git
cd confiture
uv sync --all-extras
uv run pytest
```
See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLAUDE.md](CLAUDE.md).
---
## Author & License
**Vibe-engineered by [Lionel Hamayon](https://github.com/LionelHamayon)** 🍓
MIT License - Copyright (c) 2025 Lionel Hamayon
---
*Making jam from strawberries, one migration at a time.* 🍓→🍯
| text/markdown; charset=UTF-8; variant=GFM | null | evoludigit <lionel.hamayon@evolution-digitale.fr> | null | null | MIT | postgresql, migration, database, schema, ddl, coordination, multi-agent, collaboration, conflict-detection, ai-agents | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Database",
"To... | [] | https://github.com/fraiseql/confiture | null | >=3.11 | [] | [] | [] | [
"typer>=0.12.0",
"rich>=13.7.0",
"pydantic>=2.5.0",
"pyyaml>=6.0.1",
"psycopg[binary,pool]>=3.1.0",
"sqlparse>=0.5.0",
"sqlglot>=28.0",
"cryptography>=42.0.0",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-watch>=4... | [] | [] | [] | [
"Documentation, https://github.com/fraiseql/confiture",
"FraiseQL, https://github.com/fraiseql/fraiseql",
"Homepage, https://github.com/fraiseql/confiture",
"Issues, https://github.com/fraiseql/confiture/issues",
"Repository, https://github.com/fraiseql/confiture"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T13:40:39.965943 | fraiseql_confiture-0.5.2-cp313-cp313-manylinux_2_28_x86_64.whl | 775,572 | 42/79/124d79c9dc1b06b8be29c93ee249e85953ec427364cd891e410d67f0eaf0/fraiseql_confiture-0.5.2-cp313-cp313-manylinux_2_28_x86_64.whl | cp313 | bdist_wheel | null | false | bcbd9c75f8da14627702b5e879a92320 | 3f27eaf7d8fac32ce6120943bbd164f11f4ae8b63a6039bcf606528cf9a84192 | 4279124d79c9dc1b06b8be29c93ee249e85953ec427364cd891e410d67f0eaf0 | null | [
"LICENSE"
] | 1,344 |
2.4 | mytot | 2026.2.2 | Tool of Tool | # mytot
My tool of tool
## INSTALL
```bash
pip3 install mytot -i https://pypi.python.org/simple --upgrade
```
TASK
---------
```bash
TASK [script_file]
```
Runs a program; if the program is already running, the command does nothing.
Output is logged to ~/.cache/task_log/out.[date]
* If `script_file` ends with ".sh", it is executed with `bash script_file`
* If `script_file` ends with ".py", it is executed with `python script_file`
RUN
--------
```bash
RUN [script_file]
```
Runs a program in the background; output is logged to ./log/[script_file].run
KILL
----------
```bash
KILL [key word] [-9 or -15] [-f]
```
Kills processes matching the given keyword.
* `-f`: treats "key word" as a bash or Python file and matches on the file's absolute path. For example, with processes
"python3 /.../a/long_task.py" and "python3 /.../b/long_task.py" running, executing `kill -f long_task.py` from working directory b kills "python3 /.../b/long_task.py"
but not "python3 /.../a/long_task.py"
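A minimal illustrative sketch of this path-matching rule (not mytot's actual implementation): the keyword is resolved against the current working directory and compared with the process command line.

```python
import os

def matches_kill_f(cmdline: str, keyword: str) -> bool:
    """Match a command line against the absolute path of `keyword`
    resolved from the current working directory (sketch of `KILL -f`).
    Simplification: assumes paths contain no spaces."""
    return os.path.abspath(keyword) in cmdline.split()
```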
MAIL
---------------
```bash
MAIL <email address> -s [subject] -a [attachments1] <attachment2>
```
Requires account configuration in ~/.config/mytot/config.ini. Example:
```ini
[email]
smtp_server=smtp.163.com
; username
username=xxx@163.com
; login password or authorization code
password=J4R3E3D4F31E1B1G
; default recipient
default_to=xxx@163.com
```
| text/markdown | Xin-Xin MA | null | null | null | GPL | null | [] | [] | null | null | null | [] | [] | [] | [
"loguru",
"psutil",
"yagmail"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.5 | 2026-02-18T13:39:18.208176 | mytot-2026.2.2-py3-none-any.whl | 32,430 | fc/7a/9754a5581672c3c51202347db0bb41c5f3894bdea7336325a33cc087659a/mytot-2026.2.2-py3-none-any.whl | py3 | bdist_wheel | null | false | f3344b097b040abf90e33ce2ddbf3c38 | 763f6ae2b00c628c77840b252ee02cab7fae9d68e2a78c79156c627724632118 | fc7a9754a5581672c3c51202347db0bb41c5f3894bdea7336325a33cc087659a | null | [
"LICENSE"
] | 111 |
2.4 | specsoloist | 0.3.2 | Spec-as-Source Framework: AI-driven development where specifications are the source of truth. | # SpecSoloist
**SpecSoloist** is a "Spec-as-Source" AI coding framework. It treats specifications as the source of truth and uses AI agents to compile them into executable code.
Now with **Spechestra** features: compose systems from natural language, conduct parallel builds, and orchestrate multi-step workflows.
## Why SpecSoloist?
Code is often messy, poorly documented, and prone to drift from original requirements. SpecSoloist flips the script:
1. **Write Specs**: You write requirements-oriented specifications (Markdown).
2. **Compile to Code**: AI agents read your specs and write implementations directly.
3. **Self-Healing**: If tests fail, agents analyze the failure and patch the code.
4. **Orchestrate**: Define complex workflows where agents collaborate, share state, and pause for human input.
> **Code is a build artifact. Specs are the source of truth.**
## Installation
```bash
pip install specsoloist
```
## Quick Start
1. Clone the repository (or create a new folder):
```bash
git clone https://github.com/symbolfarm/specsoloist.git
cd specsoloist
```
2. Set your API Key (Gemini or Anthropic):
```bash
export GEMINI_API_KEY="your_key_here"
# or
export ANTHROPIC_API_KEY="your_key_here"
```
3. Create a new specification:
```bash
sp create calculator "A simple calculator with add and multiply"
```
This creates `src/calculator.spec.md`.
4. Compile it to code:
```bash
sp compile calculator
```
This generates `build/calculator.py` and `build/test_calculator.py`.
5. Run the tests:
```bash
sp test calculator
```
6. (Optional) If tests fail, try auto-fix:
```bash
sp fix calculator
```
## Orchestration (Spechestra)
SpecSoloist allows you to chain multiple specs into a workflow.
1. **Draft Architecture**: Use `sp compose` to vibe-code your system.
```bash
sp compose "A data pipeline that fetches stocks and calculates SMA"
```
This generates a component architecture and draft specs.
2. **Conduct Build**: Compile all components via agent orchestration.
```bash
sp conduct
```
The conductor agent resolves dependency order and spawns soloist agents to compile each spec in parallel.
3. **Perform Workflow**: Execute a workflow spec.
```bash
sp perform my_workflow '{"symbol": "AAPL"}'
```
## CLI Reference
| Command | Description |
| :--- | :--- |
| `sp list` | List all specs in `src/` |
| `sp create` | Create a new spec manually |
| `sp compose` | **Draft architecture & specs from natural language** |
| `sp conduct [dir]` | **Build project via conductor/soloist agents** |
| `sp perform` | **Execute an orchestration workflow** |
| `sp validate` | Check spec structure |
| `sp verify` | Verify schemas and interface compatibility |
| `sp compile` | Compile single spec to code + tests |
| `sp test` | Run tests for a spec |
| `sp fix` | **Auto-fix failing tests (Agent-first)** |
| `sp respec` | **Reverse engineer code to spec** |
| `sp build` | Compile all specs (direct LLM, no agents) |
| `sp graph` | Export dependency graph (Mermaid.js) |
Commands that use agents (`compose`, `conduct`, `respec`, `fix`) default to detecting an available agent CLI (Claude Code or Gemini CLI). Use `--no-agent` to fall back to direct LLM API calls.
## Configuration
You can configure SpecSoloist via environment variables or a `.env` file:
```bash
export SPECSOLOIST_LLM_PROVIDER="gemini" # or "anthropic"
export SPEC_LLM_MODEL="gemini-2.0-flash" # optional
```
## Arrangement Files
An **Arrangement** is SpecSoloist's makefile — it bridges language-agnostic specs to a concrete build environment by specifying the target language, output paths, build commands, and constraints.
Copy `arrangement.example.yaml` and customise it for your project:
```yaml
target_language: python
output_paths:
implementation: src/mymodule.py
tests: tests/test_mymodule.py
environment:
tools: [uv, ruff, pytest]
setup_commands: [uv sync]
build_commands:
lint: uv run ruff check .
test: uv run pytest
constraints:
- Must use type hints for all public function signatures
```
**Usage:**
```bash
# Explicit path
sp compile myspec --arrangement arrangement.yaml
sp build --arrangement arrangement.yaml
sp conduct --no-agent --arrangement arrangement.yaml
# Auto-discovery: place arrangement.yaml in your project root
# and it will be picked up automatically
sp compile myspec
```
## Sandboxed Execution (Docker)
For safety, SpecSoloist can run generated code and tests inside an isolated Docker container.
1. **Build the sandbox image**:
```bash
docker build -t specsoloist-sandbox -f docker/sandbox.Dockerfile .
```
2. **Enable sandboxing**:
```bash
export SPECSOLOIST_SANDBOX=true
# Optional: override the image (default: specsoloist-sandbox)
# export SPECSOLOIST_SANDBOX_IMAGE="my-custom-image"
```
3. **Run tests**:
`sp test my_module` will now wrap execution in `docker run`.
For Anthropic:
```bash
export SPECSOLOIST_LLM_PROVIDER="anthropic"
export ANTHROPIC_API_KEY="your_key_here"
export SPECSOLOIST_LLM_MODEL="claude-sonnet-4-20250514" # optional
```
## Native Subagents (Claude & Gemini)
For the full agentic experience, SpecSoloist provides native subagent definitions for Claude Code and Gemini CLI. These allow the AI to delegate tasks to specialized agents:
| Agent | Purpose |
|-------|---------|
| `compose` | Draft architecture and specs from natural language |
| `conductor` | Orchestrate builds — resolves dependencies, spawns soloists |
| `soloist` | Compile a single spec — reads spec, writes code directly |
| `respec` | Extract requirements from code into specs |
| `fix` | Analyze failures, patch code, and re-test |
**Usage with Claude Code:**
```
> conduct score/
> respec src/specsoloist/parser.py to score/parser.spec.md
```
**Usage with Gemini CLI:**
```
> compose a todo app with user auth
```
The subagent definitions are in `.claude/agents/` and `.gemini/agents/`.
| text/markdown | null | Toby Lightheart <code@symbolfarm.com> | null | null | null | agents, ai, claude, code-generation, gemini, llm, specifications | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Develo... | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0.0",
"pyyaml>=6.0",
"rich>=13.0.0",
"mkdocs-material>=9.0.0; extra == \"dev\"",
"mkdocs>=1.5.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/symbolfarm/specsoloist",
"Repository, https://github.com/symbolfarm/specsoloist",
"Issues, https://github.com/symbolfarm/specsoloist/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:38:40.067986 | specsoloist-0.3.2.tar.gz | 178,307 | a9/77/0d1b8b02b0216819705e5b5fc951c9b486cb0d036cde261a2fa5180af0a4/specsoloist-0.3.2.tar.gz | source | sdist | null | false | 0cf9678f82bf6c131b472b63da3c16d8 | 277d393cb4111f8808f1c37f5831f2763cbc1ddd1f516c264f75f688339830ce | a9770d1b8b02b0216819705e5b5fc951c9b486cb0d036cde261a2fa5180af0a4 | MIT | [
"LICENSE"
] | 230 |
2.4 | specific-ai | 0.1.7 | specific.ai python sdk | # Specific AI Python SDK
This SDK supports two main workflows:
* **Platform automation**: manage Tasks, Assets (datasets/benchmarks), Trainings, and Models
* **Inference**: run inference on deployed Specific AI models
For copy‑paste‑ready deep dives, see `optune-sdk/examples/platform/`.
## Installation
```bash
pip install specific-ai
```
## Authentication & configuration
Set these environment variables:
* `SPECIFIC_AI_API_KEY`: API key / bearer token for the platform backend
* `SPECIFIC_AI_BASE_URL`: platform backend base URL (default in examples: `https://platform.specific.ai/`)
Optional:
* `SPECIFIC_AI_TASK_ID`: current task id. You can create a new task or read the current `task_id` from the platform URL; for example, in https://platform.specific.ai/model-setup?usecaseId=68bedd42667aca8d3086e536 the `task_id` is '68bedd42667aca8d3086e536'
* `SPECIFIC_AI_TASK_TYPE`: task type string (e.g. `ClassificationResponse`)
## 5-line quickstart (platform)
```python
import os
from specific_ai import SpecificAIClient
client = SpecificAIClient(base_url=os.getenv("SPECIFIC_AI_BASE_URL", "http://localhost:8000"), api_key=os.environ["SPECIFIC_AI_API_KEY"])
task_id = client.tasks.list()[0].iter_tasks()[0].task_id
print(client.models.get_metrics(task_id=task_id).versions)
```
## Deep dive examples
Located in `optune-sdk/examples/platform/`:
* `01_task_management.py`
* `02_asset_management.py`
* `03_training_workflows.py`
* `04_model_deployment.py`
## Support
Contact `support@specific.ai`.
| text/markdown | specific.ai team | support@specific.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://specific.ai/ | null | null | [] | [] | [] | [
"requests",
"openai",
"pydantic",
"anthropic",
"pytest; extra == \"dev\"",
"pandas; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.9 | 2026-02-18T13:38:32.504744 | specific_ai-0.1.7.tar.gz | 25,952 | 12/4f/69a0c202372a6ffd3a12c0924ede95ed9d4173ae3e8c576062a10a58431e/specific_ai-0.1.7.tar.gz | source | sdist | null | false | e49b608b4b61f1e84c7220d0ac2f3c0e | 754f3fd89a2c64c4627c2632e6f4eaab402f687431c03cc98c99582ed4299fd1 | 124f69a0c202372a6ffd3a12c0924ede95ed9d4173ae3e8c576062a10a58431e | null | [] | 235 |
2.4 | python-calamine | 0.6.2 | Python binding for Rust's library for reading excel and odf file - calamine | # python-calamine
[](https://pypi.org/project/python-calamine/)
[](https://anaconda.org/conda-forge/python-calamine)

Python binding for [calamine](https://github.com/tafia/calamine), a beautiful Rust library for reading Excel and ODF files.
### Built with
* [calamine](https://github.com/tafia/calamine)
* [pyo3](https://github.com/PyO3/pyo3)
* [maturin](https://github.com/PyO3/maturin)
### Installation
Pypi:
```
pip install python-calamine
```
Conda:
```
conda install -c conda-forge python-calamine
```
### Example
```python
from python_calamine import CalamineWorkbook
workbook = CalamineWorkbook.from_path("file.xlsx")
workbook.sheet_names
# ["Sheet1", "Sheet2"]
workbook.get_sheet_by_name("Sheet1").to_python()
# [
# ["1", "2", "3", "4", "5", "6", "7"],
# ["1", "2", "3", "4", "5", "6", "7"],
# ["1", "2", "3", "4", "5", "6", "7"],
# ]
```
By default, calamine skips empty rows/cols before data. To suppress this behaviour, set `skip_empty_area` to `False`.
```python
from python_calamine import CalamineWorkbook
workbook = CalamineWorkbook.from_path("file.xlsx").get_sheet_by_name("Sheet1").to_python(skip_empty_area=False)
# [
# ["", "", "", "", "", "", ""],
# ["1", "2", "3", "4", "5", "6", "7"],
# ["1", "2", "3", "4", "5", "6", "7"],
# ["1", "2", "3", "4", "5", "6", "7"],
# ]
```
Pandas 2.2 and above have built-in support of python-calamine.
Also, you can find additional examples in [tests](https://github.com/dimastbk/python-calamine/blob/master/tests/test_base.py).
### Development
You'll need rust [installed](https://rustup.rs/).
```shell
# clone this repo or your fork
git clone git@github.com:dimastbk/python-calamine.git
cd python-calamine
# create a new virtual env
python3 -m venv env
source env/bin/activate
# install dev dependencies and install python-calamine
pip install --group dev -e . # required pip 25.1 and above
# lint code
pre-commit run --all-files
# test code
pytest
```
| text/markdown; charset=UTF-8; variant=GFM | null | Dmitriy <dimastbk@proton.me> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Rust",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"homepage, https://github.com/dimastbk/python-calamine",
"source, https://github.com/dimastbk/python-calamine"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:38:17.389752 | python_calamine-0.6.2.tar.gz | 138,000 | 01/18/e1e53ade001b30a3c6642d876e5defe8431da8c31fb7798909e6c8ab8c34/python_calamine-0.6.2.tar.gz | source | sdist | null | false | a8c01e18e267743ab85c390bb02caecf | 2c90e5224c5e92db9fcd8f22b6085ce63b935cfe7a893ac9a1c3c56793bafd9d | 0118e1e53ade001b30a3c6642d876e5defe8431da8c31fb7798909e6c8ab8c34 | MIT | [
"LICENSE"
] | 188,444 |
2.4 | waclient | 0.1.12 | Simplified Python SDK for WhatsApp Business Cloud API | # WhatsApp Biz
**Simplified Python SDK for WhatsApp Business Cloud API**
[](https://badge.fury.io/py/waclient)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/waclient/)
`waclient` is a clean, easy-to-use Python wrapper for the WhatsApp Business Cloud API. It handles authentication, message composition (text, media, interactive), and media management, allowing you to build WhatsApp bots and integrations quickly.
## Features
- **Easy Authentication**: Simple client initialization.
- **Rich Messaging**: Send Text, Image, Video, Audio, Document, Sticker, Location, and Contact messages.
- **Interactive Elements**: Support for Reply Buttons and List Messages.
- **Templates**: Full support for WhatsApp Template Messages.
- **Media Handling**: Upload and retrieve media easily.
## Installation
```bash
pip install waclient
```
## Quick Start
```python
from waclient import WhatsAppClient
client = WhatsAppClient(
phone_number_id="YOUR_PHONE_ID",
access_token="YOUR_ACCESS_TOKEN"
)
client.messages.send_text(to="15551234567", body="Hello from WhatsApp Biz!")
```
## Documentation
- **[User Manual](USER_MANUAL.md)**: Detailed walkthrough of all features, code examples, and configuration.
- **[Developer Guide](DEVELOPER_GUIDE.md)**: Setup for contributors, running tests, and project structure.
## ⚠️ WhatsApp 24-Hour Messaging Rule (Important)
WhatsApp enforces a **24-hour customer service window**.
- Free-form messages can be sent **only within 24 hours** after a user’s last message.
- After 24 hours, businesses **must use approved WhatsApp Template Messages**.
- This is a **WhatsApp platform rule**, not a limitation of `waclient`.
### How waclient handles this
- Supports **session messages** (within 24 hours)
- Supports **template messages** (can be sent anytime)
- Webhook-based message tracking helps manage the 24-hour window
📌 Popular platforms like **Amazon, Flipkart, Swiggy** also use **template messages** for notifications.
## Examples
Check the `examples/` directory for ready-to-run scripts:
- `examples/verify_features.py`: comprehensive tour of features.
- `examples/verify_delivery.py`: delivery status verification.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Author
**Surenthar**
Email: surentharsenthilkumar2003@gmail.com
| text/markdown | Surenthar | null | null | null | null | whatsapp, business, api, cloud, meta, facebook, messaging | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Communications :: Chat",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: ... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.25.0",
"pytest>=6.0; extra == \"dev\"",
"pytest-cov>=2.0; extra == \"dev\"",
"black>=21.0; extra == \"dev\"",
"flake8>=3.9; extra == \"dev\""
] | [] | [] | [] | [
"Bug Reports, https://github.com/surenthars/waclient/issues",
"Source, https://my-private-repo/simple/",
"Documentation, https://my-private-repo/simple/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:37:36.545723 | waclient-0.1.12.tar.gz | 21,276 | 5f/2d/3016c51be5ed75ba6929238d78654e4f7369a3fe2b1bdaadc601aa696804/waclient-0.1.12.tar.gz | source | sdist | null | false | 8dbaefdb8c867c0b7cb761d1e372ae6d | 4bcf2d517159a5131d18016b1f80e036e9b0a13b0e73680d939d2b34a635e5bd | 5f2d3016c51be5ed75ba6929238d78654e4f7369a3fe2b1bdaadc601aa696804 | MIT | [
"LICENSE"
] | 241 |
2.4 | wheezy.core | 3.2.3 | A lightweight core library | # wheezy.core
[](https://github.com/akornatskyy/wheezy.core/actions/workflows/tests.yml)
[](https://coveralls.io/github/akornatskyy/wheezy.core?branch=master)
[](https://wheezycore.readthedocs.io/en/latest/?badge=latest)
[](https://badge.fury.io/py/wheezy.core)
[wheezy.core](https://pypi.org/project/wheezy.core) is a
[python](http://www.python.org) package written in pure Python. It
provides core utility features.
It is optimized for performance, well tested and documented.
Resources:
- [source code](https://github.com/akornatskyy/wheezy.core),
[examples](https://github.com/akornatskyy/wheezy.core/tree/master/demos)
and [issues](https://github.com/akornatskyy/wheezy.core/issues)
tracker are available on
[github](https://github.com/akornatskyy/wheezy.core)
- [documentation](https://wheezycore.readthedocs.io/en/latest/)
## Install
[wheezy.core](https://pypi.org/project/wheezy.core) requires
[python](http://www.python.org) version 3.10+. It is independent of operating
system. You can install it from [pypi](https://pypi.org/project/wheezy.core)
site:
```sh
pip install -U wheezy.core
```
If you run into any issues or have comments, open an issue on
[github](https://github.com/akornatskyy/wheezy.core).
| text/markdown | null | Andriy Kornatskyy <andriy.kornatskyy@live.com> | null | null | null | core, benchmark, collections, config, datetime, db, descriptor, feistel, i18n, introspection, json, luhn, mail, pooling, url, uuid | [
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.1... | [] | null | null | >=3.10 | [] | [] | [] | [
"Cython>=3.0; extra == \"cython\"",
"setuptools>=61.0; extra == \"cython\""
] | [] | [] | [] | [
"Homepage, https://github.com/akornatskyy/wheezy.core",
"Source, https://github.com/akornatskyy/wheezy.core",
"Issues, https://github.com/akornatskyy/wheezy.core/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T13:37:27.953852 | wheezy_core-3.2.3.tar.gz | 20,583 | 89/00/ba7e6e07dd852a400fd18b64c9aac84a58e7d61fdef3a23587d995d8f53b/wheezy_core-3.2.3.tar.gz | source | sdist | null | false | f044ed957270491af81647cb0ae161ab | 34e0e1d615f4210c7e8845d50d584a5e241d87c99e5fc73f6e7e617d94f6e0e9 | 8900ba7e6e07dd852a400fd18b64c9aac84a58e7d61fdef3a23587d995d8f53b | MIT | [
"LICENSE"
] | 0 |
2.4 | ansi-art-convert | 0.1.4 | ANSI > UTF-8 Conversion | # ANSI Art Converter


A tool to convert original ANSI art files for viewing in a modern terminal.
- [Demos](#demos)
- [Installation](#installation)
- [Usage](#usage)
- [Documentation](#documentation)
- [Resources](#resources)
> [!IMPORTANT]
> _This is **not** an AI-generated project. I wrote this myself, and I test it extensively against original artwork._
---
## Demos
Demo conversion videos are available on YouTube:
- Demo ANSI Conversion | g80-impure: https://www.youtube.com/watch?v=ZBk6FdzMkck
- Demo ANSI conversion | goto80-goto20: https://www.youtube.com/watch?v=Phbriy19yCY
## Installation
You can install the [`ansi-art-convert`](https://pypi.org/project/ansi-art-convert/) package via pip:
```shell
pip install ansi-art-convert
```
> [!IMPORTANT]
> _As a prerequisite, you will need to install the [`ANSI megafont`](https://github.com/tmck-code/ansi-megafont) on your system via your regular font installer, and ensure that your terminal emulator is configured to use it._
Alternatively, you can install it via a one-liner (you will still need to configure your terminal to use it):
<details>
<summary>install commands:</summary>
```shell
# osx
curl -sOL --output-dir ~/Library/Fonts/ https://github.com/tmck-code/ansi-megafont/releases/download/v0.1.1/ANSICombined.ttf \
&& fc-cache -f ~/Library/Fonts/ \
&& fc-list | grep "ANSICombined"
# linux
curl -sOL --output-dir ~/.fonts/ https://github.com/tmck-code/ansi-megafont/releases/download/v0.1.1/ANSICombined.ttf \
&& fc-cache -f ~/.fonts/ \
&& fc-list | grep "ANSICombined"
```
</details>
## Usage
```shell
usage: ansi-art-convert [-h] --fpath FPATH [--encoding ENCODING] [--sauce-only] [--verbose] [--ice-colours] [--font-name FONT_NAME] [--width WIDTH]
options:
-h, --help show this help message and exit
--fpath, -f FPATH Path to the ANSI file to render.
--encoding, -e ENCODING
Specify the file encoding (cp437, iso-8859-1, ascii, utf-8) if the auto-detection was incorrect.
--sauce-only, -s Only output the SAUCE record information as JSON and exit.
--verbose, -v Enable verbose debug output.
--ice-colours Force enabling ICE colours (non-blinking background).
--font-name FONT_NAME
Specify the font name to determine glyph offset (overrides SAUCE font).
--width, -w WIDTH Specify the output width (overrides SAUCE tinfo1).
```
## Documentation
- [SAUCE Metadata](docs/sauce.md)
## Resources
- [The origins of DEL (0x7F) and its Legacy in Amiga ASCII art](https://blog.glyphdrawing.club/the-origins-of-del-0x7f-and-its-legacy-in-amiga-ascii-art/)
- [rewtnull/amigafonts](https://github.com/rewtnull/amigafonts)
- [Screwtapello/topaz-unicode](https://gitlab.com/Screwtapello/topaz-unicode)
- [Rob Hagemans' Hoard of Bitfonts](https://github.com/robhagemans/hoard-of-bitfonts)
- [amigavision/TopazDouble](https://github.com/amigavision/TopazDouble)
| text/markdown | null | Tom McKeesick <tmck01@gmail.com> | null | Tom McKeesick <tmck01@gmail.com> | null | ANSI, ASCII, art, conversion, UTF-8, terminal | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent",
"Topic :: Terminals",
"Topic :: Utilities"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"laser-prynter"
] | [] | [] | [] | [
"homepage, https://github.com/tmck-code/py-ansi-art-convert"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T13:37:01.906155 | ansi_art_convert-0.1.4.tar.gz | 21,498 | dd/21/ab30de3efdfc5ba1a03c692d74330a746625f5572c1b09de5886ffae5bf0/ansi_art_convert-0.1.4.tar.gz | source | sdist | null | false | 48931fe0ec2a591c2ca9db1a7ca2f31b | 63f805e043df818efeb67e762d85b856b146bc82a1485345760f9a0d2c16d68e | dd21ab30de3efdfc5ba1a03c692d74330a746625f5572c1b09de5886ffae5bf0 | BSD-3-Clause | [
"LICENSE"
] | 232 |
2.4 | appium-utility | 0.3.20 | Reusable Appium utility helpers | # AppiumUtility
A small, opinionated helper utility built on top of **Appium Python Client** to simplify common Android UI interactions.
This library is designed to reduce boilerplate and make test scripts more readable and consistent.
## Usage
```py
from appium.webdriver.webdriver import WebDriver
from appium_utility import AppiumUtility
driver: WebDriver = create_driver_somehow()
utils = AppiumUtility(driver)
utils.launch_app("com.example.app")
utils.hide_keyboard()
utils.click_by_text(".*Continue.*", regex=True)
utils.click_by_id("com.example:id/login")
utils.click_by_content_desc("Settings")
utils.click_by_xpath("//android.widget.Button[@text='OK']")
utils.swipe_up()
utils.swipe_down()
utils.swipe_until_text_visible(".*Continue.*", direction="SWIPE_UP", regex=True)
```
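The helpers compose naturally into higher-level steps. A hypothetical scroll-and-tap wrapper (illustrative, not part of the library's API), where `utils` is an `AppiumUtility` instance:

```python
def tap_when_visible(utils, pattern: str) -> None:
    """Scroll until text matching `pattern` appears, then tap it."""
    utils.swipe_until_text_visible(pattern, direction="SWIPE_UP", regex=True)
    utils.click_by_text(pattern, regex=True)
```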
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3 | [] | [] | [] | [
"Appium-Python-Client>=4.0.0",
"requests",
"build>=1.3.0; extra == \"dev\"",
"pre-commit>=4.5.1; extra == \"dev\"",
"twine>=6.2.0; extra == \"dev\"",
"mypy>=1.19.1; extra == \"dev\"",
"types-requests; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.0 | 2026-02-18T13:36:46.629462 | appium_utility-0.3.20-py3-none-any.whl | 7,286 | c4/e6/76736c7a981f91d10736fcdbe2c78ce32ac9a695e1689b3a189d92c22de3/appium_utility-0.3.20-py3-none-any.whl | py3 | bdist_wheel | null | false | 113119875c31c980ed2974710128cea2 | 992bd5cc762d22077dae3d6e260256d903a572caa04b5dd9588eaeb1d956771f | c4e676736c7a981f91d10736fcdbe2c78ce32ac9a695e1689b3a189d92c22de3 | null | [] | 296 |
2.3 | utilityhub_config | 0.2.2 | A deterministic, typed configuration engine for serious automation systems | # utilityhub_config
A **deterministic, typed configuration loader** for modern Python applications. Load settings from multiple sources with clear precedence, comprehensive metadata tracking, and detailed validation errors.
## Features ✨
- **Multi-source configuration loading** with explicit precedence order
- **Strongly typed** with Pydantic v2+ (full type safety)
- **Metadata tracking** — see which source provided each field
- **Multiple formats** — TOML, YAML, .env, environment variables
- **Rich error reporting** — Validation failures show sources, checked files, and precedence
- **Zero magic** — Deterministic, transparent resolution order
## Installation
```bash
pip install utilityhub_config
```
## Quick Start
```python
from pydantic import BaseModel
from utilityhub_config import load_settings
class Config(BaseModel):
database_url: str = "sqlite:///default.db"
debug: bool = False
# Load settings and metadata
settings, metadata = load_settings(Config)
# Type-safe access (no casting needed)
print(settings.database_url)
# Track which source provided a field
source = metadata.get_source("database_url")
print(f"database_url came from: {source.source}")
```
## How It Works
Settings are resolved in **strict precedence order** (lowest to highest):
1. **Defaults** — Field defaults from your Pydantic model
2. **Global config** — `~/.config/{app_name}/{app_name}.{toml,yaml}`
3. **Project config** — `{cwd}/{app_name}.{toml,yaml}` or `{cwd}/config/*.{toml,yaml}` (or explicit file via `config_file` parameter)
4. **Dotenv** — `.env` file in current directory
5. **Environment variables** — `{APP_NAME}_{FIELD_NAME}` or `{FIELD_NAME}`
6. **Runtime overrides** — Passed via `overrides` parameter (highest priority)
Each level overrides the previous one. Only sources that exist are consulted.
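Conceptually, the precedence merge behaves like a left-to-right dictionary update (a minimal sketch of the idea, not the library's actual implementation):

```python
def resolve(*layers: dict) -> dict:
    """Merge config layers; later (higher-precedence) layers win."""
    merged: dict = {}
    for layer in layers:
        merged.update(layer)  # each level overrides the previous one
    return merged

defaults = {"database_url": "sqlite:///default.db", "debug": False}
dotenv = {"debug": True}
overrides = {"database_url": "postgres://prod/db"}

print(resolve(defaults, dotenv, overrides))
# {'database_url': 'postgres://prod/db', 'debug': True}
```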
## Examples
### Basic usage with model defaults
```python
from pydantic import BaseModel
from utilityhub_config import load_settings
class PizzaShopConfig(BaseModel):
shop_name: str = "lazy_pepperoni_palace"
delivery_radius_km: int = 5
accepts_orders: bool = False # closed by default
settings, metadata = load_settings(PizzaShopConfig)
print(f"🍕 {settings.shop_name} is {'open' if settings.accepts_orders else 'closed'}")
```
### Override with environment variables
```python
import os
# Friday night rush: OPEN ALL THE STORES!
os.environ["ACCEPTS_ORDERS"] = "true"
os.environ["DELIVERY_RADIUS_KM"] = "15"
settings, metadata = load_settings(PizzaShopConfig)
print(f"🚗 Delivering pizza up to {settings.delivery_radius_km}km away!")
```
### Runtime overrides (highest priority)
```python
# Emergency: meteor incoming, expand radius and accept everything!
settings, metadata = load_settings(
PizzaShopConfig,
overrides={
"accepts_orders": True,
"delivery_radius_km": 100,
"shop_name": "doomsday_pizza_bunker"
}
)
print(f"🚀 {settings.shop_name} now delivers {settings.delivery_radius_km}km!")
```
### Custom app name and config directory
```python
settings, metadata = load_settings(
PizzaShopConfig,
app_name="pizza_empire",
cwd="/etc/pizza_shops/"
)
# Looks for: /etc/pizza_shops/pizza_empire.toml or .yaml
```
### Load from explicit config file (NEW!)
```python
from pathlib import Path
# Use a specific config file (auto-detects YAML, YML, or TOML from extension)
settings, metadata = load_settings(
PizzaShopConfig,
config_file=Path("/etc/pizza/production.yaml")
)
# Still respects precedence: env vars and overrides can override the config file
os.environ["ACCEPTS_ORDERS"] = "true"
settings, metadata = load_settings(
PizzaShopConfig,
config_file=Path("/etc/pizza/production.yaml")
)
# ACCEPTS_ORDERS will be true (from env), others from config file
```
### Environment variable prefix
```python
os.environ["PIZZASHOP_ACCEPTS_ORDERS"] = "true"
os.environ["PIZZASHOP_DELIVERY_RADIUS_KM"] = "42"
settings, metadata = load_settings(
PizzaShopConfig,
env_prefix="PIZZASHOP"
)
# Will check: PIZZASHOP_ACCEPTS_ORDERS, then ACCEPTS_ORDERS (in that order)
print(f"🍕 Accepting orders: {settings.accepts_orders}")
```
### Inspect metadata (detective mode 🕵️)
```python
settings, metadata = load_settings(PizzaShopConfig)
# Which source provided this field?
source = metadata.get_source("delivery_radius_km")
print(f"Delivery radius came from: {source.source}")
print(f"Location: {source.source_path or 'model defaults'}")
print(f"Raw value: {source.raw_value}")
# Track all field origins
for field, source_info in metadata.per_field.items():
print(f" {field}: from {source_info.source}")
```
## Configuration Files
### TOML example (`pizza_empire.toml`)
```toml
# 🍕 Pizza Empire Global Settings
shop_name = "the_great_carb_dispensary"
delivery_radius_km = 5
accepts_orders = false
# The business secret sauce 🔥
[quality]
cheese_ratio = 0.42 # more cheese = more problems (and happiness)
crust_crispiness = "perfect"
pineapple_tolerance = 0.0 # this is not a debate
[timings]
avg_prep_time_minutes = 15
delivery_timeout_minutes = 45
```
### YAML example (`pizza_empire.yaml`)
```yaml
# 🍕 Pizza Empire Configuration
shop_name: the_great_carb_dispensary
delivery_radius_km: 5
accepts_orders: false
# The art of pizza
quality:
cheese_ratio: 0.42
crust_crispiness: "perfect"
pineapple_tolerance: 0.0
timings:
avg_prep_time_minutes: 15
delivery_timeout_minutes: 45
```
### Dotenv example (`.env`)
```bash
# 🍕 Quick overrides for this deployment
SHOP_NAME=emergency_pizza_hut
DELIVERY_RADIUS_KM=100
ACCEPTS_ORDERS=true
CHEESE_RATIO=0.99
PINEAPPLE_TOLERANCE=0.0 # NEVER SURRENDER
```
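For illustration, a minimal reader for such a file might look like the sketch below (the package itself relies on `python-dotenv`; note the lowercase key normalization mentioned under Known Limitations):

```python
def parse_dotenv(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines; skip blanks and comments; lowercase keys."""
    values: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        # Drop an inline comment, if present
        value = value.split("#", 1)[0].strip()
        values[key.strip().lower()] = value
    return values

sample = "SHOP_NAME=emergency_pizza_hut\nDELIVERY_RADIUS_KM=100\n"
print(parse_dotenv(sample))
# {'shop_name': 'emergency_pizza_hut', 'delivery_radius_km': '100'}
```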
### Path Expansion
Automatically expand tilde (`~`) and environment variables in file paths:
```python
from pathlib import Path
from pydantic import BaseModel, field_validator
from utilityhub_config import load_settings, expand_path_validator
class PizzaShopConfig(BaseModel):
log_file: Path
data_dir: Path
@field_validator("log_file", "data_dir", mode="before")
@classmethod
def expand_paths(cls, v: Path | str) -> Path:
return expand_path_validator(v)
# Configuration file supports:
# log_file: ~/pizza_empire/logs.txt # Expands to /home/user/pizza_empire/logs.txt
# data_dir: $DATA_ROOT/pizza_empire # Expands $DATA_ROOT environment variable
settings, _ = load_settings(PizzaShopConfig, app_name="pizza_empire")
print(settings.log_file) # Fully expanded absolute path
```
See the [Path Expansion Guide](./docs/packages/utilityhub_config/guides/path-expansion.md) for more examples and best practices.
## API Reference
### `load_settings(model, *, app_name=None, cwd=None, env_prefix=None, config_file=None, overrides=None)`
Load and validate settings from all sources.
**Parameters:**
- `model` (type[T]): A Pydantic BaseModel subclass to validate and populate.
- `app_name` (str | None): Application name for config file lookup. Defaults to lowercased model class name.
- `cwd` (Path | None): Working directory for config file search. Defaults to current directory.
- `env_prefix` (str | None): Optional prefix for environment variables (e.g., `"MYAPP"`).
- `config_file` (Path | None): **NEW!** Explicit config file path to load. If provided, skips auto-discovery and loads this file as the project config source. File format is auto-detected from extension (`.yaml`, `.yml`, or `.toml`). Must exist and be readable. Still respects precedence order — environment variables and overrides can override values from this file.
- `overrides` (dict[str, Any] | None): Runtime overrides (highest precedence).
**Returns:**
A tuple `(settings, metadata)` where:
- `settings` is an instance of your model type (fully type-safe, no casting needed).
- `metadata` is a `SettingsMetadata` object tracking field sources.
**Raises:**
- `ConfigValidationError` — If validation fails, includes detailed context:
- Validation errors from Pydantic
- Files that were checked
- Precedence order
- Which source provided each field
- `ConfigError` — If `config_file` is provided but doesn't exist, is not a file, or has an unsupported format.
### `SettingsMetadata`
Tracks where each field value came from.
- `per_field: dict[str, FieldSource]` — Field name to source mapping.
- `get_source(field: str) -> FieldSource | None` — Look up a single field's source.
### `FieldSource`
- `source: str` — Source name (`"defaults"`, `"env"`, `"project"`, etc.).
- `source_path: str | None` — File path or env var name.
- `raw_value: Any` — The raw value before type coercion.
### Path Expansion Functions
#### `expand_path(path: str) -> Path`
Expand a path string with tilde (`~`) and environment variables without validation.
```python
from utilityhub_config import expand_path
path = expand_path("~/config/app.yaml") # → /home/user/config/app.yaml
path = expand_path("$CONFIG_DIR/app.yaml") # → /etc/myapp/app.yaml
path = expand_path("~/$APP_NAME/config.toml") # → /home/user/myapp/config.toml
```
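Both kinds of expansion exist in the Python standard library, so the behavior can be approximated as follows (a sketch under that assumption, not the package's exact code):

```python
import os
from pathlib import Path

def expand(path: str) -> Path:
    # expandvars handles $VAR / ${VAR}; expanduser handles ~
    return Path(os.path.expanduser(os.path.expandvars(path)))

os.environ["CONFIG_DIR"] = "/etc/myapp"
print(expand("$CONFIG_DIR/app.yaml"))  # -> /etc/myapp/app.yaml
```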
#### `expand_and_validate_path(path: str) -> Path`
Expand a path and validate that it exists.
```python
from utilityhub_config import expand_and_validate_path
# Raises FileNotFoundError if path doesn't exist
path = expand_and_validate_path("~/config/app.yaml")
```
#### `expand_path_validator(value: Path | str) -> Path`
**Field validator function** for use with Pydantic models. Expands and validates paths automatically.
```python
from pathlib import Path
from pydantic import BaseModel, field_validator
from utilityhub_config import expand_path_validator
class Config(BaseModel):
config_file: Path
@field_validator("config_file", mode="before")
@classmethod
def expand_config_path(cls, v: Path | str) -> Path:
return expand_path_validator(v)
```
See the [Path Expansion Guide](./docs/packages/utilityhub_config/guides/path-expansion.md) for complete examples.
## Known Limitations
- **Nested types**: Complex nested Pydantic models in TOML/YAML are supported (Pydantic handles validation), but the loader doesn't do special merging. Flat dictionaries are recommended.
- **Case sensitivity**: Dotenv keys are normalized to lowercase; model field names are case-sensitive.
- **Variable expansion in dotenv**: The `.env` file reader doesn't auto-expand variables. However, you can expand paths using the `expand_path_validator()` or `expand_and_validate_path()` utilities in your Pydantic model validators.
## Error Handling
When validation fails, you get a detailed error with full context (perfect for debugging at 3 AM):
```python
from utilityhub_config import load_settings
from utilityhub_config.errors import ConfigValidationError
class PizzaShopConfig(BaseModel):
delivery_radius_km: int # REQUIRED (no default, no pizza!)
try:
settings, metadata = load_settings(PizzaShopConfig)
except ConfigValidationError as e:
# Shows:
# - What validation failed
# - Which files were checked
# - The precedence order
# - Which source provided each field
print(e) # Complete context for debugging!
```
Output example:
```
Validation failed
Validation errors:
input should be a valid integer [type=int_parsing, input_value=None, input_type=NoneType]
Files checked:
- ~/.config/pizzashop/pizzashop.toml
- ~/.config/pizzashop/pizzashop.yaml
- /home/user/.env
Precedence (low -> high):
defaults -> global -> project -> dotenv -> env -> overrides
Field sources:
- delivery_radius_km: defaults (None)
```
## Contributing
For issues, improvements, or questions, please open an issue or pull request. See [CONTRIBUTING.md](../../CONTRIBUTING.md) for guidelines.
## Documentation
Full documentation, guides, and API reference available at:
**[https://utilityhub.hyperoot.dev/packages/utilityhub_config/](https://utilityhub.hyperoot.dev/packages/utilityhub_config/)**
Topics include:
- [Getting Started](https://utilityhub.hyperoot.dev/packages/utilityhub_config/getting-started/)
- [Configuration Files](https://utilityhub.hyperoot.dev/packages/utilityhub_config/config-files/)
- [Guides & Examples](https://utilityhub.hyperoot.dev/packages/utilityhub_config/guides/)
- [Concepts](https://utilityhub.hyperoot.dev/packages/utilityhub_config/concepts/)
- [Troubleshooting](https://utilityhub.hyperoot.dev/packages/utilityhub_config/troubleshooting/)
---
**License**: See project LICENSE file.
| text/markdown | Rajesh Das | Rajesh Das <rajesh@hyperoot.dev> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"pydantic>=2.12.5",
"pyyaml>=6.0.3",
"python-dotenv>=1.2.1"
] | [] | [] | [] | [] | uv/0.9.28 {"installer":{"name":"uv","version":"0.9.28","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T13:36:13.360913 | utilityhub_config-0.2.2-py3-none-any.whl | 14,433 | 98/2c/6204219b92fe208b6b85f1edea8b093cdf9f9aed8bbb9ac3eb0a280ff50d/utilityhub_config-0.2.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 15fd890a164a50feb97de553c6db8f02 | 9deab7775509e1c198e39e5209b91494970efc45d591dda19fa6971646e014da | 982c6204219b92fe208b6b85f1edea8b093cdf9f9aed8bbb9ac3eb0a280ff50d | null | [] | 0 |
2.4 | earthquake-selection | 1.1.0 | A library for earthquake selection | # SelectionEarthquake
A Python library that fetches and normalizes the characteristic properties of earthquake records from multiple data providers (AFAD, PEER), scores them against user-defined criteria, and performs strategy-based record selection.
This lets researchers and engineers obtain suitable, building-specific earthquake records quickly and reliably.
---
## 🚀 Features
- 🌐 Multi-provider support (AFAD, PEER)
- 🔎 Flexible search criteria (`magnitude`, `depth`, `distance`, `Vs30`, etc.)
- 🧩 Pipeline-based architecture
- 📂 Output formats: CSV, XLSX, MiniSeed, Pandas DataFrame
- ⚡ Fast data retrieval through asynchronous (async) queries
- 🏆 Scoring system and strategy-based record selection (e.g., selection per TBDY 2018)
- 🧪 Test infrastructure (pytest) and an easily extensible provider architecture
---
## 📦 Installation
```bash
# Install from PyPI
pip install earthquake-selection
# For local development
git clone https://github.com/kullanici/SelectionEarthquake.git
cd SelectionEarthquake
pip install -e .
```
## ⚡ Quick Start
```py
import asyncio
from selection_service.enums.Enums import DesignCode, ProviderName
from selection_service.core.Pipeline import EarthquakeAPI
from selection_service.processing.Selection import (SelectionConfig,
SearchCriteria,
TBDYSelectionStrategy,
TargetParameters)
from selection_service.core.LoggingConfig import setup_logging
setup_logging()
async def example_usage():
    # Create the selection strategy
con = SelectionConfig(design_code=DesignCode.TBDY_2018,
num_records=22,
max_per_station=3,
max_per_event=3,
min_score=55)
strategy = TBDYSelectionStrategy(config=con)
    # Search criteria
search_criteria = SearchCriteria(
start_date="2000-01-01",
end_date="2025-09-05",
min_magnitude=7.0,
max_magnitude=10.0,
min_vs30=300,
max_vs30=400
# mechanisms=["StrikeSlip"]
)
    # Target parameters
target_params = TargetParameters(
magnitude=7.0,
distance=30.0,
vs30=400.0,
pga=200,
mechanism=["StrikeSlip"]
)
# API
api = EarthquakeAPI(providerNames= [ProviderName.AFAD,
ProviderName.PEER],
strategies= [strategy])
    # Asynchronous search
result = await api.run_async(criteria=search_criteria,
target=target_params,
strategy_name=strategy.get_name())
    # Synchronous search
# result = api.run_sync(criteria=search_criteria,
# target=target_params,
# strategy_name=strategy.get_name())
if result.success:
print(result.value.selected_df[['PROVIDER','RSN','EVENT','YEAR','MAGNITUDE','STATION','VS30(m/s)','RRUP(km)','MECHANISM','PGA(cm2/sec)','PGV(cm/sec)','SCORE']].head(7))
return result.value
else:
print(f"[ERROR]: {result.error}")
return None
if __name__ == "__main__":
df = asyncio.run(example_usage())
```
PROVIDER | RSN | EVENT | YEAR | MAGNITUDE | STATION | VS30(m/s) | RRUP(km) | MECHANISM | PGA(cm2/sec) | PGV(cm/sec) | SCORE
---------|----------|---------------|------ |---------- |------------------------------|-----------|---------- | ----------- |----------- |----------- |-------------
PEER | 900 | Landers | 1992 | 7.28 | Yermo Fire Station | 353.63 | 23.620000 | StrikeSlip | 217.776277 | 40.263000 | 100.000000
PEER | 3753 | Landers | 1992 | 7.28 | Fun Valley | 388.63 | 25.020000 | StrikeSlip | 206.125976 | 19.963000 | 100.000000
PEER | 1615 | Duzce, Turkey| 1999 | 7.14 | Lamont 1062 | 338.00 | 9.140000 | StrikeSlip | 202.664229 | 14.630000 | 100.000000
PEER | 881 | Landers | 1992 | 7.28 | Morongo Valley Fire Station | 396.41 | 17.360000 | StrikeSlip | 188.768206 | 24.317000 | 100.000000
PEER | 1762 | Hector Mine | 1999 | 7.13 | Amboy | 382.93 | 43.050000 | StrikeSlip | 182.933249 | 23.776000 | 100.000000
AFAD | 327943 | 17966 | 2023 | 7.70 | DSİ, Musa Şahin Bulvarı | 350.00 | 27.110381 | StrikeSlip | 185.737903 | 29.642165 | 91.304348
AFAD | 327943 | 17966 | 2023 | 7.70 | DSİ, Musa Şahin Bulvarı | 350.00 | 27.110381 | StrikeSlip | 185.737903 | 29.642165 | 91.304348
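The scoring idea, penalizing each record for its deviation from the target parameters, can be sketched as below. The weights and formula here are purely illustrative; the library's actual TBDY-based scoring is more involved:

```python
def score_record(record: dict, target: dict, weights: dict) -> float:
    """Score 0-100: subtract weighted relative deviations from the target."""
    score = 100.0
    for key, weight in weights.items():
        deviation = abs(record[key] - target[key]) / max(abs(target[key]), 1e-9)
        score -= weight * deviation  # larger deviation -> lower score
    return max(score, 0.0)

# Hypothetical weights; target values match the Quick Start example
target = {"magnitude": 7.0, "distance": 30.0, "vs30": 400.0}
weights = {"magnitude": 40, "distance": 30, "vs30": 30}
record = {"magnitude": 7.28, "distance": 23.62, "vs30": 353.63}  # Yermo Fire Station
print(round(score_record(record, target, weights), 1))  # -> 88.5
```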
## 🛠 Architecture
```bash
selection_service/
│
├── providers/        # Data providers (AFAD, FDSN, PEER…)
├── core/             # Pipeline and API
├── processing/       # SearchCriteria, Result, etc.
├── utility/          # Helper functions
├── enums/            # Enums such as ProviderName
├── data/             # CSV and Excel data files
tests/                # pytest tests
```
## 🤝 Adding a New Provider
- Add its name to enums.Enums.ProviderName.
- Create a new Python file for the provider under providers/.
- The provider class must inherit from IDataProvider.
- Write a provider-specific mapping class that inherits BaseColumnMapper, and register it in ColumnMapperFactory.
- Register it in the create method of ProviderFactory.
- Don't forget to write unit tests.
## 📌 Roadmap
- [ ] New provider: FDSN
## 📜 License
MIT License
| text/markdown | null | Muhammed Sural <muhammedsural@gmail.com> | null | null | MIT License
Copyright (c) 2025 Muhammed Sural
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| earthquake, seismology, data-processing, selection, peer, nga, afad | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Languag... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=2.2.6",
"pandas>=2.3.2",
"scipy>=1.15.3",
"requests>=2.32.5",
"aiohttp>=3.12.15",
"setuptools>=80.9.0",
"obspy>=1.4.2",
"openpyxl>=3.1.5",
"pyyaml>=6.0",
"tqdm>=4.65.0",
"pyarrow>=23.0.0",
"python-dateutil>=2.8.2",
"pydantic>=2.11.9",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-co... | [] | [] | [] | [
"Homepage, https://github.com/muhammedsural/SELECTIONEARTHQUAKE",
"Documentation, https://muhammedsural.github.io/SELECTIONEARTHQUAKE",
"Repository, https://github.com/muhammedsural/SELECTIONEARTHQUAKE",
"Issues, https://github.com/muhammedsural/SELECTIONEARTHQUAKE/issues",
"Changelog, https://github.com/mu... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:35:56.371018 | earthquake_selection-1.1.0.tar.gz | 1,671,310 | 7a/04/63958d30e75a108a0cb98e4b3eee830c7a5aeba11c29f1c7bf08334fcac4/earthquake_selection-1.1.0.tar.gz | source | sdist | null | false | acde289738f3bd7ddf5bee3094ba65b6 | 4fd505ed94062f5a646e4cb8c3c587b509b73a955bfdcf44a450b577f04a7d3c | 7a0463958d30e75a108a0cb98e4b3eee830c7a5aeba11c29f1c7bf08334fcac4 | null | [
"LICENSE"
] | 235 |
2.4 | fingerlock | 2.2.2 | Automatic security via keyboard/mouse activity detection | ```
███████╗██╗███╗ ██╗ ██████╗ ███████╗██████╗ ██╗ ██████╗ ██████╗██╗ ██╗
██╔════╝██║████╗ ██║██╔════╝ ██╔════╝██╔══██╗ ██║ ██╔═══██╗██╔════╝██║ ██╔╝
█████╗ ██║██╔██╗ ██║██║ ███╗█████╗ ██████╔╝ ██║ ██║ ██║██║ █████╔╝
██╔══╝ ██║██║╚██╗██║██║ ██║██╔══╝ ██╔══██╗ ██║ ██║ ██║██║ ██╔═██╗
██║ ██║██║ ╚████║╚██████╔╝███████╗██║ ██║ ███████╗╚██████╔╝╚██████╗██║ ██╗
╚═╝ ╚═╝╚═╝ ╚═══╝ ╚═════╝ ╚══════╝╚═╝ ╚═╝ ╚══════╝ ╚═════╝ ╚═════╝╚═╝ ╚═╝
```
# 🔐 FingerLock
**Intelligent lock system with touch pattern + activity detection**
Secure your computer with a stylish **full-screen lock screen** and an Android-style **3×3 pattern** unlock. FingerLock monitors your keyboard/mouse activity and locks automatically after a period of inactivity.
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](https://pypi.org/project/fingerlock/)
---
## ✨ Features
### 🎨 Premium Lock Screen
- 🌈 **Full-screen display** with an animated gradient
- ⏰ **Real-time clock**
- 🎯 **Numbered 3×3 grid** (1-9, Android-style)
- ✨ **Smooth animations** (pulsing dots, progressive lines)
- 🔊 **Subtle sounds** (beeps on dots, error, success)
- 🖱️ **Click-free tracing** — simply glide the mouse
- ✅ **Intuitive validation zone**
### 🔒 Automatic Security
- ⌨️ **Keyboard detection** via `evdev` (Wayland-compatible)
- 🖱️ **Native Linux mouse detection**
- ⏱️ **Configurable delay** (10s, 30s, 60s...)
- 🔐 **Personal pattern** stored hashed (SHA-256)
- 🚀 **Ultra-lightweight** — ~10MB RAM, 0% CPU when idle
### 📊 Tracking & Logs
- 📝 Detailed logs in `~/.fingerlock/fingerlock.log`
- 🎯 Real-time event counter
- 📈 Lock history
---
## 🚀 Installation (2 commands)
### Linux (Ubuntu/Debian)
```bash
# 1. Install pipx
sudo apt install pipx && pipx ensurepath && source ~/.bashrc
# 2. Install FingerLock
pipx install fingerlock
# 3. Run it
fingerlock
```
### Other Linux distributions
```bash
# Fedora/RHEL
sudo dnf install pipx && pipx ensurepath
# Arch
sudo pacman -S python-pipx && pipx ensurepath
# Then
pipx install fingerlock && fingerlock
```
### Installing from source
```bash
git clone https://github.com/REBCDR07/fingerlock.git
cd fingerlock
pipx install .
fingerlock
```
---
## 📖 First launch
On startup, **two full-screen windows** appear:
### Step 1: Draw your pattern
```
┌─────────────────────────────────┐
│        🎨 Configuration         │
│                                 │
│       ┌───┬───┬───┐             │
│       │ 1 │ 2 │ 3 │             │
│       ├───┼───┼───┤             │
│       │ 4 │ 5 │ 6 │  ← Drag     │
│       ├───┼───┼───┤  the mouse  │
│       │ 7 │ 8 │ 9 │             │
│       └───┴───┴───┘             │
│                                 │
│         ✅ Validate             │
└─────────────────────────────────┘
```
**Example:** Trace `7 → 5 → 3` = a diagonal pattern
### Step 2: Confirm the pattern
Redraw the same pattern to confirm it.
### Step 3: Inactivity delay
```
⏱️ Delay before locking (seconds) [10]: 30
```
**That's it! Monitoring starts. 🎉**
---
## 🎮 Usage
### Available commands
```bash
# Start monitoring (default)
fingerlock
fingerlock start
# With a custom delay
fingerlock start -d 60  # 60 seconds
# Show the current config
fingerlock config
# Edit the configuration
fingerlock config --edit
# Reset the pattern
fingerlock reset
# System status
fingerlock status
# View the logs
fingerlock logs
fingerlock logs -n 100  # last 100 lines
```
### Stopping monitoring
`Ctrl+C` in the terminal
---
## 🔓 Unlocking
After a period of inactivity, the lock screen appears:
```
┌──────────────────────────────────────────┐
│                                          │
│              ⏰ 14:35:22                 │
│       Tuesday, February 18, 2026         │
│                                          │
│          🔒 System locked                │
│                                          │
│       7 → 5 → 3  ← Current pattern       │
│                                          │
│       ┌───┬───┬───┐                      │
│       │ 1 │ 2 │ 3 │  Drag your          │
│       ├───┼───┼───┤  pattern over the    │
│       │ 4 │ 5 │ 6 │  dots, then         │
│       ├───┼───┼───┤  hover over ✅       │
│       │ 7 │ 8 │ 9 │                     │
│       └───┴───┴───┘                      │
│                                          │
│            ✅ Validate                   │
│                                          │
│           Attempt 1/3                    │
└──────────────────────────────────────────┘
```
**3 attempts max**, then the program exits.
---
## ⚙️ Configuration
File: `~/.fingerlock/config.yaml`
```yaml
# Inactivity delay (seconds)
lock_delay_seconds: 30
# Pattern (SHA-256 hashed)
pattern_hash: d6a69166d21ee0c8a97327cb142adee2201749599f194a27453fc23edc0cde07
pattern_code: '12369' # For debugging only
# Logs
log_path: /home/user/.fingerlock/fingerlock.log
# Platform (auto)
platform_lock: auto
```
**To edit:**
```bash
fingerlock config --edit
nano ~/.fingerlock/config.yaml
```
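Pattern verification can be understood as comparing SHA-256 digests; the sketch below shows the general mechanism (illustrative only, not FingerLock's exact code):

```python
import hashlib

def hash_pattern(code: str) -> str:
    """Hash the pattern's digit sequence, e.g. '12369'."""
    return hashlib.sha256(code.encode("utf-8")).hexdigest()

stored = hash_pattern("12369")
print(hash_pattern("12369") == stored)  # -> True
print(len(stored))  # -> 64 (hex characters)
```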
---
## 🖥️ Compatibility
### ✅ Linux
| Distribution | Version | Status |
|--------------|---------|--------|
| Ubuntu | 20.04+ | ✅ Tested |
| Debian | 11+ | ✅ Compatible |
| Fedora | 35+ | ✅ Compatible |
| Arch Linux | Rolling | ✅ Compatible |
| Pop!_OS | 22.04+ | ✅ Tested |
**Supported sessions:**
- 🌊 **Wayland** (via `evdev`) ✅
- 🪟 **X11** (via `evdev`) ✅
**Desktop environments:**
- GNOME, KDE Plasma, XFCE, i3wm, Sway
**Linux prerequisites:**
```bash
# Add your user to the input group (if not already done)
sudo usermod -aG input $USER
# Then restart your session
```
### ⚠️ macOS
**Status:** Under development
- Activity detection: ⚠️ macOS limitations
- Lock screen: ✅ Compatible
### ⚠️ Windows
**Status:** Under development
- Activity detection: ⚠️ Requires adaptation
- Lock screen: ✅ Compatible
---
## 📊 Example logs
```log
2026-02-18T09:37:14 | INFO | [09:37:14] ℹ️ SYSTEM Surveillance démarrée
2026-02-18T09:37:14 | INFO | [09:37:14] 📡 SYSTEM 11 périphériques détectés
2026-02-18T09:38:38 | WARN | [09:38:38] 🔒 LOCK Verrouillage après 30s
2026-02-18T09:38:48 | INFO | [09:38:48] ℹ️ SYSTEM Système déverrouillé
```
---
## 🛠️ Development
### Clone the project
```bash
git clone https://github.com/REBCDR07/fingerlock.git
cd fingerlock
```
### Install in dev mode
```bash
python3 -m venv venv
source venv/bin/activate
pip install -e .
fingerlock
```
### Project structure
```
fingerlock/
├── fingerlock/
│   ├── __init__.py
│   ├── cli.py            # CLI interface
│   ├── core/
│   │   ├── watch.py          # evdev monitoring
│   │   ├── lockscreen.py     # Full-screen UI
│   │   ├── pattern_gui.py    # Pattern setup
│   │   └── locker.py         # System locking
│   ├── utils/
│   │   └── logger.py         # Logging
│   └── config/
│       └── settings.py       # YAML config
├── setup.py
├── README.md
└── LICENSE
```
### Contributing
1. Fork the project
2. Create a branch (`git checkout -b feature/ma-feature`)
3. Commit (`git commit -m 'Ajout feature X'`)
4. Push (`git push origin feature/ma-feature`)
5. Open a Pull Request
---
## 🆘 Troubleshooting
### ❌ "Command 'fingerlock' not found"
```bash
pipx ensurepath
source ~/.bashrc
pipx reinstall fingerlock
```
### ❌ "No accessible input device"
```bash
# Check the input group
groups $USER | grep input
# If missing:
sudo usermod -aG input $USER
# Restart your session (logout/login)
```
### ❌ "Events detected: 0" (activity is not detected)
```bash
# Check /dev/input permissions
ls -la /dev/input/event*
# They must be readable by the 'input' group
# If there is a problem, restart after joining the group
```
### ❌ The lock screen does not appear
```bash
# Check tkinter
python3 -c "import tkinter; print('OK')"
# If it fails, install it:
sudo apt install python3-tk
```
### ❌ Sounds do not work
That's expected! Sounds use `paplay` (PulseAudio). If it is missing, FingerLock works without sound.
---
## 📜 License
**MIT License** — see [LICENSE](LICENSE)
Copyright © 2026 Elton Ronald Bill HOUNNOU
---
## 👤 Author
### **Elton Ronald Bill HOUNNOU**
🚀 **Frontend Developer** | Passionate about Tech & AI
- 🌐 **LinkedIn**: [in/elton27](https://linkedin.com/in/elton27)
- 💻 **GitHub**: [REBCDR07](https://github.com/REBCDR07)
- 🦊 **GitLab**: [eltonhounnou2](https://gitlab.com/eltonhounnou2)
- 📧 **Email**: [eltonhounnou27@gmail.com](mailto:eltonhounnou27@gmail.com)
- 📱 **Phone**: +229 01 40 66 33 49
---
## 🙏 Acknowledgements
- [evdev](https://github.com/gvalkov/python-evdev) — native Linux activity detection
- [tkinter](https://docs.python.org/3/library/tkinter.html) — graphical interface
- The Python community 🐍
---
## 📈 Roadmap
- [ ] Native macOS support (via Quartz)
- [ ] Windows support (via pywin32)
- [ ] Daemon mode (start at boot)
- [ ] Graphical configuration interface
- [ ] Multi-user support
- [ ] Customizable themes
- [ ] Configuration export/import
---
## 📝 Changelog
### v2.0.0 & v2.1.0 (2026-02-18) — Premium Lock Screen
- ✨ **Animated full-screen lock screen** with gradient
- 🎯 **3×3 pattern** (1-9) with click-free mouse tracing
- 🔊 **Sounds** (beeps on dots, error, success)
- ⏰ **Real-time clock** + date
- ⚡ **evdev detection** (Wayland-compatible)
- 📊 **Debug mode** with event counter
- 🎨 **Animations** (pulsing dots, progressive lines)
### v1.0.0 (2026-02-16) — Initial Release
- 🎉 First public release
- ⌨️ Keyboard/mouse detection (pynput)
- 🔒 Automatic locking
- 📦 PyPI package
---
**⭐ If you find FingerLock useful, give it a star on [GitHub](https://github.com/REBCDR07/fingerlock)!**
---
<div align="center">
**Made with ❤️ by Elton Ronald Bill HOUNNOU**
[🐛 Report a bug](https://github.com/REBCDR07/fingerlock/issues) • [✨ Request a feature](https://github.com/REBCDR07/fingerlock/issues) • [💬 Discussions](https://github.com/REBCDR07/fingerlock/discussions)
</div>
| text/markdown | Elton Hounnou | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"pynput>=1.7.0",
"evdev>=1.6.0; sys_platform == \"linux\"",
"evdev>=1.6.0; sys_platform == \"linux\"",
"PyYAML>=5.4.0",
"setuptools>=69.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T13:35:21.363163 | fingerlock-2.2.2.tar.gz | 26,226 | 89/e0/02be35309f71bcbe5b06e2c50cf4c47fe3f31eae42b06fe6a94faa4bc0e8/fingerlock-2.2.2.tar.gz | source | sdist | null | false | e8433244bc13e2af4d86110e532650f8 | b847b669ed2f952412322522eeeabda5dade29de07a94869be4c23d69628eecb | 89e002be35309f71bcbe5b06e2c50cf4c47fe3f31eae42b06fe6a94faa4bc0e8 | null | [
"LICENSE"
] | 215 |
2.4 | databao-context-engine | 0.3.0 | Semantic context for your LLMs — generated automatically | [](https://github.com/JetBrains#jetbrains-on-github)
[](https://pypi.org/project/databao-context-engine)
[](https://github.com/JetBrains/databao-context-engine?tab=License-1-ov-file)
[//]: # ([](https://pypi.org/project/databao-context-engine/))
<h1 align="center">Databao Context Engine</h1>
<p align="center">
<b>Semantic context for your LLMs — generated automatically.</b><br/>
No more copying schemas. No manual documentation. Just accurate answers.
</p>
<p align="center">
<a href="https://databao.app">Website</a> •
<a href="#quickstart">Quickstart</a> •
<a href="#supported-data-sources">Data Sources</a> •
<a href="#contributing">Contributing</a>
</p>
---
## What is Databao Context Engine?
Databao Context Engine is a CLI tool that **automatically generates governed semantic context** from your databases, BI tools, documents, and spreadsheets.
Integrate it with any LLM to deliver **accurate, context-aware answers** — without copying schemas or writing documentation by hand.
```
Your data sources → Context Engine → Unified semantic graph → Any LLM
```
## Why choose Databao Context Engine?
| Feature | What it means for you |
|----------------------------|----------------------------------------------------------------|
| **Auto-generated context** | Extracts schemas, relationships, and semantics automatically |
| **Runs locally** | Your data never leaves your environment |
| **MCP integration** | Works with Claude Desktop, Cursor, and any MCP-compatible tool |
| **Multiple sources** | Databases, dbt projects, spreadsheets, documents |
| **Built-in benchmarks** | Measure and improve context quality over time |
| **LLM agnostic** | OpenAI, Anthropic, Ollama, Gemini — use any model |
| **Governed & versioned** | Track, version, and share context across your team |
| **Dynamic or static** | Serve context via MCP server or export as artifact |
## Installation
Databao Context Engine is [available on PyPI](https://pypi.org/project/databao-context-engine/) and can be installed with uv, pip, or another package manager.
### Using uv
1. Install Databao Context Engine:
```bash
uv tool install databao-context-engine
```
1. Add it to your PATH:
```bash
uv tool update-shell
```
1. Verify the installation:
```bash
dce --help
```
### Using pip
1. Install Databao Context Engine:
```bash
pip install databao-context-engine
```
1. Verify the installation:
```bash
dce --help
```
## Supported data sources
* <img src="https://cdn.simpleicons.org/postgresql/316192" width="16" height="16" alt=""> PostgreSQL
* <img src="https://cdn.simpleicons.org/mysql/4479A1" width="16" height="16" alt=""> MySQL
* <img src="https://cdn.simpleicons.org/sqlite/003B57" width="16" height="16" alt=""> SQLite
* <img src="https://cdn.simpleicons.org/duckdb/FFF000" width="16" height="16" alt=""> DuckDB
* <img src="https://cdn.simpleicons.org/dbt/FF694B" width="16" height="16" alt=""> dbt projects
* 📄 Documents & spreadsheets *(coming soon)*
## Supported LLMs
| Provider | Configuration |
|---------------|----------------------------------------------|
| **Ollama** | `languageModel: OLLAMA`: runs locally, free |
| **OpenAI** | `languageModel: OPENAI`: requires an API key |
| **Anthropic** | `languageModel: CLAUDE`: requires an API key |
| **Google** | `languageModel: GEMINI`: requires an API key |
## Quickstart
### 1. Create a project
1. Create a new directory for your project and navigate to it:
```bash
mkdir dce-project && cd dce-project
```
1. Initialize a new project:
```bash
dce init
```
### 2. Configure data sources
1. When prompted, agree to create a new datasource.
You can also use the `dce datasource add` command.
1. Provide the data source type and its name.
1. Open the config file that was created for you in your editor and fill in the connection details.
1. Repeat these steps for all data sources you want to include in your project.
1. If you have data in Markdown or text files,
you can add them to the `dce/src/files` directory.
### 3. Build context
1. To build the context, run the following command:
```bash
dce build
```
### 4. Use Context with Your LLM
**Option A: Dynamic via MCP Server**
Databao Context Engine exposes the context through a local MCP Server, so your agent can access the latest context at runtime.
1. In **Claude Desktop**, **Cursor**, or another MCP-compatible agent, add the following configuration.
Replace `dce-project/` with the path to your project directory:
In `claude_desktop_config.json`, `mcp.json`, or a similar file:
```json
{
"mcpServers": {
"dce": {
"command": "dce mcp",
"args": ["--project-dir", "dce-project/"]
}
}
}
```
1. Save the file and restart your agent.
1. Open a new chat, select the `dce` server in the chat window, and ask questions about your project context.
**Option B: Static artifact**
Even if you don’t have Claude or Cursor installed on your local machine,
you can still use the context built by Databao Context Engine by pasting it directly into your chat with an AI assistant.
1. Navigate to `dce-project/output/` and open the directory with the latest run.
1. Attach the `all_results.yaml` file to your chat with the AI assistant or copy and paste its contents into your chat.
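If you script this step, a small helper can pick out the most recent run's artifact automatically. A minimal sketch: the `output/` layout and `all_results.yaml` filename come from the steps above, while the helper itself (`latest_context_artifact`) is hypothetical, not part of the CLI:

```python
from pathlib import Path

def latest_context_artifact(output_dir: str) -> Path:
    """Return the all_results.yaml from the most recently modified run directory."""
    runs = [p for p in Path(output_dir).iterdir() if p.is_dir()]
    if not runs:
        raise FileNotFoundError(f"no run directories in {output_dir}")
    # The newest run directory is assumed to hold the latest context.
    latest = max(runs, key=lambda p: p.stat().st_mtime)
    return latest / "all_results.yaml"
```

You can then paste that file's contents into any chat-based assistant.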
## API Usage
### 1. Create a project
```python
import tempfile
from pathlib import Path

# Initialise the project in an existing directory
from databao_context_engine import init_dce_project
project_manager = init_dce_project(Path(tempfile.mkdtemp()))
# Or use an existing project
from databao_context_engine import DatabaoContextProjectManager
project_manager = DatabaoContextProjectManager(project_dir=Path("path/to/project"))
```
### 2. Configure data sources
```python
from databao_context_engine import (
DatasourceConnectionStatus,
DatasourceType,
)
# Create a new datasource
postgres_datasource_id = project_manager.create_datasource_config(
DatasourceType(full_type="postgres"),
datasource_name="my_postgres_datasource",
config_content={
"connection": {"host": "localhost", "user": "dev", "password": "pass"}
},
).datasource.id
# Check the connection to the datasource is valid
check_result = project_manager.check_datasource_connection()
assert len(check_result) == 1
assert check_result[0].datasource_id == postgres_datasource_id
assert check_result[0].connection_status == DatasourceConnectionStatus.VALID
```
### 3. Build context
```python
build_result = project_manager.build_context()
assert len(build_result) == 1
assert build_result[0].datasource_id == postgres_datasource_id
assert build_result[0].datasource_type == DatasourceType(full_type="postgres")
assert build_result[0].context_file_path.is_file()
```
### 4. Use the built contexts
#### Create a context engine
```python
# Switch to the engine if you're already using a project_manager
context_engine = project_manager.get_engine_for_project()
# Or directly create a context engine from the path to your DCE project
from databao_context_engine import DatabaoContextEngine
context_engine = DatabaoContextEngine(project_dir=Path("path/to/project"))
```
#### Get all built contexts
```python
# Use the engine to retrieve the built contexts
all_built_contexts = context_engine.get_all_contexts()
assert len(all_built_contexts) == 1
assert all_built_contexts[0].datasource_id == postgres_datasource_id
print(all_built_contexts[0].context)
```
#### Search in built contexts
```python
# Run a vector similarity search
results = context_engine.search_context("my search query")
print(f"Found {len(results)} results for query")
print(
"\n\n".join(
[f"{str(result.datasource_id)}\n{result.context_result}" for result in results]
)
)
```
## Contributing
We’d love your help! Here’s how to get involved:
- ⭐ **Star this repo** — it helps others find us!
- 🐛 **Found a bug?** [Open an issue](https://github.com/JetBrains/databao-context-engine/issues)
- 💡 **Have an idea?** We’re all ears — create a feature request
- 👍 **Upvote issues** you care about — helps us prioritize
- 🔧 **Submit a PR**
- 📝 **Improve docs** — typos, examples, tutorials — everything helps!
New to open source? No worries! We're friendly and happy to help you get started. 🌱
For more details, see [CONTRIBUTING](CONTRIBUTING.md).
## 📄 License
Apache 2.0 — use it however you want. See the [LICENSE](LICENSE.md) file for details.
---
<p align="center">
<b>Like Databao Context Engine?</b> Give us a ⭐ — it means a lot!
</p>
<p align="center">
<a href="https://databao.app">Website</a> •
<a href="https://discord.gg/hEUqCcWdVh">Discord</a>
</p>
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.3.0",
"duckdb>=1.4.3",
"pyyaml>=6.0.3",
"requests>=2.32.5",
"mcp>=1.23.3",
"pydantic>=2.12.4",
"jinja2>=3.1.6",
"sqlparse>=0.5.5",
"pyarrow>=19.0.1",
"pyathena>=3.25.0; extra == \"athena\"",
"clickhouse-connect>=0.10.0; extra == \"clickhouse\"",
"mssql-python>=1.0.0; extra == \"mssql... | [] | [] | [] | [
"Homepage, https://databao.app/",
"Source, https://github.com/JetBrains/databao-context-engine"
] | uv/0.9.7 | 2026-02-18T13:34:10.622433 | databao_context_engine-0.3.0.tar.gz | 95,191 | 2a/c7/5262b299ee18d4d2c37f170a9080566c7576287cbc29b903cb6ececf44ef/databao_context_engine-0.3.0.tar.gz | source | sdist | null | false | 505197fac43cb7aa437f988c2a4617a6 | 8c0fe8f71616ecdc2e3e93eacf4bad3907d73d4a482ad39c493f36ecabd652a1 | 2ac75262b299ee18d4d2c37f170a9080566c7576287cbc29b903cb6ececf44ef | Apache-2.0 AND LicenseRef-Additional-Terms | [
"LICENSE.md"
] | 545 |
2.2 | aimmspy | 25.3.1.8 | Python bindings for the AIMMS optimization platform, built with pybind11 for seamless C++ integration. Enables efficient data exchange and interaction with AIMMS projects using pandas, polars, and pyarrow. Ideal for advanced optimization workflows requiring high-performance native code. | # AIMMS Python Library
With this library it is possible to interact with AIMMS models from Python, enabling high-performance, headless interaction with AIMMS models from within Python scripts.
---
## Overview
`aimmspy` is a Python module built on **pybind11** for tight C++ integration, enabling efficient interaction with AIMMS models. With `aimmspy`, you can:
- Assign and retrieve data between Python and AIMMS using Python dictionaries, Pandas, Polars, or Arrow data frames
- Execute AIMMS procedures (such as solves) programmatically and capture the results
- Benefit from high-performance native code, ideal for advanced optimization workflows
`aimmspy` is a key component of the **AIMMS Python Bridge**, designed for "Python-in-the-lead" scenarios, where Python scripts drive AIMMS model runs. It complements the `pyaimms` library (accessible from within an AIMMS project), which supports the reverse ("AIMMS-in-the-lead") workflow.
---
## Key Features
| Feature | Description |
|---------|-------------|
| **High-performance integration** | `aimmspy` uses `pybind11` for efficient C++ access to AIMMS runtime |
| **Flexible data exchange** | Leverage Python-native data structures (dictionaries, Pandas, Polars, PyArrow) for data handling between Python and AIMMS |
| **Programmatic control** | Trigger AIMMS procedures (e.g. solve) directly from Python and retrieve results |
| **Python-first workflow** | Ideal for batch runs, automated pipelines, and embedding optimization in external applications |
| **Bulk data handling** | The `multi_assign()` and `multi_data()` methods allow sending and fetching multiple AIMMS identifiers in a single call |
---
## Prerequisites
- [**AIMMS Developer** installed](https://www.aimms.com/support/downloads/) - the low-code optimization modeling platform that provides:
- A full-featured IDE with a rich mathematical modeling language for formulating LP, MIP, NLP, MINLP, stochastic, and robust optimization models
- Access to high-performance solvers (e.g., CPLEX, Gurobi, BARON, CBC, IPOPT, KNITRO, CP Optimizer)
- An integrated WebUI builder, model explorer, and deployment tools
- Fast deployment and decision-support app creation
- A [valid AIMMS Developer license](https://licensing.aimms.cloud/license) and an existing AIMMS project to connect with (project file, executable, license URL)
- You will receive a license URL in the installation instructions after verification; this URL is needed when initializing a connection to the AIMMS project
More information on how to get started can be found in the [python bridge documentation](https://documentation.aimms.com/aimmspy/index.html).
**Note**: AIMMS offers a [**free Academic License**](https://licensing.cloud.aimms.com/license/academic.htm) for students, educators, and researchers. This makes it easy for academic users to experiment with AIMMS and `aimmspy` without cost.
---
## Installation
Install via `pip`:
```bash
pip install aimmspy
```
## Basic Usage Example
Here is a minimal example showing how to connect to an AIMMS project, assign data, run a solve procedure, and retrieve results:
```python
from aimmspy.project.project import Project, Model
from aimmspy.utils import find_aimms_path
# Initialize connection to AIMMS project
project = Project(
# path to the AIMMS bin folder (or Lib on Linux)
aimms_path=find_aimms_path("25.5.4.3"),
# path to AIMMS project
aimms_project_file="path/to/your/project.aimms",
# license url
license_url="wss://licensing.aimms.cloud/your-license-url"
)
# Get a handle to the AIMMS model
model: Model = project.get_model(__file__)
# Assign supply and demand data to the identifiers in the AIMMS model
model.Supply.assign({"Factory1": 35, "Factory2": 50})
model.Demand.assign({"Market1": 45, "Market2": 40})
# Assign transportation costs
model.TransportCost.assign({
("Factory1", "Market1"): 10,
("Factory1", "Market2"): 15,
("Factory2", "Market1"): 20,
("Factory2", "Market2"): 5,
})
# Run the optimization procedure defined in AIMMS
model.TransportSolve()
# Retrieve results: optimal shipment quantities
shipments = model.Shipments.data()
print("Optimal shipments:")
print(shipments)
```
This example assumes you have an AIMMS project with identifiers `Supply`, `Demand`, `TransportCost`, `Shipments`, and a procedure `TransportSolve` set up. Replace with identifiers from your own AIMMS project.
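The tuple-keyed dictionaries passed to `assign` above can also be generated from tabular records rather than written by hand. A minimal pure-Python sketch; the helper name `records_to_assign_dict` is hypothetical and not part of `aimmspy`:

```python
def records_to_assign_dict(records, key_fields, value_field):
    """Turn a list of row dicts into the {(k1, k2, ...): value} shape
    accepted by identifier .assign() calls."""
    out = {}
    for row in records:
        key = tuple(row[f] for f in key_fields)
        # Single-index identifiers take plain keys instead of 1-tuples.
        out[key[0] if len(key) == 1 else key] = row[value_field]
    return out

records = [
    {"factory": "Factory1", "market": "Market1", "cost": 10},
    {"factory": "Factory1", "market": "Market2", "cost": 15},
]
cost_data = records_to_assign_dict(records, ["factory", "market"], "cost")
# cost_data == {("Factory1", "Market1"): 10, ("Factory1", "Market2"): 15}
```

The same shape works for data exported from Pandas or Polars via their record-oriented conversions.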
## License
This project is licensed under the MIT License.
## Support
For questions, bug reports, or feature requests, please contact AIMMS B.V. via [support](https://community.aimms.com/p/developer-support) or post a question on the [AIMMS Community](https://community.aimms.com/). We are happy to help you with any issues or questions you may have.
| text/markdown | AIMMS B.V. | null | AIMMS B.V. | null | MIT | AIMMS, Optimization, Operations Research, Mathematical Modeling | [
"Programming Language :: Python :: 3",
"Programming Language :: C++",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Software Development :: Libraries :: Python Modules",
"To... | [] | null | null | >=3.10 | [] | [] | [] | [
"pyarrow==23.0.0",
"pandas",
"polars==1.30.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.13 | 2026-02-18T13:33:53.581908 | aimmspy-25.3.1.8-cp311-cp311-manylinux_2_27_x86_64.whl | 536,969 | 8d/0d/b1e6bf121a903207e6033efe0427a4791884772181bbba4c7f1c9c85c850/aimmspy-25.3.1.8-cp311-cp311-manylinux_2_27_x86_64.whl | cp311 | bdist_wheel | null | false | 01d7d8352dbc58b24589e3bab770a8c7 | f636546ac87bdf0ba4ac5d7547923f592484921255647269230dd6c2f5fb04e1 | 8d0db1e6bf121a903207e6033efe0427a4791884772181bbba4c7f1c9c85c850 | null | [] | 2,192 |
2.4 | superbit-lsh | 0.1.0 | A lightweight, in-memory vector index for approximate nearest neighbors using Locality-Sensitive Hashing | # superbit
A lightweight, in-memory vector index for approximate nearest-neighbor (ANN) search using Locality-Sensitive Hashing.
[](https://crates.io/crates/superbit_lsh)
[](https://docs.rs/superbit_lsh)
[](https://github.com/kunalsinghdadhwal/superbit)
## Overview
`superbit` provides fast approximate nearest-neighbor search over
high-dimensional vectors without the operational overhead of a full vector
database. It implements **random hyperplane LSH** (SimHash), a
locality-sensitive hashing scheme that hashes similar vectors into the same
buckets with high probability. Candidate vectors retrieved from the hash
tables are then re-ranked with an exact distance computation, giving a good
balance between speed and recall.
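The core idea, random hyperplanes turning similar vectors into similar bit signatures, fits in a few lines. An illustrative plain-Python sketch of the concept, not the crate's API:

```python
import random

def simhash(vector, hyperplanes):
    """One bit per hyperplane: the sign of the dot product."""
    bits = 0
    for plane in hyperplanes:
        dot = sum(v * p for v, p in zip(vector, plane))
        bits = (bits << 1) | (1 if dot >= 0.0 else 0)
    return bits

random.seed(42)
dim, num_bits = 8, 6
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(num_bits)]

v = [0.1, -0.3, 0.5, 0.2, -0.1, 0.4, 0.0, 0.7]
scaled = [2.0 * x for x in v]  # same direction => same signs of dot products
print(simhash(v, planes) == simhash(scaled, planes))  # True
```

Because only the sign of each projection matters, the signature depends on a vector's direction, which is exactly why this family of hashes suits cosine similarity.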
**Target use cases:**
- Retrieval-augmented generation (RAG) prototyping
- Recommendation system experiments
- Embedding similarity search during development
- Anywhere you need sub-linear ANN queries and want to avoid external
infrastructure
## Features
- **Random hyperplane LSH (SimHash)** for cosine, Euclidean, and dot-product
similarity
- **Multi-probe querying** -- probe neighboring hash buckets to improve recall
without adding more tables
- **Thread-safe concurrent access** via `parking_lot::RwLock` (parallel reads,
exclusive writes)
- **Builder pattern** for ergonomic index configuration
- **Auto-tuning** -- `suggest_params` recommends `num_hashes`, `num_tables`,
and `num_probes` given a target recall and dataset size
- **Runtime metrics** -- lock-free atomic counters track query latency,
candidate counts, and bucket hit rates
- **Optional features:**
- `parallel` -- parallel bulk insert and batch query via rayon
- `persistence` -- save/load indexes to disk with serde + bincode (or JSON)
- `python` -- Python bindings via PyO3
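The multi-probe idea above, visiting neighboring buckets by flipping bits of the base signature, can be sketched in a few lines. Illustrative plain Python only, not the crate's API:

```python
def probe_sequence(base_hash, num_bits, num_probes):
    """Yield the base bucket plus variants with one bit flipped.
    A real implementation flips the bits whose dot products were
    closest to zero first; here we flip low-order bits for brevity."""
    buckets = [base_hash]
    for i in range(min(num_probes, num_bits)):
        buckets.append(base_hash ^ (1 << i))
    return buckets

print(probe_sequence(0b1011, num_bits=4, num_probes=2))
# [11, 10, 9]  ->  0b1011, 0b1010, 0b1001
```

Each extra probe is one more bucket lookup per table, which is why probing raises recall at a much lower cost than adding whole tables.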
## Architecture
### Module Structure
```mermaid
graph TD
A[<b>LshIndex</b><br/>Public API] --> B[RwLock<IndexInner><br/>Thread-safe wrapper]
A --> M[MetricsCollector<br/>Atomic counters]
B --> V[vectors<br/>HashMap<id, Array1<f32>>]
B --> T[tables<br/>Vec<HashMap<u64, Vec<id>>>]
B --> H[hashers<br/>Vec<RandomProjectionHasher>]
B --> C[IndexConfig]
H --> |"sign(dot(v, proj))"| T
subgraph Optional Features
P[parallel<br/>rayon batch ops]
S[persistence<br/>serde + bincode/JSON]
PY[python<br/>PyO3 bindings]
end
A -.-> P
A -.-> S
A -.-> PY
```
### Query Flow
```mermaid
flowchart LR
Q[Query Vector] --> N{Normalize?}
N -->|Cosine| NORM[L2 Normalize]
N -->|Other| HASH
NORM --> HASH
HASH[Hash with L Hashers] --> PROBE[Multi-probe:<br/>flip uncertain bits]
PROBE --> T1[Table 1<br/>base + probes]
PROBE --> T2[Table 2<br/>base + probes]
PROBE --> TL[Table L<br/>base + probes]
T1 --> UNION[Candidate Union<br/>deduplicate IDs]
T2 --> UNION
TL --> UNION
UNION --> RANK[Exact Re-rank<br/>compute true distance]
RANK --> TOPK[Return Top-K]
```
### Insert Flow
```mermaid
flowchart LR
I[Insert: id, vector] --> DUP{ID exists?}
DUP -->|Yes| REM[Remove old<br/>hash entries]
DUP -->|No| NORM
REM --> NORM{Normalize?}
NORM -->|Cosine| DO_NORM[L2 Normalize]
NORM -->|Other| STORE
DO_NORM --> STORE
STORE[Compute L hashes] --> BUCK[Push id into<br/>L hash buckets]
BUCK --> VEC[Store vector in<br/>central HashMap]
```
## Quick Start
Add the crate to your `Cargo.toml`:
```toml
[dependencies]
superbit_lsh = "0.1"
```
Build an index, insert vectors, and query:
```rust
use superbit::{LshIndex, DistanceMetric};
fn main() -> superbit::Result<()> {
// Build a 128-dimensional index with cosine similarity.
let index = LshIndex::builder()
.dim(128)
.num_hashes(8)
.num_tables(16)
.num_probes(3)
.distance_metric(DistanceMetric::Cosine)
.seed(42)
.build()?;
// Insert vectors (ID, slice).
let v = vec![0.1_f32; 128];
index.insert(0, &v)?;
let v2 = vec![0.2_f32; 128];
index.insert(1, &v2)?;
// Query for the 5 nearest neighbors.
let results = index.query(&v, 5)?;
for r in &results {
println!("id={} distance={:.4}", r.id, r.distance);
}
Ok(())
}
```
## Feature Flags
| Flag | Effect |
|---------------|-------------------------------------------------------------|
| `parallel` | Parallel bulk insert and batch query via rayon |
| `persistence` | Save/load index to disk (serde + bincode + JSON) |
| `python` | Python bindings via PyO3 |
| `full` | Enables `parallel` + `persistence` |
Enable features in your `Cargo.toml`:
```toml
[dependencies]
superbit_lsh = { version = "0.1", features = ["full"] }
```
## Configuration Guide
The three main knobs that control the speed/recall/memory trade-off are:
| Parameter | What it controls | Higher value means |
|--------------|---------------------------------------------------|---------------------------------------------|
| `num_hashes` | Hash bits per table (1--64) | Smaller, more selective buckets; lower recall per table but fewer candidates to re-rank |
| `num_tables` | Number of independent hash tables | Better recall (more chances to find a neighbor); more memory |
| `num_probes` | Extra neighboring buckets probed per table | Better recall without adding tables; slightly more query time |
**Rules of thumb:**
- Start with the defaults (`num_hashes=8`, `num_tables=16`, `num_probes=3`)
and measure recall on a held-out set.
- If recall is too low, increase `num_tables` or `num_probes` first.
- If queries are too slow (too many candidates), increase `num_hashes` to make
buckets more selective.
- For cosine similarity the index L2-normalizes vectors on insertion by default
(`normalize_vectors=true`).
## Auto-Tuning
Use `suggest_params` to get a starting configuration based on your dataset size
and target recall:
```rust
use superbit::{suggest_params, DistanceMetric};
let params = suggest_params(
0.90, // target recall
100_000, // expected dataset size
768, // vector dimensionality
DistanceMetric::Cosine, // distance metric
);
println!("Suggested: hashes={}, tables={}, probes={}, est. recall={:.2}",
params.num_hashes, params.num_tables, params.num_probes, params.estimated_recall);
```
You can also estimate the recall of a specific configuration without building
an index:
```rust
use superbit::{estimate_recall, DistanceMetric};
let recall = estimate_recall(16, 8, 2, DistanceMetric::Cosine);
println!("Estimated recall: {:.2}", recall);
```
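Such estimates typically come from the standard SimHash collision model: for two vectors at angle theta, one hyperplane bit agrees with probability p = 1 - theta/pi, a full k-bit signature collides with probability p^k, and at least one of L tables collides with probability 1 - (1 - p^k)^L. A plain-Python sketch of that formula follows; it may differ from the crate's exact internal model and ignores multi-probe:

```python
import math

def simhash_recall(angle_rad, num_hashes, num_tables):
    """P(candidate found) for two vectors at a given angle,
    under the classic random-hyperplane LSH collision model."""
    p_bit = 1.0 - angle_rad / math.pi           # single-bit agreement
    p_table = p_bit ** num_hashes               # all k bits agree
    return 1.0 - (1.0 - p_table) ** num_tables  # any of L tables hits

# Close neighbors (18 degrees apart) with k=8 bits and L=16 tables:
r = simhash_recall(math.radians(18), num_hashes=8, num_tables=16)
print(f"{r:.3f}")
```

The model makes the trade-offs in the table above concrete: raising `num_hashes` shrinks `p_table`, while raising `num_tables` compensates by giving more chances to collide.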
## Performance
LSH-based indexing provides **sub-linear query time** by reducing the search
space to a small set of candidate vectors. In practice:
- For datasets **under ~10,000 vectors**, brute-force linear scan is often fast
enough and gives exact results. LSH adds overhead that may not pay off at
this scale.
- For datasets **above ~10,000 vectors**, LSH becomes increasingly beneficial.
Query time grows much more slowly than dataset size.
- With well-tuned parameters you can typically achieve **80--95% recall** while
examining only a small fraction of the dataset.
The `parallel` feature flag enables rayon-based parallelism for bulk inserts
(`par_insert_batch`) and batch queries (`par_query_batch`), which can
significantly speed up workloads that operate on many vectors at once.
Use the built-in metrics collector (`.enable_metrics()` on the builder) to
monitor query latency, candidate counts, and bucket hit rates in production.
## Comparison with Other Rust ANN Crates
| Crate | Algorithm | Notes |
|-------------------|-----------------|-------------------------------------------------|
| **superbit** | Random hyperplane LSH | Lightweight, pure Rust, no C/C++ deps. Good for prototyping and moderate-scale workloads. |
| usearch | HNSW | High performance, C++ core with Rust bindings. Better for large-scale production. |
| hora | HNSW / IVF-PQ | Pure Rust, multiple algorithms. More complex API. |
| hnsw_rs | HNSW | Pure Rust HNSW implementation. |
`superbit` is intentionally simple: a single algorithm, a small API
surface, and no native dependencies. It is a good fit for prototyping,
moderate-scale applications, and situations where you want to understand and
control the indexing behavior. For very large datasets (millions of vectors) or
when you need maximum throughput, a graph-based index like HNSW will generally
outperform LSH.
## License
Licensed under either of
- [Apache License, Version 2.0](LICENSE-APACHE)
- [MIT License](LICENSE-MIT)
at your option.
| text/markdown; charset=UTF-8; variant=GFM | null | Kunal Singh Dadhwal <kunalsinghdadhwal@gmail.com> | null | null | MIT OR Apache-2.0 | lsh, ann, vector-search, nearest-neighbors, embeddings | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"P... | [] | https://github.com/kunalsinghdadhwal/superbit | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://docs.rs/superbit_lsh",
"Homepage, https://github.com/kunalsinghdadhwal/superbit",
"Repository, https://github.com/kunalsinghdadhwal/superbit"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T13:32:46.477826 | superbit_lsh-0.1.0-cp313-cp313-win_amd64.whl | 205,143 | 09/b7/0562476a750a018dd20157f612b287d73044cfbd7ed9275ad7e29fe4db35/superbit_lsh-0.1.0-cp313-cp313-win_amd64.whl | cp313 | bdist_wheel | null | false | 1031cfe1258364de438ddbef86dc5be9 | 1e9babe1935edbd755051f2ba3353fe71e5f4dee4ec30cf3a23c0031aeae195f | 09b70562476a750a018dd20157f612b287d73044cfbd7ed9275ad7e29fe4db35 | null | [
"LICENSE-APACHE",
"LICENSE-MIT"
] | 1,096 |
2.4 | ladok3 | 5.10 | Python wrapper and CLI for the LADOK3 REST API. | # ladok3: Python wrapper for LADOK3 API
This package provides a wrapper for the LADOK3 API used by
[start.ladok.se][ladok]. This makes it easy to automate reporting grades,
compute statistics etc.
## Installation
To install, run:
```bash
pip install ladok3
sudo cp $(find / -name ladok.bash) /etc/bash_completion.d
ladok login
```
If you run the second line above, you'll get tab completion for the `ladok`
command when you use the `bash` shell.
The third command logs you in; you only need to do this once.
An alternative to installing the package is to run the [Docker image][docker].
```bash
docker run -it dbosk/ladok3 /bin/bash
```
Or simply adapt your own image.
## Usage
There are two ways to use the package: as a Python package or through the
command-line tool `ladok`.
### On the command line
Let's assume that we have a student with personnummer 123456-1234.
Let's also assume that this student has taken a course with course code AB1234
and finished the module LAB1 on date 2021-03-15.
Say also that the student's assignments were graded by the teacher and two TAs:
- Daniel Bosk <dbosk@kth.se> (teacher)
- Teaching Assistantsdotter <tad@kth.se>
- Teaching Assistantsson <tas@kth.se>
Then we can report this result like this:
```bash
ladok report 123456-1234 AB1234 LAB1 -d 2021-03-15 -f \
"Daniel Bosk <dbosk@kth.se>" "Teaching Assistantsdotter <tad@kth.se>" \
"Teaching Assistantsson <tas@kth.se>"
```
If we use Canvas for all results, we can even report all results for a
course.
```bash
pip install canvaslms
canvaslms login
canvaslms results -c AB1234 -A LAB1 | ladok report -v
```
The `canvaslms results` command will export the results in CSV format, this
will be piped to `ladok report` that can read it and report it in bulk.
Most likely you'll need to pass the CSV through `sed` to change the column
containing the course identifier to just contain the course code. At KTH, the
course code attribute in Canvas contains course code and the semester. So I
have to `sed` away the semester part.
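For example, if the Canvas course attribute looks like `AB1234HT21` but LADOK expects just `AB1234`, a `sed` filter in the pipe can strip the semester suffix. This is a hedged sketch: the `HT`/`VT` plus two-digit pattern is an assumption about the naming, so adapt the regex to your institution:

```bash
# Strip a trailing semester code (e.g. HT21 or VT22) from the course column.
printf 'AB1234HT21,LAB1,student@kth.se,P,2021-03-15\n' \
  | sed -E 's/^([A-Z]{2}[0-9]{4})(HT|VT)[0-9]{2}/\1/'
# -> AB1234,LAB1,student@kth.se,P,2021-03-15
# In the real pipeline the filter sits between the two tools:
#   canvaslms results -c AB1234 -A LAB1 | sed -E '...' | ladok report -v
```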
### As a Python package
To use the package, it's just to import the package as usual.
```python
import ladok3
credentials = {
"username": "dbosk@ug.kth.se",
"password": "password ..."
}
ls = ladok3.LadokSession("KTH Royal Institute of Technology",
vars=credentials)
student = ls.get_student("123456-1234")
course_participation = student.courses(code="AB1234")[0]
for result in course_participation.results():
print(f"{course_participation.code} {result.component}: "
f"{result.grade} ({result.date})")
component_result = course_participation.results(component="LAB1")[0]
component_result.set_grade("P", "2021-03-15")
component_result.finalize()
```
A better way is to use the `load_credentials` function of the CLI.
```python
import ladok3
import ladok3.cli
ls = ladok3.LadokSession(*ladok3.cli.load_credentials())
student = ls.get_student("123456-1234")
# ...
```
An even better way is to restore the stored session, reusing its already built cache.
```python
import ladok3
import ladok3.cli
_, credentials = ladok3.cli.load_credentials()
ls = ladok3.cli.restore_ladok_session(credentials)
student = ls.get_student("123456-1234")
# ...
ladok3.cli.store_ladok_session(ls, credentials)
```
## More documentation
There are more detailed usage examples in the detailed documentation that can be
found with the [releases][releases] and in the `examples` directory.
[ladok]: https://start.ladok.se
[docker]: https://hub.docker.com/repository/docker/dbosk/ladok3
[releases]: https://github.com/dbosk/ladok3/releases
# The examples
There are some examples that can be found in the `examples` directory:
- `example_LadokSession.py` just shows how to establish a session.
- `example_Course.py` shows course data related examples.
- `example_Student.py` shows student data related examples.
- `canvas2ladok.py` shows how to transfer grades from KTH Canvas to LADOK.
- `statsdata.py` shows how to extract data for doing statistics for a course
and the students' results.
We also have a few more examples described in the sections below.
## `canvas_ladok3_spreadsheet.py`
Purpose: Use the data in a Canvas course room together with the data from Ladok3 to create a spreadsheet of students in the course
and include their Canvas user_id, name, Ladok3 Uid, program_code, program name, etc.
Note that the course_id can be given as a numeric value or a string which will be matched against the courses in the user's dashboard cards. It will first match against course codes, then short name, then original names.
Input:
```
canvas_ladok3_spreadsheet.py canvas_course_id
```
Add the "-T" flag to run in the Ladok test environment.
Output: outputs a file (`users_programs-COURSE_ID.xlsx`) containing a spreadsheet of the users' information
```
canvas_ladok3_spreadsheet.py 12162
canvas_ladok3_spreadsheet.py -t 'II2202 HT20-1'
```
## `ladok3_course_instance_to_spreadsheet.py`
Purpose: Use the data in Ladok3 together with the data from Canvas to create a spreadsheet of students in a course
instance and include their Canvas user_id (or "not in Canvas" if they do not have a Canvas user_id), name, Ladok3 Uid, program_code, program name, etc.
Note that the course_id can be given as a numeric value or a string which will be matched against the courses in the user's dashboard cards. It will first match against course codes, then short name, then original names.
Input:
```
ladok3_course_instance_to_spreadsheet.py course_code course_instance
```
or
```
ladok3_course_instance_to_spreadsheet.py canvas_course_id
```
or
```
./ladok3_course_instance_to_spreadsheet.py course_code
```
Optionally include their personnumber with the flag -p or --personnumbers
Add the "-T" flag to run in the Ladok test environment.
Output: outputs a file (`users_programs-instance-COURSE_INSTANCE.xlsx`) containing a spreadsheet of the users' information
```
# for II2202 the P1 instance in 2019 the course instance is 50287
ladok3_course_instance_to_spreadsheet.py II2202 50287
```
or
```
# Canvas course_id for II2202 in P1 is 20979
ladok3_course_instance_to_spreadsheet.py 20979
```
or
```
# P1P2 is a nickname on a dashboard card for II2202 during P1 and P2
./ladok3_course_instance_to_spreadsheet.py P1P2
```
## `canvas_students_missing_integration_ids.py`
Purpose: Use the data in a Canvas course room to create a spreadsheet of students in the course who are missing an integration ID.
Input:
```
canvas_students_missing_integration_ids.py canvas_course_id
```
Output: outputs a file (`users_without_integration_ids-COURSE_ID.xlsx`) containing a spreadsheet of the users' information
## `cl_user_info.py`
Purpose: Use the data in a Canvas course room together with the data from Ladok3 to find information about a user.
Input:
```
cl_user_info.py Canvas_user_id|KTHID|Ladok_id [course_id]
```
The course_id can be a Canvas course_id **or** if you have dashboard cards, you can specific a course code, a nickname, unique part of the short name or original course name.
Add the "-k" or "--kthid" flag to get the KTHID (i.e., the `sis_user_id`); to get it you need to specify a course_id for a course (where this user is a teacher or student) on the command line.
Add the "-T" flag to run in the Ladok test environment.
If you know the Ladok_id, i.e., the integration_id - then you do not need to specify a course_id.
The program can also take an argument in the form https://canvas.kth.se/courses/course_id/users/user_id
- this is the URL when you are on a user's page in a course.
Output:\
from Canvas: sortable name, user_id, and integration_id\
if you specified a course_id, you will also get KTHID and login_id\
from Ladok: pnr (personnumber) and [program_code, program_name, specialization/track code, admissions info]
| text/markdown | Daniel Bosk | daniel@bosk.se | Daniel Bosk | dbosk@kth.se | null | ladok3, ladok | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Environment :: Console",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Topic :: Utilities"
] | [] | null | null | <4.0,>=3.8 | [] | [] | [] | [
"appdirs<2.0.0,>=1.4.4",
"argcomplete<3.0.0,>=2.0.0",
"cachetools<6.0.0,>=5.2.0",
"cryptography<42.0.0,>=41.0.3",
"keyring<25.0.0,>=24.2.0",
"requests<3.0.0,>=2.31.0",
"urllib3>=2.0",
"weblogin<2.0,>=1.13"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/dbosk/ladok3/issues",
"Documentation, https://github.com/dbosk/ladok3/releases",
"Repository, https://github.com/dbosk/ladok3"
] | poetry/2.3.1 CPython/3.13.7 Linux/6.17.0-14-generic | 2026-02-18T13:32:16.606486 | ladok3-5.10-py3-none-any.whl | 2,989,787 | 32/fe/b98bd5cd78c8e40514518ee00bc1b65c8f0942cb4321984e94a428cac3f3/ladok3-5.10-py3-none-any.whl | py3 | bdist_wheel | null | false | efae1abccfc8ff0c380a77cdb2e33c7f | d91c1026b7094d7bcc1c0d04ecbade596165ee092f3b76bddc6352a7384a35d5 | 32feb98bd5cd78c8e40514518ee00bc1b65c8f0942cb4321984e94a428cac3f3 | MIT | [
"LICENSE"
] | 241 |
2.4 | Qwodel | 0.0.12 | Production-grade model quantization SDK for enterprise custom models (AWQ, GGUF, and CoreML) | # Qwodel - Production-Grade Model Quantization
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/astral-sh/ruff)
**Qwodel** is a production-ready Python package for model quantization across multiple backends (AWQ, GGUF, CoreML). It provides a unified, intuitive API for quantizing large language models with minimal code.
## Features
- **Unified API** - Simple interface across all quantization backends
- **Multiple Backends** - AWQ (GPU), GGUF (CPU), CoreML (Apple devices)
- **Optional Dependencies** - Install only what you need
- **CLI & Python API** - Use via command line or programmatically
- **Type Safe** - Full type hints and mypy validation
- **Well Documented** - Comprehensive docs with examples
## Quick Start
### Installation
### Quick Install (All Backends)
```bash
pip install qwodel[all]
```
This installs **all backends** (GGUF, AWQ, CoreML) with PyTorch 2.1.2 (CPU version).
### GPU Support (for AWQ only)
If you need **GPU quantization with AWQ**, install PyTorch with CUDA first:
```bash
# 1. Install PyTorch with CUDA 12.1
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu121
# 2. Install qwodel
pip install qwodel[all]
```
> **Note**: GGUF and CoreML work perfectly fine with CPU-only PyTorch!
### Individual Backends
```bash
# GGUF only (CPU quantization - most popular!)
pip install qwodel[gguf]
# AWQ only (GPU quantization)
pip install qwodel[awq]
# CoreML only (Apple devices)
pip install qwodel[coreml]
```
### Local Development
```bash
# Clone and install locally
cd /path/to/qwodel
pip install -e .[all]
```
### Python API
```python
from qwodel import Quantizer
# Create quantizer
quantizer = Quantizer(
backend="gguf",
model_path="meta-llama/Llama-2-7b-hf",
output_dir="./quantized"
)
# Quantize model
output_path = quantizer.quantize(format="Q4_K_M")
print(f"Quantized model saved to: {output_path}")
```
### CLI
```bash
# Quantize a model
qwodel quantize \
--backend gguf \
--format Q4_K_M \
--model meta-llama/Llama-2-7b-hf \
--output ./quantized
# List available formats
qwodel list-formats --backend gguf
```
## Supported Backends
### GGUF (CPU Quantization)
- **Use Case**: CPU inference, broad compatibility
- **Formats**: Q4_K_M, Q8_0, Q2_K, Q5_K_M, and more
- **Best For**: Most users, CPU-based deployment
### AWQ (GPU Quantization)
- **Use Case**: NVIDIA GPU inference
- **Formats**: INT4
- **Best For**: GPU deployments, maximum speed
- **Requires**: CUDA 12.1+
### CoreML (Apple Devices)
- **Use Case**: iOS, macOS, iPadOS deployment
- **Formats**: FLOAT16, INT8, INT4
- **Best For**: Apple device deployment
## Examples
### Batch Processing
```python
from qwodel import quantize
models = ["meta-llama/Llama-2-7b-hf", "meta-llama/Llama-2-13b-hf"]
for model in models:
quantize(
model_path=model,
backend="gguf",
format="Q4_K_M",
output_dir="./quantized"
)
```
### Custom Progress Callback
```python
from qwodel import Quantizer
def progress_handler(progress: int, stage: str, message: str):
print(f"[{progress}%] {stage}: {message}")
quantizer = Quantizer(
backend="gguf",
model_path="./my-model",
output_dir="./output",
progress_callback=progress_handler
)
quantizer.quantize(format="Q4_K_M")
```
## Documentation
- [API Reference](docs/API_REFERENCE.md)
- [CLI Reference](docs/CLI_REFERENCE.md)
- [Troubleshooting](docs/TROUBLESHOOTING.md)
## Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
MIT License - see [LICENSE](LICENSE) for details.
## Acknowledgments
Qwodel builds upon the excellent work of:
- [llama.cpp](https://github.com/ggerganov/llama.cpp) for GGUF quantization
- [llm-compressor](https://github.com/vllm-project/llm-compressor) for AWQ quantization
- [CoreMLTools](https://github.com/apple/coremltools) for CoreML conversion
| text/markdown | Qwodel | null | Qwodel Contributors | null | MIT | quantization, model-compression, llm, awq, gguf, coreml, machine-learning, enterprise | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Progra... | [] | null | null | >=3.10 | [] | [] | [] | [
"transformers>=4.51.3",
"huggingface_hub>=0.20.0",
"safetensors>=0.4.0",
"numpy<2.0,>=1.24.0",
"click>=8.0.0",
"rich>=13.0.0",
"requests>=2.31.0",
"tiktoken>=0.5.0",
"sentencepiece>=0.1.99",
"torch>=2.10.0; extra == \"awq\"",
"torchvision>=0.25.0; extra == \"awq\"",
"torchaudio>=2.10.0; extra ... | [] | [] | [] | [
"Homepage, https://github.com/YOUR_ORG/qwodel",
"Documentation, https://qwodel.readthedocs.io",
"Repository, https://github.com/YOUR_ORG/qwodel",
"Issues, https://github.com/YOUR_ORG/qwodel/issues",
"Changelog, https://github.com/YOUR_ORG/qwodel/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-18T13:31:30.796475 | qwodel-0.0.12.tar.gz | 208,935 | 9f/22/2a74e9df7a72d6defa64f4ea9fd975200f7778b25cae06838dfef57b6b78/qwodel-0.0.12.tar.gz | source | sdist | null | false | d2843c13f56ab256ab0571fa54afe8c4 | 1943d31ea0c18fcb497c3469ba050a0d48d9e8a2caf8f9e40cd7ef2449a7586e | 9f222a74e9df7a72d6defa64f4ea9fd975200f7778b25cae06838dfef57b6b78 | null | [
"LICENSE"
] | 0 |
2.4 | orion-cache | 0.1.7 | Shared Redis cache decorator for internal Python services — write once, import everywhere. | # orion-cache
> Shared Redis cache decorator for Python services — write once, import everywhere.
[](https://pypi.org/project/orion-cache/)
[](https://pypi.org/project/orion-cache/)
[](https://opensource.org/licenses/MIT)
---
## What is it?
`orion-cache` is a lightweight Python library that wraps Redis caching behind a simple decorator. Slap `@redis_cache()` on any function and its return value is automatically cached in Redis — no boilerplate, no repeated logic across services.
Built for multiple services that query the same database and want to share cached results without duplicating caching code.
---
## Installation
```bash
pip install orion-cache
```
---
## Quick Start
```python
from orion_cache import redis_cache, clear_all_caches
@redis_cache(ttl=300)
def get_orders(customer):
return db.query("SELECT * FROM orders WHERE customer = ?", customer)
@redis_cache() # uses default TTL (300s)
def get_all_products():
return db.query("SELECT * FROM products")
```
First call hits the database and caches the result. Every call after that is served from Redis until the TTL expires.
---
## Clear Cache
```python
# clear everything
clear_all_caches()
# clear only keys matching a pattern
clear_all_caches(pattern="get_orders:*")
```
---
## Configuration
`orion-cache` is configured via environment variables. Create a `.env` file in your project root or set them in your environment directly.
| Variable | Default | Description |
|---|---|---|
| `REDIS_HOST` | `localhost` | Redis server hostname |
| `REDIS_PORT` | `6379` | Redis server port |
| `REDIS_PASSWORD` | _(none)_ | Redis password |
| `REDIS_DB` | `0` | Redis database index |
| `CACHE_DEFAULT_TTL` | `300` | Default TTL in seconds |
| `REDIS_URL` | _(auto-built)_ | Full Redis URL — overrides all above if set |
**.env example:**
```env
REDIS_HOST=your-redis-host
REDIS_PORT=6379
REDIS_PASSWORD=secret
CACHE_DEFAULT_TTL=300
```
Or use a full URL:
```env
REDIS_URL=redis://:secret@your-redis-host:6379/0
```
In production (Docker, k8s), set real environment variables instead of a `.env` file — the library will pick them up automatically.
---
## Sharing Cache Across Services
If multiple services point at the same Redis instance and use the same function names, they will automatically share cached results — no extra configuration needed.
If two services have different functions with the same name that do different things, use descriptive function names to avoid collisions:
```python
# good
def orders_get_all(): ...
def products_get_all(): ...
# risky if shared Redis
def get_all(): ...
```
---
## Behavior
- **Cache miss:** calls the real function, stores result in Redis, returns result
- **Cache hit:** returns cached result directly, function is never called
- **Redis down:** fails silently, falls through to the real function — your app keeps working
- **Non-serializable types** (e.g. `datetime`, `UUID`): automatically converted to string via `default=str`
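The behavior above can be sketched with a plain dict standing in for Redis. This is illustrative only, not the library's actual implementation; the real key scheme, serialization, and TTL handling (via the Redis client) may differ:

```python
import functools
import json

_store = {}  # stand-in for Redis; the real library uses a Redis client

def redis_cache(ttl=300):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Key built from the function name plus its arguments, matching
            # the "func_name:*" patterns accepted by clear_all_caches().
            key = f"{func.__name__}:{json.dumps([args, kwargs], default=str)}"
            if key in _store:                 # cache hit: function never called
                return json.loads(_store[key])
            result = func(*args, **kwargs)    # cache miss: call through
            # default=str handles datetime, UUID, etc.; with real Redis the
            # ttl would be passed along (e.g. SETEX) so the key expires.
            _store[key] = json.dumps(result, default=str)
            return result
        return wrapper
    return decorator
```

Note how a second call with the same arguments never reaches the wrapped function, which is exactly the cache-hit behavior described above.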
---
## Requirements
- Python 3.9+
- Redis 5+
---
## Contributing
Contributions are welcome!
1. Fork the repo and clone it locally
2. Create a virtual environment and install in editable mode:
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
```
3. Make your changes
4. Run tests before submitting:
```bash
pytest
```
5. Open a pull request against `main`
Please keep PRs focused — one feature or fix per PR.
For bugs or feature requests, open an issue on [GitHub](https://github.com/aliinreallife/orion-cache/issues).
---
## License
MIT © [Ali Rashidi](mailto:aliinreallifee@gmail.com)
| text/markdown | null | Ali Rashidi <aliinreallifee@gmail.com> | null | null | MIT | redis, cache, decorator, caching | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"License :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"redis>=5.0.0",
"python-dotenv>=1.0.0",
"pytest>=8.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/aliinreallife/orion-cache",
"Bug Tracker, https://github.com/aliinreallife/orion-cache/issues",
"Changelog, https://github.com/aliinreallife/orion-cache/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:30:42.495133 | orion_cache-0.1.7.tar.gz | 5,714 | 60/38/7d79d7eed242a3f3a182234a91baf27a30afa8ad642f1b72d6082ae58abc/orion_cache-0.1.7.tar.gz | source | sdist | null | false | f8e561b269b9baf50c40464c49446e35 | 41860d38e82abfa420931482c8d5d403e2ab5f99f288f64fa4e74918f29156da | 60387d79d7eed242a3f3a182234a91baf27a30afa8ad642f1b72d6082ae58abc | null | [
"LICENSE"
] | 216 |
2.3 | vinted-scraper | 3.0.0 | A very simple Python package for scraping Vinted. Supports both synchronous and asynchronous operations with automatic cookie management and typed responses. | # Vinted Scraper
[](https://pypi.org/project/vinted_scraper/)
[](https://pypi.org/project/vinted_scraper/)
[](https://codecov.io/gh/Giglium/vinted_scraper)
[](https://app.codacy.com/gh/Giglium/vinted_scraper?utm_source=github.com&utm_medium=referral&utm_content=Giglium/vinted_scraper&utm_campaign=Badge_Grade)
[](https://github.com/Giglium/vinted_scraper/blob/main/LICENSE)
[](https://app.fossa.com/projects/git%2Bgithub.com%2FGiglium%2Fvinted_scraper?ref=badge_shield)
A very simple Python package for scraping Vinted. Supports both synchronous and asynchronous operations with automatic cookie management and typed responses.
📖 **[Full Documentation](https://giglium.github.io/vinted_scraper/vinted_scraper.html)** | 💡 **[Examples](https://github.com/Giglium/vinted_scraper/tree/main/examples)** | 📝 **[Changelog](https://github.com/Giglium/vinted_scraper/releases)**
## Installation
Install using pip:
```shell
pip install vinted_scraper
```
## Functions
The package offers the following methods:
<details>
<summary><code>search</code> - <code>Gets all items from the listing page based on search parameters.</code></summary>
**Parameters**
> | name | type | data type | description |
> | ------ | -------- | --------- | ---------------------------------------------- |
> | params | optional | Dict | Query parameters like the pagination and so on |
**Returns:** `List[VintedItem]` (VintedScraper) or `Dict[str, Any]` (VintedWrapper)
</details>
<details>
<summary><code>item</code> - <code>Gets detailed information about a specific item and its seller.</code></summary>
> It returns a 403 error after a few uses. See [#59](https://github.com/Giglium/vinted_scraper/issues/59).
**Parameters**
> | name | type | data type | description |
> | ------ | -------- | --------- | --------------------------------------------- |
> | id | required | str | The unique identifier of the item to retrieve |
> | params | optional | Dict | Additional query parameters (undocumented; may not be supported) |
**Returns:** `VintedItem` (VintedScraper) or `Dict[str, Any]` (VintedWrapper)
</details>
<details>
<summary><code>curl</code> - <code>Perform an HTTP GET request to the given endpoint.</code></summary>
**Parameters**
> | name | type | data type | description |
> | -------- | -------- | --------- | ---------------------------------------------- |
> | endpoint | required | str | The endpoint to make the request to |
> | params | optional | Dict | Query parameters like the pagination and so on |
**Returns:** `VintedJsonModel` (VintedScraper) or `Dict[str, Any]` (VintedWrapper)
</details>
## Usage
```python
from vinted_scraper import VintedScraper
scraper = VintedScraper("https://www.vinted.com")
items = scraper.search({"search_text": "board games"})
for item in items:
print(f"{item.title} - {item.price}")
```
> Check out the [examples](https://github.com/Giglium/vinted_scraper/tree/main/examples) for more!
## Debugging
To enable debug logging for troubleshooting:
```python
import logging
# Configure logging BEFORE importing vinted_scraper
logging.basicConfig(
level=logging.DEBUG,
format="%(levelname)s:%(name)s:%(message)s"
)
from vinted_scraper import VintedScraper
scraper = VintedScraper("https://www.vinted.com")
scraper.search({"search_text": "board games"})
```
<details>
<summary>Debug output (click to expand)</summary>
```bash
DEBUG:vinted_scraper._vinted_wrapper:Initializing VintedScraper(baseurl=https://www.vinted.com, user_agent=None, session_cookie=auto-fetch, config=None)
DEBUG:vinted_scraper._vinted_wrapper:Refreshing session cookie
DEBUG:vinted_scraper._vinted_wrapper:Cookie fetch attempt 1/3
DEBUG:vinted_scraper._vinted_wrapper:Session cookie fetched successfully: eyJraWQiOiJFNTdZZHJ1...
DEBUG:vinted_scraper._vinted_wrapper:Calling search() with params: {'search_text': 'board games'}
DEBUG:vinted_scraper._vinted_wrapper:API Request: GET /api/v2/catalog/items with params {'search_text': 'board games'}
DEBUG:vinted_scraper._vinted_wrapper:API Response: /api/v2/catalog/items - Status: 200
```
</details>
### Common Issues
- **403 Forbidden Error**: The `item()` method frequently returns 403 errors ([#59](https://github.com/Giglium/vinted_scraper/issues/59)).
- **Cookie Fetch Failed**: If cookies cannot be fetched:
- Verify the base URL is correct
  - Check your internet connection; some VPNs are blocked. Try fetching the cookie manually by running:
```bash
curl -v -c - -L "<base-url>" | grep access_token_web
```
## License
This project is licensed under the MIT License - see
the [LICENSE](https://github.com/Giglium/vinted_scraper/blob/main/LICENSE) file for details.
[](https://app.fossa.com/projects/git%2Bgithub.com%2FGiglium%2Fvinted_scraper?ref=badge_large)
| text/markdown | Giglium | null | null | null | MIT License
Copyright (c) 2023 Migliorin Francesco Antonio
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language... | [] | null | null | >=3.7 | [] | [] | [] | [
"httpx[brotli]>=0.20.0; python_full_version >= \"3.7\""
] | [] | [] | [] | [
"Changelog, https://github.com/Giglium/vinted_scraper/releases",
"Documentation, https://github.com/Giglium/vinted_scraper",
"Homepage, https://github.com/Giglium/vinted_scraper",
"Issues, https://github.com/Giglium/vinted_scraper/issues",
"Source, https://github.com/Giglium/vinted_scraper"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:29:38.473956 | vinted_scraper-3.0.0.tar.gz | 14,157 | 01/52/4bdd318bdeee20f53fb9b010b9998998c8c27a1c2e587012618e09a348ac/vinted_scraper-3.0.0.tar.gz | source | sdist | null | false | ad216831723e71d9de9773d4373f4ed7 | d5bdf06cd7395606c48e80bfa4c21cde7da12d5f280f7d5debd1cd24f9f71562 | 01524bdd318bdeee20f53fb9b010b9998998c8c27a1c2e587012618e09a348ac | null | [] | 495 |
2.4 | anifetch-cli | 1.0.1 | Animated terminal fetch with video and audio support. | 
# Anifetch - Neofetch but animated.
This is a small tool built with fastfetch/neofetch, ffmpeg and chafa. It allows you to use fastfetch or neofetch while having animations.
## Installation
### Installation for Linux
Recommended Python version: 3.11 and later. If you use NixOS refer to [Installation for NixOS](#installation-for-nixos).
Run this in the terminal.
```bash
curl https://raw.githubusercontent.com/Notenlish/anifetch/refs/heads/main/install.sh | bash
```
After installation, run this to test if anifetch was installed correctly:
```bash
anifetch example.mp4
```
Please read our [User guide](#user-guide) for more info on how to use anifetch.
---
### Installation for Windows (Winget or Scoop)
Check whether you have winget installed by running `winget` in the Windows terminal. If you don't have it, install it [here](https://github.com/microsoft/winget-cli/?tab=readme-ov-file#installing-the-client). If you prefer, you can use [Scoop](https://scoop.sh/) instead; just replace the `winget` part with `scoop`.
Run this in the terminal after verifying winget works:
```
winget install chafa ffmpeg fastfetch
```
You can install neofetch too, but it is deprecated and not recommended. To install it anyway, run: `winget install neofetch`
After installing the necessary dependencies using winget/scoop, install anifetch via pip. You can install it via pipx too.
```pip install anifetch-cli```
> [!WARNING]
> **Do not** install `anifetch` on pypi, it is not related with this project. Install `anifetch-cli`.
### Installation for MacOS with Homebrew
Install homebrew if you haven't installed it already by following the guide [here](https://brew.sh/).
Run this in the terminal after verifying homebrew is installed:
```
brew install chafa ffmpeg fastfetch
```
After installing the necessary dependencies, install anifetch via pip. You can install it via pipx too.
```pip install anifetch-cli```
> [!WARNING]
> **Do not** install `anifetch` on pypi, it is not related with this project. Install `anifetch-cli`.
### Manual Installation
You need the following tools installed on your system:
- `chafa`
- Debian/Ubuntu: `sudo apt install chafa`
- [Other distros – Download Instructions](https://hpjansson.org/chafa/download/)
- `ffmpeg` (for video/audio playback)
- Debian/Ubuntu: `sudo apt install ffmpeg`
- [Other systems – Download](https://www.ffmpeg.org/download.html)
- `fastfetch / neofetch` (Fastfetch is recommended)
- Debian/Ubuntu: `sudo apt install fastfetch`
- [Other systems - Instructions for Fastfetch](https://github.com/fastfetch-cli/fastfetch?tab=readme-ov-file#installation)
- Neofetch installation _(Not recommended)_ can be found [here](https://github.com/dylanaraps/neofetch/wiki/Installation)
🔧 Make sure `pipx` is installed:
```bash
sudo apt install pipx
pipx ensurepath
```
and then:
```bash
pipx install git+https://github.com/Notenlish/anifetch.git
```
This installs `anifetch` in an isolated environment, keeping your system Python clean.
You can then run the `anifetch` command directly in your terminal.
Since pipx installs packages in an isolated environment, you won't have to worry about dependency conflicts or polluting your global python environment. `anifetch` will behave just like a native cli tool. You can upgrade your installation with `pipx upgrade anifetch`
---
### Installation for NixOS
#### ❄️ As a flake:
Add the anifetch repo as a flake input:
```nix
{
inputs = {
anifetch = {
url = "github:Notenlish/anifetch";
inputs.nixpkgs.follows = "nixpkgs";
};
};
}
```
Remember to add:
```nix
specialArgs = {inherit inputs;};
```
to your nixos configuration, like I've done here on my system:
```nix
nixosConfigurations = {
Enlil = nixpkgs.lib.nixosSystem {
specialArgs = {inherit inputs outputs;};
```
#### ❄️ As a package:
Add anifetch to your packages list like so:
```nix
{inputs, pkgs, ...}: {
environment.systemPackages = with pkgs; [
inputs.anifetch.packages.${pkgs.system}.default
fastfetch # Choose either fastfetch or neofetch to run anifetch with
neofetch
];
}
```
#### ❄️ As an overlay:
Add the overlay to nixpkgs overlays, then add the package to your package list as you would a package from the normal nixpkgs repo.
```nix
{inputs, pkgs, ...}: {
nixpkgs = {
overlays = [
inputs.anifetch.overlays.anifetch
];
};
environment.systemPackages = with pkgs; [
anifetch
fastfetch # Choose either fastfetch or neofetch to run anifetch with
neofetch
];
}
```
The Nix package contains all the dependencies in a wrapper script for the application aside from fastfetch or neofetch, so you should only need to add one of those to your package list as well.
After you've done these steps, rebuild your system.
---
### Developer Installation (for contributors):
```bash
git clone https://github.com/Notenlish/anifetch.git
cd anifetch
python3 -m venv venv
source venv/bin/activate
pip install -e .
```
> On Windows, activate the venv with `venv\Scripts\activate` instead. Also, on Windows you should use `py` instead of `python3`.
This installs `anifetch` in editable mode within a local virtual environment for development.
You can then run the program in two ways:
- As a CLI: `anifetch`
- Or as a module: `python3 -m anifetch` (useful for debugging or internal testing)
> Please avoid using `pip install` outside a virtual environment on Linux. This is restricted by [PEP 668](https://peps.python.org/pep-0668/) to protect the system Python.
On Nix you can run:
```bash
nix develop
pip install -e .
```
inside the anifetch dir after cloning the repo. This creates a python venv you can re-enter by running `nix develop` inside the project dir.
## User Guide
You don't need to configure anything for `fastfetch` or `neofetch`. If they already work on your machine, `anifetch` will detect and use them automatically. Please note that at least one of these must be installed, otherwise anifetch won't work. **By default, `anifetch` will use fastfetch**.
> We don't recommend using neofetch as it is archived. **To use neofetch**, you must append `-nf` to the anifetch command. For some distros you may also need to append `--force`, since neofetch is deprecated.
Simply `cd` to the directory your video file is located in and do `anifetch [path_to_video_file]`. Both relative and absolute paths are supported. Anifetch is packaged with an `example.mp4` video by default. You can use that to test anifetch.
Any video file you give to anifetch will be stored in the `~/.local/share/anifetch/assets` folder on Linux and the `C:\\Users\\[Username]\\AppData\\Local\\anifetch\\anifetch\\assets` folder on Windows. After running `anifetch` with a video file once, you can reuse it from any location by just giving its filename, since the file has been saved in `assets`.
### Example usage:
```bash
anifetch video.mp4 -W 40 -H 20 -ca "--symbols wide --fg-only"
```
_Note : by default, the video `example.mp4` can directly be used as an example._
### Optional arguments:
- `-s` / `--sound`: Plays sound along with the video. If you provide a sound file, it will use it; otherwise it will use ffmpeg to extract audio from the video.
- `-r` / `--framerate`: Framerate to use when extracting frames from ffmpeg.
- `-W` / `--width`: video width
- `-H` / `--height`: video height (may be automatically adjusted to match the width)
- `-ca` / `--chafa-arguments`: extra arguments to pass to `chafa`. For an example, try adding this: `-ca "--symbols wide --fg-only"` this makes the output use Japanese characters.
- `-C` / `--center`: centers the terminal animation vertically
- `--cleanup`: Clears the screen on program exit.
- `-nf` / `--neofetch`: uses `neofetch` instead of `fastfetch`
- `-fr` / `--force-render`: Forcefully re-renders the animation, ignoring the cache. Useful if the cache is broken or the contents of the video file have changed.
- `-i` / `--interval`: Makes anifetch refresh the fetch information over time; sets the fetch refresh interval in seconds. Default is -1 (never).
- `-b` / `--benchmark`: For testing, prints how long it took to process in seconds.
- `--force`: Add this argument if you want to use neofetch even if it is deprecated on your system.
- `--chroma`: Chroma-keys a hexadecimal color out of the video using ffmpeg. Syntax: `--chroma <hex-color>:<similarity>:<blend>`
- `--quality`: Changes the output quality of ffmpeg when extracting frames. This doesn't have much effect on quality or speed in my testing, so you shouldn't need to change it. 2 is the highest quality, 10 the lowest.
- `--loop`: Determines how many times the animation should loop. Default is -1 (loop forever).
- `--no-key-exit`: Don't exit anifetch when user presses a key.
### Cached files:
Anifetch automatically caches rendered animations to speed up future runs. Each unique combination of video and render options generates a cache stored in `~/.local/share/anifetch/`, organized by hash. This includes frames, output, and audio.
Cache-related commands:
`anifetch --cache-list` — View all cached configurations, numbered in order.
`anifetch --cache-delete <number>` — Delete a specific cache.
`anifetch --clear` — Delete all cached files.
Note that modifying the content of a video file but keeping the same name makes Anifetch still use the old cache. In that case, use `--force-render` or `-fr` to bypass the cache and generate a new version.
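Deriving a per-configuration cache name can be illustrated with stdlib `hashlib`. This is a sketch only; anifetch's actual key scheme and the exact fields it hashes (e.g. chroma, quality) are assumptions here:

```python
import hashlib

def cache_key(video_name, width, height, framerate, chafa_args=""):
    """Derive a deterministic directory name from the render options,
    so each unique video + options combination gets its own cache."""
    config = f"{video_name}|{width}x{height}|{framerate}|{chafa_args}"
    return hashlib.sha256(config.encode()).hexdigest()[:16]
```

Because the hash covers only the filename and options, not the file contents, a video edited in place under the same name maps to the same cache entry, which is why `--force-render` is needed in that case.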
For full help:
```bash
anifetch --help
```
### Auto start on terminal start.
Add this to the end of `.bashrc`:
```bash
anifetch [video_file] [other_args_if_needed]
```
## Customizing Fastfetch/Neofetch output
For customizing fastfetch/neofetch output, you can check out these pages:
- [Fastfetch Customization](https://github.com/fastfetch-cli/fastfetch/wiki/Configuration)
- [Neofetch Customization](https://github.com/dylanaraps/neofetch/wiki/Customizing-Info)
## 📊 Benchmarks
Here's the benchmark from running each CLI 10 times. Tested on Windows 11 with an Intel i5-12500H processor.
| CLI | Time Taken(total) | Time Taken (avg) |
| ------------------------------ | ----------------- | ---------------- |
| fastfetch | 0.27 seconds | 0.03 seconds |
| anifetch (nocache) (fastfetch) | 20.18 seconds | 2.02 seconds |
| anifetch (cached) (fastfetch) | 0.78 seconds | 0.08 seconds |
As can be seen, anifetch is quite fast once the animation is cached.
## Troubleshooting
Make sure to install the dependencies listed under [Manual Installation](#manual-installation). If ffmpeg throws an error saying `libxml2.so.16: cannot open shared object file: No such file or directory` then you must install `libxml2`. Here's a comment showing how to install it for Arch: [solution](https://github.com/Notenlish/anifetch/issues/24#issuecomment-2920189918).
If weird characters are appearing in your terminal, then your terminal's font probably can't render some characters. Consider installing [nerdfonts](https://www.nerdfonts.com/).
## Notes
Anifetch caches the animation so that it doesn't need to render it again when you run it with the same file. However, if the name of the file is the same but its contents have changed, it won't re-render. In that case, add `--force-render` as an argument so that it re-renders; you only have to do this once after changing the file contents.
Also, ffmpeg can generate the same image for 2 consecutive frames, which may make it appear like it's stuttering. Try changing the framerate if that happens, or just increase the playback rate.
Currently only the `symbols` format of chafa is supported, formats like kitty, iterm etc. are not supported. If you try to tell chafa to use iterm, kitty etc. it will just override your format with `symbols` mode.
## What's Next
- [ ] Support different formats like iterm, kitty, sixel etc.
- [ ] Allow the user to provide their own premade frames in a folder instead of a video.
- [ ] Update the animated logo on the readme so that its resolution is smaller + each individual symbol is bigger.
## Dev commands
Devs can use additional tools in the `tools` folder in order to test new features from Anifetch.
## Credits
Neofetch: [Neofetch](https://github.com/dylanaraps/neofetch)
Fastfetch: [Fastfetch](https://github.com/fastfetch-cli/fastfetch)
I got the inspiration for Anifetch from Pewdiepie's Linux video. [Video](https://youtu.be/pVI_smLgTY0?t=879)
I don't remember where I got the example.mp4 video from; if you know the source or license, please open an issue. If you are the license owner and want this video removed, please open an issue and I will remove it.
## Star History
[](https://www.star-history.com/#Notenlish/anifetch&Date)
| text/markdown | null | Notenlish <notenlish@gmail.com> | Immelancholy | Gallophostrix <gallophostrix@gmail.com>, Notenlish <notenlish@gmail.com> | MIT | fetch, terminal, cli, animated, neofetch, fastfetch, anifetch | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Topic :: Terminals",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Operating System :: MacOS"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"platformdirs",
"wcwidth",
"rich",
"pynput"
] | [] | [] | [] | [
"Homepage, https://github.com/Notenlish/anifetch",
"Issues, https://github.com/Notenlish/anifetch/issues"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-18T13:28:08.146239 | anifetch_cli-1.0.1.tar.gz | 13,069,861 | 53/52/addd66fea4c0f169cb2315734e406122054bf627663e49e561f14b6feb0e/anifetch_cli-1.0.1.tar.gz | source | sdist | null | false | 48d6f7a4ea7a26b5043c966bd6e40f4a | 934549ec7ef3bac86891aed2d6e3e8e3cb2e59593fd03458befcb42200a4c620 | 5352addd66fea4c0f169cb2315734e406122054bf627663e49e561f14b6feb0e | null | [
"LICENSE"
] | 242 |
2.4 | digiflot | 1.1.0.post1.dev0 | A digital and modular platform to assist on laboratory flotation experiments and collect relevant information. | # DigiFlot
A digital and modular platform to assist in laboratory flotation experiments and collect relevant information from different sensors in a structured manner.
<image src="https://github.com/pereirageomet/digiflot/blob/main/docs/run.png" width="450"/> <image src="https://github.com/pereirageomet/digiflot/blob/main/docs/demo.gif" width="300"/>
# Wiki
The project wiki describes DigiFlot, including its structure, how to install it, and how to use it in the lab.
* Important: once installed, launch DigiFlot with:
```bash
python -m digiflot.DigiFlot
```
# Necessary parts
[Here you will find a list of the pieces you need or might want for your system.](docs/DigiFlot-pieces.csv)
Please keep in mind that **we have no affiliation with the vendors suggested there, and we do not receive any financial support from them**. Our only objective with this list is to simplify setting up the system in your laboratory.
All pieces listed there are already integrated into the DigiFlot system.
# Acknowledgements
This laboratory assistant has, so far, been programmed at the [Helmholtz Institute Freiberg for Resource Technology](https://hzdr.de/hif), by Lucas Pereira & Christian Schmidt.
The developers are extremely grateful to all test users who battled through the bugs of the preliminary versions while also providing much valuable feedback. To name a few: Borhane Ben Said, Ali Hassan, Gülce Öktem, Aliza Salces, and Klaus Sygusch.
Many thanks also to Rocco Naumann, responsible for the safety of all of our sensors, who always engineers the best connectors for the laboratory assistant.
This project has been partially financed by the Federal Ministry of Research, Technology and Space through the WIR! Recomine funding scheme, project "WIR!-rECOmine – Digitalisierung und modellprädiktive Regelung komplexer Aufbreitungsprozesse mittels Sensorfusion und KI-gestützter Auswertung (03WIR1919B)".
# Reference
If you use this open source project, please cite it in your work as:
```
Pereira, L., Schmidt, C., Rudolph, M., 2025. DigiFlot: a modular laboratory assistant tailored for froth flotation experiments. https://doi.org/10.14278/rodare.3841
```
| text/markdown | Christian Schmidt | Lucas Pereira <l.pereira@hzdr.de> | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"python-vlc",
"pyzmq",
"paho-mqtt==1.6.1",
"psutil",
"bronkhorst-propar",
"minimalmodbus",
"kafka-python-ng",
"iai-gxipy",
"PyQt5",
"pillow",
"opencv-python-headless",
"pandas",
"picamera2; platform_machine == \"armv7l\" or platform_machine == \"aarch64\""
] | [] | [] | [] | [
"Homepage, https://github.com/pereirageomet/digiflot"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T13:28:04.222634 | digiflot-1.1.0.post1.dev0.tar.gz | 4,407,354 | 76/f8/8837af26547dcd5d94a6839846b324de75ef8f4e0880a3ec07c56d6740c9/digiflot-1.1.0.post1.dev0.tar.gz | source | sdist | null | false | 848972fbd5f2829b66446b422697655f | 4d6ba2032e36f7fa72414804a9bb590fec32bdf88d4251316558fa5d1fea2a0a | 76f88837af26547dcd5d94a6839846b324de75ef8f4e0880a3ec07c56d6740c9 | null | [
"LICENSE"
] | 233 |
2.4 | apheris-cli | 0.48.0 | Apheris CLI package for interaction with the Apheris product. | # Apheris CLI
The [Apheris](http://www.apheris.com) Command Line Interface (CLI) is a tool for Machine Learning Engineers and Data Scientists to define Federated computations, launch them and get the results through the Apheris product.
The CLI provides both terminal and Python interfaces to interact with the Apheris 3.0 platform. It can be used to create, activate and deactivate Compute Specs, and to submit and monitor compute jobs.
We recommend installing the CLI in a fresh virtual environment.
For a full guide to the CLI, please see the [Apheris CLI documentation](https://www.apheris.com/docs/gateway/latest/data-science-and-ml/apheris-cli-hello-world.html).
## Quickstart: Python API
```python
import apheris
# Login to apheris
>>> apheris.login()
Logging in to your company account...
Apheris:
Authenticating with Apheris Cloud Platform...
Please continue the authorization process in your browser.
Login was successful
# List the datasets to which you have access
>>> apheris.list_datasets()
+-----+---------------------------------------------------------+--------------+---------------------------+
| idx | dataset_id | organization | data custodian |
+-----+---------------------------------------------------------+--------------+---------------------------+
| 0 | cancer-medical-images_gateway-2_org-2 | Org 2 | Orsino Hoek |
| 1 | pneumonia-x-ray-images_gateway-2_org-2 | Org 2 | Orsino Hoek |
| 2 | covid-19-patients_gateway-1_org-1 | Org 1 | Agathe McFarland |
| 3 | medical-decathlon-task004-hippocampus-a_gateway-1_org-1 | Org 1 | Agathe McFarland |
| 4 | medical-decathlon-task004-hippocampus-b_gateway-2_org-2 | Org 2 | Orsino Hoek |
| ... | ... | ... | ... |
+-----+---------------------------------------------------------+--------------+---------------------------+
# List models available to you
>>> apheris.list_models()
+-----+---------------------------+-------------------------------------+
| id | name | version |
+-----+---------------------------+-------------------------------------+
| 0 | apheris-nnunet | u.v.w |
| 1 | apheris-statistics | x.y.z |
| ... | ... | ... |
+-----+---------------------------+-------------------------------------+
# List computations
>>> apheris.list_compute_specs()
+--------------------------------------+---------------------+------------------------------+
| ID | Created | Activation Status |
+--------------------------------------+---------------------+------------------------------+
| f20eba74-28d2-4458-aedb-72a983cb2a33 | 2025-05-20 13:37:59 | inactive.awaiting_activation |
| 29d542ed-d273-4176-8e3f-dfc70311cf32 | 2025-05-20 13:38:44 | inactive.shutdown |
| c4e3f12a-0b20-4475-9611-79846dcb23b6 | 2025-05-21 07:40:53 | inactive.shutdown |
| aae7bf0e-0568-4441-8d85-fdabc6343a4d | 2025-05-21 07:48:57 | inactive.shutdown |
| 67b76354-aae3-48cc-810a-fd79c1040cc3 | 2025-05-21 07:50:05 | inactive.shutdown |
| 70829d63-bb77-4ff0-a1a9-90273aa38792 | 2025-05-23 06:16:36 | inactive.shutdown |
| 41994296-be34-487c-893c-d183f7baeb99 | 2025-05-23 06:52:50 | inactive.shutdown |
| f1589f63-cfc9-4b1a-b985-ee959121c765 | 2025-05-23 07:17:11 | inactive.shutdown |
| 3640fed9-f1d3-43f8-9b6b-5cde345a5ed5 | 2025-05-26 11:56:14 | inactive.awaiting_activation |
| defe5013-2c73-4eb9-be52-1ae7aed841ff | 2025-05-26 12:00:59 | active.running |
+--------------------------------------+---------------------+------------------------------+
# Run a job in Apheris
>>> from aphcli.api import job
>>> job.run(
... datasets=[
... "medical-decathlon-task004-hippocampus-a_gateway-1_org-1",
... "medical-decathlon-task004-hippocampus-b_gateway-2_org-2"
... ],
... payload={"mode": "training", "model_configuration": "2d", "dataset_id": 4, "num_rounds": 1},
... model="apheris-nnunet",
... version="x.y.z"
...)
Job(duration='0:00:00', id=UUID('f77d5dc7-a2e7-4a2a-827d-49a2131b1ffe'), status='submitted', created_at=datetime.datetime(2025, 7, 8, 17, 17, 6, 897476), compute_spec_id=UUID('defe5013-2c73-4eb9-be52-1ae7aed841ff'))
# Logout of Apheris
>>> apheris.logout()
Logging out from Apheris Cloud Platform session
Successfully logged out
```
## Quickstart: CLI
Logging into Apheris:
```console
$ apheris login
Logging in to your company account...
Apheris:
Authenticating with Apheris Cloud Platform...
Please continue the authorization process in your browser.
Login was successful
You are logged in:
e-mail: your.name@your-company.com
organization: your_organisation
environment: your_environment
```
You can check your current login status.
```console
$ apheris login status
You are logged in:
e-mail: your.name@your-company.com
organization: your_organisation
environment: your_environment
```
When you are done with your work, it is recommended to log out.
```console
$ apheris logout
Logging out from Apheris Cloud Platform session
Logging out from Apheris Compute environments session
Successfully logged out
```
You can see the datasets to which you've been given access using the `datasets` command:
```console
$ apheris datasets list
+-----+---------------------------------------------------------+--------------+---------------------------+
| idx | dataset_id | organization | data custodian |
+-----+---------------------------------------------------------+--------------+---------------------------+
| 0 | cancer-medical-images_gateway-2_org-2 | Org 2 | Orsino Hoek |
| 1 | pneumonia-x-ray-images_gateway-2_org-2 | Org 2 | Orsino Hoek |
| 2 | covid-19-patients_gateway-1_org-1 | Org 1 | Agathe McFarland |
| 3 | medical-decathlon-task004-hippocampus-a_gateway-1_org-1 | Org 1 | Agathe McFarland |
| 4 | medical-decathlon-task004-hippocampus-b_gateway-2_org-2 | Org 2 | Orsino Hoek |
| ... | ... | ... | ... |
+-----+---------------------------------------------------------+--------------+---------------------------+
```
And you can see models using the `models` command:
```console
$ apheris models list
+-----+---------------------------+-------------------------------------+
| id | name | version |
+-----+---------------------------+-------------------------------------+
| 0 | apheris-nnunet | u.v.w |
| 1 | apheris-statistics | x.y.z |
| ... | ... | ... |
+-----+---------------------------+-------------------------------------+
```
You can schedule a job on Apheris using the `job` command:
```console
$ apheris job schedule \
--dataset_ids medical-decathlon-task004-hippocampus-a_gateway-1_org-1,medical-decathlon-task004-hippocampus-b_gateway-2_org-2 \
--model_id apheris-nnunet \
--model_version x.y.z \
--payload '{"mode": "training", "model_configuration": "2d", "dataset_id": 4, "num_rounds": 1}'
About to schedule job with parameters:
Dataset IDs: medical-decathlon-task004-hippocampus-a_gateway-1_org-1,medical-decathlon-task004-hippocampus-b_gateway-2_org-2
Model: apheris-nnunet:x.y.z
Payload: {"mode": "training", "model_configuration": "2d", "dataset_id": 4, "num_rounds": 1}
Resources:
Client: 1.0 CPU, 0 GPU, 2000 MB memory
Server: 1.0 CPU, 0 GPU, 2000 MB memory
Do you want to proceed? (y/N)
:y
The job was submitted! The job ID is d6f7b657-8b30-4636-8f4c-2d96678095ba
```
Check the status of a job:
```console
$ apheris job status
Using the cached `compute_spec_id` defe5013-2c73-4eb9-be52-1ae7aed841ff [2025-07-08 17:17:06].
Using the cached `job_id` f77d5dc7-a2e7-4a2a-827d-49a2131b1ffe [stored 2025-07-08 17:36:26].
status: running
```
Once a job is complete, you can download the results:
```console
$ apheris job download-results /path/to/store/results
Using the cached `compute_spec_id` defe5013-2c73-4eb9-be52-1ae7aed841ff [2025-07-08 17:17:06].
Using the cached `job_id` f77d5dc7-a2e7-4a2a-827d-49a2131b1ffe [stored 2025-07-08 17:36:26].
Successfully downloaded job outputs to /path/to/store/results
```
| text/markdown | Apheris | null | null | null | null | python, apheris, federated learning | [] | [] | null | null | null | [] | [] | [] | [
"apheris-auth==0.22.*",
"prettytable==3.17.0",
"typer==0.24.0",
"semver==3.0.4",
"urllib3==2.6.3"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:27:24.696552 | apheris_cli-0.48.0-py3-none-any.whl | 54,859 | 03/f0/32de6ca1a212b23fb91aac31fc1296d232487a7d65f364772c2d9d2fbfa1/apheris_cli-0.48.0-py3-none-any.whl | py3 | bdist_wheel | null | false | b59abd6d6a831620ba403ee6372897cd | b3784f3be38be2dd12e1bea956e6b375c3c4fc3756a5d9500cb08f9170329482 | 03f032de6ca1a212b23fb91aac31fc1296d232487a7d65f364772c2d9d2fbfa1 | null | [
"LICENSE"
] | 116 |
2.4 | OctoPrint | 1.11.7 | The snappy web interface for your 3D printer | <p align="center"><img src="https://octoprint.org/assets/img/logo.png" alt="OctoPrint's logo" /></p>
<h1 align="center">OctoPrint</h1>
<p align="center">
<img src="https://img.shields.io/github/v/release/OctoPrint/OctoPrint?logo=github&logoColor=white" alt="GitHub release"/>
<img src="https://img.shields.io/pypi/v/OctoPrint?logo=python&logoColor=white" alt="PyPI"/>
<img src="https://img.shields.io/github/actions/workflow/status/OctoPrint/OctoPrint/build.yml?branch=master" alt="Build status"/>
<a href="https://community.octoprint.org"><img src="https://img.shields.io/discourse/users?label=forum&logo=discourse&logoColor=white&server=https%3A%2F%2Fcommunity.octoprint.org" alt="Community Forum"/></a>
<a href="https://discord.octoprint.org"><img src="https://img.shields.io/discord/704958479194128507?label=discord&logo=discord&logoColor=white" alt="Discord"/></a>
<a href="https://octoprint.org/conduct/"><img src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg" alt="Contributor Covenant"/></a>
<a href="https://github.com/astral-sh/ruff"><img src="https://img.shields.io/badge/code%20style-ruff-261230" alt="Linting & formatting: ruff"/></a>
<a href="https://github.com/prettier/prettier"><img src="https://img.shields.io/badge/code_style-prettier-ff69b4.svg" alt="Code style: prettier"/></a>
<a href="https://github.com/pre-commit/pre-commit"><img src="https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white" alt="pre-commit"/></a>
</p>
OctoPrint provides a snappy web interface for controlling consumer 3D printers. It is Free Software
and released under the [GNU Affero General Public License V3](https://www.gnu.org/licenses/agpl-3.0.html)[^1].
Its website can be found at [octoprint.org](https://octoprint.org/?utm_source=github&utm_medium=readme).
The community forum is available at [community.octoprint.org](https://community.octoprint.org/?utm_source=github&utm_medium=readme). It also serves as a central knowledge base.
An invite to the Discord server can be found at [discord.octoprint.org](https://discord.octoprint.org).
The FAQ can be accessed by following [faq.octoprint.org](https://faq.octoprint.org/?utm_source=github&utm_medium=readme).
The documentation is located at [docs.octoprint.org](https://docs.octoprint.org).
The official plugin repository can be reached at [plugins.octoprint.org](https://plugins.octoprint.org/?utm_source=github&utm_medium=readme).
**OctoPrint's development wouldn't be possible without the [financial support by its community](https://octoprint.org/support-octoprint/?utm_source=github&utm_medium=readme).
If you enjoy OctoPrint, please consider becoming a regular supporter!**

You are currently looking at the source code repository of OctoPrint. If you already installed it
(e.g. by using the Raspberry Pi targeted distribution [OctoPi](https://github.com/guysoft/OctoPi)) and only
want to find out how to use it, [the documentation](https://docs.octoprint.org/) might be of more interest to you. You might also want to join
[the community forum at community.octoprint.org](https://community.octoprint.org) where other active users might be
able to help you with any questions you might have.
[^1]: Where another license applies to a specific file or folder, that is noted inside the file itself or a folder README. For licenses of both linked and
vendored third party dependencies, see also THIRDPARTYLICENSES.md.
## Contributing
Contributions of all kinds are welcome, not only in the form of code but also with regards to the
[official documentation](https://docs.octoprint.org/), debugging help
in the [bug tracker](https://github.com/OctoPrint/OctoPrint/issues), support of other users on
[the community forum at community.octoprint.org](https://community.octoprint.org) or
[the official discord at discord.octoprint.org](https://discord.octoprint.org)
and also [financially](https://octoprint.org/support-octoprint/?utm_source=github&utm_medium=readme).
If you think something is bad about OctoPrint or its documentation the way it is, please help
in any way to make it better instead of just complaining about it -- this is an Open Source Project
after all :)
For information about how to go about submitting bug reports or pull requests, please see the project's
[Contribution Guidelines](https://github.com/OctoPrint/OctoPrint/blob/master/CONTRIBUTING.md).
## Installation
Installation instructions for installing from source for different operating
systems can be found [on the forum](https://community.octoprint.org/tags/c/support/guides/15/setup).
If you want to run OctoPrint on a Raspberry Pi, you really should take a look at [OctoPi](https://github.com/guysoft/OctoPi)
which is a custom SD card image that includes OctoPrint plus dependencies.
The generic steps that should be done regardless of operating system
and runtime environment are the following (as a *regular
user*, please keep your hands *off* the `sudo` command here!). This assumes
you already have Python 3.7+, pip and virtualenv and their dependencies set up on your system:
1. Create a user-owned virtual environment: `virtualenv venv`. If you want to specify a specific python
to use instead of whatever version your system defaults to, you can also explicitly require that via the `--python`
parameter, e.g. `virtualenv --python=python3 venv`.
2. Install OctoPrint *into that virtual environment*: `./venv/bin/pip install OctoPrint`
You may then start the OctoPrint server via `/path/to/OctoPrint/venv/bin/octoprint`, see [Usage](#usage)
for details.
After installation, please make sure you follow the first-run wizard and set up
access control as necessary.
## Dependencies
OctoPrint depends on a few python modules to do its job. Those are automatically installed when installing
OctoPrint via `pip`.
OctoPrint currently supports Python 3.7, 3.8, 3.9, 3.10, 3.11, 3.12 and 3.13.
Support for Python 3.7 and 3.8 will be dropped with OctoPrint 1.12.0.
## Usage
Running the pip install via
    pip install OctoPrint
installs the `octoprint` script in your Python installation's scripts folder
(which, depending on whether you installed OctoPrint globally or into a virtual env, will be in your `PATH` or not). The
following usage examples assume that the `octoprint` script is on your `PATH`.
You can start the server via
    octoprint serve
By default it binds to all interfaces on port 5000 (so pointing your browser to `http://127.0.0.1:5000`
will do the trick). If you want to change that, use the additional command line parameters `host` and `port`,
which accept the host ip to bind to and the numeric port number respectively. If for example you want the server
to only listen on the local interface on port 8080, the command line would be
    octoprint serve --host=127.0.0.1 --port=8080
Alternatively, the host and port on which to bind can be defined via the config file.
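For example, a minimal `config.yaml` fragment setting the bind address and port might look like this (see the config.yaml reference in the docs for the authoritative list of options):

```yaml
# Fragment of config.yaml: make the server listen only on the
# local interface on port 8080
server:
  host: 127.0.0.1
  port: 8080
```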
If you want to run OctoPrint as a daemon (only supported on Linux), use
    octoprint daemon {start|stop|restart} [--pid PIDFILE]
If you do not supply a custom pidfile location via `--pid PIDFILE`, it will be created at `/tmp/octoprint.pid`.
You can also specify the config file or the base directory (for basing off the `uploads`, `timelapse` and `logs` folders),
e.g.:
    octoprint serve --config /path/to/another/config.yaml --basedir /path/to/my/basedir
To start OctoPrint in safe mode - which disables all third party plugins that do not come bundled with OctoPrint - use
the ``--safe`` flag:
    octoprint serve --safe
See `octoprint --help` for more information on the available command line parameters.
OctoPrint also ships with a `run` script in its source directory. You can invoke it to start the server. It
takes the same command line arguments as the `octoprint` script.
## Configuration
If not specified via the command line, the config file `config.yaml` for OctoPrint is expected in the settings folder,
which is located at `~/.octoprint` on Linux, at `%APPDATA%/OctoPrint` on Windows and
at `~/Library/Application Support/OctoPrint` on macOS.
A comprehensive overview of all available configuration settings can be found
[in the docs](https://docs.octoprint.org/en/main/configuration/config_yaml.html).
Please note that the most commonly used configuration settings can also easily
be edited from OctoPrint's settings dialog.
## Special Thanks
Cross-browser testing services are kindly provided by [BrowserStack](https://www.browserstack.com/).
Profiling is done with the help of [PyVmMonitor](https://www.pyvmmonitor.com).
Error tracking is powered and sponsored by [Sentry](https://sentry.io).
| text/markdown | Gina Häußge | gina@octoprint.org | null | null | GNU Affero General Public License v3 | 3dprinting 3dprinter 3d-printing 3d-printer octoprint | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: Flask",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Manufacturing",
"Intended Audience :: Other Audience",
"Intended... | [] | https://octoprint.org | null | <3.14,>=3.7 | [] | [] | [] | [
"OctoPrint-FileCheck>=2024.11.12",
"OctoPrint-FirmwareCheck>=2025.5.14",
"OctoPrint-PiSupport>=2023.10.10",
"argon2-cffi>=23.1.0",
"Babel<2.17,>=2.16; python_version >= \"3.8\"",
"cachelib<0.14,>=0.13.0; python_version >= \"3.8\"",
"Click!=8.2.0,<8.3,>=8.1.8",
"colorlog<7,>=6.9.0",
"emoji<3,>=2.14.1... | [] | [] | [] | [
"Community Forum, https://community.octoprint.org",
"Bug Reports, https://github.com/OctoPrint/OctoPrint/issues",
"Source, https://github.com/OctoPrint/OctoPrint",
"Funding, https://support.octoprint.org"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:26:21.413459 | octoprint-1.11.7.tar.gz | 3,236,058 | 6f/9a/3fdc1f43b93f8c62313f16fb6f0547a677eacebf000e33807c12a5441952/octoprint-1.11.7.tar.gz | source | sdist | null | false | a4009d506731ebce9eab15df979f1d1f | d761dc092f642f1810e7b28046bc62063546f1326cc17d8f9b05edfb42a85f2b | 6f9a3fdc1f43b93f8c62313f16fb6f0547a677eacebf000e33807c12a5441952 | null | [
"LICENSE.txt"
] | 0 |
2.4 | byteforge-telegram | 0.1.4 | Generic Telegram bot notification and webhook management library | # byteforge-telegram
A generic, reusable Python library for Telegram bot notifications and webhook management.
## Features
- **TelegramBotController**: Send notifications via Telegram Bot API
- Plain text messages
- Formatted messages with title, fields, and footer
- Both sync and async support
- Automatic event loop handling
- Session cleanup to prevent leaks
- **WebhookManager**: Manage Telegram webhooks
- Set webhook URL
- Get webhook information
- Delete webhook
- CLI tool included
## Installation
```bash
pip install byteforge-telegram
```
Or install from source:
```bash
git clone https://github.com/jmazzahacks/byteforge-telegram.git
cd byteforge-telegram
pip install -e .
```
## Quick Start
### Sending Notifications
```python
from byteforge_telegram import TelegramBotController, ParseMode
# Initialize with your bot token
bot = TelegramBotController("YOUR_BOT_TOKEN")
# Send a simple message
bot.send_message_sync(
text="Hello from byteforge-telegram!",
chat_ids=["CHAT_ID_1", "CHAT_ID_2"]
)
# Send a formatted message
bot.send_formatted_sync(
title="Deployment Complete",
fields={
"Environment": "production",
"Version": "1.2.3",
"Status": "Success"
},
chat_ids=["YOUR_CHAT_ID"],
emoji="✅",
footer="Deployed at 2025-01-03 12:00:00 UTC"
)
```
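The package does not document the exact layout of formatted messages, but with the default HTML parse mode a title/fields/footer message could plausibly be rendered along these lines (`format_message` is a hypothetical helper for illustration, not part of the package):

```python
def format_message(title, fields, emoji=None, footer=None):
    """Hypothetical renderer: one way a title/fields/footer message
    could be laid out using Telegram's HTML parse mode."""
    header = f"{emoji} <b>{title}</b>" if emoji else f"<b>{title}</b>"
    lines = [header, ""]
    # one "Key: value" line per field
    lines += [f"<b>{key}:</b> {value}" for key, value in fields.items()]
    if footer:
        lines += ["", f"<i>{footer}</i>"]
    return "\n".join(lines)

print(format_message(
    "Deployment Complete",
    {"Environment": "production", "Version": "1.2.3"},
    emoji="✅",
    footer="Deployed at 2025-01-03 12:00:00 UTC",
))
```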
### Managing Webhooks
#### Programmatic API
```python
from byteforge_telegram import WebhookManager
# Initialize manager
manager = WebhookManager("YOUR_BOT_TOKEN")
# Set webhook
result = manager.set_webhook("https://example.com/telegram/webhook")
if result['success']:
print(f"Webhook set: {result['description']}")
# Get webhook info
info = manager.get_webhook_info()
if info:
print(f"Current webhook: {info.get('url')}")
print(f"Pending updates: {info.get('pending_update_count')}")
# Delete webhook
result = manager.delete_webhook()
if result['success']:
print("Webhook deleted")
```
#### Command-Line Interface
The package includes a `setup-telegram-webhook` CLI tool:
```bash
# Set webhook
setup-telegram-webhook --token YOUR_BOT_TOKEN --url https://example.com/telegram/webhook
# Or use environment variable
export TELEGRAM_BOT_TOKEN=YOUR_BOT_TOKEN
setup-telegram-webhook --url https://example.com/telegram/webhook
# Get webhook info
setup-telegram-webhook --token YOUR_BOT_TOKEN --info
# Delete webhook
setup-telegram-webhook --token YOUR_BOT_TOKEN --delete
```
## API Reference
### TelegramBotController
#### Methods
**`send_message_sync(text, chat_ids, parse_mode=ParseMode.HTML, ...)`**
- Send a plain text message (synchronous)
- Returns: `Dict[str, bool]` - success status for each chat
**`send_formatted_sync(title, fields, chat_ids, emoji=None, footer=None)`**
- Send a formatted message with title, fields, and footer (synchronous)
- Returns: `Dict[str, bool]` - success status for each chat
**`send_message(...)` / `send_formatted(...)`**
- Async versions of the above methods
- Use with `await` in async contexts
**`test_connection_sync(chat_id)`**
- Send a test message to verify bot is working
- Returns: `bool`
#### Parse Modes
```python
from byteforge_telegram import ParseMode
ParseMode.HTML # HTML formatting (default)
ParseMode.MARKDOWN # Markdown formatting
ParseMode.MARKDOWN_V2 # MarkdownV2 formatting
ParseMode.NONE # Plain text, no formatting
```
### WebhookManager
#### Methods
**`set_webhook(webhook_url, timeout=10)`**
- Set the webhook URL for the bot
- Args:
- `webhook_url`: HTTPS URL (required)
- `timeout`: Request timeout in seconds
- Returns: `Dict[str, Any]` with `success` and `description`
- Raises: `ValueError` if URL is not HTTPS
**`get_webhook_info(timeout=10)`**
- Get current webhook configuration
- Returns: `Dict[str, Any]` with webhook details, or `None` on error
**`delete_webhook(timeout=10)`**
- Delete the current webhook
- Returns: `Dict[str, Any]` with `success` and `description`
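Under the hood these methods map onto the Telegram Bot API's `setWebhook`, `getWebhookInfo` and `deleteWebhook` endpoints. As a rough sketch of what `set_webhook` has to do, using only the standard library (the library's actual implementation may differ):

```python
import json
import urllib.request

def set_webhook_raw(token: str, webhook_url: str, timeout: int = 10) -> dict:
    """Call the Telegram Bot API setWebhook endpoint directly."""
    if not webhook_url.startswith("https://"):
        # Telegram only accepts HTTPS webhook URLs
        raise ValueError("webhook_url must use HTTPS")
    data = json.dumps({"url": webhook_url}).encode()
    req = urllib.request.Request(
        f"https://api.telegram.org/bot{token}/setWebhook",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)
```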
### TelegramResponse
Type-safe dataclass for constructing webhook responses.
#### Fields
- `method`: API method name (usually "sendMessage")
- `chat_id`: Target chat ID
- `text`: Message text
- `parse_mode`: Format type (default: "HTML")
- `reply_markup`: Optional keyboard markup
- `disable_web_page_preview`: Disable link previews (default: False)
- `disable_notification`: Send silently (default: False)
#### Methods
**`to_dict()`**
- Convert to JSON-serializable dictionary
- Returns: `Dict[str, Any]`
#### Example
```python
from byteforge_telegram import TelegramResponse
response = TelegramResponse(
method='sendMessage',
chat_id=12345,
text='<b>Hello!</b>',
parse_mode='HTML',
disable_web_page_preview=True
)
# Use in Flask webhook
return jsonify(response.to_dict()), 200
```
## Examples
### Integration with Flask (Simple)
```python
import os
from flask import Flask, request, jsonify
from byteforge_telegram import TelegramBotController
app = Flask(__name__)
bot = TelegramBotController(os.getenv('TELEGRAM_BOT_TOKEN'))
@app.route('/telegram/webhook', methods=['POST'])
def telegram_webhook():
update = request.get_json()
# Process the update
message = update.get('message', {})
text = message.get('text', '')
chat_id = str(message.get('chat', {}).get('id'))
if text == '/start':
bot.send_message_sync(
text="Welcome! I'm your bot.",
chat_ids=[chat_id]
)
return jsonify({'ok': True}), 200
```
### Integration with Flask (Using TelegramResponse)
For more complex webhooks, use `TelegramResponse` for type-safe responses:
```python
from flask import Flask, request, jsonify
from byteforge_telegram import TelegramResponse
app = Flask(__name__)
@app.route('/telegram/webhook', methods=['POST'])
def telegram_webhook():
update = request.get_json()
# Extract message details
message = update.get('message', {})
text = message.get('text', '')
chat_id = message.get('chat', {}).get('id')
# Handle command
if text == '/start':
response = TelegramResponse(
method='sendMessage',
chat_id=chat_id,
text='<b>Welcome!</b> Type /help for commands.',
parse_mode='HTML'
)
return jsonify(response.to_dict()), 200
return jsonify({'ok': True}), 200
```
### Async Usage
```python
import asyncio
from byteforge_telegram import TelegramBotController, ParseMode
async def send_notifications():
bot = TelegramBotController("YOUR_BOT_TOKEN")
# Send multiple messages concurrently
results = await bot.send_message(
text="Async notification",
chat_ids=["CHAT_1", "CHAT_2", "CHAT_3"],
parse_mode=ParseMode.HTML
)
for chat_id, success in results.items():
if success:
print(f"Sent to {chat_id}")
else:
print(f"Failed to send to {chat_id}")
asyncio.run(send_notifications())
```
### Error Handling
```python
from byteforge_telegram import TelegramBotController
bot = TelegramBotController("YOUR_BOT_TOKEN")
results = bot.send_message_sync(
text="Important notification",
chat_ids=["CHAT_ID"]
)
for chat_id, success in results.items():
if not success:
print(f"Failed to send to {chat_id}")
# Implement retry logic, logging, etc.
```
## Design Philosophy
### Sync/Async Compatibility
The library handles both synchronous and asynchronous contexts automatically:
- `*_sync()` methods work in regular Python code (like Flask apps)
- `async` methods work in async contexts (like FastAPI, async scripts)
- Automatically detects running event loops
- Creates fresh Bot instances per call to avoid loop conflicts
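The event-loop detection described above can be sketched roughly like this (a simplified pattern, not the library's actual code; `_send` stands in for the real async Telegram call):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

async def _send(text):
    # stand-in for the real async Telegram send
    return True

def send_sync(text):
    """Run the async send from synchronous code, in or out of a loop."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # no event loop running (plain scripts, Flask): start one
        return asyncio.run(_send(text))
    # already inside a running loop: run the coroutine on its own
    # loop in a worker thread to avoid "loop is running" errors
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, _send(text)).result()

print(send_sync("hello"))  # True
```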
### Session Management
Each message send creates a new Bot instance and properly cleans up the HTTP session afterward. This prevents connection leaks and event loop conflicts.
### Error Handling
- Network errors are caught and logged
- Results dict shows success/failure per chat ID
- Graceful degradation when services are unavailable
## Requirements
- Python 3.9+
- python-telegram-bot >= 20.0
- requests >= 2.31.0
## Development
### Setup
```bash
# Clone repository
git clone https://github.com/jmazzahacks/byteforge-telegram.git
cd byteforge-telegram
# Create and activate virtual environment
python3 -m venv .
source bin/activate
# Install development dependencies
pip install -r dev-requirements.txt
# Install package in development mode
pip install -e .
# Run tests
pytest
# Format code
black src/
```
### Running Tests
```bash
# Run all tests
source bin/activate && pytest
# Run with coverage
source bin/activate && pytest --cov=byteforge_telegram
# Run specific test file
source bin/activate && pytest tests/test_models.py
```
## License
MIT License - see LICENSE file for details
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Author
Jason Byteforge (@jmazzahacks)
## Links
- GitHub: https://github.com/jmazzahacks/byteforge-telegram
- Issues: https://github.com/jmazzahacks/byteforge-telegram/issues
- PyPI: https://pypi.org/project/byteforge-telegram/
| text/markdown | Jason Byteforge | null | null | null | MIT | bot, notifications, telegram, webhook | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Py... | [] | null | null | >=3.9 | [] | [] | [] | [
"python-telegram-bot>=20.0",
"requests>=2.31.0",
"black>=23.0; extra == \"dev\"",
"isort>=5.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/jmazzahacks/byteforge-telegram",
"Repository, https://github.com/jmazzahacks/byteforge-telegram",
"Issues, https://github.com/jmazzahacks/byteforge-telegram/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T13:25:49.265314 | byteforge_telegram-0.1.4.tar.gz | 13,653 | 76/0d/72999ac8f47bd50733797a31b6909e826152a7bddcb3907cdd0eab07fa07/byteforge_telegram-0.1.4.tar.gz | source | sdist | null | false | f03e0b5fe610b2b2ecdcff8e95620b0d | b319d41fa93a8db7b694fc4b02e69c6a912f6f1c1451ffaa24660e618bf878f3 | 760d72999ac8f47bd50733797a31b6909e826152a7bddcb3907cdd0eab07fa07 | null | [
"LICENSE"
] | 236 |
2.4 | mc-wiki-fetch-mcp | 0.4.0 | A MCP-based Minecraft Wiki backend server providing convenient access to Minecraft Wiki content via stdio | # Minecraft Wiki MCP Server
[](README.md) [](README_CN.md)
## Project Overview
An **MCP**-based **Minecraft Wiki** backend server that provides convenient access to Minecraft Wiki content. It now supports quick deployment via **uvx**, with no complex configuration required.
Note: This project only provides an example Minecraft Wiki API. If you need local API deployment or SSE support, please visit [this project](https://github.com/rice-awa/minecraft-wiki-fetch-api) for more information.
### Features
- 🔍 **Wiki Content Search**: Search Minecraft Wiki pages by keywords
- 📄 **Page Content Retrieval**: Get complete page content in Wikitext, HTML and Markdown formats
- 📝 **Wikitext Support**: Get original Wiki source code (recommended for token efficiency)
- 📚 **Batch Page Retrieval**: Efficiently retrieve multiple pages in batch
- ✅ **Page Existence Check**: Quick check if a page exists
- 🏥 **Health Monitoring**: Monitor backend Wiki API service status
- 🚀 **One-Click Deployment**: Quick installation and running via uvx
- ⚙️ **Environment Variables**: Flexible configuration without config files
- 💻 **Command Line Arguments**: Override configuration via command line parameters
## Quick Start
### 🚀 Recommended: Using uvx
No installation required, run directly:
```bash
# Basic usage (with default configuration)
uvx mc-wiki-fetch-mcp
# Use custom API URL
MC_WIKI_API_BASE_URL=http://localhost:3000 uvx mc-wiki-fetch-mcp
# Enable verbose logging
MC_WIKI_LOG_LEVEL=DEBUG uvx mc-wiki-fetch-mcp
# Use command line arguments
uvx mc-wiki-fetch-mcp --api-url http://localhost:3000 --log-level DEBUG
# Show help
uvx mc-wiki-fetch-mcp --help
```
### 💻 Integration with Claude Desktop
1. **Find configuration file location:**
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Linux**: `~/.config/claude/claude_desktop_config.json`
2. **Edit configuration file:**
```json
{
"mcpServers": {
"minecraft-wiki": {
"command": "uvx",
"args": ["mc-wiki-fetch-mcp"],
"env": {
"MC_WIKI_API_BASE_URL": "http://mcwiki.rice-awa.top"
}
}
}
}
```
3. **Restart Claude Desktop**
## Configuration Options
### Environment Variables Configuration
| Environment Variable | Description | Default Value |
|----------------------|-------------|---------------|
| `MC_WIKI_API_BASE_URL` | Wiki API base URL | `http://mcwiki.rice-awa.top` |
| `MC_WIKI_API_TIMEOUT` | API request timeout (seconds) | `30` |
| `MC_WIKI_API_MAX_RETRIES` | Maximum retry attempts | `3` |
| `MC_WIKI_DEFAULT_FORMAT` | Default output format | `wikitext` |
| `MC_WIKI_DEFAULT_LIMIT` | Default search results limit | `10` |
| `MC_WIKI_MAX_BATCH_SIZE` | Maximum batch processing size | `20` |
| `MC_WIKI_MAX_CONCURRENCY` | Maximum concurrency | `5` |
| `MC_WIKI_MCP_NAME` | MCP server name | `Minecraft Wiki MCP (stdio)` |
| `MC_WIKI_MCP_DESCRIPTION` | MCP server description | Auto-generated |
| `MC_WIKI_LOG_LEVEL` | Log level | `INFO` |
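As a sketch, the fallback behaviour in the table above can be reproduced with a plain `os.environ.get` lookup (the helper name and the subset of variables shown here are illustrative, not the server's actual code):

```python
import os

# Defaults mirror the table above (illustrative subset).
DEFAULTS = {
    "MC_WIKI_API_BASE_URL": "http://mcwiki.rice-awa.top",
    "MC_WIKI_API_TIMEOUT": "30",
    "MC_WIKI_API_MAX_RETRIES": "3",
    "MC_WIKI_DEFAULT_FORMAT": "wikitext",
    "MC_WIKI_LOG_LEVEL": "INFO",
}

def load_config() -> dict:
    """Read each setting from the environment, falling back to its default."""
    return {key: os.environ.get(key, default) for key, default in DEFAULTS.items()}
```

Any variable you export before launching `uvx mc-wiki-fetch-mcp` overrides its default; everything else keeps the documented value.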
### Command Line Arguments
```bash
uvx mc-wiki-fetch-mcp --help
```
| Parameter | Description |
|-----------|-------------|
| `--api-url` | Wiki API base URL (overrides environment variable) |
| `--timeout` | API request timeout (seconds) |
| `--max-retries` | Maximum retry attempts |
| `--log-level` | Log level (DEBUG/INFO/WARNING/ERROR) |
| `--version` | Show version information |
| `--help` | Show help information |
## Configuration Examples
### Basic Configuration Example
```bash
# Set environment variables
export MC_WIKI_API_BASE_URL="http://localhost:3000"
export MC_WIKI_LOG_LEVEL="DEBUG"
# Run server
uvx mc-wiki-fetch-mcp
```
### Claude Desktop Advanced Configuration
```json
{
"mcpServers": {
"minecraft-wiki": {
"command": "uvx",
"args": [
"mc-wiki-fetch-mcp",
"--api-url", "http://localhost:3000",
"--log-level", "INFO"
],
"env": {
"MC_WIKI_DEFAULT_LIMIT": "20",
"MC_WIKI_MAX_BATCH_SIZE": "50"
}
}
}
}
```
## Traditional Installation (Developers)
If you need to modify code or develop:
```bash
# Clone repository
git clone <repository-url>
cd mc-wiki-fetch-mcp
# Install dependencies
pip install -e .
# Run
mc-wiki-fetch-mcp
```
## 🛠️ Available Tools
| Tool Name | Description | Main Parameters |
|-----------|-------------|-----------------|
| `search_wiki` | Search Wiki content | `query`, `limit`, `namespaces` |
| `get_wiki_page` | Get page content | `page_name`, `format` (wikitext/html/markdown/both), `use_cache` |
| `get_wiki_pages_batch` | Batch get pages | `pages`, `format`, `concurrency` |
| `check_page_exists` | Check page existence | `page_name` |
| `check_wiki_api_health` | Health check | No parameters |
### Usage Examples
#### Using in Claude Desktop
After configuration, you can directly ask in Claude Desktop:
```
Please help me search for information about redstone
Get detailed content of the diamond page
Check if the "redstone circuit" page exists
Batch get content for "diamond", "redstone", and "enchanting" pages
```
## 🔧 Advanced Configuration
### Configuration Priority
Configuration priority order (high to low):
1. Command line arguments
2. Environment variables
3. Default values
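The precedence above amounts to "first value that was explicitly set wins". A minimal sketch of that resolution logic (the function is illustrative, not the server's actual code):

```python
def resolve(cli_value, env_value, default):
    """Return the highest-priority value: CLI arg > env var > default."""
    for value in (cli_value, env_value, default):
        if value is not None:
            return value
    return None

# --api-url beats MC_WIKI_API_BASE_URL, which beats the built-in default:
url = resolve("http://localhost:3000", "http://env-host", "http://mcwiki.rice-awa.top")
```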
### Configuration Parameter Description
| Parameter | Description | Default Value | Optional Values |
|-----------|-------------|---------------|-----------------|
| API Base URL | Wiki API service address | `http://mcwiki.rice-awa.top` | Any valid URL |
| Request Timeout | API request timeout | `30 seconds` | Positive integer (seconds) |
| Maximum Retries | Failed request retry count | `3 times` | Positive integer |
| Default Format | Page content output format | `wikitext` | `wikitext`, `html`, `markdown`, `both` |
| Search Limit | Default search result count | `10` | 1-50 |
| Batch Size | Maximum pages for batch processing | `20` | 1-100 |
| Concurrency | Maximum concurrent requests | `5` | 1-20 |
### Log Configuration
```bash
# Different log levels
MC_WIKI_LOG_LEVEL=DEBUG uvx mc-wiki-fetch-mcp # Detailed debug information
MC_WIKI_LOG_LEVEL=INFO uvx mc-wiki-fetch-mcp # Basic information
MC_WIKI_LOG_LEVEL=WARNING uvx mc-wiki-fetch-mcp # Only warnings and errors
MC_WIKI_LOG_LEVEL=ERROR uvx mc-wiki-fetch-mcp # Only errors
```
## 🐛 Troubleshooting
### Common Issues
#### 1. uvx command not found
**Problem**: `uvx: command not found`
**Solution**:
```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Or use pip
pip install uv
```
#### 2. Cannot connect to Wiki API
**Problem**: Tool calls return connection errors
**Solution**:
1. Check environment variable configuration:
```bash
echo $MC_WIKI_API_BASE_URL
```
2. Test API connection:
```bash
curl http://your-api-url/health
```
3. Enable verbose logging:
```bash
MC_WIKI_LOG_LEVEL=DEBUG uvx mc-wiki-fetch-mcp
```
#### 3. Tools not showing in Claude Desktop
**Problem**: After configuration, MCP tools are not visible in Claude Desktop
**Solution**:
1. Confirm uvx is available:
```bash
uvx mc-wiki-fetch-mcp --version
```
2. Check Claude Desktop logs
3. Restart Claude Desktop
### Debugging Tips
#### Enable Verbose Logging
```bash
# Start server and view detailed logs
MC_WIKI_LOG_LEVEL=DEBUG uvx mc-wiki-fetch-mcp 2>debug.log
# View logs
tail -f debug.log
```
#### Test Configuration
```bash
# Test specific configuration
MC_WIKI_API_BASE_URL=http://localhost:3000 \
MC_WIKI_LOG_LEVEL=DEBUG \
uvx mc-wiki-fetch-mcp --help
```
#### Verify Environment Variables
```bash
# Check current environment variables
env | grep MC_WIKI
# Or check in Python
python -c "import os; print({k:v for k,v in os.environ.items() if k.startswith('MC_WIKI')})"
```
## 📖 Related Documentation
- [UVX Packaging Summary](docs/UVX_PACKAGING_SUMMARY.md) - UVX packaging and environment variable configuration
- [API Documentation](docs/API_DOCUMENTATION.md) - Detailed API interface documentation
- [Usage Guide](docs/USAGE_GUIDE.md) - In-depth usage tutorial
- [Project Completion Summary](docs/PROJECT_COMPLETION_SUMMARY.md) - Project development summary
## 🤝 Contributing
Issues and Pull Requests to improve the project are welcome!
## 📄 License
This project is licensed under the MIT License. See [LICENSE](./LICENSE) file for details.
## 🆘 Getting Help
If you encounter problems or need help:
1. Check the troubleshooting section of this README
2. Check detailed documentation in the [docs/](docs/) directory
3. Submit an Issue describing your problem
4. Check log files for detailed error information
---
**Quick Start Tips**:
- 🚀 **Recommended**: Use `uvx mc-wiki-fetch-mcp` to get started quickly
- 💻 **Claude Desktop**: Use `uvx` command and environment variables in configuration
- ⚙️ **Customize**: Adjust configuration through environment variables or command line arguments
- 🔧 **Development**: Clone repository and use `pip install -e .` for development | text/markdown | null | rice_awa <riceawa@rice-awa.top> | null | null | null | api, mcp, minecraft, server, wiki | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: P... | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.8.0",
"fastmcp>=0.1.0",
"mcp>=1.12.3"
] | [] | [] | [] | [
"Homepage, https://github.com/rice-awa/mc-wiki-mcp-pypi",
"Repository, https://github.com/rice-awa/mc-wiki-mcp-pypi",
"Issues, https://github.com/rice-awa/mc-wiki-mcp-pypi/issues",
"Documentation, https://github.com/rice-awa/mc-wiki-mcp-pypi/blob/main/README.md"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T13:25:06.811632 | mc_wiki_fetch_mcp-0.4.0.tar.gz | 121,036 | e2/bc/634653a23f930d606ca905687820d7b088f71587ce8e7bfad79ba307076f/mc_wiki_fetch_mcp-0.4.0.tar.gz | source | sdist | null | false | debb3c2c6dc61c9e034523a465f3c1ed | d7b1a4540ce8e518c06c2aa6d65734879751c48eda0429d0256137ded481cfa1 | e2bc634653a23f930d606ca905687820d7b088f71587ce8e7bfad79ba307076f | null | [
"LICENSE"
] | 227 |
2.4 | swagger-ui-py-x | 26.2.18 | Swagger UI for Python web framework, such as Tornado, Flask, Quart, Sanic and Falcon. | [](https://github.com/b3n4kh/swagger-ui-py/actions/workflows/lint-and-pytest.yml)
[](https://pypi.org/project/swagger-ui-py-x/)
[](https://pepy.tech/project/swagger-ui-py-x)
[Project Page](https://pwzer.github.io/swagger-ui-py/)
# swagger-ui-py-x
Swagger UI for Python web frameworks, such as Tornado, Flask, Quart, aiohttp, Sanic and Falcon.
Only Python 3 is supported.
## Supported
- [Tornado](https://www.tornadoweb.org/en/stable/)
- [Flask](https://flask.palletsprojects.com/)
- [Sanic](https://sanicframework.org/en/)
- [AIOHTTP](https://docs.aiohttp.org/en/stable/)
- [Quart](https://pgjones.gitlab.io/quart/)
- [Starlette](https://www.starlette.io/)
- [Falcon](https://falcon.readthedocs.io/en/stable/)
- [Bottle](https://bottlepy.org/docs/dev/)
- [Chalice](https://aws.github.io/chalice/index.html)
You can print the list of supported frameworks with:
```bash
python3 -c "from swagger_ui import supported_list; print(supported_list)"
```
> If you want to add support for another framework, refer to [Flask Support](/swagger_ui/handlers/flask.py) or [Falcon Support](/swagger_ui/handlers/falcon.py) and implement the corresponding `handler` and `match` functions.
## Usage
- Install
```bash
pip3 install swagger-ui-py
```
- Code
Using the local config file
```python
from swagger_ui import api_doc
api_doc(app, config_path='./config/test.yaml', url_prefix='/api/doc', title='API doc')
```
Or using a config URL (the serving endpoint needs to support [CORS](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing))
```python
api_doc(app, config_url='https://petstore.swagger.io/v2/swagger.json', url_prefix='/api/doc', title='API doc')
```
Or using the config spec string
```python
from swagger_ui import api_doc
spec_string = '{"openapi":"3.0.1","info":{"title":"python-swagger-ui test api","description":"python-swagger-ui test api","version":"1.0.0"},"servers":[{"url":"http://127.0.0.1:8989/api"}],"tags":[{"name":"default","description":"default tag"}],"paths":{"/hello/world":{"get":{"tags":["default"],"summary":"output hello world.","responses":{"200":{"description":"OK","content":{"application/text":{"schema":{"type":"object","example":"Hello World!!!"}}}}}}}},"components":{}}'
api_doc(app, config_spec=spec_string, url_prefix='/api/doc', title='API doc')
```
Or using the config dict
```python
from swagger_ui import api_doc
config = {"openapi":"3.0.1","info":{"title":"python-swagger-ui test api","description":"python-swagger-ui test api","version":"1.0.0"},"servers":[{"url":"http://127.0.0.1:8989/api"}],"tags":[{"name":"default","description":"default tag"}],"paths":{"/hello/world":{"get":{"tags":["default"],"summary":"output hello world.","responses":{"200":{"description":"OK","content":{"application/text":{"schema":{"type":"object","example":"Hello World!!!"}}}}}}}},"components":{}}
api_doc(app, config=config, url_prefix='/api/doc', title='API doc')
```
And the config file can be served with the editor enabled
```python
api_doc(app, config_path='./config/test.yaml', editor=True)
```
The old framework-specific entry points are also kept:
```python
# for Tornado
from swagger_ui import tornado_api_doc
tornado_api_doc(app, config_path='./conf/test.yaml', url_prefix='/api/doc', title='API doc')
# for Sanic
from swagger_ui import sanic_api_doc
sanic_api_doc(app, config_path='./conf/test.yaml', url_prefix='/api/doc', title='API doc')
# for Flask
from swagger_ui import flask_api_doc
flask_api_doc(app, config_path='./conf/test.yaml', url_prefix='/api/doc', title='API doc')
# for Quart
from swagger_ui import quart_api_doc
quart_api_doc(app, config_path='./conf/test.yaml', url_prefix='/api/doc', title='API doc')
# for aiohttp
from swagger_ui import aiohttp_api_doc
aiohttp_api_doc(app, config_path='./conf/test.yaml', url_prefix='/api/doc', title='API doc')
# for Falcon
from swagger_ui import falcon_api_doc
falcon_api_doc(app, config_path='./conf/test.yaml', url_prefix='/api/doc', title='API doc')
```
Passing `host_inject=False` disables the behaviour which injects a host value into the specification served by Swagger UI.
- Edit `Swagger` config file (JSON or YAML)
Please see [https://swagger.io/resources/open-api/](https://swagger.io/resources/open-api/).
- Access
Open `http://<host>:<port>/api/doc/editor`, you can edit api doc config file.
Open `http://<host>:<port>/api/doc` view api doc.
## SwaggerUI Configuration
You can configure Swagger UI parameters via a dictionary. Both keys and values are of type `str`; when a value is a JavaScript string, wrap it in escaped quotes, such as `"layout": "\"StandaloneLayout\""`.
```python
parameters = {
"deepLinking": "true",
"displayRequestDuration": "true",
"layout": "\"StandaloneLayout\"",
"plugins": "[SwaggerUIBundle.plugins.DownloadUrl]",
"presets": "[SwaggerUIBundle.presets.apis, SwaggerUIStandalonePreset]",
}
api_doc(app, config_path='./config/test.yaml', parameters=parameters)
```
For details about parameters configuration, see the official documentation [Parameters Configuration](https://swagger.io/docs/open-source-tools/swagger-ui/usage/configuration/).
## OAuth2 Configuration
The format is similar to `parameters`.
```python
oauth2_config = {
"clientId": "\"your-client-id\"",
"clientSecret": "\"your-client-secret-if-required\"",
"realm": "\"your-realms\"",
"appName": "\"your-app-name\"",
"scopeSeparator": "\" \"",
"scopes": "\"openid profile\"",
"additionalQueryStringParams": "{test: \"hello\"}",
"usePkceWithAuthorizationCodeGrant": True,
}
api_doc(app, config_path='./config/test.yaml', oauth2_config=oauth2_config)
```
For details about OAuth2 configuration, see the official documentation [OAuth2 Configuration](https://swagger.io/docs/open-source-tools/swagger-ui/usage/oauth2/).
## Swagger UI
Swagger UI version is `v5.31.1`. see [https://github.com/swagger-api/swagger-ui](https://github.com/swagger-api/swagger-ui).
## Swagger Editor
Swagger Editor version is `v5.2.1`. see [https://github.com/swagger-api/swagger-editor](https://github.com/swagger-api/swagger-editor).
## Update
You can update swagger ui and swagger editor version with
```bash
tox -e update
```
## Contributing
If you are interested in becoming a developer or maintainer of this project, please contact me by email.
| text/markdown | null | b3n4kh <b@akhras.at>, PWZER <pwzergo@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"jinja2>=2.0",
"packaging>=20.0",
"PyYaml>=5.0",
"ruff; extra == \"dev\"",
"isort; extra == \"dev\"",
"aiohttp; extra == \"test\"",
"djlint; extra == \"test\"",
"bottle; extra == \"test\"",
"chalice; extra == \"test\"",
"falcon; extra == \"test\"",
"flask; extra == \"test\"",
"pytest; extra ==... | [] | [] | [] | [
"Homepage, https://github.com/b3n4kh/swagger-ui-py"
] | uv/0.9.14 {"installer":{"name":"uv","version":"0.9.14","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Fedora Linux","version":"43","id":"","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T13:24:50.064896 | swagger_ui_py_x-26.2.18.tar.gz | 11,374,828 | 03/0d/fe2e22afaf071f4355f95fa8257a2e5e2731b8909a67d5f1f30ed96c3a9b/swagger_ui_py_x-26.2.18.tar.gz | source | sdist | null | false | 4219bdef539a6a712f111a64f407c9fb | d4cb05c01ba529885c2f44de5e2daf088681d18846cf41396b780359807cb0fb | 030dfe2e22afaf071f4355f95fa8257a2e5e2731b8909a67d5f1f30ed96c3a9b | Apache-2.0 | [
"LICENSE"
] | 243 |
2.4 | spectraforce | 0.1.7 | Premium desktop workstation for Earth observation segmentation. | # SpectraForge
A premium desktop workstation for **Earth observation processing and visualization** across sensors. Unsupervised segmentation and uncertainty are optional modules inside a much broader EO workflow. Built to run locally on Windows/macOS/Linux.
## Why it exists
- **Multi-sensor preprocessing** (Sentinel‑1/2/3, Landsat, ERA5, ENMAP, hyperspectral, PlanetScope, UAV)
- **Band management and index generation** for fast EO analysis
- **Interactive ROI labeling** for ground‑truthing and validation
- **Optional unsupervised segmentation** with probability maps and uncertainty
- **Designed for patent-ready workflows** (keep private until filing)
## Features (v1)
- Clean, modern studio UI (not a QGIS clone)
- GeoTIFF, NetCDF, ENVI (hyperspectral), Sentinel‑2 folder support
- Sensor presets (Sentinel‑1/2/3, Landsat, PlanetScope, UAV, ERA5, ENMAP, hyperspectral)
- Auto feature selection across multi-band data
- Unsupervised segmentation with probability maps and entropy-based uncertainty (optional)
- Index Builder with curated indices per sensor + custom JSON/YAML recipes
- ROI selection + cluster labeling workflow
- Run history saved to `runs/`
- Copernicus + Planet API panels (offline stub in this build)
## Install
From PyPI:
```bash
python -m pip install spectraforce
```
From source:
```bash
python -m pip install -r requirements.txt
```
## Run
```bash
spectraforge
```
Alternative:
```bash
python -m spectraforge
```
## Data formats
- **GeoTIFF**: `.tif`, `.tiff`
- **NetCDF**: `.nc` (ERA5 and other gridded products)
- **ENVI**: `.hdr` + `.img` (hyperspectral)
- **Sentinel‑2**: SAFE folder with `.jp2` band files
- **NumPy**: `.npy` arrays (2D or 3D)
- **Tabular**: `.csv`, `.xlsx`
## Indices
SpectraForge ships with curated indices (NDVI, NDWI, etc.) per sensor. You can add your own:
- JSON/YAML recipe files (see `samples/` for examples)
- Index layers appear in the layer stack like any other band
Example custom indices file: `samples/indices_example.json`
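A curated index such as NDVI is just an arithmetic combination of bands. As a sketch of what an index recipe computes (the standard NDVI formula, independent of SpectraForge's internal recipe format):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Standard NDVI: (NIR - Red) / (NIR + Red), with eps guarding division by zero."""
    return (nir - red) / (nir + red + eps)

nir = np.array([[0.6, 0.8], [0.5, 0.7]])
red = np.array([[0.2, 0.1], [0.3, 0.2]])
index_layer = ndvi(nir, red)  # appears in the layer stack like any other band
```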
## NPY export
Export data to `.npy` directly from the UI:
- All bands in one file
- Selected bands in one file
- Individual band files
- Optional index layer exports
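A hypothetical round-trip for the "all bands in one file" export, assuming bands are stacked along the first axis of a `(bands, height, width)` array (the axis convention is an assumption, not documented API):

```python
import os
import tempfile
import numpy as np

# Simulate an exported stack and reload it.
stack = np.random.rand(4, 64, 64).astype(np.float32)
path = os.path.join(tempfile.mkdtemp(), "stack.npy")
np.save(path, stack)

loaded = np.load(path)
band_2 = loaded[2]         # a single band as a 2-D array
subset = loaded[[0, 3]]    # a selected-bands subset
```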
## Offline samples
Synthetic samples live in `samples/` so the repo runs without downloads.
Real cropped samples live in `samples/real/`.
Manipur example outputs live in `samples/manipur/`.
## API keys (stored locally)
API keys are stored in `~/.spectraforge/config.json` on your machine.
## Connectors (step by step)
The **Connectors** tab provides a guided workflow for sensor platform access (Copernicus and PlanetScope). This build ships with an offline stub so the UI and local key storage can be tested without network access. The steps below describe the full workflow, and the current offline behavior is noted where relevant.
**What the Connectors tab does**
- Collects and stores API keys locally
- Lets a dataset browser query sensors (S1/S2/S3 and PlanetScope)
- Builds a download queue with a save location
**Step-by-step workflow**
1. Open the **Connectors** tab in the right‑hand inspector.
2. Choose a provider in the left panel.
3. Paste the API key and click **Save**.
4. If a **Test** or **Validate** button is shown, click it.
5. Open the **Dataset Browser** section.
6. Pick a sensor family (for example: Sentinel‑2).
7. Set a bounding box or choose a saved ROI.
8. Set a date range and cloud filter.
9. Click **Search** to list scenes.
10. Select one or more scenes and click **Add to Queue**.
11. Set the **Save Location**.
12. Click **Start Download**.
**Example**
1. Provider: Copernicus.
2. Sensor: Sentinel‑2.
3. AOI: a polygon ROI drawn in the map.
4. Date range: `2025-01-01` to `2025-02-01`.
5. Cloud filter: 0–20%.
6. Action: search, add two scenes to queue, save to `samples/downloads/`.
7. Result: queued items appear in the download list with size, sensor, and target path.
**Offline stub behavior in this build**
- Keys are saved locally and shown as “stored”.
- Searches and downloads are disabled to keep the build offline.
- The UI still displays the full workflow so it can be demonstrated end‑to‑end.
## Why SpectraForge Helps
SpectraForge provides a fast, repeatable path from raw EO images to clean indices, analysis layers, and exportable outputs. Custom scripts do not need to be rebuilt for every dataset, and the same workflow applies across sensors for consistent results. A typical session: load a folder, let the app auto-detect bands, choose indices, and optionally run unsupervised segmentation with ROI labeling.
## UI Demo (Auto NPY Detection)
When a `.npy` dataset is opened, SpectraForge can automatically detect the sensor, load the bands, and populate the band list. If the sensor cannot be detected, it can be selected manually and the workflow can continue without restarting.
<p align="center"><img src="docs/screenshots/demo_auto_npy.png" alt="Auto NPY Detection Demo" width="680"></p>
## Step by Step Outputs for Manipur Sentinel‑2
The full pipeline was run on real Sentinel‑2 data from Manipur, and the outputs were saved below.
**Step 1 — Load bands and true color**
<p align="center"><img src="docs/screenshots/manipur_full/manipur_full_truecolor.png" alt="Manipur True Color" width="520"></p>
**Step 2 — NDVI and NDWI indices**
<table>
<tr>
<td><img src="docs/screenshots/manipur_full/manipur_full_ndvi.png" alt="Manipur NDVI" width="400"><br><em>NDVI</em></td>
<td><img src="docs/screenshots/manipur_full/manipur_full_ndwi.png" alt="Manipur NDWI" width="400"><br><em>NDWI</em></td>
</tr>
</table>
**Step 3 — Unsupervised segmentation with 8 clusters**
<p align="center"><img src="docs/screenshots/manipur_full/manipur_full_seg_labels.png" alt="Manipur Segmentation Labels" width="480"></p>
**Step 4 — ROI selection and labeling**
<table>
<tr>
<td><img src="docs/screenshots/manipur_full/manipur_full_roi_overlay.png" alt="Manipur ROI Overlay" width="400"><br><em>ROI overlay</em></td>
<td><img src="docs/screenshots/manipur_full/manipur_full_roi_labels.png" alt="Manipur ROI Labels" width="400"><br><em>ROI labels</em></td>
</tr>
</table>
**Step 5 — Uncertainty and confidence**
The same color rule is used for both maps: blue means low (uncertainty or confidence) and red means high.
<table>
<tr>
<td><img src="docs/screenshots/manipur_full/manipur_full_seg_uncertainty.png" alt="Manipur Segmentation Uncertainty" width="400"><br><em>Uncertainty</em></td>
<td><img src="docs/screenshots/manipur_full/manipur_full_seg_confidence.png" alt="Manipur Segmentation Confidence" width="400"><br><em>Confidence</em></td>
</tr>
</table>
**Saved outputs full resolution**
- `samples/manipur_full/manipur_full_stack.npy`
- `samples/manipur_full/manipur_full_stack_preview.tif`
- `samples/manipur_full/manipur_full_<index>.npy`
- `samples/manipur_full/manipur_full_<index>.tif`
## Color Legend for segmentation labels
Clusters are **unsupervised**. Colors map to cluster IDs in order:
- `0` → maroon `#800000`
- `1` → darkblue `#00008B`
- `2` → darkgreen `#006400`
- `3` → cyan `#00FFFF`
- `4` → darkcyan `#008B8B`
- `5` → magenta `#FF00FF`
- `6` → indigo `#4B0082`
- `7` → grey `#808080`
- `8` → peru `#CD853F`
- `9` → slateblue `#6A5ACD`
- `10` → mediumspringgreen `#00FA9A`
- `11` → orangered `#FF4500`
If you run fewer than 12 clusters, only the first N colors are used.
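The legend above is an ordered lookup table. As a sketch, a label map can be colorized by indexing the first N colors (the helper is illustrative, not SpectraForge's rendering code):

```python
import numpy as np

# The 12 cluster colors from the legend, in cluster-ID order.
CLUSTER_HEX = ["#800000", "#00008B", "#006400", "#00FFFF", "#008B8B",
               "#FF00FF", "#4B0082", "#808080", "#CD853F", "#6A5ACD",
               "#00FA9A", "#FF4500"]

def hex_to_rgb(h: str) -> tuple:
    """'#800000' -> (128, 0, 0)"""
    return tuple(int(h[i:i + 2], 16) for i in (1, 3, 5))

def colorize(labels: np.ndarray, n_clusters: int) -> np.ndarray:
    """Map a 2-D integer label map to an RGB image using the first N colors."""
    lut = np.array([hex_to_rgb(h) for h in CLUSTER_HEX[:n_clusters]], dtype=np.uint8)
    return lut[labels]

labels = np.array([[0, 1], [2, 0]])
rgb = colorize(labels, n_clusters=3)  # shape (2, 2, 3)
```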
## Color Notes for indices
Index quicklooks use a **viridis** scale:
brighter colors indicate higher values.
## Uncertainty calibration made easy
- Run segmentation → get probability maps (`predict_proba`)
- Use entropy + confidence to visualize uncertain regions
- Assign labels with ROI selections (no labeled data required)
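The entropy and confidence maps above follow directly from the probability maps. A minimal sketch, assuming per-pixel class probabilities along the last axis (normalized entropy so 0 = certain and 1 = maximally uncertain):

```python
import numpy as np

def entropy_uncertainty(proba: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Normalized Shannon entropy of per-pixel cluster probabilities."""
    k = proba.shape[-1]
    h = -np.sum(proba * np.log(proba + eps), axis=-1)
    return h / np.log(k)

def confidence(proba: np.ndarray) -> np.ndarray:
    """Confidence = probability of the winning cluster."""
    return proba.max(axis=-1)

proba = np.array([[0.9, 0.05, 0.05],      # confident pixel
                  [1 / 3, 1 / 3, 1 / 3]]) # maximally uncertain pixel
u, c = entropy_uncertainty(proba), confidence(proba)
```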
## Segmentation engine
If your environment has scientific stack conflicts, switch the engine to **Safe mode** in the UI.
Fast mode uses scikit‑learn when available.
## SpectraForge vs QGIS (Pros & Cons)
| Aspect | SpectraForge | QGIS |
| --- | --- | --- |
| Focus | EO segmentation + indices + uncertainty | Full GIS for all domains |
| Setup | One command local run | Heavier install + plugins |
| Unsupervised segmentation + uncertainty | Built‑in, turnkey | Requires plugins/workflows |
| Indices | Curated EO indices + custom recipes | Many tools, but more manual setup |
| UI style | Modern studio layout (not QGIS style) | Traditional GIS layout |
| Extensibility | Focused feature set | Huge plugin ecosystem |
| Geoprocessing breadth | Focused EO analytics | Broad GIS toolbox |
| Best for | Fast EO segmentation + research demos | Full GIS analysis & cartography |
**Pros of SpectraForge:** fast EO‑first workflow, built‑in uncertainty, simple NPY export, easy to demo.
**Cons vs QGIS:** fewer GIS tools, smaller plugin ecosystem, less advanced cartography.
## Contributions
See `CONTRIBUTING.md` and `CODE_OF_CONDUCT.md`.
## Credits
Arnab Bhowmik
## How to cite
If you use SpectraForge in academic work, please cite it like this:
```
Arnab Bhowmik. SpectraForge: Earth Observation Processing and Visualization Toolkit. Version 0.1.7, 2026. https://github.com/ArnaBannonymus/SpectraForge
```
BibTeX:
```bibtex
@software{spectraforge_2026,
author = {Bhowmik, Arnab},
title = {SpectraForge: Earth Observation Processing and Visualization Toolkit},
year = {2026},
version = {0.1.7},
url = {https://github.com/ArnaBannonymus/SpectraForge}
}
```
## Privacy note
Runs locally. No data leaves your machine.
## License
Proprietary (permission required for any use)
| text/markdown | SpectraForge | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"PySide6",
"numpy<2.2,>=1.23",
"pandas",
"scikit-learn",
"matplotlib",
"pillow",
"rasterio",
"xarray",
"rioxarray",
"netCDF4",
"spectral",
"pyyaml"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:24:36.228118 | spectraforce-0.1.7.tar.gz | 33,294 | cc/5f/5b43474ea731e16ac72933653d36fa219bcddb91ed305849d4e88b5e0608/spectraforce-0.1.7.tar.gz | source | sdist | null | false | 19f17bc0d970871d10e8b24ef908e77c | cb40abd31e6c36c55dbcccd8c440071c4a3229dd08ceec469f73acfecc041e00 | cc5f5b43474ea731e16ac72933653d36fa219bcddb91ed305849d4e88b5e0608 | LicenseRef-Proprietary | [
"LICENSE"
] | 213 |
2.4 | mlwiz | 1.4.2 | Machine Learning Research Wizard | <p align="center">
<img src="https://raw.githubusercontent.com/diningphil/mlwiz/main/docs/_static/mlwiz-logo2-horizontal.png" width="360" alt="MLWiz logo"/>
</p>
# MLWiz
_Machine Learning Research Wizard — reproducible experiments from YAML (model selection + risk assessment) for vectors, images, time-series, and graphs._
[](https://pypi.org/project/mlwiz/)
[](https://pypi.org/project/mlwiz/)
[](https://github.com/diningphil/mlwiz/actions/workflows/python-test-and-coverage.yml)
[](https://mlwiz.readthedocs.io/en/stable/)
[](https://github.com/diningphil/mlwiz/actions/workflows/python-test-and-coverage.yml)
[](https://interrogate.readthedocs.io/en/latest/)
[](https://opensource.org/licenses/BSD-3-Clause)
[](https://github.com/diningphil/mlwiz/stargazers)
## 🔗 Quick Links
- 📘 Docs: https://mlwiz.readthedocs.io/en/stable/
- 🧪 Tutorial (recommended): https://mlwiz.readthedocs.io/en/stable/tutorial.html
- 📦 PyPI: https://pypi.org/project/mlwiz/
- 📝 Changelog: `CHANGELOG.md`
- 🤝 Contributing: `CONTRIBUTING.md`
## ✨ What It Does
MLWiz helps you run end-to-end research experiments with minimal boilerplate:
- 🧱 Build/prepare datasets and generate splits (hold-out or nested CV)
- 🎛️ Expand a hyperparameter search space (grid or random search)
- ⚡ Run model selection + risk assessment in parallel with Ray (CPU/GPU or cluster)
- 📈 Log metrics, checkpoints, and TensorBoard traces in a consistent folder structure
Inspired by (and a generalized version of) [PyDGN](https://github.com/diningphil/PyDGN).
## ✅ Key Features
| Area | What you get |
| --- | --- |
| Research Oriented Framework | Anything is customizable, easy prototyping of models and setups |
| Reproducibility | Ensure your results are reproducible across multiple runs |
| Automatic Split Generation | Dataset preparation + `.splits` generation for hold-out / (nested) CV |
| Automatic and Robust Evaluation | Nested model selection (inner folds) + risk assessment (outer folds) |
| Parallelism | Ray-based execution across CPU/GPU (or a Ray cluster) |
## 🚀 Getting Started
### 📦 Installation
MLWiz supports Python 3.10+.
```bash
pip install mlwiz
```
Tip: for GPU / graph workloads, install PyTorch and PyG following their official instructions first, then `pip install mlwiz`.
### ⚡ Quickstart
| Step | Command | Notes |
| --- | --- | --- |
| 1) Prepare dataset + splits | `mlwiz-data --config-file examples/DATA_CONFIGS/config_MNIST.yml` | Creates processed data + a `.splits` file |
| 2) Run an experiment (grid search) | `mlwiz-exp --config-file examples/MODEL_CONFIGS/config_MLP.yml` | Add `--debug` to run sequentially and print logs |
| 3) Inspect results | `cat RESULTS/mlp_MNIST/MODEL_ASSESSMENT/assessment_results.json` | Aggregated results live under `RESULTS/` |
| 4) Visualize in TensorBoard | `tensorboard --logdir RESULTS/mlp_MNIST` | Per-run logs are written automatically |
| 5) Stop a running experiment | Press `Ctrl-C` | |
### 🧭 Navigating the CLI (non-debug mode)
Example of the global view CLI:
<p align="center">
<img src="https://raw.githubusercontent.com/diningphil/mlwiz/main/docs/_static/exp_gui.png" width="760" alt="MLWiz terminal progress UI"/>
</p>
Specific views can be accessed, e.g. to visualize a specific model run:
```bash
:<outer_fold> <inner_fold> <config_id> <run_id>
```
…or, analogously, a risk assessment run:
```bash
:<outer_fold> <run_id>
```
Here is how it looks:
<p align="center">
<img src="https://raw.githubusercontent.com/diningphil/mlwiz/main/docs/_static/run_view.png" width="760" alt="MLWiz terminal specific view"/>
</p>
Handy commands:
```bash
: # or :g or :global (back to global view)
:r # or :refresh (refresh the screen)
```
You can use **left-right arrows** to move across configurations, and **up-down arrows** to switch between model selection and risk assessment runs.
## 🧩 Architecture (High-Level)
MLWiz is built around two YAML files and a small set of composable components:
```text
data.yml ──► mlwiz-data ──► processed dataset + .splits
exp.yml  ──► mlwiz-exp  ──► Ray workers
                              ├─ inner folds: model selection (best hyperparams)
                              └─ outer folds: risk assessment (final scores)
```
- 🧰 **Data pipeline**: `mlwiz-data` instantiates your dataset class and writes a `.splits` file for hold-out / (nested) CV.
- 🧪 **Search space**: `grid:` and `random:` sections expand into concrete hyperparameter configurations.
- 🛰️ **Orchestration**: the evaluator schedules training runs with Ray across CPU/GPU (or a Ray cluster).
- 🏗️ **Execution**: each run builds a model + training engine from dotted paths, then logs artifacts and returns structured results.
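As a mental model (not MLWiz's actual implementation, and ignoring nested structures such as the optimizer block), a `grid:` section expands as a Cartesian product over every key that maps to a list of values:

```python
from itertools import product

def expand_grid(grid: dict) -> list[dict]:
    """Expand a grid section into concrete configurations: every key
    whose value is a list contributes one axis of a Cartesian product;
    scalar values are broadcast into all configurations."""
    keys = list(grid)
    axes = [v if isinstance(v, list) else [v] for v in grid.values()]
    return [dict(zip(keys, combo)) for combo in product(*axes)]

# A two-value lr list yields two concrete configurations.
configs = expand_grid({"lr": [0.01, 0.03], "batch_size": 512})
```

With the grid shown later in this README, the two `lr` values produce two hyperparameter configurations; each is then trained the configured number of runs per fold.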
## ⚙️ Configuration At A Glance
MLWiz expects:
- 🗂️ one YAML for **data + splits**
- 🧾 one YAML for **experiment + search space**
Minimal data config:
```yaml
splitter:
  splits_folder: DATA_SPLITS/
  class_name: mlwiz.data.splitter.Splitter
  args:
    n_outer_folds: 3
    n_inner_folds: 2
    seed: 42
dataset:
  class_name: mlwiz.data.dataset.MNIST
  args:
    storage_folder: DATA/
```
Minimal experiment config (grid search):
```yaml
storage_folder: DATA
dataset_class: mlwiz.data.dataset.MNIST
data_splits_file: DATA_SPLITS/MNIST/MNIST_outer3_inner2.splits
device: cpu
max_cpus: 8
dataset_getter: mlwiz.data.provider.DataProvider
data_loader:
  class_name: torch.utils.data.DataLoader
  args:
    num_workers: 0
    pin_memory: False
result_folder: RESULTS
exp_name: mlp
experiment: mlwiz.experiment.Experiment
higher_results_are_better: true
evaluate_every: 1
risk_assessment_training_runs: 3
model_selection_training_runs: 2
grid:
  model: mlwiz.model.MLP
  epochs: 400
  batch_size: 512
  dim_embedding: 5
  mlwiz_tests: True  # patch: allow reshaping of MNIST dataset
  optimizer:
    - class_name: mlwiz.training.callback.optimizer.Optimizer
      args:
        optimizer_class_name: torch.optim.Adam
        lr:
          - 0.01
          - 0.03
        weight_decay: 0.
  loss: mlwiz.training.callback.metric.MulticlassClassification
  scorer: mlwiz.training.callback.metric.MulticlassAccuracy
  engine: mlwiz.training.engine.TrainingEngine
```
See `examples/` for complete configs (including random search, schedulers, early stopping, and more).
### 🧩 Custom Code Via Dotted Paths
Point YAML entries to your own classes (in your project). `mlwiz-data` and `mlwiz-exp` add the current working directory to `sys.path`, so this works out of the box:
```yaml
grid:
  model: my_project.models.MyModel
dataset:
  class_name: my_project.data.MyDataset
```
## 📦 Outputs
Runs are written under `RESULTS/`:
| Output | Location |
| --- | --- |
| Aggregated outer-fold results | `RESULTS/<exp_name>_<dataset>/MODEL_ASSESSMENT/assessment_results.json` |
| Per-fold summaries | `RESULTS/<exp_name>_<dataset>/MODEL_ASSESSMENT/OUTER_FOLD_k/outer_results.json` |
| Model selection (inner folds + winner config) | `.../MODEL_SELECTION/...` |
| Final retrains with selected hyperparams | `.../final_run*/` |
Each training run also writes TensorBoard logs under `<run_dir>/tensorboard/`.
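These JSON files can be consumed with the standard library alone. The helper below is our own sketch; the key names inside `assessment_results.json` depend on your configured metrics, so inspect the returned dict rather than assuming them:

```python
import json
from pathlib import Path

def load_assessment(exp_results_dir: str) -> dict:
    """Read the aggregated outer-fold results MLWiz writes under
    MODEL_ASSESSMENT/ for a given experiment results directory
    (e.g. "RESULTS/mlp_MNIST")."""
    path = Path(exp_results_dir) / "MODEL_ASSESSMENT" / "assessment_results.json"
    with path.open() as f:
        return json.load(f)
```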
## 🛠️ Utilities
### 🗂️ Config Management (CLI)
Duplicate a base experiment config across multiple datasets:
```bash
mlwiz-config-duplicator --base-exp-config base.yml --data-config-files data1.yml data2.yml
```
### 📊 Post-process Results (Python)
Filter configurations from a `MODEL_SELECTION/` folder and convert them to a DataFrame:
```python
from mlwiz.evaluation.util import retrieve_experiments, filter_experiments, create_dataframe
configs = retrieve_experiments(
    "RESULTS/mlp_MNIST/MODEL_ASSESSMENT/OUTER_FOLD_1/MODEL_SELECTION/"
)
filtered = filter_experiments(configs, logic="OR", parameters={"lr": 0.001})
df = create_dataframe(
    config_list=filtered,
    key_mappings=[("lr", float), ("avg_validation_score", float)],
)
```
Export aggregated assessment results to LaTeX:
```python
from mlwiz.evaluation.util import create_latex_table_from_assessment_results
experiments = [
    ("RESULTS/mlp_MNIST", "MLP", "MNIST"),
    ("RESULTS/dgn_PROTEINS", "DGN", "PROTEINS"),
]
latex_table = create_latex_table_from_assessment_results(
    experiments,
    metric_key="main_score",
    no_decimals=3,
    model_as_row=True,
    use_single_outer_fold=False,
)
print(latex_table)
```
Compare statistical significance between models (Welch t-test):
```python
from mlwiz.evaluation.util import statistical_significance
reference = ("RESULTS/mlp_MNIST", "MLP", "MNIST")
competitors = [
    ("RESULTS/baseline1_MNIST", "B1", "MNIST"),
    ("RESULTS/baseline2_MNIST", "B2", "MNIST"),
]
df = statistical_significance(
    highlighted_exp_metadata=reference,
    other_exp_metadata=competitors,
    metric_key="main_score",
    set_key="test",
    confidence_level=0.95,
)
print(df)
```
### 🔍 Load a Trained Model (Notebook-friendly)
Load the best configuration for a fold, instantiate dataset/model, and restore a checkpoint:
```python
from mlwiz.evaluation.util import (
    retrieve_best_configuration,
    instantiate_dataset_from_config,
    instantiate_model_from_config,
    load_checkpoint,
)
config = retrieve_best_configuration(
    "RESULTS/mlp_MNIST/MODEL_ASSESSMENT/OUTER_FOLD_1/MODEL_SELECTION/"
)
dataset = instantiate_dataset_from_config(config)
model = instantiate_model_from_config(config, dataset)
load_checkpoint(
    "RESULTS/mlp_MNIST/MODEL_ASSESSMENT/OUTER_FOLD_1/final_run1/best_checkpoint.pth",
    model,
    device="cpu",
)
```
For more post-processing helpers, see the tutorial: https://mlwiz.readthedocs.io/en/stable/tutorial.html
## 🤝 Contributing
See `CONTRIBUTING.md`.
## 📄 License
BSD-3-Clause. See `LICENSE`.
| text/markdown | null | Federico Errica <f.errica@protonmail.com> | null | null | null | machine-learning, deep-learning, experiments, research, evaluation-framework | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: BSD License",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"PyYAML>=5.4",
"dill>=0.3.8",
"Requests>=2.31.0",
"scikit_learn>=1.3.0",
"scipy>=1.15.3",
"pandas>=2.0.0",
"tensorboard>=2.11.0",
"tqdm>=4.47.0",
"ray[default]>=2.6.0",
"torchvision>=0.18.1",
"torch>=2.5.0",
"torch-geometric>=2.6.0",
"gpustat"
] | [] | [] | [] | [
"Homepage, https://mlwiz.readthedocs.io/en/latest/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:24:34.163198 | mlwiz-1.4.2.tar.gz | 111,214 | 90/3b/c2d00cba41e70211dea883aa9fe74a60e66cfc49414bda1ac33db5b48c49/mlwiz-1.4.2.tar.gz | source | sdist | null | false | 7e517dfacb515ade2f18d358f60dfefa | 4efca2ea404f9563faf88973c5bcbdc83e58445f92bdab8258b8cf38f47e7d42 | 903bc2d00cba41e70211dea883aa9fe74a60e66cfc49414bda1ac33db5b48c49 | null | [
"LICENSE"
] | 230 |
2.4 | gfp-mcp | 0.3.5 | Model Context Protocol (MCP) server for GDSFactory+ photonic IC design | # GDSFactory+ MCP Server
[](https://pypi.org/project/gfp-mcp/)
[](https://pypi.org/project/gfp-mcp/)
[](https://opensource.org/licenses/MIT)
Model Context Protocol (MCP) server for GDSFactory+ that enables AI assistants like Claude to design and build photonic integrated circuits.
## What is this?
This MCP server connects AI assistants to [GDSFactory+](https://gdsfactory.com), allowing you to design photonic ICs through natural language. Build components, run verification checks, and manage multiple projects directly from Claude Code or Claude Desktop.
## Prerequisites
- Python 3.10 or higher
- VSCode with the [GDSFactory+ extension](https://marketplace.visualstudio.com/items?itemName=gdsfactory.gdsfactoryplus) installed
## Installation
Choose your AI assistant below and follow the instructions.
### 1. Cursor
**One-click install:**
[](https://cursor.com/en-US/install-mcp?name=gdsfactoryplus&config=eyJlbnYiOnt9LCJjb21tYW5kIjoidXZ4IC0tZnJvbSBnZnAtbWNwIGdmcC1tY3Atc2VydmUifQ%3D%3D)
**Manual setup:**
Add to `.cursor/mcp.json` in your project (or `~/.cursor/mcp.json` for global access):
```json
{
  "mcpServers": {
    "gdsfactoryplus": {
      "command": "uvx",
      "args": ["--from", "gfp-mcp", "gfp-mcp-serve"]
    }
  }
}
```
### 2. Claude Code
Run the following command:
```bash
claude mcp add gdsfactoryplus -- uvx --from gfp-mcp gfp-mcp-serve
```
Or add to `.claude/settings.json` manually:
```json
{
  "mcpServers": {
    "gdsfactoryplus": {
      "command": "uvx",
      "args": ["--from", "gfp-mcp", "gfp-mcp-serve"]
    }
  }
}
```
### 3. Claude Desktop
Add to your config file:
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
- **Linux**: `~/.config/Claude/claude_desktop_config.json`
```json
{
  "mcpServers": {
    "gdsfactoryplus": {
      "command": "uvx",
      "args": ["--from", "gfp-mcp", "gfp-mcp-serve"]
    }
  }
}
```
Restart Claude Desktop after adding the configuration.
### 4. Other MCP Clients
Install `gfp-mcp` and run the server:
```bash
uvx --from gfp-mcp gfp-mcp-serve
```
Or install globally first, then reference `gfp-mcp-serve` in your client's MCP configuration:
```bash
uv tool install gfp-mcp
```
## Start Designing
The MCP server automatically discovers running GDSFactory+ servers via the registry (`~/.gdsfactory/server-registry.json`). On startup, it will log all discovered projects.
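If you want to inspect the registry yourself, a minimal sketch is below. It assumes only that the file is JSON; the exact schema (and the `discovered_servers` helper name) are not part of the gfp-mcp API, and the registry is owned by the GDSFactory+ extension:

```python
import json
from pathlib import Path

def discovered_servers(registry_path=None):
    """Return the parsed registry contents, or None if no registry
    file exists (i.e. no GDSFactory+ servers have registered yet)."""
    path = (Path(registry_path) if registry_path
            else Path.home() / ".gdsfactory" / "server-registry.json")
    if not path.exists():
        return None
    return json.loads(path.read_text())
```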
Try these commands with your AI assistant:
- "List all available photonic components"
- "Build an MZI interferometer"
- "Show me details about the directional coupler"
- "Build multiple components: mzi, coupler, and bend_euler"
- "List all my GDSFactory+ projects"
## Available Tools
| Tool | Description |
|------|-------------|
| **list_projects** | List all running GDSFactory+ server instances |
| **get_project_info** | Get detailed information about a specific project |
| **build_cells** | Build one or more GDS cells by name (pass a list, can be single-item) |
| **list_cells** | List all available photonic components |
| **get_cell_info** | Get detailed component metadata |
| **check_drc** | Run Design Rule Check verification with structured violation reports |
| **check_connectivity** | Run connectivity verification |
| **check_lvs** | Run Layout vs. Schematic verification |
| **simulate_component** | Run SAX circuit simulations with custom parameters |
| **list_samples** | List available sample files from GDSFactory+ General PDK projects |
| **get_sample_file** | Get the content of a specific sample file from a project |
## Multi-Project Support
The MCP server automatically discovers all running GDSFactory+ projects via the server registry (`~/.gdsfactory/server-registry.json`). The registry is the source of truth for available servers. Use the `list_projects` tool to see all running projects, then specify the project name when building components:
```
User: "List all my GDSFactory+ projects"
Claude: [Uses list_projects tool to show all running servers]
User: "Build the mzi component in my_photonics_project"
Claude: [Routes request to the correct project]
```
## Troubleshooting
<details>
<summary><strong>Server not appearing in Claude</strong></summary>
1. Verify installation: `gfp-mcp-serve --help`
2. Check Claude Code logs: `claude --debug`
3. Restart Claude Desktop/Code
4. Ensure the GDSFactory+ VSCode extension is active and a project is open
</details>
<details>
<summary><strong>Connection refused errors</strong></summary>
The MCP server uses the registry (`~/.gdsfactory/server-registry.json`) to discover running servers.
1. Use the `list_projects` tool in Claude to check available servers
2. If no servers are found, ensure the GDSFactory+ VSCode extension is running with an active project:
- Open VSCode with the GDSFactory+ extension installed
- Open a GDSFactory+ project folder
- The extension automatically starts the server and registers it
3. Check the MCP startup logs for discovered servers
4. Verify the registry is accessible at `~/.gdsfactory/server-registry.json`
5. For backward compatibility, you can set a specific server URL:
```bash
export GFP_API_URL="http://localhost:YOUR_PORT"
```
</details>
<details>
<summary><strong>Tool execution timeout</strong></summary>
Increase the timeout for long-running operations:
```bash
export GFP_MCP_TIMEOUT=600 # 10 minutes
```
</details>
| text/markdown | GDSFactory+ Team | null | null | null | MIT | mcp, gdsfactory, photonics, ic-design, eda, model-context-protocol, photonic-ic, gds | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Electronic Design Automation (EDA)",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: MIT License",
"Programming La... | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp>=1.7.1",
"httpx>=0.25.0",
"typing-extensions>=4.0.0; python_version < \"3.11\"",
"tomli>=2.0.0; python_version < \"3.11\"",
"psutil>=5.9.0",
"klayout>=0.28.0; extra == \"render\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\... | [] | [] | [] | [
"Homepage, https://github.com/doplaydo/gfp-mcp",
"Repository, https://github.com/doplaydo/gfp-mcp",
"Documentation, https://github.com/doplaydo/gfp-mcp#readme",
"Changelog, https://github.com/doplaydo/gfp-mcp/blob/main/CHANGELOG.md",
"Issue Tracker, https://github.com/doplaydo/gfp-mcp/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:24:16.546943 | gfp_mcp-0.3.5.tar.gz | 64,579 | 03/74/47679e87abb1df44e879b118c2a4e754b20945ddc785bed0ffdca253c422/gfp_mcp-0.3.5.tar.gz | source | sdist | null | false | 169e6bb92cc9ec9cfd21a97b57eebcf5 | f549d5a8112dbe5d0e80c00c0ee6789708fd282d37aa51b3d98f10518ab3488b | 037447679e87abb1df44e879b118c2a4e754b20945ddc785bed0ffdca253c422 | null | [
"LICENSE"
] | 251 |
2.4 | aip-identity | 0.5.24 | Cryptographic identity, trust chains, and E2E encrypted messaging for AI agents | [](https://github.com/The-Nexus-Guard/aip/actions)
[](https://pypi.org/project/aip-identity/)
[](https://pypi.org/project/aip-identity/)
[](https://opensource.org/licenses/MIT)
[](https://aip-service.fly.dev/docs)
# Agent Identity Protocol (AIP)
**The problem:** Your agent talks to other agents, runs their code, sends them data. But you have no way to verify who they are, whether they're trustworthy, or if their code hasn't been tampered with. Every interaction is a leap of faith.
**AIP fixes this.** Ed25519 keypairs give each agent a provable identity. Signed vouches create verifiable trust chains. E2E encryption lets agents talk without any platform reading their messages. No central authority required.
## Get Started in 30 Seconds
```bash
pip install aip-identity
aip init github my_agent --name "My Agent" --bio "What I do"
```
That's it. Your agent now has a cryptographic identity (a DID), can verify other agents, and send encrypted messages. Run `aip demo` to see the network, or `aip doctor` to check your setup.
```bash
# See who's in the network
aip list
# Vouch for an agent you trust
aip vouch <their-did> --scope CODE_SIGNING --statement "Reviewed their code"
# Send an encrypted message (only they can read it)
aip message <their-did> "Want to collaborate?"
# Check your inbox
aip messages
```
## Why AIP?
| Problem | AIP Solution |
|---------|-------------|
| "Is this the same agent?" | Ed25519 keypair identity + challenge-response verification |
| "Should I trust this agent?" | Verifiable vouch chains with trust decay scoring |
| "Is this skill safe to run?" | Cryptographic skill signing + CODE_SIGNING vouches |
| "How do we talk privately?" | E2E encrypted messaging (service sees only encrypted blobs) |
| "What if the platform dies?" | Your keys are local. Your identity is portable. |
## The Three Layers
AIP provides three layers:
**Identity Layer** - "Is this the same agent?"
- Ed25519 keypair-based identity
- DID (Decentralized Identifier) for each agent
- Challenge-response verification
- Signed messages and payloads
**Trust Layer** - "Should I trust this agent?"
- Vouching: signed statements of trust between agents
- Trust scopes: general, code-signing, financial, etc.
- Trust paths: verifiable chains showing *how* you trust someone
- Revocation: withdraw trust when needed
**Communication Layer** - "How do we talk securely?"
- E2E encrypted messaging between AIP agents
- Sender verification via cryptographic signatures
- Only recipient can decrypt (AIP relay sees encrypted blobs)
- Poll `/messages/count` to check for new messages
## Key Properties
- **Decentralized** - No central registry needed
- **Verifiable** - All vouches are cryptographically signed
- **Local-first** - Each agent maintains their own trust view
- **Auditable** - Full "isnad chains" show trust provenance
- **Zero dependencies** - Pure Python implementation available
## Quick Start
**New to AIP?** See [docs/quickstart.md](docs/quickstart.md) for a 2-minute guide.
### Identity
```python
from src.identity import AgentIdentity, VerificationChallenge
# Create agent identities
alice = AgentIdentity.create("alice")
bob = AgentIdentity.create("bob")
# Alice challenges Bob to prove his identity
challenge = VerificationChallenge.create_challenge()
response = VerificationChallenge.respond_to_challenge(bob, challenge)
is_bob = VerificationChallenge.verify_response(challenge, response)
# is_bob == True
```
### Trust
```python
from src.trust import TrustGraph, TrustLevel, TrustScope
# Each agent maintains their own trust graph
alice_trust = TrustGraph(alice)
# Alice vouches for Bob
vouch = alice_trust.vouch_for(
    bob,
    scope=TrustScope.CODE_SIGNING,
    level=TrustLevel.STRONG,
    statement="Bob writes secure code"
)
# Later: check if Alice trusts someone
trusted, path = alice_trust.check_trust(target_did, TrustScope.CODE_SIGNING)
if trusted:
    print(f"Trust level: {path.trust_level.name}")
    print(f"Path length: {path.length} hops")
    # Full isnad chain available in path.path
```
### Trust Paths (Isnad Chains)
When Alice trusts Bob, and Bob trusts Carol, Alice can find a trust path to Carol:
```
Alice → Bob → Carol
         ↑     ↑
         |     └── "Bob vouches for Carol for code-signing"
         └── "Alice vouches for Bob for general trust"
```
Each link is cryptographically signed and verifiable.
### Messaging
```python
from aip_client import AIPClient
# Load your credentials
client = AIPClient.from_file("aip_credentials.json")
# Send an encrypted message to another agent
client.send_message(
    recipient_did="did:aip:xyz789",
    message="Hello from Alice! Want to collaborate?"
)
# Check if you have new messages (poll periodically)
count = client.get_message_count()
if count["unread"] > 0:
    # Retrieve messages (requires proving you own this DID)
    messages = client.get_messages()
    for msg in messages:
        print(f"From: {msg['sender_did']}")
        print(f"Message: {msg['decrypted_content']}")
        # Delete after reading
        client.delete_message(msg['id'])
```
The AIP service never sees your message content - only encrypted blobs.
## Demos
```bash
# Identity verification demo
python3 examples/multi_agent_workflow.py
# Full trust network demo
python3 examples/trust_network_demo.py
# Verify a signed skill (no account needed!)
python3 examples/verify_skill.py ./my-skill/
```
## Installation
```bash
# Recommended: install from PyPI
pip install aip-identity
# Or clone for development
git clone https://github.com/The-Nexus-Guard/aip.git
cd aip
pip install -e .
```
## Registration
### Quick Registration (Development Only)
The `/register/easy` endpoint generates a keypair server-side and returns both keys. **This is a development convenience only** — the server briefly handles your private key.
```bash
curl -X POST "https://aip-service.fly.dev/register/easy" \
-H "Content-Type: application/json" \
-d '{"platform": "moltbook", "username": "my_agent"}'
```
### Secure Registration (Recommended for Production)
For production use, **generate your keypair locally** and send only the public key:
```python
from nacl.signing import SigningKey
import hashlib, requests, json
# Generate keypair locally — private key never leaves your machine
sk = SigningKey.generate()
pub_hex = bytes(sk.verify_key).hex()
# Register only the public key
resp = requests.post("https://aip-service.fly.dev/register", json={
    "public_key": pub_hex,
    "platform": "moltbook",
    "username": "my_agent",
})
print(resp.json()) # {"did": "did:aip:...", ...}
```
Or use the secure registration script:
```bash
./cli/aip-register-secure moltbook my_agent
# Generates keys locally, registers public key, saves identity to ~/.aip/identity.json
```
## Rate Limits
| Endpoint | Limit | Scope |
|----------|-------|-------|
| `/register/easy` | 5/hour | per IP |
| `/register` | 10/hour | per IP |
| `/challenge` | 30/minute | per DID |
| `/vouch` | 20/hour | per DID |
| `/message` | 60/hour | per sender DID |
| Other endpoints | 120/minute | per IP |
Exceeding a limit returns `429 Too Many Requests` with a `Retry-After` header.
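A well-behaved client backs off on 429 responses and honors `Retry-After`. The sketch below is a generic pattern, not part of the `aip_client` API; the transport is injected as a callable so it works with any HTTP library:

```python
import time

def call_with_backoff(send, max_attempts=3, sleep=time.sleep):
    """Call `send()` (returning (status_code, headers, body)) and retry
    on HTTP 429, honoring the Retry-After header when present."""
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return status, body
        # Fall back to a 1-second pause if the server omitted Retry-After.
        delay = int(headers.get("Retry-After", 1))
        if attempt < max_attempts - 1:
            sleep(delay)
    return status, body
```

With `requests`, `send` could be `lambda: (r := requests.post(url, json=payload)).status_code, ...` wrapped appropriately; the point is only the retry logic.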
## Message Signing Format
The message signing payload format is:
```
sender_did|recipient_did|timestamp|encrypted_content
```
> **Note:** The previous format (without `encrypted_content`) still works but is **deprecated** and will be removed in a future version. Update your clients to use the new format.
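Assembling the payload is plain string concatenation; the helper name below is ours, and the resulting bytes are then signed with your local Ed25519 key (e.g. `nacl.signing.SigningKey.sign`, as used elsewhere in this README):

```python
def build_signing_payload(sender_did: str, recipient_did: str,
                          timestamp: str, encrypted_content: str) -> bytes:
    """Assemble the pipe-delimited message payload that the sender
    signs with their Ed25519 key before submitting to the relay."""
    return "|".join(
        [sender_did, recipient_did, timestamp, encrypted_content]
    ).encode()

payload = build_signing_payload("did:aip:alice", "did:aip:bob",
                                "2026-01-01T00:00:00Z", "<base64 blob>")
# signature = signing_key.sign(payload).signature  # with your local key
```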
## Python Client
The simplest way to use AIP:
```python
from aip_client import AIPClient
# Register (one-liner)
client = AIPClient.register("moltbook", "my_agent_name")
client.save("aip_credentials.json")
# Later: load credentials
client = AIPClient.from_file("aip_credentials.json")
# Vouch for another agent
vouch_id = client.vouch(
    target_did="did:aip:abc123",
    scope="CODE_SIGNING",
    statement="Reviewed their code"
)
# Quick trust check - does this agent have vouches?
trust = client.get_trust("did:aip:xyz789")
print(f"Vouched by: {trust['vouched_by']}")
print(f"Scopes: {trust['scopes']}")
# Simple boolean check
if client.is_trusted("did:aip:xyz789", scope="CODE_SIGNING"):
    print("Safe to run their code")
# Check trust path with decay scoring
result = client.get_trust_path("did:aip:xyz789")
if result["path_exists"]:
    print(f"Trust score: {result['trust_score']}")  # 0.64 = 2 hops at 0.8 decay
# Get portable vouch certificate
cert = client.get_certificate(vouch_id)
# cert can be verified offline without AIP service
```
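The `0.64 = 2 hops at 0.8 decay` figure is consistent with multiplying a per-hop decay factor along the trust path. A back-of-the-envelope helper (our own naming, not part of the client API):

```python
def decayed_trust(hops: int, decay: float = 0.8) -> float:
    """Trust score after `hops` vouch links, each attenuating by `decay`.

    A direct vouch (1 hop) scores 0.8; a friend-of-a-friend path
    (2 hops) scores 0.8 * 0.8 = 0.64, matching the README example.
    """
    return decay ** hops
```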
Install dependencies (optional, for better performance):
```bash
pip install cryptography # or pynacl
```
## Live Service
**API:** https://aip-service.fly.dev
**Docs:** https://aip-service.fly.dev/docs
**Landing:** https://the-nexus-guard.github.io/aip/
## Trust Badges
Show your AIP verification status with dynamic SVG badges:
```markdown

```
**Size variants:**
```markdown
<!-- Small (80x20) -->

<!-- Medium (120x28) - default -->

<!-- Large (160x36) -->

```
Badge states:
- **Gray "Not Found"** - DID not registered
- **Gray "Registered"** - Registered but no vouches
- **Blue "Vouched (N)"** - Has N vouches
- **Green "Verified"** - 3+ vouches with CODE_SIGNING scope
Add to your Moltbook profile, GitHub README, or documentation.
## Status
🚀 **v0.5.24** - Identity + Trust + Messaging + Skill Signing + Trust Graphs + Doctor + Offline Cache
- [x] Ed25519 identity (pure Python + PyNaCl + cryptography backends)
- [x] DID document generation
- [x] Challenge-response verification
- [x] Trust graphs with vouching
- [x] Trust path discovery (isnad chains) with **trust decay scoring**
- [x] Trust revocation
- [x] **E2E encrypted messaging** - Secure agent-to-agent communication
- [x] **Skill signing** - Sign skill.md files with your DID
- [x] **CODE_SIGNING vouches** - Trust chains for code provenance
- [x] **MCP integration** - Add AIP to Model Context Protocol
- [x] **Vouch certificates** - Portable trust proofs for offline verification
- [x] **Python client** - One-liner registration and trust operations
- [ ] Trust gossip protocol
- [ ] Reputation scoring
## CLI Tool
The AIP CLI provides command-line access to all AIP features:
```bash
# One-command setup (register + profile)
aip init moltbook my_agent --name "My Agent" --bio "I build things" --tags "ai,builder"
# Or register separately
aip register moltbook my_agent --secure
# View your identity
aip whoami
# Full dashboard
aip status
# List registered agents
aip list
# Visualize trust network
aip trust-graph
```
### All CLI Commands
| Command | Description |
|---------|-------------|
| `init` | One-command setup: register + set profile |
| `register` | Register a new agent DID |
| `verify` | Verify a signed artifact |
| `vouch` | Vouch for another agent |
| `revoke` | Revoke a vouch you previously issued |
| `sign` | Sign a skill directory or file |
| `message` | Send an encrypted message to another agent |
| `messages` | Retrieve your messages |
| `reply` | Reply to a received message by ID |
| `rotate-key` | Rotate your signing key |
| `badge` | Show trust badge for a DID |
| `list` | List registered agents |
| `trust-score` | Calculate transitive trust score between two agents |
| `trust-graph` | Visualize the AIP trust network (ascii/dot/json) |
| `status` | Dashboard: identity + network health + unread messages |
| `audit` | Self-audit: trust score, vouches, messages, profile completeness |
| `doctor` | Diagnose setup: connectivity, credentials, registration (via /trust endpoint) |
| `export` | Export your identity (DID + public key) as portable JSON |
| `import` | Import another agent's public key for offline verification |
| `search` | Search for agents by platform, username, or DID |
| `stats` | Show network statistics and growth chart |
| `profile` | View or update agent profiles |
| `webhook` | Manage webhooks (list/add/delete) |
| `changelog` | Show version changelog |
| `whoami` | Show your current identity |
| `cache` | Offline mode: sync/lookup/status/clear for offline verification |
| `migrate` | Migrate credentials between locations |
| `demo` | Interactive walkthrough without registration |
| `--version` | Show CLI version |
### Examples
```bash
# Register and save credentials
aip register -p moltbook -u my_agent --save
# Saves to ~/.aip/credentials.json (or set AIP_CREDENTIALS_PATH env var)
# Vouch for another agent with CODE_SIGNING scope
./cli/aip vouch did:aip:xyz789 --scope CODE_SIGNING --statement "Reviewed their code"
# Sign a skill directory
./cli/aip sign my_skill/
# Verify a signed skill
./cli/aip verify my_skill/
# Get badge in markdown format
./cli/aip badge did:aip:abc123 --size large --markdown
# Trust score between agents
aip trust-score did:aip:abc123 did:aip:def456
aip trust-score did:aip:abc123 did:aip:def456 --scope CODE_SIGNING
# Visualize the trust network
./cli/aip trust-graph                 # ASCII art (default)
./cli/aip trust-graph --format dot    # GraphViz DOT
./cli/aip trust-graph --format json   # Machine-readable JSON
# List all registered agents
./cli/aip list
# Reply to a message
./cli/aip reply <message_id> "Thanks for reaching out!"
```
## Skill Signing
Sign your skills with cryptographic proof of authorship:
```bash
# Using the CLI
./cli/aip sign my_skill/
# Verify a signed skill
./cli/aip verify my_skill/
```
Or via the API:
```bash
# Hash content
curl -X POST "https://aip-service.fly.dev/skill/hash?skill_content=..."
# Verify signature
curl "https://aip-service.fly.dev/skill/verify?content_hash=...&author_did=...&signature=...×tamp=..."
```
See [docs/skill_signing_tutorial.md](docs/skill_signing_tutorial.md) for the full guide.
## Architecture
```
┌─────────────────────────────────────────────┐
│ Application Layer │
│ (Moltbook, MCP, DeFi agents, skills) │
├─────────────────────────────────────────────┤
│ Communication Layer │
│ E2E Encrypted • Signed • Polling-based │
├─────────────────────────────────────────────┤
│ Skill Signing Layer │
│ Signed Skills • CODE_SIGNING Vouches │
├─────────────────────────────────────────────┤
│ Trust Layer │
│ Vouching • Trust Paths • Revocation │
├─────────────────────────────────────────────┤
│ Identity Layer │
│ Ed25519 • DIDs • Challenge-Response │
└─────────────────────────────────────────────┘
```
## MCP Integration
AIP fills the "agent identity gap" in MCP (Model Context Protocol):
```python
# Sign MCP requests with AIP
headers = {
    "X-AIP-DID": agent_did,
    "X-AIP-Timestamp": timestamp,
    "X-AIP-Signature": signature,
}
mcp_client.request(url, headers=headers)
```
See [docs/mcp_integration_guide.md](docs/mcp_integration_guide.md) for full details.
## Why Three Layers?
**Identity** tells you "this is the same agent I talked to before."
**Trust** tells you "this agent is worth talking to."
**Communication** lets you "talk securely with verified agents."
Cryptographic identity is necessary but not sufficient. You need to know not just *who* someone is, but whether they're trustworthy, and then you need a secure channel to communicate. AIP provides all three.
## Documentation
- [🚀 Getting Started](docs/getting-started.md) - Install, register, sign, message — step by step
- [📝 Signing Reference](docs/signing-reference.md) - Every signed endpoint, payload formats, and code examples
- [Skill Signing Spec](docs/skill_signing_spec.md) - Full specification
- [Skill Signing Tutorial](docs/skill_signing_tutorial.md) - Step-by-step guide
- [**AIP for Skill Authors**](docs/tutorials/skill-signing.md) - Sign your skill in 3 commands
- [MCP Integration Guide](docs/mcp_integration_guide.md) - Add AIP to MCP
## License
MIT
## Contact
Built by The_Nexus_Guard_001 (agent) and @hauspost (human)
- GitHub: https://github.com/The-Nexus-Guard/aip
- DID: did:aip:c1965a89866ecbfaad49803e6ced70fb
| text/markdown | The_Nexus_Guard_001 | null | null | null | MIT | ai, agent, identity, trust, cryptography, did, decentralized, ed25519, verification, mcp, llm, multi-agent, authentication, digital-identity, signing, encrypted-messaging, vouch, reputation | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | null | null | >=3.8 | [] | [] | [] | [
"pynacl>=1.5.0",
"requests>=2.28.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://the-nexus-guard.github.io/aip/",
"Documentation, https://aip-service.fly.dev/docs",
"Repository, https://github.com/The-Nexus-Guard/aip"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T13:22:44.667095 | aip_identity-0.5.24.tar.gz | 82,215 | e2/e7/5467a647ea8896ff58c6b3eed7296bccfaa350124715d9f0cd02ca0ced32/aip_identity-0.5.24.tar.gz | source | sdist | null | false | 22f3b1f7730706143abc062c86c765ff | d97020532c5da1c3838c62682f7774a89e1fd4f1dc0348ff487de4b9a06e6a5e | e2e75467a647ea8896ff58c6b3eed7296bccfaa350124715d9f0cd02ca0ced32 | null | [] | 240 |
2.4 | pythonstl | 0.1.4 | C++ STL-style containers implemented in Python using the Facade Design Pattern | # PythonSTL - Python Standard Template Library
[](https://pepy.tech/project/pythonstl)
[](https://www.python.org/downloads/)
[](https://pypi.org/project/pythonstl/)
[](LICENSE)
<br>
<div align="center">
<img width="500" height="500" alt="pythonstl_logo" src="https://github.com/user-attachments/assets/7ef83b5f-d005-48e0-a186-05dd7e2221c2" />
</div><br>
A Python package that replicates C++ STL-style data structures using the **Facade Design Pattern**. PythonSTL provides clean, familiar interfaces for developers coming from C++ while maintaining Pythonic best practices.
## Features
- **C++ STL Compliance**: Exact method names and semantics matching C++ STL
- **Facade Design Pattern**: Clean separation between interface and implementation
- **Iterator Support**: STL-style iterators (begin, end, rbegin, rend) and Python iteration
- **Python Integration**: Magic methods (`__len__`, `__bool__`, `__contains__`, `__repr__`, `__eq__`)
- **Type Safety**: Full type hints throughout the codebase
- **Copy Operations**: Deep copy support with `copy()`, `__copy__()`, and `__deepcopy__()`
- **Comprehensive Documentation**: Detailed docstrings with time complexity annotations
- **Production Quality**: Proper error handling, PEP8 compliance, and extensive testing
- **Zero Dependencies**: Core package has no external dependencies
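To illustrate the Facade idea named above, here is a conceptual sketch (not PythonSTL's actual source) of a stack whose C++ STL-style surface delegates to a hidden implementation object:

```python
class _StackImpl:
    """Hidden implementation detail: a plain Python list."""
    def __init__(self):
        self.data = []

class stack:
    """Facade: exposes only the C++ STL-style interface."""
    def __init__(self):
        self._impl = _StackImpl()

    def push(self, value):  # O(1) amortized
        self._impl.data.append(value)

    def pop(self):  # O(1); like C++ std::stack, returns nothing
        if not self._impl.data:
            raise IndexError("pop from empty stack")
        self._impl.data.pop()

    def top(self):  # O(1)
        if not self._impl.data:
            raise IndexError("top of empty stack")
        return self._impl.data[-1]

    def empty(self):
        return not self._impl.data

    def size(self):
        return len(self._impl.data)

    def __len__(self):
        return self.size()
```

Swapping `_StackImpl` for another backing structure would leave the facade's interface, and all calling code, unchanged; that separation is the point of the pattern.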
## 📦 Installation
```bash
pip install pythonstl
```
Or install from source:
```bash
git clone https://github.com/AnshMNSoni/PythonSTL.git
cd PythonSTL
pip install -e .
```
## Quick Start
```python
from pythonstl import stack, queue, vector, stl_set, stl_map, priority_queue
# Stack (LIFO) - Now with Python magic methods!
s = stack()
s.push(10)
s.push(20)
print(s.top()) # 20
print(len(s)) # 2 - Python len() support
print(bool(s)) # True - Python bool() support
# Vector (Dynamic Array) - With iterators!
v = vector()
v.push_back(100)
v.push_back(200)
v.push_back(300)
v.reserve(1000) # Pre-allocate capacity
print(len(v)) # 3
print(200 in v) # True - Python 'in' operator
# Iterate using STL-style iterators
for elem in v.begin():
    print(elem)
# Or use Python iteration
for elem in v:
    print(elem)
# Set (Unique Elements) - With magic methods
s = stl_set()
s.insert(5)
s.insert(10)
print(5 in s) # True
print(len(s)) # 2
# Map (Key-Value Pairs) - With iteration
m = stl_map()
m.insert("key1", 100)
m.insert("key2", 200)
print("key1" in m) # True
for key, value in m:
    print(f"{key}: {value}")
# Priority Queue - With comparator support
pq_max = priority_queue(comparator="max") # Max-heap (default)
pq_min = priority_queue(comparator="min") # Min-heap
pq_max.push(30)
pq_max.push(10)
pq_max.push(20)
print(pq_max.top()) # 30
```
## Data Structures
### Stack
LIFO (Last-In-First-Out) container adapter.
**Methods:**
- `push(value)` - Add element to top
- `pop()` - Remove top element
- `top()` - Access top element
- `empty()` - Check if empty
- `size()` - Get number of elements
- `copy()` - Create deep copy
**Python Integration:**
- `len(s)` - Get size
- `bool(s)` - Check if non-empty
- `repr(s)` - String representation
- `s1 == s2` - Equality comparison
### Queue
FIFO (First-In-First-Out) container adapter.
**Methods:**
- `push(value)` - Add element to back
- `pop()` - Remove front element
- `front()` - Access front element
- `back()` - Access back element
- `empty()` - Check if empty
- `size()` - Get number of elements
- `copy()` - Create deep copy
**Python Integration:**
- `len(q)` - Get size
- `bool(q)` - Check if non-empty
- `repr(q)` - String representation
- `q1 == q2` - Equality comparison
### Vector
Dynamic array with capacity management.
**Methods:**
- `push_back(value)` - Add element to end
- `pop_back()` - Remove last element
- `at(index)` - Access element with bounds checking
- `insert(position, value)` - Insert element at position
- `erase(position)` - Remove element at position
- `clear()` - Remove all elements
- `reserve(capacity)` - Pre-allocate capacity
- `shrink_to_fit()` - Reduce capacity to size
- `size()` - Get number of elements
- `capacity()` - Get current capacity
- `empty()` - Check if empty
- `begin()` - Get forward iterator
- `end()` - Get end iterator
- `rbegin()` - Get reverse iterator
- `rend()` - Get reverse end iterator
- `copy()` - Create deep copy
**Python Integration:**
- `len(v)` - Get size
- `bool(v)` - Check if non-empty
- `value in v` - Check if value exists
- `repr(v)` - String representation
- `v1 == v2` - Equality comparison
- `v1 < v2` - Lexicographic comparison
- `for elem in v` - Python iteration
### Set
Associative container storing unique elements.
**Methods:**
- `insert(value)` - Add element
- `erase(value)` - Remove element
- `find(value)` - Check if element exists
- `empty()` - Check if empty
- `size()` - Get number of elements
- `begin()` - Get iterator
- `end()` - Get end iterator
- `copy()` - Create deep copy
**Python Integration:**
- `len(s)` - Get size
- `bool(s)` - Check if non-empty
- `value in s` - Check if value exists
- `repr(s)` - String representation
- `s1 == s2` - Equality comparison
- `for elem in s` - Python iteration
### Map
Associative container storing key-value pairs.
**Methods:**
- `insert(key, value)` - Add or update key-value pair
- `erase(key)` - Remove key-value pair
- `find(key)` - Check if key exists
- `at(key)` - Access value by key
- `empty()` - Check if empty
- `size()` - Get number of pairs
- `begin()` - Get iterator
- `end()` - Get end iterator
- `copy()` - Create deep copy
**Python Integration:**
- `len(m)` - Get size
- `bool(m)` - Check if non-empty
- `key in m` - Check if key exists
- `repr(m)` - String representation
- `m1 == m2` - Equality comparison
- `for key, value in m` - Python iteration
### Priority Queue
Container adapter providing priority-based access.
**Methods:**
- `push(value)` - Insert element
- `pop()` - Remove top element
- `top()` - Access top element
- `empty()` - Check if empty
- `size()` - Get number of elements
- `copy()` - Create deep copy
**Comparator Support:**
- `priority_queue(comparator="max")` - Max-heap (default)
- `priority_queue(comparator="min")` - Min-heap
**Python Integration:**
- `len(pq)` - Get size
- `bool(pq)` - Check if non-empty
- `repr(pq)` - String representation
- `pq1 == pq2` - Equality comparison
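The `max`/`min` comparator behaviour can be sketched with Python's built-in `heapq` module, which implements a min-heap: negating values on the way in and out yields a max-heap. This is an illustrative sketch of the idea only; PythonSTL's actual implementation may differ.

```python
import heapq

class SimplePriorityQueue:
    """Toy min/max priority queue built on heapq (a binary min-heap)."""

    def __init__(self, comparator="max"):
        # Negate stored values for a max-heap on top of heapq's min-heap.
        self._sign = -1 if comparator == "max" else 1
        self._heap = []

    def push(self, value):
        heapq.heappush(self._heap, self._sign * value)  # O(log n)

    def pop(self):
        return self._sign * heapq.heappop(self._heap)  # O(log n)

    def top(self):
        return self._sign * self._heap[0]  # O(1)

    def __len__(self):
        return len(self._heap)

pq = SimplePriorityQueue(comparator="max")
for v in (30, 10, 20):
    pq.push(v)
print(pq.top())  # 30
```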
## Time Complexity Reference
| Container | Operation | Complexity |
|-----------|-----------|------------|
| **Stack** | push() | O(1) amortized |
| | pop() | O(1) |
| | top() | O(1) |
| **Queue** | push() | O(1) |
| | pop() | O(1) |
| | front() / back() | O(1) |
| **Vector** | push_back() | O(1) amortized |
| | pop_back() | O(1) |
| | at() | O(1) |
| | insert() | O(n) |
| | erase() | O(n) |
| | reserve() | O(1) |
| | shrink_to_fit() | O(1) |
| **Set** | insert() | O(1) average |
| | erase() | O(1) average |
| | find() | O(1) average |
| **Map** | insert() | O(1) average |
| | erase() | O(1) average |
| | find() | O(1) average |
| | at() | O(1) average |
| **Priority Queue** | push() | O(log n) |
| | pop() | O(log n) |
| | top() | O(1) |
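The "O(1) amortized" entry for `push_back()` comes from geometric capacity growth: when the backing store is full its capacity is doubled, so n pushes perform only O(n) element copies in total. A minimal sketch of that idea (not PythonSTL's actual implementation, which builds on Python lists):

```python
class GrowableArray:
    """Toy dynamic array showing amortized O(1) push_back via doubling."""

    def __init__(self):
        self._buf = [None]  # fixed-size backing store, capacity 1
        self._size = 0
        self.copies = 0  # counts element copies caused by regrowth

    def push_back(self, value):
        if self._size == len(self._buf):  # full: double the capacity
            new_buf = [None] * (2 * len(self._buf))
            for i in range(self._size):
                new_buf[i] = self._buf[i]
                self.copies += 1
            self._buf = new_buf
        self._buf[self._size] = value
        self._size += 1

a = GrowableArray()
for i in range(1000):
    a.push_back(i)
# Doublings happen at sizes 1, 2, 4, ..., 512, copying 1+2+...+512 = 1023
# elements in total: fewer than 2 copies per push on average.
print(a.copies)  # 1023
```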
## 🏗️ Architecture
PythonSTL follows the **Facade Design Pattern** with three layers:
1. **Core Layer** (`pythonstl/core/`)
- Base classes and type definitions
- Custom exceptions
- Iterator classes
2. **Implementation Layer** (`pythonstl/implementations/`)
- Private implementation classes (prefixed with `_`)
- Efficient use of Python built-ins
- Not intended for direct user access
3. **Facade Layer** (`pythonstl/facade/`)
- Public-facing classes
- Clean, STL-compliant API
- Delegates to implementation layer
This architecture ensures:
- **Encapsulation**: Internal implementation is hidden
- **Maintainability**: Easy to modify internals without breaking API
- **Testability**: Each layer can be tested independently
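As an illustration of how the layers cooperate (a simplified sketch, not the package's actual source), a public facade class delegates every call to a private implementation class:

```python
class _StackImpl:
    """Implementation layer: private, built on a Python list."""

    def __init__(self):
        self._data = []

    def push(self, value):
        self._data.append(value)

    def pop(self):
        if not self._data:
            raise IndexError("pop from empty stack")
        return self._data.pop()

    def size(self):
        return len(self._data)

class Stack:
    """Facade layer: the public, STL-style API users interact with."""

    def __init__(self):
        self._impl = _StackImpl()  # internals hidden behind the facade

    def push(self, value):
        self._impl.push(value)

    def pop(self):
        return self._impl.pop()

    def empty(self):
        return self._impl.size() == 0

    def __len__(self):
        return self._impl.size()

s = Stack()
s.push(10)
s.push(20)
print(len(s))  # 2
```

Because users only ever see `Stack`, the `_StackImpl` internals can be swapped (e.g. for a deque) without breaking the public API.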
## Thread Safety
**Important:** PythonSTL containers are **NOT thread-safe** by default. If you need to use them in a multi-threaded environment, you must provide your own synchronization (e.g., using `threading.Lock`).
```python
import threading
from pythonstl import stack
s = stack()
lock = threading.Lock()
def thread_safe_push(value):
    with lock:
        s.push(value)
```
## Design Decisions
### Why Facade Pattern?
- **Clean API**: Users interact with simple, well-defined interfaces
- **Flexibility**: Internal implementation can change without affecting users
- **Type Safety**: Facade layer enforces type contracts
- **Error Handling**: Consistent error messages across all containers
### Why STL Naming?
- **Familiarity**: C++ developers can use PythonSTL immediately
- **Consistency**: Predictable method names across containers
- **Documentation**: Extensive C++ STL documentation applies
### Python Integration
Full Python integration while maintaining STL compatibility:
- Magic methods for natural Python usage
- Iterator protocol support
- Copy protocol support
- Maintains backward compatibility
## Benchmarks
PythonSTL provides benchmarks comparing performance against Python built-ins:
```bash
python benchmarks/benchmark_stack.py
python benchmarks/benchmark_vector.py
python benchmarks/benchmark_map.py
```
**Expected Overhead:** 1.1x - 1.5x compared to native Python structures
The facade pattern adds minimal overhead while providing:
- STL-style API
- Better error messages
- Bounds checking
- Type safety
See `benchmarks/README.md` for detailed analysis.
## Testing
Run the test suite:
```bash
# Install test dependencies
pip install pytest pytest-cov
# Run tests
pytest tests/
# Run with coverage
pytest tests/ --cov=pythonstl --cov-report=html
```
## 🛠️ Development
### Setup
```bash
git clone https://github.com/AnshMNSoni/PythonSTL.git
cd PythonSTL
pip install -e ".[dev]"
```
### Code Quality
```bash
# Type checking
mypy pythonstl/
# Linting
flake8 pythonstl/
# Run all checks
pytest && mypy pythonstl/ && flake8 pythonstl/
```
## Note
➡️ The goal is NOT to replace Python built-ins.<br>
➡️ The goal is to provide: 1) conceptual clarity, 2) STL familiarity for C++ developers, and 3) a structured learning bridge for data structures & algorithms (DSA). <br>
## 📝 License
MIT License - see LICENSE file for details.
## 🤝 Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Add tests for new features
4. Ensure all tests pass
5. Submit a pull request
## Thank You
## Contact
- GitHub: [@AnshMNSoni](https://github.com/AnshMNSoni)
- Issues: [GitHub Issues](https://github.com/AnshMNSoni/PythonSTL/issues)
**PythonSTL v0.1.4** - Bringing C++ STL elegance to Python
| text/markdown | PySTL Contributors | PySTL Contributors <pythonstl@example.com> | null | null | MIT | stl, data-structures, containers, facade-pattern, cpp-stl, standard-template-library | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Develop... | [] | https://github.com/yourusername/pystl | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/AnshMNSoni/STL",
"Repository, https://github.com/AnshMNSoni/STL",
"Issues, https://github.com/AnshMNSoni/STL/issues",
"Documentation, https://github.com/AnshMNSoni/STL#readme"
] | twine/6.2.0 CPython/3.11.4 | 2026-02-18T13:22:04.936541 | pythonstl-0.1.4.tar.gz | 23,418 | 43/2b/a316c629b4e25b067d62779869f0dbc5ab22cf858339f85639672a1de09a/pythonstl-0.1.4.tar.gz | source | sdist | null | false | 20dfdd19803c509bb43daeaaea8cfd47 | 5f43ee2b523d4e124ebfdff73eee60c8aee10c4b791b62b07a207335f9a0a0d0 | 432ba316c629b4e25b067d62779869f0dbc5ab22cf858339f85639672a1de09a | null | [
"LICENSE"
] | 237 |
2.4 | assignment-hub-jupyterlab | 0.1.1 | Assignment Hub JupyterLab extension | # assignment_hub_jupyterlab
[](https://github.com/enuri14/AssignmntPortal/actions/workflows/build.yml)
A JupyterLab extension.
## Requirements
- JupyterLab >= 4.0.0
## Install
To install the extension, execute:
```bash
pip install assignment_hub_jupyterlab
```
## Uninstall
To remove the extension, execute:
```bash
pip uninstall assignment_hub_jupyterlab
```
## Contributing
### Development install
Note: You will need NodeJS to build the extension package.
The `jlpm` command is JupyterLab's pinned version of
[yarn](https://yarnpkg.com/) that is installed with JupyterLab. You may use
`yarn` or `npm` in lieu of `jlpm` below.
```bash
# Clone the repo to your local environment
# Change directory to the assignment_hub_jupyterlab directory
# Set up a virtual environment and install package in development mode
python -m venv .venv
source .venv/bin/activate
pip install --editable "."
# Link your development version of the extension with JupyterLab
jupyter labextension develop . --overwrite
# Rebuild extension Typescript source after making changes
# IMPORTANT: Unlike the steps above which are performed only once, do this step
# every time you make a change.
jlpm build
```
You can watch the source directory and run JupyterLab at the same time in different terminals to watch for changes in the extension's source and automatically rebuild the extension.
```bash
# Watch the source directory in one terminal, automatically rebuilding when needed
jlpm watch
# Run JupyterLab in another terminal
jupyter lab
```
With the watch command running, every saved change will immediately be built locally and available in your running JupyterLab. Refresh JupyterLab to load the change in your browser (you may need to wait several seconds for the extension to be rebuilt).
By default, the `jlpm build` command generates the source maps for this extension to make it easier to debug using the browser dev tools. To also generate source maps for the JupyterLab core extensions, you can run the following command:
```bash
jupyter lab build --minimize=False
```
### Development uninstall
```bash
pip uninstall assignment_hub_jupyterlab
```
In development mode, you will also need to remove the symlink created by `jupyter labextension develop`
command. To find its location, you can run `jupyter labextension list` to figure out where the `labextensions`
folder is located. Then you can remove the symlink named `assignment-hub-jupyterlab` within that folder.
### Packaging the extension
See [RELEASE](RELEASE.md)
| text/markdown | null | null | null | null | BSD 3-Clause License
Copyright (c) 2026, Enuri
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | null | [
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Framework :: Jupyter :: JupyterLab :: Extensions",
"Framework :: Jupyter :: JupyterLab :: Extensions :: Prebuilt",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programm... | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.23 | 2026-02-18T13:21:55.319686 | assignment_hub_jupyterlab-0.1.1.tar.gz | 142,577 | 83/d4/3f8bfa35dfd04f20b7c057230585800dc2b3fee7fc90753f435d64ca007b/assignment_hub_jupyterlab-0.1.1.tar.gz | source | sdist | null | false | 1466f928cb59d71e0b0d45393d152961 | 2ed36202c300c995c8b00cee7e2a927544a9fbeda18ddcb2c7ce929b5e6734c2 | 83d43f8bfa35dfd04f20b7c057230585800dc2b3fee7fc90753f435d64ca007b | null | [
"LICENSE"
] | 239 |
2.4 | linopy | 0.6.4 | Linear optimization with N-D labeled arrays in Python | # linopy: Optimization with array-like variables and constraints
[](https://pypi.org/project/linopy/)
[](LICENSE.txt)
[](https://github.com/PyPSA/linopy/actions/workflows/test.yml)
[](https://linopy.readthedocs.io/en/latest/)
[](https://codecov.io/gh/PyPSA/linopy)
**L**inear\
**I**nteger\
**N**on-linear\
**O**ptimization in\
**PY**thon
**linopy** is an open-source python package that facilitates **optimization** with **real world data**. It builds a bridge between data analysis packages like [xarray](https://github.com/pydata/xarray) & [pandas](https://pandas.pydata.org/) and problem solvers like [cbc](https://projects.coin-or.org/Cbc), [gurobi](https://www.gurobi.com/) (see the full list below). **Linopy** supports **Linear, Integer, Mixed-Integer and Quadratic Programming** while aiming to make linear programming in Python easy, highly-flexible and performant.
## Benchmarks
**linopy** is designed to be fast and efficient. The following benchmark compares the performance of **linopy** with the alternative popular optimization packages.

## Main features
**linopy** is heavily based on [xarray](https://github.com/pydata/xarray) which allows for many flexible data-handling features:
* Define (arrays of) continuous or binary variables with **coordinates**, e.g. time, consumers, etc.
* Apply **arithmetic operations** on the variables like adding, subtracting and multiplying, with all the **broadcasting** potential of xarray
* Apply **arithmetic operations** on the **linear expressions** (combination of variables)
* **Group terms** of a linear expression by coordinates
* Get insight into the **clear and transparent data model**
* **Modify** and **delete** assigned variables and constraints on the fly
* Use **lazy operations** for large linear programs with [dask](https://dask.org/)
* Choose from **different commercial and non-commercial solvers**
* **Import and export** a linear model quickly using xarray's netCDF IO
## Installation
So far **linopy** is available on the PyPI repository
```bash
pip install linopy
```
or on conda-forge
```bash
conda install -c conda-forge linopy
```
## In a Nutshell
Linopy aims to make optimization programs transparent and flexible. To illustrate its usage, let's consider a scenario where we aim to minimize the cost of buying apples and bananas over a week, subject to daily and weekly vitamin intake constraints.
```python
>>> import pandas as pd
>>> import linopy
>>> m = linopy.Model()
>>> days = pd.Index(["Mon", "Tue", "Wed", "Thu", "Fri"], name="day")
>>> apples = m.add_variables(lower=0, name="apples", coords=[days])
>>> bananas = m.add_variables(lower=0, name="bananas", coords=[days])
>>> apples
```
```
Variable (day: 5)
-----------------
[Mon]: apples[Mon] ∈ [0, inf]
[Tue]: apples[Tue] ∈ [0, inf]
[Wed]: apples[Wed] ∈ [0, inf]
[Thu]: apples[Thu] ∈ [0, inf]
[Fri]: apples[Fri] ∈ [0, inf]
```
Add daily vitamin constraints
```python
>>> m.add_constraints(3 * apples + 2 * bananas >= 8, name="daily_vitamins")
```
```
Constraint `daily_vitamins` (day: 5):
-------------------------------------
[Mon]: +3 apples[Mon] + 2 bananas[Mon] ≥ 8
[Tue]: +3 apples[Tue] + 2 bananas[Tue] ≥ 8
[Wed]: +3 apples[Wed] + 2 bananas[Wed] ≥ 8
[Thu]: +3 apples[Thu] + 2 bananas[Thu] ≥ 8
[Fri]: +3 apples[Fri] + 2 bananas[Fri] ≥ 8
```
Add weekly vitamin constraint
```python
>>> m.add_constraints((3 * apples + 2 * bananas).sum() >= 50, name="weekly_vitamins")
```
```
Constraint `weekly_vitamins`
----------------------------
+3 apples[Mon] + 2 bananas[Mon] + 3 apples[Tue] ... +2 bananas[Thu] + 3 apples[Fri] + 2 bananas[Fri] ≥ 50
```
Define the prices of apples and bananas and the objective function
```python
>>> apple_price = [1, 1.5, 1, 2, 1]
>>> banana_price = [1, 1, 0.5, 1, 0.5]
>>> m.objective = apple_price * apples + banana_price * bananas
```
Finally, we can solve the problem and get the optimal solution:
```python
>>> m.solve()
>>> m.objective.value
```
```
17.166
```
... and display the solution as a pandas DataFrame
```python
>>> m.solution.to_pandas()
```
```
apples bananas
day
Mon 2.667 0
Tue 0 4
Wed 0 9
Thu 0 4
Fri 0 4
```
## Supported solvers
**linopy** supports the following solvers
* [Cbc](https://projects.coin-or.org/Cbc)
* [GLPK](https://www.gnu.org/software/glpk/)
* [HiGHS](https://highs.dev/)
* [Gurobi](https://www.gurobi.com/)
* [Xpress](https://www.fico.com/en/products/fico-xpress-solver)
* [Cplex](https://www.ibm.com/de-de/analytics/cplex-optimizer)
* [MOSEK](https://www.mosek.com/)
* [COPT](https://www.shanshu.ai/copt)
* [cuPDLPx](https://github.com/MIT-Lu-Lab/cuPDLPx)
* [Knitro](https://www.artelys.com/solvers/knitro/)
Note that these solvers have to be installed separately by the user.
## Development Setup
To set up a local development environment for linopy and to run the same tests that are run in the CI, you can run:
```sh
python -m venv venv
source venv/bin/activate
pip install uv
uv pip install -e .[dev,solvers]
pytest
```
The `-e` flag of the install command installs the `linopy` package in editable mode, which means that the virtualenv (and thus the tests) will run the code from your local checkout.
## Citing Linopy
If you use Linopy in your research, please cite the following paper:
- Hofmann, F., (2023). Linopy: Linear optimization with n-dimensional labeled variables.
Journal of Open Source Software, 8(84), 4823, [https://doi.org/10.21105/joss.04823](https://doi.org/10.21105/joss.04823)
A BibTeX entry for LaTeX users is
```latex
@article{Hofmann2023,
doi = {10.21105/joss.04823},
url = {https://doi.org/10.21105/joss.04823},
year = {2023}, publisher = {The Open Journal},
volume = {8},
number = {84},
pages = {4823},
author = {Fabian Hofmann},
title = {Linopy: Linear optimization with n-dimensional labeled variables},
journal = {Journal of Open Source Software}
}
```
## License
Copyright 2021 Fabian Hofmann
This package is published under MIT license. See [LICENSE.txt](LICENSE.txt) for details.
| text/markdown | null | Fabian Hofmann <fabianmarikhofmann@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT ... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy; python_version > \"3.10\"",
"numpy<2; python_version <= \"3.10\"",
"scipy",
"bottleneck",
"toolz",
"numexpr",
"xarray>=2024.2.0",
"dask>=0.18.0",
"polars>=1.31.1",
"tqdm",
"deprecation",
"packaging",
"google-cloud-storage",
"requests",
"ipython==8.26.0; extra == \"docs\"",
"num... | [] | [] | [] | [
"Homepage, https://github.com/PyPSA/linopy",
"Source, https://github.com/PyPSA/linopy"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:21:42.389569 | linopy-0.6.4.tar.gz | 1,268,000 | a7/0f/c4921543922bf753ab61bec45a5062e4ef9f1e0a7a189f084fea8c80f4ea/linopy-0.6.4.tar.gz | source | sdist | null | false | a26ad461715e4b6f5bc9cbe5493c81f5 | 41c6ca38ab4239eb46f41737f339e6bdb7564a94870668fb25898bc3452f0368 | a70fc4921543922bf753ab61bec45a5062e4ef9f1e0a7a189f084fea8c80f4ea | null | [
"LICENSE.txt"
] | 5,331 |
2.4 | py2puml | 0.11.0 | Generate PlantUML class diagrams to document your Python application. | <div align="center">
<a href="https://www.python.org/psf-landing/" target="_blank">
<img width="350px" alt="Python logo"
src="https://www.python.org/static/community_logos/python-logo-generic.svg" />
</a>
<a href="https://plantuml.com/" target="_blank">
<img width="116px" height="112px" alt="PlantUML logo" src="https://cdn-0.plantuml.com/logoc.png" style="margin-bottom: 40px" vspace="40px" />
</a>
<h1>Python to PlantUML</h1>
</div>
Generate PlantUML class diagrams to document your Python application.
[](https://results.pre-commit.ci/latest/github/lucsorel/py2puml/main)
`py2puml` uses [pre-commit hooks](https://pre-commit.com/) and [pre-commit.ci Continuous Integration](https://pre-commit.ci/) to enforce commit messages, code formatting and linting for quality and consistency sake.
See the [code conventions](#code-conventions) section if you would like to contribute to the project.
## Installation
`py2puml` is a command-line interface (CLI) documentation tool that can be installed as a dependency of your project, or installed globally on your system, or even run in an isolated way.
### Install as a project dependency
Install `py2puml` from [PyPI](https://pypi.org/project/py2puml/) with your favorite installation tool:
```sh
pip install py2puml
uv add py2puml
poetry add py2puml
pipenv install py2puml
```
### Run as an isolated binary
Uv can download and install py2puml on your system and run it in an isolated way (no influence on your other Python tools):
```sh
uvx --isolated py2puml --help
```
## Usage
The primary purpose of `py2puml` is to document domain models as [PlantUML class diagrams](https://plantuml.com/en/class-diagram): it focuses on data-structure attributes and relationships (inheritance and composition/association).
Documenting methods may come later.
Once `py2puml` is installed, an eponymous CLI is available in your environment shell.
### Generate documentation in the standard output
Give `py2puml` a package (a folder) or a module (a `.py` file) to inspect and it will generate the PlantUML diagram either in the standard output or in a file path:
To document the domain model used by `py2puml` to model data structures:
```sh
# at the root of the py2puml project
py2puml --path py2puml/domain
# short-flag version:
py2puml -p py2puml/domain
```
This outputs the following PlantUML content:
```plantuml
@startuml py2puml.domain
!pragma useIntermediatePackages false
class py2puml.domain.umlitem.UmlItem {
name: str
fqn: str
}
class py2puml.domain.umlrelation.UmlRelation {
source_fqn: str
target_fqn: str
type: RelType
}
class py2puml.domain.inspection.Inspection {
items_by_fqn: Any
relations: Any
}
class py2puml.domain.umlclass.UmlAttribute {
name: str
type: str
static: bool
}
class py2puml.domain.umlclass.UmlClass {
attributes: List[UmlAttribute]
is_abstract: bool
}
class py2puml.domain.umlenum.Member {
name: str
value: str
}
class py2puml.domain.umlenum.UmlEnum {
members: List[Member]
}
enum py2puml.domain.umlrelation.RelType {
COMPOSITION: * {static}
INHERITANCE: <| {static}
}
py2puml.domain.umlrelation.UmlRelation *-- py2puml.domain.umlrelation.RelType
py2puml.domain.umlclass.UmlClass *-- py2puml.domain.umlclass.UmlAttribute
py2puml.domain.umlitem.UmlItem <|-- py2puml.domain.umlclass.UmlClass
py2puml.domain.umlenum.UmlEnum *-- py2puml.domain.umlenum.Member
py2puml.domain.umlitem.UmlItem <|-- py2puml.domain.umlenum.UmlEnum
footer Generated by //py2puml//
@enduml
```
Using PlantUML (online or with IDE extensions) renders this content as follows:

### Pipe the diagram in a local PlantUML server
Pipe the result of the CLI with a PlantUML server for instantaneous documentation (rendered by ImageMagick):
```sh
# runs a local PlantUML server from a docker container:
docker run -d --rm -p 1234:8080 --name plantumlserver plantuml/plantuml-server:jetty
py2puml -p py2puml/domain | curl -X POST --data-binary @- http://localhost:1234/svg/ --output - | display
# stop the container when you don't need it anymore, restart it later
docker stop plantumlserver
docker start plantumlserver
```
### Generate documentation in a file
```sh
py2puml --path py2puml/domain --output-file py2puml-domain.puml
# short-flag version:
py2puml -p py2puml/domain -o py2puml-domain.puml
```
### Generate documentation for a specific module
```sh
py2puml --path py2puml/domain/umlitem.py
```
```plantuml
@startuml py2puml.domain.umlitem
!pragma useIntermediatePackages false
class py2puml.domain.umlitem.UmlItem {
name: str
fqn: str
}
footer Generated by //py2puml//
@enduml
```
### Generate documentation for a project with a src folder
Use the `--path` flag to indicate the path to the root namespace of the project and the `--namespace` flag to indicate that the "src" part should be ignored:
```sh
py2puml -p src/project -n project
```
Note: `py2puml` won't automatically handle the "src" part if it is in the middle of the path to inspect.
### Use py2puml outside the namespace root
By default, `py2puml` derives the Python namespace from the given path, assuming the command is called from the root namespace:
```sh
py2puml --path py2puml/domain
# is equivalent to:
py2puml --path py2puml/domain --namespace py2puml.domain
# short-flag version
py2puml -p py2puml/domain -n py2puml.domain
```
But sometimes your shell may be positioned outside the namespace folder, or within it.
In such cases, you must specify the namespace of the domain to inspect so that `py2puml` can inspect it properly and follow the imports in the inspected package or modules:
```sh
# from your home folder:
# - for a package
py2puml --path repositories/py2puml/py2puml/domain --namespace py2puml.domain
# -> py2puml will move down its "inspection working directory" to repositories/py2puml
# - for a module
py2puml -p repositories/py2puml/py2puml/domain/umlitem.py -n py2puml.domain.umlitem
# from a sub-package of the project to inspect (in py2puml/domain)
# - for a package
py2puml --path . --namespace py2puml.domain
# -> py2puml will move its "inspection working directory" up 2 folders in order to be at the root namespace
# - for a module
py2puml -p umlitem.py -n py2puml.domain.umlitem
```
### Help commands
For a full overview of the CLI, run:
```sh
# documents the available flags and their description
py2puml --help
# displays the installed version
py2puml --version
# -> py2puml 0.11.0
```
### Python API
To programmatically create the diagram of the `py2puml` domain classes, import the `py2puml` function in your script:
```python
from py2puml.py2puml import py2puml
if __name__ == '__main__':
    # 1. outputs the PlantUML content in the terminal
    print(''.join(py2puml('py2puml/domain', 'py2puml.domain')))
    # 2. or writes the PlantUML content in a file
    with open('py2puml/py2puml.domain.puml', 'w', encoding='utf8') as puml_file:
        puml_file.writelines(py2puml('py2puml/domain', 'py2puml.domain'))
```
## How it works
`py2puml` internally uses code [inspection](https://docs.python.org/3/library/inspect.html) (also called *reflection* in other programming languages) and [abstract syntax tree](https://docs.python.org/3/library/ast.html) parsing to retrieve relevant information.
### Features
From a given path corresponding to a folder containing Python code, `py2puml` inspects each Python module and generates a [PlantUML diagram](https://plantuml.com/en/class-diagram) from the definitions of various data structures using:
* **[inspection](https://docs.python.org/3/library/inspect.html)** and [type annotations](https://docs.python.org/3/library/typing.html) to detect:
* static class attributes and [dataclass](https://docs.python.org/3/library/dataclasses.html) fields
* fields of [namedtuples](https://docs.python.org/3/library/collections.html#collections.namedtuple)
* members of [enumerations](https://docs.python.org/3/library/enum.html)
* composition and inheritance relationships.
The detection of composition relationships relies on type annotations only, assigned values or expressions are never evaluated to prevent unwanted side-effects
* parsing **[abstract syntax trees](https://docs.python.org/3/library/ast.html#ast.NodeVisitor)** to detect the instance attributes defined in `__init__` constructors
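The AST-based detection of constructor attributes can be illustrated with a short `ast.NodeVisitor` that records every `self.<attr> = ...` assignment found in an `__init__` body. This is a simplified sketch of the technique, not py2puml's actual code (it only handles plain assignments, not annotated ones):

```python
import ast

class InitAttributeCollector(ast.NodeVisitor):
    """Collects names of attributes assigned to `self` inside __init__."""

    def __init__(self):
        self.attributes = []

    def visit_FunctionDef(self, node):
        if node.name == '__init__':
            for stmt in ast.walk(node):
                if isinstance(stmt, ast.Assign):
                    for target in stmt.targets:
                        # Match `self.<attr> = ...` targets only
                        if (isinstance(target, ast.Attribute)
                                and isinstance(target.value, ast.Name)
                                and target.value.id == 'self'):
                            self.attributes.append(target.attr)
        self.generic_visit(node)

source = '''
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y
'''
collector = InitAttributeCollector()
collector.visit(ast.parse(source))
print(collector.attributes)  # ['x', 'y']
```

Because only the syntax tree is walked, the assigned expressions are never evaluated, which matches the no-side-effects principle stated above.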
`py2puml` outputs diagrams in PlantUML syntax, which can be:
* versioned along your code with a unit-test ensuring its consistency (see the [test_py2puml.py's test_assert_domain_documentation](tests/py2puml/test_py2puml.py) example).
You can also use the `assert_py2puml_command_args` utility from [py2puml.asserts](py2puml/asserts.py) to check the output of a `py2puml` command against a versioned file (that you can easily update):
```python
from py2puml.asserts import assert_py2puml_command_args
def test_assert_domain_documentation():
    assert_py2puml_command_args('-p py2puml/domain', DOCUMENTATION_PATH / 'py2puml.domain.puml')
    # temporarily add the `overwrite_expected_output=True` argument to update the file containing the expected contents
    assert_py2puml_command_args('-p py2puml/domain', DOCUMENTATION_PATH / 'py2puml.domain.puml', overwrite_expected_output=True)
```
* generated and hosted along other code documentation (better option: generated documentation should not be versioned with the codebase)
If you like tools related with PlantUML, you may also be interested in this [lucsorel/plantuml-file-loader](https://github.com/lucsorel/plantuml-file-loader) project:
a webpack loader which converts PlantUML files into images during the webpack processing (useful to [include PlantUML diagrams in your slides](https://github.com/lucsorel/markdown-image-loader/blob/master/README.md#web-based-slideshows) with RevealJS or RemarkJS).
## Changelog and versions
See [CHANGELOG.md](CHANGELOG.md).
## Licence
Unless stated otherwise all works are licensed under the [MIT license](http://spdx.org/licenses/MIT.html), a copy of which is included [here](LICENSE).
## Contributions
I'm thankful to [all the people who have contributed](https://github.com/lucsorel/py2puml/graphs/contributors) to this project:

* [Luc Sorel-Giffo](https://github.com/lucsorel)
* [Doyou Jung](https://github.com/doyou89)
* [Julien Jerphanion](https://github.com/jjerphan)
* [Luis Fernando Villanueva Pérez](https://github.com/jonykalavera)
* [Konstantin Zangerle](https://github.com/justkiddingcode)
* [Mieszko](https://github.com/0xmzk)
### Pull requests and code conventions
Pull-requests are welcome and will be processed on a best-effort basis.
Pull requests must follow the guidelines enforced by `pre-commit` hooks (see the [.pre-commit-config.yaml](.pre-commit-config.yaml) configuration file):
- commit messages must follow the conventional-commit rules enforced by the `commitlint` hook
- code formatting must follow the conventions enforced by the `isort` and `ruff-format` hooks
- code linting should not detect code smells in your contributions; this is checked by the `ruff-check` hook
Please also follow the [contributing guide](CONTRIBUTING.md) to ease your contribution.
### Pre-commit hooks
#### Activate the git hooks
Set the git hooks (`pre-commit` and `commit-msg` types):
```sh
uv run pre-commit install
```
#### Run the hooks locally
Before committing, you can check your changes with:
```sh
# all hooks on the staged files
uv run pre-commit run
# all hooks on all files
uv run pre-commit run --all-files
# a specific hook on all files
uv run pre-commit run ruff-format --all-files
```
#### Code formatting
This project uses `isort` and `ruff-format` to format the code.
The guidelines are expressed in their respective sections in the [pyproject.toml](pyproject.toml) file.
#### Static analysis and best practices
This project uses the `ruff-check` linter, which is configured in its section in the [pyproject.toml](pyproject.toml) file.
#### Commit messages
Please, follow the [conventional commit guidelines](https://www.conventionalcommits.org/en/v1.0.0/) for commit messages.
When your pull request is merged, the new version of the project is derived from the commit messages.
### Tests
Add automated tests on your contributions, which can be run with the following commands:
```sh
# directly with uv
uv run pytest -v
# in a virtual environment
python3 -m pytest -v
# a specific test suite file or a given test
uv run pytest -v tests/py2puml/test_cli_controller.py
uv run pytest -v -k test_controller_stdout_and_in_file
```
Code coverage (with [missed branch statements](https://pytest-cov.readthedocs.io/en/latest/config.html?highlight=--cov-branch)):
```sh
uv run pytest -v --cov=src/py2puml --cov-branch --cov-report term-missing --cov-fail-under 93
```
## Current limitations
* regarding **inspection**
* type hinting is optional when writing Python code and discarded when it is executed, as mentioned in the [typing official documentation](https://docs.python.org/3/library/typing.html).
The quality of the diagram output by `py2puml` depends on the reliability of the type annotations.
> The `python` runtime does not enforce function and variable type annotations. They can be used by third party tools such as type checkers, IDEs, linters, etc.
* inspection implies that the `python` interpreter parses and imports your `.py` files; make sure your executable code is guarded by an `if __name__ == '__main__':` clause so that it is not executed during a `py2puml` inspection
* regarding the detection of instance attributes with **AST parsing**:
* only constructors are visited; attributes assigned in other functions won't be documented
* attribute types are inferred from type annotations:
* of the attribute itself
* of the variable assigned to the attribute: a signature parameter or a local variable
* to avoid side-effects, no code is executed nor interpreted
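In practice, both annotation styles named above give the AST parser the type information it needs, and the entry point stays guarded so that an inspection never executes it. A minimal sketch with illustrative names:

```python
class Order:
    """Both attributes below can be typed from annotations alone."""

    def __init__(self, price: float, quantity: int):
        self.price: float = price  # annotated on the attribute itself
        self.quantity = quantity   # inferred from the annotated `quantity` parameter


def main() -> None:
    order = Order(9.99, 3)
    print(order.price * order.quantity)


if __name__ == '__main__':
    # guarded: not executed when py2puml inspects this module
    main()
```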
## Alternatives
If `py2puml` does not meet your needs (suggestions and pull-requests are **welcome**), you can have a look at these projects which follow other approaches (AST, linting, modeling):
* [pyreverse](https://pylint.pycqa.org/en/latest/additional_commands/index.html#pyreverse), which includes a PlantUML printer [since version 2.10.0](https://pylint.pycqa.org/en/latest/whatsnew/changelog.html?highlight=plantuml#what-s-new-in-pylint-2-10-0)
* [cb109/pyplantuml](https://github.com/cb109/pyplantuml)
* [deadbok/py-puml-tools](https://github.com/deadbok/py-puml-tools)
* [caballero/genUML](https://github.com/jose-caballero/genUML)
| text/markdown | Luc Sorel-Giffo | null | Luc Sorel-Giffo | null | null | class diagram, PlantUML, documentation, inspection, AST | [] | [] | null | null | >=3.10.9 | [] | [] | [] | [] | [] | [] | [] | [
"Repository, https://github.com/lucsorel/py2puml.git",
"Issues, https://github.com/lucsorel/py2puml/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:20:57.835760 | py2puml-0.11.0.tar.gz | 19,974 | 87/65/51eb076157c34ca7686726c45a92da1bc2cb72e8d95c292f4cb86f0982ab/py2puml-0.11.0.tar.gz | source | sdist | null | false | 472bc4612f514f422a68bc8a8312ea68 | a08ce1988e163ebcb9e0f2fbc6e6b1251e0abd1f9cf1f2e32a32744cb85a99ea | 876551eb076157c34ca7686726c45a92da1bc2cb72e8d95c292f4cb86f0982ab | MIT | [] | 1,025 |
2.4 | pylangacq | 0.20.0 | Tools for Language Acquisition Research | PyLangAcq: Language Acquisition Research in Python
==================================================
Full documentation: https://pylangacq.org
|
.. image:: https://badge.fury.io/py/pylangacq.svg
:target: https://pypi.python.org/pypi/pylangacq
:alt: PyPI version
.. image:: https://img.shields.io/pypi/pyversions/pylangacq.svg
:target: https://pypi.python.org/pypi/pylangacq
:alt: Supported Python versions
.. image:: https://img.shields.io/pypi/dm/pylangacq
:target: https://pypi.python.org/pypi/pylangacq
:alt: PyPI - Downloads
|
.. start-sphinx-website-index-page
PyLangAcq is a Python library for language acquisition research.
- Reading and writing the CHAT data format used by TalkBank and CHILDES datasets
- Intuitive Python data structures for flexible data access and manipulation
- Standard developmental measures readily available: Mean length of utterance (MLU),
type-token ratio (TTR), and Index of Productive Syntax (IPSyn)
- Direct support for CHAT-formatted conversational datasets more generally, with powerful
  extensions possible
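Mean length of utterance, for instance, is the average number of morphemes (or, in the word-based variant, words) per utterance. A minimal word-based sketch of the measure, independent of PyLangAcq's actual implementation:

```python
def mlu_words(utterances):
    """Word-based mean length of utterance (MLUw):
    total word tokens divided by the number of utterances."""
    if not utterances:
        return 0.0
    tokens = sum(len(utterance.split()) for utterance in utterances)
    return tokens / len(utterances)


sample = ["more cookie", "want more cookie", "mommy go"]
print(round(mlu_words(sample), 2))  # 7 word tokens over 3 utterances
```

PyLangAcq computes such measures directly from parsed CHAT transcripts, including the morpheme-based MLU that requires the `%mor` tier.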
.. _download_install:
Download and Install
--------------------
To download and install the most recent version::
$ pip install --upgrade pylangacq
Ready for more?
Check out the `Quickstart <https://pylangacq.org/quickstart.html>`_ page.
Links
-----
* Documentation: https://pylangacq.org
* Author: `Jackson L. Lee <https://jacksonllee.com>`_
* Source code: https://github.com/jacksonllee/pylangacq
How to Cite
-----------
Lee, Jackson L., Ross Burkholder, Gallagher B. Flinn, and Emily R. Coppess. 2016.
`Working with CHAT transcripts in Python <https://jacksonllee.com/papers/lee-etal-2016-pylangacq.pdf>`_.
Technical report `TR-2016-02 <https://newtraell.cs.uchicago.edu/research/publications/techreports/TR-2016-02>`_,
Department of Computer Science, University of Chicago.
.. code-block:: latex
@TechReport{lee-et-al-pylangacq:2016,
Title = {Working with CHAT transcripts in Python},
Author = {Lee, Jackson L. and Burkholder, Ross and Flinn, Gallagher B. and Coppess, Emily R.},
Institution = {Department of Computer Science, University of Chicago},
Year = {2016},
Number = {TR-2016-02},
}
License
-------
MIT License. Please see ``LICENSE.txt`` in the GitHub source code for details.
.. end-sphinx-website-index-page
Changelog
---------
Please see ``CHANGELOG.md``.
Contributing
------------
Please see ``CONTRIBUTING.md``.
| text/x-rst | null | "Jackson L. Lee" <jacksonlunlee@gmail.com> | null | null | MIT License | CHILDES, TalkBank, language-acquisition, language-development | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Langu... | [] | null | null | >=3.10 | [] | [] | [] | [
"rustling>=0.5.0",
"black==26.1.0; extra == \"dev\"",
"flake8==7.3.0; extra == \"dev\"",
"pytest==9.0.2; extra == \"dev\"",
"build==1.4.0; extra == \"dev\"",
"twine==6.2.0; extra == \"dev\"",
"furo==2025.12.19; extra == \"docs\"",
"Sphinx>=8.1.3; extra == \"docs\"",
"sphinx-copybutton==0.5.2; extra ... | [] | [] | [] | [
"Homepage, https://pylangacq.org",
"Source, https://github.com/jacksonllee/pylangacq"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T13:20:18.036087 | pylangacq-0.20.0.tar.gz | 5,326 | db/ad/89030b97cb0a3a6bdf2ccc5fb1202373441b2447692687ba92bd3433b375/pylangacq-0.20.0.tar.gz | source | sdist | null | false | 93234ee1c84c7618e3bf22f502c754a0 | 1325db9b32b6ae55ffa189dd12cbb2f414bdd9a95e80af6206873ac65a4177fa | dbad89030b97cb0a3a6bdf2ccc5fb1202373441b2447692687ba92bd3433b375 | null | [
"LICENSE.txt"
] | 298 |