metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | hawki | 0.7.0 | Holistic Analysis for Web3 Kode & Infrastructure – Security intelligence platform | # 🦅 Hawk‑i
**Holistic Analysis for Web3 Kode & Infrastructure**
*Open‑source, privacy‑first security intelligence for smart contracts*
[PyPI](https://pypi.org/project/hawki/) · [Docker Hub](https://hub.docker.com/r/0xsemantic/hawki) · [MIT License](https://opensource.org/licenses/MIT) · [Python 3.9+](https://www.python.org/downloads/) · [Contributors](https://github.com/0xSemantic/hawki/graphs/contributors) · [Discussions](https://github.com/0xSemantic/hawki/discussions)
---
## 📖 Table of Contents
- [What is Hawk‑i?](#-what-is-hawk‑i)
- [Features](#-features)
- [Quick Start](#-quick-start)
- [Operational Modes](#-operational-modes)
- [Advanced Usage](#-advanced-usage)
- [Audit‑Grade Reporting](#audit‑grade-reporting)
- [Security Score](#security-score)
- [Guided Remediation](#guided-remediation)
- [Telemetry (Opt‑In)](#telemetry-opt‑in)
- [CLI Reference](#cli-reference)
- [CI/CD Integration](#cicd-integration)
- [Ecosystem Integrations](#ecosystem-integrations)
- [Demo Suite](#-demo-suite)
- [Project Structure](#-project-structure)
- [Contributing](#-contributing)
- [License](#-license)
- [Acknowledgements](#-acknowledgements)
- [Roadmap](#-roadmap)
- [Contact](#-contact)
---
## 🦅 What is Hawk‑i?
Hawk‑i is an **open‑source security intelligence platform** for Web3 smart contracts. It evolves beyond a simple scanner into a complete audit‑grade system that **detects, simulates, scores, and helps fix vulnerabilities** – all while respecting your privacy.
Whether you're a solo developer, an auditor, or a protocol team, Hawk‑i integrates into your workflow to provide continuous, actionable security insights.
**Key differentiators:**
- **Hybrid analysis** – static rules + AI reasoning + live exploit simulation.
- **Professional reporting** – executive summaries, risk scores, charts, and per‑finding remediation.
- **Privacy by design** – runs locally; no code is ever sent to external servers (AI uses your own API keys).
- **Extensible** – drop‑in rules, attack scripts, and templates.
---
## ✨ Features
### Core Capabilities
- **🔍 Repository Intelligence** – Parse and index Solidity files from local folders or remote Git repos (GitHub, GitLab, etc.).
- **📦 Static Rule Engine** – Detect 30+ common vulnerabilities (reentrancy, access control, integer overflows, oracle manipulation, etc.) with an extensible rule system.
- **🧠 AI Reasoning** – Leverage LLMs (Gemini, OpenAI, Anthropic, local Ollama) to uncover logic flaws, economic exploits, and governance risks that static analysis misses.
- **💣 Exploit Simulation Sandbox** – Automatically deploy contracts in an isolated Docker environment and run attack scripts to validate vulnerabilities; results influence your risk score.
- **⏱️ Continuous Monitoring** – Watch repositories and deployed contracts for changes, and get alerts via file or console.
### v0.7.0 – Intelligence & Reporting Upgrade
- **📊 Audit‑Grade Reporting (ARS v2)** – Generate professional reports with executive summary, security score (0–100), severity charts, and per‑finding remediation.
- **🛡️ Guided Remediation Engine** – Every finding includes a context‑aware fix snippet, auto‑populated with your code’s variable names.
- **📈 Security Score** – A deterministic 0–100 score based on finding severity and exploit success, with clear risk bands.
- **📡 Telemetry (Opt‑In)** – Anonymous usage metrics to demonstrate ecosystem impact (no code, no repo names).
- **🧩 Expanded Vulnerability Library** – 30 fully documented vulnerabilities, each with detection, exploit script, and fix template.
---
## 🚀 Quick Start
### Installation
**Option 1: Install from PyPI (recommended)**
```bash
pip install hawki
```
**Option 2: Use Docker**
```bash
docker pull 0xsemantic/hawki:latest
docker run --rm -v $(pwd):/repo 0xsemantic/hawki scan /repo
```
**Option 3: Install from source**
```bash
git clone https://github.com/0xSemantic/hawki.git
cd hawki
pip install -e .
```
### Basic Scan
```bash
hawki scan /path/to/your/project
```
This runs static rules only (Minimal mode) and outputs a simple JSON report (legacy format). For the new audit‑grade report, use `--format`.
### Full Audit with AI + Sandbox
```bash
hawki scan /path --ai --ai-model openai/gpt-4 --api-key YOUR_KEY --sandbox --format pdf --telemetry
```
- `--ai` enables AI reasoning (requires an API key).
- `--sandbox` runs exploit simulations (requires Docker).
- `--format pdf` generates a professional PDF report.
- `--telemetry` opts in to anonymous usage stats.
### Generate a Report from Previous Scan
```bash
hawki report --input findings.json --format html --output report.html
```
### View Security Score
```bash
hawki score findings.json
```
### Show Local Telemetry
```bash
hawki metrics
```
### Monitor a Repository
```bash
hawki monitor /path/to/repo --interval 60 --alert-log alerts.txt
```
---
## ⚙️ Operational Modes
Hawk‑i adapts to your environment and privacy needs:
| Mode | Static Rules | AI | Sandbox | Docker Required |
|---------------|--------------|------|---------|-----------------|
| **Minimal** | ✅ | ❌ | ❌ | ❌ |
| **Enhanced** | ✅ | ✅ | ❌ | ❌ |
| **Full Audit**| ✅ | ✅ | ✅ | ✅ |
Reports indicate which mode was used and adapt their content accordingly (e.g., exploit steps are omitted when the sandbox is disabled).
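The mapping from CLI flags to modes in the table above can be sketched as follows (illustrative only, not Hawk‑i's actual code):

```python
def mode(ai=False, sandbox=False):
    """Map scan flags to the operational mode names from the table above."""
    if ai and sandbox:
        return "Full Audit"
    if ai:
        return "Enhanced"
    return "Minimal"
```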
---
## 🔧 Advanced Usage
### Audit‑Grade Reporting
The `hawki scan` command with `--format` generates a report using the new **Audit‑Grade Report System (ARS v2)**. Reports include:
- **Executive Summary** – total contracts, severity counts, security score, risk classification, mode used.
- **Vulnerability Breakdown** – pie chart (severity) + bar chart (type) + fallback table.
- **Per‑Finding Details** – title, severity, file/line, vulnerable code, recommended fix, explanation, impact, exploit steps (if sandbox succeeded).
- **Simulation Metrics** – success rate, balance deltas, gas used.
Formats: Markdown (default), JSON, HTML, PDF (optional dependencies).
**Example Markdown snippet:**
````markdown
## 🔍 Detailed Findings

### F001: Reentrancy in withdraw() (CRITICAL)
- **File:** contracts/Vault.sol
- **Line:** 42

**Vulnerable Code:**
```solidity
function withdraw() external {
    uint bal = balances[msg.sender];
    (bool success,) = msg.sender.call{value: bal}("");
    require(success);
    balances[msg.sender] = 0;
}
```

**Recommended Fix:**
```solidity
function withdraw() external nonReentrant {
    uint bal = balances[msg.sender];
    balances[msg.sender] = 0;
    (bool success,) = msg.sender.call{value: bal}("");
    require(success);
}
```

**Explanation:** The function makes an external call before updating state, allowing reentrancy.
**Impact:** An attacker can drain all funds.
**Exploit Reproduction Steps:**
- Exploit succeeded using script: reentrancy_attack.py
- Before balance: 1000000000000000000
- After balance: 0
- Gas used: 120000
- Transaction hash: 0xabc...
````
### Security Score
The security score is a **deterministic 0–100** number computed as:
- Base: 100
- Deductions per finding:
- Critical: -15
- High: -8
- Medium: -4
- Low: -1
- If sandbox enabled: **-5** for each successfully reproduced exploit (capped).
**Risk Bands:**
| Score | Classification |
|--------|----------------|
| 90–100 | Secure |
| 75–89 | Minor Risk |
| 50–74 | Moderate Risk |
| 25–49 | High Risk |
| 0–24 | Critical Risk |
Use `hawki score findings.json` to see the score without generating a full report.
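The deduction rules above can be sketched in a few lines (an illustrative re-implementation, not Hawk‑i's actual code; the exploit-deduction cap is undocumented, so `exploit_cap` below is an assumed parameter):

```python
# Illustrative sketch of the documented scoring rules.
DEDUCTIONS = {"critical": 15, "high": 8, "medium": 4, "low": 1}

def security_score(findings, exploit_successes=0, exploit_cap=25):
    score = 100
    for finding in findings:
        score -= DEDUCTIONS.get(finding["severity"].lower(), 0)
    # -5 per successfully reproduced exploit, capped (cap value assumed)
    score -= min(5 * exploit_successes, exploit_cap)
    return max(score, 0)

def risk_band(score):
    if score >= 90: return "Secure"
    if score >= 75: return "Minor Risk"
    if score >= 50: return "Moderate Risk"
    if score >= 25: return "High Risk"
    return "Critical Risk"
```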
### Guided Remediation
Every finding now includes a `fix_snippet` populated by the **Remediation Engine**. The engine uses templates and AST context to generate accurate fixes. For example, a reentrancy finding might include:
```solidity
// Vulnerable code
function withdraw() external {
uint amount = balances[msg.sender];
(bool success, ) = msg.sender.call{value: amount}("");
require(success);
balances[msg.sender] = 0;
}
// Recommended fix
function withdraw() external nonReentrant {
uint amount = balances[msg.sender];
balances[msg.sender] = 0;
(bool success, ) = msg.sender.call{value: amount}("");
require(success);
}
```
### Telemetry (Opt‑In)
When you run `hawki scan --telemetry`, Hawk‑i collects **anonymous** data:
- Total scans performed
- Findings per severity
- Simulation success rate (if sandbox enabled)
- Hawk‑i version
- Execution mode
Data is stored locally in `~/.hawki/metrics.json` and can be viewed with `hawki metrics`. If you opt in, aggregated statistics may be sent to a public endpoint to power the community metrics badge. **No source code, repository names, or IPs are ever collected.**
**View your metrics:**
```bash
hawki metrics
```
**Example output:**
```
Total scans: 42
Total findings: 87 (Critical: 12, High: 23, Medium: 31, Low: 21)
```
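If you want to consume the local metrics file programmatically, a minimal sketch (the JSON key shown is an assumption; `hawki metrics` is the supported interface):

```python
import json
from pathlib import Path

def load_metrics(path=None):
    """Read the local telemetry file; returns {} if it doesn't exist yet.

    Key names such as "total_scans" are assumptions - inspect your own
    ~/.hawki/metrics.json for the actual schema.
    """
    path = Path(path) if path else Path.home() / ".hawki" / "metrics.json"
    if not path.exists():
        return {}
    return json.loads(path.read_text())
```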
### CLI Reference
The main command is `hawki`. Available subcommands:
| Subcommand | Description |
|------------|-------------|
| `scan` | Perform a one‑time security scan. |
| `monitor` | Continuously monitor a repository or contract. |
| `report` | Generate a report from existing findings. |
| `score` | Calculate the security score from a findings file. |
| `metrics` | Display local telemetry statistics. |
| `simulate` | Run a specific exploit simulation (advanced). |
**`hawki scan` options:**
```
hawki scan <target> [options]
-v, --verbose Enable debug logging
-o, --output-dir DIR Report output directory (default: ./hawki_reports)
--ai Enable AI reasoning
--ai-model MODEL LLM model (e.g., openai/gpt-4)
--api-key KEY API key for LLM
--sandbox Run exploit simulation (requires Docker)
--format {md,json,html,pdf} Output report format (if omitted, legacy JSON)
--telemetry Opt in to anonymous usage metrics
```
**`hawki report` options:**
```
hawki report [options]
-i, --input FILE Findings JSON file (default: latest)
-o, --output-dir DIR Output directory
-f, --format FORMAT Output format (md, json, html, pdf)
```
**`hawki score`**:
```
hawki score findings.json [-v]
```
**`hawki metrics`**:
```
hawki metrics [-v]
```
### CI/CD Integration
Hawk‑i provides a dedicated script `scripts/ci_pipeline.py` that auto‑detects GitHub Actions or GitLab CI and formats output accordingly.
**GitHub Actions example (`.github/workflows/hawki.yml`):**
```yaml
name: Hawk-i Security Scan
on: [push, pull_request]
jobs:
scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install Hawk-i
run: pip install hawki
- name: Run Hawk-i CI
run: python scripts/ci_pipeline.py .
```
The script exits with code `1` if any **HIGH** severity findings are detected, allowing you to fail the pipeline.
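The gating logic amounts to something like the following (a hypothetical stand-in, assuming the legacy JSON findings format; the real `scripts/ci_pipeline.py` additionally detects the CI platform and formats its output):

```python
import json

def gate(findings_path, fail_on=("HIGH",)):
    """Return 1 (fail the pipeline) if any finding matches a blocking severity."""
    with open(findings_path) as f:
        findings = json.load(f)
    blocking = [x for x in findings if x.get("severity", "").upper() in fail_on]
    for x in blocking:
        print(f"Blocking finding: {x.get('title', '<untitled>')} ({x['severity']})")
    return 1 if blocking else 0
```

Wrap it in `sys.exit(gate("findings.json"))` if you need custom severity thresholds in your own pipeline.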
**GitLab CI example (`.gitlab-ci.yml`):**
```yaml
hawki-scan:
image: python:3.11
before_script:
- pip install hawki
script:
- python scripts/ci_pipeline.py .
artifacts:
reports:
codequality: gl-code-quality-report.json
```
### Ecosystem Integrations
Use the helper script `scripts/deploy_helpers.py` to integrate with popular development tools.
#### Foundry
```bash
python scripts/deploy_helpers.py foundry /path/to/forge-project --ai
```
#### Hardhat
```bash
python scripts/deploy_helpers.py hardhat /path/to/hardhat-project
```
#### Remix
```bash
python scripts/deploy_helpers.py remix /path/to/remix-workspace
```
#### Generate a human‑readable audit report
```bash
python scripts/deploy_helpers.py readme /path/to/report.json --output AUDIT.md
```
---
## 🧪 Demo Suite
We’ve built a **dedicated demo suite** of intentionally vulnerable contracts to help you understand Hawk‑i’s capabilities and to test your contributions.
The suite includes **30 vulnerable contracts**, one for each rule, covering:
- Reentrancy
- Access control bypass
- Delegatecall misuse
- Oracle manipulation
- Flash loan attacks
- Governance vote manipulation
- Signature replay
- Integer overflows
- And more…
### Quick Demo (with Docker)
```bash
docker build -f demo/Dockerfile.demo -t hawki-demo .
docker run --rm hawki-demo
```
### Manual Demo
```bash
cd demo
npm install # install Hardhat dependencies
npx hardhat node # start local blockchain (keep open)
# In another terminal:
npx hardhat run scripts/deploy.js --network localhost
hawki scan . --ai --sandbox --format html --telemetry
```
See the [demo README](demo/README.md) for detailed instructions and expected output.
---
## 📁 Project Structure
```
hawki/
├── core/
│ ├── repo_intelligence/ # Repo cloning & Solidity parsing
│ ├── static_rule_engine/ # Static analysis & dynamic rule loading (30+ rules)
│ ├── ai_engine/ # LLM orchestration & prompt management
│ ├── exploit_sandbox/ # Docker‑based exploit simulation
│ ├── remediation_engine/ # Context‑aware fix snippet generation
│ ├── telemetry/ # Opt‑in anonymous metrics
│ ├── monitoring/ # Continuous monitoring & alerts
│ └── data_layer/ # Report generation (ARS v2) & persistence
├── cli/ # Command‑line interface
├── scripts/ # CI/CD and integration helpers
├── docker/ # Dockerfile and compose
├── demo/ # Vulnerable contracts for testing
├── tests/ # Unit tests
├── pyproject.toml # Package metadata
├── CONTRIBUTING.md # Contribution guidelines
├── CONTRIBUTORS.md # List of contributors
└── README.md # This file
```
---
## 🤝 Contributing
We welcome contributions from the community! Whether you're fixing a bug, adding a new rule, or improving documentation, your help makes Hawk‑i better for everyone.
Please read our [Contributing Guidelines](CONTRIBUTING.md) to get started. All contributors are recognised in [CONTRIBUTORS.md](CONTRIBUTORS.md) – we use the [All Contributors](https://allcontributors.org/) specification.
---
## 📄 License
Hawk‑i is released under the [MIT License](LICENSE).
---
## 🙏 Acknowledgements
Hawk‑i builds upon excellent open‑source projects:
- [tree‑sitter](https://tree-sitter.github.io/) and [tree‑sitter‑solidity](https://github.com/tree-sitter/tree-sitter-solidity) for parsing.
- [LiteLLM](https://github.com/BerriAI/litellm) for unified LLM access.
- [Docker](https://www.docker.com/) for sandboxing.
- [Web3.py](https://web3py.readthedocs.io/) for blockchain interaction.
- [GitPython](https://gitpython.readthedocs.io/) for repository handling.
- [matplotlib](https://matplotlib.org/) for chart generation (optional).
- [Jinja2](https://jinja.palletsprojects.com/) for templated reports.
Special thanks to all contributors and the Web3 security community.
---
## 🛣️ Roadmap
- [x] **Phase 1** – Repository intelligence + static rule engine
- [x] **Phase 2** – AI reasoning with LiteLLM
- [x] **Phase 3** – Exploit simulation sandbox
- [x] **Phase 4** – Continuous monitoring & alerts
- [x] **Phase 5** – CI/CD & ecosystem integrations
- [x] **Phase 6** – Deployment (PyPI, Docker, CLI)
- [x] **Phase 7 – v0.7.0 Intelligence & Reporting Upgrade**
- Audit‑grade reporting (ARS v2)
- 30 vulnerability rules
- Security score
- Guided remediation engine
- Opt‑in telemetry
- [ ] **Phase 8** – Dashboard & real‑time visualisation
- [ ] **Phase 9** – Intelligence network & community rules marketplace
---
## 📬 Contact & Support
- **Issues**: [GitHub Issues](https://github.com/0xSemantic/hawki/issues)
- **Discussions**: [GitHub Discussions](https://github.com/0xSemantic/hawki/discussions)
- **LinkedIn**: [0xSemantic](https://linkedin.com/in/0xsemantic)
- **Medium**: [@0xSemantic](https://medium.com/@0xsemantic)
- **Twitter**: [@0xSemantic](https://twitter.com/0xSemantic)
**Happy auditing, and may your contracts be bug‑free!** 🦅
| text/markdown | null | 0xSemantic <0xsemantic@example.com> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Security",
"Topic :: Software Development :: Testing",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"tree-sitter>=0.20.1",
"tree-sitter-solidity>=0.1.2",
"gitpython>=3.1.40",
"requests>=2.31.0",
"pydantic>=2.5.0",
"colorlog>=6.7.0",
"litellm>=1.0.0",
"docker>=6.1.3",
"web3>=6.15.0",
"py-solc-x>=2.0.0",
"eth-account>=0.8.0",
"rich>=13.0.0",
"jinja2>=3.1.0; extra == \"reports\"",
"matplotlib>=3.7.0; extra == \"reports\"",
"pdfkit>=1.0.0; extra == \"pdf\"",
"hawki[pdf,reports]; extra == \"all\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T06:52:38.839022 | hawki-0.7.0.tar.gz | 74,075 | 14/eb/60eeac7d95df6bf04184b19c6eb2d1709412f687316fb52ed890fe4a2708/hawki-0.7.0.tar.gz | source | sdist | null | false | 2cb8f5a5de8eab7aadfa74ff65326b8a | 5560d85aca3e46b14fe22e0174f2e0840d464724f93fd2613e76ed7a00427a5d | 14eb60eeac7d95df6bf04184b19c6eb2d1709412f687316fb52ed890fe4a2708 | null | [
"LICENSE"
] | 249 |
2.4 | lumenova-beacon | 2.4.7 | Lumenova Beacon SDK - A Python SDK for observability tracing with OpenTelemetry-compatible span export | # Lumenova Beacon SDK
[PyPI](https://pypi.org/project/lumenova-beacon/) · [Apache-2.0 License](https://opensource.org/licenses/Apache-2.0)
> A Python observability tracing SDK that sends spans in OpenTelemetry-compatible format, designed for AI/LLM applications.
## Features
- **OpenTelemetry Integration** - Automatic instrumentation for Anthropic, OpenAI, FastAPI, Redis, HTTPX, and more
- **Manual & Decorator Tracing** - Create spans manually or use `@trace` decorator
- **LangChain/LangGraph Integration** - Automatic tracing for chains, agents, tools, and retrievers
- **Strands Agents Integration** - Callback handler for AWS Strands agent tracing
- **CrewAI Integration** - Event listener for CrewAI crew tracing
- **LiteLLM Integration** - Callback logger for LiteLLM proxy tracing
- **Dataset Management** - ActiveRecord-style API for managing test datasets
- **Prompt Management** - Version-controlled prompt templates with labels (staging, production)
- **Experiment & Evaluation Management** - Run experiments over datasets and evaluate results
- **Data Masking** - Built-in PII detection and redaction via Beacon Guardrails
- **Flexible Transport** - HTTP or file-based span export
- **Full Async Support** - Async/await throughout
## Requirements
- Python 3.10+
## Installation
```bash
# Base installation
pip install lumenova-beacon
# With OpenTelemetry support
pip install lumenova-beacon[opentelemetry]
# With LangChain/LangGraph support
pip install lumenova-beacon[langchain]
# With LiteLLM support
pip install lumenova-beacon[litellm]
# With Strands Agents support
pip install lumenova-beacon[strands]
# With CrewAI support
pip install lumenova-beacon[crewai]
```
## Quick Start
```python
from lumenova_beacon import BeaconClient, trace
# Initialize client with your Beacon credentials
client = BeaconClient(
endpoint="https://your-beacon-endpoint.lumenova.ai", # Your Beacon endpoint
api_key="your-api-key", # API key from your Beacon account
session_id="my-session"
)
# Use decorator for automatic tracing
@trace
def my_function(x, y):
return x + y
result = my_function(10, 20) # Automatically traced
```
## Configuration
### Environment Variables
All environment variables act as fallbacks; constructor parameters override them:
| Variable | Purpose | Default |
|----------|---------|---------|
| `BEACON_ENDPOINT` | API base URL for OTLP export | (required unless using `file_directory`) |
| `BEACON_API_KEY` | Authentication token | |
| `BEACON_SESSION_ID` | Default session ID for spans | |
| `BEACON_SERVICE_NAME` | Service name for OTEL resource (fallback: `OTEL_SERVICE_NAME`) | |
| `BEACON_ENVIRONMENT` | Deployment environment (e.g., "production", "staging") | |
| `BEACON_VERIFY` | SSL certificate verification | `true` |
| `BEACON_EAGER_EXPORT` | Export spans eagerly on end | `true` |
```bash
# Bash/Linux/macOS
export BEACON_ENDPOINT="https://your-beacon-endpoint.lumenova.ai"
export BEACON_API_KEY="your-api-key"
export BEACON_SESSION_ID="my-session"
```
```powershell
# PowerShell
$env:BEACON_ENDPOINT = "https://your-beacon-endpoint.lumenova.ai"
$env:BEACON_API_KEY = "your-api-key"
$env:BEACON_SESSION_ID = "my-session"
```
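The fallback order described above can be illustrated like this (not the SDK's actual resolution code):

```python
import os

def resolve(explicit, env_var, default=None):
    """Explicit constructor argument wins; otherwise fall back to the environment."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)
```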
### Configuration Options
```python
from lumenova_beacon import BeaconClient
client = BeaconClient(
# Connection
endpoint="https://your-beacon-endpoint.lumenova.ai",
api_key="your-api-key",
verify=True,
headers={"Custom-Header": "value"},
# Span Configuration
session_id="my-session",
service_name="my-service",
environment="production",
# OpenTelemetry
auto_instrument_opentelemetry=True, # Auto-configure OTEL (default: True)
isolated=False, # Use private TracerProvider (default: False)
auto_instrument_litellm=False, # Auto-configure LiteLLM (default: False)
# Data Masking
masking_function=None, # Custom masking function (optional)
# General
enabled=True,
eager_export=True,
)
```
### File Transport
For local development or testing, use `file_directory` instead of `endpoint`:
```python
from lumenova_beacon import BeaconClient
client = BeaconClient(
file_directory="./traces",
)
```
## Core Features
### 1. Tracing
#### Decorator Tracing
The `@trace` decorator automatically captures function execution:
```python
from lumenova_beacon import trace
# Simple usage
@trace
def process_data(data):
return data.upper()
# With custom name
@trace(name="custom_operation")
def another_function():
pass
# Capture inputs and outputs
@trace(capture_args=True, capture_result=True)
def calculate(x, y):
return x + y
# Works with async functions
@trace
async def async_operation():
await some_async_call()
```
#### Manual Tracing
For more control, use context managers:
```python
from lumenova_beacon import BeaconClient
from lumenova_beacon.types import SpanKind, StatusCode
client = BeaconClient()
# Context manager
with client.trace("operation_name") as span:
span.set_attribute("user_id", "123")
span.set_input({"query": "search term"})
try:
result = do_work()
span.set_output(result)
span.set_status(StatusCode.OK)
except Exception as e:
span.record_exception(e)
span.set_status(StatusCode.ERROR, str(e))
raise
# Async context manager
async with client.trace("async_operation") as span:
result = await async_work()
span.set_output(result)
# Direct span creation
span = client.create_span(
name="manual_span",
kind=SpanKind.CLIENT,
)
span.start()
# ... do work ...
span.end()
```
### 2. OpenTelemetry Integration
Beacon automatically configures OpenTelemetry to export spans:
```python
from lumenova_beacon import BeaconClient
from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
# Initialize (auto-configures OpenTelemetry)
client = BeaconClient(
endpoint="https://your-beacon-endpoint.lumenova.ai",
api_key="your-api-key",
auto_instrument_opentelemetry=True # Default
)
# Instrument libraries
AnthropicInstrumentor().instrument()
OpenAIInstrumentor().instrument()
# Now all API calls are automatically traced!
from anthropic import Anthropic
anthropic = Anthropic()
response = anthropic.messages.create(
model="claude-3-5-sonnet-20241022",
messages=[{"role": "user", "content": "Hello!"}]
) # Automatically traced with proper span hierarchy
```
#### Supported Instrumentors
Install additional instrumentors as needed:
```bash
pip install opentelemetry-instrumentation-anthropic
pip install opentelemetry-instrumentation-openai
pip install opentelemetry-instrumentation-fastapi
pip install opentelemetry-instrumentation-redis
pip install opentelemetry-instrumentation-httpx
pip install opentelemetry-instrumentation-requests
```
### 3. LangChain Integration
Automatically trace all LangChain operations:
```python
from lumenova_beacon import BeaconClient, BeaconCallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
client = BeaconClient()
handler = BeaconCallbackHandler(
session_id="session-123"
)
# Use with request-time callbacks (recommended)
llm = ChatOpenAI(model="gpt-4")
response = llm.invoke(
"What is the capital of France?",
config={"callbacks": [handler]}
)
# Works with chains
prompt = ChatPromptTemplate.from_template("Tell me about {topic}")
chain = prompt | llm
response = chain.invoke(
{"topic": "AI"},
config={"callbacks": [handler]}
)
# Traces agents, tools, retrievers, and more
from langchain.agents import create_react_agent, AgentExecutor
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
result = executor.invoke(
{"input": "What's the weather?"},
config={"callbacks": [handler]}
)
```
### 4. Strands Agents Integration
Trace AWS Strands Agent executions with automatic span hierarchy:
```python
from lumenova_beacon import BeaconClient, BeaconStrandsHandler
from strands import Agent
client = BeaconClient()
handler = BeaconStrandsHandler(
session_id="my-session",
agent_name="My Agent",
)
agent = Agent(model=model, callback_handler=handler)
result = agent("Hello, world!")
print(handler.trace_id) # Link to Beacon trace
```
### 5. CrewAI Integration
Trace CrewAI Crew executions via the event listener:
```python
from lumenova_beacon import BeaconClient, BeaconCrewAIListener
from crewai import Agent, Crew, Task
client = BeaconClient()
# Auto-registers with CrewAI event bus
listener = BeaconCrewAIListener(
session_id="my-session",
crew_name="My Research Crew",
)
crew = Crew(agents=[...], tasks=[...])
result = crew.kickoff()
print(listener.trace_id) # Link to Beacon trace
```
### 6. Dataset Management
Manage test datasets with an ActiveRecord-style API. Both sync and async methods are available:
- **Sync methods** (simple names): `Dataset.method(...)` or `dataset.method(...)`
- **Async methods** ('a' prefix): `await Dataset.amethod(...)` or `await dataset.amethod(...)`
```python
from lumenova_beacon import BeaconClient
from lumenova_beacon.datasets import Dataset, DatasetRecord
client = BeaconClient()
# Create dataset (sync)
dataset = Dataset.create(
name="qa-evaluation",
description="Question answering test cases"
)
# Create dataset (async)
dataset = await Dataset.acreate(
name="qa-evaluation",
description="Question answering test cases"
)
# Add a single record with flexible column-based data (sync)
dataset.create_record(
data={
"prompt": "What is AI?",
"expected_answer": "Artificial Intelligence is...",
"difficulty": "easy",
"category": "definitions"
}
)
# Add a single record (async)
await dataset.acreate_record(
data={
"prompt": "What is AI?",
"expected_answer": "Artificial Intelligence is...",
"difficulty": "easy"
}
)
# Bulk create records (sync)
records = [
{
"data": {
"question": "What is ML?",
"expected_answer": "Machine Learning...",
"difficulty": "medium"
}
},
{
"data": {
"question": "What is DL?",
"expected_answer": "Deep Learning...",
"difficulty": "hard"
}
}
]
dataset.bulk_create_records(records)
# Bulk create records (async)
await dataset.abulk_create_records(records)
# List datasets (sync)
datasets, pagination = Dataset.list(page=1, page_size=20, search="qa")
for ds in datasets:
print(f"{ds.name}: {ds.description}")
# List datasets (async)
datasets, pagination = await Dataset.alist(page=1, page_size=20, search="qa")
for ds in datasets:
print(f"{ds.name}: {ds.description}")
# Get dataset (sync)
dataset = Dataset.get(dataset_id="dataset-uuid", include_records=True)
# Get dataset (async)
dataset = await Dataset.aget(dataset_id="dataset-uuid", include_records=True)
# List records with pagination (sync)
records, pagination = dataset.list_records(page=1, page_size=50)
# List records with pagination (async)
records, pagination = await dataset.alist_records(page=1, page_size=50)
# Update dataset (sync)
dataset.update(name="updated-name", description="New description")
# Update dataset (async)
await dataset.aupdate(name="updated-name", description="New description")
# Delete dataset (cascade deletes records) (sync)
dataset.delete()
# Delete dataset (async)
await dataset.adelete()
```
### 7. Prompt Management
Version-controlled prompt templates with labels:
#### Creating Prompts
```python
from lumenova_beacon import BeaconClient
from lumenova_beacon.prompts import Prompt
client = BeaconClient()
# Create text prompt (sync)
prompt = Prompt.create(
name="greeting",
template="Hello {{name}}! Welcome to {{company}}.",
description="Customer greeting template",
tags=["customer-support", "greeting"]
)
# Create chat prompt (async)
prompt = await Prompt.acreate(
name="support-bot",
messages=[
{"role": "system", "content": "You are a helpful assistant for {{product}}."},
{"role": "user", "content": "{{question}}"}
],
tags=["support"]
)
# Quick sync example
prompt = Prompt.create(
name="quick-prompt",
template="Hi {{name}}!"
)
```
#### Fetching and Using Prompts
```python
# Get latest version (sync)
prompt = Prompt.get("greeting")
# Get specific version (async)
prompt = await Prompt.aget("greeting", version=2)
# Get labeled version (sync)
prompt = Prompt.get("greeting", label="production")
# Get by ID (async)
prompt = await Prompt.aget(prompt_id="prompt-uuid")
# Format prompt with variables
message = prompt.format(name="Alice", company="Acme Corp")
# Result: "Hello Alice! Welcome to Acme Corp."
# Chat prompt formatting (async)
prompt = await Prompt.aget("support-bot")
messages = prompt.format(product="CloudSync", question="How do I sync?")
# Result: [{"role": "system", "content": "..."}, {"role": "user", "content": "..."}]
```
#### Versioning and Labels
```python
# Publish new version (async)
new_version = await prompt.apublish(
template="Hi {{name}}! Welcome to {{company}}. We're excited to have you!",
message="Added enthusiastic tone"
)
print(f"Published version {new_version.version}")
# Set labels (sync)
prompt.set_label("staging", version=2)
prompt.set_label("production", version=2)
# Promote staging to production after testing (async)
staging_prompt = await Prompt.aget("greeting", label="staging")
# ... test the prompt ...
await staging_prompt.aset_label("production")
```
#### LangChain Conversion
```python
from langchain_core.prompts import PromptTemplate, ChatPromptTemplate
# Convert text prompt to LangChain (sync)
prompt = Prompt.get("greeting", label="production")
lc_prompt = prompt.to_langchain() # Returns PromptTemplate
result = lc_prompt.format(name="Bob", company="TechCorp")
# Convert chat prompt to LangChain (async)
chat_prompt = await Prompt.aget("support-bot", label="production")
lc_chat = chat_prompt.to_langchain() # Returns ChatPromptTemplate
messages = lc_chat.format_messages(product="DataHub", question="Reset password?")
# Use in chain
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4")
chain = lc_chat | llm
response = await chain.ainvoke({"product": "CloudSync", "question": "Why sync failing?"})
```
#### List and Search
```python
# List all prompts (sync)
prompts = Prompt.list(page=1, page_size=20)
# Filter by tags (async)
support_prompts = await Prompt.alist(tags=["customer-support"])
# Search by text (sync)
results = Prompt.list(search="greeting")
# Async version
prompts = await Prompt.alist(page=1, page_size=10)
```
### 8. Experiment Management
Run experiments over datasets with step-by-step pipelines:
```python
from lumenova_beacon.experiments import Experiment
# Create an experiment (sync)
experiment = Experiment.create(
name="qa-eval-v1",
dataset_id="dataset-uuid",
description="Evaluate QA model accuracy",
)
# Run the experiment with a processing function
run = experiment.run(process_fn=my_pipeline)
# List experiments (async)
experiments, pagination = await Experiment.alist(page=1, page_size=20)
```
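The `process_fn` passed to `experiment.run` is user-supplied. A minimal sketch of one — note that the exact signature the SDK expects is an assumption here, not taken from its documentation:

```python
# Hypothetical pipeline for `experiment.run(process_fn=...)`.
# The record-in / dict-out shape is an assumption, not the documented contract.
def my_pipeline(record: dict) -> dict:
    """Process one dataset record and return the output to evaluate."""
    question = record.get("question", "")
    # A real pipeline would call your model here; we stub the answer.
    answer = f"stub answer for: {question}"
    return {"input": question, "output": answer}
```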
### 9. Evaluation Management
Evaluate experiment results with custom evaluators:
```python
from lumenova_beacon.evaluations import Evaluation, EvaluationRun
# Create an evaluation (sync)
evaluation = Evaluation.create(
name="accuracy-check",
experiment_id="experiment-uuid",
)
# List evaluations (async)
evaluations, pagination = await Evaluation.alist(page=1, page_size=20)
```
### 10. LLM Config Management
Retrieve LLM configurations from the Beacon platform:
```python
from lumenova_beacon.llm_configs import LLMConfig
# Get an LLM config (sync)
config = LLMConfig.get(config_id="config-uuid")
# List configs (async)
configs, pagination = await LLMConfig.alist(page=1, page_size=20)
```
### 11. Data Masking
Automatically mask sensitive data (PII) before spans are exported:
```python
from lumenova_beacon import BeaconClient
from lumenova_beacon.masking.integrations.beacon_guardrails import (
create_beacon_masking_function,
MaskingMode,
PIIType,
)
# Create a masking function backed by Beacon Guardrails API
masking_fn = create_beacon_masking_function(
pii_types=[PIIType.PERSON, PIIType.EMAIL_ADDRESS, PIIType.US_SSN],
mode=MaskingMode.REDACT,
)
# Pass it to the client - all span data is masked before export
client = BeaconClient(
endpoint="https://your-beacon-endpoint.lumenova.ai",
api_key="your-api-key",
masking_function=masking_fn,
)
```
You can also provide a custom masking function:
```python
def my_masking_fn(text: str) -> str:
return text.replace("secret", "***")
client = BeaconClient(masking_function=my_masking_fn)
```
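For something closer to real PII handling, a custom masking function can be built on plain regular expressions — a sketch; the patterns below are illustrative and far less thorough than the Guardrails-backed masker above:

```python
import re

# Illustrative patterns only — real PII detection needs broader coverage.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def regex_masking_fn(text: str) -> str:
    """Redact email addresses and US SSNs before spans are exported."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return SSN_RE.sub("[SSN]", text)

# client = BeaconClient(masking_function=regex_masking_fn)
```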
### 12. Guardrails
Apply content guardrail policies via the Beacon API:
```python
from lumenova_beacon.guardrails import Guardrail
guardrail = Guardrail(guardrail_id="guardrail-uuid")
# Sync
result = guardrail.apply("some user input")
# Async
result = await guardrail.aapply("some user input")
```
## API Reference
### Main Exports
```python
from lumenova_beacon import (
BeaconClient, # Main client
BeaconConfig, # Configuration class
get_client, # Get current client singleton
trace, # Tracing decorator
# Integrations (lazy-loaded)
BeaconCallbackHandler, # LangChain/LangGraph
BeaconStrandsHandler, # Strands Agents
BeaconCrewAIListener, # CrewAI
LangGraphBeaconConfig, # LangGraph configuration
)
from lumenova_beacon.datasets import Dataset, DatasetRecord
from lumenova_beacon.prompts import Prompt
from lumenova_beacon.experiments import Experiment
from lumenova_beacon.evaluations import Evaluation, EvaluationRun
from lumenova_beacon.llm_configs import LLMConfig
from lumenova_beacon.guardrails import Guardrail
from lumenova_beacon.types import SpanKind, StatusCode, SpanType
```
### BeaconClient
```python
client = BeaconClient(
endpoint: str | None = None,
api_key: str | None = None,
file_directory: str | None = None,
session_id: str | None = None,
service_name: str | None = None,
environment: str | None = None,
auto_instrument_opentelemetry: bool = True,
isolated: bool = False,
auto_instrument_litellm: bool = False,
masking_function: Callable | None = None,
verify: bool | None = None,
eager_export: bool | None = None,
)
# Methods
span = client.create_span(name, kind, span_type, session_id)
ctx = client.trace(name, kind, span_type) # Context manager (sync & async)
client.export_span(span) # Export a single span
client.export_spans(spans) # Export multiple spans
client.flush() # Flush pending spans
```
### Dataset
```python
# Class methods (sync - simple names)
dataset = Dataset.create(name: str, description: str | None = None, column_schema: list[dict[str, Any]] | None = None)
dataset = Dataset.get(dataset_id: str, include_records: bool = False)
datasets, pagination = Dataset.list(page=1, page_size=20, search=None)
# Class methods (async - 'a' prefix)
dataset = await Dataset.acreate(...)
dataset = await Dataset.aget(...)
datasets, pagination = await Dataset.alist(...)
# Instance methods (sync - simple names)
dataset.save()
dataset.update(name=None, description=None)
dataset.delete()
record = dataset.create_record(data: dict[str, Any])
dataset.bulk_create_records(records: list[dict])
records, pagination = dataset.list_records(page=1, page_size=50)
# Instance methods (async - 'a' prefix)
await dataset.asave()
await dataset.aupdate(...)
await dataset.adelete()
record = await dataset.acreate_record(...)
await dataset.abulk_create_records(...)
records, pagination = await dataset.alist_records(...)
# Properties
dataset.id
dataset.name
dataset.description
dataset.record_count
dataset.created_at
dataset.updated_at
dataset.column_schema
```
### DatasetRecord
```python
# Class methods (sync - simple names)
record = DatasetRecord.get(dataset_id: str, record_id: str)
records, pagination = DatasetRecord.list(dataset_id: str, page=1, page_size=50)
# Class methods (async - 'a' prefix)
record = await DatasetRecord.aget(...)
records, pagination = await DatasetRecord.alist(...)
# Instance methods (sync - simple names)
record.save()
record.update(data: dict[str, Any] | None = None)
record.delete()
# Instance methods (async - 'a' prefix)
await record.asave()
await record.aupdate(...)
await record.adelete()
# Properties
record.id
record.dataset_id
record.data # dict[str, Any] - flexible column data
record.created_at
record.updated_at
```
### Prompt
```python
# Class methods (sync - simple names)
prompt = Prompt.create(name, template=None, messages=None, description=None, tags=None)
prompt = Prompt.get(name=None, prompt_id=None, label="latest", version=None)
prompts = Prompt.list(page=1, page_size=10, tags=None, search=None)
# Class methods (async - 'a' prefix)
prompt = await Prompt.acreate(...)
prompt = await Prompt.aget(...)
prompts = await Prompt.alist(...)
# Instance methods (sync - simple names)
prompt.update(name=None, description=None, tags=None)
prompt.delete()
new_version = prompt.publish(template=None, messages=None, message="")
prompt.set_label(label: str, version: int | None = None)
# Instance methods (async - 'a' prefix)
await prompt.aupdate(...)
await prompt.adelete()
new_version = await prompt.apublish(...)
await prompt.aset_label(...)
# Rendering (always sync)
result = prompt.format(**kwargs)
result = prompt.compile(variables: dict)
template = prompt.to_template() # Convert to Python f-string format
lc_prompt = prompt.to_langchain() # Convert to LangChain template
# Properties
prompt.id
prompt.name
prompt.type # "text" or "chat"
prompt.version
prompt.template # For TEXT prompts
prompt.messages # For CHAT prompts
prompt.labels # list[str]
prompt.tags # list[str]
```
### Span
```python
span = Span(name, kind, span_type)
# Lifecycle
span.start()
span.end(status_code=StatusCode.OK)
# Status
span.set_status(StatusCode.ERROR, "description")
span.record_exception(exc: Exception)
# Attributes
span.set_attribute("key", value)
span.set_attributes({"k1": "v1", "k2": "v2"})
span.set_input(data: dict)
span.set_output(data: dict)
span.set_metadata("key", value)
# Properties
span.trace_id
span.span_id
span.parent_id
span.name
span.kind
span.span_type
```
### Type Enums
```python
from lumenova_beacon.types import SpanKind, StatusCode, SpanType
# SpanKind
SpanKind.INTERNAL
SpanKind.SERVER
SpanKind.CLIENT
SpanKind.PRODUCER
SpanKind.CONSUMER
# StatusCode
StatusCode.UNSET
StatusCode.OK
StatusCode.ERROR
# SpanType
SpanType.SPAN
SpanType.GENERATION
SpanType.CHAIN
SpanType.TOOL
SpanType.RETRIEVAL
SpanType.AGENT
SpanType.FUNCTION
SpanType.REQUEST
SpanType.SERVER
SpanType.TASK
SpanType.CACHE
SpanType.EMBEDDING
SpanType.HANDOFF
SpanType.CONDITIONAL
```
## Error Handling
### Exception Hierarchy
```python
from lumenova_beacon.exceptions import (
BeaconError, # Base exception
ConfigurationError, # Configuration issues
TransportError, # Transport errors
HTTPTransportError, # HTTP transport errors
FileTransportError, # File transport errors
SpanError, # Span-related errors
DatasetError, # Dataset errors
DatasetNotFoundError, # Dataset not found
DatasetValidationError, # Dataset validation
PromptError, # Prompt errors
PromptNotFoundError, # Prompt not found
PromptValidationError, # Prompt validation
PromptCompilationError, # Template compilation
PromptNetworkError, # Network errors
ExperimentError, # Experiment errors
ExperimentNotFoundError, # Experiment not found
ExperimentValidationError, # Experiment validation
EvaluationError, # Evaluation errors
EvaluationNotFoundError, # Evaluation not found
EvaluationValidationError, # Evaluation validation
LLMConfigError, # LLM config errors
LLMConfigNotFoundError, # LLM config not found
MaskingError, # Masking errors
MaskingAPIError, # Masking API errors
MaskingNotFoundError, # Masking not found
MaskingValidationError, # Masking validation
GuardrailError, # Guardrail errors
GuardrailNotFoundError, # Guardrail not found
GuardrailValidationError, # Guardrail validation
)
```
### Retry Logic
All HTTP operations automatically retry up to 3 times with exponential backoff:
```python
from lumenova_beacon.exceptions import PromptNetworkError, PromptNotFoundError
try:
    prompt = await Prompt.aget("my-prompt")
except PromptNetworkError as e:
# Failed after 3 automatic retries
print(f"Network error: {e}")
except PromptNotFoundError as e:
# Prompt doesn't exist
print(f"Not found: {e}")
```
### Graceful Degradation
```python
from lumenova_beacon import BeaconClient, trace
# Disable tracing in development
client = BeaconClient(enabled=False)
# Tracing becomes no-op when disabled
@trace
def my_function():
return "result" # No tracing overhead
```
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
| text/markdown | null | Lumenova AI <support@lumenova.ai> | null | Lumenova AI <support@lumenova.ai> | null | ai, langchain, llm, monitoring, observability, opentelemetry, sdk, tracing | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Monitoring",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"tenacity>=9.0.0",
"typing-extensions>=4.12.0",
"crewai>=0.28.0; extra == \"crewai\"",
"opentelemetry-api==1.38.0; extra == \"crewai\"",
"opentelemetry-exporter-otlp-proto-common==1.38.0; extra == \"crewai\"",
"opentelemetry-exporter-otlp-proto-grpc==1.38.0; extra == \"crewai\"",
"opentelemetry-exporter-otlp-proto-http==1.38.0; extra == \"crewai\"",
"opentelemetry-sdk==1.38.0; extra == \"crewai\"",
"black>=24.0.0; extra == \"dev\"",
"ipython>=8.37.0; extra == \"dev\"",
"mypy>=1.10.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\"",
"anthropic>=0.40.0; extra == \"examples\"",
"crewai>=0.1.0; extra == \"examples\"",
"fastapi>=0.115.0; extra == \"examples\"",
"google-genai>=1.0.0; extra == \"examples\"",
"litellm>=1.70.0; extra == \"examples\"",
"llama-index-core>=0.12.3; extra == \"examples\"",
"llama-index-embeddings-azure-openai>=0.3.0; extra == \"examples\"",
"llama-index-llms-azure-openai>=0.3.0; extra == \"examples\"",
"openai>=1.0.0; extra == \"examples\"",
"openinference-instrumentation-google-genai>=0.1.0; extra == \"examples\"",
"openinference-instrumentation-llama-index>=4.3.8; extra == \"examples\"",
"opentelemetry-instrumentation-anthropic>=0.1.0; extra == \"examples\"",
"opentelemetry-instrumentation-fastapi==0.59b0; extra == \"examples\"",
"opentelemetry-instrumentation-httpx==0.59b0; extra == \"examples\"",
"opentelemetry-instrumentation-openai>=0.1.0; extra == \"examples\"",
"opentelemetry-instrumentation-redis==0.59b0; extra == \"examples\"",
"opentelemetry-instrumentation-requests==0.59b0; extra == \"examples\"",
"opentelemetry-instrumentation==0.59b0; extra == \"examples\"",
"python-dotenv>=1.0.0; extra == \"examples\"",
"redis>=5.0.0; extra == \"examples\"",
"requests>=2.31.0; extra == \"examples\"",
"strands-agents-tools>=0.2.0; extra == \"examples\"",
"strands-agents>=1.15.0; extra == \"examples\"",
"faiss-cpu>=1.13.1; extra == \"langchain\"",
"langchain-anthropic>=0.3.0; extra == \"langchain\"",
"langchain-community>=0.4.1; extra == \"langchain\"",
"langchain-core>=0.3.0; extra == \"langchain\"",
"langchain-openai>=1.1.5; extra == \"langchain\"",
"langchain>=1.0.0; extra == \"langchain\"",
"langgraph>=1.0.5; extra == \"langchain\"",
"opentelemetry-api==1.38.0; extra == \"langchain\"",
"opentelemetry-exporter-otlp-proto-common==1.38.0; extra == \"langchain\"",
"opentelemetry-exporter-otlp-proto-grpc==1.38.0; extra == \"langchain\"",
"opentelemetry-exporter-otlp-proto-http==1.38.0; extra == \"langchain\"",
"opentelemetry-sdk==1.38.0; extra == \"langchain\"",
"litellm>=1.70.0; extra == \"litellm\"",
"opentelemetry-api==1.38.0; extra == \"litellm\"",
"opentelemetry-exporter-otlp-proto-common==1.38.0; extra == \"litellm\"",
"opentelemetry-exporter-otlp-proto-grpc==1.38.0; extra == \"litellm\"",
"opentelemetry-exporter-otlp-proto-http==1.38.0; extra == \"litellm\"",
"opentelemetry-sdk==1.38.0; extra == \"litellm\"",
"opentelemetry-api==1.38.0; extra == \"opentelemetry\"",
"opentelemetry-exporter-otlp-proto-common==1.38.0; extra == \"opentelemetry\"",
"opentelemetry-exporter-otlp-proto-grpc==1.38.0; extra == \"opentelemetry\"",
"opentelemetry-exporter-otlp-proto-http==1.38.0; extra == \"opentelemetry\"",
"opentelemetry-sdk==1.38.0; extra == \"opentelemetry\"",
"boto3>=1.34.0; extra == \"strands\"",
"opentelemetry-api==1.38.0; extra == \"strands\"",
"opentelemetry-exporter-otlp-proto-common==1.38.0; extra == \"strands\"",
"opentelemetry-exporter-otlp-proto-grpc==1.38.0; extra == \"strands\"",
"opentelemetry-exporter-otlp-proto-http==1.38.0; extra == \"strands\"",
"opentelemetry-sdk==1.38.0; extra == \"strands\"",
"strands-agents-tools>=0.2.0; extra == \"strands\"",
"strands-agents>=1.15.0; extra == \"strands\""
] | [] | [] | [] | [
"Homepage, https://lumenova.ai"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T06:52:36.643776 | lumenova_beacon-2.4.7.tar.gz | 631,637 | 8e/9e/becf3dff9fd7cc87449f4c0a120dd59ef85a62c5c8160e43488b71b2038a/lumenova_beacon-2.4.7.tar.gz | source | sdist | null | false | 7158d5fb0085143748e8f904ba4e7f8c | 7b028c34f86f6e7a1a021ab0e6cb75c02ac2671d7dc860f344c4a08823a701da | 8e9ebecf3dff9fd7cc87449f4c0a120dd59ef85a62c5c8160e43488b71b2038a | Apache-2.0 | [
"LICENSE"
] | 250 |
2.4 | wisent | 0.7.1497 | Monitor and influence AI Brains | <p align="center">
<img src="banner.png" alt="Wisent Banner" width="100%">
</p>
<p align="center">
<code>pip install wisent</code>
</p>
## Overview
Wisent allows you to control your AI by identifying brain patterns corresponding to responses you don't want, such as hallucinations or harmful outputs. We use contrastive pairs of representations to detect when a model might be generating harmful content or hallucinating. Learn more at [wisent.ai/documentation](https://www.wisent.ai/documentation).
## License
This project is licensed under the MIT License - see the LICENSE file for details.
| text/markdown | Lukasz Bartoszcze and the Wisent Team | lukasz.bartoszcze@wisent.ai | null | null | null | nlp, machine learning, language models, safety, guardrails, lm-evaluation-harness | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | https://github.com/wisent-ai/wisent | null | >=3.8 | [] | [] | [] | [
"torch>=1.9.0",
"transformers>=4.46.0",
"tqdm>=4.50.0",
"scikit-learn>=0.24.0",
"pandas>=1.2.0",
"numpy>=1.21.0",
"numba>=0.56.0",
"datasets>=2.0.0",
"sentence-transformers>=2.0.0",
"faiss-cpu>=1.7.0",
"uncensorbench>=0.2.0",
"pebble>=5.0.0",
"latex2sympy2_extended>=1.0.0",
"sae_lens>=0.1.0",
"trl>=0.7.0",
"peft>=0.7.0",
"safetensors>=0.4.0",
"huggingface_hub>=0.20.0",
"psycopg2-binary>=2.9.0",
"pynndescent>=0.5.0",
"hdbscan>=0.8.0",
"umap-learn>=0.5.0",
"pacmap>=0.7.0",
"lm-eval==0.4.8; extra == \"harness\"",
"pyreft>=0.1.0; extra == \"reft\"",
"flash-attn>=2.5.0; extra == \"cuda\"",
"sparsify>=0.1.0; extra == \"sparsify\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.5 | 2026-02-21T06:52:36.152579 | wisent-0.7.1497.tar.gz | 1,805,973 | ee/b7/b457e2e94c05d75edfecb4db3ee899ca044ca2e541b24f614666b0c883c1/wisent-0.7.1497.tar.gz | source | sdist | null | false | ec88fa3ff0f5a3f75adee0d08d1e3b42 | f052e96c917b093d83b303ce08598dd76fd70f26d9177ce28e7d485df514f2ae | eeb7b457e2e94c05d75edfecb4db3ee899ca044ca2e541b24f614666b0c883c1 | null | [
"LICENSE"
] | 257 |
2.4 | gevent-runner | 0.0.3 | The greenlet launch tool | # Gevent Runner
**Gevent Runner** is a tool for managing greenlet-based tasks in Python using the gevent library.
## Features
- ✅ Run multiple concurrent tasks at once
- ✅ Dynamically add and remove tasks at runtime
- ✅ Graceful handling of termination signals (SIGTERM, SIGINT)
- ✅ Thread-safe task management
## Installation
### From source
```bash
pip install -e .
```
### For development
```bash
pip install -e ".[dev]"
```
## Requirements
- Python >= 3.10
- gevent == 24.11.1
## Quick Start
### Basic example
Create several tasks and run them concurrently:
```python
import time
import gevent
from classic.gevent_runner.runner import GreenletRunner
def print_numbers():
count = 0
while True:
count += 1
print(f"[Numbers Task] Count: {count}")
gevent.sleep(2)
def print_timestamp():
while True:
current_time = time.strftime("%H:%M:%S")
print(f"[Timestamp Task] Current time: {current_time}")
gevent.sleep(3)
def print_heartbeat():
while True:
print(f"[Heartbeat Task] System is alive!")
gevent.sleep(5)
def main():
print("Starting GreenletRunner with 3 concurrent tasks...")
print("Press Ctrl+C to stop all tasks\n")
    # Create a runner instance
runner = GreenletRunner()
    # Add tasks to the runner
runner.add(print_numbers, print_timestamp, print_heartbeat)
    # Start the main loop (blocks until SIGTERM or SIGINT is received)
runner.run()
print("\nAll tasks stopped gracefully.")
if __name__ == "__main__":
main()
```
**Output:**
```
Starting GreenletRunner with 3 concurrent tasks...
Press Ctrl+C to stop all tasks
[Numbers Task] Count: 1
[Timestamp Task] Current time: 12:34:56
[Numbers Task] Count: 2
[Heartbeat Task] System is alive!
[Timestamp Task] Current time: 12:34:59
...
```
Press `Ctrl+C` to stop all tasks gracefully.
## Usage
### Creating a runner
```python
from classic.gevent_runner.runner import GreenletRunner
runner = GreenletRunner()
```
### Adding tasks
You can add one or more tasks at a time:
```python
# Add a single task
runner.add(my_task)
# Add several tasks
runner.add(task1, task2, task3)
# Add a task as a daemon
runner.add(task1, daemon=True)
```
**Important:** A task must be a callable. Each task can be added only once.
### Removing tasks
Stop and remove tasks with an optional timeout:
```python
# Remove a single task
runner.remove(my_task)
# Remove several tasks with a timeout
runner.remove(task1, task2, timeout=2.0)
```
When a task is removed, its greenlet is shut down gracefully.
### Starting the runner
```python
# Start the main loop (blocking call)
runner.run()
```
The `run()` method blocks until a `SIGTERM` or `SIGINT` signal (Ctrl+C) is received.
| text/markdown | null | Sergey Variasov <variasov@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"gevent~=25.9",
"build~=1.2.2.post1; extra == \"dev\"",
"pytest==8.3.4; extra == \"dev\"",
"pytest-cov==6.0.0; extra == \"dev\"",
"twine~=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/variasov/gevent-runner"
] | twine/6.2.0 CPython/3.10.13 | 2026-02-21T06:52:20.771492 | gevent_runner-0.0.3.tar.gz | 4,533 | 21/b9/f5fa5e1542561081ba8aab2d0bbf0e52c0711fcaa674d92f07e5bc38c951/gevent_runner-0.0.3.tar.gz | source | sdist | null | false | 1e4d396169748959bffb2eafe86cabfe | 65c2fe4a45de3b94ad69970bd65d45cc811f94a9705f784675cf642b36754d62 | 21b9f5fa5e1542561081ba8aab2d0bbf0e52c0711fcaa674d92f07e5bc38c951 | null | [
"LICENSE"
] | 239 |
2.4 | guten-morgen | 0.3.3 | CLI for Morgen calendar and task management — optimised for LLM consumption | # guten-morgen
A CLI for [Morgen](https://morgen.so) calendar and task management, designed for both humans and LLM agents.
All commands emit structured JSON, making it easy to pipe into scripts, `jq`, or feed directly to AI coding assistants like Claude Code.
## Features
- **Unified task view** — see tasks from Morgen, Linear, and Notion in one place
- **Calendar groups** — filter events by work/personal/family with a single flag
- **Time-blocking** — schedule tasks as calendar events with `tasks schedule`
- **Tag lifecycle** — model task stages (Active, Waiting-On, Someday) with tags
- **LLM-friendly output** — `--json`, `--response-format concise`, `--jq`, `--fields` for token-efficient responses
- **Smart caching** — TTL-based cache with `cache clear` and `cache stats`
## Installation
Requires Python 3.10+.
```bash
# Run without installing (uv)
uvx guten-morgen --help
# Install globally (uv — recommended)
uv tool install guten-morgen
# Install globally (pipx)
pipx install guten-morgen
# Install into current environment (pip)
pip install guten-morgen
# From source (development)
git clone https://github.com/tenfourty/guten-morgen.git
cd guten-morgen
uv sync --all-extras
uv run pre-commit install
```
All install methods expose both `gm` (short) and `guten-morgen` (long) commands.
## Setup
1. **Get an API key** from [Morgen Platform](https://platform.morgen.so/) (Settings > API Keys)
2. **Create your `.env` file:**
```bash
cp .env.example .env
# Edit .env and add your MORGEN_API_KEY
```
3. **Configure calendar groups** (optional):
```bash
cp .config.toml.example .config.toml
# Edit .config.toml with your account details
```
4. **Verify it works:**
```bash
gm accounts
gm today --json
```
## Quick Start
```bash
# What's coming up?
gm next --json --response-format concise
# Full daily overview
gm today --json
# List overdue tasks across all sources
gm tasks list --status open --overdue --json --group-by-source
# Create and time-block a task
gm tasks create --title "Write design doc" --due 2026-02-20 --duration 90
gm tasks schedule <task-id> --start 2026-02-20T10:00:00
# Filter by tag
gm tasks list --tag "Active" --status open --json
```
Run `gm usage` for the full command reference.
## Calendar Groups
Groups let you filter events by context. Configure in `.config.toml`:
```toml
default_group = "work"
active_only = true
[groups.work]
accounts = ["you@company.com:google"]
calendars = ["Work Calendar"]
[groups.personal]
accounts = ["you@personal.com:fastmail"]
calendars = ["Personal"]
```
Use `--group personal` to switch context, or `--group all` to see everything.
## Global Options
| Option | Description |
|--------|-------------|
| `--format table\|json\|jsonl\|csv` | Output format (default: table) |
| `--json` | Shortcut for `--format json` |
| `--fields <list>` | Select specific fields |
| `--jq <expr>` | jq filtering on output |
| `--response-format concise` | ~1/3 the tokens (great for LLMs) |
| `--short-ids` | Truncate IDs to 12 chars |
| `--group NAME` | Filter by calendar group |
| `--no-cache` | Bypass cache |
## Development
```bash
# Install dev dependencies
uv sync --all-extras
uv run pre-commit install
# Run tests
uv run pytest -x -q --cov
# Type checking
uv run mypy src/
# Lint
uv run ruff check .
```
Pre-commit hooks enforce ruff, mypy, bandit, pytest (90% coverage minimum), and [ggshield](https://github.com/GitGuardian/ggshield) secret scanning. To use ggshield, [create a free account](https://dashboard.gitguardian.com/signup) and set `GITGUARDIAN_API_KEY`.
### Architecture
```
src/guten_morgen/
cli.py Click commands — boundary layer (model -> dict)
client.py MorgenClient — typed API wrapper (Pydantic models)
models.py Pydantic v2 models
output.py Render pipeline (table/json/jsonl/csv + fields + jq)
errors.py Exception hierarchy -> structured JSON on stderr
config.py Settings from .env
time_utils.py Date range helpers
cache.py TTL-based request cache
groups.py Calendar group filtering from .config.toml
```
**The boundary rule:** `client.py` returns Pydantic models, `cli.py` converts with `model_dump()`, `output.py` only sees dicts.
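A toy illustration of this rule — `dataclasses.asdict` stands in for Pydantic's `model_dump()`, and the class and function names are hypothetical, not from this codebase:

```python
from dataclasses import dataclass, asdict

# "client.py" — returns typed models (a dataclass stands in for Pydantic here)
@dataclass
class Event:
    id: str
    title: str

def fetch_events() -> list[Event]:
    return [Event(id="1", title="Standup")]

# "cli.py" — the boundary: models become plain dicts exactly here
def cli_events() -> list[dict]:
    return [asdict(e) for e in fetch_events()]

# "output.py" — only ever sees dicts, never models
def render(rows: list[dict]) -> str:
    return "\n".join(str(row) for row in rows)
```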
## Claude Code Integration
This project includes a `CLAUDE.md` with conventions and a `.claude/` directory with hooks and skills for use with [Claude Code](https://docs.anthropic.com/en/docs/claude-code). These are optional — the CLI works without them.
## License
[MIT](LICENSE)
| text/markdown | Jeremy Brown | null | null | null | MIT | calendar, cli, llm, morgen, tasks | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Office/Business :: Scheduling",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.1",
"httpx>=0.27",
"jq>=1.8",
"pydantic>=2.0",
"python-dotenv>=1.0",
"rich>=13.0",
"tomli>=2.0; python_version < \"3.11\"",
"bandit>=1.7; extra == \"dev\"",
"mypy>=1.10; extra == \"dev\"",
"pre-commit>=3.7; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.5; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"test\"",
"pytest>=8.0; extra == \"test\""
] | [] | [] | [] | [
"Repository, https://github.com/tenfourty/guten-morgen"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:52:13.764253 | guten_morgen-0.3.3.tar.gz | 164,375 | 03/18/f26c7d4f7c1a57b9a0590ffc2a6cda8734d399651b24a002159ceb20d461/guten_morgen-0.3.3.tar.gz | source | sdist | null | false | 97c8da29fedde80d949bb6d8b15f7800 | 99f89a4f94e0cfba786196d1a8774335e9e63bacfc5dd78f236d10fbbb64dce3 | 0318f26c7d4f7c1a57b9a0590ffc2a6cda8734d399651b24a002159ceb20d461 | null | [
"LICENSE"
] | 229 |
2.2 | basilisk-engine | 0.2.20 | Python bindings for the Basilisk C++ library | # Basilisk Engine
## Building and Running the Project
To build this project from source, you'll use **CMake**. Follow these steps from the command line:
1. First, navigate to the `build` directory:
```bash
cd build
```
2. Next, run CMake to configure the project. This generates the necessary build files for your system.
```bash
cmake ..
```
3. Then, use CMake to build the project. This compiles the source code and creates the executable file.
```bash
cmake --build .
```
Once the build is complete, you can run the final program with this command:
```bash
./render
```
## Steps for Publishing Wheels
```bash
cmake ..                                  # run from build/
cmake --build build                       # run from the project root
cmake --install build --prefix ./python
pip install build                         # run once
python -m build
```
# Todo
## Rendering
- [x] Lighting System
- [x] Directional
- [x] Point
- [ ] Spot
- [x] Ambient
- [x] Skybox
- [ ] Shadows
- [ ] Basic PBR
- [ ] Bloom
- [ ] Text Rendering
- [ ] SSAO
## QOL
- [ ] Default lights
- [ ] Default material/texture
- [ ] Material from path
- [ ] Default Mesh
## Optimizations
- [x] Forward+
- [ ] CSM
- [ ] Frustum Culling
- [ ] Auto LOD (meshoptimizer)
- [ ] Instancing
## Physics
I want to set up a build script to automate PyInstaller builds from Python:
1. Call PyInstaller on a given file.
2. Copy all top-level files and folders (other than those generated by PyInstaller) into the `dist/.../_internal/` folder.
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: C++",
"Topic :: Multimedia :: Graphics",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Games/Entertainment"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pyinstaller>=6.0; extra == \"build\""
] | [] | [] | [] | [
"Homepage, https://github.com/BasiliskGroup/BasiliskEngine",
"Repository, https://github.com/BasiliskGroup/BasiliskEngine",
"Documentation, https://github.com/BasiliskGroup/BasiliskEngine#readme",
"Bug Tracker, https://github.com/BasiliskGroup/BasiliskEngine/issues"
] | twine/5.0.0 CPython/3.12.12 | 2026-02-21T06:52:03.531745 | basilisk_engine-0.2.20.tar.gz | 24,564,009 | 83/ef/2b2b68c0bcf5aaae286b4ba6140cd4e0689d29b7d68126aec9cd70fdc62d/basilisk_engine-0.2.20.tar.gz | source | sdist | null | false | 0862e8a89e34385b1654a00d28887d2a | 6b1610c2b396681f8228c55a071acd38e33e1186ada44a866c87d120d86762bd | 83ef2b2b68c0bcf5aaae286b4ba6140cd4e0689d29b7d68126aec9cd70fdc62d | null | [] | 342 |
2.4 | skill-scanner | 0.1.1 | Security scanner for AI agent skills and instruction artifacts | # skill-scanner
[](https://github.com/thedevappsecguy/skill-scanner/actions/workflows/ci.yml)
[](https://github.com/thedevappsecguy/skill-scanner/actions/workflows/publish-testpypi.yml)
[](https://github.com/thedevappsecguy/skill-scanner/actions/workflows/release.yml)
[](https://github.com/thedevappsecguy/skill-scanner/actions/workflows/zizmor.yml)
`skill-scanner` reviews AI skill and instruction artifacts for security risk using:
- OpenAI analysis
- VirusTotal analysis
## Requirements
- Python 3.11+
- [`uv`](https://docs.astral.sh/uv/)
- OpenAI and/or VirusTotal API key (at least one)
## Install (from source)
```bash
uv sync --all-extras --group dev
```
Run with:
```bash
uv run skill-scanner --help
```
Alias:
```bash
uv run skillscan --help
```
## What gets scanned
By default, `discover` and `scan` detect markdown-based skill/instruction artifacts (for example `SKILL.md`, `AGENTS.md`, `CLAUDE.md`, `*.instructions.md`, `*.prompt.md`, `*.agent.md`, `.mdc`).
Use `--path` to target a specific file or folder.
## Quick start
```bash
# See targets
uv run skill-scanner discover --format json
# Verify key/model configuration
uv run skill-scanner doctor
# Run a combined scan (if both keys are configured)
uv run skill-scanner scan --format summary
```
## Key configuration and analyzer selection
`scan` requires at least one analyzer enabled.
- If only `OPENAI_API_KEY` is available, AI runs and VT is disabled.
- If only `VT_API_KEY` is available, VT runs and AI is disabled.
- If both keys are available, VT findings are included and VT context is passed into AI analysis.
- You can disable either analyzer with `--no-ai` or `--no-vt`.
## API key safety
Use 1Password secret references instead of plaintext secrets:
```bash
OPENAI_API_KEY=op://Developer/OpenAI/api_key
VT_API_KEY=op://Developer/VirusTotal/api_key
```
Run the scanner through 1Password CLI so references are resolved at runtime:
```bash
op run --env-file=.env -- uv run skill-scanner scan --format summary
```
Security best practice:
- Prefer a 1Password Service Account scoped to only the vault/items required for scanning (least privilege).
Reference:
- https://developer.1password.com/docs/cli/secret-references/
## Output formats
`scan --format` supports:
- `table` (default)
- `summary`
- `json`
- `sarif`
You can write output to a file with `--output <path>`.
## Useful commands
```bash
# List providers
uv run skill-scanner providers
# Scan one path only
uv run skill-scanner scan --path ./some/skill/folder --format summary
# List discovered targets without running analyzers
uv run skill-scanner scan --list-targets
# Scan only selected discovered targets (repeat --target)
uv run skill-scanner scan --target /absolute/path/to/SKILL.md --target /absolute/path/to/AGENTS.md --format summary
# Filter to medium+
uv run skill-scanner scan --min-severity medium --format summary
# Non-zero exit if high+ findings exist
uv run skill-scanner scan --fail-on high --format summary
```
`--list-targets` can be used without API keys because it only runs discovery and exits.
## Exit behavior
- `0`: scan completed and fail threshold not hit
- `1`: `--fail-on` threshold matched
- `2`: no analyzers enabled (for example, missing API keys combined with `--no-ai`/`--no-vt`), or `--target` did not match any discovered target
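The exit-code contract above can be sketched as a small helper (illustrative only — the function, its name, and the severity ordering are hypothetical, not part of the CLI):

```python
SEVERITIES = ["low", "medium", "high", "critical"]

def exit_code(analyzers_enabled, finding_severities, fail_on, matched_targets=True):
    """Illustrative mapping of scan state to the documented exit codes."""
    if not analyzers_enabled or not matched_targets:
        return 2  # misconfiguration: no analyzers, or --target matched nothing
    threshold = SEVERITIES.index(fail_on)
    if any(SEVERITIES.index(s) >= threshold for s in finding_severities):
        return 1  # --fail-on threshold matched
    return 0  # clean run
```

In CI, branching on these codes lets a pipeline distinguish "findings" from "scanner misconfigured".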
| text/markdown | skill-scanner | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx==0.28.1",
"pydantic==2.12.5",
"pyyaml==6.0.3",
"rich==14.3.2",
"typer==0.24.0",
"openai==2.21.0; extra == \"all\"",
"vt-py==0.22.0; extra == \"all\"",
"openai==2.21.0; extra == \"openai\"",
"vt-py==0.22.0; extra == \"virustotal\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:51:59.170079 | skill_scanner-0.1.1.tar.gz | 127,490 | 5f/69/23eee0f14144fec7484399c621a0ef9cf1ce6662c0778053da3b12935172/skill_scanner-0.1.1.tar.gz | source | sdist | null | false | c0cbd67e54960fca0016fc8200d4264d | 88249bcee7f30b3345b4b6636859b69b6128a66d7101178e3e546075d33808e8 | 5f6923eee0f14144fec7484399c621a0ef9cf1ce6662c0778053da3b12935172 | null | [] | 237 |
2.3 | dycw-postgres | 0.2.14 | Library to operate `postgres` and `pgbackrest` | # `postgres`
Library to operate `postgres` and `pgbackrest`
| text/markdown | Derek Wan | Derek Wan <d.wan@icloud.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.3.1",
"dycw-installer>=0.9.11",
"dycw-utilities>=0.192.0",
"pydantic-settings>=2.13.1",
"click==8.3.1; extra == \"cli\"",
"dycw-installer==0.9.11; extra == \"cli\"",
"dycw-utilities==0.192.0; extra == \"cli\"",
"pydantic-settings==2.13.1; extra == \"cli\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T06:50:36.724011 | dycw_postgres-0.2.14-py3-none-any.whl | 21,155 | 6a/c6/c960071241fb14748c0c65f910038ea59ee0abfa83ade01e212a7bbcd996/dycw_postgres-0.2.14-py3-none-any.whl | py3 | bdist_wheel | null | false | 440d61f9249ef37a5587c8a2a8d8f18f | 533727ed3a7aca0c83c0dc58e4d0e3d8ba8506c4df1dd8d68ebdef476d5973eb | 6ac6c960071241fb14748c0c65f910038ea59ee0abfa83ade01e212a7bbcd996 | null | [] | 98 |
2.4 | fastFE | 0.1.0 | A Python library that simplifies QuantLib for pricing exotic derivatives — curves, vol surfaces, and Monte Carlo out of the box. | # fastFE
A Python library that simplifies [QuantLib](https://www.quantlib.org/) for pricing exotic derivatives. It provides high-level abstractions for yield curve bootstrapping, volatility surface construction, financial model calibration, and Monte Carlo simulation — so you can focus on the economics of a trade rather than QuantLib's low-level API.
## Features
- **Curve bootstrapping** — build yield curves from deposits, swaps, OIS, FRAs, bonds, and SOFR futures for USD, EUR, JPY, TWD, CHF, and GBP with a single function call
- **Volatility surfaces** — construct 1D vol curves and 2D vol surfaces; create calibration helpers for swaptions and Heston models
- **Financial models** — Hull-White (interest rate), Heston (stochastic vol equity), Black-Scholes-Merton (equity/FX), Garman-Kohlhagen (FX), and multi-asset correlation models
- **Monte Carlo paths** — all models expose a uniform `monte_carlo_paths()` interface returning pandas DataFrames indexed by `ql.Date`
- **Longstaff-Schwartz** — built-in LSM engine for Bermudan and American-style early-exercise products
- **Market conventions** — pre-defined fixed and floating leg conventions for major currencies via `Conventions`
- **Schedule utilities** — helpers for combining schedules, computing year fractions, and resolving fixing dates
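As a flavor of the schedule utilities, a year-fraction computation under the Actual/365 Fixed convention can be written in plain Python (a stand-in for illustration, not fastFE's actual helper):

```python
from datetime import date

def year_fraction_act365(start: date, end: date) -> float:
    """Actual/365 Fixed day count: actual calendar days divided by 365."""
    return (end - start).days / 365.0

# 2024 is a leap year, so one calendar year spans 366 actual days.
print(year_fraction_act365(date(2024, 1, 1), date(2025, 1, 1)))
```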
## Installation
```bash
pip install fastFE
```
QuantLib must be installed separately (see [QuantLib-Python installation guide](https://github.com/lballabio/QuantLib)):
```bash
pip install QuantLib
```
## Quick Start
### Build a yield curve
```python
import QuantLib as ql
import pandas as pd
from fastFE import bootstrap_curve
today = ql.Date().todaysDate()
df_deposit = pd.DataFrame({
'tenor': ['1M', '2M', '3M', '6M', '9M'],
    'rate': [0.015, 0.018, 0.02, 0.022, 0.025]
})
df_swap = pd.DataFrame({
'tenor': ['1Y', '2Y', '5Y', '7Y', '10Y'],
'rate': [0.015, 0.018, 0.02, 0.022, 0.025]
})
curve = bootstrap_curve(today, deposit=df_deposit, swap=df_swap)
print(curve.discount(today + ql.Period('1Y')))
```
`bootstrap_curve` automatically applies the correct market conventions for the currency inferred from the helpers. Pass `currency='EUR'` to override.
See [`examples/examples.py`](examples/examples.py) for a complete reference of all rate helper types (OIS, FRA, bond, SOFR futures).
---
### Construct a volatility surface
```python
import QuantLib as ql
import pandas as pd
from fastFE import create_black_vol_curve, create_black_vol_surface
today = ql.Date().todaysDate()
# 1D vol curve
vol_series = pd.Series(
[0.15, 0.18, 0.20, 0.22, 0.25],
index=['1M', '2M', '3M', '6M', '9M']
)
vol_curve = create_black_vol_curve(vol_series, today)
# 2D vol surface (rows = expiry tenors, columns = strikes)
vol_df = pd.DataFrame(
[[0.20, 0.22, 0.24],
[0.21, 0.23, 0.25],
[0.22, 0.24, 0.26]],
index=['1M', '3M', '6M'],
columns=[90.0, 100.0, 110.0]
)
vol_surface = create_black_vol_surface(vol_df, today)
```
---
### Price an equity derivative with Black-Scholes-Merton
```python
import QuantLib as ql
import pandas as pd
from fastFE import BlackScholesMertonModel, create_black_vol_curve
today = ql.Date().todaysDate()
dayCount = ql.Actual365Fixed()
calendar = ql.WeekendsOnly()
spot = 100.0
risk_free = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.04, dayCount))
dividend = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.01, dayCount))
vol_curve = create_black_vol_curve(
pd.Series([0.20, 0.21, 0.22, 0.23], index=['3M', '6M', '9M', '1Y']),
today
)
model = BlackScholesMertonModel(risk_free, dividend, vol_curve, spot)
fixing_schedule = ql.MakeSchedule(
today, today + ql.Period('1Y'), ql.Period('1M'), calendar=calendar
)
paths = model.monte_carlo_paths(fixing_schedule, numPaths=1000)
# paths: pd.DataFrame, shape (len(fixing_schedule), 1000)
```
See [`examples/equity_link_example.py`](examples/equity_link_example.py) for a complete equity-linked structured product (Heston model, Bermudan knock-out, European knock-in, auto-call).
---
### Price an FX derivative with Garman-Kohlhagen
```python
import QuantLib as ql
from fastFE import GarmanKohlagenProcessModel, create_black_vol_surface
today = ql.Date().todaysDate()
dayCount = ql.Actual365Fixed()
spot = 1.10 # EURUSD
domestic = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.04, dayCount))
foreign = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.02, dayCount))
# vol_surface: see "Construct a volatility surface" above
fx_model = GarmanKohlagenProcessModel(domestic, foreign, vol_surface, spot)
daily_schedule = ql.MakeSchedule(
today, today + ql.Period('1Y'), ql.Period('1D'), calendar=ql.WeekendsOnly()
)
fx_paths = fx_model.monte_carlo_paths(daily_schedule, numPaths=1000)
```
See [`examples/fx_linked_example.py`](examples/fx_linked_example.py) for an FX range accrual note, and [`examples/target_redemption_example.py`](examples/target_redemption_example.py) for a Target Redemption Forward (TRF).
---
### Price an interest rate derivative with Hull-White
```python
import QuantLib as ql
import pandas as pd
from fastFE import HullWhiteModel, bootstrap_curve
today = ql.Date().todaysDate()
ql.Settings.instance().evaluationDate = today
curve = bootstrap_curve(today, deposit=df_deposit, swap=df_swap)  # DataFrames from "Build a yield curve"
hw_model = HullWhiteModel(today, curve)
hw_model.calibrate(df_swaption) # df_swaption: DataFrame with maturity, length, volatility
def create_libor_6M(ts):
return ql.IborIndex(
'Libor6M', ql.Period('6M'), 2, ql.USDCurrency(),
ql.UnitedStates(ql.UnitedStates.Settlement),
ql.ModifiedFollowing, True, ql.Thirty360(ql.Thirty360.USA), ts
)
# fixing_schedule / payment_schedule: ql.Schedule objects for the trade (not shown)
underlying_path, fixings, discount_factors = hw_model.monte_carlo_paths(
index_factories={'libor_6M': create_libor_6M},
fixingSchedule=fixing_schedule,
paymentSchedule=payment_schedule,
numPaths=1000
)
libor_fixings = fixings['libor_6M'] # pd.DataFrame
```
See [`examples/irs_examle.py`](examples/irs_examle.py) for a cancellable Libor IRS, [`examples/sofr_irs_example.py`](examples/sofr_irs_example.py) for a cancellable SOFR IRS with daily compounding, and [`examples/ir_linked_example.py`](examples/ir_linked_example.py) for a CMS spread cancellable IRS.
---
### Bermudan / American optionality with Longstaff-Schwartz
```python
import numpy as np
import pandas as pd
from fastFE import LongstaffSchwartz, subset_to_bool
# cashflows: pd.DataFrame (payment_dates x paths)
# discount_factors: pd.DataFrame (payment_dates x 1)
# exercise_payoff: callable, receives array of path values, returns cashflow array
# exercise_dates: the dates on which early exercise is allowed
exercisable = subset_to_bool(exercise_dates, cashflows.index)
lsm = LongstaffSchwartz(
cashflows=cashflows,
discountFactors=discount_factors,
exercisable=exercisable,
exercise_payoff=lambda x: np.zeros(len(x)),
observable=libor_fixings
)
lsm.backward_induction()
print(lsm.confidence_interval()) # (lower, upper) at 95%
print(lsm.survival_probability()) # probability of not exercising early
print(lsm.exercise_cashflows()) # expected cashflows from early exercise
print(lsm.exercise_mask()) # bool DataFrame: exercise decision per path
```
See [`examples/examples.py`](examples/examples.py) for a standalone LSM example with mock data.
---
### Multi-asset Monte Carlo
```python
from fastFE import MultiAssetModel
corr_matrix = [[1.0, 0.6], [0.6, 1.0]]
# Combine a BSM equity process and a GK FX process
multi_model = MultiAssetModel(
processes=[equity_model.process, fx_model.process],
corrMatrix=corr_matrix
)
fixings = multi_model.monte_carlo_paths(fixing_schedule, numPaths=1000)
equity_paths = fixings[0] # pd.DataFrame
fx_paths = fixings[1] # pd.DataFrame
```
See [`examples/multi_stock_examle.py`](examples/multi_stock_examle.py) for a multi-asset example.
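Under the hood, correlating two Brownian drivers amounts to applying the Cholesky factor of the correlation matrix to independent normals. A stdlib-only sketch of the two-asset case (for intuition only — not fastFE or QuantLib code):

```python
import math
import random

def correlated_normals(rho: float, n: int, seed: int = 0):
    """Draw n pairs of standard normals with target correlation rho."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rng.gauss(0.0, 1.0)
        # Row of the 2x2 Cholesky factor of [[1, rho], [rho, 1]]
        pairs.append((z1, rho * z1 + math.sqrt(1.0 - rho * rho) * z2))
    return pairs
```

The sample correlation of a large draw converges to `rho`, which is exactly what `MultiAssetModel` guarantees across its GBM processes.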
---
### Market conventions
`Conventions` provides pre-built fixed and floating leg convention dictionaries for USD, EUR, JPY, TWD, CHF, and GBP, ready to pass directly into rate helper constructors.
```python
from fastFE import Conventions, create_deposit_rate_helpers, create_swap_rate_helpers
deposit_helpers = create_deposit_rate_helpers(df_deposit, Conventions.USFixedLegConventions())
swap_helpers = create_swap_rate_helpers(
df_swap,
fixed_leg_conventions=Conventions.USFixedLegConventions(),
floating_leg_conventions=Conventions.USFloatingLegConventions()
)
```
---
## Examples
| File | Description |
|------|-------------|
| [`examples/examples.py`](examples/examples.py) | Comprehensive reference — dates, schedules, all rate helpers, all model types, LSM |
| [`examples/irs_examle.py`](examples/irs_examle.py) | Cancellable fixed-vs-Libor IRS using Hull-White + LSM |
| [`examples/sofr_irs_example.py`](examples/sofr_irs_example.py) | Cancellable fixed-vs-SOFR IRS with daily compounding |
| [`examples/ir_linked_example.py`](examples/ir_linked_example.py) | CMS 5Y-2Y spread cancellable IRS (Hull-White + LSM) |
| [`examples/equity_link_example.py`](examples/equity_link_example.py) | Equity-linked note with Heston model, knock-out, knock-in, auto-call |
| [`examples/fx_linked_example.py`](examples/fx_linked_example.py) | FX daily range accrual note (Garman-Kohlhagen) |
| [`examples/target_redemption_example.py`](examples/target_redemption_example.py) | Target Redemption Forward (TRF) in EURUSD |
| [`examples/multi_stock_examle.py`](examples/multi_stock_examle.py) | Multi-asset correlated Monte Carlo |
## API Reference
### `bootstrap_curve(today, *, deposit=None, swap=None, OIS=None, fra=None, bond=None, sofr=None, currency='USD')`
Bootstraps a yield curve from any combination of instrument DataFrames. Returns a `ql.YieldTermStructureHandle`.
### `HullWhiteModel(today, curve)`
Interest rate model. Call `.calibrate(df_swaption)` to fit to market swaptions, then `.monte_carlo_paths(index_factories, fixingSchedule, paymentSchedule, numPaths)` to generate short-rate paths and index fixings.
### `HestonModel(risk_free_curve, dividend_curve, calendar)`
Stochastic volatility equity model. Call `.calibrate(vol_df, spot)`, then `.monte_carlo_paths(schedule, numPaths)`.
### `BlackScholesMertonModel(risk_free_curve, dividend_curve, vol, spot)`
Equity/index model. Accepts constant vol, vol curve, or vol surface. Call `.monte_carlo_paths(schedule, numPaths)`.
### `GarmanKohlagenProcessModel(domestic_curve, foreign_curve, vol, spot)`
FX model. Same vol options and `monte_carlo_paths` interface as BSM.
### `MultiAssetModel(processes, corrMatrix)`
Combines multiple GBM processes with a correlation matrix. Returns a list of DataFrames from `.monte_carlo_paths(schedule, numPaths)`.
### `LongstaffSchwartz(cashflows, discountFactors, exercisable, exercise_payoff, observable)`
Longstaff-Schwartz LSM engine. Call `.backward_induction()` first, then access `.valuations()`, `.confidence_interval()`, `.survival_probability()`, `.exercise_cashflows()`, `.exercise_mask()`.
### `Conventions`
Class with class methods: `USFixedLegConventions()`, `USFloatingLegConventions()`, `EURFixedLegConventions()`, `EURFloatingLegConventions()`, `JPYFixedLegConventions()`, `JPYFloatingLegConventions()`, and equivalents for TWD, CHF, GBP.
## Requirements
- Python >= 3.9
- QuantLib >= 1.32
- pandas >= 1.5
- numpy >= 1.23
- scikit-learn >= 1.2
## License
MIT
| text/markdown | Kuan-Hung Wang | null | null | null | MIT | quantlib, quantitative finance, derivatives, interest rate, monte carlo, yield curve, volatility surface, exotic derivatives, fixed income, hull white, heston | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Office/Business :: Financial",
"Topic :: Scientific/Engineering :: Mathematics"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"QuantLib>=1.32",
"pandas>=1.5",
"numpy>=1.23",
"scikit-learn>=1.2",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"black; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/kuanhungwang/fastFE",
"Issues, https://github.com/kuanhungwang/fastFE/issues"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-21T06:50:13.669762 | fastfe-0.1.0.tar.gz | 28,777 | 2e/ee/44b847ee515b5d8a92fddadf23e1eeca5b6b1f541211352509b4a118b9e5/fastfe-0.1.0.tar.gz | source | sdist | null | false | 98eeaec32a58659a8004cd090e631323 | ce875f17174e4f7ce530704781e0d6e5c3d47c91dcff61bf534d33862ef5a305 | 2eee44b847ee515b5d8a92fddadf23e1eeca5b6b1f541211352509b4a118b9e5 | null | [] | 0 |
2.4 | bitquery-pb2-kafka-package | 0.2.23 | This package contains the pb2 files necessary to interact with Bitquery Kafka Protobuf messages | # Bitquery Protobuf Kafka Package
A Python library containing `pb2` files to simplify parsing blockchain data (Solana, EVM & Tron) from Bitquery Kafka Streams using Protobuf messages.
Read more on Bitquery onchain data streams [here](https://docs.bitquery.io/docs/streams/kafka-streaming-concepts/)
## Installation
Install easily via pip:
```bash
pip install bitquery-pb2-kafka-package
```
## Usage
You can import and use the protobuf-generated Python classes like this:
### ▶️ Price Index
Read about the new price index stream [here](https://docs.bitquery.io/docs/trading/price-index/introduction/)
```python
from price_index import price_index_pb2

price_feed = price_index_pb2.PriceIndexMessage()
```
### ▶️ Solana
```python
from solana import block_message_pb2
# Create a Solana BlockMessage instance
block_message = block_message_pb2.BlockMessage()
# Set fields
block_message.field_name = "value"
# Serialize to bytes
serialized = block_message.SerializeToString()
# Deserialize from bytes
msg = block_message_pb2.BlockMessage()
msg.ParseFromString(serialized)
print(msg)
```
### ▶️ EVM
```python
from evm import block_message_pb2
# Create an EVM BlockMessage instance
evm_block = block_message_pb2.BlockMessage()
# Set fields
evm_block.field_name = "value"
# Serialize and Deserialize
data = evm_block.SerializeToString()
decoded = block_message_pb2.BlockMessage()
decoded.ParseFromString(data)
print(decoded)
```
### ▶️ Tron
```python
from tron import block_message_pb2
# Create a Tron BlockMessage instance
tron_block = block_message_pb2.BlockMessage()
# Set fields
tron_block.field_name = "value"
# Serialize and Deserialize
data = tron_block.SerializeToString()
decoded = block_message_pb2.BlockMessage()
decoded.ParseFromString(data)
print(decoded)
```
## Available Protobuf Messages
### Solana
- `block_message_pb2.BlockMessage`
- `dex_block_message_pb2.DexBlockMessage`
- `ohlc_message_pb2.OhlcMessage`
- `parsed_idl_block_message_pb2.ParsedIdlBlockMessage`
- `token_block_message_pb2.TokenBlockMessage`
### EVM
- `block_message_pb2.BlockMessage`
- `dex_block_message_pb2.DexBlockMessage`
- `parsed_abi_block_message_pb2.ParsedAbiBlockMessage`
- `token_block_message_pb2.TokenBlockMessage`
- `dex_pool_block_message_pb2.DexPoolBlockMessage`
### Tron
- `block_message_pb2.BlockMessage`
- `dex_block_message_pb2.DexBlockMessage`
- `parsed_abi_block_message_pb2.ParsedAbiBlockMessage`
- `token_block_message_pb2.TokenBlockMessage`
| text/markdown | null | Bitquery <divyasshree@bitquery.io> | null | null | MIT License
Copyright (c) 2025 Bitquery
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
| null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/bitquery/streaming_protobuf/tree/main"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T06:49:29.307007 | bitquery_pb2_kafka_package-0.2.23.tar.gz | 18,502 | fa/56/c9a7d22bc8c19e73b2341630928c0552cfeb3b0b406ce7affaf3a9aa14e6/bitquery_pb2_kafka_package-0.2.23.tar.gz | source | sdist | null | false | aec62a371935957fda847ef997b2e933 | a10189e17645ad3b8c67b9757c35f2d8e7044a4d3010c1091ef44b06e9f4ac92 | fa56c9a7d22bc8c19e73b2341630928c0552cfeb3b0b406ce7affaf3a9aa14e6 | null | [
"LICENSE"
] | 250 |
2.3 | wxmp | 1.1.0 | WeChat Official Account API and related tools | # WeChat Official Account API Tools
Tools for the WeChat Official Accounts Platform API, providing official-account search, article-list retrieval, article-content download, and more.
**Project home**: https://github.com/morning-start/wxmp
## Overview
`wxmp` is a Python library for interacting with the WeChat Official Accounts Platform backend API. With it you can:
- Obtain an access token from cookies captured after logging in to WeChat
- Search official accounts (look up account info by keyword)
- Fetch the article list of a given official account
- Validate article links
- Download article content and convert it to Markdown
- Cache results by time range with incremental updates
- Download articles concurrently
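The concurrent-download feature follows the standard thread-pool pattern; a minimal stdlib sketch (the fetch function below is a stand-in for illustration, not part of wxmp's API):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_article(url: str) -> str:
    """Stand-in for the real HTTP download; returns fake Markdown."""
    return f"# Article fetched from {url}\n"

urls = [f"https://mp.weixin.qq.com/s/{i}" for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    pages = list(pool.map(fetch_article, urls))  # map preserves input order
```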
## Installation
```bash
pip install wxmp
```
Or install from source:
```bash
git clone https://github.com/morning-start/wxmp.git
cd wxmp
pip install -e .
```
## Quick Start
### 1. Initialize the API
```python
from wxmp import WxMPAPI
cookies = {
"wxuin": "your_wxuin",
"pass_ticket": "your_pass_ticket",
}
api = WxMPAPI(cookies)
```
### 2. Search Official Accounts
```python
response = api.search_fakeid("Python")
for account in response.list:
    print(f"Name: {account.nickname}")
```
### 3. Use the Time-Range Spider (Recommended)
```python
from wxmp.spider import TimeRangeSpider, TimeRange  # TimeRange import location assumed
from datetime import datetime
spider = TimeRangeSpider.from_cookies_file("cookies.json")
bizs = spider.load_or_search_bizs(["Python编程"])
time_range = TimeRange(
begin=datetime(2024, 1, 1),
end=datetime(2024, 12, 31)
)
df = spider.search_articles_content(bizs, time_range)
spider.save_all_article_content(df, save_dir="temp/article_content/")
```
## Documentation
See the [Wiki](./wiki/README.md) for full documentation:
- [Project Overview](./wiki/项目概览.md) - introduction, tech stack, architecture overview
- [Architecture Design](./wiki/架构设计.md) - design principles, module design, caching strategy
- [API Reference](./wiki/API文档.md) - complete API-layer docs, data models, exception classes
- [User Guide](./wiki/使用指南.md) - quick start, use cases, best practices
- [Data Flow & State Management](./wiki/数据流动与状态管理.md) - data flow, state machine, caching strategy
- [Contributing Guide](./wiki/贡献指南.md) - how to contribute, development environment setup
- [FAQ](./wiki/常见问题.md) - common questions and solutions
- [Changelog](./wiki/CHANGELOG.md) - version history and change records
## License
MIT License
| text/markdown | morning-start | morning-start <morning-start@foxmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"fake-useragent>=2.2.0",
"loguru>=0.7.3",
"pandas>=2.3.3",
"pydantic>=2.12.5",
"requests>=2.32.5",
"tqdm>=4.67.3",
"urllib3>=2.6.3"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:48:09.400937 | wxmp-1.1.0.tar.gz | 11,774 | 75/f2/b3a2fd4134a22335db78489a477da315873bd29271ff92e918a4f824ed2e/wxmp-1.1.0.tar.gz | source | sdist | null | false | 534d7dda60af652290d37cafaf26a667 | 1cee3161e07051bc709e4e5c1fa6a75a38e8e28eff5dd9715c3ddf336451f697 | 75f2b3a2fd4134a22335db78489a477da315873bd29271ff92e918a4f824ed2e | null | [] | 234 |
2.4 | spectrik | 0.3.0 | A specification and blueprint manager for declarative configuration-as-code tools. | # spectrik
A generic specification and blueprint pattern for declarative configuration-as-code tools.
## Overview
spectrik provides a reusable framework for building tools that apply declarative
configurations to external systems. It includes:
- **Specification** — an abstract base class for defining desired-state resources
- **SpecOp strategies** — `Present`, `Ensure`, and `Absent` wrappers that control
when specs are applied or removed
- **Blueprint** — a named, ordered collection of spec operations
- **Project** — a top-level build target that orchestrates blueprints
- **HCL loading engine** — parse `.hcl` files into blueprints and projects with
decorator-based spec registration
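The core desired-state pattern behind `Specification` and the `Present`/`Absent` strategies can be sketched in a few lines (simplified stand-ins — real spectrik signatures may differ):

```python
from abc import ABC, abstractmethod

class Spec(ABC):
    """Simplified stand-in for spectrik's Specification."""
    @abstractmethod
    def equals(self, ctx) -> bool: ...
    @abstractmethod
    def apply(self, ctx) -> None: ...
    @abstractmethod
    def remove(self, ctx) -> None: ...

class Present:
    """Apply the spec whenever the live state differs from the desired one."""
    def __init__(self, spec: Spec):
        self.spec = spec
    def run(self, ctx) -> None:
        if not self.spec.equals(ctx):
            self.spec.apply(ctx)

class Absent:
    """Remove the resource while it still exists."""
    def __init__(self, spec: Spec):
        self.spec = spec
    def run(self, ctx) -> None:
        if self.spec.equals(ctx):
            self.spec.remove(ctx)
```

Because `Present.run` checks `equals` first, repeated builds are idempotent: a spec already in its desired state is never re-applied.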
## Installation
```bash
pip install spectrik
```
## Quick Start
```python
from pathlib import Path

import spectrik
from spectrik.hcl import load_blueprints, load_projects
# Register specs for HCL block decoding
@spectrik.spec("widget")
class Widget(spectrik.Specification["MyProject"]):
def __init__(self, *, color: str):
self.color = color
def equals(self, ctx):
...
def apply(self, ctx):
...
def remove(self, ctx):
...
# Load from HCL files (MyProject is your Project subclass, defined elsewhere)
blueprints = load_blueprints(Path("hcl"))
projects = load_projects(Path("hcl"), blueprints, project_type=MyProject)
# Build a project
projects["myapp"].build(dry_run=True)
```
## HCL Support
spectrik uses [HCL](https://github.com/hashicorp/hcl) as its configuration
language. Load `.hcl` files into a `Workspace` to define blueprints and
projects:
```python
import os

import spectrik.hcl as hcl
ws = hcl.scan("./configs", project_type=MyProject, context={
"env": os.environ,
"name": "myapp",
"cwd": os.getcwd,
})
ws["myapp"].build()
```
For manual control, parse individual files and feed them to a Workspace:
```python
from pathlib import Path

import spectrik.hcl as hcl
from spectrik import Workspace
ws = Workspace(project_type=MyProject)
ws.load(hcl.load(Path("blueprints.hcl"), context={...}))
ws.load(hcl.load(Path("projects.hcl"), context={...}))
```
### Interpolation
String values in HCL files support `${...}` variable interpolation. Pass a
context dict when loading, and spectrik resolves references after parsing.
```hcl
project "app" {
description = "${name}"
home = "${env.HOME}/.config/${name}"
workdir = "${cwd}/data"
}
```
Dotted references walk the context using attribute or key access, so dicts,
dataclasses, and Pydantic models all work naturally. If a context value is
callable, it is invoked at resolution time — useful for values like
`"cwd": os.getcwd`.
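This resolution logic — dotted lookups, callable invocation, and the `$$` escape covered in the next section — can be sketched in a few lines of plain Python (illustrative only, not spectrik's implementation):

```python
import re

def interpolate(template: str, context: dict) -> str:
    """Resolve ${dotted.path} references; '$$' yields a literal '$'."""
    sentinel = "\x00"
    text = template.replace("$$", sentinel)  # protect escaped dollars first

    def resolve(match: re.Match) -> str:
        value = context
        for part in match.group(1).split("."):
            value = value[part] if isinstance(value, dict) else getattr(value, part)
        return str(value() if callable(value) else value)

    text = re.sub(r"\$\{([^}]+)\}", resolve, text)
    return text.replace(sentinel, "$")
```

Replacing `$$` with a sentinel before the regex pass is what keeps `$${name}` from being treated as a reference.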
### Escaping
Use `$$` to produce a literal `$` in the output. This is needed when HCL
values contain template syntax meant for other tools:
| You write in HCL | Output after interpolation |
| ----------------------- | ------------------------------ |
| `${name}` | Resolved from context |
| `$${name}` | Literal `${name}` |
| `$${{ secrets.TOKEN }}` | Literal `${{ secrets.TOKEN }}` |
For example, to embed a GitHub Actions workflow that mixes spectrik
variables with Actions expressions:
```hcl
blueprint "deploy" {
present "file" {
path = ".github/workflows/deploy.yaml"
content = <<-EOF
name: Deploy
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: echo "Deploying ${app_name}"
env:
TOKEN: $${{ secrets.GITHUB_TOKEN }}
EOF
}
}
```
In this example, `${app_name}` is resolved by spectrik while
`$${{ secrets.GITHUB_TOKEN }}` produces the literal `${{ secrets.GITHUB_TOKEN }}`
that GitHub Actions expects.
## License
MIT
| text/markdown | null | jheddings <jheddings@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"pydantic>=2.12.5",
"python-hcl2>=7.3.1"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T06:47:59.052670 | spectrik-0.3.0-py3-none-any.whl | 10,970 | 9e/83/cb5831d96870288eda6b18dfc3da1f48f49fb9f03192ffa7e775b19c283f/spectrik-0.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | e4bed69ea95b3361cc36f7310d3272a4 | e6392e66acebb4d80e9ee29dce639d0525d388a7cf1b891af89ab94e20718da1 | 9e83cb5831d96870288eda6b18dfc3da1f48f49fb9f03192ffa7e775b19c283f | MIT | [
"LICENSE"
] | 247 |
2.4 | opendraft | 1.7.2 | Generate master-level research papers with real citations in minutes | # OpenDraft Engine
The Python AI engine that powers thesis draft generation. Contains the 19-agent pipeline, citation research, and export functionality.
## Structure
```
engine/
├── draft_generator.py # Main 19-stage pipeline orchestrator
├── config.py # Model settings, API keys, rate limits
├── utils/
│ ├── agent_runner.py # Agent execution engine
│ ├── api_citations/ # Citation APIs (CrossRef, Semantic Scholar)
│ ├── citation_*.py # Citation management & validation
│ ├── export_professional.py # PDF/DOCX export
│ ├── pdf_engines/ # Pandoc, WeasyPrint engines
│ └── deep_research.py # Research phase utilities
├── prompts/
│ ├── 00_WORKFLOW.md # Complete agent workflow
│ ├── 01_research/ # Deep Research, Scout, Scribe, Signal
│ ├── 02_structure/ # Architect, Citation Manager, Formatter
│ ├── 03_compose/ # Crafter, Thread, Narrator
│ ├── 04_validate/ # Skeptic, Verifier, Referee
│ ├── 05_refine/ # Citation Verifier, Voice, Entropy, Polish
│ └── 06_enhance/ # Abstract Generator, Enhancer
└── opendraft/ # CLI tools
```
## Usage
### Run Pipeline Directly
```bash
cd engine
python draft_generator.py --topic "Your research topic" --level master
```
### Academic Levels
| Level | Words | Chapters | Time |
|-------|-------|----------|------|
| research_paper | 3-5k | 3-4 | 5-10 min |
| bachelor | 10-15k | 5-7 | 8-15 min |
| master | 20-30k | 7-10 | 10-25 min |
| phd | 50-80k | 10-15 | 20-40 min |
## Environment Variables
Set in `.env` (project root); only `GEMINI_API_KEY` is required:
```bash
GEMINI_API_KEY=your-key # Required
PROXY_LIST=... # Optional: for faster research
SCOUT_PARALLEL_WORKERS=32 # Optional: parallelism
```
## Dependencies
```bash
pip install -r requirements.txt
```
| text/markdown | Federico De Ponte | null | Federico De Ponte | null | MIT | academic-writing, draft, ai, research, citation-management, llm, pdf-generation, scholarly | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Education",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering",
"Topic :: Text Processing :: Markup :: Markdown",
"Topic :: Text Processing :: Markup :: LaTeX"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"anthropic>=0.20.0",
"openai>=1.0.0",
"google-genai>=1.0.0",
"pybtex>=0.24.0",
"citeproc-py>=0.6.0",
"PyYAML>=6.0.0",
"markdown>=3.5.0",
"weasyprint>=60.0",
"python-docx>=1.0.0",
"requests>=2.31.0",
"beautifulsoup4>=4.12.0",
"lxml>=4.9.0",
"python-dotenv>=1.0.0",
"rich>=13.0.0",
"psutil>=5.9.0",
"pydantic>=2.0.0",
"tenacity>=8.0.0",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"streamlit>=1.28.0; extra == \"docker\"",
"pymupdf>=1.23.0; extra == \"pdf\"",
"requests>=2.31.0; extra == \"audio\"",
"opendraft[audio,dev,docker,pdf]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://opendraft.xyz",
"Documentation, https://github.com/federicodeponte/opendraft/tree/main/docs",
"Repository, https://github.com/federicodeponte/opendraft",
"Changelog, https://github.com/federicodeponte/opendraft/blob/main/CHANGELOG.md",
"Bug Tracker, https://github.com/federicodeponte/opendraft/issues"
] | twine/6.2.0 CPython/3.13.8 | 2026-02-21T06:47:50.224272 | opendraft-1.7.2.tar.gz | 262,506 | 4c/f3/064d6e28cb2e0882a001fb889b25964367067c9588630f5a7c7385c86c2b/opendraft-1.7.2.tar.gz | source | sdist | null | false | 71bc11cc688e6e327a3f5db367abbdf2 | 447c5b4270e5c665c37b5b62fff72d28a5ccbb47947dcf6cf10d2cccdb6ab842 | 4cf3064d6e28cb2e0882a001fb889b25964367067c9588630f5a7c7385c86c2b | null | [] | 249 |
2.4 | nanobanana-cli | 20260221.64737 | CLI tool for generating and editing images using Google's Gemini API or OpenRouter | # Nanobanana
A lightweight CLI tool for generating and editing images using Google's Gemini API or OpenRouter.
> **Best used with your coding agent CLI of choice!** This tool pairs well with [Claude Code](https://claude.ai/claude-code) and similar AI coding assistants for automated image generation workflows.
## Features
- **Text-to-image generation** - Create images from text prompts
- **Image editing** - Transform existing images with text instructions
- **Multi-image composition** - Combine multiple input images
- **Flexible output** - 10 aspect ratios and 3 size options
- **Multiple API backends** - Use Gemini API directly or via OpenRouter
## Requirements
- Python 3.14+
- Google Gemini API key OR OpenRouter API key
## Installation
### Via uv (recommended)
```bash
uv tool install nanobanana-cli
nanobanana install-skill
```
The first command installs the CLI. The second installs the Claude Code skill to `~/.claude/skills/nanobanana/`.
### Via pip
```bash
pip install nanobanana-cli
nanobanana install-skill
```
### From source
```bash
git clone https://github.com/sussdorff/nanobanana.git
cd nanobanana
./install.sh
```
`install.sh` does both steps in one go (CLI + skill).
## Setup
### Option 1: Gemini API (Direct)
1. Get a Gemini API key from [Google AI Studio](https://aistudio.google.com/apikey)
2. Set it as an environment variable:
```bash
export GEMINI_API_KEY="your-api-key-here"
```
### Option 2: OpenRouter
1. Get an API key from [OpenRouter](https://openrouter.ai/keys)
2. Set it as an environment variable:
```bash
export OPENROUTER_API_KEY="your-api-key-here"
```
## Configuration
### Config File (Recommended)
Create a config file at `~/.config/nanobanana/config.json`:
```json
{
  "api": "openrouter",
  "model": "google/gemini-3-pro-image-preview",
  "aspect": "16:9",
  "size": "2K"
}
```
| Field | Description | Values |
|-------|-------------|--------|
| `api` | API backend | `gemini` or `openrouter` |
| `model` | OpenRouter model | e.g., `google/gemini-3-pro-image-preview` |
| `aspect` | Default aspect ratio | `1:1`, `16:9`, etc. |
| `size` | Default image size | `1K`, `2K`, `4K` |
The config file location follows the XDG spec: `$XDG_CONFIG_HOME/nanobanana/config.json`.
### Priority
Settings are resolved in this order (highest to lowest):
1. CLI flags
2. Config file
3. Environment variables (API keys only)
4. Built-in defaults
### Shell Wrapper Example
For 1Password users, a simple wrapper in `.zshrc`:
```bash
nanobanana() {
  if [[ -z "$OPENROUTER_API_KEY" ]]; then
    export OPENROUTER_API_KEY="$(op read 'op://API Keys/OpenRouter/credential')"
  fi
  command nanobanana "$@"
}
```
## Usage
```bash
nanobanana <command> [options] "prompt"
nanobanana [options] "prompt" # defaults to free-form generation
```
### Commands
Each command wraps your prompt in an optimized template with sensible defaults.
**Design Exploration:**
| Command | Description | Default Aspect | Default Size |
|---------|-------------|----------------|--------------|
| `dashboard` | KPI/analytics dashboard mockup | 16:9 | 2K |
| `moodboard` | Website/app moodboard collage | 1:1 | 2K |
| `explore` | Same concept in 4 style variations | 1:1 | 2K |
| `wireframe` | UI wireframe or screen layout | 16:9 | 2K |
**Content Creation:**
| Command | Description | Default Aspect | Default Size |
|---------|-------------|----------------|--------------|
| `slide` | Presentation slide | 16:9 | 2K |
| `social` | Social media post image | 1:1 | 2K |
| `icon` | App icon | 1:1 | 1K |
| `architecture` | System/cloud architecture diagram | 16:9 | 2K |
**Base:**
| Command | Description |
|---------|-------------|
| `generate` | Free-form prompt (explicit version of default) |
| `help` | Show help for all commands or a specific command |
| `version` | Show version |
If the first argument is not a known command, it is treated as a free-form prompt (backwards compatible).
### Options
| Flag | Description | Default |
|------|-------------|---------|
| `-i <file>` | Input image (repeatable for multiple images) | none |
| `-o <file>` | Output filename | `image_YYYYMMDD_HHMMSS.png` |
| `-aspect <ratio>` | Aspect ratio (overrides command default) | `1:1` |
| `-size <size>` | Image size (overrides command default) | `1K` |
| `-model <model>` | OpenRouter model (enables OpenRouter API) | `google/gemini-3-pro-image-preview` |
| `-h` | Show help | - |
| `-version` | Show version | - |
### Supported Aspect Ratios
`1:1`, `2:3`, `3:2`, `3:4`, `4:3`, `4:5`, `5:4`, `9:16`, `16:9`, `21:9`
### Supported Sizes
| Size | Resolution |
|------|------------|
| `1K` | ~1024px |
| `2K` | ~2048px |
| `4K` | ~4096px |
### Supported Image Formats
PNG, JPEG, WebP, GIF
## Examples
### Subcommands
```bash
# Dashboard mockup (16:9, 2K by default)
nanobanana dashboard "SaaS metrics with MRR, churn rate, and user growth"
# Presentation slide
nanobanana slide "Q4 revenue highlights: 40% YoY growth, 3 new enterprise clients"
# App icon
nanobanana icon "podcast app with microphone and sound waves"
# Architecture diagram
nanobanana architecture "microservices with API gateway, 3 services, Redis cache, PostgreSQL"
# Design exploration (4 style variations)
nanobanana explore "landing page hero for a meditation app"
# Moodboard
nanobanana moodboard "fintech app targeting young professionals"
# Wireframe
nanobanana wireframe "settings page with account, notifications, and billing"
# Social media post
nanobanana social "product launch announcement for an AI writing tool"
# Override command defaults
nanobanana dashboard -size 4K "quarterly revenue breakdown"
nanobanana social -aspect 9:16 "instagram story for product launch"
# Get help for a specific command
nanobanana help dashboard
```
### Free-form generation
```bash
# Simple generation (no command = free-form)
nanobanana "a cute cat sitting on a windowsill"
# With aspect ratio and size
nanobanana -aspect 16:9 -size 2K "cinematic mountain landscape at sunset"
# Custom output filename
nanobanana -o hero-image.png "abstract geometric pattern"
```
### Using OpenRouter
```bash
# Use default model (gemini-3-pro-image-preview)
nanobanana -model google/gemini-3-pro-image-preview "a cute cat"
# Use a different model
nanobanana -model google/gemini-2.5-flash-image-preview "a sunset over mountains"
```
### Image editing
```bash
# Style transfer
nanobanana -i photo.jpg "transform into watercolor painting"
# Modifications
nanobanana -i portrait.png "add sunglasses"
```
### Multi-image composition
```bash
# Combine images
nanobanana -i background.png -i subject.png "place the subject in the scene"
# Style reference
nanobanana -i content.jpg -i style.jpg "apply the style to the content image"
```
## Examples Directory
The `examples/` folder contains working examples with generated images:
### basic/
Simple text-to-image generation.
```bash
nanobanana -o basic_example.png "a friendly yellow banana character"
```
### presentation/
Generate presentation slides from text prompts.
```bash
nanobanana -aspect 16:9 -size 2K -o slide_01.png "title slide prompt..."
nanobanana -aspect 16:9 -size 2K -o slide_02.png "content slide prompt..."
```
### branded-presentation/
Use a template image as a style reference for consistent branding across slides.
```bash
# 1. Generate a style template first
nanobanana -aspect 16:9 -size 2K -o template.png "slide template with brand colors..."
# 2. Generate slides using template as reference
nanobanana -i template.png -aspect 16:9 -size 2K -o slide_01.png "title slide..."
nanobanana -i template.png -aspect 16:9 -size 2K -o slide_02.png "content slide..."
```
Each example includes a README and the markdown source used to generate the images. See the `examples/` folder for full prompts and generated outputs.
## Using with Coding Agents
Nanobanana works great with AI coding assistants like Claude Code for automated image generation workflows:
1. Describe slides/images in a markdown file
2. Your coding agent reads the markdown and extracts prompts
3. The agent runs nanobanana to generate each image
See `examples/branded-presentation/` for a complete workflow demonstration.
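The three steps above can be sketched as a small driver script. Everything below is a hypothetical illustration: the "`## Slide`" markdown convention, the helper names, and the dry-run printing are assumptions; only the `slide` subcommand and the `-o` flag come from this README.

```python
import shlex


def extract_prompts(markdown: str) -> list[str]:
    """Collect the body text under each '## ' heading as one prompt (assumed convention)."""
    prompts: list[str] = []
    current: list[str] = []
    for line in markdown.splitlines():
        if line.startswith("## "):
            if current:
                prompts.append(" ".join(current))
            current = []
        elif line.strip() and not line.startswith("#"):
            current.append(line.strip())
    if current:
        prompts.append(" ".join(current))
    return prompts


def build_commands(prompts: list[str]) -> list[str]:
    """Build one nanobanana invocation per prompt (printed here, not executed)."""
    return [
        f"nanobanana slide -o slide_{i:02d}.png {shlex.quote(p)}"
        for i, p in enumerate(prompts, start=1)
    ]


doc = """# Launch deck
## Slide 1
Q4 revenue highlights: 40% YoY growth
## Slide 2
Roadmap for next year
"""

for cmd in build_commands(extract_prompts(doc)):
    print(cmd)
```

A coding agent would then run each printed command (or call `subprocess.run` directly) to generate the images.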
## API Pricing
### Gemini API (Direct)
Uses `gemini-3-pro-image-preview` model. Approximate costs:
| Size | Cost per Image |
|------|----------------|
| 1K-2K | ~$0.13 |
| 4K | ~$0.24 |
See [Gemini API Pricing](https://ai.google.dev/gemini-api/docs/pricing) for current rates.
### OpenRouter
Pricing varies by model. See [OpenRouter Pricing](https://openrouter.ai/models) for current rates.
## Development
```bash
# Run tests
uv run pytest -v
# Run the CLI locally
uv run nanobanana -version
uv run nanobanana -h
```
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [
"google-genai>=1.0.0",
"httpx>=0.27.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T06:47:41.851599 | nanobanana_cli-20260221.64737.tar.gz | 502,490 | 85/37/c80925847f12f61a0acb916c66ab113f91450090d8a66f4ec649667e8b14/nanobanana_cli-20260221.64737.tar.gz | source | sdist | null | false | 479923a824851e42f3b67e470d7a0d87 | a34d11d2a1e98df782a3efe99eee3a9d68833aa6321b94c0b56766d46124aecd | 8537c80925847f12f61a0acb916c66ab113f91450090d8a66f4ec649667e8b14 | MIT | [] | 240 |
2.4 | ferp | 0.10.1 | Terminal-first file navigator and FSCP automation workbench built with Textual. | <p align="center">
<img src="https://raw.githubusercontent.com/zappbrandigan/ferp/refs/heads/main/ferp/resources/ferp-logo.png">
</p>
<p align="center" style="display:flex; justify-content:center; gap:6px; flex-wrap:wrap;">
<img alt="Release" src="https://img.shields.io/pypi/v/ferp?label=release&style=for-the-badge&color=olive">
<img alt="Python" src="https://img.shields.io/pypi/pyversions/ferp?style=for-the-badge">
<img alt="License" src="https://img.shields.io/pypi/l/ferp?style=for-the-badge">
<img alt="CI" src="https://img.shields.io/github/actions/workflow/status/zappbrandigan/ferp/publish.yml?style=for-the-badge">
<img alt="Status" src="https://img.shields.io/badge/scope-internal-orange?style=for-the-badge">
</p>
---
## About
**FERP** is a terminal-friendly file manager and automation workbench. It combines an interactive file navigator and a protocol-driven script runner so you can explore directories and execute repeatable workflows through a TUI—without requiring terminal knowledge.
## Highlights
- **Keyboard-first navigation**
  - A full list of keys is available in the app.
- **Context panes**
  - Script list reads from the user config `config.json` (platformdirs).
  - Output panel streams FSCP results and records transcripts under the user data `logs` directory.
  - README modal (Enter on a script) displays bundled documentation.
- **Visual mode (multi-select)**
  - Select multiple items in the file navigator, including range selection.
  - Copy, move, paste, and delete selected files or folders without running scripts.
  - Scripts/output panels are disabled while visual mode is active.
- **Managed script runtime**
  - Scripts execute via the FSCP host ↔ script protocol.
  - Interactive prompts, confirmations, progress, and structured results are supported.
  - Logs are timestamped and automatically pruned (default 50 files / 14 days).
## Quick Start
```bash
pipx install ferp
```
> [!NOTE]
> To use the default scripts, open the command palette (`Ctrl+P`) and select **Install/Update Default Scripts**.
> [!WARNING]
> This option is intended for specific users. It will remove any existing scripts you have installed.
>
> If you prefer to install scripts individually, or to use your own custom scripts, see [FSCP](./ferp/fscp).
## Configuring Scripts
Scripts are declared in your user config `config.json` (created on first script install). Each entry defines:
- `script`: path to the executable (e.g. `scripts/ferp.zip_dir/script.py`).
- `target`: `current_directory`, `highlighted_file`, or `highlighted_directory`.
- `file_extensions`: optional list of suffixes (for `highlighted_file` targets).
- Optional README at `scripts/<id>/readme.md`.
Each script lives under `scripts/<id>/` (the directory name matches the fully-qualified ID, such as `ferp.zip_dir`). Inside the directory:
- `script.py` contains the executable FSCP script.
- `readme.md` provides the optional documentation shown inside FERP.
Namespaces are only required for bundled/default scripts and for the repo dev setup. Custom scripts can stay in the single user `config.json` without a namespace.
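Putting the documented fields together, a hypothetical script entry might look like the sketch below. Only the field names (`script`, `target`, `file_extensions`) and the `ferp.zip_dir` ID come from this README; the surrounding structure, the keying by script ID, and the `acme.report` entry are illustrative assumptions.

```json
{
  "ferp.zip_dir": {
    "script": "scripts/ferp.zip_dir/script.py",
    "target": "current_directory"
  },
  "acme.report": {
    "script": "scripts/acme.report/script.py",
    "target": "highlighted_file",
    "file_extensions": [".pdf", ".csv"]
  }
}
```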
### Dev toggle for script config
During development you can point FERP at the repo copy of `ferp/scripts/config.json` instead of the user config file:
```bash
FERP_DEV_CONFIG=1 textual run --dev ferp/app.py
```
When enabled, FERP reads the repo configs directly: `ferp/scripts/config.json` plus any `ferp/scripts/<namespace>/config.json` files, and skips the one-time copy into the user config directory.
Script update notifications are suppressed while `FERP_DEV_CONFIG=1` is set.
Log entries that scripts emit at `debug` level are skipped by default. You can enable these logs by adding the debug flag:
```bash
FERP_DEV_CONFIG=1 FERP_SCRIPT_LOG_LEVEL=debug textual run --dev ferp/app.py
```
## Configuration
FERP uses two configuration layers:
- **Runtime config (env)**: `FERP_*` environment variables (preferred for automation).
- **User settings (file)**: `settings.json` in the user config directory (managed by the app).
Precedence order: **env -> settings.json -> defaults**.
Runtime env vars:
- `FERP_DEV_CONFIG=1` to read script configs from the repo during development.
- `FERP_SCRIPT_LOG_LEVEL=debug` to include debug logs from FSCP scripts.
You can inspect the resolved configuration with:
```bash
ferp print-config
```
Settings are versioned with `schemaVersion` and automatically normalized on load.
If an upgrade is detected, a backup is created next to `settings.json` with a
`settings.bak-YYYYMMDDHHMMSS.json` suffix.
Logs are separated by type under the user data directory:
- `logs/host/host.log` for UI/host application logs.
- `logs/scripts/` for per-script transcript logs.
### Error Codes
When operations fail, FERP emits structured error codes for easier support and
troubleshooting. Example codes you may see:
- `release_metadata_failed`, `release_asset_download_failed`
- `namespaces_missing_list`, `namespaces_missing_namespace`
- `monday_api_error`, `monday_board_not_found`
## Authoring FSCP Scripts
Python scripts executed from FERP speak the [FSCP](./ferp/fscp) protocol. See
`SCRIPT_AUTHORS.md` for the SDK guide, examples, logging, cancellation, cleanup,
and packaging details.
## Terminal Commands
FERP opens your system terminal in the current directory (shown in the top bar).
- Open a terminal using `Ctrl+t`.
- The spawned terminal inherits the current working directory.
- On Windows systems, PowerShell is preferred, with Command Prompt as the fallback.
## Task List
FERP includes a lightweight task list for quick capture and review.
- Press `t` to add a task from anywhere in the UI.
- Press `l` to open the task list and review or mark tasks as complete.
- Tag tasks with `@` for text highlighting and filtering.
- Toggle completion status with the space bar.
- The task status indicator updates automatically as tasks are completed.
## Other Features
- **Default script updates**: Pull the latest default scripts from the release feed (suppressed in dev mode).
- **Process list**: View and stop running scripts from the command palette.
- **Tasks**: Capture quick tasks and review them in the task list.
- **Themes**: Switch themes from the command palette.
- **Startup directory**: Set the default path Ferp opens on launch.
- **Logs**: Open the latest transcript log from the command palette.
| text/markdown | Brandon Johnson | null | null | null | MIT | null | [
"Environment :: Console",
"License :: OSI Approved :: MIT License",
"Topic :: System :: Shells",
"Topic :: Utilities",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"jsonschema>=4.21",
"watchdog>=4.0",
"textual>=7.4.0",
"platformdirs>=4.2",
"requests>=2.32",
"pydantic>=2.12.5",
"pydantic-settings>=2.13.1",
"pypdf>=6.6; extra == \"dev\"",
"extract-msg>=0.55.0; extra == \"dev\"",
"openpyxl>=3.1; extra == \"dev\"",
"googletrans==4.0.2; extra == \"dev\"",
"Unidecode>=1.3.8; extra == \"dev\"",
"pywin32>=306; platform_system == \"Windows\" and extra == \"dev\"",
"pdfplumber>=0.11.8; extra == \"dev\"",
"reportlab>=4.4.9; extra == \"dev\"",
"py7zr>=1.1.0; extra == \"dev\"",
"pytest>=8.2; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"textual-dev>=1.8.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/zappbrandigan/ferp",
"Documentation, https://github.com/zappbrandigan/ferp#readme",
"Issues, https://github.com/zappbrandigan/ferp/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:47:24.368445 | ferp-0.10.1.tar.gz | 104,794 | 37/ab/f796aa754cd7d1242b1f687d72aa6317de2fe1b26136f0c3891d85f5b756/ferp-0.10.1.tar.gz | source | sdist | null | false | f09f4c91c76151d330944416cb58d282 | a1f6dfef1ca6c0382e3cb56a7b39ae103d98a10084eec103fc3948e345410abd | 37abf796aa754cd7d1242b1f687d72aa6317de2fe1b26136f0c3891d85f5b756 | null | [
"LICENSE"
] | 250 |
2.4 | orbs-cli | 0.2.0 | Orbs - an automation framework for Web, Mobile (Appium), and API testing designed to grow with your team | <p align="center">
<img src="assets/orbs.png" width="120" />
</p>
<h1 align="center">Orbs</h1>
<p align="center">
Automation framework that grows with your team
</p>
---
## What is Orbs?
**Orbs** is an automation framework for **Web, Mobile (Appium), and API testing** designed to **grow with your team**.
Orbs supports different levels of automation maturity:
* Junior QA engineers can start with visual tools, record-and-playback, reusable keywords, and Studio-based workflows
* Senior engineers can work directly with code, CLI, and CI/CD pipelines without restrictions or license lock-in
Both approaches share the same execution engine and project structure, allowing teams to evolve their automation practices **without rewriting tests or migrating frameworks**.
---
## Philosophy
### 1. Tests are software, not scripts
Automation code should be designed, structured, reviewed, and evolved like production code — not copied scripts glued together over time.
### 2. Explicit is better than implicit
If something runs, it should be obvious:
* what is executed
* from where
* with which configuration
No silent defaults. No hidden behavior.
### 3. Structure before scale
Orbs enforces structure early so teams don’t pay technical debt later. Scaling automation should feel predictable, not painful.
### 4. One core, many interfaces
The same execution engine can be accessed via:
* CLI
* REST API
* Orbs Studio (GUI)
Different entry points, same behavior.
### 5. Tooling should assist, not hide reality
Generators, runners, and spy tools exist to accelerate work — not to obscure how automation actually works.
---
## Table of Contents
* [Core Capabilities](#core-capabilities)
* [Quick Start](#quick-start)
* [CLI Overview](#cli-overview)
* [Spy](#spy)
* [Project Structure](#project-structure)
* [Configuration](#configuration)
* [Documentation](#documentation)
* [Contributing](#contributing)
* [License](#license)
---
## Core Capabilities
* 📦 Project scaffolding with `orbs init`
* 🧱 Clear project structure for large test suites
* ⟳ Test suite, test case, feature, and step generation
* ▶️ Unified runner for `.feature`, `.yml`, and `.py`
* 🌐 REST API server for listing and scheduling executions
* 🕵️ Web & Mobile Spy for element inspection
* ⚙️ Typer-powered CLI
* 🧩 Extensible hooks and listeners
---
## Quick Start
```bash
pip install orbs-cli
orbs setup android
orbs init myproject
cd myproject
orbs create-feature login
orbs implement-feature login
orbs run features/login.feature
```
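Since Orbs runs `.feature` files through behave, `features/login.feature` would contain standard Gherkin. A minimal illustrative sketch (the actual scaffold output and step wording may differ):

```gherkin
Feature: Login

  Scenario: Successful login with valid credentials
    Given the login page is open
    When the user submits valid credentials
    Then the dashboard is displayed
```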
---
## CLI Overview
```bash
orbs setup android
orbs init <project>
orbs create-testsuite <name>
orbs create-testcase <name>
orbs create-feature <name>
orbs implement-feature <name>
orbs run <target>
orbs serve [--port <port>]
orbs spy
```
---
## Spy
Orbs provides an interactive **Web & Mobile Spy** for inspecting elements and capturing locators.
```bash
orbs spy --web --url=https://example.com
orbs spy --mobile
```
📖 Full Spy documentation: [docs/spy.md](https://github.com/badrusalam11/orbs-cli/blob/main/docs/spy.md)
---
## Project Structure
```text
myproject/
├── features/
├── steps/
├── testcases/
├── testsuites/
├── listeners/
├── settings/
└── .env
```
---
## Configuration
Environment variables and properties are defined explicitly using `.env` and `settings/*.properties`.
```env
APP_PORT=5006
SERVER_URL=http://localhost:5006
```
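Property files follow the usual `key=value` format. A hypothetical `settings/staging.properties` (all keys here are illustrative, not Orbs defaults):

```properties
base.url=https://staging.example.com
browser=chrome
implicit.wait=10
```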
📖 Full configuration guide: [docs/configuration.md](https://github.com/badrusalam11/orbs-cli/blob/main/docs/configuration.md)
---
## Documentation
Detailed documentation is available under the `docs/` directory:
* [Philosophy & Concepts](https://github.com/badrusalam11/orbs-cli/blob/main/docs/philosophy.md) - Framework principles and maturity levels
* [CLI Reference](https://github.com/badrusalam11/orbs-cli/blob/main/docs/cli-reference.md) - Complete command documentation
* [Web Testing](https://github.com/badrusalam11/orbs-cli/blob/main/docs/web-testing.md) - Browser automation guide
* [Mobile Testing](https://github.com/badrusalam11/orbs-cli/blob/main/docs/mobile-testing.md) - Android testing with Appium
* [API Testing](https://github.com/badrusalam11/orbs-cli/blob/main/docs/api-testing.md) - REST API testing guide
* [Spy Tool](https://github.com/badrusalam11/orbs-cli/blob/main/docs/spy.md) - Element inspection and capture
* [Architecture](https://github.com/badrusalam11/orbs-cli/blob/main/docs/architecture.md) - Technical design and patterns
**Start here:** [docs/philosophy.md](https://github.com/badrusalam11/orbs-cli/blob/main/docs/philosophy.md)
---
## Contributing
Contributions are welcome.
Please ensure:
* Templates and CLI commands are updated
* Documentation reflects behavior changes
---
## License
Licensed under the Apache License, Version 2.0.
See the [LICENSE](https://github.com/badrusalam11/orbs-cli/blob/main/LICENSE) file for details.
---
## Contact
Built & maintained by **Muhamad Badru Salam** - QA Engineer (SDET)
* Repository: [https://github.com/badrusalam11/orbs-cli](https://github.com/badrusalam11/orbs-cli)
* PyPI: [https://pypi.org/project/orbs-cli](https://pypi.org/project/orbs-cli)
* GitHub: [https://github.com/badrusalam11](https://github.com/badrusalam11)
* LinkedIn: [https://www.linkedin.com/in/muhamad-badru-salam/](https://www.linkedin.com/in/muhamad-badru-salam/)
* Email: [muhamadbadrusalam760@gmail.com](mailto:muhamadbadrusalam760@gmail.com)
| text/markdown | null | Muhamad Badru Salam <badrusalam760@gmail.com> | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"typer[all]",
"flask",
"pyyaml",
"apscheduler",
"jinja2",
"requests",
"python-dotenv",
"Appium-Python-Client",
"selenium",
"behave",
"reportlab",
"InquirerPy"
] | [] | [] | [] | [
"Homepage, https://github.com/badrusalam11/orbs-cli",
"Repository, https://github.com/badrusalam11/orbs-cli",
"Issues, https://github.com/badrusalam11/orbs-cli/issues"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-21T06:44:53.077627 | orbs_cli-0.2.0.tar.gz | 77,243 | 67/bb/1507008f4ae0a2173ea642ce00631cb9f5aa9d2954dceb1bab4fd7fad3bd/orbs_cli-0.2.0.tar.gz | source | sdist | null | false | d2288c8c049116ceb8c2f36c070d0d0b | e6bb4a75bcd0797a3290f0a36c2d119edfa13b226b6dd803e27a56d69eb8a850 | 67bb1507008f4ae0a2173ea642ce00631cb9f5aa9d2954dceb1bab4fd7fad3bd | null | [
"LICENSE"
] | 247 |
2.4 | aiontfy | 0.8.0 | Async ntfy client library | # aiontfy
Asynchronous client library for the [ntfy](https://ntfy.sh/) pub-sub notification service
[](https://github.com/tr4nt0r/aiontfy/actions)
[](https://codecov.io/gh/tr4nt0r/aiontfy)
[](https://badge.fury.io/py/aiontfy)

[](https://www.buymeacoffee.com/tr4nt0r)
[](https://github.com/sponsors/tr4nt0r)
---
## 📖 Documentation
- **Full Documentation**: [https://tr4nt0r.github.io/aiontfy](https://tr4nt0r.github.io/aiontfy)
- **Source Code**: [https://github.com/tr4nt0r/aiontfy](https://github.com/tr4nt0r/aiontfy)
---
## 📦 Installation
You can install aiontfy via pip:
```sh
pip install aiontfy
```
---
## 🚀 Usage
### Basic Examples
```python
"""Publish to a ntfy topic."""

import asyncio

from aiohttp import ClientSession

from aiontfy import Message, Ntfy


async def main() -> None:
    async with ClientSession() as session:
        ntfy = Ntfy("https://ntfy.sh", session)

        message = Message(
            topic="aiontfy",
            title="Hello",
            message="World",
            click="https://example.com/",
            delay="10s",
            priority=3,
            tags=["octopus"],
        )
        print(await ntfy.publish(message))


asyncio.run(main())
```
```python
"""Subscribe to ntfy topics."""

import asyncio

from aiohttp import ClientSession

from aiontfy import Event, Notification, Ntfy


def callback(message: Notification) -> None:
    """Process notifications callback function."""
    if message.event is Event.MESSAGE:
        print(message.to_dict())


async def main() -> None:
    async with ClientSession() as session:
        ntfy = Ntfy("https://ntfy.sh", session)

        await ntfy.subscribe(
            ["aiontfy", "test"],  # Subscribe to multiple topics
            callback,
            priority=[3, 4, 5],  # Only subscribe to priority >= 3
        )


asyncio.run(main())
```
For more advanced usage, refer to the [documentation](https://tr4nt0r.github.io/aiontfy).
---
## 🛠 Contributing
Contributions are welcome! To contribute:
1. Fork the repository.
2. Create a new branch.
3. Make your changes and commit them.
4. Submit a pull request.
Make sure to follow the [contributing guidelines](CONTRIBUTING.md).
---
## 📜 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
---
## ❤️ Support
If you find this project useful, consider [buying me a coffee ☕](https://www.buymeacoffee.com/tr4nt0r) or [sponsoring me on GitHub](https://github.com/sponsors/tr4nt0r)!
| text/markdown | null | Manfred Dennerlein Rodelo <manfred@dennerlein.name> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp~=3.11",
"mashumaro~=3.13",
"orjson~=3.10"
] | [] | [] | [] | [
"Documentation, https://tr4nt0r.github.io/aiontfy/",
"Source, https://github.com/tr4nt0r/aiontfy"
] | Hatch/1.16.3 cpython/3.14.2 HTTPX/0.28.1 | 2026-02-21T06:44:04.351120 | aiontfy-0.8.0.tar.gz | 26,602 | e1/32/38571454b0d30b214176b14e09ab5273e0d2a7d0e7a71a48418c0abf23ed/aiontfy-0.8.0.tar.gz | source | sdist | null | false | 09c09264e559f0145a7de5c537dfb521 | 4c6a9edb2ffef42fe4a55c00a90bc4c40af73902d6df744a3ed60dd874586743 | e13238571454b0d30b214176b14e09ab5273e0d2a7d0e7a71a48418c0abf23ed | MIT | [
"LICENSE"
] | 538 |
2.4 | seonbi | 0.1.0a5 | Korean typographic adjustment processor | # seonbi (Python binding)
<p align="center">
<img src="https://raw.githubusercontent.com/moreal/seonbi-rs/main/assets/logo.svg" alt="seonbi logo" width="180">
</p>
Python binding for `seonbi-rs` using `PyO3` and `maturin`.
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | LGPL-2.1 | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Text Processing",
"Topic :: Text Processing :: Linguistic"
] | [] | null | null | >=3.14 | [] | [] | [] | [] | [] | [] | [] | [] | maturin/1.12.3 | 2026-02-21T06:42:50.459267 | seonbi-0.1.0a5-cp314-abi3-win_amd64.whl | 388,497 | af/dc/3ee7f37789a8f8bfa5d9050ef88bbcfbe3ae6155a083500d53f7131fed70/seonbi-0.1.0a5-cp314-abi3-win_amd64.whl | cp314 | bdist_wheel | null | false | 1debfe7f4d3489c455f812c98e46d5a7 | 1e75a4587f767a917ffdd47b9189c3431dcadf41804fddff6aefc99318ef2fe4 | afdc3ee7f37789a8f8bfa5d9050ef88bbcfbe3ae6155a083500d53f7131fed70 | null | [] | 265 |
2.4 | oncecheck | 0.7.3 | A terminal-first CLI tool that scans iOS, Android, and Web projects for launch-critical compliance risks. | # Oncecheck CLI
A terminal-first CLI tool that scans iOS, Android, and Web projects for launch-critical compliance risks.
## Features
- **73+ compliance rules** across iOS, Android, Web, and cross-platform categories
- **Auto-detection** — identifies your project type automatically
- **Interactive browser** — arrow-key driven UI to explore findings
- **Multiple export formats** — JSON, SARIF 2.1, plain text
- **CI/CD ready** — exit codes: 0 (pass), 1 (warnings), 2 (failures)
- **Cross-platform checks** — COPPA, HIPAA, PCI-DSS, accessibility, supply chain
- **Rule suppression** — `.oncecheckignore` file and `.oncecheckrc` config
- **Shell completions** — bash, zsh, and fish
## Rule Categories
| Platform | Rules | Covers |
|----------|-------|--------|
| iOS | 20 | Info.plist, ATS, entitlements, privacy manifests, HealthKit, Keychain |
| Android | 18 | AndroidManifest, target SDK, permissions, ProGuard, Play Integrity |
| Web | 27 | CSP, CORS, OWASP Top 10, accessibility, privacy, cookies |
| Common | 8 | COPPA, HIPAA, PCI-DSS, color contrast, data retention, supply chain |
## Installation
```bash
pip install oncecheck
```
### Development
```bash
git clone <repo-url>
cd oncecheck-cli
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
```
## Usage
### Interactive mode
```bash
oncecheck
```
Launches the full interactive UI with welcome screen, menu navigation, and findings browser.
### Direct scan
```bash
# Auto-detect platform
oncecheck scan ./my-project
# Force a specific platform
oncecheck scan ./my-project --platform ios
# Show explicit scan pipeline step states (sync/auth/quota/detect/scan/report)
oncecheck scan ./my-project
# Live-sync official policy pages before scanning
oncecheck scan ./my-project --policy-sync auto
# Advanced engine profile and strict compiler-grade requirement
oncecheck scan ./my-project --analysis-mode hybrid --advanced-profile codeql-first --require-compiler-engine
# Filter low-confidence findings
oncecheck scan ./my-project --min-confidence 0.80
# Hard-fail scan if policy sources are stale/missing
oncecheck scan ./my-project --require-fresh-policies
# Export as JSON (Team plan)
oncecheck scan ./my-project --format json --output results.json
# Export as SARIF (Team plan — for GitHub/VS Code)
oncecheck scan ./my-project --format sarif --output results.sarif
# Interactive findings browser
oncecheck scan ./my-project --interactive
# Fail CI on warnings or higher
oncecheck scan ./my-project --fail-on WARN
```
### Policy Source Sync
```bash
# Sync official policy sources (Apple/Google/OWASP/W3C)
oncecheck rules sync
# Force a full refresh of all policy source snapshots
oncecheck rules sync --force
# Check freshness status
oncecheck rules status
# Show rule impact from recent policy source updates
oncecheck rules impact --within-days 7
# List rules (filter by platform/severity/tier)
oncecheck rules list --platform ios --severity FAIL --tier starter
# Inspect one rule in detail
oncecheck rules show IOS-PRIV-001
# Run rule + policy health checks (CI-friendly)
oncecheck rules doctor
```
### Benchmark Quality Gates
```bash
# Score predictions vs truth labels
oncecheck benchmark score --predictions results.json --truth truth.json
# Enforce minimum quality thresholds for CI (exit 2 on failure)
oncecheck benchmark gate --predictions results.json --truth truth.json \
  --min-precision 0.80 --min-recall 0.80 --min-f1 0.80 --min-tp 1
# Run calibrated multi-suite gates (OWASP/Juliet slices)
oncecheck benchmark gate-config --config benchmarks/ci/gate_config.json
# Generate a starter truth template
oncecheck benchmark template --suite owasp-benchmark --output truth-template.json
```
### Advanced Engines
```bash
# Show CodeQL/Semgrep availability and detected project languages
oncecheck engines .
```
### Suppressions with Justification
```bash
# Add suppression with required reason
oncecheck suppress add WEB-OWASP-003 --path . --reason "Legacy markdown renderer pending replacement"
# List suppressions and recorded reasons
oncecheck suppress list --path .
```
### Authentication
```bash
# Sign in (opens browser)
oncecheck login
# Check status
oncecheck status
# Sign out
oncecheck logout
```
### Configuration
```bash
# Generate config files in your project
oncecheck init ./my-project
```
This creates:
- `.oncecheckrc` — YAML config for disabled rules, severity overrides, and fail threshold
- `.oncecheckignore` — one rule ID per line to suppress
Example `.oncecheckrc`:
```yaml
disabled_rules:
  - IOS-SEC-001
  - WEB-OWASP-003
severity_overrides:
  WEB-A11Y-001: INFO
fail_on: FAIL
```
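For comparison, a `.oncecheckignore` suppressing the same two rules simply lists the rule IDs, one per line:

```text
IOS-SEC-001
WEB-OWASP-003
```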
### Shell Completions
```bash
# Bash
oncecheck completions bash >> ~/.bashrc
# Zsh
oncecheck completions zsh >> ~/.zshrc
# Fish
oncecheck completions fish > ~/.config/fish/completions/oncecheck.fish
```
## CI/CD Integration
```yaml
# GitHub Actions example
- name: Compliance scan
  run: |
    pip install oncecheck
    oncecheck login
    oncecheck scan . --fail-on WARN
- name: Upload SARIF (Team plan)
  run: oncecheck scan . --format sarif --output results.sarif
```
### Exit Codes
| Code | Meaning |
|------|---------|
| 0 | No issues (or only INFO) |
| 1 | Warnings found |
| 2 | Failures found |
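In a script, the table above amounts to "worst severity wins". A small sketch of that mapping (an assumption drawn from the table, not oncecheck source):

```python
def exit_code(severities):
    """Map the worst finding severity to the documented exit codes."""
    if "FAIL" in severities:
        return 2  # failures found
    if "WARN" in severities:
        return 1  # warnings found
    return 0      # clean, or only INFO

assert exit_code(["INFO"]) == 0
assert exit_code(["INFO", "WARN"]) == 1
assert exit_code(["WARN", "FAIL"]) == 2
```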
## Plans
| Feature | Starter (Free) | Team ($19/mo or $190/yr) |
|---------|----------------|--------------------------|
| Compliance rules | 35 | All 73+ |
| Scans per day | 3 | Unlimited |
| Terminal output | Yes | Yes |
| JSON/text export | — | Yes |
| SARIF export | — | Yes |
| File export (`--output`) | — | Yes |
| Priority support | — | Yes |
## Testing
```bash
python -m pytest tests/ -v
```
## License
MIT
| text/markdown | null | Oncecheck <hello@oncecheck.com> | null | null | null | compliance, scanner, ios, android, web, app-store, play-store, owasp, privacy, accessibility | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.1",
"rich>=13.0",
"pyyaml>=6.0",
"readchar>=4.0",
"pytest>=8.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://oncecheck.com",
"Documentation, https://oncecheck.com",
"Repository, https://github.com/oncecheck/oncecheck-cli",
"Issues, https://github.com/oncecheck/oncecheck-cli/issues"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-21T06:42:28.882805 | oncecheck-0.7.3.tar.gz | 102,077 | 5b/1e/a0cb7b4d1a0409c34174a2d7de213264846157be7dcb3d3a29732dc1b8d2/oncecheck-0.7.3.tar.gz | source | sdist | null | false | cea19706a8c8306d41b7127a6c129713 | 412fc4270bf79c226926e3dbc05afd10960a632d3ba3d8477a0aaa3ff4a57060 | 5b1ea0cb7b4d1a0409c34174a2d7de213264846157be7dcb3d3a29732dc1b8d2 | MIT | [
"LICENSE"
] | 249 |
2.4 | claude-worktree | 0.10.48 | CLI tool integrating git worktree with Claude Code for seamless feature development workflows | # Claude Worktree
> Work on multiple git branches simultaneously with isolated AI coding sessions
[](https://github.com/DaveDev42/claude-worktree/actions)
[](https://pypi.org/project/claude-worktree/)
[](https://pypi.org/project/claude-worktree/)
[](https://opensource.org/licenses/BSD-3-Clause)
## What is this?
**claude-worktree** (command: `cw`) helps you work on multiple features at the same time by creating separate directories for each branch. No more switching branches, stashing changes, or losing context.
Each feature gets:
- ✅ Its own directory (git worktree)
- ✅ Its own AI coding session (Claude Code, Codex, Happy, or custom)
- ✅ Zero interference with other work
Perfect for developers who want to:
- Work on multiple features in parallel
- Keep AI conversation context for each feature
- Never lose work when switching tasks
- Cleanly merge features without conflicts
## Why Use This?
### No More Branch Switching Chaos
**Before claude-worktree:**
```bash
# Working on feature-api
git add .
git stash # Save current work
git checkout main
git checkout -b fix-urgent-bug
# Fix bug, commit
git checkout feature-api
git stash pop # Hope nothing conflicts
# Where was I?
```
**With claude-worktree:**
```bash
cw new fix-urgent-bug
# Fix bug in separate directory
cw merge fix-urgent-bug --push
# Return to feature-api - it's untouched
```
Each feature stays isolated, AI context is preserved, and switching is instant.
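Under the hood this builds on plain `git worktree`. A rough sketch of what `cw new fix-login-bug` automates, using a scratch repo purely for illustration:

```shell
# Plain-git sketch of what `cw new` automates (scratch repo for illustration)
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
git worktree add ../demo-fix-login-bug -b fix-login-bug  # new branch + directory
git worktree list                                        # both checkouts listed
```

On top of these primitives, `cw` adds AI-session launch, config copying, and cleanup after merge.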
### Lightning-Fast Workflow with Shell Completion
Tab completion makes everything faster:
```bash
cw <TAB> # All commands appear
cw new --<TAB> # All options for 'new' command
cw resume <TAB> # Your branch names
cw-cd <TAB> # Jump to any worktree instantly
```
No more typing long commands or remembering branch names - just type and press Tab.
## Quick Start
### Install
```bash
# Using uv (recommended)
uv tool install claude-worktree
# Or using pip
pip install claude-worktree
```
### Basic Usage
```bash
# 1. Create a new feature worktree
cw new fix-login-bug
# This creates:
# - A new branch: fix-login-bug
# - A new directory: ../myproject-fix-login-bug/
# - Launches Claude Code in that directory
# 2. Work on your feature
# (AI helps you, you commit changes, etc.)
# 3. When done, create a PR
cw pr
# Or merge directly (for solo projects)
cw merge --push
```
That's it! You've just created an isolated workspace with AI assistance, worked on your feature, and merged it back.
## Key Features
### Essential Commands
| Command | What it does |
|---------|-------------|
| `cw new <name>` | Create new feature worktree + launch AI |
| `cw new <name> --term i-t` | Create worktree, launch AI in iTerm tab |
| `cw list` | Show all your worktrees |
| `cw resume [branch]` | Resume AI session in a worktree |
| `cw pr` | Create GitHub pull request |
| `cw merge` | Merge to base branch and cleanup |
| `cw delete <name>` | Remove a worktree |
### Shell Completion & Navigation
**Enable tab completion for faster workflow:**
```bash
# Install completion (bash/zsh/fish/PowerShell)
cw --install-completion
# Restart your shell, then enjoy:
cw <TAB> # Shows available commands
cw new --<TAB> # Shows available options
cw resume <TAB> # Shows branch names
```
**Windows PowerShell users:**
```powershell
# Install completion for PowerShell
cw --install-completion powershell
# Restart PowerShell, then use tab completion:
cw <TAB> # Shows available commands
cw resume <TAB> # Shows branch names
```
**Quick navigation between worktrees:**
```bash
# Interactive setup (recommended):
cw shell-setup
# Or install manually:
# bash/zsh: Add to ~/.bashrc or ~/.zshrc
source <(cw _shell-function bash)
# fish: Add to ~/.config/fish/config.fish
cw _shell-function fish | source
# PowerShell: Add to $PROFILE
cw _shell-function powershell | Invoke-Expression
# Then use:
cw-cd feature-api # Jump to any worktree instantly
cw-cd <TAB> # Tab completion works!
```
## Example Workflow
### Scenario: Working on multiple features
```bash
# Start 3 features at once
cw new feature-api
cw new fix-bug-123
cw new refactor-db
# Check what you have
cw list
# BRANCH        STATUS     PATH
# main          clean      .
# feature-api   active     ../myproject-feature-api
# fix-bug-123   modified   ../myproject-fix-bug-123
# refactor-db   clean      ../myproject-refactor-db
# Resume work on a specific feature
cw resume fix-bug-123
# Complete features as they're done
cw pr feature-api # Create PR
cw merge fix-bug-123 --push # Direct merge
```
### Scenario: Team collaboration
```bash
# Create feature and share
cw new team-feature
git push -u origin team-feature
# Stay in sync with team
cw sync team-feature
# Compare before merging
cw diff main team-feature --summary
# Create PR for review
cw pr --title "Add awesome feature"
```
## Configuration
### AI Tool Selection
By default, `cw` launches Claude Code. You can easily change this:
```bash
# Use a preset
cw config use-preset claude # Claude Code (default)
cw config use-preset happy # Happy (mobile Claude)
cw config use-preset codex # OpenAI Codex
cw config use-preset no-op # Skip AI launch
# Or set custom tool
cw config set ai-tool "your-ai-tool"
# List available presets
cw config list-presets
```
### Auto-Copy Files
Automatically copy project-specific files (like `.env`) to new worktrees:
```bash
# Add files to copy list
cw config copy-files add .env
cw config copy-files add .env.local
cw config copy-files add config/local.json
# List configured files
cw config copy-files list
# Remove a file from the list
cw config copy-files remove .env
```
**Note:** Dependencies like `node_modules` and `.venv` are automatically symlinked (not copied) to save disk space.
For detailed configuration options (Happy setup, auto-updates, export/import, etc.), see **[Configuration Guide](docs/configuration.md)**.
## More Features
**Maintenance & Cleanup:** `cw clean`, `cw sync`, `cw doctor`
**Analysis:** `cw tree`, `cw stats`, `cw diff`
**Backup & Restore:** `cw backup create/restore`
**Stash Management:** `cw stash save/apply`
See **[Advanced Features Guide](docs/advanced-features.md)** for details.
## Command Reference
For the complete command reference with all options, see **[Commands Documentation](docs/commands.md)** or run:
```bash
cw --help
cw <command> --help
```
## Requirements
- **Git**: 2.31+ (for worktree support)
- **Python**: 3.11+
- **AI Tool** (optional): Claude Code, Codex, Happy, or custom
## Installation Methods
<details>
<summary>Using uv (recommended)</summary>
```bash
uv tool install claude-worktree
```
</details>
<details>
<summary>Using pip</summary>
```bash
pip install claude-worktree
```
</details>
<details>
<summary>From source</summary>
```bash
git clone https://github.com/DaveDev42/claude-worktree.git
cd claude-worktree
uv pip install -e .
```
</details>
## Troubleshooting
<details>
<summary>"Not a git repository"</summary>
Run commands from within a git repository.
</details>
<details>
<summary>"AI tool not detected"</summary>
Install your AI tool or skip AI launch:
```bash
cw config use-preset no-op
```
</details>
<details>
<summary>"Rebase failed"</summary>
Resolve conflicts manually:
```bash
cd <worktree-path>
git rebase <base-branch>
# Fix conflicts
git rebase --continue
cw pr # or cw merge --push
```
</details>
<details>
<summary>Shell completion not working</summary>
```bash
cw --install-completion
# Restart shell
```
</details>
For more troubleshooting help, see **[TROUBLESHOOTING.md](TROUBLESHOOTING.md)**.
## Documentation
### User Guides
- **[Commands Reference](docs/commands.md)** - Complete command reference with all options
- **[Configuration Guide](docs/configuration.md)** - AI tools, presets, shell completion, export/import
- **[Advanced Features](docs/advanced-features.md)** - Backup/restore, sync, cleanup, CI/CD
- **[Troubleshooting](TROUBLESHOOTING.md)** - Common issues and solutions
### Links
- **[GitHub Issues](https://github.com/DaveDev42/claude-worktree/issues)** - Report bugs or request features
- **[PyPI](https://pypi.org/project/claude-worktree/)** - Package page
- **[Changelog](https://github.com/DaveDev42/claude-worktree/releases)** - Release history
## Contributing
Contributions welcome! For development setup:
```bash
git clone https://github.com/DaveDev42/claude-worktree.git
cd claude-worktree
uv pip install -e ".[dev]"
# Run tests
uv run --extra dev pytest
# Run linting
ruff check src/ tests/
mypy src/claude_worktree
```
**For maintainers:** Use the automated release script to create new releases:
```bash
# Create a patch release (0.10.20 → 0.10.21)
uv run python scripts/release.py
# Create a minor release (0.10.20 → 0.11.0)
uv run python scripts/release.py --minor
# Create a major release (0.11.0 → 1.0.0)
uv run python scripts/release.py --major
```
**CHANGELOG management:** The changelog is automatically generated from GitHub Releases. When a release PR is merged:
1. GitHub automatically creates a Release with notes (PR-based)
2. Workflow updates `CHANGELOG.md` from Releases
3. Changes are committed to main
To manually update the changelog:
```bash
python scripts/changelog_sync.py
```
See [CLAUDE.md](CLAUDE.md) for detailed development and release workflows.
## License
BSD 3-Clause License - see [LICENSE](LICENSE) file for details.
## Acknowledgments
Built with [Typer](https://typer.tiangolo.com/) and [Rich](https://rich.readthedocs.io/) for a great CLI experience.
---
**Made for developers who love AI-assisted coding and clean git workflows** 🚀
| text/markdown | null | Dave <dave.dev@icloud.com> | null | null | BSD-3-Clause | claude, cli, development, git, workflow, worktree | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Version Control :: Git",
"Topic :: Utilities"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.27.0",
"packaging>=24.0",
"rich>=13.0.0",
"typer>=0.12.0",
"mypy>=1.8.0; extra == \"dev\"",
"pre-commit>=4.0.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-md>=0.2.0; extra == \"dev\"",
"pytest-mock>=3.12.0; extra == \"dev\"",
"pytest-xdist>=3.5.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.3.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/DaveDev42/claude-worktree",
"Repository, https://github.com/DaveDev42/claude-worktree",
"Issues, https://github.com/DaveDev42/claude-worktree/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:42:21.906206 | claude_worktree-0.10.48.tar.gz | 223,024 | 25/de/356cdadf5dcc6f6981a07e8e44561a83448dd6464ff07e544e808c634ee2/claude_worktree-0.10.48.tar.gz | source | sdist | null | false | 415031d7450593c12006657bf9b2e2a6 | 0d91d17e4835a6b66a3153a58c230e608bf8ceb6c2a8fa9384eb183e4c04401b | 25de356cdadf5dcc6f6981a07e8e44561a83448dd6464ff07e544e808c634ee2 | null | [
"LICENSE"
] | 238 |
2.4 | pbir-utils | 2.1.3 | A tool for managing Power BI Enhanced Report Format (PBIR) projects | <div align="center">
<img src="https://raw.githubusercontent.com/akhilannan/pbir-utils/main/docs/assets/logo.svg" alt="pbir-utils logo" width="200"/>
</div>
[](https://pypi.org/project/pbir-utils/)
[](https://pypi.org/project/pbir-utils/)
[](https://github.com/akhilannan/pbir-utils/actions/workflows/ci.yml)
[](https://opensource.org/licenses/MIT)
**pbir-utils** is a Python library designed to streamline the tasks that Power BI developers typically handle manually in Power BI Desktop. This module offers a range of utility functions to efficiently manage and manipulate PBIR (Power BI Enhanced Report Format) metadata.
## 📚 Documentation
**[View Full Documentation →](https://akhilannan.github.io/pbir-utils/)**
- [UI Guide](https://akhilannan.github.io/pbir-utils/ui/) - Web interface documentation
- [CLI Reference](https://akhilannan.github.io/pbir-utils/cli/) - Command-line usage and examples
- [Python API](https://akhilannan.github.io/pbir-utils/api/) - Library documentation and code examples
- [CI/CD Integration](https://akhilannan.github.io/pbir-utils/ci_cd/) - Pipeline integration and validation
## 📦 Installation
```bash
# Using pip
pip install "pbir-utils[ui]"
# Using uv
uv add "pbir-utils[ui]"
```
## 🚀 Quick Start
### CLI
```bash
# Launch interactive web UI (alias: pbir-utils serve)
pbir-utils ui
# Sanitize a report (dry-run to preview changes)
pbir-utils sanitize "C:\Reports\MyReport.Report" --dry-run
# Validate a report against rules
pbir-utils validate "C:\Reports\MyReport.Report"
# Extract metadata to CSV
pbir-utils extract-metadata "C:\Reports\MyReport.Report"
# Visualize report wireframes
pbir-utils visualize "C:\Reports\MyReport.Report"
```
### Python API
```python
import pbir_utils as pbir
# Sanitize a report
pbir.sanitize_powerbi_report(r"C:\Reports\MyReport.Report", actions=["remove_unused_measures", "standardize_pbir_folders"])
```
## ✨ Features
- **💻 CLI Support**: Access all utilities directly from the command line
- **🌐 Web UI**: Interactive browser-based interface for reports and actions
- **⚙️ CI/CD Integration**: Validate reports in pipelines before deployment
- **✅ Validate Reports**: Rule-based validation with custom expressions
- **📄 Extract Metadata**: Retrieve key metadata from PBIR files
- **🖼️ Wireframe Visualizer**: Visualize PBIR report layout
- **🧼 Sanitize Reports**: Clean up and optimize reports with YAML configuration
- **⛔ Disable Interactions**: Bulk disable interactions
- **🧹 Manage Measures**: Remove unused measures, analyze dependencies
- **🔍 Filter Management**: Update and sort report-level filters
- **📂 Standardize Folder Names**: Organize page and visual folders
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
| text/markdown | Akhil Ashok | null | null | null | MIT License | null | [
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Programming Language :: Python :: 3 :: Only",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"jinja2",
"pyyaml",
"simpleeval",
"fastapi; extra == \"ui\"",
"uvicorn[standard]; extra == \"ui\"",
"sse-starlette; extra == \"ui\"",
"python-multipart; extra == \"ui\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"httpx; extra == \"dev\"",
"ruff; extra == \"dev\"",
"bandit; extra == \"dev\"",
"pip-audit; extra == \"dev\"",
"mkdocs; extra == \"docs\"",
"mkdocs-material; extra == \"docs\"",
"requests; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/akhilannan/pbir-utils"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T06:41:34.406803 | pbir_utils-2.1.3.tar.gz | 191,004 | f3/e8/52b5734182ba603dba04a5a0497a72b85d14825e13fa67191b9c5abf42d9/pbir_utils-2.1.3.tar.gz | source | sdist | null | false | aa45c5f0fd46e8feda9a532cb1b63b73 | 934cb106e726829ad50d1d0f4f55bfda8381ad151c92be07b7b10338764b8ba7 | f3e852b5734182ba603dba04a5a0497a72b85d14825e13fa67191b9c5abf42d9 | null | [
"LICENSE"
] | 244 |
2.4 | mcp-atlassian | 0.16.0 | The Model Context Protocol (MCP) Atlassian integration is an open-source implementation that bridges Atlassian products (Jira and Confluence) with AI language models following Anthropic's MCP specification. This project enables secure, contextual AI interactions with Atlassian tools while maintaining data privacy and security. Key features include: | # MCP Atlassian



[](https://github.com/sooperset/mcp-atlassian/actions/workflows/tests.yml)

[](https://personal-1d37018d.mintlify.app)
Model Context Protocol (MCP) server for Atlassian products (Confluence and Jira). Supports both Cloud and Server/Data Center deployments.
https://github.com/user-attachments/assets/35303504-14c6-4ae4-913b-7c25ea511c3e
<details>
<summary>Confluence Demo</summary>
https://github.com/user-attachments/assets/7fe9c488-ad0c-4876-9b54-120b666bb785
</details>
## Quick Start
### 1. Get Your API Token
Go to https://id.atlassian.com/manage-profile/security/api-tokens and create a token.
> For Server/Data Center, use a Personal Access Token instead. See [Authentication](https://personal-1d37018d.mintlify.app/docs/authentication).
### 2. Configure Your IDE
Add to your Claude Desktop or Cursor MCP configuration:
```json
{
"mcpServers": {
"mcp-atlassian": {
"command": "uvx",
"args": ["mcp-atlassian"],
"env": {
"JIRA_URL": "https://your-company.atlassian.net",
"JIRA_USERNAME": "your.email@company.com",
"JIRA_API_TOKEN": "your_api_token",
"CONFLUENCE_URL": "https://your-company.atlassian.net/wiki",
"CONFLUENCE_USERNAME": "your.email@company.com",
"CONFLUENCE_API_TOKEN": "your_api_token"
}
}
}
}
```
> **Server/Data Center users**: Use `JIRA_PERSONAL_TOKEN` instead of `JIRA_USERNAME` + `JIRA_API_TOKEN`. See [Authentication](https://personal-1d37018d.mintlify.app/docs/authentication) for details.
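For Server/Data Center, the `env` block differs only in the credentials. A sketch of the equivalent Jira configuration (URL and token values are placeholders):

```json
{
  "mcpServers": {
    "mcp-atlassian": {
      "command": "uvx",
      "args": ["mcp-atlassian"],
      "env": {
        "JIRA_URL": "https://jira.your-company.com",
        "JIRA_PERSONAL_TOKEN": "your_personal_access_token"
      }
    }
  }
}
```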
### 3. Start Using
Ask your AI assistant to:
- **"Find issues assigned to me in PROJ project"**
- **"Search Confluence for onboarding docs"**
- **"Create a bug ticket for the login issue"**
- **"Update the status of PROJ-123 to Done"**
## Documentation
Full documentation is available at **[personal-1d37018d.mintlify.app](https://personal-1d37018d.mintlify.app)**.
Documentation is also available in [llms.txt format](https://llmstxt.org/), which LLMs can consume easily:
- [`llms.txt`](https://personal-1d37018d.mintlify.app/llms.txt) — documentation sitemap
- [`llms-full.txt`](https://personal-1d37018d.mintlify.app/llms-full.txt) — complete documentation
| Topic | Description |
|-------|-------------|
| [Installation](https://personal-1d37018d.mintlify.app/docs/installation) | uvx, Docker, pip, from source |
| [Authentication](https://personal-1d37018d.mintlify.app/docs/authentication) | API tokens, PAT, OAuth 2.0 |
| [Configuration](https://personal-1d37018d.mintlify.app/docs/configuration) | IDE setup, environment variables |
| [HTTP Transport](https://personal-1d37018d.mintlify.app/docs/http-transport) | SSE, streamable-http, multi-user |
| [Tools Reference](https://personal-1d37018d.mintlify.app/docs/tools-reference) | All Jira & Confluence tools |
| [Troubleshooting](https://personal-1d37018d.mintlify.app/docs/troubleshooting) | Common issues & debugging |
## Compatibility
| Product | Deployment | Support |
|---------|------------|---------|
| Confluence | Cloud | Fully supported |
| Confluence | Server/Data Center | Supported (v6.0+) |
| Jira | Cloud | Fully supported |
| Jira | Server/Data Center | Supported (v8.14+) |
## Key Tools
| Jira | Confluence |
|------|------------|
| `jira_search` - Search with JQL | `confluence_search` - Search with CQL |
| `jira_get_issue` - Get issue details | `confluence_get_page` - Get page content |
| `jira_create_issue` - Create issues | `confluence_create_page` - Create pages |
| `jira_update_issue` - Update issues | `confluence_update_page` - Update pages |
| `jira_transition_issue` - Change status | `confluence_add_comment` - Add comments |
| `jira_get_issue_sla` - Calculate SLA metrics | `confluence_get_page_history` - Get historical page versions |
| `jira_get_issue_development_info` - Get linked PRs, branches, commits | `confluence_get_page_views` - Get page view stats (Cloud only) |
| `jira_get_issue_proforma_forms` - Get ProForma forms | |
| `jira_get_proforma_form_details` - Get form details | |
| `jira_update_proforma_form_answers` - Update form answers | |
See [Tools Reference](https://personal-1d37018d.mintlify.app/docs/tools-reference) for the complete list.
## Security
Never share API tokens. Keep `.env` files secure. See [SECURITY.md](SECURITY.md).
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup.
## License
MIT - See [LICENSE](LICENSE). Not an official Atlassian product.
| text/markdown | null | sooperset <soomiles.dev@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"atlassian-python-api>=4.0.0",
"beautifulsoup4>=4.12.3",
"cachetools>=5.0.0",
"click>=8.1.7",
"fastmcp<2.15.0,>=2.13.0",
"httpx>=0.28.0",
"keyring>=25.6.0",
"markdown-to-confluence<0.4.0,>=0.3.0",
"markdown>=3.7.0",
"markdownify>=0.11.6",
"mcp<2.0.0,>=1.8.0",
"pydantic<3.0,>=2.10.6",
"python-dateutil>=2.9.0.post0",
"python-dotenv>=1.0.1",
"requests[socks]>=2.31.0",
"starlette>=0.49.1",
"thefuzz>=0.22.1",
"trio>=0.29.0",
"types-cachetools>=5.5.0.20240820",
"types-python-dateutil>=2.9.0.20241206",
"tzdata>=2024.1; platform_system == \"Windows\"",
"unidecode>=1.3.0",
"urllib3>=2.6.3",
"uvicorn>=0.27.1"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T06:40:50.793684 | mcp_atlassian-0.16.0.tar.gz | 587,987 | 55/6b/8407c33ce8731a3a5ed0137818f23aa9c5178aa94e75d4a83ccc9a04b966/mcp_atlassian-0.16.0.tar.gz | source | sdist | null | false | cfaa3c469c41e7d5cd4c95078146908f | 07fdf178c45bdc4bd5a21aa1679d1dbf8ce4a59de3e60f2a6b68bf6d50734138 | 556b8407c33ce8731a3a5ed0137818f23aa9c5178aa94e75d4a83ccc9a04b966 | null | [
"LICENSE"
] | 6,791 |
2.4 | lakeops | 1.1.0 | Data lake operations toolkit | # LakeOps
[](https://badge.fury.io/py/lakeops)
[](https://pypi.org/project/lakeops/)
[](https://github.com/hoaihuongbk/lakeops/actions/workflows/test.yml)
[](https://codecov.io/gh/hoaihuongbk/lakeops)
A modern data lake operations toolkit that exposes the same APIs across multiple table formats (Delta, Iceberg, Parquet) and engine backends (Spark, Polars).
## Features
- Multi-format support: Delta, Iceberg, Parquet
- Multiple engine backends: Apache Spark, Polars (default)
- Storage operations: read, write
To learn more, read the [user guide](https://hoaihuongbk.github.io/lakeops/).
## Quick Start
### Installation
```bash
pip install lakeops
```
### Sample Usage
```python
from pyspark.sql import SparkSession
from lakeops import LakeOps
from lakeops.core.engine import SparkEngine
# Init Spark session and create LakeOps instance
spark = SparkSession.builder.getOrCreate()
engine = SparkEngine(spark)
ops = LakeOps(engine)
# Read data from a path or table name
df = ops.read("s3://local/test/table", format="parquet")
# Write data back to the same location
ops.write(df, "s3://local/test/table", format="parquet")
```
| text/markdown; charset=UTF-8; variant=GFM | null | Huong Vuong <hoaihuongvuonghuynh@gmail.com> | null | null | null | null | [
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Rust",
"Topic :: Scientific/Engineering",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cryptography>=44.0.1",
"databricks-sdk>=0.44.1",
"deltalake>=0.24.0",
"duckdb>=1.2.1",
"pandas>=2.2.3",
"polars>=1.21.0",
"pyarrow>=19.0.0",
"gspread>=6.1.4; extra == \"gsheet\"",
"pyspark<4.0.0,>=3.5.3; extra == \"spark\"",
"pyspark[connect]<4.0.0,>=3.5.3; extra == \"spark-connect\"",
"connectorx>=0.4.1; extra == \"trino\"",
"trino[sqlalchemy]>=0.333.0; extra == \"trino\"",
"hvac>=2.3.0; extra == \"vault\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:40:35.827951 | lakeops-1.1.0-cp312-cp312-win_amd64.whl | 267,107 | b2/9c/cb811cadfbe2bc5c779bfb4c3acfb75cf436a9f2a54c4dfc2b1adb5cf8f5/lakeops-1.1.0-cp312-cp312-win_amd64.whl | cp312 | bdist_wheel | null | false | 683cd64314ab92db58d20fae61557ada | 6c3fad5e337def54f52df4ff27f1d809a53ab1e6375ecdc199b1fedb15d5a4c8 | b29ccb811cadfbe2bc5c779bfb4c3acfb75cf436a9f2a54c4dfc2b1adb5cf8f5 | null | [
"LICENSE"
] | 627 |
2.4 | agent-company-ai | 0.3.2 | Spin up an AI agent company - a business run by AI agents, managed by you | # Agent Company AI
**Spin up an AI agent company - a business run by AI agents, managed by you.**
Agent Company AI lets a solo entrepreneur create a virtual company staffed entirely by AI agents. Each agent has a specific business role (CEO, CTO, Developer, Marketer, etc.), they collaborate on tasks, and you manage everything through a CLI or web dashboard.
## Quick Start
```bash
pip install agent-company-ai
```
### 1. Initialize your company
```bash
agent-company-ai init --name "My AI Startup"
```
You'll be prompted to choose an LLM provider (Anthropic, OpenAI, DeepSeek, Ollama, and more).
### 2. One-command setup
Spin up a full team from a preset template:
```bash
agent-company-ai setup tech_startup --name "Acme AI"
agent-company-ai setup saas --name "CloudCo" --provider anthropic
agent-company-ai setup --list # see all presets
```
**Available presets:** `tech_startup` (6 agents), `agency` (6), `ecommerce` (7), `saas` (9), `consulting` (6), `content` (5), `full` (10 — all departments)
### 3. Or hire agents individually
```bash
agent-company-ai hire ceo --name Alice
agent-company-ai hire cto --name Bob
agent-company-ai hire developer --name Carol
agent-company-ai hire marketer --name Dave
```
### 4. Define your business model
Tell your agents how the company makes money:
```bash
agent-company-ai profit-engine setup --template saas
```
This injects your business DNA into every agent's decision-making. See [ProfitEngine](#profitengine--business-dna) below.
### 5. Run autonomously
Give the CEO a goal and watch the company run:
```bash
agent-company-ai run "Build a landing page for our new product"
```
The CEO will break down the goal, delegate tasks to the team, and agents will collaborate to deliver results.
```bash
# With limits
agent-company-ai run "Launch MVP" --cycles 3 --timeout 600 --max-tasks 20
```
## Commands
| Command | Description |
|---------|-------------|
| `init` | Initialize a new company |
| `setup <preset>` | Set up a full company from a template |
| `hire <role>` | Hire an agent |
| `fire <name>` | Remove an agent |
| `team` | List all agents |
| `assign "<task>"` | Assign a task |
| `tasks` | Show the task board |
| `chat <name>` | Chat with an agent |
| `run "<goal>"` | Autonomous mode |
| `broadcast "<msg>"` | Message all agents |
| `dashboard` | Launch web dashboard |
| `status` | Company overview |
| `output` | List deliverables produced by agents |
| `roles` | List available roles |
| `companies` | List all companies in this directory |
| `destroy` | Permanently delete a company |
| `profit-engine <cmd>` | Configure business model DNA ([details](#profitengine--business-dna)) |
| `wallet <cmd>` | Manage blockchain wallet ([details](#blockchain-wallet)) |
### Global Options
| Flag | Description |
|------|-------------|
| `--company` / `-C` | Company slug to operate on (default: `default`) |
You can also set the company via environment variable:
```bash
export AGENT_COMPANY_NAME=my-startup
agent-company-ai team # operates on my-startup
```
## ProfitEngine — Business DNA
ProfitEngine lets you define your company's business model — how it earns money, who it serves, and what matters most. This "business DNA" is injected into **every agent's system prompt** and into the **CEO's goal loop**, so all decisions align with your business model.
### Setup
Start from a preset template or from scratch:
```bash
# Interactive wizard with a preset
agent-company-ai profit-engine setup --template saas
# Fully interactive — choose a template then customize each field
agent-company-ai profit-engine setup
```
The wizard walks you through 8 fields:
| Field | What it defines |
|-------|----------------|
| **Mission** | The company's core purpose |
| **Revenue Streams** | How the company makes money |
| **Target Customers** | Who the ideal customers are |
| **Pricing Model** | How products/services are priced |
| **Competitive Edge** | What sets the company apart |
| **Key Metrics** | What metrics define success |
| **Cost Priorities** | Where money should be spent first |
| **Additional Context** | Any other business context |
### Templates
6 preset templates to start from:
| Template | Business Model |
|----------|---------------|
| `saas` | SaaS (Software as a Service) — recurring subscriptions |
| `ecommerce` | E-Commerce — online retail |
| `marketplace` | Marketplace / Platform — transaction fees |
| `agency` | Agency / Services — project and retainer fees |
| `consulting` | Consulting — advisory and engagement fees |
| `content` | Content / Media — ads, subscriptions, licensing |
```bash
agent-company-ai profit-engine templates # list all templates
```
### Commands
| Command | Description |
|---------|-------------|
| `profit-engine setup` | Interactive wizard to configure business DNA |
| `profit-engine show` | Display current DNA |
| `profit-engine edit <field>` | Edit a single field |
| `profit-engine templates` | List available preset templates |
| `profit-engine disable` | Disable DNA injection (config preserved) |
### How it works
Once configured, the business DNA is automatically:
- **Appended to every agent's system prompt** — so developers, marketers, sales, and support all understand the business model
- **Injected into the CEO's planning and review tasks** — so autonomous mode goals are planned and evaluated through the lens of your business model
The DNA is stored in `config.yaml` under the `profit_engine` key. No new database tables — just config.
```yaml
# .agent-company-ai/default/config.yaml
profit_engine:
enabled: true
mission: "Build and scale a SaaS product..."
revenue_streams: "Monthly/annual subscriptions..."
target_customers: "SMB to enterprise..."
pricing_model: "Tiered subscription pricing..."
competitive_edge: "Product-led growth..."
key_metrics: "MRR/ARR, churn rate, LTV:CAC..."
cost_priorities: "Engineering first..."
additional_context: ""
```
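The injection itself is plain string work; here is a minimal sketch of the idea (hypothetical helper, assuming field names matching the config keys shown above, not the project's actual code):

```python
# Hypothetical sketch: render a profit_engine config dict into a prompt suffix.
def dna_prompt_suffix(dna: dict) -> str:
    if not dna.get("enabled"):
        return ""
    labels = {
        "mission": "Mission",
        "revenue_streams": "Revenue Streams",
        "target_customers": "Target Customers",
        "pricing_model": "Pricing Model",
        "competitive_edge": "Competitive Edge",
        "key_metrics": "Key Metrics",
        "cost_priorities": "Cost Priorities",
        "additional_context": "Additional Context",
    }
    # Only include fields that are actually filled in.
    lines = [f"- {label}: {dna[key]}" for key, label in labels.items() if dna.get(key)]
    return "\n\nBusiness DNA:\n" + "\n".join(lines)

dna = {"enabled": True,
       "mission": "Build and scale a SaaS product...",
       "key_metrics": "MRR/ARR, churn rate, LTV:CAC..."}
print(dna_prompt_suffix(dna))
```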
### Dashboard API
| Endpoint | Description |
|----------|-------------|
| `GET /api/profit-engine` | Return current ProfitEngine config |
| `POST /api/profit-engine` | Update fields and save to config |
| `GET /api/profit-engine/templates` | List all templates with content |
## Blockchain Wallet
Built-in Ethereum wallet with multi-chain support. Agents can request payments (with human approval), and you can send tokens directly from the CLI.
### Setup
```bash
agent-company-ai wallet create
```
Creates an encrypted keystore (password-protected). One address works across all supported chains.
### Supported Chains
| Chain | Native Token |
|-------|-------------|
| Ethereum | ETH |
| Base | ETH |
| Arbitrum | ETH |
| Polygon | MATIC |
### Commands
| Command | Description |
|---------|-------------|
| `wallet create` | Generate a new wallet with encrypted keystore |
| `wallet address` | Show the company wallet address |
| `wallet balance` | Show balances across all chains |
| `wallet balance --chain base` | Show balance on a specific chain |
| `wallet send <amount> --to <addr> --chain <chain>` | Send native tokens (requires password) |
| `wallet payments` | Show the payment approval queue |
| `wallet approve <id>` | Approve and send a pending payment |
| `wallet reject <id>` | Reject a pending payment |
### Agent Payments
Agents with wallet tools (`check_balance`, `get_wallet_address`, `list_payments`, `request_payment`) can request payments during task execution. All payment requests go into an approval queue — nothing is sent without your explicit approval.
```bash
# Check pending payments
agent-company-ai wallet payments --status pending
# Approve a payment
agent-company-ai wallet approve abc123
# Reject a payment
agent-company-ai wallet reject abc123
```
### Dashboard API
| Endpoint | Description |
|----------|-------------|
| `GET /api/wallet/balance` | Balances (optional `?chain=` filter) |
| `GET /api/wallet/address` | Wallet address |
| `GET /api/wallet/payments` | Payment queue (optional `?status=` filter) |
## Multi-Company Support
Run multiple independent companies in the same directory. Each company gets its own config, database, and agents:
```bash
# Default company
agent-company-ai init --name "Acme AI"
agent-company-ai hire ceo --name Alice
# Create a second company
agent-company-ai -C my-startup init --name "My Startup" --provider anthropic
agent-company-ai -C my-startup hire ceo --name Bob
# List all companies
agent-company-ai companies
# Destroy a company
agent-company-ai destroy --company my-startup
agent-company-ai destroy --yes # destroy default, skip confirmation
```
**Directory layout:**
```
.agent-company-ai/
default/
config.yaml
company.db
my-startup/
config.yaml
company.db
```
Existing single-company setups are automatically migrated into `default/` on first access.
## Available Roles
| Role | Title | Reports To |
|------|-------|------------|
| `ceo` | Chief Executive Officer | Owner |
| `cto` | Chief Technology Officer | CEO |
| `developer` | Software Developer | CTO |
| `marketer` | Head of Marketing | CEO |
| `sales` | Head of Sales | CEO |
| `support` | Customer Support Lead | CEO |
| `finance` | CFO / Finance | CEO |
| `hr` | Head of HR | CEO |
| `project_manager` | Project Manager | CEO |
## LLM Providers
Supports 11 providers out of the box:
| Provider | Models | API Key Env Var |
|----------|--------|-----------------|
| **Anthropic** (default) | Claude Sonnet 4.5, etc. | `ANTHROPIC_API_KEY` |
| **OpenAI** | GPT-4o, etc. | `OPENAI_API_KEY` |
| **DeepSeek** | DeepSeek-R1 | `DEEPSEEK_API_KEY` |
| **MiMo** | MiMo-7B-RL | `DEEPSEEK_API_KEY` |
| **Kimi** | Kimi-K2 | `MOONSHOT_API_KEY` |
| **Qwen** | Qwen-Max | `DASHSCOPE_API_KEY` |
| **MiniMax** | MiniMax-M1 | `MINIMAX_API_KEY` |
| **Ollama** | Llama 3.1, etc. | None (local) |
| **Together** | Llama, Mixtral, etc. | `TOGETHER_API_KEY` |
| **Groq** | Fast open-source inference | `GROQ_API_KEY` |
| **OpenAI-compatible** | Any endpoint | Custom |
Configure different providers per agent:
```yaml
llm:
default_provider: anthropic
anthropic:
api_key: ${ANTHROPIC_API_KEY}
model: claude-sonnet-4-5-20250929
openai:
api_key: ${OPENAI_API_KEY}
model: gpt-4o
base_url: https://api.openai.com/v1 # or any compatible endpoint
agents:
- name: Alice
role: ceo
provider: anthropic
- name: Bob
role: developer
provider: openai
```
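The `${VAR}` references in `api_key` are presumably expanded from the environment when the config loads; the idea in miniature (illustrative sketch, not the project's actual loader):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} references with values from the environment."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["ANTHROPIC_API_KEY"] = "sk-demo"
print(expand_env("${ANTHROPIC_API_KEY}"))  # sk-demo
```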
## Web Dashboard
```bash
agent-company-ai dashboard --port 8420
```
Features:
- **Org Chart** - Visual company hierarchy
- **Agent Roster** - See all agents and their roles
- **Task Board** - Kanban-style task management
- **Chat** - Talk directly to any agent
- **Activity Feed** - Real-time event stream via WebSocket
- **Autonomous Mode** - Set goals and monitor progress from the UI
- **Cost Tracker** - Real-time API cost breakdown by agent and model
- **ProfitEngine** - View and edit business DNA from the dashboard
- **Wallet** - Check balances and view payment queue
## Built-in Agent Tools
Agents have access to these tools based on their role:
| Tool | Description |
|------|-------------|
| **web_search** | Search the web via DuckDuckGo (no API key needed) |
| **read_file** / **write_file** | File operations in the workspace (sandboxed) |
| **code_exec** | Execute Python code (restricted builtins) |
| **shell** | Run shell commands (30s timeout, dangerous patterns blocked) |
| **delegate_task** | Delegate work to other agents |
| **report_result** | Submit task results |
| **check_balance** | Check wallet balance (wallet-enabled agents) |
| **get_wallet_address** | Get company wallet address (wallet-enabled agents) |
| **list_payments** | View payment queue (wallet-enabled agents) |
| **request_payment** | Request a payment — goes to approval queue (wallet-enabled agents) |
## Autonomous Mode
The company runs in CEO-driven cycles:
1. **Plan** - CEO breaks the goal into tasks and delegates to the team
2. **Execute** - Agents work on tasks in parallel waves
3. **Review** - CEO evaluates progress and decides: DONE, CONTINUE, or FAILED
4. **Loop** - Repeat until goal achieved or limits reached
When ProfitEngine is enabled, the CEO factors business DNA into every planning and review decision.
**Configurable limits** (in `config.yaml` or via CLI flags):
- `max_cycles: 5` — CEO review loops
- `max_waves_per_cycle: 10` — parallel execution waves per cycle
- `max_total_tasks: 50` — hard cap on tasks
- `max_time_seconds: 3600` — wall-clock timeout
- `max_cost_usd: 0.0` — spending cap (0 = unlimited)
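As a config fragment, those limits might look like the following (the top-level key name is an assumption; check your generated `config.yaml` for the exact section):

```yaml
# Illustrative placement -- the enclosing key may differ in your config.
autonomous:
  max_cycles: 5
  max_waves_per_cycle: 10
  max_total_tasks: 50
  max_time_seconds: 3600
  max_cost_usd: 0.0   # 0 = unlimited
```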
## Custom Roles
Create custom roles by adding YAML files:
```yaml
# .agent-company-ai/roles/custom_analyst.yaml
name: analyst
title: "Data Analyst"
description: "Analyzes data and creates reports"
system_prompt: |
You are a data analyst at {company_name}.
Your expertise: data analysis, visualization, reporting.
Team: {team_members}
Delegates: {delegates}
default_tools:
- code_exec
- file_io
can_delegate_to: []
reports_to: cto
```
## Donate
If this project is useful to you, consider supporting development:
**ETH:** `0x0448F896Fc878DF56046Aa0090D05Dd01F28b338`
## Enterprise Customization & Consulting
**"We build an AI agent workforce for your company"**
- **Implementation fee:** $10k-100k+
- **Ongoing support:** $2k-10k/month
Please contact gobeyondfj@gmail.com
## License
MIT
| text/markdown | Agent Company AI | null | null | null | null | agents, ai, automation, business, llm | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiosqlite>=0.19.0",
"anthropic>=0.39.0",
"eth-account>=0.11.0",
"fastapi>=0.110.0",
"httpx>=0.27.0",
"jinja2>=3.1.0",
"openai>=1.0.0",
"pydantic>=2.0",
"pyyaml>=6.0",
"rich>=13.0",
"typer[all]>=0.9.0",
"uvicorn[standard]>=0.27.0",
"web3>=6.0.0",
"websockets>=12.0"
] | [] | [] | [] | [
"Homepage, https://github.com/gobeyondfj-cmd/agent-company-ai",
"Documentation, https://github.com/gobeyondfj-cmd/agent-company-ai#readme",
"Issues, https://github.com/gobeyondfj-cmd/agent-company-ai/issues"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-21T06:40:16.556284 | agent_company_ai-0.3.2.tar.gz | 71,184 | d0/7b/fadc121eb2e79d33cd530b8167ba35cc0204937b01939e5b12b3050c1412/agent_company_ai-0.3.2.tar.gz | source | sdist | null | false | ecd3bd40dcec580d27ebf8d9ac0fc88c | da368a4eb699a513b01a8b900460d0f4bc265830f399900c3d3574ce11dd4642 | d07bfadc121eb2e79d33cd530b8167ba35cc0204937b01939e5b12b3050c1412 | MIT | [
"LICENSE"
] | 234 |
2.4 | social-links | 1.3.1 | Python library to validate, sanitize, and detect social media URLs. Support for LinkedIn, Instagram, TikTok, X/Twitter, GitHub, Facebook, YouTube, and 50+ platforms. Features automatic URL normalization and zero dependencies. Easy to use, regex-powered, and customizable. | # Social Links



[](https://pypi.org/project/social-links/)
[](https://pepy.tech/projects/social-links)
[](https://ysskrishna.github.io/social-links/)
[](https://ysskrishna.github.io/social-links/demo/)
Python library to validate, sanitize, and detect social media URLs. Support for LinkedIn, Instagram, TikTok, X/Twitter, GitHub, Facebook, YouTube, and 65+ platforms. Features automatic URL normalization and zero dependencies. Easy to use, regex-powered, and customizable.
> 🚀 **Try it interactively in your browser!** Test the library with our [Interactive Demo](https://ysskrishna.github.io/social-links/demo/) - no installation required.

## Features
- 🔍 **Auto-detect** social media platforms from URLs
- ✅ **Validate** URLs against specific platforms
- 🧹 **Sanitize** URLs to canonical format
- 🆔 **Extract IDs** (usernames, profile IDs) from URLs
- 🎯 **65+ predefined platforms** (LinkedIn, GitHub, Twitter/X, Facebook, Instagram, YouTube, and more)
- 🔧 **Customizable** - Add your own platforms with regex patterns
- 🚀 **Zero dependencies** - Pure Python, no external libraries
## Installation
```bash
pip install social-links
```
Or using `uv`:
```bash
uv pip install social-links
```
## Quick Start
```python
from sociallinks import detect_platform, sanitize, extract_id, is_valid, list_platforms
# Detect platform from URL
platform = detect_platform("https://www.linkedin.com/in/ysskrishna/")
print(platform) # "linkedin"
# Validate URL for a specific platform
is_valid_url = is_valid("linkedin", "https://www.linkedin.com/in/ysskrishna/")
print(is_valid_url) # True
# Sanitize URL to canonical format
sanitized = sanitize("linkedin", "https://www.linkedin.com/in/ysskrishna/")
print(sanitized) # "https://linkedin.com/in/ysskrishna"
# Extract username/ID from URL
user_id = extract_id("linkedin", "https://www.linkedin.com/in/ysskrishna/")
print(user_id) # "ysskrishna"
# List all supported platforms
platforms = list_platforms()
print(f"Supported platforms: {len(platforms)}") # Supported platforms: 65+
```
That's it! For most use cases, you don't need anything more. See [Basic Usage](#basic-usage) for more examples, or [Advanced Usage](#advanced-usage) if you need custom platforms or configurations.
## Supported Platforms
The library comes with 65+ predefined platforms:
| Predefined Platforms| | |
|---|---|---|
| [Apple Music](https://music.apple.com) | [ArtStation](https://artstation.com) | [Bandcamp](https://bandcamp.com) |
| [Behance](https://behance.net) | [Bluesky](https://bsky.app) | [Chess.com](https://chess.com) |
| [CodePen](https://codepen.io) | [Crunchbase](https://crunchbase.com) | [Dailymotion](https://dailymotion.com) |
| [Dev.to](https://dev.to) | [Discord](https://discord.com) | [Docker Hub](https://hub.docker.com) |
| [Douyin](https://douyin.com) | [Dribbble](https://dribbble.com) | [Etsy](https://etsy.com) |
| [Exercism](https://exercism.io) | [Facebook](https://facebook.com) | [Fiverr](https://fiverr.com) |
| [Flickr](https://flickr.com) | [GitHub](https://github.com) | [GitLab](https://gitlab.com) |
| [Gravatar](https://gravatar.com) | [Gumroad](https://gumroad.com) | [HackerRank](https://hackerrank.com) |
| [Hacker News](https://news.ycombinator.com) | [Hashnode](https://hashnode.com) | [Instagram](https://instagram.com) |
| [Kaggle](https://kaggle.com) | [Keybase](https://keybase.io) | [Kuaishou](https://kuaishou.com) |
| [LeetCode](https://leetcode.com) | [Lemmy World](https://lemmy.world) | [Lichess](https://lichess.org) |
| [LinkedIn](https://linkedin.com) (personal & company) | [Linktree](https://linktr.ee) | [Mastodon](https://mastodon.social) |
| [Medium](https://medium.com) | [Patreon](https://patreon.com) | [Pinterest](https://pinterest.com) |
| [Product Hunt](https://producthunt.com) | [PyPI](https://pypi.org) | [Quora](https://quora.com) |
| [Reddit](https://reddit.com) | [Replit](https://replit.com) | [Rumble](https://rumble.com) |
| [Signal](https://signal.me) | [SlideShare](https://slideshare.net) | [Snapchat](https://snapchat.com) |
| [SoundCloud](https://soundcloud.com) | [Spotify](https://spotify.com) | [Stack Overflow](https://stackoverflow.com) |
| [Steam](https://steamcommunity.com) | [Substack](https://substack.com) | [Telegram](https://telegram.org) |
| [Threads](https://threads.net) | [TikTok](https://tiktok.com) | [Trello](https://trello.com) |
| [Tumblr](https://tumblr.com) | [Twitch](https://twitch.tv) | [Unsplash](https://unsplash.com) |
| [Vimeo](https://vimeo.com) | [VK](https://vk.com) | [WeChat](https://weixin.qq.com) |
| [Weibo](https://weibo.com) | [Wellfound (AngelList)](https://wellfound.com) | [WhatsApp](https://whatsapp.com) |
| [WordPress](https://wordpress.com) | [X (Twitter)](https://x.com) | [YouTube](https://youtube.com) |
## Basic Usage
The simplest way to use social-links is with module-level functions. These work out of the box with 65+ predefined platforms - no configuration needed!
### Detect Platform
```python
from sociallinks import detect_platform
# Detect from full URL
detect_platform("https://github.com/ysskrishna") # "github"
detect_platform("https://x.com/ysskrishna") # "x"
detect_platform("https://example.com") # None
# Works with various URL formats
detect_platform("http://linkedin.com/in/ysskrishna")
detect_platform("www.facebook.com/ysskrishna")
detect_platform(" https://instagram.com/ysskrishna ") # Handles whitespace
```
### Validate URLs
```python
from sociallinks import is_valid
# Validate against specific platform
is_valid("linkedin", "https://www.linkedin.com/in/ysskrishna/") # True
is_valid("linkedin", "https://example.com") # False
is_valid("github", "https://github.com/ysskrishna") # True
```
### Sanitize URLs
```python
from sociallinks import sanitize
# Normalize to canonical format
sanitize("linkedin", "https://www.linkedin.com/in/ysskrishna/")
# Returns: "https://linkedin.com/in/ysskrishna"
sanitize("github", "http://www.github.com/ysskrishna")
# Returns: "https://github.com/ysskrishna"
sanitize("x", "https://twitter.com/ysskrishna")
# Returns: "https://x.com/ysskrishna"
```
### Extract IDs
```python
from sociallinks import extract_id
# Extract username/profile ID from URL
extract_id("linkedin", "https://www.linkedin.com/in/ysskrishna/") # "ysskrishna"
extract_id("github", "https://github.com/ysskrishna") # "ysskrishna"
extract_id("x", "https://twitter.com/ysskrishna") # "ysskrishna"
# Works with company/org URLs too
extract_id("linkedin", "https://linkedin.com/company/acme") # "acme"
```
### List Platforms
```python
from sociallinks import list_platforms
# Get all available platforms
platforms = list_platforms()
# Returns: ["behance", "dev_to", "dribbble", "github", "linkedin", ...]
print(f"Supported platforms: {len(platforms)}") # 65+
```
---
## Advanced Usage
For custom configurations, custom platforms, or programmatic platform management, use the `SocialLinks` class directly instead of the module-level functions.
### When to Use Advanced Features
Use the class API when you need to:
- Add support for platforms not in the predefined list
- Customize how existing platforms are detected or sanitized
- Manage platforms programmatically (add, remove, modify)
- Configure regex flags or start with an empty platform list
### Using the Class API
The `SocialLinks` class provides the same methods as module-level functions, but with additional configuration options:
```python
from sociallinks import SocialLinks
sl = SocialLinks()
# Same methods as module functions
sl.detect_platform("https://github.com/ysskrishna") # "github"
sl.is_valid("linkedin", "https://linkedin.com/in/user") # True
sl.sanitize("github", "https://github.com/user") # "https://github.com/user"
sl.extract_id("github", "https://github.com/user") # "user"
sl.list_platforms() # ["behance", "dev_to", "dribbble", ...]
```
#### Configuration Options
```python
import re
# Start with empty platform list (useful for custom platforms only)
sl = SocialLinks(use_predefined_platforms=False)
# Configure regex compilation flags
sl = SocialLinks(regex_flags=re.IGNORECASE | re.MULTILINE)
```
### Understanding Platform Configuration
A platform configuration is a **list of dictionaries**. Each dictionary contains:
- **`patterns`**: List of regex patterns that match URLs for this platform
- **`sanitized`**: Template string for the canonical URL format
Multiple dictionaries are useful when a platform has different URL types that normalize to different canonical forms (e.g., LinkedIn personal profiles `/in/` vs company pages `/company/`).
#### Configuration Structure
**Single dictionary** - Multiple patterns sharing the same sanitization template:
```python
platform_config = [{
"patterns": [
r"https?://(www\.)?example\.com/(?P<id>[A-Za-z0-9_]+)/?$",
r"https?://example\.com/user/(?P<id>[A-Za-z0-9_]+)/?$"
],
"sanitized": "https://example.com/{id}"
}]
```
**Multiple dictionaries** - Different URL formats with different sanitization templates (e.g., personal profiles vs company pages):
```python
# Example: LinkedIn supports both personal profiles and company pages
platform_config = [
{
"patterns": [
r"https?://(www\.)?linkedin\.com/in/(?P<id>[A-Za-z0-9_-]+)/?$",
r"https?://linkedin\.com/mwlite/in/(?P<id>[A-Za-z0-9_-]+)/?$"
],
"sanitized": "https://linkedin.com/in/{id}" # Personal profiles
},
{
"patterns": [
r"https?://(www\.)?linkedin\.com/company/(?P<id>[A-Za-z0-9_-]+)/?$",
r"https?://(www\.)?linkedin\.com/school/(?P<id>[A-Za-z0-9_-]+)/?$"
],
"sanitized": "https://linkedin.com/company/{id}" # Company/school pages
}
]
```
#### Key Concepts
**Pattern Matching:**
- Use named groups like `(?P<id>...)` to capture identifiers (username, ID, etc.)
- For `detect_platform()`: All patterns are checked (order-independent)
- For `sanitize()`: Patterns are checked **in order**, and the **first match** is used
**Sanitization Template:**
- Use `{id}` (or other named groups from patterns) as placeholders
- This defines the canonical URL format returned by `sanitize()`
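Both concepts are plain `re` mechanics: a named group captures the identifier, and the sanitization template is filled from the match's group dictionary. A self-contained illustration (toy pattern, not one of the library's real configs):

```python
import re

# Toy platform pattern with a named "id" group, plus its canonical template.
pattern = re.compile(r"https?://(www\.)?example\.com/user/(?P<id>[A-Za-z0-9_]+)/?$")
template = "https://example.com/{id}"

m = pattern.match("https://www.example.com/user/johndoe/")
print(m.group("id"))                     # johndoe
print(template.format(**m.groupdict()))  # https://example.com/johndoe
```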
## Adding Custom Platforms
#### Basic Example
```python
from sociallinks import SocialLinks
sl = SocialLinks(use_predefined_platforms=False)
# Define a custom platform
custom_platform = [{
"patterns": [
r"https?://(www\.)?example\.com/(?P<id>[A-Za-z0-9_]+)/?$",
r"https?://example\.com/user/(?P<id>[A-Za-z0-9_]+)/?$"
],
"sanitized": "https://example.com/{id}"
}]
# Register the platform
sl.set_platform("example", custom_platform)
# Use it
sl.detect_platform("https://example.com/user123") # "example"
sl.sanitize("example", "https://www.example.com/user123/") # "https://example.com/user123"
```
#### Handling Multiple URL Formats
When a platform supports different URL formats that normalize to **different** canonical forms (e.g., personal profiles vs company pages):
```python
from sociallinks import SocialLinks
sl = SocialLinks(use_predefined_platforms=False)
# Example: Platform with personal profiles and company pages
linkedin_style_platform = [
{
"patterns": [
r"https?://(www\.)?example\.com/profile/(?P<id>[A-Za-z0-9_-]+)/?$",
r"https?://example\.com/user/(?P<id>[A-Za-z0-9_-]+)/?$"
],
"sanitized": "https://example.com/profile/{id}" # Personal profiles
},
{
"patterns": [
r"https?://(www\.)?example\.com/company/(?P<id>[A-Za-z0-9_-]+)/?$",
r"https?://(www\.)?example\.com/org/(?P<id>[A-Za-z0-9_-]+)/?$"
],
"sanitized": "https://example.com/company/{id}" # Company pages
}
]
sl.set_platform("example", linkedin_style_platform)
# Personal profile URLs normalize to /profile/
sl.sanitize("example", "https://www.example.com/user/johndoe")
# Returns: "https://example.com/profile/johndoe"
# Company URLs normalize to /company/
sl.sanitize("example", "https://www.example.com/org/acme-corp")
# Returns: "https://example.com/company/acme-corp"
```
### Managing Platforms
#### Viewing Platform Configurations
```python
from sociallinks import SocialLinks
sl = SocialLinks()
# Get configuration for an existing platform
github_config = sl.get_platform("github")
print(github_config) # See the patterns and sanitized template
# List all available platforms
platforms = sl.list_platforms()
# Returns: ["behance", "dev_to", "dribbble", "github", "linkedin", ...]
```
#### Adding Platforms
```python
# Add a new platform (raises an error if the platform already exists)
custom_platform = [{
"patterns": [r"https?://example.com/(?P<id>[A-Za-z0-9_]+)"],
"sanitized": "https://example.com/{id}"
}]
sl.set_platform("example", custom_platform)
# Override an existing platform (including predefined ones)
sl.set_platform("github", custom_platform, override=True)
```
#### Adding Multiple Platforms
```python
# Add multiple platforms at once
new_platforms = {
"platform1": [{
"patterns": [r"https?://example1.com/(?P<id>[A-Za-z0-9_]+)"],
"sanitized": "https://example1.com/{id}"
}],
"platform2": [{
"patterns": [r"https?://example2.com/(?P<id>[A-Za-z0-9_]+)"],
"sanitized": "https://example2.com/{id}"
}]
}
sl.set_platforms(new_platforms, override=False) # Raises error if any exist
sl.set_platforms(new_platforms, override=True) # Overrides existing platforms
```
#### Modifying Existing Platforms
```python
# Get existing configuration
github_config = sl.get_platform("github")
# Modify and override
custom_github = [{
"patterns": [r"https?://github\.com/(?P<id>[A-Za-z0-9_]+)/?$"],
"sanitized": "https://github.com/{id}"
}]
sl.set_platform("github", custom_github, override=True)
```
#### Removing Platforms
```python
# Delete a single platform
sl.delete_platform("custom_platform")
# Delete multiple platforms
sl.delete_platforms(["platform1", "platform2"])
# Clear all platforms
sl.clear_platforms()
```
### Complete Example
Here's a complete example showing how to build a custom platform manager:
```python
from sociallinks import SocialLinks
# Start with predefined platforms
sl = SocialLinks()
# Add a custom platform
my_platform = [{
"patterns": [
r"https?://(www\.)?mysite\.com/profile/(?P<id>[A-Za-z0-9_]+)/?$",
r"https?://mysite\.com/u/(?P<id>[A-Za-z0-9_]+)/?$"
],
"sanitized": "https://mysite.com/profile/{id}"
}]
sl.set_platform("mysite", my_platform)
# Use it
url = "https://www.mysite.com/u/johndoe"
platform = sl.detect_platform(url) # "mysite"
sanitized = sl.sanitize(platform, url) # "https://mysite.com/profile/johndoe"
user_id = sl.extract_id(platform, url) # "johndoe"
is_valid = sl.is_valid(platform, url) # True
# View all platforms
all_platforms = sl.list_platforms()
print(f"Total platforms: {len(all_platforms)}")
```
## Changelog
See [CHANGELOG.md](https://github.com/ysskrishna/social-links/blob/main/CHANGELOG.md) for a detailed list of changes and version history.
## Roadmap
The following improvements are planned for upcoming releases:
- [ ] Add method to configure custom sanitization patterns
- [ ] Integrate development tools (flake8, black, isort) for code quality
- [ ] Add code coverage reporting with pytest-cov
- [ ] Refactor platform entries using dataclasses for better structure
## Contributing
Contributions are welcome! Please read our [Contributing Guide](https://github.com/ysskrishna/social-links/blob/main/CONTRIBUTING.md) for details on our code of conduct, development setup, and the process for submitting pull requests.
## Support
If you find this library helpful:
- ⭐ Star the repository
- 🐛 Report issues
- 🔀 Submit pull requests
- 💝 [Sponsor on GitHub](https://github.com/sponsors/ysskrishna)
## Credits
This package is inspired by the [social-links](https://www.npmjs.com/package/social-links) npm package by [gkucmierz](https://github.com/gkucmierz/social-links).
## License
MIT © [Y. Siva Sai Krishna](https://github.com/ysskrishna) - see [LICENSE](https://github.com/ysskrishna/social-links/blob/main/LICENSE) file for details.
---
<p align="left">
<a href="https://github.com/ysskrishna">Author's GitHub</a> •
<a href="https://linkedin.com/in/ysskrishna">Author's LinkedIn</a> •
<a href="https://github.com/ysskrishna/social-links/issues">Report Issues</a> •
<a href="https://pypi.org/project/social-links/">Package on PyPI</a> •
<a href="https://ysskrishna.github.io/social-links/">Package Documentation</a> •
<a href="https://ysskrishna.github.io/social-links/demo/">Interactive Demo</a>
</p>
| text/markdown | null | ysskrishna <sivasaikrishnassk@gmail.com> | null | null | MIT | api reference, behance, bluesky, changelog, contribution guide, detect, discord, dribbble, facebook, github, instagram, linkedin, mastodon, medium, mit license, normalize, open source, pinterest, python library, quora, reddit, regex, regular expression, sanitize, social media links, social-links, tiktok, twitter, url parser, url validation, validate, version history, x, youtube, ysskrishna | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Text Processing :: Markup",
"Topic :: Utilities"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/ysskrishna/social-links",
"Documentation, https://ysskrishna.github.io/social-links/",
"Repository, https://github.com/ysskrishna/social-links.git",
"Issues, https://github.com/ysskrishna/social-links/issues",
"Changelog, https://github.com/ysskrishna/social-links/blob/main/CHANGELOG.md",
"Discussions, https://github.com/ysskrishna/social-links/discussions"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T06:40:07.552163 | social_links-1.3.1-py3-none-any.whl | 19,995 | d1/32/fd7f4da6391ed4b45805c198b842be085d63b8e0eabf27c433ddd95b7549/social_links-1.3.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 655f68279cb2f2127cf339ec350af4f2 | b789da49f3092931c033bd650715a66e5ababfc5859dee2984957f5304e5a30d | d132fd7f4da6391ed4b45805c198b842be085d63b8e0eabf27c433ddd95b7549 | null | [
"LICENSE"
] | 239 |
2.1 | erpl-adt | 2026.2.21 | CLI for SAP ADT REST API | # erpl-adt
CLI and MCP server for the SAP ADT REST API — a single binary that talks the same HTTP endpoints Eclipse ADT uses. No Eclipse, no SAP NW RFC SDK, no JVM.
[](https://github.com/datazooDE/erpl-adt/actions/workflows/build.yaml)
Part of the [Datazoo](https://datazoo.de) ERPL family.
## What it does
- **Search and browse** ABAP objects, packages, data dictionary tables, CDS views
- **Read and write** source code with lock management and transport integration
- **Run tests** — ABAP Unit and ATC quality checks from the command line
- **Manage transports** — create, list, and release transport requests
- **MCP server** — expose all capabilities to AI agents over JSON-RPC (MCP 2024-11-05)
Every command accepts `--json` for machine-readable output.
## Quick examples
```bash
# Save connection credentials (prompts for password)
erpl-adt login --host sap.example.com --port 44300 --https --user DEVELOPER
# Search for classes matching a pattern
erpl-adt search ZCL_MY_* --type CLAS --max 20
# Read object metadata and source code
erpl-adt object read /sap/bc/adt/oo/classes/zcl_my_class
erpl-adt source read /sap/bc/adt/oo/classes/zcl_my_class/source/main
# Write source code (auto-locks, writes, unlocks)
erpl-adt source write /sap/bc/adt/oo/classes/zcl_my_class/source/main --file impl.abap
# Write and activate in one step
erpl-adt source write /sap/bc/adt/oo/classes/zcl_my_class/source/main --file impl.abap --activate
# Activate an object by name
erpl-adt activate ZCL_MY_CLASS
# Run unit tests and ATC checks (by name or URI)
erpl-adt test ZCL_MY_CLASS
erpl-adt check ZCL_MY_CLASS --variant DEFAULT
# Create a transport request and release it
erpl-adt transport create --desc "Feature XYZ" --package ZPACKAGE
erpl-adt transport release NPLK900042
# Browse packages and data dictionary
erpl-adt package tree ZPACKAGE --type CLAS
erpl-adt ddic table SFLIGHT
erpl-adt ddic cds I_AIRLINE
# Check syntax
erpl-adt source check /sap/bc/adt/oo/classes/zcl_my_class/source/main
```
## Installation
The quickest way to run erpl-adt — no download needed:
```bash
uvx erpl-adt --help
```
Or install permanently:
```bash
pip install erpl-adt
```
Alternatively, download the binary for your platform from the [latest release](https://github.com/datazooDE/erpl-adt/releases/latest), or [build from source](#building-from-source).
| Platform | Architecture |
|----------|-------------|
| Linux | x86_64 |
| macOS | arm64, x86_64 |
| Windows | x64 |
## Full reference
```
erpl-adt - CLI for the SAP ADT REST API

Talks the same HTTP endpoints Eclipse ADT uses. No Eclipse, no RFC SDK, no JVM.
All commands accept --json for machine-readable output.

USAGE
  erpl-adt [global-flags] <command> [args] [flags]

SEARCH — Search for ABAP objects
  search <pattern>            Search for ABAP objects
    --type <type>             Object type: CLAS, PROG, TABL, INTF, FUGR
    --max <n>                 Maximum number of results

OBJECT — Read, create, delete, lock/unlock ABAP objects
  create                      Create an ABAP object
    --type <type>             Object type (e.g., CLAS/OC, PROG/P) (required)
    --name <name>             Object name (required)
    --package <pkg>           Target package (required)
    --description <text>      Object description
    --transport <id>          Transport request number
  delete <uri>                Delete an ABAP object
    --handle <handle>         Lock handle (skips auto-lock if provided)
    --transport <id>          Transport request number
  lock <uri>                  Lock an object for editing
    --session-file <path>     Save session for later unlock
  read <uri>                  Read object structure
  unlock <uri>                Unlock an object
    --handle <handle>         Lock handle (required)
    --session-file <path>     Session file for stateful workflow

SOURCE — Read, write, and check ABAP source code
  check <uri>                 Check syntax
  read <uri>                  Read source code
    --version <version>       active or inactive (default: active)
  write <uri>                 Write source code
    --file <path>             Path to local source file (required)
    --handle <handle>         Lock handle (skips auto-lock if provided)
    --transport <id>          Transport request number
    --session-file <path>     Session file for stateful workflow
    --activate                Activate the object after writing

ACTIVATE — Activate inactive ABAP objects
  activate <name-or-uri>      Activate an ABAP object

TEST — Run ABAP Unit tests
  test <name-or-uri>          Run ABAP unit tests

CHECK — Run ATC quality checks
  check <name-or-uri>         Run ATC checks
    --variant <name>          ATC variant (default: DEFAULT)

TRANSPORT — List, create, and release transports
  create                      Create a transport
    --desc <text>             Transport description (required)
    --package <pkg>           Target package (required)
  list                        List transports
    --user <user>             Filter by user (default: DEVELOPER)
  release <number>            Release a transport

DATA DICTIONARY — Tables and CDS views
  cds <name>                  Get CDS source
  table <name>                Get table definition

PACKAGE — List contents and check package existence
  exists <name>               Check if package exists
  list <name>                 List package contents
  tree <name>                 List package contents recursively
    --type <type>             Filter by object type: CLAS, PROG, TABL, INTF, FUGR
    --max-depth <n>           Maximum recursion depth (default: 50)

DISCOVER — Discover available ADT services
  services                    Discover ADT services

CREDENTIALS
  login                       Save connection credentials
  logout                      Remove saved credentials

GLOBAL FLAGS
  --host <host>               SAP hostname (default: localhost)
  --port <port>               SAP port (default: 50000)
  --user <user>               SAP username (default: DEVELOPER)
  --password <pass>           SAP password
  --password-env <var>        Read password from env var (default: SAP_PASSWORD)
  --client <num>              SAP client (default: 001)
  --https                     Use HTTPS
  --insecure                  Skip TLS verification (with --https)
  --json                      JSON output
  --timeout <sec>             Request timeout in seconds
  --session-file <path>       Persist session for lock/write/unlock workflows
  --color                     Force colored output
  --no-color                  Disable colored output
  -v                          Verbose logging (INFO level)
  -vv                         Debug logging (DEBUG level)

Credential priority: flags > --password-env > .adt.creds (via login) > SAP_PASSWORD env var

EXIT CODES
  0  Success            1  Connection/auth     2  Not found
  3  Clone error        4  Pull error          5  Activation error
  6  Lock conflict      7  Test failure        8  ATC check error
  9  Transport error   10  Timeout            99  Internal error
```
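The credential priority above can be sketched as a first-match resolver. The helper below is illustrative only (the function name and parameters are hypothetical, not part of erpl-adt):

```python
import os

def resolve_password(flag=None, password_env=None, creds_file_password=None):
    """Mirror erpl-adt's documented lookup order:
    --password flag > --password-env variable > .adt.creds > SAP_PASSWORD."""
    if flag:
        return flag
    if password_env and os.environ.get(password_env):
        return os.environ[password_env]
    if creds_file_password:
        return creds_file_password
    return os.environ.get("SAP_PASSWORD")

# The explicit flag wins even when SAP_PASSWORD is set.
os.environ["SAP_PASSWORD"] = "env-secret"
print(resolve_password(flag="flag-secret"))  # flag-secret
print(resolve_password())                    # env-secret
```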
## MCP server
erpl-adt includes a built-in MCP server (Model Context Protocol, version 2024-11-05) that exposes all ADT operations as tools over JSON-RPC 2.0 on stdin/stdout. This lets AI agents search, read, write, test, and manage ABAP code directly.
```bash
erpl-adt mcp --host sap.example.com --port 44300 --https
```
Configure it in your MCP client (e.g., Claude Desktop, Claude Code):
```json
{
  "mcpServers": {
    "erpl-adt": {
      "command": "erpl-adt",
      "args": ["mcp", "--host", "sap.example.com", "--port", "44300", "--https"],
      "env": {
        "SAP_PASSWORD": "your_password"
      }
    }
  }
}
```
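Under the hood, each tool interaction is a JSON-RPC 2.0 message on stdin/stdout. A minimal sketch of the request framing (`tools/list` is a standard MCP method; the helper itself is illustrative, not erpl-adt code):

```python
import json

def jsonrpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request line as an MCP server reads it from stdin."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Ask the server which ADT tools it exposes.
print(jsonrpc_request("tools/list"))  # {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```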
## Deploy workflow
erpl-adt also includes the original `deploy` workflow for automated abapGit package deployment via YAML configuration:
```bash
cat > config.yaml <<EOF
connection:
  host: localhost
  port: 50000
  use_https: false
  client: "001"
  user: DEVELOPER
  password_env: SAP_PASSWORD
repos:
  - name: flight
    url: https://github.com/SAP-samples/abap-platform-refscen-flight.git
    branch: refs/heads/main
    package: /DMO/FLIGHT
    activate: true
EOF
export SAP_PASSWORD=your_password
erpl-adt deploy -c config.yaml
```
The deploy workflow is an idempotent state machine: `discover → create package → clone → pull → activate`. Each step checks preconditions and skips if already satisfied. Re-running is safe. Supports multi-repo deployments with `depends_on` for topological ordering.
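The `depends_on` ordering amounts to a topological sort over the repo list. A minimal sketch of that step (the helper and repo names are illustrative, not erpl-adt internals):

```python
def deploy_order(repos):
    """Order repos so each one's depends_on entries deploy first (Kahn's algorithm)."""
    deps = {name: set(d) for name, d in repos.items()}
    order = []
    while deps:
        # Repos with no unsatisfied dependencies are ready to deploy.
        ready = sorted(n for n, d in deps.items() if not d)
        if not ready:
            raise ValueError("dependency cycle among repos")
        for n in ready:
            order.append(n)
            del deps[n]
        for d in deps.values():
            d.difference_update(ready)
    return order

# 'app' depends on 'flight', which depends on 'base'.
print(deploy_order({"app": ["flight"], "flight": ["base"], "base": []}))
# ['base', 'flight', 'app']
```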
## Building from source
```bash
git clone --recurse-submodules https://github.com/datazooDE/erpl-adt.git
cd erpl-adt
make release
```
Requires CMake 3.21+, Ninja, and a C++17 compiler (GCC 13+, Apple Clang 15+, or MSVC 17+). vcpkg is included as a git submodule.
To run the tests:
```bash
make test # Unit tests (offline, no SAP system needed)
make test-integration-py # Integration tests (requires SAP system)
```
## Docker
```bash
docker build -t erpl-adt .
docker run --rm -v $(pwd)/config.yaml:/config.yaml \
  -e SAP_PASSWORD=your_password \
  erpl-adt deploy -c /config.yaml
```
Or use Docker Compose for end-to-end provisioning with an SAP ABAP Cloud Developer Trial:
```bash
docker compose up
```
## License
[Apache License 2.0](LICENSE) — Copyright 2026 Datazoo GmbH
| text/markdown | DataZoo GmbH | null | null | null | Apache-2.0 | null | [] | [] | https://github.com/DataZooDE/erpl-adt | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:39:57.074894 | erpl_adt-2026.2.21-py3-none-win_amd64.whl | 3,522,826 | ce/f4/6062b02459bc09eaa894c79aa2677370e2f6295571514c628dc34210fd77/erpl_adt-2026.2.21-py3-none-win_amd64.whl | py3 | bdist_wheel | null | false | d54069a248f625759007a3f7a3391631 | 1cce935e40b47fced47b7d5ea189902ff661f3507b898118b4a4c2a05f4947d9 | cef46062b02459bc09eaa894c79aa2677370e2f6295571514c628dc34210fd77 | null | [] | 253 |
2.3 | dycw-restic | 0.7.11 | Library to operate `restic` | # `restic`
Library to operate `restic`
| text/markdown | Derek Wan | Derek Wan <d.wan@icloud.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.3.1",
"dycw-utilities>=0.192.0",
"pydantic-settings>=2.13.1",
"python-dotenv>=1.2.1",
"click==8.3.1; extra == \"cli\"",
"dycw-utilities==0.192.0; extra == \"cli\"",
"pydantic-settings==2.13.1; extra == \"cli\"",
"python-dotenv==1.2.1; extra == \"cli\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T06:39:26.317446 | dycw_restic-0.7.11-py3-none-any.whl | 21,558 | af/ce/5046cfe0cb22120dcee68533b227effa0a2348343e2e17959bb7c9246a75/dycw_restic-0.7.11-py3-none-any.whl | py3 | bdist_wheel | null | false | 80fdd1f7b2feabac84c110f252c477de | 3efcbf8fee1d3151931048ab6210d9943fd71507f392985826a6c9fcba5d66e6 | afce5046cfe0cb22120dcee68533b227effa0a2348343e2e17959bb7c9246a75 | null | [] | 98 |
2.4 | longtrainer | 1.0.1 | Production-ready RAG framework built on LangChain — multi-tenant chatbots with streaming, tool calling, agent mode, FAISS vector search, and persistent MongoDB memory | <p align="center">
<img src="https://github.com/ENDEVSOLS/Long-Trainer/blob/master/assets/longtrainer-logo.png?raw=true" alt="LongTrainer Logo">
</p>
<h1 align="center">LongTrainer 1.0.1 — Production-Ready RAG Framework</h1>
<p align="center">
<strong>Multi-tenant bots, streaming, tools, and persistent memory — all batteries included.</strong>
</p>
<p align="center">
<a href="https://pypi.org/project/longtrainer/">
<img src="https://img.shields.io/pypi/v/longtrainer" alt="PyPI Version">
</a>
<a href="https://pepy.tech/project/longtrainer">
<img src="https://static.pepy.tech/badge/longtrainer" alt="Total Downloads">
</a>
<a href="https://pepy.tech/project/longtrainer">
<img src="https://static.pepy.tech/badge/longtrainer/month" alt="Monthly Downloads">
</a>
<a href="https://github.com/ENDEVSOLS/Long-Trainer/stargazers">
<img src="https://img.shields.io/github/stars/ENDEVSOLS/Long-Trainer?style=flat" alt="GitHub Stars">
</a>
<a href="https://github.com/ENDEVSOLS/Long-Trainer/actions/workflows/ci.yml">
<img src="https://github.com/ENDEVSOLS/Long-Trainer/actions/workflows/ci.yml/badge.svg" alt="CI">
</a>
<img src="https://img.shields.io/pypi/pyversions/longtrainer" alt="Python Versions">
<a href="https://github.com/ENDEVSOLS/Long-Trainer/blob/master/LICENSE">
<img src="https://img.shields.io/github/license/ENDEVSOLS/Long-Trainer" alt="License">
</a>
<a href="https://opencollective.com/longtrainer">
<img src="https://img.shields.io/opencollective/all/longtrainer?label=sponsors" alt="Open Collective">
</a>
</p>
<p align="center">
<a href="https://endevsols.github.io/Long-Trainer/">Documentation</a> •
<a href="#quick-start-">Quick Start</a> •
<a href="#features-">Features</a> •
<a href="#migration-from-034">Migration from 0.3.4</a> •
<a href="#support-the-project-">Sponsor</a>
</p>
---
## What is LongTrainer?
LongTrainer is a **production-ready RAG framework** that turns your documents into intelligent, multi-tenant chatbots — with **5 lines of code**.
Built on top of LangChain, LongTrainer handles the hard parts that every production RAG system needs: **multi-bot isolation, persistent MongoDB memory, FAISS vector search, streaming responses, custom tool calling, chat encryption, and vision support** — so you don't have to wire them together yourself.
### Why LongTrainer over raw LangChain / LlamaIndex?
| Problem | LangChain / LlamaIndex | LongTrainer |
|---|---|---|
| Multi-bot management | DIY — manage state per bot | Built-in: `initialize_bot_id()` → isolated bots |
| Persistent chat memory | Wire MongoDB/Redis yourself | Built-in: MongoDB-backed, encrypted, restorable |
| Document ingestion | Assemble loaders + splitters | One-liner: `add_document_from_path(path, bot_id)` |
| Streaming responses | Implement `astream` yourself | `get_response(stream=True)` yields chunks |
| Custom tool calling | Define tools, build agent | `add_tool(my_tool)` — plug and play |
| Web search augmentation | Find and integrate search | Built-in toggle: `web_search=True` |
| Vision chat | Complex multi-modal setup | `get_vision_response()` — pass images |
| Self-improving from chats | Not a concept | `train_chats()` feeds Q&A back into KB |
| Encryption at rest | DIY | `encrypt_chats=True` — Fernet out of the box |
---
## Installation
```bash
pip install longtrainer
```
**With agent/tool-calling support (optional):**
```bash
pip install longtrainer[agent]
```
### System Dependencies
<details>
<summary><strong>Linux (Ubuntu/Debian)</strong></summary>
```bash
sudo apt install libmagic-dev poppler-utils tesseract-ocr qpdf libreoffice pandoc
```
</details>
<details>
<summary><strong>macOS</strong></summary>
```bash
brew install libmagic poppler tesseract qpdf libreoffice pandoc
```
</details>
---
## Quick Start 🚀
### RAG Mode (Default) — Simple Document Q&A
```python
from longtrainer.trainer import LongTrainer
import os
os.environ["OPENAI_API_KEY"] = "sk-..."
# Initialize
trainer = LongTrainer(mongo_endpoint="mongodb://localhost:27017/")
bot_id = trainer.initialize_bot_id()
# Add documents (PDF, DOCX, CSV, HTML, MD, TXT, URLs, YouTube, Wikipedia)
trainer.add_document_from_path("path/to/your/data.pdf", bot_id)
# Create bot and start chatting
trainer.create_bot(bot_id)
chat_id = trainer.new_chat(bot_id)
# Get response
answer, sources = trainer.get_response("What is this document about?", bot_id, chat_id)
print(answer)
```
### Streaming Responses
```python
# Stream tokens in real-time
for chunk in trainer.get_response("Summarize the key points", bot_id, chat_id, stream=True):
    print(chunk, end="", flush=True)
```
### Async Streaming
```python
async for chunk in trainer.aget_response("Explain the methodology", bot_id, chat_id):
    print(chunk, end="", flush=True)
```
### Agent Mode — With Custom Tools
```python
from longtrainer.tools import web_search
from langchain_core.tools import tool
# Add built-in web search tool
trainer.add_tool(web_search, bot_id)
# Add your own custom tool
@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    # Demo only: eval() is unsafe on untrusted input; use a real parser in production.
    return str(eval(expression))
trainer.add_tool(calculate, bot_id)
# Create bot in agent mode
trainer.create_bot(bot_id, agent_mode=True)
chat_id = trainer.new_chat(bot_id)
response, _ = trainer.get_response("What is 42 * 17?", bot_id, chat_id)
print(response)
```
### Vision Chat
```python
vision_id = trainer.new_vision_chat(bot_id)
response, sources = trainer.get_vision_response(
    "Describe what you see in this image",
    image_paths=["photo.jpg"],
    bot_id=bot_id,
    vision_chat_id=vision_id,
)
print(response)
```
### Per-Bot Customization
```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
# Each bot can have its own LLM, embeddings, and retrieval config
trainer.create_bot(
    bot_id,
    llm=ChatOpenAI(model="gpt-4o-mini", temperature=0.2),
    embedding_model=OpenAIEmbeddings(model="text-embedding-3-small"),
    num_k=5,  # retrieve 5 docs per query
    prompt_template="You are a helpful legal assistant. {context}",
    agent_mode=True,  # enable tool calling
    tools=[web_search],
)
```
---
## Features ✨
### Core
- ✅ **Dual Mode:** RAG (LCEL chain) for simple Q&A, Agent (LangGraph) for tool calling
- ✅ **Streaming Responses:** Sync and async streaming out of the box
- ✅ **Custom Tool Calling:** Add any LangChain `@tool` — web search, document reader, or your own
- ✅ **Multi-Bot Management:** Isolated bots with independent sessions, data, and configs
- ✅ **Persistent Memory:** MongoDB-backed chat history, fully restorable
- ✅ **Chat Encryption:** Fernet encryption for stored conversations
### Document Ingestion
- ✅ **PDF, DOCX, CSV, HTML, Markdown, TXT** — auto-detected by extension
- ✅ **URLs, YouTube, Wikipedia** — via `add_document_from_link()` / `add_document_from_query()`
- ✅ **Any format** via `use_unstructured=True` (PowerPoint, images, etc.)
### RAG Pipeline
- ✅ **FAISS Vector Store** — fast similarity search with batched indexing
- ✅ **Multi-Query Ensemble Retrieval** — generates alternative queries for better recall
- ✅ **Self-Improving:** `train_chats()` feeds past Q&A back into the knowledge base
### Customization
- ✅ **Per-bot LLM** — use different models for different bots
- ✅ **Per-bot Embeddings** — custom embedding models per bot
- ✅ **Per-bot Retrieval Config** — custom `num_k`, `chunk_size`, `chunk_overlap`
- ✅ **Custom Prompt Templates** — full control over system prompts
- ✅ **Vision Chat** — GPT-4 Vision support with image understanding
### Works with All LangChain-Compatible LLMs
- ✅ OpenAI (default)
- ✅ Anthropic
- ✅ Google VertexAI / Gemini
- ✅ AWS Bedrock
- ✅ HuggingFace
- ✅ Groq
- ✅ Together AI
- ✅ Ollama (local models)
- ✅ Any `BaseChatModel` implementation
---
## API Reference
### `LongTrainer` — Main Class
```python
trainer = LongTrainer(
    mongo_endpoint="mongodb://localhost:27017/",
    llm=None,                # default: ChatOpenAI(model="gpt-4o-2024-08-06")
    embedding_model=None,    # default: OpenAIEmbeddings()
    prompt_template=None,    # custom system prompt
    max_token_limit=32000,   # conversation memory limit
    num_k=3,                 # docs to retrieve per query
    chunk_size=2048,         # text splitter chunk size
    chunk_overlap=200,       # text splitter overlap
    ensemble=False,          # enable multi-query ensemble retrieval
    encrypt_chats=False,     # enable Fernet encryption
    encryption_key=None,     # custom encryption key (auto-generated if None)
)
```
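`chunk_size` and `chunk_overlap` control how documents are sliced before FAISS indexing. A toy character-based splitter (LongTrainer's real splitter comes from `langchain-text-splitters`; this sketch only illustrates the overlap arithmetic):

```python
def split_text(text, chunk_size=2048, chunk_overlap=200):
    """Slice text into overlapping chunks: each chunk starts chunk_size - chunk_overlap
    characters after the previous one, so neighbours share chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("a" * 5000, chunk_size=2048, chunk_overlap=200)
print(len(chunks), [len(c) for c in chunks])  # 3 [2048, 2048, 1304]
```

Larger overlap improves recall at chunk boundaries at the cost of index size and query-time redundancy.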
### Key Methods
| Method | Description |
|---|---|
| `initialize_bot_id()` | Create a new bot, returns `bot_id` |
| `create_bot(bot_id, ...)` | Build the bot from loaded documents |
| `load_bot(bot_id)` | Restore an existing bot from MongoDB + FAISS |
| `new_chat(bot_id)` | Start a new chat session, returns `chat_id` |
| `get_response(query, bot_id, chat_id, stream=False)` | Get response (or stream) |
| `aget_response(query, bot_id, chat_id)` | Async streaming response |
| `add_document_from_path(path, bot_id)` | Ingest a file |
| `add_document_from_link(links, bot_id)` | Ingest URLs / YouTube links |
| `add_tool(tool, bot_id)` | Register a tool for a bot |
| `remove_tool(tool_name, bot_id)` | Remove a tool |
| `list_tools(bot_id)` | List registered tools |
| `train_chats(bot_id)` | Self-improve from chat history |
| `new_vision_chat(bot_id)` | Start a vision chat session |
| `get_vision_response(query, images, bot_id, vision_id)` | Vision response |
---
## Migration from 0.3.4
LongTrainer 1.0.0 is a major upgrade with breaking changes:
| 0.3.4 | 1.0.0 |
|---|---|
| `ConversationalRetrievalChain` | LCEL chain (`RAGBot`) or LangGraph agent (`AgentBot`) |
| `requirements.txt` + `setup.py` | `pyproject.toml` (UV/pip compatible) |
| No streaming | `stream=True` or `aget_response()` |
| No tool calling | `add_tool()` + `agent_mode=True` |
| `langchain.memory` | `langchain_core.chat_history` |
| Fixed LLM for all bots | Per-bot LLM, embeddings, and config |
**Upgrade path:**
```bash
pip install --upgrade longtrainer
```
The core API (`initialize_bot_id`, `create_bot`, `new_chat`, `get_response`) remains the same — existing code should work with minimal changes. The main difference is `get_response()` now returns `(answer, sources)` instead of `(answer, sources, web_sources)`.
---
## Support the Project 💖
LongTrainer is free and open-source. If it's useful to you, consider sponsoring its development:
<p align="center">
<a href="https://opencollective.com/longtrainer">
<img src="https://opencollective.com/longtrainer/donate/button@2x.png?color=blue" width="300" alt="Donate to LongTrainer">
</a>
</p>
Your sponsorship helps fund:
- 🚀 New features (CLI, API server, evaluation tools)
- 🐛 Bug fixes and maintenance
- 📖 Documentation and tutorials
- 🧪 CI/CD infrastructure
---
## Citation
```
@misc{longtrainer,
  author = {Endevsols},
  title = {LongTrainer: Production-Ready RAG Framework},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/ENDEVSOLS/Long-Trainer}},
}
```
## License
[MIT License](LICENSE)
## Contributing
We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
| text/markdown | null | Endevsols <technology@endevsols.com> | null | null | null | agent, ai, chatbot, conversational-ai, document-qa, embeddings, faiss, gpt, langchain, langgraph, llm, longtrainer, mongodb, multi-tenant, nlp, openai, rag, retrieval-augmented-generation, streaming, vector-database | [
"Development Status :: 5 - Production/Stable",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Text Processing :: Linguistic",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cryptography>=44.0",
"docx2txt>=0.8",
"duckduckgo-search>=7.0",
"faiss-cpu>=1.9.0",
"langchain-community>=0.3.14",
"langchain-core>=0.3.30",
"langchain-openai>=0.3.4",
"langchain-text-splitters>=0.3.0",
"langchain-unstructured>=0.2.0",
"langchain>=0.3.14",
"pandas>=2.2",
"pydantic>=2.9",
"pymongo>=4.10",
"pypdf>=5.1",
"python-docx>=1.1",
"python-magic>=0.4",
"pytube>=15.0",
"tiktoken>=0.8",
"unstructured[all-docs]>=0.18",
"wikipedia>=1.4",
"youtube-transcript-api>=1.0",
"langgraph>=0.3.10; extra == \"agent\"",
"fastapi>=0.115; extra == \"api\"",
"uvicorn>=0.32; extra == \"api\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.9; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ENDEVSOLS/Long-Trainer",
"Documentation, https://endevsols.github.io/Long-Trainer/",
"Repository, https://github.com/ENDEVSOLS/Long-Trainer",
"Changelog, https://github.com/ENDEVSOLS/Long-Trainer/blob/master/CHANGELOG.md",
"Bug Tracker, https://github.com/ENDEVSOLS/Long-Trainer/issues"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-21T06:39:20.313457 | longtrainer-1.0.1.tar.gz | 698,864 | 99/a8/7d6c116d81fee8612ce35b198fd225b58667397e99f34d11e47bdfc63f7a/longtrainer-1.0.1.tar.gz | source | sdist | null | false | 93c55bcea1c3164453afd84bb72b643d | ad68a41630b05828c77421a201b19a9b2ab954ef7d5b5a0dd90edff1c60182e1 | 99a87d6c116d81fee8612ce35b198fd225b58667397e99f34d11e47bdfc63f7a | MIT | [
"LICENSE"
] | 235 |
2.4 | mindroom | 2026.2.134 | A universal interface for AI agents with persistent memory, where every conversation has a home | # mindroom
[](https://pypi.org/project/mindroom/)
[](https://pypi.org/project/mindroom/)
[](https://github.com/mindroom-ai/mindroom/actions/workflows/pytest.yml)
[](https://github.com/mindroom-ai/mindroom/actions/workflows/build-mindroom.yml)
[](https://docs.mindroom.chat)
[](https://github.com/mindroom-ai/mindroom/blob/main/LICENSE)
[](https://pypi.org/project/mindroom/)
[](https://github.com/mindroom-ai/mindroom)
<img src="https://raw.githubusercontent.com/mindroom-ai/mindroom/main/frontend/public/logo-text.svg" alt="MindRoom Logo" align="right" width="150" />
**Your AI is trapped in apps. We set it free.**
AI agents that learn who you are shouldn't forget everything when you switch apps. MindRoom agents follow you everywhere—Slack, Telegram, Discord, WhatsApp—with persistent memory intact.
Deploy once on Matrix. Your agents now work in any chat platform via bridges. They can even visit your client's workspace or join your friend's group chat.
Self-host for complete control or use our encrypted service. Either way, your agents remember you and can collaborate across organizations.
## The Problem
Every AI app is a prison:
- ChatGPT knows your coding style... but can't join your team's Slack
- Claude understands your writing... but can't access your email
- GitHub Copilot helps with code... but can't see your project specs
- You teach each AI from scratch, over and over
Meanwhile, your human team collaborates across Slack, Discord, Telegram, and email daily. Why can't your AI?
## The Solution
MindRoom agents:
- **Live in Matrix** - A federated protocol like email
- **Work everywhere** - Via bridges to Slack, Telegram, Discord, WhatsApp, IRC, email
- **Remember everything** - Persistent memory across all platforms
- **Collaborate naturally** - Multiple agents working together in threads
- **Respect boundaries** - You control which agent sees what data
## Built on Proven Infrastructure
MindRoom leverages the Matrix protocol, a decade-old open standard with significant real-world adoption:
**Foundation**
- **10+ years** of development by the Matrix.org Foundation
- **€10M+** invested in protocol development
- **100+ developers** contributing to the core ecosystem
- **35+ million users** globally
**Enterprise Validation**
- **German Healthcare**: 150,000+ organizations using Ti-Messenger
- **French Government**: 5.5 million civil servants on Tchap
- **Military Adoption**: NATO, U.S. Space Force, and other defense organizations
- **GDPR Compliant**: Built for European privacy standards
**What This Means For You**
By building on Matrix, MindRoom inherits:
- Production-tested federation across organizations
- Military-grade E2E encryption (Olm/Megolm)
- Professional clients (Element, FluffyChat, Cinny)
- 50+ maintained bridges to other platforms
- Proven scale and reliability
This foundation allows MindRoom to focus entirely on agent orchestration and intelligence, rather than reimplementing communication infrastructure.
## See It In Action
```
Monday, in your Matrix room:
You: @assistant Remember our project uses Python 3.11 and FastAPI
Tuesday, in your team's Slack (via bridge):
Colleague: What Python version are we using?
You: @assistant can you help?
Assistant: [Joins from Matrix] We're using Python 3.11 with FastAPI
Wednesday, in client's Telegram (via bridge):
Client: Can your AI review our API spec?
You: @assistant please analyze this
Assistant: [Travels from your server] I'll review this against our FastAPI patterns...
```
One agent. Every platform. Continuous memory.
## The Magic Moment - Cross-Organization Collaboration
```
Thursday, your client asks in their Discord:
Client: Can our architect AI review this with your team?
You: Sure! @assistant please collaborate with them
Your Assistant: [Joins from your Matrix server]
Client's Architect AI: [Joins from their server]
Together: [They review architecture, sharing context from both organizations]
```
**Two AI agents from different companies collaborating.**
This is impossible with ChatGPT, Claude, or any other platform.
## But It Gets Better - Your Agents Work as a Team
```
Friday, planning next sprint:
You: @research @analyst @writer Create a competitive analysis report
Research: I'll gather data on our top 5 competitors...
Analyst: I'll identify strategic patterns and opportunities...
Writer: I'll compile everything into an executive summary...
[They work together, transparently, delivering a comprehensive report]
```
## Key Features
### 🧠 Dual Memory System
- **Agent Memory**: Each agent remembers conversations, preferences, and patterns across all platforms
- **Room Memory**: Contextual knowledge that stays within specific rooms (work projects, personal notes)
### 🤝 Multi-Agent Collaboration
```
You: @research @analyst @email Create weekly competitor analysis reports
Research: I'll gather competitor updates
Analyst: I'll identify strategic patterns
Email: I'll compile and send every Friday
[They work together, automatically, every week]
```
### 💬 Direct Messages (DMs)
- Agents respond naturally in 1:1 DMs without needing mentions
- Add more agents to existing DM rooms for collaborative private work
- Complete privacy separate from configured public rooms
### 🔐 Intelligent Trust Boundaries
- Route sensitive data to local Ollama models on your hardware
- Use GPT-5.2 for complex reasoning
- Send general queries to cost-effective cloud models
- You decide which AI sees what
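Such a trust boundary is essentially a routing table from data sensitivity to model endpoint. A hypothetical sketch (the tags and model names are illustrative, not MindRoom configuration):

```python
# Illustrative routing table: sensitivity tag -> model endpoint.
ROUTES = {
    "sensitive": "ollama/llama3",   # stays on your local hardware
    "complex": "gpt-5.2",           # strongest cloud reasoning
    "general": "claude-haiku",      # cost-effective cloud default
}

def route(query_tag: str) -> str:
    """Pick a model for a query, falling back to the general tier."""
    return ROUTES.get(query_tag, ROUTES["general"])

print(route("sensitive"))  # ollama/llama3
print(route("unknown"))    # claude-haiku
```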
### 🔌 100+ Integrations
Gmail, GitHub, Spotify, Home Assistant, Google Drive, Reddit, weather services, news APIs, financial data, and many more. Your agents can interact with all your tools.
### 📅 Automation & Scheduling
- Daily check-ins from your mindfulness agent
- Scheduled reports and summaries
- Event-driven workflows (conditional requests converted to polling schedules)
- Background tasks with human escalation
## Who This Is For
- **Teams using Matrix/Element** - Add AI to your existing secure infrastructure without migration
- **Open Source Projects** - Agents that remember all decisions and can visit contributor chats
- **Consultants & Agencies** - Your AI can securely join client workspaces
- **Privacy-Focused Organizations** - Self-host everything, own your data completely
- **Developers** - Build on our platform, contribute agents, extend functionality
## Quick Start
### Prerequisites
- Python 3.12+
- [uv](https://github.com/astral-sh/uv) for Python package management
- Node.js 20+ and [bun](https://bun.sh/) (optional, for web UI)
### Installation and starting
```bash
# Clone and install
git clone https://github.com/mindroom-ai/mindroom
cd mindroom
uv sync --all-extras
```
```bash
# Terminal 1: Start backend (agents + API)
uv run mindroom run
# Terminal 2: Start frontend (optional, for web UI)
./run-frontend.sh
```
The web interface will be available at http://localhost:3003
### First Steps
In any Matrix client (Element, FluffyChat, etc):
```
You: @mindroom_assistant What can you do?
Assistant: I can coordinate our team of specialized agents...
You: @mindroom_research @mindroom_analyst What are the latest AI breakthroughs?
[Agents collaborate to research and analyze]
```
## How Agents Work
### Agent Response Rules
Agents respond using Matrix thread relations to keep conversations organized.
If your client does not support thread UI, plain replies still work: MindRoom
resolves the reply chain and continues the correct conversation thread.
1. **Mentioned agents always respond** - Tag them to get their attention
2. **Single agent continues** - One agent in thread? It keeps responding
3. **Multiple agents collaborate** - They work together, not compete
4. **Smart routing** - System picks the best agent for new threads
### Available Commands
<!-- CODE:START -->
<!-- import sys -->
<!-- sys.path.insert(0, 'src') -->
<!-- from mindroom.commands import _get_command_entries -->
<!-- for entry in _get_command_entries(format_code=True): -->
<!-- print(entry) -->
<!-- CODE:END -->
<!-- OUTPUT:START -->
<!-- ⚠️ This content is auto-generated by `markdown-code-runner`. -->
- `!help [topic]` - Get help
- `!schedule <task>` - Schedule a task
- `!list_schedules` - List scheduled tasks
- `!cancel_schedule <id>` - Cancel a scheduled task
- `!edit_schedule <id> <task>` - Edit an existing scheduled task
- `!widget [url]` - Add configuration widget
- `!config <operation>` - Manage configuration
- `!hi` - Show welcome message
- `!skill <name> [args]` - Run a skill by name
<!-- OUTPUT:END -->
## Note for Self-Hosters
This repository contains everything you need to self-host MindRoom. The `saas-platform/` directory contains infrastructure and code specific to running MindRoom as a hosted service and can be safely ignored by self-hosters.
## Configuration
### Basic Setup
1. Create `config.yaml` (for example):
```yaml
agents:
  assistant:
    display_name: Assistant
    role: A helpful AI assistant
    model: default
    rooms: [lobby]
models:
  default:
    provider: anthropic
    id: claude-sonnet-4-5-latest
mindroom_user:
  username: mindroom_user  # Set this before first run; username is immutable after bootstrap
  display_name: MindRoomUser
defaults:
  markdown: true
  compress_tool_results: true        # Compress tool results in history to save context
  enable_session_summaries: false    # AI summaries of older conversation segments (costs extra LLM call)
  max_tool_calls_from_history: null  # Limit tool call messages replayed from history (null = no limit)
  num_history_runs: null             # Number of prior runs to include (null = all)
```
2. Configure your Matrix homeserver and API keys (optional, defaults shown):
```bash
export MATRIX_HOMESERVER=https://your-matrix.server
export ANTHROPIC_API_KEY=your-key-here
# Optional: protect dashboard API endpoints (recommended for non-localhost)
# export MINDROOM_API_KEY=your-secret-key
# Optional: use a non-default config location
# export MINDROOM_CONFIG_PATH=/path/to/config.yaml
```
### Optional Advanced Configuration
```yaml
knowledge_bases:
  engineering_docs:
    path: ./knowledge_docs
    watch: true
agents:
  assistant:
    display_name: Assistant
    role: A helpful AI assistant
    model: default
    rooms: [lobby]
    knowledge_bases: [engineering_docs]
    # Per-agent overrides for history/context (override defaults above):
    # compress_tool_results: false
    # enable_session_summaries: true
    # max_tool_calls_from_history: 5
    # num_history_runs: 10
voice:
  enabled: true
  stt:
    provider: openai
    model: whisper-1
mindroom_user:
  username: mindroom_user  # Set this before first run; username is immutable after bootstrap
  display_name: MindRoomUser
authorization:
  global_users: ["@alice:example.com"]
  room_permissions:
    "!exampleRoomId:example.com": ["@bob:example.com"]
  default_room_access: false
```
`mindroom_user.username` can only be set before the internal user account is created; after first startup it is immutable. To change only the visible name, edit `mindroom_user.display_name`.
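The `authorization` block resolves to a per-room allow check: global users pass everywhere, explicit room lists come next, and `default_room_access` covers rooms with no entry. A minimal sketch of that logic (illustrative, not MindRoom's implementation):

```python
def is_authorized(user, room, cfg):
    """Allow global users everywhere; otherwise fall back to per-room lists,
    then to default_room_access for rooms with no explicit entry."""
    if user in cfg.get("global_users", []):
        return True
    perms = cfg.get("room_permissions", {})
    if room in perms:
        return user in perms[room]
    return cfg.get("default_room_access", False)

cfg = {
    "global_users": ["@alice:example.com"],
    "room_permissions": {"!exampleRoomId:example.com": ["@bob:example.com"]},
    "default_room_access": False,
}
print(is_authorized("@alice:example.com", "!other:example.com", cfg))  # True
print(is_authorized("@bob:example.com", "!other:example.com", cfg))    # False
```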
## Deployment Options
### 🏠 Self-Hosted
Complete control on your infrastructure:
```bash
# Using your existing Matrix server
MATRIX_HOMESERVER=https://your-matrix.server uv run mindroom run
# Or let MindRoom handle everything locally
uv run mindroom run
```
### ☁️ Our Hosted Service (Coming Soon)
Zero setup, enterprise security:
- End-to-end encrypted (we can't read your data)
- Automatic updates and scaling
- 99.9% uptime SLA
- Start free, scale as needed
### 🔀 Hybrid
Mix and match:
- Sensitive rooms on your server
- General rooms on our cloud
- Agents collaborate seamlessly across both
## Architecture
### Technical Stack
- **Matrix**: Any homeserver (Synapse, Conduit, Dendrite, etc.)
- **Agents**: Python with matrix-nio
- **AI Models**: OpenAI, Anthropic, Ollama, or any provider
- **Memory**: Mem0 + ChromaDB vector storage (persistent on disk)
- **UI**: Web widget + any Matrix client
## Philosophy
We believe AI should be:
1. **Persistent**: Your AI should remember and learn from every interaction
2. **Ubiquitous**: Available wherever you communicate
3. **Collaborative**: Multiple specialists working together
4. **Private**: You control where your data lives
5. **Natural**: Just chat—no complex interfaces
## Status
- ✅ **Production ready** with 1000+ commits
- ✅ **100+ integrations** working today
- ✅ **Multi-agent collaboration** with persistent memory
- ✅ **Federation** across organizations and platforms
- ✅ **Self-hosted & cloud** options available
- ✅ **Voice transcription** for Matrix voice messages
- ✅ **Text-to-speech tools** via OpenAI, Groq, ElevenLabs, and Cartesia
- 🚧 Mobile apps in development
- 🚧 Agent marketplace planned
## Contributing
We welcome contributions! See [CLAUDE.md](CLAUDE.md) for the current development workflow and quality checks.
MindRoom comes from the developer of 10+ successful open source projects with thousands of users, and represents 1000+ commits of production-ready code, not a weekend experiment.
## License
- **Repository (except `saas-platform/`)**: [Apache License 2.0](LICENSE)
- **SaaS Platform** (`saas-platform/`): [Business Source License 1.1](saas-platform/LICENSE) (converts to Apache 2.0 on 2030-02-06)
## Acknowledgments
Built with:
- [Matrix](https://matrix.org/) - The federated communication protocol
- [Agno](https://agno.dev/) - AI agent framework
- [matrix-nio](https://github.com/poljar/matrix-nio) - Python Matrix client
---
**mindroom** - AI that follows you everywhere, remembers everything, and stays under your control.
| text/markdown | mindroom team | null | null | null | null | null | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"agno[anthropic,google,ollama,openai]==2.4.7",
"anthropic>=0.70",
"cerebras-cloud-sdk>=1.46",
"chromadb>=1.0.15",
"cron-descriptor>=1.4.5",
"croniter>=6",
"diskcache>=5.6.3",
"fastapi[standard]>=0.116.1",
"google-genai",
"groq>=0.31",
"httpx>=0.27",
"humanize>=4.12.3",
"json5>=0.13",
"loguru>=0.7.3",
"markdown>=3.7",
"matrix-nio>=0.24",
"mcp[cli]>=1.12.4",
"mem0ai>=0.1.115",
"openai",
"pydantic>=2",
"pyjwt>=2.8",
"python-dotenv>=1",
"pyyaml>=6",
"rich",
"structlog>=24.1",
"tenacity>=9.1.2",
"typer>=0.16",
"uvicorn>=0.35",
"watchfiles>=1",
"agentql; extra == \"agentql\"",
"playwright; extra == \"agentql\"",
"apify-client>=1.12.2; platform_machine != \"aarch64\" and extra == \"apify\"",
"arxiv; extra == \"arxiv\"",
"pypdf; extra == \"arxiv\"",
"boto3>=1.40.8; extra == \"aws-lambda\"",
"boto3>=1.40.8; extra == \"aws-ses\"",
"baidusearch; extra == \"baidusearch\"",
"pycountry; extra == \"baidusearch\"",
"requests; extra == \"bitbucket\"",
"httpx>=0.27; extra == \"brandfetch\"",
"requests; extra == \"brightdata\"",
"browserbase; extra == \"browserbase\"",
"playwright; extra == \"browserbase\"",
"pytz; extra == \"cal-com\"",
"requests; extra == \"cal-com\"",
"cartesia; extra == \"cartesia\"",
"claude-agent-sdk>=0.1.35; extra == \"claude-agent\"",
"requests; extra == \"clickup\"",
"composio-agno; extra == \"composio\"",
"pydantic>=2; extra == \"config-manager\"",
"pyyaml>=6; extra == \"config-manager\"",
"atlassian-python-api; extra == \"confluence\"",
"crawl4ai>=0.7.3; extra == \"crawl4ai\"",
"duckdb; extra == \"csv\"",
"requests; extra == \"custom-api\"",
"openai; extra == \"dalle\"",
"daytona; extra == \"daytona\"",
"requests; extra == \"desi-vocal\"",
"requests; extra == \"discord\"",
"docker; extra == \"docker\"",
"duckdb; extra == \"duckdb\"",
"ddgs; extra == \"duckduckgo\"",
"e2b-code-interpreter; extra == \"e2b\"",
"elevenlabs; extra == \"eleven-labs\"",
"exa-py; extra == \"exa\"",
"fal-client; extra == \"fal\"",
"reportlab; extra == \"file-generation\"",
"requests; extra == \"financial-datasets-api\"",
"firecrawl-py>=3; extra == \"firecrawl\"",
"google-genai; extra == \"gemini\"",
"httpx>=0.27; extra == \"giphy\"",
"pygithub>=2.5; extra == \"github\"",
"google-api-python-client>=2.178; extra == \"gmail\"",
"google-auth-httplib2>=0.2; extra == \"gmail\"",
"google-auth-oauthlib>=1.2.2; extra == \"gmail\"",
"google-auth>=2.40.3; extra == \"gmail\"",
"google-cloud-bigquery; extra == \"google-bigquery\"",
"google-api-python-client>=2.178; extra == \"google-calendar\"",
"google-auth-httplib2>=0.2; extra == \"google-calendar\"",
"google-auth-oauthlib>=1.2.2; extra == \"google-calendar\"",
"google-auth>=2.40.3; extra == \"google-calendar\"",
"google-maps-places; extra == \"google-maps\"",
"googlemaps; extra == \"google-maps\"",
"google-api-python-client>=2.178; extra == \"google-sheets\"",
"google-auth-httplib2>=0.2; extra == \"google-sheets\"",
"google-auth-oauthlib>=1.2.2; extra == \"google-sheets\"",
"ddgs; extra == \"googlesearch\"",
"groq>=0.31; extra == \"groq\"",
"httpx>=0.27; extra == \"hackernews\"",
"httpx>=0.27; extra == \"homeassistant\"",
"httpx>=0.27; extra == \"jina\"",
"pydantic>=2; extra == \"jina\"",
"jira>=3.10.5; extra == \"jira\"",
"requests; extra == \"linear\"",
"linkup-sdk>=0.2.8; extra == \"linkup\"",
"lumaai; extra == \"lumalabs\"",
"mem0ai>=0.1.115; extra == \"mem0\"",
"requests; extra == \"modelslabs\"",
"moviepy; extra == \"moviepy-video-tools\"",
"neo4j; extra == \"neo4j\"",
"lxml-html-clean; extra == \"newspaper4k\"",
"newspaper4k; extra == \"newspaper4k\"",
"notion-client; extra == \"notion\"",
"openai; extra == \"openai\"",
"openbb; extra == \"openbb\"",
"requests; extra == \"openweather\"",
"oxylabs; extra == \"oxylabs\"",
"pandas>=2.2; extra == \"pandas\"",
"psycopg[binary]; extra == \"postgres\"",
"httpx>=0.27; extra == \"pubmed\"",
"praw; extra == \"reddit\"",
"redshift-connector; extra == \"redshift\"",
"replicate>=1.0.7; extra == \"replicate\"",
"resend; extra == \"resend\"",
"scrapegraph-py; extra == \"scrapegraph\"",
"pydantic>=2; extra == \"self-config\"",
"pyyaml>=6; extra == \"self-config\"",
"google-search-results>=2.4.2; extra == \"serpapi\"",
"requests; extra == \"serper\"",
"httpx>=0.27; extra == \"shopify\"",
"slack-sdk; extra == \"slack\"",
"spider-client; extra == \"spider\"",
"httpx>=0.27; extra == \"spotify\"",
"spotipy>=2.25.1; extra == \"spotify\"",
"sqlalchemy; extra == \"sql\"",
"supabase>=2.18.1; extra == \"supabase\"",
"tavily-python; extra == \"tavily\"",
"httpx>=0.27; extra == \"telegram\"",
"todoist-api-python; extra == \"todoist\"",
"trafilatura; extra == \"trafilatura\"",
"py-trello; extra == \"trello\"",
"twilio; extra == \"twilio\"",
"matplotlib; extra == \"visualization\"",
"webexpythonsdk; extra == \"webex\"",
"beautifulsoup4; extra == \"website\"",
"httpx>=0.27; extra == \"website\"",
"httpx>=0.27; extra == \"whatsapp\"",
"wikipedia; extra == \"wikipedia\"",
"tweepy; extra == \"x\"",
"yfinance; extra == \"yfinance\"",
"youtube-transcript-api; extra == \"youtube\"",
"requests; extra == \"zendesk\"",
"zep-cloud>=3.3; extra == \"zep\"",
"requests; extra == \"zoom\""
] | [] | [] | [] | [
"documentation, https://github.com/mindroom-ai/mindroom",
"homepage, https://github.com/mindroom-ai/mindroom",
"repository, https://github.com/mindroom-ai/mindroom"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:39:07.553360 | mindroom-2026.2.134.tar.gz | 1,964,697 | f3/73/845049c62337939d2afafa0e4bd1bcc5e707f7826665b3946314b4bb9ef0/mindroom-2026.2.134.tar.gz | source | sdist | null | false | 0cb3a3c71448b1ff5c38fbb52ab8ca09 | 338d6ea464c428504b3580f744e89412c226e270f5072b9186976f4dcc6bd182 | f373845049c62337939d2afafa0e4bd1bcc5e707f7826665b3946314b4bb9ef0 | Apache-2.0 | [
"LICENSE"
] | 232 |
2.4 | wxdata | 1.3 | A Python library that acts as a client to download, pre-process and post-process weather data. Friendly for users on VPN/PROXY connections. | # WxData
<img src="https://github.com/edrewitz/WxData/blob/main/icons/weather%20icon.jpg?raw=true" width="200" alt="Alt text" /> <img src="https://github.com/edrewitz/WxData/blob/1be590e9a16033974a592d8cf99f3cd521f95e0b/icons/python%20logo.png?raw=true" width="200" alt="Alt text" />
[](https://anaconda.org/conda-forge/wxdata) [](https://anaconda.org/conda-forge/wxdata) [](https://anaconda.org/conda-forge/wxdata)  [](https://anaconda.org/conda-forge/wxdata) [](https://anaconda.org/conda-forge/wxdata)
[](https://doi.org/10.5281/zenodo.17727621)
Anaconda Downloads
[](https://anaconda.org/conda-forge/wxdata)
PIP Downloads:

**(C) Eric J. Drewitz 2025-2026**
An open-source package that helps meteorologists and weather enthusiasts download, pre-process and post-process various types of weather data.
This package only retrieves open-source publicly available weather data.
This package provides the following extra functionality compared to existing packages for downloading weather data (see the numbered features after the installation and compatibility notes):
**How To Install**
Copy and paste either command into your terminal or anaconda prompt:
*Install via Anaconda*
`conda install wxdata`
*Install via pip*
`pip install wxdata`
**How To Update To The Latest Version**
Copy and paste either command into your terminal or anaconda prompt:
*Update via Anaconda*
***This is for users who initially installed WxData through Anaconda***
`conda update wxdata`
*Update via pip*
***This is for users who initially installed WxData through pip***
`pip install --upgrade wxdata`
***Important Compatibility Information***
*Python 3.14 is not compatible with the pip version of the eccodes library*
Methods to fix eccodes compatibility with Python 3.14:
1) Uninstall the pip version of WxData and install WxData via Anaconda
*Steps For Method 1*
1) pip uninstall wxdata
2) conda install wxdata
2) If the user cannot use Anaconda as a package manager, set up a new Python environment with the following specifications:
*Specifications*
Python >= 3.10 and <= 3.13 (3.10, 3.11, 3.12, and 3.13 are all compatible).
Then run `pip install wxdata` once the new environment is set up.
1) Friendly for users working on VPN/PROXY connections.
- Users input their PROXY IP address as a dictionary and pass it into the function to avoid SSL errors
- If the user is on a VPN/PROXY Connection the following is needed:
proxies=None ---> proxies={
'http':'http://url',
'https':'https://url'
}
[e.g. get_observed_sounding_data('nkx', proxies=proxies)]
<img src="https://github.com/edrewitz/WxData/blob/main/diagrams/proxy.png?raw=true" width="500" alt="Alt text" />
For more information on configuring proxies: https://requests.readthedocs.io/en/latest/user/advanced/#proxies
- Some data access functions work on VPN/PROXY connections without needing to define VPN/PROXY settings:
- METARs
- NOAA Storm Prediction Center/National Weather Service Products
- FEMS
- Data access methods that users need to define VPN/PROXY IP addresses if using a VPN/PROXY connection:
- Various Forecast Models
- Observed Sounding Data from University of Wyoming
- Real-Time Mesoscale Analysis
2) Converts GRIB variable keys into variable keys that are in plain language.
- (e.g. 'r2' ---> '2m_relative_humidity')
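The remapping is essentially a key-translation table. A hypothetical sketch of the idea follows; the mapping entries here are illustrative, not wxdata's full table:

```python
# Illustrative subset of a GRIB -> plain-language key table (hypothetical).
PLAIN_KEYS = {
    "r2": "2m_relative_humidity",
    "t2m": "2m_temperature",
    "u10": "10m_u_wind_component",
}

def remap_keys(dataset: dict) -> dict:
    """Rename GRIB variable keys, leaving unknown keys unchanged."""
    return {PLAIN_KEYS.get(k, k): v for k, v in dataset.items()}

print(remap_keys({"r2": [55.0, 60.0], "mystery": [1.0]}))
# -> {'2m_relative_humidity': [55.0, 60.0], 'mystery': [1.0]}
```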
3) Has a scanner that checks whether the data files on your PC are up to date with those on the data server.
- This is a safeguard to protect newer developers from getting temporary IP address bans from the various data servers.
- Improves performance by preventing repeated downloads of the same dataset.
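The scanner's core idea is a freshness comparison between the local file and the server's copy. A minimal, hypothetical sketch (wxdata's real check may use different metadata):

```python
import os
from datetime import datetime, timezone

def needs_download(local_path: str, server_mtime: datetime) -> bool:
    """Return True when no local copy exists or the server copy is newer."""
    if not os.path.exists(local_path):
        return True
    local_mtime = datetime.fromtimestamp(os.path.getmtime(local_path), tz=timezone.utc)
    return server_mtime > local_mtime

# A missing file always triggers a download:
print(needs_download("no_such_file.grib2", datetime.now(timezone.utc)))  # True
```

Skipping the download when this returns `False` is what avoids hammering the data servers.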
4) Preserves system memory via the following methods:
- Clears out old data files before each new data download.
- Optional setting `clear_recycle_bin=True` in all functions.
- When `clear_recycle_bin=True` the computer's recycle/trash bin is cleared with each run of the script using any WxData function.
- If a user wishes to keep their recycle bin contents, set `clear_recycle_bin=False`.
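Clearing stale data files before a new download can be sketched in a few lines. This is a hypothetical version of the idea; wxdata's own cleanup may differ, and it can additionally empty the recycle bin when `clear_recycle_bin=True`:

```python
import os

def clear_old_files(directory: str, suffixes=(".grib2", ".nc")) -> int:
    """Delete previously downloaded data files; return how many were removed."""
    removed = 0
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and name.endswith(suffixes):
            os.remove(path)
            removed += 1
    return removed
```

Only files with recognized data suffixes are touched, so notes or scripts stored alongside the data survive.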
## WxData Examples
*Regular Users*
1) [Downloading METAR Data](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/metars.ipynb)
2) [Downloading Observed Sounding Data](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/soundings.ipynb)
3) [Downloading the first 72 hours of the ECMWF IFS and ECMWF AIFS](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/ecmwf.ipynb)
4) [Downloading the GEFS members p01 and p02 for only Temperature](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/gefs.ipynb)
5) [Downloading the Real-Time Mesoscale Analysis (RTMA)](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/rtma.ipynb)
6) [Downloading the RAWS SIG Group Fuels Data and the NFDRS Forecast for the RAWS SIG Groups for the South Ops Geographic Area Coordination Center](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/fems.ipynb)
7) [Downloading the SPC Convective Outlook for CONUS](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/spc.ipynb)
8) [Downloading NWS Maximum Temperature Forecast for Hawaii](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/nws_hi.ipynb)
9) [Downloading the GFS0P25 then performing pixel and line queries on the data](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/GFS.ipynb)
10) [Downloading various datasets from the Applied Climate Information System (ACIS)](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/xmacis2.ipynb)
11) [Downloading AIGFS Data](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/aigfs.ipynb)
12) [Downloading AIGEFS Data](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/aigefs.ipynb)
13) [Downloading and plotting the Climate Prediction Center 6-10 Day Precipitation Outlook](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/cpc_precip_outlook.ipynb)
*Advanced Users*
1) [Using the `client` module to download the latest HadCRUT5 Analysis netCDF file and open this dataset in xarray](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/hadcrut5.ipynb)
2) [Downloading the GFS0P25 for temperature fields and using run_external_scripts() to post-process this GFS0P25 dataset in an external Python script](https://github.com/edrewitz/WxData-JupyterLab-Examples/blob/main/external_scripts.ipynb)
## WxData Documentation
***Global Forecast System (GFS)***
1. [GFS0P25](https://github.com/edrewitz/WxData/blob/main/Documentation/GFS0P25.md)
2. [GFS0P25 SECONDARY PARAMETERS](https://github.com/edrewitz/WxData/blob/main/Documentation/GFS0P25%20Secondary%20Parameters.md)
3. [GFS0P50](https://github.com/edrewitz/WxData/blob/main/Documentation/GEFS0P50.md)
***AI Global Forecast System (AIGFS)***
1. [AIGFS](https://github.com/edrewitz/WxData/blob/main/Documentation/aigfs.md)
***Hybrid Global Ensemble Forecast System (HGEFS)***
1. [HGEFS](https://github.com/edrewitz/WxData/blob/main/Documentation/hgefs.md#hybrid-global-ensemble-forecast-system-hgefs)
***Global Ensemble Forecast System (GEFS)***
1. [GEFS0P50](https://github.com/edrewitz/wxdata/blob/main/Documentation/GEFS0P50.md#global-ensemble-forecast-system-050-x-050-degree-gefs0p50)
2. [GEFS0P50 SECONDARY PARAMETERS](https://github.com/edrewitz/wxdata/blob/main/Documentation/GEFS0P50%20Secondary%20Parameters.md#global-ensemble-forecast-system-050-x-050-degree-secondary-parameters-gefs0p50-secondary-parameters)
3. [GEFS0P25](https://github.com/edrewitz/wxdata/blob/main/Documentation/GEFS0P25.md#global-ensemble-forecast-system-025-x-025-degree-gefs0p25)
***AI Global Ensemble Forecast System (AIGEFS)***
1. [AIGEFS Members (Pressure Parameters)](https://github.com/edrewitz/WxData/blob/main/Documentation/aigefs_pressure_members.md)
2. [AIGEFS Members (Surface Parameters)](https://github.com/edrewitz/WxData/blob/main/Documentation/aigefs_surface_members.md)
3. [AIGEFS Ensemble Mean & Ensemble Spread](https://github.com/edrewitz/WxData/blob/main/Documentation/aigefs_single.md)
***ECMWF Open Data***
1. [ECMWF IFS](https://github.com/edrewitz/WxData/blob/main/Documentation/ECMWF_IFS.md)
2. [ECMWF AIFS](https://github.com/edrewitz/WxData/blob/main/Documentation/ECMWF_AIFS.md)
3. [ECMWF High Resolution IFS](https://github.com/edrewitz/WxData/blob/main/Documentation/ECMWF_High_Res_IFS.md)
4. [ECMWF IFS Wave](https://github.com/edrewitz/WxData/blob/main/Documentation/ECMWF_IFS_Wave.md)
***Real-Time Mesoscale Analysis (RTMA)***
1. [RTMA](https://github.com/edrewitz/wxdata/blob/main/Documentation/rtma.md#real-time-mesoscale-analysis-rtma)
2. [RTMA Comparison](https://github.com/edrewitz/wxdata/blob/main/Documentation/rtma%20comparison.md#real-time-mesoscale-analysis-rtma-24-hour-comparison)
***NOAA Storm Prediction Center Outlooks/Climate Prediction Center Outlooks/National Weather Service Forecasts***
1. [Get NDFD Grids](https://github.com/edrewitz/wxdata/blob/main/Documentation/noaa.md#noaa-get-storm-prediction-center-outlooks-and-national-weather-service-forecasts-ndfd-grids)
2. [Climate Prediction Center Outlooks](https://github.com/edrewitz/WxData/blob/main/Documentation/cpc_outlooks.md#noaanws-climate-prediction-center-outlooks)
***METAR Observations***
1. [METAR Observations](https://github.com/edrewitz/wxdata/blob/main/Documentation/metars.md#metar-observations)
***FEMS RAWS Network***
1. [Get Single Station RAWS Data](https://github.com/edrewitz/wxdata/blob/main/Documentation/single_raws.md#fems-get-single-raws-station-data)
2. [Get Each SIG of RAWS Data for a Geographic Area Coordination Center](https://github.com/edrewitz/wxdata/blob/main/Documentation/raws%20sig.md#fems-get-raws-sig-data-for-a-geographic-area-coordination-center-region)
3. [Get NFDRS Forecast Data for Each SIG for a Geographic Area Coordination Center](https://github.com/edrewitz/wxdata/blob/main/Documentation/nfdrs%20forecast.md#fems-get-nfdrs-forecast-data-for-a-raws-sig-for-a-geographic-area-coordination-center-region)
***Observed Atmospheric Soundings***
1. [University Of Wyoming Soundings](https://github.com/edrewitz/wxdata/blob/main/Documentation/wyoming_soundings.md)
***GFS Post-Processing***
1. [Primary GFS Post-Processing](https://github.com/edrewitz/WxData/blob/main/Documentation/Primary%20GFS%20Post%20Processing.md)
2. [Secondary GFS Post-Processing](https://github.com/edrewitz/WxData/blob/main/Documentation/Secondary%20GFS%20Post%20Processing.md)
***AIGFS Post-Processing***
1. [AIGFS Post-Processing](https://github.com/edrewitz/WxData/blob/main/Documentation/aigfs_post_processing.md)
***GEFS Post-Processing***
1. [Primary GEFS Post-Processing](https://github.com/edrewitz/WxData/blob/main/Documentation/Primary%20GEFS%20Post-Processing.md)
2. [Secondary GEFS Post-Processing](https://github.com/edrewitz/WxData/blob/main/Documentation/Secondary%20GEFS%20Post%20Processing.md)
***AIGEFS Post-Processing***
1. [AIGEFS Members Post-Processing](https://github.com/edrewitz/WxData/blob/main/Documentation/aigefs_members_post_processing.md)
2. [AIGEFS Single Post-Processing](https://github.com/edrewitz/WxData/blob/main/Documentation/aigefs_single_post_processing.md)
***HGEFS Post-Processing***
1. [HGEFS Post-Processing](https://github.com/edrewitz/WxData/blob/main/Documentation/hgefs_post_processing.md#hybrid-global-ensemble-forecast-system-hgefs-post-processing)
***ECMWF Post-Processing***
1. [ECMWF IFS and ECMWF High Resolution IFS](https://github.com/edrewitz/WxData/blob/main/Documentation/ECMWF%20IFS%20Post%20Processing.md)
2. [ECMWF AIFS](https://github.com/edrewitz/WxData/blob/main/Documentation/ECMWF%20AIFS%20Post%20Processing.md)
3. [ECMWF IFS Wave](https://github.com/edrewitz/WxData/blob/main/Documentation/ECMWF%20IFS%20Wave%20Post%20Processing.md)
***Real-Time Mesoscale Analysis Post-Processing***
1. [RTMA](https://github.com/edrewitz/WxData/blob/main/Documentation/RTMA%20Post%20Processing.md)
***xmACIS2 Climate Data***
1. [xmACIS2 Client](https://github.com/edrewitz/WxData/blob/main/Documentation/xmacis2_client.md)
***Custom Gridded Data***
1. [Gridded Data Client](https://github.com/edrewitz/WxData/blob/main/Documentation/get_gridded_data.md#get-gridded-data)
***Custom CSV Data***
1. [CSV Data Client](https://github.com/edrewitz/WxData/blob/main/Documentation/get_csv_data.md#get-csv-data)
***Cyclic Points For Hemispheric Plots***
1. [Cyclic Points](https://github.com/edrewitz/wxdata/blob/main/Documentation/cyclic_point.md#using-wxdata-to-add-cyclic-points-for-hemispheric-plots)
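Adding a cyclic point duplicates the first longitude column at the end of the array so the seam at the prime meridian or dateline is closed. A minimal sketch of the idea on a plain list (cartopy performs the real operation on arrays):

```python
def add_cyclic(row):
    """Append the first value so a hemispheric plot wraps without a gap."""
    return row + row[:1]

print(add_cyclic([1, 2, 3]))  # -> [1, 2, 3, 1]
```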
***Shifting Longitude From 0 to 360 --> -180 to 180***
1. [shift_longitude](https://github.com/edrewitz/WxData/blob/main/Documentation/shift_longitude.md)
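The 0-360 to -180-180 conversion is a one-line modular shift. A minimal sketch of the standard formula (not wxdata's exact implementation; note that 180 maps to -180, both of which denote the same meridian):

```python
def shift_lon(lon: float) -> float:
    """Map a longitude from the 0..360 convention to -180..180."""
    return (lon + 180.0) % 360.0 - 180.0

print([shift_lon(x) for x in (0.0, 90.0, 270.0, 359.0)])
# -> [0.0, 90.0, -90.0, -1.0]
```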
***Pixel Query***
1. [pixel_query](https://github.com/edrewitz/WxData/blob/main/Documentation/pixel_query.md)
***Line Query***
1. [line_query](https://github.com/edrewitz/WxData/blob/main/Documentation/line_query.md)
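Both pixel and line queries reduce to nearest-index lookups on a regular grid. A hypothetical sketch of the pixel case (wxdata's `pixel_query` operates on xarray objects; this only illustrates the idea):

```python
def nearest_index(coords, value):
    """Index of the grid coordinate closest to `value`."""
    return min(range(len(coords)), key=lambda i: abs(coords[i] - value))

# Toy regular grid: 4 latitudes x 3 longitudes.
lats = [30.0, 30.25, 30.5, 30.75]
lons = [-120.0, -119.75, -119.5]
grid = [[10, 11, 12], [13, 14, 15], [16, 17, 18], [19, 20, 21]]

i = nearest_index(lats, 30.6)    # -> 2
j = nearest_index(lons, -119.8)  # -> 1
print(grid[i][j])                # -> 17
```

A line query repeats this lookup at interpolated points between A and B.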
***Running External Python Scripts In Your Workflow***
1. [run_external_scripts](https://github.com/edrewitz/WxData/blob/main/Documentation/run_external_scripts.md)
## Importing Functions from WxData
"""
This file hosts all of the functions in the WxData Python library that directly interact with the user.
(C) Eric J. Drewitz 2025-2026
"""
"""
This section of functions are for users who want full wxdata functionality.
These functions do the following:
1) Scans for the latest available data.
- If the data on your local machine is not up to date, new data will download automatically.
- If the data on your local machine is up to date, new data download is bypassed.
- This is a safeguard that prevents excessive requests on the data servers.
2) Builds the wxdata directory to store the weather data files.
- Scans for the directory branch and builds the branch if it does not exist.
3) Downloads the data.
- Users can define their VPN/PROXY IP Address as a (dict) in their script and pass their
VPN/PROXY IP address into the function to avoid SSL Certificate errors when requesting data.
- The algorithm allows up to 5 retries with a 30-second break between each retry to resolve connection
interruptions while not overburdening the data servers.
4) Pre-processes the data by formatting filenames and filing them correctly in the wxdata directory.
5) Post-processes the data by doing the following tasks:
- Remapping GRIB2 variable keys into plain language variable keys.
- Fixing dataset build errors and grouping all variables together.
- Transforming longitude from 0 to 360 degrees into -180 to 180 degrees.
- Subsetting the data to the latitude/longitude boundaries specified by the user.
- Converting temperature from kelvin to units the user wants (default is Celsius).
- Returning a post-processed xarray.array to the user.
6) Preserves system memory by doing the following:
- Deleting old data files before each new download.
- When clear_recycle_bin=True, the user's recycle bin is also cleared.
"""
# Global Forecast System (GFS)
from wxdata.gfs.gfs import(
gfs_0p25,
gfs_0p25_secondary_parameters,
gfs_0p50
)
# AI Global Forecast System (AIGFS)
from wxdata.aigfs.aigfs import aigfs
# Hybrid Global Ensemble Forecast System (HGEFS)
from wxdata.hgefs.hgefs import hgefs_mean_spread
# Global Ensemble Forecast System (GEFS)
from wxdata.gefs.gefs import(
gefs_0p50,
gefs_0p50_secondary_parameters,
gefs_0p25
)
# AI Global Ensemble Forecast System (AIGEFS)
from wxdata.aigefs.aigefs import(
aigefs_pressure_members,
aigefs_surface_members,
aigefs_single
)
# European Centre for Medium-Range Weather Forecasts (ECMWF)
from wxdata.ecmwf.ecmwf import(
ecmwf_ifs,
ecmwf_aifs,
ecmwf_ifs_high_res,
ecmwf_ifs_wave
)
# FEMS RAWS Network
from wxdata.fems.fems import(
get_single_station_data,
get_raws_sig_data,
get_nfdrs_forecast_data
)
# Real-Time Mesoscale Analysis (RTMA)
from wxdata.rtma.rtma import(
rtma,
rtma_comparison
)
# NOAA
# Storm Prediction Center Outlooks
# Climate Prediction Center Outlooks
# National Weather Service Forecasts
from wxdata.noaa.nws import(
get_ndfd_grids,
get_cpc_outlook
)
# Observed Upper-Air Soundings
# (University of Wyoming Database)
from wxdata.soundings.wyoming_soundings import get_observed_sounding_data
# METAR Observational Data (From NOAA)
from wxdata.metars.metar_obs import download_metar_data
"""
This section hosts all the functions and modules that involve post-processing the data.
These are the functions and modules that:
1) Re-map the GRIB2 Variable Keys into Plain Language Keys
2) Build the xarray.array of the various datasets.
"""
# Global Forecast System (GFS)
import wxdata.post_processors.gfs_post_processing as gfs_post_processing
# AI Global Forecast System (AIGFS)
import wxdata.post_processors.aigfs_post_processing as aigfs_post_processing
# Hybrid Global Ensemble Forecast System (HGEFS)
import wxdata.post_processors.hgefs_post_processing as hgefs_post_processing
# Global Ensemble Forecast System (GEFS)
import wxdata.post_processors.gefs_post_processing as gefs_post_processing
# AI Global Ensemble Forecast System (AIGEFS)
import wxdata.post_processors.aigefs_post_processing as aigefs_post_processing
# European Centre for Medium-Range Weather Forecasts (ECMWF)
import wxdata.post_processors.ecmwf_post_processing as ecmwf_post_processing
# Real-Time Mesoscale Analysis (RTMA)
from wxdata.post_processors.rtma_post_processing import process_rtma_data
"""
This section hosts the utility functions accessible to the user.
These functions provide helpful utilities when analyzing weather data.
Utility functions are geared towards the following types of users:
1) Users who download the data with their own scripts but would like to use the wxdata
post-processing capabilities.
2) Users who want to make hemispheric graphics or any graphics where cyclic points
resolve missing data along the prime meridian or international dateline.
"""
# WxData function using cartopy to make cyclic points
# This is for users who wish to make graphics that cross the -180/180 degree longitude line
# This is commonly used for Hemispheric graphics
# Function that converts the longitude dimension in an xarray.array
# From 0 to 360 to -180 to 180
from wxdata.utils.coords import(
cyclic_point,
shift_longitude
)
# Functions to pixel query and query pixels along a line between points A and B
# Function to interpolate to n amount of points in between x and y values respectively
from wxdata.utils.tools import(
pixel_query,
line_query,
linear_anti_aliasing
)
"""
This section hosts the various data clients that retrieve various types of data.
These clients can be easily configured to work on VPN/PROXY connections.
"""
# These are the wxdata HTTPS Clients with full VPN/PROXY Support
# Client List:
# - get_gridded_data()
# - get_csv_data()
# - get_xmacis_data()
import wxdata.client.client as client
# This function executes a list of Python scripts in the order the user lists them
from wxdata.utils.scripts import run_external_scripts
## Citations
**MetPy**: May, R. M., Goebbert, K. H., Thielen, J. E., Leeman, J. R., Camron, M. D., Bruick, Z.,
Bruning, E. C., Manser, R. P., Arms, S. C., and Marsh, P. T., 2022: MetPy: A
Meteorological Python Library for Data Analysis and Visualization. Bull. Amer. Meteor.
Soc., 103, E2273-E2284, https://doi.org/10.1175/BAMS-D-21-0125.1.
**xarray**: Hoyer, S., Hamman, J. (In revision). Xarray: N-D labeled arrays and datasets in Python. Journal of Open Research Software.
**cartopy**: Phil Elson, Elliott Sales de Andrade, Greg Lucas, Ryan May, Richard Hattersley, Ed Campbell, Andrew Dawson, Bill Little, Stephane Raynaud, scmc72, Alan D. Snow, Ruth Comer, Kevin Donkers, Byron Blay, Peter Killick, Nat Wilson, Patrick Peglar, lgolston, lbdreyer, … Chris Havlin. (2023). SciTools/cartopy: v0.22.0 (v0.22.0). Zenodo. https://doi.org/10.5281/zenodo.8216315
**NumPy**: Harris, C.R., Millman, K.J., van der Walt, S.J. et al. Array programming with NumPy. Nature 585, 357–362 (2020). DOI: 10.1038/s41586-020-2649-2. (Publisher link).
**Pandas**: Pandas: McKinney, W., & others. (2010). Data structures for statistical computing in python. In Proceedings of the 9th Python in Science Conference (Vol. 445, pp. 51–56).
**dask**: Dask Development Team (2016). Dask: Library for dynamic task scheduling. URL http://dask.pydata.org
**cfgrib**: Author: ECMWF, Year: (2025), Title: cfgrib: A Python interface to map GRIB files to xarray, Source: https://github.com/ecmwf/cfgrib
**requests**: K. Reitz, "Requests: HTTP for Humans". Available: https://requests.readthedocs.io/.
**shapeography**: Eric J. Drewitz. (2026). edrewitz/shapeography: shapeography 1.0 Released (shapeography1.0). Zenodo. https://doi.org/10.5281/zenodo.18676845
**geopandas**: Kelsey Jordahl, Joris Van den Bossche, Martin Fleischmann, Jacob Wasserman, James McBride, Jeffrey Gerard, … François Leblanc. (2020, July 15). geopandas/geopandas: v0.8.1 (Version v0.8.1). Zenodo. http://doi.org/10.5281/zenodo.3946761
## Data Sources
1) [National Oceanic and Atmospheric Administration/National Center for Environmental Prediction](https://nomads.ncep.noaa.gov/)
2) [European Centre for Medium-Range Weather Forecasts](https://data.ecmwf.int/forecasts/)
3) [University of Wyoming](http://www.weather.uwyo.edu/upperair/sounding.shtml)
4) [National Oceanic and Atmospheric Administration/National Weather Service](https://tgftp.nws.noaa.gov/)
5) [National Oceanic and Atmospheric Administration/Aviation Weather Center](https://aviationweather.gov/)
6) [National Oceanic and Atmospheric Administration/Climate Prediction Center](https://www.cpc.ncep.noaa.gov/products/GIS/GIS_DATA/us_tempprcpfcst/index.php)
7) [Applied Climate Information System (ACIS)](https://www.rcc-acis.org/docs_webservices.html)
| text/markdown | Eric J. Drewitz | null | null | null | null | meteorology, atmospheric sciences | [
"Programming Language :: Python",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Atmospheric Science",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"metpy>=1.5.1",
"numpy>=1.24",
"pandas>=2.2",
"xarray>=2023.1.0",
"netcdf4>=1.7.1",
"shapeography>=1.0",
"beautifulsoup4>=4.13.4",
"requests>=2.32.4",
"cfgrib>=0.9.10.4",
"dask>=2025.5.1"
] | [] | [] | [] | [
"Documentation, https://github.com/edrewitz/wxdata/blob/main/Documentation/Landing%20Page.md",
"Repository, https://github.com/edrewitz/wxdata/tree/main"
] | twine/6.0.1 CPython/3.12.7 | 2026-02-21T06:37:38.038964 | wxdata-1.3.tar.gz | 136,228 | 94/03/6a54e18a3f894e539532d23747ee3ea0302566f89f70e8eecd9061f444b1/wxdata-1.3.tar.gz | source | sdist | null | false | 850ff6672062627f1b8165076759ccfb | 782173c7e661d4e51081c4cdee603bbd41621c1052b51261a0259fe19b267cbf | 94036a54e18a3f894e539532d23747ee3ea0302566f89f70e8eecd9061f444b1 | null | [] | 256 |
2.4 | FalkorDB | 1.6.0 | Python client for interacting with FalkorDB database | [](https://github.com/falkordb/falkordb-py)
[](https://github.com/falkordb/falkordb-py/releases/latest)
[](https://badge.fury.io/py/falkordb)
[](https://codecov.io/gh/falkordb/falkordb-py)
[](https://github.com/orgs/FalkorDB/discussions)
[](https://discord.gg/ErBEqN9E)
# falkordb-py
[](https://app.falkordb.cloud)
FalkorDB Python client
See the [docs](http://falkordb-py.readthedocs.io/).
## Installation
```sh
pip install FalkorDB
```
## Usage
### Run FalkorDB instance
Docker:
```sh
docker run --rm -p 6379:6379 falkordb/falkordb
```
Or use [FalkorDB Cloud](https://app.falkordb.cloud)
### Synchronous Example
```python
from falkordb import FalkorDB
# Connect to FalkorDB
db = FalkorDB(host='localhost', port=6379)
# Select the social graph
g = db.select_graph('social')
# Create 101 nodes (Cypher's range is inclusive) and return a handful
nodes = g.query('UNWIND range(0, 100) AS i CREATE (n {v:1}) RETURN n LIMIT 10').result_set
for n in nodes:
print(n)
# Read-only query the graph for the first 10 nodes
nodes = g.ro_query('MATCH (n) RETURN n LIMIT 10').result_set
# Copy the Graph
copy_graph = g.copy('social_copy')
# Delete the Graph
g.delete()
```
### Asynchronous Example
```python
import asyncio
from falkordb.asyncio import FalkorDB
from redis.asyncio import BlockingConnectionPool

async def main():
    # Connect to FalkorDB
    pool = BlockingConnectionPool(max_connections=16, timeout=None, decode_responses=True)
    db = FalkorDB(connection_pool=pool)

    # Select the social graph
    g = db.select_graph('social')

    # Execute a query asynchronously
    result = await g.query('UNWIND range(0, 100) AS i CREATE (n {v:1}) RETURN n LIMIT 10')

    # Process results
    for n in result.result_set:
        print(n)

    # Run multiple queries concurrently
    tasks = [
        g.query('MATCH (n) WHERE n.v = 1 RETURN count(n) AS count'),
        g.query('CREATE (p:Person {name: "Alice"}) RETURN p'),
        g.query('CREATE (p:Person {name: "Bob"}) RETURN p')
    ]
    results = await asyncio.gather(*tasks)

    # Process concurrent results
    print(f"Node count: {results[0].result_set[0][0]}")
    print(f"Created Alice: {results[1].result_set[0][0]}")
    print(f"Created Bob: {results[2].result_set[0][0]}")

    # Close the connection when done
    await pool.aclose()

# Run the async example
if __name__ == "__main__":
    asyncio.run(main())
```
| text/markdown | null | FalkorDB inc <info@falkordb.com> | null | null | null | Cypher, FalkorDB, GraphDB | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"python-dateutil>=2.9.0",
"redis>=7.1.0",
"pytest-asyncio>=1.3.0; extra == \"test\"",
"pytest-cov>=7.0.0; extra == \"test\"",
"pytest>=9.0.2; extra == \"test\""
] | [] | [] | [] | [
"Homepage, http://falkordb-py.readthedocs.io",
"Repository, http://github.com/falkorDB/falkordb-py"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T06:36:19.107078 | falkordb-1.6.0.tar.gz | 98,157 | 72/52/5495fcba8e21c269a605092a7c4a8b33ceecae283dbfe76fc53f7f5b50ab/falkordb-1.6.0.tar.gz | source | sdist | null | false | 5d758affd61fb75e05b69b18460844a9 | 5c307d973f3fc3987a18478ebd5882f7e842d4225463a8ef5e026970ebfba8c6 | 72525495fcba8e21c269a605092a7c4a8b33ceecae283dbfe76fc53f7f5b50ab | null | [
"LICENSE"
] | 0 |
2.4 | ductor | 0.4.3 | Control Claude Code and Codex CLI from Telegram. Live streaming, sessions, cron jobs, webhooks, Docker sandboxing. | <p align="center">
<img src="https://raw.githubusercontent.com/PleasePrompto/ductor/main/ductor_bot/bot/ductor_images/logo_text.png" alt="ductor" width="100%" />
</p>
<p align="center">
<strong>Claude Code and Codex CLI as your personal Telegram assistant.</strong><br>
Persistent memory. Scheduled tasks. Live streaming. Docker sandboxing.<br>
Uses only the official CLIs. Nothing spoofed, nothing proxied.
</p>
<p align="center">
<a href="https://pypi.org/project/ductor/"><img src="https://img.shields.io/pypi/v/ductor?color=blue" alt="PyPI" /></a>
<a href="https://pypi.org/project/ductor/"><img src="https://img.shields.io/pypi/pyversions/ductor?v=1" alt="Python" /></a>
<a href="https://github.com/PleasePrompto/ductor/blob/main/LICENSE"><img src="https://img.shields.io/github/license/PleasePrompto/ductor" alt="License" /></a>
</p>
<p align="center">
<a href="#quick-start">Quick start</a> ·
<a href="#why-ductor">Why ductor?</a> ·
<a href="#features">Features</a> ·
<a href="#prerequisites">Prerequisites</a> ·
<a href="#how-it-works">How it works</a> ·
<a href="#commands">Commands</a> ·
<a href="https://github.com/PleasePrompto/ductor/tree/main/docs">Docs</a> ·
<a href="#contributing">Contributing</a>
</p>
---
ductor runs on your machine, uses your existing Claude or Codex subscription, and remembers who you are between conversations. One Markdown file is the agent's memory. One folder (`~/.ductor/`) holds everything. `pipx install ductor`, done.
You can schedule cron jobs, set up webhooks, and let the agent check in on its own with heartbeat prompts. Responses stream live into Telegram. Sessions survive restarts.
<p align="center">
<img src="https://raw.githubusercontent.com/PleasePrompto/ductor/main/docs/images/ductor-start.jpeg" alt="ductor /start screen" width="49%" />
<img src="https://raw.githubusercontent.com/PleasePrompto/ductor/main/docs/images/ductor-quick-actions.jpeg" alt="ductor quick action buttons" width="49%" />
</p>
<p align="center">
<sub>Left: <code>/start</code> onboarding screen — Right: Quick action buttons generated dynamically by the agent</sub>
</p>
## Quick start
```bash
pipx install ductor
ductor
```
The setup wizard walks you through the rest.
## Why ductor?
I tried a bunch of CLI wrappers and Telegram bots for Claude and Codex. Most were either too complex to set up, too hard to modify, or got people banned because they spoofed headers and forged API requests to impersonate the official CLI.
ductor doesn't do that.
- Spawns the real CLI binary as a subprocess. No token interception, no request forging
- Uses only official rule files: `CLAUDE.md` and `AGENTS.md`
- Memory is one Markdown file. No RAG, no vector stores
- One channel (Telegram), one Python package, one command
The agents are good enough now that you can steer them through their own rule files. I don't need a RAG system to store memories -- a single Markdown file that tracks what I like, what I don't, and what I'm working on is plenty. And I can reach the agents from Telegram instead of a terminal.
I picked Python because it's easy to modify. The agents can write their own automations, receive webhooks (new email? parse it and ping me), set up scheduled tasks. All controlled from your phone.
## Features
### Core
- Responses stream in real time -- ductor edits the Telegram message live as text arrives
- Switch between Claude Code and Codex mid-conversation with `/model`
- Sessions survive bot restarts
- `@opus explain this` temporarily switches model without changing your default
- Send images, PDFs, voice messages, or videos -- ductor routes them to the right tool
- Agents can send `[button:Yes]` `[button:No]` inline keyboards back to you
- Works in Telegram groups with forum topics -- replies land in the correct topic thread
- Persistent memory across sessions, stored in one Markdown file
### Automation
- **Cron jobs** -- recurring tasks with cron expressions and timezone support. Each job runs as its own subagent with a dedicated workspace and memory file (plus optional per-job quiet hours and dependency locks)
- **Webhooks** -- HTTP endpoints with Bearer or HMAC auth. Two modes: *wake* injects a prompt into your active chat, *cron_task* runs a separate task session. Works with GitHub, Stripe, or anything that sends POST
- **Heartbeat** -- the agent checks in periodically during active sessions. Quiet hours respected
- **Cleanup** -- daily retention cleanup for `telegram_files/` and `output_to_user/`
#### Example: a cron job
Tell the agent: "Check Hacker News every morning at 8 and send me the top AI stories."
ductor creates a task folder with everything the subagent needs:
```
~/.ductor/workspace/cron_tasks/hn-ai-digest/
CLAUDE.md # Agent rules (managed by ductor)
AGENTS.md # Same rules for Codex
TASK_DESCRIPTION.md # What the agent should do
hn-ai-digest_MEMORY.md # The subagent's own memory across runs
scripts/ # Helper scripts if needed
```
At 8:00 every morning, ductor starts a fresh session in that folder. The subagent reads the task, does the work, writes what it learned to memory, and posts the result to your chat. It is context-isolated from your main conversation and memory.
#### Example: a webhook wake call
Your CI fails. A webhook in *wake* mode injects the payload into your active chat. Your agent sees it with full history and memory and responds.
```
POST /hooks/ci-failure -> "CI failed on branch main: test_auth.py::test_login timed out"
-> Agent reads this, checks the code, tells you what went wrong
```
### Infrastructure
- `ductor service install` -- Linux systemd user service (start on boot, restart on crash)
- Docker sandbox image (built via `Dockerfile.sandbox`) -- both CLIs have full filesystem access by default, so a container keeps your host safe
- `/upgrade` checks PyPI, offers in-chat upgrade, then restarts automatically on success
- Supervisor with PID lock. Exit code 42 triggers restart
- Prompt injection detection, path traversal checks, per-user allowlist
### Developer experience
- First-run wizard detects your CLIs, walks through config, seeds the workspace
- New config fields merge automatically on upgrade
- `/diagnose` shows system diagnostics (version/provider/model, Codex cache status, recent logs); `/status` shows session stats
- `/stop` terminates the active run and drains queued messages; `/new` starts a fresh session
- `/showfiles` lets you browse `~/.ductor/` as a clickable file tree inside Telegram
- Messages sent while the agent is working show `[Message in queue...]` with a cancel button
- Bundled skills (e.g. `skill-creator`) are symlinked into the workspace and stay current with the installed version
## Prerequisites
| Requirement | Details |
|---|---|
| Python 3.11+ | `python3 --version` |
| pipx | `pip install pipx` (recommended) or pip |
| One CLI installed | [Claude Code](https://docs.anthropic.com/en/docs/claude-code) or [Codex CLI](https://github.com/openai/codex) |
| CLI authenticated | `claude auth` or `codex auth` |
| Telegram Bot Token | From [@BotFather](https://t.me/BotFather) |
| Your Telegram User ID | From [@userinfobot](https://t.me/userinfobot) |
| Docker *(optional)* | Recommended for sandboxed execution |
> Detailed platform guides: [Installation (Linux, macOS, WSL, Windows, VPS)](https://github.com/PleasePrompto/ductor/blob/main/docs/installation.md)
## How it works
```
You (Telegram)
|
v
ductor (aiogram)
|
├── AuthMiddleware (user allowlist)
├── SequentialMiddleware (per-chat lock + queue tracking)
|
v
Orchestrator
|
├── Command Router (/new, /model, /stop, ...)
├── Message Flow -> CLIService -> claude / codex subprocess
├── CronObserver -> Scheduled task execution
├── HeartbeatObserver -> Periodic background checks
├── WebhookObserver -> HTTP endpoint server
├── CleanupObserver -> Daily file retention cleanup
└── UpdateObserver -> PyPI version check
|
v
Streamed response -> Live-edited Telegram message
```
ductor spawns the CLI as a child process and parses its streaming output. The Telegram message gets edited live as text arrives. Sessions are stored as JSON. Background systems run as asyncio tasks in the same process.
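The streamed-subprocess pattern can be sketched in a few lines. This is an illustrative sketch only, not ductor's actual implementation; the `sys.executable -c` child below stands in for the `claude`/`codex` subprocess:

```python
import asyncio
import sys

async def stream_subprocess(cmd):
    """Spawn a child process and collect its stdout line by line as it arrives."""
    proc = await asyncio.create_subprocess_exec(*cmd, stdout=asyncio.subprocess.PIPE)
    lines = []
    async for raw in proc.stdout:  # StreamReader yields lines as the child flushes them
        line = raw.decode().rstrip("\n")
        lines.append(line)  # here ductor would live-edit the Telegram message instead
    await proc.wait()
    return lines

# Demo child standing in for the CLI's streamed output
chunks = asyncio.run(stream_subprocess(
    [sys.executable, "-c", "print('thinking'); print('partial answer'); print('done')"]
))
print(chunks)
```

In the real bot, the per-line step would call Telegram's message-edit API rather than append to a list.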
## Workspace
Everything lives in `~/.ductor/`.
```
~/.ductor/
config/config.json # Bot config (token, user IDs, model, Docker, timezone)
sessions.json # Active sessions per chat
cron_jobs.json # Scheduled task definitions
webhooks.json # Webhook endpoint definitions
CLAUDE.md # Agent rules (auto-synced)
AGENTS.md # Same rules for Codex (auto-synced)
logs/agent.log # Rotating log file
workspace/
memory_system/
MAINMEMORY.md # The agent's long-term memory about you
cron_tasks/ # One subfolder per scheduled job
skills/
skill-creator/ # Bundled skill (symlinked from package)
tools/
cron_tools/ # Add, edit, remove, list cron jobs
webhook_tools/ # Add, edit, remove, test webhooks
telegram_tools/ # Process files, transcribe audio, read PDFs
user_tools/ # Custom scripts the agent builds for you
telegram_files/ # Downloaded media, organized by date
output_to_user/ # Files the agent sends back to you
```
Plain text, JSON, and Markdown. No databases, no binary formats.
## Configuration
Config lives in `~/.ductor/config/config.json`. The wizard creates it on first run:
```bash
ductor # wizard creates config interactively
```
Key fields: `telegram_token`, `allowed_user_ids`, `provider` (claude or codex), `model`, `docker.enabled`, `user_timezone`, `cleanup`. Full schema in [docs/config.md](https://github.com/PleasePrompto/ductor/blob/main/docs/config.md).
#### CLI Parameters
Configure provider-specific CLI parameters in `config.json`:
```json
{
"cli_parameters": {
"claude": [],
"codex": ["--chrome"]
}
}
```
Parameters are appended to CLI commands for the respective provider.
#### Advanced Cron Task Configuration
Cron tasks support per-task execution overrides:
```json
{
"provider": "codex",
"model": "gpt-5.2-codex",
"reasoning_effort": "high",
"cli_parameters": ["--chrome"],
"quiet_start": 22,
"quiet_end": 7,
"dependency": "nightly-reports"
}
```
All fields are optional and fall back to global config values if not specified.
## Commands
| Command | Description |
|---|---|
| `/new` | Start a fresh session |
| `/stop` | Stop active agent execution and discard queued messages |
| `/model` | Switch AI model (interactive keyboard) |
| `/model opus` | Switch directly to a specific model |
| `/status` | Session info, tokens, cost, auth status |
| `/memory` | View persistent memory |
| `/cron` | View/manage scheduled tasks (toggle enable/disable) |
| `/showfiles` | Browse `~/.ductor/` as an interactive file tree |
| `/info` | Project links and version info |
| `/upgrade` | Check for updates and show upgrade prompt |
| `/restart` | Restart the bot |
| `/diagnose` | Show system diagnostics and recent logs |
| `/help` | Command reference |
## Documentation
| Document | Description |
|---|---|
| [Installation](https://github.com/PleasePrompto/ductor/blob/main/docs/installation.md) | Platform-specific setup (Linux, macOS, WSL, Windows, VPS) |
| [Developer Quickstart](https://github.com/PleasePrompto/ductor/blob/main/docs/developer_quickstart.md) | Fast onboarding for contributors and junior devs |
| [Automation](https://github.com/PleasePrompto/ductor/blob/main/docs/automation.md) | Cron jobs, webhooks, heartbeat |
| [Configuration](https://github.com/PleasePrompto/ductor/blob/main/docs/config.md) | Full config schema and options |
| [Architecture](https://github.com/PleasePrompto/ductor/blob/main/docs/architecture.md) | System design and runtime flow |
| [Module reference](https://github.com/PleasePrompto/ductor/blob/main/docs/README.md) | Per-subsystem documentation |
## Disclaimer
ductor runs the official CLI binaries from Anthropic and OpenAI. It does not modify API calls, spoof headers, forge tokens, or impersonate clients. Every request comes from the real CLI process.
Terms of Service can change. Automating CLI interactions may be a gray area depending on how providers interpret their rules. We built ductor to follow intended usage patterns, but can't guarantee it won't lead to account restrictions.
Use at your own risk. Check the current ToS before deploying:
- [Anthropic Terms of Service](https://www.anthropic.com/policies/terms)
- [OpenAI Terms of Use](https://openai.com/policies/terms-of-use)
Not affiliated with Anthropic or OpenAI.
## Contributing
```bash
git clone https://github.com/PleasePrompto/ductor.git
cd ductor
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
pytest
ruff check .
mypy ductor_bot
```
Zero warnings, zero errors. See [CLAUDE.md](https://github.com/PleasePrompto/ductor/blob/main/CLAUDE.md) for conventions.
## License
[MIT](https://github.com/PleasePrompto/ductor/blob/main/LICENSE)
| text/markdown | PleasePrompto | null | null | null | null | agent, ai, automation, bot, claude, cli, codex, ductor, streaming, telegram | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Communications :: Chat",
"Topic :: Software Development",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiogram<4.0.0,>=3.24.0",
"aiohttp<4.0.0,>=3.9.0",
"cronsim>=2.7",
"pydantic>=2.12.5",
"pyyaml>=6.0.3",
"questionary>=2.1.1",
"rich>=14.3.2",
"mypy>=1.19.1; extra == \"dev\"",
"pytest-asyncio>=1.3.0; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest>=9.0.2; extra == \"dev\"",
"ruff>=0.15.0; extra == \"dev\"",
"time-machine>=3.2.0; extra == \"dev\"",
"mypy>=1.19.1; extra == \"lint\"",
"ruff>=0.15.0; extra == \"lint\"",
"pytest-asyncio>=1.3.0; extra == \"test\"",
"pytest-cov>=7.0.0; extra == \"test\"",
"pytest>=9.0.2; extra == \"test\"",
"time-machine>=3.2.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://ductor.dev",
"Repository, https://github.com/PleasePrompto/ductor",
"Documentation, https://ductor.dev",
"Issues, https://github.com/PleasePrompto/ductor/issues",
"Changelog, https://github.com/PleasePrompto/ductor/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:35:49.047538 | ductor-0.4.3.tar.gz | 791,956 | c1/e4/1513bc60d7a8288958a85d0d367eb732b739d8bbe2f5908839fb434a7ced/ductor-0.4.3.tar.gz | source | sdist | null | false | 18d9267827fc337b76ac25282cceb11f | 0b07dcdece79f4767934d36a1135d1055ead5d99c711a1688ac7014c25d6ce8c | c1e41513bc60d7a8288958a85d0d367eb732b739d8bbe2f5908839fb434a7ced | MIT | [
"LICENSE"
] | 257 |
2.4 | mgba-live-mcp | 0.3.0 | MCP server for persistent live mGBA control with script bridge, screenshots, memory and input tooling | # mgba-live-mcp
[](https://codecov.io/gh/penandlim/mgba-live-mcp)
MCP server for persistent, live mGBA control. It is designed for agent workflows
that need to keep one emulator process running across multiple tool calls
(input, Lua, memory reads, OAM/entity dumps, screenshots).
If you need one-shot/headless runs instead of persistent sessions, see
[struktured-labs/mgba-mcp](https://github.com/struktured-labs/mgba-mcp).
## What You Get (MCP Context)
- Long-lived session lifecycle: `mgba_live_start`, `mgba_live_attach`,
`mgba_live_status`, `mgba_live_stop`
- Live control: `mgba_live_input_tap`, `mgba_live_input_set`,
`mgba_live_input_clear`, `mgba_live_run_lua`
- Inspection: `mgba_live_read_memory`, `mgba_live_read_range`,
`mgba_live_dump_pointers`, `mgba_live_dump_oam`, `mgba_live_dump_entities`
- Snapshot output: `mgba_live_export_screenshot`, plus image snapshots returned
  by `status`, `attach`, `start_with_lua`, and most other live commands
MCP reference: [docs/mcp-reference.md](docs/mcp-reference.md)
## Quick Start (uvx)
1. Run directly from PyPI with `uvx`:
```bash
uvx mgba-live-mcp
```
2. If you want to run from git (for unreleased changes), use:
```bash
uvx --from git+https://github.com/penandlim/mgba-live-mcp mgba-live-mcp
```
3. Register in an MCP client (Codex example):
```toml
[mcp_servers.mgba]
command = "uvx"
args = ["mgba-live-mcp"]
```
Git fallback (if package is not yet published):
```toml
[mcp_servers.mgba]
command = "uvx"
args = ["--from", "git+https://github.com/penandlim/mgba-live-mcp", "mgba-live-mcp"]
```
## Local Development
1. Install dependencies for this repo:
```bash
uv sync
```
2. Run the MCP server:
```bash
uv run mgba-live-mcp
```
3. Register it in your MCP client (Codex example):
```toml
[mcp_servers.mgba]
command = "uv"
args = [
"run",
"--directory",
"/absolute/path/to/mgba-live-mcp",
"mgba-live-mcp",
]
```
4. Install and run pre-commit hooks (lint/checks via `uv run`):
```bash
make precommit-install
make precommit-run
```
## Requirements And Setup Links
- [mGBA](https://github.com/mgba-emu/mgba):
Build/install a Qt + Lua-capable binary (`mgba-qt`/`mGBA`) with these required
CMake flags:
`-DBUILD_QT=ON -DENABLE_SCRIPTING=ON -DUSE_LUA=ON`
- [uv](https://docs.astral.sh/uv/):
Python package/runtime manager used by this repo (`uv sync`, `uv run ...`)
- [Model Context Protocol](https://modelcontextprotocol.io):
Protocol used by the server; configure this process as an MCP server in your client
Important runtime notes:
- A ROM path is required to start (`.gba`, `.gb`, `.gbc`).
- Binary auto-discovery order: `mgba-qt`, `mgba`, `mGBA`.
- If auto-discovery fails, pass `mgba_path` in `mgba_live_start` or
`mgba_live_start_with_lua`.
- Runtime state is stored at `~/.mgba-live-mcp/runtime` (sessions, logs, command/response files).
- This is a hard cutover from repo-local `.runtime`; no hybrid fallback is used.
- If you have old repo-local sessions, migrate manually by copying `.runtime/*` to
`~/.mgba-live-mcp/runtime/`.
- `mgba_live_status` with `all=true` lists sessions from this shared user-level runtime root.
- `scripts/mgba_live_bridge.lua` is a transitional copy for local workflows; the packaged
  `src/mgba_live_mcp/resources/mgba_live_bridge.lua` is the runtime source of truth.
## Common MCP Flows
### 1) Start a session
```json
{
"rom": "/absolute/path/to/game.gba",
"fast": true
}
```
Notes:
- `fast: true` maps to `fps_target=600`
- the default when omitted is `fps_target=120`
### 2) Start + run Lua immediately
Use `mgba_live_start_with_lua` when you need first-frame setup and a post-Lua
snapshot in one call.
```json
{
"rom": "/absolute/path/to/game.gba",
"code": "return emu:currentFrame()"
}
```
### 3) Tap input and capture after settle
```json
{
"session": "20260220-120000",
"key": "A",
"frames": 2,
"wait_frames": 6
}
```
`wait_frames` is applied after release before the screenshot is captured.
### 4) Read memory
```json
{
"session": "20260220-120000",
"start": 49664,
"length": 64
}
```
### 5) Save screenshot to a known path
```json
{
"session": "20260220-120000",
"out": "/tmp/mgba-shot.png"
}
```
## Important Behavior
- `mgba_live_start` is bootstrap-only (no Lua arg, no screenshot return).
- `mgba_live_start_with_lua` requires exactly one of `file` or `code`.
- `mgba_live_run_lua` supports callback-style macros by returning
`{ macro_key = "..." }` and setting `_G[macro_key].active = false` when done;
the tool waits for completion before returning its snapshot.
- `mgba_live_input_set` and `mgba_live_input_clear` update held keys but do not
include a snapshot; call `mgba_live_status` to verify visually.
- Automatic snapshots in tool responses include only `frame` metadata and image
content. Files are persisted only via `mgba_live_export_screenshot`.
## Local CLI (Dev/Debug)
The MCP server wraps `scripts/mgba_live.py`. This script is a compatibility shim
that delegates to the packaged module CLI.
```bash
uv run python scripts/mgba_live.py --help
uv run python scripts/mgba_live.py start --help
uv run pytest
```
Quality commands:
```bash
make lint
make typecheck
make test
make check
```
## Release Checklist
1. Update version in `pyproject.toml` and `src/mgba_live_mcp/__init__.py`.
2. Add release notes in `CHANGELOG.md`.
3. Run local checks:
`uv sync --group dev && make check && uv build`
4. Trigger TestPyPI publish workflow (`publish-testpypi`) and verify install from TestPyPI.
5. Push tag `vX.Y.Z` to trigger the PyPI release workflow.
6. Smoke test:
`uvx mgba-live-mcp` and `uvx --from git+https://github.com/penandlim/mgba-live-mcp mgba-live-mcp`.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"mcp>=1.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:35:25.797363 | mgba_live_mcp-0.3.0.tar.gz | 7,057 | d4/fd/8c0ea73e89d52b9da90496e261c0e0698d3005acce4026dda403e53dedec/mgba_live_mcp-0.3.0.tar.gz | source | sdist | null | false | 7252713ca60d7ed58da9de3af8672617 | e36d85fb167229a243794ae8f572ecf61d034804c79bebc2235a0195dbb76bc0 | d4fd8c0ea73e89d52b9da90496e261c0e0698d3005acce4026dda403e53dedec | null | [
"LICENSE"
] | 247 |
2.1 | iman | 2.0.4 | Python package for daily Tasks | iman
====
Overview
--------
``iman`` is a comprehensive Python package offering a wide array of utilities for audio processing, file manipulation, machine learning, system operations, web utilities, and more. It provides tools for tasks such as audio feature extraction, voice activity detection, file I/O, system monitoring, and integration with frameworks like PyTorch and TensorFlow. The package is organized into multiple submodules, each designed for specific functionalities, as detailed below.
Installation
------------
Install ``iman`` via pip:
.. code-block:: bash
pip install iman
Ensure dependencies like ``numpy``, ``torch``, ``tensorflow``, ``speechbrain``, ``librosa``, ``matplotlib``, ``pandas``, and external tools like ``ffmpeg``, ``ffprobe``, and ``WinRAR`` are installed. Some functions require pre-trained models or specific paths (e.g., model files, ``ffmpeg_path``).
Usage
-----
Below are examples of key functionalities from the ``iman`` package. For detailed function signatures and parameters, refer to the sections below or use the built-in help system:
**Example: Audio Processing**
.. code-block:: python
from iman import Audio
# Read a WAV file
data, sr = Audio.Read("audio.wav", sr=16000, start_from=0, dur=None, mono=True, ffmpeg_path="c:\\ffmpeg.exe", ffprobe_path="c:\\ffprobe.exe")
# Resample and write audio
resampled = Audio.Resample(data, fs=sr, sr=8000)
Audio.Write("output.wav", resampled, fs=8000)
**Example: File Operations**
.. code-block:: python
from iman import *
# Get files matching a pattern
files = gf("*.txt")
# Write a dictionary to a file
my_dict = {"key1": "value1", "key2": "value2"}
Write_Dic(my_dict, "output.txt")
**Example: VAD with Segmenter**
.. code-block:: python
from iman.sad_torch_mfcc import Segmenter
seg = Segmenter(batch_size=32, vad_type="vad", sr=8000, model_path="c:\\sad_model_pytorch.pth", tq=1, ffmpeg_path="c:\\ffmpeg.exe", complete_output=False, device="cuda", input_type="file")
isig, wav, mfcc = seg("audio.wav")
Modules and Functions
---------------------
The ``iman`` package is organized into several submodules, each with specific functions. Below is a complete list of modules and their functions as provided.
iman
~~~~
- ``plt``: Matplotlib plotting library.
- ``now()``: Get current time.
- ``F``: Format floating-point number.
- ``D``: Format integer number.
- ``Write_List(MyList, Filename)``: Write a list to a text file.
- ``Write_Dic(MyDic, Filename)``: Write a dictionary to a text file.
- ``Read(Filename)``: Read a text file.
- ``Read_Lines(Filename)``: Read a text file line by line and return a list.
- ``Write(_str, Filename)``: Write a string to a text file.
- ``gf(pattern)``: Get files in a directory matching a pattern.
- ``gfa(directory_pattern, ext="*.*")``: Get files in a directory and subdirectories.
- ``ReadE(Filename)``: Read Excel files.
- ``PM(dir)``: Create a directory.
- ``PB(fname)``: Get basename of a file.
- ``PN(fname)``: Get filename without path.
- ``PE(fname)``: Get file extension.
- ``PD(fname)``: Get directory of a file.
- ``PS(fname)``: Get file size.
- ``PJ(segments)``: Join path segments.
- ``clear()``: Clear command-line interface.
- ``os``: Python os module.
- ``np``: NumPy module.
- ``RI(start_int, end_int, count=1)``: Generate random integers.
- ``RF(start_float, end_float, count=1)``: Generate random floats.
- ``RS(Arr)``: Shuffle an array.
- ``LJ(job_file_name)``: Load job file (details not specified).
- ``SJ(value, job_file_name)``: Save job file (details not specified).
- ``LN(np_file_name)``: Load NumPy file (details not specified).
- ``SN(arr, np_file_name)``: Save NumPy array to file.
- ``cmd(command, redirect=True)``: Run a command in CMD.
- ``PX(fname)``: Check existence of a file.
- ``RC(Arr, size=1)``: Random choice from an array.
- ``onehot(data, nb_classes)``: Convert data to one-hot encoding.
- ``exe(pyfile)``: Convert Python file to executable (requires PyInstaller).
- ``FWL(wavfolder, sr)``: Get total audio length in a folder.
- ``norm(vector)``: Normalize a vector (vector/magnitude(vector)).
- ``delete(pattern)``: Delete files matching a pattern.
- ``rename(fname, fout)``: Rename a file.
- ``separate(pattern, folout)``: Separate vocals from music.
- ``dll(fname)``: Create a .pyd file from a Python file.
- ``get_hard_serial()``: Get hardware serial number.
- ``mute_mic()``: Toggle microphone on/off.
- ``PA(fname)``: Get absolute path of a file.
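For illustration, the one-hot conversion that ``onehot`` performs can be sketched with NumPy. This assumes integer labels in ``[0, nb_classes)`` and is not the package's actual code:

.. code-block:: python

    import numpy as np

    def onehot_sketch(data, nb_classes):
        """Map integer class labels to one-hot row vectors (illustrative sketch)."""
        data = np.asarray(data, dtype=int)
        return np.eye(nb_classes, dtype=int)[data]

    print(onehot_sketch([0, 2, 1], 3))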
iman.Audio
~~~~~~~~~~
- ``Read(filename, sr, start_from, dur, mono, ffmpeg_path, ffprobe_path)``: Read WAV, ALAW, MP3, and other audio formats.
- ``Resample(data, fs, sr)``: Resample audio data.
- ``Write(filename, data, fs)``: Write audio data to a file.
- ``frame(y)``: Frame audio data (details not specified).
- ``split(y)``: Split audio data (details not specified).
- ``ReadT(filename, sr, mono=True)``: Read and resample WAV file with torchaudio.
- ``VAD(y, top_db=40, frame_length=200, hop_length=80)``: Voice activity detection.
- ``compress(fname_pattern, sr=16000, ext='mp3', mono=True, ffmpeg_path='c:\\ffmpeg.exe', ofolder=None, worker=4)``: Compress audio files.
- ``clip_value(wav)``: Return clipping percentage in an audio file.
- ``WriteS(filename, data, fs)``: Convert and write audio to stereo.
iman.info
~~~~~~~~~
- ``get()``: Get information about CPU and GPU (requires torch).
- ``cpu()``: Get CPU percentage usage.
- ``gpu()``: Get GPU memory usage.
- ``memory()``: Get RAM usage in GB.
- ``plot(fname="log.txt", delay=1)``: Plot system metrics from a log file.
iman.metrics
~~~~~~~~~~~~
- ``EER(lab, score)``: Compute Equal Error Rate.
- ``cosine_distance(v1, v2)``: Compute cosine distance between two vectors.
- ``roc(lab, score)``: Compute ROC curve.
- ``wer(ref, hyp)``: Compute Word Error Rate.
- ``cer(ref, hyp)``: Compute Character Error Rate.
- ``wer_list(ref_list, hyp_list)``: Compute WER for lists.
- ``cer_list(ref_list, hyp_list)``: Compute CER for lists.
- ``DER(ref_list, res_list, file_dur=-1, sr=8000)``: Compute Detection Error Rate.
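For intuition, the conventional Word Error Rate that ``wer`` presumably computes is a word-level edit distance divided by the reference length. A minimal reference sketch (not the package's actual code):

.. code-block:: python

    def wer_sketch(ref, hyp):
        """Word Error Rate via word-level edit distance (illustrative sketch)."""
        r, h = ref.split(), hyp.split()
        # d[i][j] = edit distance between the first i ref words and first j hyp words
        d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
        for i in range(len(r) + 1):
            d[i][0] = i
        for j in range(len(h) + 1):
            d[0][j] = j
        for i in range(1, len(r) + 1):
            for j in range(1, len(h) + 1):
                sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
                d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(r)][len(h)] / len(r)

    print(wer_sketch("a b c d", "a x c d"))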
iman.tsne
~~~~~~~~~
- ``plot(fea, label)``: Plot t-SNE visualization of features.
iman.xvector
~~~~~~~~~~~~
- ``xvec, lda_xvec, gender = get(filename, model(model_path, model_name, model_speaker_num))``: Extract x-vectors for speaker recognition.
iman.web
~~~~~~~~
- ``change_wallpaper()``: Change system wallpaper.
- ``dl(url)``: Download a file from a URL.
- ``links(url, filter_text=None)``: Extract links from a URL.
- ``imgs(url, filter_text=None)``: Extract images from a URL.
iman.matlab
~~~~~~~~~~~
- ``np2mat(param, mat_file_name)``: Convert NumPy array to MATLAB file.
- ``dic2mat(param, mat_file_name)``: Convert dictionary to MATLAB file.
- ``mat2dic(mat_file_name)``: Convert MATLAB file to dictionary.
iman.Features
~~~~~~~~~~~~~
- ``mfcc_fea, mspec, log_energy = mfcc.SB.Get(wav, sample_rate)``: Compute MFCC with SpeechBrain (input must be read with torchaudio).
- ``mfcc.SB.Normal(MFCC)``: Mean-variance normalization of MFCC with SpeechBrain.
- ``mfcc_fea, log_energy = mfcc.LS.Get(wav, sample_rate, le=False)``: Compute MFCC with Librosa (input is NumPy array).
- ``mfcc.LS.Normal(MFCC, win_len=150)``: Mean-variance normalization (local, 150 frames left and right).
iman.AUG
~~~~~~~~
- ``Add_Noise(data, noise, snr)``: Add noise to audio data.
- ``Add_Reverb(data, rir)``: Add reverberation to audio data.
- ``Add_NoiseT(data, noise, snr)``: Add noise using torchaudio.
- ``Add_ReverbT(data, rir)``: Add reverberation using torchaudio.
- ``mp3(fname, fout, sr_out, ratio, ffmpeg_path='c:\\ffmpeg.exe')``: Convert to MP3.
- ``speed(fname, fout, ratio, ffmpeg_path='c:\\ffmpeg.exe')``: Change audio speed.
- ``volume(fname, fout, ratio, ffmpeg_path='c:\\ffmpeg.exe')``: Adjust audio volume.
iman.sad_torch_mfcc | iman.sad_tf
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- **Initializer** (PyTorch):
.. code-block:: python
seg = Segmenter(batch_size, vad_type=['sad'|'vad'], sr=8000, model_path="c:\\sad_model_pytorch.pth", tq=1, ffmpeg_path='c:\\ffmpeg.exe', complete_output=False, device='cuda', input_type='file')
- **Initializer** (TensorFlow):
.. code-block:: python
seg = Segmenter(batch_size, vad_type=['sad'|'vad'], sr=16000, model_path="c:\\keras_speech_music_noise_cnn.hdf5", gender_path="c:\\keras_male_female_cnn.hdf5", ffmpeg_path='c:\\ffmpeg.exe', detect_gender=False, complete_output=False, device='cuda', input_type='file')
- ``isig, wav, mfcc = seg(fname)``: Process audio file (MFCC output only in PyTorch model).
- ``nmfcc = filter_fea(isig, mfcc, sr, max_time)``: Filter features (PyTorch only).
- ``mfcc = MVN(mfcc)``: Mean-variance normalization (PyTorch only).
- ``isig = filter_output(isig, max_silence, ignore_small_speech_segments, max_speech_len, split_speech_bigger_than)``: Filter output when ``complete_output=False``.
- ``seg2aud(isig, filename)``: Convert segments to audio.
- ``seg2json(isig)``: Convert segments to JSON.
- ``seg2Gender_Info(isig)``: Extract gender information from segments.
- ``seg2Info(isig)``: Extract segment information.
- ``wav_speech, wav_noise = filter_sig(isig, wav, sr)``: Get speech and noise parts (when ``complete_output=False``).
- **sad_tf.segmentero**:
.. code-block:: python
from sad_tf.segmentero import Segmenter # Use ONNX models (requires onnxruntime)
iman.sad_torch_mfcc_speaker
~~~~~~~~~~~~~~~~~~~~~~~~~~~
- **Initializer**:
.. code-block:: python
seg = Segmenter(batch_size, vad_type=['sad'|'vad'], sr=8000, model_path="c:\\sad_model_pytorch.pth", max_time=120, tq=1, ffmpeg_path='c:\\ffmpeg.exe', device='cuda', pad=False)
- ``mfcc, length_sec = seg(fname)``: Process an audio file; returns the MFCC (padded to ``max_time`` when ``pad=True``) and the audio length in seconds.
iman.sad_tf_mlp_speaker
~~~~~~~~~~~~~~~~~~~~~~~
- **Initializer**:
.. code-block:: python
seg = Segmenter(batch_size, vad_type=['sad'|'vad'], sr=8000, model_path="sad_tf_mlp.h5", max_time=120, tq=1, ffmpeg_path='c:\\ffmpeg.exe', device='cuda', pad=False)
- ``mfcc, length_sec = seg(fname)``: Process an audio file; returns the MFCC (padded to ``max_time`` when ``pad=True``) and the audio length in seconds.
iman.Report
~~~~~~~~~~~
- **Initializer**:
.. code-block:: python
r = Report.rep(log_dir=None)
- ``WS(_type, _name, value, itr)``: Add scalar to TensorBoard.
- ``WT(_type, _name, _str, itr)``: Add text to TensorBoard.
- ``WG(pytorch_model, example_input)``: Add graph to TensorBoard.
- ``WI(_type, _name, images, itr)``: Add image to TensorBoard.
iman.par
~~~~~~~~
- **Parallel Processing**:
.. code-block:: python
if __name__ == '__main__':
res = par.par(files, func, worker=4, args=[]) # func defined as: def func(fname, _args): ...
iman.Image
~~~~~~~~~~
- ``Image.convert(fname_pattern, ext='jpg', ofolder=None, w=-1, h=-1, level=100, worker=4, ffmpeg_path='c:\\ffmpeg.exe')``: Convert images to specified format.
- ``Image.resize(fname_pattern, ext='jpg', ofolder=None, w=2, h=2, worker=4, ffmpeg_path='c:\\ffmpeg.exe')``: Resize images to 1/w and 1/h.
iman.Boors
~~~~~~~~~~
- ``Boors.get(sahm)``: Get stock information.
iman.Text
~~~~~~~~~
- **Initializer**:
.. code-block:: python
norm = Text.normal("c:\\Replace_List.txt")
- ``norm.rep(str)``: Replace text based on normalization rules.
- ``norm.from_file(filename, file_out=None)``: Normalize text from a file.
iman.num2fa
~~~~~~~~~~~
- ``words(number)``: Convert number to Persian words.
iman.Rar
~~~~~~~~
- ``rar(fname, out="", rar_path=r"C:\\Program Files\\WinRAR\\winrar.exe")``: Create RAR archive.
- ``zip(fname, out="", rar_path=r"C:\\Program Files\\WinRAR\\winrar.exe")``: Create ZIP archive.
- ``unrar(fname, out="", rar_path=r"C:\\Program Files\\WinRAR\\winrar.exe")``: Extract RAR archive.
- ``unzip(fname, out="", rar_path=r"C:\\Program Files\\WinRAR\\winrar.exe")``: Extract ZIP archive.
iman.Enhance
~~~~~~~~~~~~
- ``Enhance.Dereverb(pattern, out_fol, sr=16000, batchsize=16, device="cuda", model_path=r"C:\\UVR-DeEcho-DeReverb.pth")``: Dereverberate audio files.
- ``Enhance.Denoise(pattern, out_fol, sr=16000, batchsize=16, device="cuda", model_path=r"C:\\UVR-DeNoise-Lite.pth")``: Denoise audio files.
iman.tf
~~~~~~~
- ``flops(model)``: Get FLOPs of a TensorFlow model.
- ``param(model)``: Get parameter count of a TensorFlow model.
- ``paramp(model)``: Get parameter count and print model layers.
- ``gpu()``: Return True if GPU is available.
- ``gpun()``: Return number of GPUs.
- ``limit()``: Limit GPU memory allocation for TensorFlow models.
iman.torch
~~~~~~~~~~
- ``param(model)``: Get parameter and trainable count of a PyTorch model.
- ``paramp(model)``: Get parameter count and print model layers.
- ``layers(model)``: Get layers of a PyTorch model.
- ``gpu()``: Return True if GPU is available.
- ``gpun()``: Return number of GPUs.
iman.yt
~~~~~~~
- ``dl(url)``: Download a YouTube video.
- ``list_formats(url)``: List available formats for a YouTube link.
iman.svad
~~~~~~~~~
- ``segments, wav = svad(filename, sampling_rate=16000, min_speech_duration_ms=250, max_speech_duration_s=float('inf'), min_silence_duration_ms=100)``: Run fast speech activity detection and return speech segments.
iman.word
~~~~~~~~~~
- ``documnet = Documnet()``: Create a new document (note: the class name is spelled ``Documnet`` in the library).
- ``adds(document, top_margin=4, bottom_margin=4, start_page=1)``: Add a section with a footer page number.
- ``addpic(document, image_path, width, align="c")``: Add a picture to the document.
- ``addp(document, text, rtl=True, align="J", font="B Nazanin", size=12, bold=False, color=RGBColor(0, 0, 0), line_spacing=0.8, right_indent=0, left_indent=0, space_after=0, bullet=False)``: Add a paragraph to the document.
- ``get_docx_page_count_win32(docx_filename)``: Count the pages of a DOCX file (Windows, via win32).
- ``convert_to_persian_numbers(text)``: Convert digits in text to Persian numerals.
- ``docx_to_pdf(docx_path, pdf_path)``: Convert DOCX to PDF.
- ``merge_pdfs(pdf_files, output_filename)``: Merge PDF files.
Dependencies
------------
The ``iman`` package requires the following:
- **Python Packages**: ``numpy``, ``torch``, ``tensorflow``, ``speechbrain``, ``librosa``, ``matplotlib``, ``pandas``, ``onnxruntime`` (for ONNX models).
- **External Tools**: ``ffmpeg``, ``ffprobe``, ``WinRAR`` (for RAR/ZIP operations).
- **Optional**: Pre-trained models (e.g., for VAD, x-vector, dereverberation) specified in function arguments.
Check the package's ``requirements.txt`` for specific versions.
Contributing
------------
Contributions are welcome! Submit bug reports, feature requests, or pull requests via the project's GitHub repository (if available). Follow contribution guidelines and include tests for new features.
License
-------
``iman`` is licensed under the MIT License (assumed). See the LICENSE file for details.
Contact
-------
For support, contact the maintainers via the project's GitHub page or email (if provided).
.. note::
Some functions require external tools (e.g., ``ffmpeg``, ``WinRAR``) or pre-trained models. Ensure these are configured correctly. Paths like ``c:\\ffmpeg.exe`` are Windows-specific; adjust for other operating systems.
| null | Iman Sarraf | imansarraf@gmail.com | null | null | null | python, iman | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Education",
"Programming Language :: Python :: 3"
] | [] | null | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/4.0.2 CPython/3.7.0 | 2026-02-21T06:34:09.749214 | iman-2.0.4.tar.gz | 2,135,885 | c7/27/9a9d47061261ccce97bcad4eb9845384f6df115396ec815472b7e5f3d018/iman-2.0.4.tar.gz | source | sdist | null | false | 2aa5e4ab88e049fb56733fd9371d7c74 | 2754440106911e08e93d1bafe6f2df39c7dd18ad67b3e4bbe9c0827050fed6c8 | c7279a9d47061261ccce97bcad4eb9845384f6df115396ec815472b7e5f3d018 | null | [] | 253 |
2.4 | pinescript-mcp | 0.5.1 | MCP server providing Pine Script v6 documentation for AI assistants | # pinescript-mcp
<!-- mcp-name: io.gitlab.articat1066/pinescript-v6-mcp -->
MCP server providing Pine Script v6 documentation for AI assistants (Claude, etc.).
Enables AI to:
- Look up Pine Script functions and validate syntax
- Access official documentation for indicators, strategies, and visuals
- Understand Pine Script concepts (execution model, repainting, etc.)
- Generate correct v6 code with proper function references
## Usage with Claude Code
Add to `.mcp.json` in your project (recommended):
```json
{
"mcpServers": {
"pinescript-docs": {
"type": "stdio",
"command": "uvx",
"args": ["pinescript-mcp"]
}
}
}
```
## Usage with Claude Desktop
Add to `~/.config/claude/claude_desktop_config.json`:
```json
{
"mcpServers": {
"pinescript-docs": {
"command": "uvx",
"args": ["pinescript-mcp"]
}
}
}
```
## Usage with Google Antigravity
Add to `~/.gemini/antigravity/mcp_config.json`:
```json
{
"mcpServers": {
"pinescript-docs": {
"command": "uvx",
"args": ["pinescript-mcp"]
}
}
}
```
Or use the public HTTP server (no install):
```json
{
"mcpServers": {
"pinescript-docs": {
"serverUrl": "https://pinescript-mcp.fly.dev/mcp"
}
}
}
```
## Version Pinning
Documentation is bundled in the package - each version contains a frozen snapshot. For reproducible agent behavior, pin to a specific version:
```json
{
"mcpServers": {
"pinescript-docs": {
"command": "uvx",
"args": ["pinescript-mcp==0.5.1"]
}
}
}
```
Without pinning, `uvx pinescript-mcp` gets the latest version.
## Alternative: pip install
If you prefer pip over uvx:
```bash
pip install pinescript-mcp==0.5.1
```
Note: `"command": "pinescript-mcp"` only works if the install location is in your PATH. The `uvx` method above is more reliable as it handles environments automatically.
## Public Server (No Install Required)
Connect directly to the hosted server - no Python or uvx needed:
```json
{
"mcpServers": {
"pinescript-docs": {
"type": "http",
"url": "https://pinescript-mcp.fly.dev/mcp/"
}
}
}
```
## Available Tools
| Tool | Description |
|------|-------------|
| `resolve_topic(query)` | **Start here** - Map a question to relevant docs |
| `get_doc(path)` | Read a specific documentation file |
| `get_section(path, header)` | Read a specific section from a documentation file |
| `search_docs(query)` | Full-text search across all docs |
| `list_docs()` | List all documentation files with descriptions |
| `get_functions(namespace)` | List valid functions (ta, strategy, etc.) |
| `validate_function(name)` | Check if a function exists in Pine v6 |
| `lint_script(script)` | Lint Pine Script (15 rules, free, no API cost) |
| `get_manifest()` | Get routing guidance for topics |
## Available Prompts
| Prompt | Description |
|--------|-------------|
| `debug_error(error, code)` | Analyze a Pine Script compilation error |
| `convert_v5_to_v6(code)` | Convert Pine Script v5 code to v6 syntax |
| `explain_function(name)` | Explain a Pine Script function in detail |
## Example Queries
- "How do I create a trailing stop in Pine Script?"
- "What's the difference between var and varip?"
- "Is ta.supertrend a valid function?"
- "How do I avoid repainting with request.security?"
## Documentation Coverage
The server bundles comprehensive Pine Script v6 documentation:
- **Concepts**: Execution model, timeframes, colors, methods, objects, common errors
- **Reference**: Types, variables, constants, keywords, operators, annotations
- **Functions**: Technical analysis (ta.*), strategies, requests, drawings, collections
- **Visuals**: Plots, fills, shapes, tables, lines, boxes, backgrounds
- **Writing Scripts**: Style guide, debugging, optimization, limitations
## Why Use This?
AI models often hallucinate Pine Script functions or use deprecated v5 syntax. This MCP server grounds the AI in actual v6 documentation, preventing:
- Made-up function names (e.g., `ta.hull` doesn't exist, use `ta.hma`)
- Deprecated syntax from v4/v5
- Incorrect parameter orders
- Missing required arguments
## Lint Rules (15 total)
The `lint_script` tool checks for common Pine Script issues without using AI:
| Rule | Type | Description |
|------|------|-------------|
| E001 | Error | `input.enum()` with const string constants |
| E003 | Error | Return type keyword on function declaration |
| E005 | Error | `study()` → use `indicator()` |
| E006 | Error | `security()` → use `request.security()` |
| E007 | Error | `alertcondition()` in strategy scripts |
| E009 | Error | `format.currency` doesn't exist |
| E010 | Error | Direct `na` comparison (use `na()` function) |
| E012 | Error | Unknown/hallucinated function |
| E013 | Error | `input.enum()` with options array |
| E014 | Error | `strategy()` missing title parameter |
| E015 | Error | Mismatched string quotes |
| W001 | Warning | Missing `//@version=6` |
| W003 | Warning | `lookahead_on` without justification |
| W004 | Warning | High `max_bars_back` value |
| W005 | Warning | Potentially unused variable |
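For illustration, a couple of these checks can be approximated with simple regular expressions (a hypothetical sketch, not the package's actual rule engine):

```python
import re

def lint(script):
    """Toy linter covering two of the rules above:
    W001 (missing //@version=6) and E010 (direct na comparison)."""
    findings = []
    if not re.search(r"^//@version=6\s*$", script, re.MULTILINE):
        findings.append(("W001", "Missing //@version=6"))
    if re.search(r"[=!]=\s*na\b", script):
        findings.append(("E010", "Direct na comparison; use na() instead"))
    return findings
```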
## Performance
The HTTP server (Fly.io) uses response caching with 1-hour TTL for documentation lookups, making repeated queries very fast.
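Such a cache can be pictured as a dictionary keyed by request, with a per-entry expiry timestamp (an illustrative sketch only; the hosted server's actual implementation may differ):

```python
import time

class TTLCache:
    """Minimal response cache with per-entry time-to-live."""
    def __init__(self, ttl_seconds=3600):  # 1-hour TTL by default
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # expired: evict lazily on read
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```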
## Development
```bash
# Clone and install locally
git clone https://gitlab.com/articat1066/pinescript-v6-mcp
cd pinescript-mcp
pip install -e .
# Run the server
pinescript-mcp
```
## License
MIT
| text/markdown | bch | null | null | null | null | ai, claude, documentation, mcp, pine-script, tradingview | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Documentation"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastmcp>=2.0.0",
"uvicorn>=0.27.0"
] | [] | [] | [] | [
"Homepage, https://gitlab.com/articat1066/pinescript-v6-mcp",
"Documentation, https://gitlab.com/articat1066/pinescript-v6-mcp#readme",
"Repository, https://gitlab.com/articat1066/pinescript-v6-mcp"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-21T06:33:30.952312 | pinescript_mcp-0.5.1.tar.gz | 319,979 | e2/b6/cb564be07fbd996c9e7277ec9e9ead99268d6460e27961011a231b219aa0/pinescript_mcp-0.5.1.tar.gz | source | sdist | null | false | 00e7b08cac854f1bfb926f189e3ed47a | ed6819d49809d658ad54fa3f9a2d16332ad048e0b40493f3b7e3d9c69593c20e | e2b6cb564be07fbd996c9e7277ec9e9ead99268d6460e27961011a231b219aa0 | MIT | [
"LICENSE"
] | 239 |
2.4 | wikipedia-mcp | 2.0.0 | MCP server for Wikipedia API | # Wikipedia MCP Server
[](https://smithery.ai/server/@Rudra-ravi/wikipedia-mcp)
A Model Context Protocol (MCP) server that retrieves information from Wikipedia to provide context to Large Language Models (LLMs). This tool helps AI assistants access factual information from Wikipedia to ground their responses in reliable sources.
<a href="https://glama.ai/mcp/servers/@Rudra-ravi/wikipedia-mcp">
<img width="380" height="200" src="https://glama.ai/mcp/servers/@Rudra-ravi/wikipedia-mcp/badge" alt="Wikipedia Server MCP server" />
</a>

## Overview
The Wikipedia MCP server provides real-time access to Wikipedia information through a standardized Model Context Protocol interface. This allows LLMs to retrieve accurate and up-to-date information directly from Wikipedia to enhance their responses.
## Verified By
[](https://mseep.ai/app/rudra-ravi-wikipedia-mcp)
## Features
- **Search Wikipedia**: Find articles matching specific queries with enhanced diagnostics
- **Retrieve Article Content**: Get full article text with all information
- **Article Summaries**: Get concise summaries of articles
- **Section Extraction**: Retrieve specific sections from articles
- **Link Discovery**: Find links within articles to related topics
- **Related Topics**: Discover topics related to a specific article
- **Multi-language Support**: Access Wikipedia in different languages by specifying the `--language` or `-l` argument when running the server (e.g., `wikipedia-mcp --language ta` for Tamil).
- **Country/Locale Support**: Use intuitive country codes like `--country US`, `--country China`, or `--country TW` instead of language codes. Automatically maps to appropriate Wikipedia language variants.
- **Language Variant Support**: Support for language variants such as Chinese traditional/simplified (e.g., `zh-hans` for Simplified Chinese, `zh-tw` for Traditional Chinese), Serbian scripts (`sr-latn`, `sr-cyrl`), and other regional variants.
- **Optional caching**: Cache API responses for improved performance using --enable-cache
- **Modern MCP Transport Support**: Supports `stdio`, `http`, and `streamable-http` (with legacy `sse` compatibility).
- **Optional MCP Transport Auth**: Secure network transports with `--auth-mode static` or `--auth-mode jwt`.
- **Google ADK Compatibility**: Fully compatible with Google ADK agents and other AI frameworks that use strict function calling schemas
## Installation
### Using pipx (Recommended for Claude Desktop)
The best way to install for Claude Desktop usage is with pipx, which installs the command globally:
```bash
# Install pipx if you don't have it
pip install pipx
pipx ensurepath
# Install the Wikipedia MCP server
pipx install wikipedia-mcp
```
This ensures the `wikipedia-mcp` command is available in Claude Desktop's PATH.
### Installing via Smithery
To install wikipedia-mcp for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@Rudra-ravi/wikipedia-mcp):
```bash
npx -y @smithery/cli install @Rudra-ravi/wikipedia-mcp --client claude
```
### From PyPI (Alternative)
You can also install directly from PyPI:
```bash
pip install wikipedia-mcp
```
**Note**: If you use this method and encounter connection issues with Claude Desktop, you may need to use the full path to the command in your configuration. See the [Configuration](#configuration-for-claude-desktop) section for details.
### Using a virtual environment
```bash
# Create a virtual environment
python3 -m venv venv
# Activate the virtual environment
source venv/bin/activate
# Install the package
pip install git+https://github.com/rudra-ravi/wikipedia-mcp.git
```
### From source
```bash
# Clone the repository
git clone https://github.com/rudra-ravi/wikipedia-mcp.git
cd wikipedia-mcp
# Create a virtual environment
python3 -m venv wikipedia-mcp-env
source wikipedia-mcp-env/bin/activate
# Install in development mode
pip install -e .
```
## Usage
### Running the server
```bash
# If installed with pipx
wikipedia-mcp
# If installed in a virtual environment
source venv/bin/activate
wikipedia-mcp
# Specify transport protocol (default: stdio)
wikipedia-mcp --transport stdio # For Claude Desktop
wikipedia-mcp --transport http --host 0.0.0.0 --port 8080 --path /mcp
wikipedia-mcp --transport streamable-http --host 0.0.0.0 --port 8080 --path /mcp
wikipedia-mcp --transport sse # Legacy compatibility transport
# Specify language (default: en for English)
wikipedia-mcp --language ja # Example for Japanese
wikipedia-mcp --language zh-hans # Example for Simplified Chinese
wikipedia-mcp --language zh-tw # Example for Traditional Chinese (Taiwan)
wikipedia-mcp --language sr-latn # Example for Serbian Latin script
# Specify country/locale (alternative to language codes)
wikipedia-mcp --country US # English (United States)
wikipedia-mcp --country China # Chinese Simplified
wikipedia-mcp --country Taiwan # Chinese Traditional (Taiwan)
wikipedia-mcp --country Japan # Japanese
wikipedia-mcp --country Germany # German
wikipedia-mcp --country france # French (case insensitive)
# List all supported countries
wikipedia-mcp --list-countries
# Optional: Specify host/port/path for network transport (use 0.0.0.0 for containers)
wikipedia-mcp --transport http --host 0.0.0.0 --port 8080 --path /mcp
# Optional: Enable caching
wikipedia-mcp --enable-cache
# Optional: Use Personal Access Token to avoid rate limiting (403 errors)
wikipedia-mcp --access-token your_wikipedia_token_here
# Or set via environment variable
export WIKIPEDIA_ACCESS_TOKEN=your_wikipedia_token_here
wikipedia-mcp
# Optional: Secure incoming MCP network requests with static bearer token
wikipedia-mcp --transport http --auth-mode static --auth-token your_mcp_token --host 0.0.0.0 --port 8080
# Optional: Secure incoming MCP network requests with JWT validation
wikipedia-mcp --transport http --auth-mode jwt --auth-jwks-uri https://issuer/.well-known/jwks.json --auth-issuer https://issuer
# Security note: prefer http/streamable-http + auth-mode for exposed network transport.
# Combine options
wikipedia-mcp --country Taiwan --enable-cache --access-token your_wikipedia_token --transport http --path /mcp --port 8080
```
### Docker/Kubernetes
When running inside containers, bind the HTTP MCP server to all interfaces and map
the container port to the host or service:
```bash
# Build and run with Docker
docker build -t wikipedia-mcp .
docker run --rm -p 8080:8080 wikipedia-mcp --transport http --host 0.0.0.0 --port 8080 --path /mcp
```
Kubernetes example (minimal):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: wikipedia-mcp
spec:
replicas: 1
selector:
matchLabels:
app: wikipedia-mcp
template:
metadata:
labels:
app: wikipedia-mcp
spec:
containers:
- name: server
image: your-repo/wikipedia-mcp:latest
args: ["--transport", "http", "--host", "0.0.0.0", "--port", "8080", "--path", "/mcp"]
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: wikipedia-mcp
spec:
selector:
app: wikipedia-mcp
ports:
- name: http
port: 8080
targetPort: 8080
```
### Configuration for Claude Desktop
Add the following to your Claude Desktop configuration file:
**Option 1: Using command name (requires `wikipedia-mcp` to be in PATH)**
```json
{
"mcpServers": {
"wikipedia": {
"command": "wikipedia-mcp"
}
}
}
```
**Option 2: Using full path (recommended if you get connection errors)**
```json
{
"mcpServers": {
"wikipedia": {
"command": "/full/path/to/wikipedia-mcp"
}
}
}
```
**Option 3: With country/language specification**
```json
{
"mcpServers": {
"wikipedia-us": {
"command": "wikipedia-mcp",
"args": ["--country", "US"]
},
"wikipedia-taiwan": {
"command": "wikipedia-mcp",
"args": ["--country", "TW"]
},
"wikipedia-japan": {
"command": "wikipedia-mcp",
"args": ["--country", "Japan"]
}
}
}
```
To find the full path, run: `which wikipedia-mcp`
**Configuration file locations:**
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%/Claude/claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`
> **Note**: If you encounter connection errors, see the [Troubleshooting](#common-issues) section for solutions.
## Documentation Index
- CLI usage and options: see [`docs/CLI.md`](docs/CLI.md)
- API and MCP tools/resources: see [`docs/API.md`](docs/API.md)
- Architecture overview: see [`docs/ARCHITECTURE.md`](docs/ARCHITECTURE.md)
- User guide and troubleshooting: see [`docs/USER_GUIDE.md`](docs/USER_GUIDE.md)
- Development guide: see [`docs/DEVELOPMENT.md`](docs/DEVELOPMENT.md)
- Testing guide: see [`docs/TESTING.md`](docs/TESTING.md)
## Available MCP Tools
The Wikipedia MCP server provides the following tools for LLMs to interact with Wikipedia:
Each tool is also exposed with a `wikipedia_`-prefixed alias (for example, `wikipedia_get_article`) for improved cross-server discoverability.
### `search_wikipedia`
Search Wikipedia for articles matching a query.
**Parameters:**
- `query` (string): The search term
- `limit` (integer, optional): Maximum number of results to return (default: 10)
**Returns:**
- A list of search results with titles, snippets, and metadata
### `get_article`
Get the full content of a Wikipedia article.
**Parameters:**
- `title` (string): The title of the Wikipedia article
**Returns:**
- Article content including text, summary, sections, links, and categories
### `get_summary`
Get a concise summary of a Wikipedia article.
**Parameters:**
- `title` (string): The title of the Wikipedia article
**Returns:**
- A text summary of the article
### `get_sections`
Get the sections of a Wikipedia article.
**Parameters:**
- `title` (string): The title of the Wikipedia article
**Returns:**
- A structured list of article sections with their content
### `get_links`
Get the links contained within a Wikipedia article.
**Parameters:**
- `title` (string): The title of the Wikipedia article
**Returns:**
- A list of links to other Wikipedia articles
### `get_coordinates`
Get the coordinates of a Wikipedia article.
**Parameters:**
- `title` (string): The title of the Wikipedia article
**Returns:**
- A dictionary containing coordinate information including:
- `title`: The article title
- `pageid`: The page ID
- `coordinates`: List of coordinate objects with latitude, longitude, and metadata
- `exists`: Whether the article exists
- `error`: Any error message if retrieval failed
### `get_related_topics`
Get topics related to a Wikipedia article based on links and categories.
**Parameters:**
- `title` (string): The title of the Wikipedia article
- `limit` (integer, optional): Maximum number of related topics (default: 10)
**Returns:**
- A list of related topics with relevance information
### `summarize_article_for_query`
Get a summary of a Wikipedia article tailored to a specific query.
**Parameters:**
- `title` (string): The title of the Wikipedia article
- `query` (string): The query to focus the summary on
- `max_length` (integer, optional): Maximum length of the summary (default: 250)
**Returns:**
- A dictionary containing the title, query, and the focused summary
### `summarize_article_section`
Get a summary of a specific section of a Wikipedia article.
**Parameters:**
- `title` (string): The title of the Wikipedia article
- `section_title` (string): The title of the section to summarize
- `max_length` (integer, optional): Maximum length of the summary (default: 150)
**Returns:**
- A dictionary containing the title, section title, and the section summary
### `extract_key_facts`
Extract key facts from a Wikipedia article, optionally focused on a specific topic within the article.
**Parameters:**
- `title` (string): The title of the Wikipedia article
- `topic_within_article` (string, optional): A specific topic within the article to focus fact extraction
- `count` (integer, optional): Number of key facts to extract (default: 5)
**Returns:**
- A dictionary containing the title, topic, and a list of extracted facts
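Over the stdio transport these tools are invoked with standard MCP `tools/call` JSON-RPC requests. A sketch of a call to `search_wikipedia` (field values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_wikipedia",
    "arguments": { "query": "quantum computing", "limit": 5 }
  }
}
```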
## Country/Locale Support
The Wikipedia MCP server supports intuitive country and region codes as an alternative to language codes. This makes it easier to access region-specific Wikipedia content without needing to know language codes.
### Supported Countries and Regions
Use `--list-countries` to see all supported countries:
```bash
wikipedia-mcp --list-countries
```
This will display countries organized by language, for example:
```
Supported Country/Locale Codes:
========================================
en: US, USA, United States, UK, GB, Canada, Australia, ...
zh-hans: CN, China
zh-tw: TW, Taiwan
ja: JP, Japan
de: DE, Germany
fr: FR, France
es: ES, Spain, MX, Mexico, AR, Argentina, ...
pt: PT, Portugal, BR, Brazil
ru: RU, Russia
ar: SA, Saudi Arabia, AE, UAE, EG, Egypt, ...
```
### Usage Examples
```bash
# Major countries by code
wikipedia-mcp --country US # United States (English)
wikipedia-mcp --country CN # China (Simplified Chinese)
wikipedia-mcp --country TW # Taiwan (Traditional Chinese)
wikipedia-mcp --country JP # Japan (Japanese)
wikipedia-mcp --country DE # Germany (German)
wikipedia-mcp --country FR # France (French)
wikipedia-mcp --country BR # Brazil (Portuguese)
wikipedia-mcp --country RU # Russia (Russian)
# Countries by full name (case insensitive)
wikipedia-mcp --country "United States"
wikipedia-mcp --country China
wikipedia-mcp --country Taiwan
wikipedia-mcp --country Japan
wikipedia-mcp --country Germany
wikipedia-mcp --country france # Case insensitive
# Regional variants
wikipedia-mcp --country HK # Hong Kong (Traditional Chinese)
wikipedia-mcp --country SG # Singapore (Simplified Chinese)
wikipedia-mcp --country "Saudi Arabia" # Arabic
wikipedia-mcp --country Mexico # Spanish
```
### Country-to-Language Mapping
The server automatically maps country codes to appropriate Wikipedia language editions:
- **English-speaking**: US, UK, Canada, Australia, New Zealand, Ireland, South Africa → `en`
- **Chinese regions**:
- CN, China → `zh-hans` (Simplified Chinese)
- TW, Taiwan → `zh-tw` (Traditional Chinese - Taiwan)
- HK, Hong Kong → `zh-hk` (Traditional Chinese - Hong Kong)
- SG, Singapore → `zh-sg` (Simplified Chinese - Singapore)
- **Major languages**: JP→`ja`, DE→`de`, FR→`fr`, ES→`es`, IT→`it`, RU→`ru`, etc.
- **Regional variants**: Supports 140+ countries and regions
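The lookup behind `--country` amounts to a case-insensitive table from country names and codes to language codes. An illustrative subset in Python (the real table covers 140+ entries and may differ in detail):

```python
# Hypothetical subset of the country -> language mapping.
COUNTRY_TO_LANG = {
    "us": "en", "usa": "en", "united states": "en", "uk": "en",
    "cn": "zh-hans", "china": "zh-hans",
    "tw": "zh-tw", "taiwan": "zh-tw", "hk": "zh-hk",
    "jp": "ja", "japan": "ja",
    "de": "de", "germany": "de",
    "fr": "fr", "france": "fr",
}

def resolve_country(country):
    """Map a country name/code to a Wikipedia language code."""
    lang = COUNTRY_TO_LANG.get(country.strip().lower())
    if lang is None:
        raise ValueError(f"Unsupported country/locale: {country!r}")
    return lang
```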
### Error Handling
If you specify an unsupported country, you'll get a helpful error message:
```bash
$ wikipedia-mcp --country INVALID
Error: Unsupported country/locale: 'INVALID'.
Supported country codes include: US, USA, UK, GB, CA, AU, NZ, IE, ZA, CN.
Use --language parameter for direct language codes instead.
Use --list-countries to see supported country codes.
```
## Language Variants
The Wikipedia MCP server supports language variants for languages that have multiple writing systems or regional variations. This feature is particularly useful for Chinese, Serbian, Kurdish, and other languages with multiple scripts or regional differences.
### Supported Language Variants
#### Chinese Language Variants
- `zh-hans` - Simplified Chinese
- `zh-hant` - Traditional Chinese
- `zh-tw` - Traditional Chinese (Taiwan)
- `zh-hk` - Traditional Chinese (Hong Kong)
- `zh-mo` - Traditional Chinese (Macau)
- `zh-cn` - Simplified Chinese (China)
- `zh-sg` - Simplified Chinese (Singapore)
- `zh-my` - Simplified Chinese (Malaysia)
#### Serbian Language Variants
- `sr-latn` - Serbian Latin script
- `sr-cyrl` - Serbian Cyrillic script
#### Kurdish Language Variants
- `ku-latn` - Kurdish Latin script
- `ku-arab` - Kurdish Arabic script
#### Norwegian Language Variants
- `no` - Norwegian (automatically mapped to Bokmål)
### Usage Examples
```bash
# Access Simplified Chinese Wikipedia
wikipedia-mcp --language zh-hans
# Access Traditional Chinese Wikipedia (Taiwan)
wikipedia-mcp --language zh-tw
# Access Serbian Wikipedia in Latin script
wikipedia-mcp --language sr-latn
# Access Serbian Wikipedia in Cyrillic script
wikipedia-mcp --language sr-cyrl
```
### How Language Variants Work
When you specify a language variant like `zh-hans`, the server:
1. Maps the variant to the base Wikipedia language (e.g., `zh` for Chinese variants)
2. Uses the base language for API connections to the Wikipedia servers
3. Includes the variant parameter in API requests to get content in the specific variant
4. Returns content formatted according to the specified variant's conventions
This approach ensures optimal compatibility with Wikipedia's API while providing access to variant-specific content and formatting.
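Step 1 above can be pictured as a small lookup from variant code to base language (an illustrative subset; the server's actual table is larger):

```python
# Hypothetical subset of the variant -> base-language mapping.
VARIANT_BASE = {
    "zh-hans": "zh", "zh-hant": "zh", "zh-tw": "zh", "zh-hk": "zh",
    "zh-mo": "zh", "zh-cn": "zh", "zh-sg": "zh", "zh-my": "zh",
    "sr-latn": "sr", "sr-cyrl": "sr",
    "ku-latn": "ku", "ku-arab": "ku",
}

def split_variant(code):
    """Return (base_language, variant_or_None) for a language code."""
    if code in VARIANT_BASE:
        return VARIANT_BASE[code], code
    return code, None
```

The base language selects the Wikipedia API endpoint; the variant, when present, is passed along with each request.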
## Example Prompts
Once the server is running and configured with Claude Desktop, you can use prompts like:
### General Wikipedia queries:
- "Tell me about quantum computing using the Wikipedia information."
- "Summarize the history of artificial intelligence based on Wikipedia."
- "What does Wikipedia say about climate change?"
- "Find Wikipedia articles related to machine learning."
- "Get me the introduction section of the article on neural networks from Wikipedia."
- "What are the coordinates of the Eiffel Tower?"
- "Find the latitude and longitude of Mount Everest from Wikipedia."
- "Get coordinate information for famous landmarks in Paris."
### Using country-specific Wikipedia:
- "Search Wikipedia China for information about the Great Wall." (uses Chinese Wikipedia)
- "Tell me about Tokyo from Japanese Wikipedia sources."
- "What does German Wikipedia say about the Berlin Wall?"
- "Find information about the Eiffel Tower from French Wikipedia."
- "Get Taiwan Wikipedia's article about Taiwanese cuisine."
### Language variant examples:
- "Search Traditional Chinese Wikipedia for information about Taiwan."
- "Find Simplified Chinese articles about modern China."
- "Get information from Serbian Latin Wikipedia about Belgrade."
## MCP Resources
The server also provides MCP resources (similar to HTTP endpoints but for MCP):
- `search/{query}`: Search Wikipedia for articles matching the query
- `article/{title}`: Get the full content of a Wikipedia article
- `summary/{title}`: Get a summary of a Wikipedia article
- `sections/{title}`: Get the sections of a Wikipedia article
- `links/{title}`: Get the links in a Wikipedia article
- `coordinates/{title}`: Get the coordinates of a Wikipedia article
- `summary/{title}/query/{query}/length/{max_length}`: Get a query-focused summary of an article
- `summary/{title}/section/{section_title}/length/{max_length}`: Get a summary of a specific article section
- `facts/{title}/topic/{topic_within_article}/count/{count}`: Extract key facts from an article
## Development
### Local Development Setup
```bash
# Clone the repository
git clone https://github.com/rudra-ravi/wikipedia-mcp.git
cd wikipedia-mcp
# Create a virtual environment
python3 -m venv venv
source venv/bin/activate
# Install the package in development mode
pip install -e .
# Install development and test dependencies
pip install -r requirements-dev.txt
# Run the server
wikipedia-mcp
```
### Project Structure
- `wikipedia_mcp/`: Main package
  - `__main__.py`: Entry point for the package
  - `server.py`: MCP server implementation
  - `wikipedia_client.py`: Wikipedia API client
  - `api/`: API implementation
  - `core/`: Core functionality
  - `utils/`: Utility functions
- `tests/`: Test suite
  - `test_basic.py`: Basic package tests
  - `test_cli.py`: Command-line interface tests
  - `test_server_tools.py`: Comprehensive server and tool tests
## Testing
The project includes a comprehensive test suite to ensure reliability and functionality.
### Test Structure
The test suite is organized in the `tests/` directory with the following test files:
- **`test_basic.py`**: Basic package functionality tests
- **`test_cli.py`**: Command-line interface and transport tests
- **`test_server_tools.py`**: Comprehensive tests for all MCP tools and Wikipedia client functionality
### Running Tests
#### Run All Tests
```bash
# Install test dependencies
pip install -r requirements-dev.txt
# Run all tests
python -m pytest tests/ -v
# Run tests with coverage
python -m pytest tests/ --cov=wikipedia_mcp --cov-report=html
```
#### Run Specific Test Categories
```bash
# Run only unit tests (excludes integration tests)
python -m pytest tests/ -v -m "not integration"
# Run only integration tests (requires internet connection)
python -m pytest tests/ -v -m "integration"
# Run specific test file
python -m pytest tests/test_server_tools.py -v
```
### Test Categories
#### Unit Tests
- **WikipediaClient Tests**: Mock-based tests for all client methods
- Search functionality
- Article retrieval
- Summary extraction
- Section parsing
- Link extraction
- Related topics discovery
- **Server Tests**: MCP server creation and tool registration
- **CLI Tests**: Command-line interface functionality
#### Integration Tests
- **Real API Tests**: Tests that make actual calls to Wikipedia API
- **End-to-End Tests**: Complete workflow testing
### Test Configuration
The project uses `pytest.ini` for test configuration:
```ini
[pytest]
markers =
    integration: marks tests as integration tests (may require network access)
    slow: marks tests as slow running
testpaths = tests
addopts = -v --tb=short
```
### Continuous Integration
All tests are designed to:
- Run reliably in CI/CD environments
- Handle network failures gracefully
- Provide clear error messages
- Cover edge cases and error conditions
### Adding New Tests
When contributing new features:
1. Add unit tests for new functionality
2. Include both success and failure scenarios
3. Mock external dependencies (Wikipedia API)
4. Add integration tests for end-to-end validation
5. Follow existing test patterns and naming conventions
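When mocking the Wikipedia API as point 3 suggests, `unittest.mock` keeps unit tests off the network entirely. A minimal sketch — the `search` signature here is illustrative and not a guaranteed match for `wikipedia_mcp.wikipedia_client`:

```python
from unittest.mock import MagicMock

def test_search_returns_results():
    # Stand-in for the Wikipedia client; no network access needed.
    client = MagicMock()
    client.search.return_value = [{"title": "Python (programming language)"}]
    results = client.search("Python", limit=10)
    assert results[0]["title"].startswith("Python")
    client.search.assert_called_once_with("Python", limit=10)

def test_search_handles_empty_results():
    # Failure scenario: the API returns no matches.
    client = MagicMock()
    client.search.return_value = []
    assert client.search("zzz-no-such-article") == []
```

Because the client is mocked, tests like these belong in the unit (non-integration) category and run reliably in CI.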
## Troubleshooting
### Common Issues
#### Claude Desktop Connection Issues
**Problem**: Claude Desktop shows errors like `spawn wikipedia-mcp ENOENT` or cannot find the command.
**Cause**: This occurs when the `wikipedia-mcp` command is installed in a user-specific location (like `~/.local/bin/`) that's not in Claude Desktop's PATH.
**Solutions**:
1. **Use full path to the command** (Recommended):
```json
{
"mcpServers": {
"wikipedia": {
"command": "/home/username/.local/bin/wikipedia-mcp"
}
}
}
```
To find your exact path, run: `which wikipedia-mcp`
2. **Install with pipx for global access**:
```bash
pipx install wikipedia-mcp
```
Then use the standard configuration:
```json
{
"mcpServers": {
"wikipedia": {
"command": "wikipedia-mcp"
}
}
}
```
3. **Create a symlink to a global location**:
```bash
sudo ln -s ~/.local/bin/wikipedia-mcp /usr/local/bin/wikipedia-mcp
```
#### Other Issues
- **Article Not Found**: Check the exact spelling of article titles
- **Rate Limiting**: Wikipedia API has rate limits; consider adding delays between requests
- **Large Articles**: Some Wikipedia articles are very large and may exceed token limits
## Troubleshooting Search Issues
If you're experiencing empty search results, use the new diagnostic tools:
### 1. Test Connectivity
Use the `test_wikipedia_connectivity` tool to check if you can reach Wikipedia's API:
```json
{
"tool": "test_wikipedia_connectivity"
}
```
This returns diagnostics including:
- Connection status (`success` or `failed`)
- Response time in milliseconds
- Site/host information when successful
- Error details when connectivity fails
### 2. Enhanced Search Error Information
The `search_wikipedia` tool now returns detailed metadata:
```json
{
"tool": "search_wikipedia",
"arguments": {
"query": "Ada Lovelace",
"limit": 10
}
}
```
Example response:
```json
{
"query": "Ada Lovelace",
"results": [...],
"count": 5,
"status": "success",
"language": "en"
}
```
When no results are found, you receive:
```json
{
"query": "nonexistent",
"results": [],
"status": "no_results",
"count": 0,
"language": "en",
"message": "No search results found. This could indicate connectivity issues, API errors, or simply no matching articles."
}
```
### 3. Common Search Issues and Solutions
- **Empty results**: Run the connectivity test, verify query spelling, try broader terms.
- **Connection errors**: Check firewall or proxy settings, ensure `*.wikipedia.org` is reachable.
- **API limits**: Requests with `limit > 500` are automatically capped; negative values reset to the default (10).
### 4. Debugging with Verbose Logging
Launch the server with debug logging for deeper insight:
```bash
wikipedia-mcp --log-level DEBUG
```
This emits the request parameters, response status codes, and any warnings returned by the API.
## Understanding the Model Context Protocol (MCP)
The Model Context Protocol (MCP) is not a traditional HTTP API but a specialized protocol for communication between LLMs and external tools. Key characteristics:
- Uses stdio for local integrations and streamable HTTP for network integrations (`sse` retained for legacy compatibility)
- Designed specifically for AI model interaction
- Provides standardized formats for tools, resources, and prompts
- Integrates directly with Claude and other MCP-compatible AI systems
Claude Desktop acts as the MCP client, while this server provides the tools and resources that Claude can use to access Wikipedia information.
## Contributing
Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Connect with the Author
- 🌐 Portfolio: [ravikumar-dev.me](https://ravikumar-dev.me)
- 📝 Blog: [Medium](https://medium.com/@Ravikumar-e)
- 💼 LinkedIn: [in/ravi-kumar-e](https://linkedin.com/in/ravi-kumar-e)
- 🐦 Twitter: [@Ravikumar_d3v](https://twitter.com/Ravikumar_d3v)
| text/markdown | null | Ravi Kumar E <ravikumar@ravikumar-dev.me> | null | null | null | mcp, wikipedia, model-context-protocol, ai, llm | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastmcp>=2.3.0",
"wikipedia-api>=0.8.0",
"requests>=2.31.0",
"python-dotenv>=1.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-mock>=3.10.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/rudra-ravi/wikipedia-mcp",
"Repository, https://github.com/rudra-ravi/wikipedia-mcp",
"Issues, https://github.com/rudra-ravi/wikipedia-mcp/issues",
"Documentation, https://github.com/rudra-ravi/wikipedia-mcp#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:33:19.483427 | wikipedia_mcp-2.0.0.tar.gz | 54,755 | 02/98/c53bc57f58d52038a1ab30bca29da995982464011ae876341c9d7db06df7/wikipedia_mcp-2.0.0.tar.gz | source | sdist | null | false | d477d4ddca226030d08469562499b4b5 | d75cfb2f012638d6016ac23afdadb46ab0dd822d7a8c8adcd556283fae1dc15d | 0298c53bc57f58d52038a1ab30bca29da995982464011ae876341c9d7db06df7 | MIT | [
"LICENSE"
] | 298 |
2.1 | py-mind-memo | 0.1.0 | A lightweight and intuitive mindmap like tool built with tkinter. | # py_mind_memo - Python Mindmap like Tool
A lightweight, intuitive mindmap-style memo tool built with nothing but Python's standard `tkinter` library.
It combines comfortable, XMind-like keyboard operation with a modern, polished visual design.
## Features
* **Intuitive operation**: A keyboard-shortcut-driven workflow lets you capture thoughts quickly.
* **Beautiful design**: Automatic per-branch coloring, smooth tapered curves, and S-shaped Bézier connections.
* **Dynamic layout**: Topic spacing and positions adjust automatically to text length; long text wraps automatically.
* **Drag & drop**: Rearrange parent-child relationships between topics freely with the mouse.
* **Collapse/expand**: Fold any topic with children in one click to keep even complex maps tidy.
* **Lightweight & portable**: Uses only the Python standard library, so installation is trivial. The UI uses intuitive English labels.
## Screenshot

- Beautiful curves extend from the **Root Topic**, and each branch is vividly color-coded.
- A colored line under each topic's text gives a minimal, clean look.
## Installation & Running
Requires Python 3.8 or later.
```bash
# Install from PyPI
pip install py_mind_memo
# Launch the tool
py_mind_memo
```
## Usage & Shortcut Keys
### Keyboard
| Key | Action |
| :--- | :--- |
| **Tab** | Add a **child topic (New Topic)** under the selected topic |
| **Enter** | Add a **sibling topic** at the same level as the selected topic |
| **F2** | **Edit the text** of the selected topic |
| **Enter (while editing)** | **Confirm and finish** editing |
| **Ctrl + Enter (while editing)** | Insert a **line break** |
| **Delete** | **Delete** the selected topic (and all its descendants) |
| **Arrow keys** | **Navigate** intuitively between topics |
| **Ctrl + S** | **Save** the mindmap |
| **Ctrl + Shift + S** | **Save As** (save under a new name) |
| **Ctrl + O** | **Open** a saved mindmap |
### Mouse
| Action | Effect |
| :--- | :--- |
| **Left click** | **Select** a topic. The selected topic is highlighted in blue. |
| **Double click** | Start the topic's **edit mode**. |
| **Left drag** | **Move** a topic onto another topic (it becomes a child of the drop target). |
| **Wheel** | **Scroll vertically**. |
| **Shift + Wheel** | **Scroll horizontally**. |
| **Click the icon** | Click the round icon to the right (or left) of a topic to toggle **collapse/expand**. While collapsed, the number of hidden child topics is shown. |
*Dragging the pointer to the edge of the window auto-scrolls the canvas.*
### Rich Text Formatting
Write the following tags inside a topic's text to style parts of it.
| Tag | Effect | Example |
| :--- | :--- | :--- |
| `<b>...</b>` | **Bold** | `<b>important</b> item` |
| `<i>...</i>` | *Italic* | `<i>Italic</i> text` |
| `<u>...</u>` | <u>Underline</u> | `<u>emphasize</u> this` |
| `<c:#RRGGBB>...</c>` | Custom **color** | `<c:#FF0000>red</c>` |
*Tags can also be nested: `<b><i>bold and italic</i></b>`*
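As a rough illustration of how nested tags like these can be resolved (this is not the tool's actual implementation; `parse_runs` and its run format are invented here), a topic string can be split into styled runs with one regex pass:

```python
import re

# Matches <b>, </b>, <i>, <u>, and <c:#RRGGBB> / </c>.
TAG = re.compile(r"<(/?)(b|i|u|c)(?::(#[0-9A-Fa-f]{6}))?>")

def parse_runs(text):
    """Return a list of (styles, color, text) runs for a tagged string."""
    runs, styles, color, pos = [], set(), None, 0
    color_stack = []
    for m in TAG.finditer(text):
        if m.start() > pos:  # emit the text before this tag
            runs.append((frozenset(styles), color, text[pos:m.start()]))
        closing, name, hexcolor = m.group(1), m.group(2), m.group(3)
        if name == "c":
            if closing:  # restore the color active before this <c:...>
                color = color_stack.pop() if color_stack else None
            else:
                color_stack.append(color)
                color = hexcolor
        else:
            (styles.discard if closing else styles.add)(name)
        pos = m.end()
    if pos < len(text):
        runs.append((frozenset(styles), color, text[pos:]))
    return runs
```

Each run can then be rendered with the matching `tkinter` font and fill color.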
## Design Philosophy
This tool was built with one priority above all: never slow down the speed of thought.
The goal is to let you expand ideas rapidly with the keyboard alone, without reaching for the mouse, while the finished map is laid out beautifully and automatically.
## Tech Stack
* **Language**: Python 3.8+
* **GUI Library**: tkinter (Standard Library)
* **Design Pattern**: MVC (Model-View-Controller)
| text/markdown | matsuuramasakazu | matsuuramasakazu@outlook.jp | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/matsuuramasakazu/py_mind_memo | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.2 | 2026-02-21T06:33:13.705937 | py_mind_memo-0.1.0-py3-none-any.whl | 23,394 | a2/82/04b6a9ff25704c74fdbc424379f3a557b44a39b8b66b652c6ef3fbf86f43/py_mind_memo-0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | f4a788006ec9093aaa67145ed870d595 | fb5812defc4088fb8efa7dbf46d86113e2b80a10426cb005e079379af30c4662 | a28204b6a9ff25704c74fdbc424379f3a557b44a39b8b66b652c6ef3fbf86f43 | null | [] | 88 |
2.4 | verdictswarm-mcp | 0.1.2 | MCP server for AI-powered crypto token risk analysis — 6-agent consensus scoring, rug-pull detection, and security audits via the VerdictSwarm API. | # 🔍 VerdictSwarm MCP Server
[](https://github.com/vswarm-ai/verdictswarm)
[](https://github.com/vswarm-ai/verdictswarm)
[](LICENSE)
[](https://modelcontextprotocol.io)
**The first crypto token scanner available via MCP.** Give any AI agent the ability to analyze tokens for rug pulls, scams, and risk — powered by VerdictSwarm's 6-AI-agent consensus system.
Works with Claude Desktop, OpenClaw, Cursor, Codex, Windsurf, and any MCP-compatible client.
---
## Why?
AI trading agents are making on-chain decisions with no risk analysis. VerdictSwarm MCP gives them instant access to:
- **6-agent consensus scoring** — not one model's opinion, six independent AI agents debate the risk
- **On-chain security audits** — mint authority, freeze authority, honeypot detection, LP lock status
- **Rug pull detection** — holder concentration, bundle/sniper activity, contract age analysis
- **Human-readable reports** — markdown reports ready to share or embed
One tool call. Sub-second cached responses. No blockchain node required.
## Quick Start
### Install & Run
```bash
# Install from PyPI (recommended)
pip install verdictswarm-mcp
VS_API_KEY=your_key verdictswarm-mcp
# Or install from GitHub
pip install git+https://github.com/vswarm-ai/verdictswarm.git#subdirectory=mcp-server
VS_API_KEY=your_key verdictswarm-mcp
# Or with uvx (zero-install)
VS_API_KEY=your_key uvx git+https://github.com/vswarm-ai/verdictswarm.git#subdirectory=mcp-server
# Or clone and run
git clone https://github.com/vswarm-ai/verdictswarm.git
cd verdictswarm/mcp-server
uv run verdictswarm-mcp
```
### Claude Desktop
Add to your `claude_desktop_config.json`:
```json
{
"mcpServers": {
"verdictswarm": {
"command": "uvx",
"args": ["git+https://github.com/vswarm-ai/verdictswarm.git#subdirectory=mcp-server"],
"env": {
"VS_API_KEY": "your_key_here"
}
}
}
}
```
Then ask Claude: *"Check if this token is safe: `DezXAZ8z7PnrnRJjz3wXBoRgixCa6xjnB7YaB1pPB263` on Solana"*
### OpenClaw
```yaml
mcpServers:
verdictswarm:
command: uvx
args: ["verdictswarm-mcp"]
env:
VS_API_KEY: your_key_here
```
### No API Key?
The server works without a key at free-tier limits (3 scans/day, basic scores only). Get a key at [verdictswarm.ai](https://verdictswarm.ai) for full access.
## Tools
| Tool | Description | Use When |
|------|-------------|----------|
| `scan_token` | Full 6-agent consensus analysis | Deep due diligence on a specific token |
| `get_quick_score` | Fast cached score lookup (0-100) | Quick check before buying |
| `check_rug_risk` | Focused security/rug assessment | "Is this a scam?" |
| `get_trending_risky` | Trending high-risk tokens | Market surveillance (coming soon) |
| `get_token_report` | Formatted markdown report | Sharing analysis with others |
### Example: Quick Score
```
User: What's the risk score for BONK?
Agent: [calls get_quick_score("DezXAZ8z7PnrnRJjz3wXBoRgixCa6xjnB7YaB1pPB263")]
→ Score: 74/100 (Grade B) — LOW-MEDIUM risk
```
### Example: Rug Check
```
User: Is this new memecoin safe? 7xKXtg2CW87d97TXJSDpbD5jBkheTqA83TZRuJosgAsU
Agent: [calls check_rug_risk("7xKXtg...")]
→ DANGER
🚨 Liquidity NOT burned or locked
⚠️ Mint authority active
⚠️ Token is less than 24 hours old
⚠️ Bundle/sniper activity detected
```
## Resources & Prompts
**Resources** (reference data for agents):
- `verdictswarm://help` — Tool usage guide
- `verdictswarm://scoring` — Score interpretation (0-100 scale, grades A-F)
**Prompts** (pre-built workflows):
- `should_i_buy(token_address)` — Full investment analysis with recommendation
- `portfolio_check(tokens)` — Batch risk assessment across holdings
## Supported Chains
| Chain | Status |
|-------|--------|
| Solana | ✅ Full support |
| Ethereum | ✅ Full support |
| Base | ✅ Full support |
| BSC | ✅ Full support |
## Scoring Guide
| Score | Grade | Risk Level | Meaning |
|-------|-------|------------|---------|
| 80-100 | A | LOW | Relatively safe, established project |
| 70-79 | B | LOW-MEDIUM | Minor concerns, generally okay |
| 60-69 | C | MEDIUM | Proceed with caution |
| 40-59 | D | HIGH | Significant red flags |
| 0-39 | F | CRITICAL | Likely scam or rug pull |
## Configuration
| Environment Variable | Default | Description |
|---------------------|---------|-------------|
| `VS_API_KEY` | *(empty — free tier)* | Your VerdictSwarm API key |
| `VS_API_URL` | `https://verdictswarm-production.up.railway.app` | API base URL |
| `VS_TIMEOUT` | `120` | Request timeout in seconds |
## Architecture
```
MCP Client (Claude, Cursor, OpenClaw, Codex...)
│
│ MCP Protocol (stdio)
▼
┌──────────────────────────┐
│ VerdictSwarm MCP Server │ ← This package (thin wrapper)
│ FastMCP + Python │
└──────────┬───────────────┘
│ HTTP (httpx)
▼
┌──────────────────────────┐
│ VerdictSwarm API │ ← Existing backend (Railway)
│ 6 AI agents + on-chain │
└──────────────────────────┘
```
The MCP server is a stateless wrapper — all intelligence lives in the VerdictSwarm API. This means:
- Lightweight deployment (no GPU, no blockchain node)
- Single source of truth for scan logic
- API-level rate limiting and caching already work
## Development
```bash
git clone https://github.com/vswarm-ai/verdictswarm.git
cd verdictswarm/mcp-server
pip install -e ".[dev]"
pytest # 47 tests, ~0.3s
```
## License
MIT — see [LICENSE](LICENSE).
## Links
- **Website:** [verdictswarm.ai](https://verdictswarm.ai)
- **API Docs:** [docs.verdictswarm.ai](https://docs.verdictswarm.ai)
- **GitHub:** [vswarm-ai/verdictswarm](https://github.com/vswarm-ai/verdictswarm)
- **MCP Spec:** [modelcontextprotocol.io](https://modelcontextprotocol.io)
---
*Built by [Sentien Labs](https://sentienlabs.com) — AI-operated crypto intelligence infrastructure.*
| text/markdown | null | Sentien Labs <dev@sentienlabs.com> | null | null | null | mcp, crypto, token-scanner, rug-pull, risk-analysis, solana, ethereum, defi, ai-agents, verdictswarm | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security",
"Topic :: Office/Business :: Financial",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp[cli]<2,>=1.25.0",
"httpx>=0.27",
"pydantic>=2.0",
"pytest>=8.0; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://verdictswarm.ai",
"Documentation, https://github.com/vswarm-ai/verdictswarm/tree/main/mcp-server",
"Repository, https://github.com/vswarm-ai/verdictswarm",
"Issues, https://github.com/vswarm-ai/verdictswarm/issues",
"Changelog, https://github.com/vswarm-ai/verdictswarm/blob/main/mcp-server/CHANGELOG.md"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T06:32:47.381838 | verdictswarm_mcp-0.1.2.tar.gz | 23,725 | 21/63/e53c848159916db35ef632625feb9a47e522725b02a39867011ae6266ec7/verdictswarm_mcp-0.1.2.tar.gz | source | sdist | null | false | e8d3db6d473994a650f432ae53b3a8f8 | 835a512d61309ba9416d89b7e8a339754bc694a2d053564003ede346bd308a24 | 2163e53c848159916db35ef632625feb9a47e522725b02a39867011ae6266ec7 | MIT | [
"LICENSE"
] | 236 |
2.4 | s3fifo | 0.0.1 | Implementation of S3-FIFO cache algorithm. FIFO queues are all you need for cache evictions. | # s3fifo
[](https://img.shields.io/github/v/release/psiace/s3fifo)
[](https://github.com/psiace/s3fifo/actions/workflows/main.yml?query=branch%3Amain)
[](https://codecov.io/gh/psiace/s3fifo)
[](https://img.shields.io/github/license/psiace/s3fifo)
Python implementation of the S3-FIFO cache algorithm.
It provides:
- `S3FIFOCache`: low-level cache container.
- `s3fifo_cache`: sync decorator with an API similar to `functools.lru_cache`.
- `as3fifo_cache`: async decorator inspired by `async-lru`.
## Installation
```bash
pip install s3fifo
```
## Quick Start
### Sync decorator
```python
from s3fifo import s3fifo_cache
@s3fifo_cache(maxsize=128)
def load_user(user_id: int) -> dict:
return {"id": user_id}
load_user(1)
load_user(1)
print(load_user.cache_info())
# CacheInfo(hits=1, misses=1, maxsize=128, currsize=1)
```
### Async decorator
```python
import asyncio
from s3fifo import as3fifo_cache
@as3fifo_cache(maxsize=128)
async def fetch_user(user_id: int) -> dict:
await asyncio.sleep(0.01)
return {"id": user_id}
async def main() -> None:
await asyncio.gather(fetch_user(1), fetch_user(1), fetch_user(1))
print(fetch_user.cache_info())
# CacheInfo(hits=2, misses=1, maxsize=128, currsize=1)
await fetch_user.cache_close()
asyncio.run(main())
```
`as3fifo_cache` deduplicates concurrent calls for the same key and shares one in-flight task.
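That single-flight behaviour can be sketched in a few lines of stdlib `asyncio` (illustrative only; `single_flight` and its internals are invented here and are not the package's code):

```python
import asyncio
import functools

def single_flight(fn):
    """Concurrent awaits for the same positional args share one in-flight task."""
    inflight = {}

    @functools.wraps(fn)
    async def wrapper(*args):  # simplified: positional args only
        key = args
        if key not in inflight:
            inflight[key] = asyncio.ensure_future(fn(*args))
            # Drop the entry once the shared task completes.
            inflight[key].add_done_callback(lambda _: inflight.pop(key, None))
        return await inflight[key]

    return wrapper

calls = 0

@single_flight
async def fetch(x):
    global calls
    calls += 1          # counts how often the body actually runs
    await asyncio.sleep(0.01)
    return x * 2

async def main():
    # Three concurrent awaits for the same key...
    return await asyncio.gather(fetch(3), fetch(3), fetch(3))

results = asyncio.run(main())
# ...but only one underlying call is made.
```

`as3fifo_cache` layers the S3-FIFO cache on top of this idea, so completed results are also retained across calls.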
## Configuration
The S3-FIFO policy parameters are configurable across all entry points and default to the paper/reference values:
- `small_size_ratio=0.10`
- `ghost_size_ratio=0.90`
- `move_to_main_threshold=2`
Example:
```python
from s3fifo import s3fifo_cache
@s3fifo_cache(
maxsize=256,
small_size_ratio=0.20,
ghost_size_ratio=0.50,
move_to_main_threshold=3,
)
def f(x: int) -> int:
return x
```
You can inspect active settings via `cache_parameters()`.
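To make the parameters concrete, here is a toy, entry-count-only approximation of the policy in stdlib Python (invented for illustration; the real `S3FIFOCache` differs in detail, and ghost sizing is simplified here):

```python
from collections import deque, OrderedDict

class MiniS3FIFO:
    """Toy S3-FIFO sketch: new keys enter a small FIFO; on eviction, keys
    read at least `move_to_main_threshold` times are promoted to the main
    FIFO, others leave only a 'ghost' record (key, no value). A re-inserted
    ghost key is admitted straight to main."""

    def __init__(self, maxsize, small_size_ratio=0.10, move_to_main_threshold=2):
        self.small_cap = max(1, int(maxsize * small_size_ratio))
        self.main_cap = maxsize - self.small_cap
        self.threshold = move_to_main_threshold
        self.small, self.main = OrderedDict(), OrderedDict()
        self.ghost = deque(maxlen=maxsize)  # recently evicted keys only
        self.freq = {}

    def get(self, key, default=None):
        for queue in (self.small, self.main):
            if key in queue:
                self.freq[key] = min(self.freq.get(key, 0) + 1, 3)
                return queue[key]
        return default

    def put(self, key, value):
        if key in self.small:
            self.small[key] = value
            return
        if key in self.main:
            self.main[key] = value
            return
        # Ghost hit: the key was evicted recently, so admit it to main.
        target = self.main if key in self.ghost else self.small
        target[key] = value
        self.freq.setdefault(key, 0)
        self._evict()

    def _evict(self):
        while len(self.small) > self.small_cap:
            k, v = self.small.popitem(last=False)
            if self.freq.get(k, 0) >= self.threshold:
                self.main[k] = v          # promoted: read often enough
            else:
                self.ghost.append(k)      # remembered, value dropped
                self.freq.pop(k, None)
        while len(self.main) > self.main_cap:
            k, _ = self.main.popitem(last=False)
            self.freq.pop(k, None)
```

Raising `small_size_ratio` gives one-hit-wonder keys more time to prove themselves; raising `move_to_main_threshold` makes promotion to main stricter.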
## API Summary
### `S3FIFOCache`
```python
S3FIFOCache(
maxsize: int,
*,
small_size_ratio: float = 0.10,
ghost_size_ratio: float = 0.90,
move_to_main_threshold: int = 2,
)
```
Methods and supported operations:
- `get(key, default=None)`
- `put(key, value)`
- `remove(key, include_ghost=True)`
- `clear()`
- `len(cache)`
- `key in cache`
### `s3fifo_cache`
Usage:
- `@s3fifo_cache`
- `@s3fifo_cache(maxsize=128, typed=False, ...)`
Methods on decorated function:
- `cache_info()`
- `cache_clear()`
- `cache_parameters()`
Notes:
- `maxsize=0` disables storage and still counts misses.
- `typed=True` includes argument types in cache keys.
### `as3fifo_cache`
Usage:
- `@as3fifo_cache`
- `@as3fifo_cache(maxsize=128, typed=False, ...)`
Methods on decorated function:
- `cache_info()`
- `cache_clear()`
- `cache_invalidate(*args, **kwargs)`
- `cache_parameters()`
- `await cache_close(wait=False)`
## Limitations
- `as3fifo_cache` has event-loop affinity: a single cache instance must be used on one event loop.
- TTL and jitter policies are not implemented.
## Differences from Origin
- **Capacity is measured differently.**
The original implementation is byte-size based, while this package is entry-count based.
In practice, `maxsize` means “maximum number of cached keys”, not memory bytes.
- **The programming model is Python-first.**
The original design is request/object-size oriented; here you work with a plain Python cache object
(`S3FIFOCache`) and decorators (`s3fifo_cache`, `as3fifo_cache`).
- **Overwriting an existing key is treated as an update.**
Calling `put` with an existing key updates the value in place and does not trigger an extra eviction cycle.
## Development
```bash
make install
make check
make test
```
## License
[Apache-2.0](./LICENSE)
| text/markdown | null | Chojan Shang <psiace@apache.org> | null | null | null | python | [
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/psiace/s3fifo",
"Repository, https://github.com/psiace/s3fifo",
"Documentation, https://github.com/psiace/s3fifo"
] | uv/0.9.28 {"installer":{"name":"uv","version":"0.9.28","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T06:32:40.295843 | s3fifo-0.0.1-py3-none-any.whl | 13,154 | 26/60/a7743e12463bcab6a602cc5a60645d2c9b73862dc71fe5af1a8afba5fa3a/s3fifo-0.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 81dea64a1c9209e8b2ba16aac6c27110 | ab7d3d645c63c6ee72da982c8e84fd32c57b1604b67d44be12e242ba72c72e66 | 2660a7743e12463bcab6a602cc5a60645d2c9b73862dc71fe5af1a8afba5fa3a | null | [
"LICENSE"
] | 257 |
2.4 | homai | 0.0.1 | Multi-agent AI teams for Jupyter | # 🦅 Homai
**Multi-agent AI teams for Jupyter.**
Named after Homa (هما), the Persian mythological bird of fortune.
> Coming soon. Watch the repo: https://github.com/homai-ai/homai
| text/markdown | null | null | null | null | Apache-2.0 | jupyter, ai, agents, multi-agent, data-science | [
"Development Status :: 1 - Planning",
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/homai-ai/homai",
"Repository, https://github.com/homai-ai/homai"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T06:32:35.294781 | homai-0.0.1.tar.gz | 1,502 | 42/1d/b788623f16b3c678379bc343daca87ee9b83480b7e5874d88d0a6fd9cde7/homai-0.0.1.tar.gz | source | sdist | null | false | b546a2ad76611cc7f8773161233b2a5c | 38dd72e001899bf971f55611272cbe653d0bd2f25d6414f90103d7ff621e6d4d | 421db788623f16b3c678379bc343daca87ee9b83480b7e5874d88d0a6fd9cde7 | null | [] | 256 |
2.4 | shackle | 0.5.0 | Enterprise-grade async-first AI agent orchestration framework | # ShackleAI
[](https://pypi.org/project/shackle/)
[](https://pypi.org/project/shackle/)
[](LICENSE)
**Enterprise-grade, async-first AI agent orchestration framework.**
ShackleAI orchestrates LLM-powered agents into coordinated workflows. Agents form a **Fleet**, execute **Tasks** as part of a **Voyage**, drop **Pins** at checkpoints, and store knowledge in **Memory**.
## Quick Start
```bash
pip install shackle
```
```python
import asyncio
from shackle import Agent, Task, Voyage, ExecutionMode
async def main():
researcher = Agent(
name="Researcher",
role="Senior Research Analyst",
goal="Uncover key facts about AI agents",
)
writer = Agent(
name="Writer",
role="Content Writer",
goal="Write clear, engaging content",
)
research = Task(
description="Research the top 3 trends in AI agent orchestration.",
expected_output="A bullet-point brief with three trends.",
agent=researcher,
)
article = Task(
description="Write a 300-word blog post from the research.",
expected_output="A polished blog post in Markdown.",
agent=writer,
)
voyage = Voyage(
name="Research & Write",
agents=[researcher, writer],
tasks=[research, article],
execution_mode=ExecutionMode.SEQUENTIAL,
)
result = await voyage.sail()
for tr in result.task_results:
print(tr.output)
print(f"Total cost: ${result.total_cost:.6f}")
asyncio.run(main())
```
## Features
| Feature | Details |
|---|---|
| **Async-first** | Built on `asyncio` from the ground up |
| **Pydantic v2** | Fully typed models with runtime validation |
| **LiteLLM** | 100+ LLM providers via a single interface |
| **Cost tracking** | Per-call and per-voyage spend with configurable limits |
| **Execution modes** | Sequential, parallel, and hierarchical |
| **YAML workflows** | Define voyages declaratively |
| **FastAPI** | REST API for programmatic access |
| **Typer + Rich CLI** | `shackle sail workflow.yaml` |
| **structlog** | Structured JSON logging |
| **OpenTelemetry** | Distributed tracing out of the box |
| **Tool use** | Agents call tools via OpenAI function-calling |
| **Memory** | Short-term + long-term agent memory |
## YAML Workflow
```yaml
voyage:
name: Research & Write
execution_mode: sequential
agents:
- name: Researcher
role: Senior Analyst
goal: Find key facts
tasks:
- description: Research AI trends
expected_output: A research brief
agent: Researcher
```
```bash
shackle sail workflow.yaml
```
## CLI
```bash
shackle init my-project # scaffold a new project
shackle sail workflow.yaml # execute a voyage
shackle sail workflow.yaml -v # verbose mode
shackle validate workflow.yaml # validate YAML workflow
shackle cost workflow.yaml # estimate LLM costs
shackle logs # view Ship's Log entries
shackle dock # launch API server
shackle version # print version
```
## API
Launch the REST API with `shackle dock`:
```bash
shackle dock --port 8000
```
Key endpoints:
```bash
# Health check
curl http://localhost:8000/api/v1/health
# Create and run a voyage
curl -X POST http://localhost:8000/api/v1/voyages \
-H "Content-Type: application/json" \
-d '{"name": "My Voyage", "agents": [...], "tasks": [...]}'
# List voyages
curl http://localhost:8000/api/v1/voyages
# List available tools
curl http://localhost:8000/api/v1/tools
```
Full Swagger UI available at `http://localhost:8000/docs`.
## Berth Pass (Licensing)
ShackleAI uses **Berth Pass** — an offline Ed25519 license key system with three tiers:
| | Free | Pro | Enterprise |
|---|---|---|---|
| Agents per voyage | 5 | Unlimited | Unlimited |
| Tool calls per voyage | 100 | Unlimited | Unlimited |
| Built-in tools | 7 | 12+ | 73 |
| Cost tracking | Summary | Full breakdown | Full + export |
**Free tier** works out of the box — no key required.
To activate a Pro or Enterprise Berth Pass:
```bash
# Option 1: Environment variable
export SHACKLE_LICENSE_KEY="SHACKLE-PRO-<your-key>"
# Option 2: Key file
echo "SHACKLE-PRO-<your-key>" > ~/.shackle/berth-pass.key
```
Purchase a Berth Pass at [shackleai.com/pricing](https://shackleai.com/pricing).
## Docker Deployment
Deploy with Docker Compose (recommended for Pro+):
```bash
# Generate docker-compose.yml
shackle dock --docker
# Start the stack
docker compose up -d
```
Services included:
- **shackle-api** — FastAPI server (always)
- **shackle-db** — PostgreSQL (Pro+ profile)
- **shackle-cache** — Redis (Pro+ profile)
```bash
# Start with Pro services (PostgreSQL + Redis)
docker compose --profile pro up -d
```
## Configuration
All settings can be overridden via environment variables prefixed with `SHACKLE_`:
| Variable | Default | Description |
|---|---|---|
| `SHACKLE_DEFAULT_LLM` | `gpt-4o-mini` | Default model |
| `SHACKLE_MAX_COST_USD` | `10.0` | Cost limit per voyage |
| `SHACKLE_VERBOSE` | `false` | Pretty-print logs |
| `SHACKLE_LOG_LEVEL` | `INFO` | Minimum log level |
| `SHACKLE_TELEMETRY_ENABLED` | `false` | Enable OTel tracing |
## Documentation
Full docs at [shackle-ai.github.io/shackle](https://shackle-ai.github.io/shackle).
## License
ShackleAI is proprietary software. See [LICENSE](LICENSE) for terms.
Free to use for development and evaluation. Commercial use requires a license.
Contact: contact@shackleai.com
| text/markdown | null | ShackleAI <useshackleai@gmail.com> | null | null | Proprietary | agents, ai, async, llm, orchestration | [
"Development Status :: 4 - Beta",
"Framework :: AsyncIO",
"Framework :: FastAPI",
"Framework :: Pydantic :: 2",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx<1.0,>=0.27",
"litellm<1.70,>=1.55",
"pydantic-settings<3.0,>=2.0",
"pydantic<3.0,>=2.0",
"pyyaml<7.0,>=6.0",
"structlog<26.0,>=24.0",
"tenacity<10.0,>=8.0",
"aiosqlite<1.0,>=0.19; extra == \"all\"",
"alembic<2.0,>=1.13; extra == \"all\"",
"asyncpg<1.0,>=0.29; extra == \"all\"",
"boto3<2.0,>=1.34; extra == \"all\"",
"cryptography<47.0,>=46.0.5; extra == \"all\"",
"ddgs<10.0,>=9.0; extra == \"all\"",
"fastapi<1.0,>=0.110; extra == \"all\"",
"google-api-python-client<3.0,>=2.130; extra == \"all\"",
"google-cloud-vision<4.0,>=3.7; extra == \"all\"",
"mcp<2.0,>=1.0; extra == \"all\"",
"opentelemetry-api<2.0,>=1.24; extra == \"all\"",
"opentelemetry-exporter-otlp<2.0,>=1.24; extra == \"all\"",
"opentelemetry-sdk<2.0,>=1.24; extra == \"all\"",
"pygithub<3.0,>=2.3; extra == \"all\"",
"redis[hiredis]<6.0,>=5.0; extra == \"all\"",
"rich<14.0,>=13.0; extra == \"all\"",
"sendgrid<7.0,>=6.11; extra == \"all\"",
"slack-sdk<4.0,>=3.27; extra == \"all\"",
"sqlalchemy[asyncio]<3.0,>=2.0; extra == \"all\"",
"typer<1.0,>=0.12; extra == \"all\"",
"uvicorn[standard]<1.0,>=0.29; extra == \"all\"",
"boto3<2.0,>=1.34; extra == \"aws\"",
"rich<14.0,>=13.0; extra == \"cli\"",
"typer<1.0,>=0.12; extra == \"cli\"",
"aiosqlite<1.0,>=0.19; extra == \"db\"",
"alembic<2.0,>=1.13; extra == \"db\"",
"sqlalchemy[asyncio]<3.0,>=2.0; extra == \"db\"",
"aiosqlite<1.0,>=0.19; extra == \"dev\"",
"alembic<2.0,>=1.13; extra == \"dev\"",
"asyncpg<1.0,>=0.29; extra == \"dev\"",
"boto3<2.0,>=1.34; extra == \"dev\"",
"cryptography<47.0,>=46.0.5; extra == \"dev\"",
"ddgs<10.0,>=9.0; extra == \"dev\"",
"fastapi<1.0,>=0.110; extra == \"dev\"",
"google-api-python-client<3.0,>=2.130; extra == \"dev\"",
"google-cloud-vision<4.0,>=3.7; extra == \"dev\"",
"mcp<2.0,>=1.0; extra == \"dev\"",
"mypy>=1.10; extra == \"dev\"",
"opentelemetry-api<2.0,>=1.24; extra == \"dev\"",
"opentelemetry-exporter-otlp<2.0,>=1.24; extra == \"dev\"",
"opentelemetry-sdk<2.0,>=1.24; extra == \"dev\"",
"pre-commit>=3.7; extra == \"dev\"",
"pygithub<3.0,>=2.3; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest-mock>=3.14; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"redis[hiredis]<6.0,>=5.0; extra == \"dev\"",
"rich<14.0,>=13.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"sendgrid<7.0,>=6.11; extra == \"dev\"",
"slack-sdk<4.0,>=3.27; extra == \"dev\"",
"sqlalchemy[asyncio]<3.0,>=2.0; extra == \"dev\"",
"typer<1.0,>=0.12; extra == \"dev\"",
"types-pyyaml>=6.0; extra == \"dev\"",
"uvicorn[standard]<1.0,>=0.29; extra == \"dev\"",
"mkdocs-material>=9.5; extra == \"docs\"",
"mkdocstrings[python]>=0.24; extra == \"docs\"",
"google-api-python-client<3.0,>=2.130; extra == \"gcp\"",
"google-cloud-vision<4.0,>=3.7; extra == \"gcp\"",
"pygithub<3.0,>=2.3; extra == \"github\"",
"cryptography<47.0,>=46.0.5; extra == \"licensing\"",
"mcp<2.0,>=1.0; extra == \"mcp\"",
"asyncpg<1.0,>=0.29; extra == \"postgresql\"",
"build>=1.0; extra == \"protect\"",
"cython>=3.0; extra == \"protect\"",
"pyarmor>=9.2.3; extra == \"protect\"",
"twine>=5.0; extra == \"protect\"",
"redis[hiredis]<6.0,>=5.0; extra == \"redis\"",
"ddgs<10.0,>=9.0; extra == \"search\"",
"sendgrid<7.0,>=6.11; extra == \"sendgrid\"",
"fastapi<1.0,>=0.110; extra == \"server\"",
"uvicorn[standard]<1.0,>=0.29; extra == \"server\"",
"slack-sdk<4.0,>=3.27; extra == \"slack\"",
"opentelemetry-api<2.0,>=1.24; extra == \"telemetry\"",
"opentelemetry-exporter-otlp<2.0,>=1.24; extra == \"telemetry\"",
"opentelemetry-sdk<2.0,>=1.24; extra == \"telemetry\""
] | [] | [] | [] | [
"Homepage, https://shackleai.com",
"Documentation, https://shackle-ai.github.io/shackle-docs",
"Support, https://shackleai.com/support"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:31:29.288061 | shackle-0.5.0-py3-none-any.whl | 1,355,677 | a9/62/d83aef674367a7b544f624c1deefecc4e1e30034036ee35bd1bf2b18abe4/shackle-0.5.0-py3-none-any.whl | py3 | bdist_wheel | null | false | ffa7fb6996e8b37a3281dce6dfbae4a9 | 51e56bfc6b97ebacae232892a02c1c0d29b30bdda630ac62d48c591a54bacee9 | a962d83aef674367a7b544f624c1deefecc4e1e30034036ee35bd1bf2b18abe4 | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | pqfilt | 0.1.0 | Generic Parquet filtering tool (CLI + API) | # pqfilt
Generic Parquet predicate-pushdown filter tool — CLI and Python API.
`pqfilt` wraps `pyarrow.dataset` to let you filter Parquet files **before** they
are fully read into memory, using row-group-level predicate pushdown.
## Installation
```bash
pip install pqfilt
# or
uv add pqfilt
```
## Python API
```python
import pqfilt
# Simple filter
df = pqfilt.read("data.parquet", filters="vmag < 20")
# AND + OR with expression syntax
df = pqfilt.read("data.parquet", filters="(a < 30 & b > 50) | c == 1")
# Tuple syntax (flat AND)
df = pqfilt.read("data.parquet", filters=[("a", "<", 30), ("b", ">", 50)])
# DNF syntax (OR of ANDs)
df = pqfilt.read("data.parquet", filters=[
[("a", "<", 30)],
[("b", ">", 50)],
])
# Column selection + output
df = pqfilt.read("data/*.parquet", columns=["a", "b"], output="out.parquet")
```
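For intuition, the tuple and DNF forms both describe an OR of AND-groups. Below is a minimal pure-Python sketch of that evaluation (illustration only; pqfilt itself delegates the real filtering to `pyarrow.dataset` predicate pushdown at the row-group level):

```python
import operator

# Operators used in the examples above; pqfilt supports more.
OPS = {"<": operator.lt, ">": operator.gt, "==": operator.eq}

def dnf_matches(row, dnf):
    """True if the row satisfies any AND-group in a DNF filter list."""
    return any(
        all(OPS[op](row[col], value) for col, op, value in group)
        for group in dnf
    )

rows = [{"a": 10, "b": 40}, {"a": 40, "b": 60}, {"a": 40, "b": 40}]
dnf = [[("a", "<", 30)], [("b", ">", 50)]]  # a < 30 OR b > 50
print([dnf_matches(r, dnf) for r in rows])  # → [True, True, False]
```

The flat tuple syntax `[("a", "<", 30), ("b", ">", 50)]` is simply the single-group case: one AND-group, no OR.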
## CLI
```bash
# Basic filter
pqfilt data/*.parquet -f "vmag < 20" -o filtered.parquet
# AND + OR expression
pqfilt data/*.parquet -f "(a < 30 & b > 50) | c == 1" -o filtered.parquet
# Multiple -f flags (AND-ed together)
pqfilt data/*.parquet -f "vmag < 20" -f "dec > 30" -o filtered.parquet
# Column selection
pqfilt data/*.parquet -f "vmag < 20" --columns vmag,ra,dec -o filtered.parquet
# Membership filter
pqfilt data/*.parquet -f "desig in 1,2,3" -o filtered.parquet
```
### Column names with special characters
Columns containing operator characters can be backtick-quoted:
```python
pqfilt.read("data.parquet", filters="`alpha*360` > 100")
```
## License
MIT
| text/markdown | Yoonsoo P. Bach | null | null | null | null | data, filter, parquet, predicate-pushdown, pyarrow | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"pandas>=1.5",
"pyarrow>=10.0",
"pytest>=7.0; extra == \"dev\"",
"ruff; extra == \"dev\"",
"sphinx-rtd-theme>=2.0; extra == \"docs\"",
"sphinx>=7.0; extra == \"docs\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:30:41.197330 | pqfilt-0.1.0.tar.gz | 9,478,844 | 20/90/f96e620b06581617a7c38bb44ddaf6d10f296a3c4724fd039a7deb7ab40b/pqfilt-0.1.0.tar.gz | source | sdist | null | false | 49545a7f1e4c7ce0a171523cf2a56ab5 | 8beb532392bd45495fbacf5260b71d1ff52d4e04544d6267b95913b1fa9378fa | 2090f96e620b06581617a7c38bb44ddaf6d10f296a3c4724fd039a7deb7ab40b | MIT | [
"LICENSE"
] | 254 |
2.4 | tibet-forge | 0.5.0 | From vibe code to trusted tool. Automatic TIBET provenance, bloat detection, duplicate checking, and trust scoring. | # tibet-forge
**Zero-friction provenance. Built-in trust.**
Turn any Python project into a certified, auditable tool with one command. Cryptographic provenance baked in, not bolted on.
## Quick Start
```bash
pip install tibet-forge
tibet-forge certify ./my-project
```
```
╔════════════════════════════════════════════════════════╗
║ Humotica Trust Score: 87/100 (B+) ║
║ ✓ CERTIFIED ║
╚════════════════════════════════════════════════════════╝
Badge markdown:
[]
```
## What You Get
### Trust Scoring
Gamified quality metrics. See exactly where your code stands:
```
Humotica Trust Score: 87/100 (B+)
├── Code Quality: 85/100 (weight: 25%)
├── Security: 95/100 (weight: 25%)
├── Efficiency: 80/100 (weight: 20%)
├── Uniqueness: 70/100 (weight: 15%)
└── Provenance: 100/100 (weight: 15%)
```
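The composite score shown above is consistent with a plain weighted average of the components. A sketch of the arithmetic (not tibet-forge's actual implementation):

```python
import math

# Component scores and weights from the breakdown above
components = {
    "code_quality": (85, 0.25),
    "security":     (95, 0.25),
    "efficiency":   (80, 0.20),
    "uniqueness":   (70, 0.15),
    "provenance":  (100, 0.15),
}

total = sum(score * weight for score, weight in components.values())
score = math.floor(total + 0.5)  # round half up
print(score)  # → 87
```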
### Zero-Friction Provenance
TIBET audit trails injected automatically. Every function call tracked, every decision logged:
```python
# Your code stays clean
def login(user, password):
...
# tibet-forge adds provenance invisibly
@tibet_audit(action="login", erachter="User authentication")
def login(user, password):
...
```
### Hyper-Optimized Execution
Bloat detection powered by AST analysis. Know exactly what's slowing you down:
```
Efficiency Analysis:
✓ No heavy dependencies detected
• Consider: httpx instead of requests (3x faster, async-native)
• Unused import: 'os' in utils.py
```
### Smart Deduplication
Intent hashing finds existing tools that do what you're building:
```
Similar Projects Found:
• rapid-rag (65% similar)
Production-ready RAG with TIBET integration
https://pypi.org/project/rapid-rag/
```
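The README doesn't specify how "intent hashing" computes similarity. As a rough, hypothetical illustration of what a percentage-similarity score between two projects could look like, here is a Jaccard overlap of descriptive keywords (the keyword sets are invented for the example):

```python
def jaccard(a, b):
    """Jaccard similarity of two keyword sets, as a rounded percentage."""
    a, b = set(a), set(b)
    return round(100 * len(a & b) / len(a | b))

mine = {"rag", "retrieval", "tibet", "provenance", "llm"}
other = {"rag", "retrieval", "tibet", "pipeline", "llm", "embeddings"}
print(jaccard(mine, other))  # → 57
```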
## Commands
```bash
# Full certification with badge
tibet-forge certify .
# Quick scan
tibet-forge scan .
# Just the score
tibet-forge score .
# Preview TIBET injection
tibet-forge wrap --dry-run .
# Initialize config
tibet-forge init
```
## Trust Score Components
| Component | Weight | Measures |
|-----------|--------|----------|
| Code Quality | 25% | README, tests, docs, types |
| Security | 25% | No vulns, no hardcoded secrets |
| Efficiency | 20% | No bloat, minimal dependencies |
| Uniqueness | 15% | Novel contribution, not reinventing |
| Provenance | 15% | TIBET integration, audit readiness |
## The Badge
Projects scoring 70+ earn the Humotica Trust badge:
[](https://humotica.com/trust)
## Configuration
Create `tibet-forge.json`:
```json
{
"name": "my-project",
"scan_bloat": true,
"scan_duplicates": true,
"scan_security": true,
"auto_wrap": true,
"min_score_for_badge": 70
}
```
Or in `pyproject.toml`:
```toml
[tool.tibet-forge]
scan_bloat = true
min_score_for_badge = 70
```
## Why "Forge"?
Raw code goes in. Trusted tool comes out.
Like a blacksmith's forge - heat, hammer, harden. Your vibe code becomes production steel.
## Part of the Humotica Suite
| Package | Focus |
|---------|-------|
| [tibet-core](https://pypi.org/project/tibet-core/) | Provenance foundation |
| [rapid-rag](https://pypi.org/project/rapid-rag/) | RAG in 3 lines |
| [oomllama](https://pypi.org/project/oomllama/) | Smart LLM routing |
| **tibet-forge** | Zero-friction certification |
## License
MIT - Humotica
| text/markdown | Gemini IDD | "J. van de Meent" <jasper@humotica.com>, "R. AI" <info@humotica.com> | null | null | MIT | ai-native, audit, badge, bloat, certification, ci-cd, code-quality, forge, provenance, security, tibet, trust, vibe-coding | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Security",
"Topic :: Software Development :: Quality Assurance"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.24.0",
"rich>=13.0.0",
"tibet-core>=0.2.0",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"semgrep>=1.0.0; extra == \"full\"",
"tree-sitter>=0.20.0; extra == \"full\""
] | [] | [] | [] | [
"Homepage, https://humotica.com",
"Repository, https://github.com/humotica/tibet-forge"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-21T06:30:08.618125 | tibet_forge-0.5.0.tar.gz | 23,483 | 0c/17/135c770a723ae68421e3b9e9bc04bff9b14f72d6bf80bdc23d2fd42ab8af/tibet_forge-0.5.0.tar.gz | source | sdist | null | false | 7eb14531d75b6b28466975b3d66253a8 | 6261aa9a05bfb59d27adcbc3095616dd2ea88b5f182b60c0db9f15f82aceb3d8 | 0c17135c770a723ae68421e3b9e9bc04bff9b14f72d6bf80bdc23d2fd42ab8af | null | [
"LICENSE"
] | 256 |
2.4 | porki | 0.7.0 | Agentic orchestration. | # porki
<img src="https://i.imgur.com/A9d5wWF.png" width="300" height="300" />
<div style="display:flex; align-items:center;">
<img src="https://img.shields.io/badge/OpenAI-412991?style=for-the-badge&logo=openai&logoColor=white" />
<img src="https://img.shields.io/badge/Claude-D97757?style=for-the-badge&logo=anthropic&logoColor=white" />
<img src="https://img.shields.io/badge/Python-3776AB?style=for-the-badge&logo=python&logoColor=white" />
<img src="https://img.shields.io/badge/mac%20os-000000?style=for-the-badge&logo=apple&logoColor=white" />
<img src="https://img.shields.io/badge/Linux-FCC624?style=for-the-badge&logo=linux&logoColor=black" />
</div>
## Table of Contents
1. [Description](#description)
2. [Quickstart](#quickstart)
- 2.1 [Install](#install)
- 2.2 [Simple Command](#simple-command)
3. [Installation](#installation)
- 3.1 [Prerequisites](#prerequisites)
- 3.2 [PyPI](#pypi)
- 3.3 [From Source](#from-source)
- 3.4 [Development Only (uv)](#development-only-uv)
4. [Test](#test)
5. [CLI Reference](#cli-reference)
---
## Description
`porki` is an agentic orchestration runtime for multi-agent workflows. It coordinates orchestrator and agent processes, persists shared state in Redis, and uses an LLM CLI (`claude` or `codex`) to plan and execute tasks.
It leverages markdown-based control files: a primary `INSTRUCTIONS.md` for orchestrator/agent directives and per-agent heartbeat markdown files for live control signals (for example pause/resume-style runtime directives).
Example usage: [Systemg Orchestrator](https://github.com/ra0x3/systemg/tree/main/examples/orchestrator)
> [!WARNING]
> Designed to run with [systemg](https://github.com/ra0x3/systemg).
## Quickstart
### Install
```bash
pip install porki
```
### Simple Command
```bash
porki --help
```
```bash
porki run --help
```
```bash
porki instructions --help
```
Minimal runtime example (orchestrator + agent) with bundled test assets:
```bash
porki run \
--role orchestrator \
--instructions tests/assets/INSTRUCTIONS.md \
--llm-provider codex \
--llm-cli codex
```
```bash
porki run \
--role agent \
--instructions tests/assets/instructions/agent-research.md \
--heartbeat tests/assets/instructions/heartbeat/agent-research.md \
--agent-name agent-research \
--goal-id goal-demo \
--llm-provider codex \
--llm-cli codex
```
One-shot prompt mode:
```bash
porki run --prompt "Draft a concise architecture summary for this repo."
```
Enable colored logging output (similar to cargo):
```bash
porki run --role orchestrator --instructions INSTRUCTIONS.md --color
```
The `--color` flag enables ANSI color codes for log levels:
- **INFO**: Green
- **DEBUG**: Light Blue (Cyan)
- **WARNING**: Yellow
- **ERROR**: Red
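The level-to-color mapping above corresponds to the standard ANSI SGR escape codes, sketched here for illustration (this is not porki's internal implementation):

```python
# Standard ANSI SGR color codes matching the levels listed above
COLORS = {
    "INFO": "\033[32m",     # green
    "DEBUG": "\033[36m",    # cyan
    "WARNING": "\033[33m",  # yellow
    "ERROR": "\033[31m",    # red
}
RESET = "\033[0m"

def colorize(level, message):
    """Wrap the level name in its ANSI color, then reset."""
    return f"{COLORS[level]}{level}{RESET} {message}"

print(colorize("ERROR", "agent heartbeat missed"))
```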
The `--log-style` flag controls log verbosity:
- `concise` (default): human-first logs with only meaningful context keys.
- `event`: full structured key/value event suffix on each line.
Create a template instruction file:
```bash
porki instructions create --name "Core infra dev" --path ./instructions
```
Generated templates now include canonical JSON schema examples for goals, DAG tasks, task state, finished tasks, and LLM response payloads.
Each generated file also includes explicit version metadata (`porki_instruction_template_version` and `porki_schema_version`) so upgrades are trackable.
Example output path from the command above:
```bash
./instructions/CORE_INFRA_DEV.md
```
By default, `--redis-url` is `fakeredis://` for local/demo usage.
## Installation
### Prerequisites
- `python` 3.10+
- `systemg` (`sysg` CLI available on PATH)
- `redis` (server reachable by `--redis-url`)
- an LLM CLI: `claude` or `codex`
### PyPI
```bash
pip install porki
```
### From Source
```bash
git clone https://github.com/ra0x3/porki.git
cd porki
pip install -e .
```
### Development Only (uv)
`uv` commands are for development workflows (not required for normal runtime use):
```bash
uv sync
```
## Test
From repository checkout:
```bash
uv run pytest
```
Without `uv`:
```bash
python -m pip install pytest
python -m pytest
```
## CLI Reference
```text
usage: porki [-h] [--version] {run,instructions} ...
Porki agent/orchestrator entrypoint (version X.Y.Z)
positional arguments:
{run,instructions}
run Run orchestrator/agent loops or submit a one-shot prompt
to the configured LLM
instructions Instruction file utilities
options:
-h, --help show this help message and exit
--version Show version and exit
```
```text
usage: porki run [-h] [--role {agent,orchestrator}]
[--instructions INSTRUCTIONS] [-p [PROMPT]]
[--redis-url REDIS_URL] [--log-level LOG_LEVEL]
[--log-style {concise,event}] [--color]
[--agent-name AGENT_NAME] [--agent-role AGENT_ROLE]
[--goal-id GOAL_ID] [--heartbeat HEARTBEAT]
[--loop-interval LOOP_INTERVAL] [--lease-ttl LEASE_TTL]
[--poll-interval POLL_INTERVAL]
[--heartbeat-interval HEARTBEAT_INTERVAL]
[--instruction-interval INSTRUCTION_INTERVAL]
[--claude-cli CLAUDE_CLI]
[--claude-extra-arg CLAUDE_EXTRA_ARG] [--claude-use-sysg]
[--llm-provider {claude,codex}] [--llm-cli LLM_CLI]
[--llm-extra-arg LLM_EXTRA_ARG] [--llm-use-sysg]
```
```text
usage: porki instructions [-h] [--log-level LOG_LEVEL]
[--log-style {concise,event}] [--color] {create} ...
```
```text
usage: porki instructions create [-h] -n NAME -p PATH [--force]
[--log-level LOG_LEVEL]
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"redis>=5.0",
"fakeredis>=2.22",
"pydantic>=2.8",
"pyyaml>=6.0",
"tiktoken>=0.7"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.11 | 2026-02-21T06:29:30.040578 | porki-0.7.0.tar.gz | 63,674 | 26/97/cc7bace662ed75ecfaa225b5b6fb79ceed7b3c22205f16b4930da17d354e/porki-0.7.0.tar.gz | source | sdist | null | false | 8ce28bb18520583ecfe0c9b66336cb8a | fe89a997929546c257e104e41ffa2bbaf7f843029bab5103ecb532e8e0e26766 | 2697cc7bace662ed75ecfaa225b5b6fb79ceed7b3c22205f16b4930da17d354e | null | [] | 253 |
2.1 | cdk8s-awscdk-resolver | 0.0.491 | @cdk8s/awscdk-resolver | # AWS CDK Resolver
The `AwsCdkResolver` is able to resolve any [`CfnOutput`](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.CfnOutput.html)
defined by your AWS CDK application. In this example, we create an S3 `Bucket` with the AWS CDK, and pass its (deploy time generated)
name as an environment variable to a Kubernetes `CronJob` resource.
```typescript
import * as aws from 'aws-cdk-lib';
import * as k8s from 'cdk8s';
import * as kplus from 'cdk8s-plus-27';
import { AwsCdkResolver } from '@cdk8s/awscdk-resolver';
const awsApp = new aws.App();
const stack = new aws.Stack(awsApp, 'aws');
const k8sApp = new k8s.App({ resolvers: [new AwsCdkResolver()] });
const manifest = new k8s.Chart(k8sApp, 'Manifest');
const bucket = new aws.aws_s3.Bucket(stack, 'Bucket');
const bucketName = new aws.CfnOutput(stack, 'BucketName', {
value: bucket.bucketName,
});
new kplus.CronJob(manifest, 'CronJob', {
schedule: k8s.Cron.daily(),
containers: [{
image: 'job',
envVariables: {
// directly passing the value of the `CfnOutput` containing
// the deploy time bucket name
BUCKET_NAME: kplus.EnvValue.fromValue(bucketName.value),
}
}]
});
awsApp.synth();
k8sApp.synth();
```
During cdk8s synthesis, the custom resolver will detect that `bucketName.value` is not a concrete value,
but rather a value of a `CfnOutput`. It will then perform AWS service calls in order to fetch the
actual value from the deployed infrastructure in your account. This means that in order
for `cdk8s synth` to succeed, it must be executed *after* the AWS CDK resources
have been deployed. So your deployment workflow should (conceptually) be:
1. `cdk deploy`
2. `cdk8s synth`
> Note that the `AwsCdkResolver` is **only** able to fetch tokens that have a `CfnOutput` defined for them.
##### Permissions
Since running `cdk8s synth` will now require performing AWS service calls, it must have access
to a set of AWS credentials. Following are the set of actions the credentials must allow:
* `cloudformation:DescribeStacks`
Note that the actions cdk8s requires are far more narrowly scoped than those normally needed to
deploy AWS CDK applications. It is therefore recommended not to reuse the same set of credentials,
and instead to create a scoped-down `ReadOnly` role dedicated to cdk8s resolvers.
## Cross Repository Workflow
As we've seen, your `cdk8s` application needs access to the objects defined in your cloud application. If both applications
are defined within the same file, this is trivial to achieve. If they are in different files, a simple `import` statement will suffice.
However, what if the applications are managed in two separate repositories? This makes it a little trickier, but still possible.
In this scenario, `cdk.ts` is the AWS CDK application, stored in a dedicated repository.
```typescript
import * as aws from 'aws-cdk-lib';
const awsApp = new aws.App();
const stack = new aws.Stack(awsApp, 'aws');
const bucket = new aws.aws_s3.Bucket(stack, 'Bucket');
const bucketName = new aws.CfnOutput(stack, 'BucketName', {
value: bucket.bucketName,
});
awsApp.synth();
```
In order for the `cdk8s` application to have cross repository access, the AWS CDK object instances that we want to expose need to be available
via a package repository. To do this, break up the AWS CDK application into the following files:
`app.ts`
```typescript
import * as aws from 'aws-cdk-lib';

// export the app so that main.ts can synthesize it
export const awsApp = new aws.App();

const stack = new aws.Stack(awsApp, 'aws');
const bucket = new aws.aws_s3.Bucket(stack, 'Bucket');

// export the thing we want to have available for cdk8s applications
export const bucketName = new aws.CfnOutput(stack, 'BucketName', {
  value: bucket.bucketName,
});

// note that we don't call awsApp.synth here
```
`main.ts`
```typescript
import { awsApp } from './app.ts'
awsApp.synth();
```
Now, publish the `app.ts` file to a package manager, so that your `cdk8s` application can install and import it.
This approach might be somewhat counterintuitive, because normally we only publish classes to the package manager,
not instances. Indeed, these types of applications introduce a new use case that requires sharing instances.
Conceptually, this is no different than writing state<sup>*</sup> to an SSM parameter or an S3 bucket, and it allows us to remain
in the boundaries of our programming language, and the typing guarantees it provides.
> <sup>*</sup> Actually, we are only publishing instructions for fetching state, not the state itself.
Assuming `app.ts` was published as the `my-cdk-app` package, our `cdk8s` application will now look like so:
```typescript
import * as k8s from 'cdk8s';
import * as kplus from 'cdk8s-plus-27';
// import the desired instance from the AWS CDK app.
import { bucketName } from 'my-cdk-app';
import { AwsCdkResolver } from '@cdk8s/awscdk-resolver';
const k8sApp = new k8s.App({ resolvers: [new AwsCdkResolver()] });
const manifest = new k8s.Chart(k8sApp, 'Manifest');
new kplus.CronJob(manifest, 'CronJob', {
schedule: k8s.Cron.daily(),
containers: [{
image: 'job',
envVariables: {
// directly passing the value of the `CfnOutput` containing
// the deploy time bucket name
BUCKET_NAME: kplus.EnvValue.fromValue(bucketName.value),
}
}]
});
k8sApp.synth();
```
| text/markdown | Amazon Web Services | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved"
] | [] | https://github.com/cdk8s-team/cdk8s-awscdk-resolver.git | null | ~=3.9 | [] | [] | [] | [
"aws-cdk-lib<3.0.0,>=2.195.0",
"cdk8s<3.0.0,>=2.68.91",
"constructs<11.0.0,>=10.3.0",
"jsii<2.0.0,>=1.126.0",
"publication>=0.0.3",
"typeguard==2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/cdk8s-team/cdk8s-awscdk-resolver.git"
] | twine/6.1.0 CPython/3.14.2 | 2026-02-21T06:25:00.807446 | cdk8s_awscdk_resolver-0.0.491.tar.gz | 1,182,902 | d5/0c/93e9c3bf8d7120779e35d07b5b746c0bfce0e97b49388da116dd5eeb1aa9/cdk8s_awscdk_resolver-0.0.491.tar.gz | source | sdist | null | false | bdb0d0eab054a0655916e135d6fb081d | aed67f908fc65b3a0a60d75cb34ed11de7b42c88d78b73a304b2cba3b6505009 | d50c93e9c3bf8d7120779e35d07b5b746c0bfce0e97b49388da116dd5eeb1aa9 | null | [] | 268 |
2.4 | django-tag-me | 2026.2.21.2 | A simple approach to Django tagging | =================
**Django Tag Me**
=================
|
**Version = 2026.02.21.2**
|
*Simple, flexible tagging for Django.*
|
.. image:: https://github.com/imAsparky/django-tag-me/actions/workflows/main_PR.yaml/badge.svg
:alt: Tests
:target: https://github.com/imAsparky/django-tag-me/actions/workflows/main_PR.yaml
.. image:: https://img.shields.io/badge/dynamic/toml?url=https%3A%2F%2Fraw.githubusercontent.com%2FimAsparky%2Fdjango-tag-me%2Fmain%2Fpyproject.toml&query=project.dependencies&logo=Django&label=Versions&labelColor=%23092E20
:alt: Django Version Badge
:target: https://docs.djangoproject.com/en/4.2/
.. image:: https://img.shields.io/python/required-version-toml?tomlFilePath=https%3A%2F%2Fraw.githubusercontent.com%2FimAsparky%2Fdjango-tag-me%2Fmain%2Fpyproject.toml&logo=Python
:alt: Python Version Badge
:target: https://devdocs.io/python~3.10/
.. image:: https://www.repostatus.org/badges/latest/active.svg
:alt: Project Status: Active
:target: https://www.repostatus.org/#active
.. image:: https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white
:target: https://github.com/pre-commit/pre-commit
:alt: pre-commit
.. image:: https://readthedocs.org/projects/django-tag-me/badge/?version=latest
:target: https://django-tag-me.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
|
Add tags to any Django model field—with a polished widget, per-user tag customization, and optional synchronization across models.
|
Features
--------
- **Easy setup** — Add ``TagMeCharField`` to your model
- **Beautiful widget** — Searchable dropdown with tag pills, powered by Alpine.js
- **User tags** — Each user gets their own customizable tag set per field
- **System tags** — Define default tags available to all users
- **Tag synchronization** — Keep tags in sync across related models
- **Form integration** — Drop-in mixin for your model forms
- **Template tags** — Display tags as styled pills
- **Model rename resilient** — Uses ContentType FK lookups, not model name strings
|
Widget Preview
--------------
**Dropdown with tag options**
.. image:: https://raw.githubusercontent.com/imAsparky/django-tag-me/main/docs/source/imgs/tag_dropdown_with_options.png
:alt: Tag dropdown with options
|
**Search and filter tags**
.. image:: https://raw.githubusercontent.com/imAsparky/django-tag-me/main/docs/source/imgs/tag_dropdown_search.png
:alt: Tag dropdown search functionality
|
Installation
------------
.. code-block:: bash
pip install django-tag-me
See the `documentation <https://django-tag-me.readthedocs.io/>`_ for setup and usage instructions.
|
Links
-----
- `Documentation <https://django-tag-me.readthedocs.io/>`_
- `Source Code <https://github.com/imAsparky/django-tag-me>`_
- `Issue Tracker <https://github.com/imAsparky/django-tag-me/issues>`_
|
Credits
-------
- Dropdown interface adapted from `alpinejs-multiselect <https://github.com/alexpechkarev/alpinejs-multiselect/>`_
- Built with `Django Cookiecutter <https://github.com/imAsparky/django-cookiecutter>`_
| text/x-rst | null | Mark Sevelj <mark.sevelj@dunwright.com.au> | null | Mark Sevelj <mark.sevelj@dunwright.com.au> | BSD-3-Clause | Django Tag Me, Django, django, django tagging, Django tags, Django field tags | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.0",
"Framework :: Django :: 4.1",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"Django>=4.0",
"structlog",
"django-extensions; extra == \"dev\"",
"hypothesis; extra == \"dev\"",
"pyright; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-django; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"tox; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/imAsparky/django-tag-me",
"Documentation, https://django-tag-me.readthedocs.io",
"Tracker, https://github.com/imAsparky/django-tag-me/issues",
"Changelog, https://github.com/imAsparky/django-tag-me/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-21T06:24:52.305924 | django_tag_me-2026.2.21.2.tar.gz | 218,212 | e9/af/aebeae5c81ad72e82585856aaba1d9690330676b6f25ba2485e11f6e62ec/django_tag_me-2026.2.21.2.tar.gz | source | sdist | null | false | 0606019483da4173895179e3b2b1c04f | c84d08783e65e1a95769e19fb933b8eabdf59fafe36157dde6243f18db3b182f | e9afaebeae5c81ad72e82585856aaba1d9690330676b6f25ba2485e11f6e62ec | null | [
"LICENSE"
] | 246 |
2.3 | jax2onnx | 0.12.1 | export JAX to ONNX | # jax2onnx 🌟
[](https://github.com/enpasos/jax2onnx/actions/workflows/ci.yml)
[](https://pypi.org/project/jax2onnx/)
[](https://opensource.org/licenses/Apache-2.0)
[](https://enpasos.github.io/jax2onnx/)
`jax2onnx` converts your [JAX](https://docs.jax.dev/), [Flax NNX](https://flax.readthedocs.io/en/latest/), [Flax Linen](https://flax-linen.readthedocs.io/en/latest/), and [Equinox](https://docs.kidger.site/equinox/) functions directly into the ONNX format.

## 📚 Documentation
**[Read the full documentation here](https://enpasos.github.io/jax2onnx/)**
## 🚀 Quick Install
```bash
pip install jax2onnx
```
## ⚡ Quick Usage
```python
from jax2onnx import to_onnx
from flax import nnx
model = MyFlaxModel(...)
to_onnx(model, [("B", 32)], return_mode="file", output_path="model.onnx")
```
## 🤝 Contributing
We warmly welcome contributions! Please check our [Developer Guide](https://enpasos.github.io/jax2onnx/developer_guide/plugin_system/) for plugin tutorials and architecture details.
## 📜 License
Apache License, Version 2.0. See [`LICENSE`](./LICENSE).
## 🌟 Special Thanks
A huge thank you to all [our contributors and the community](https://enpasos.github.io/jax2onnx/about/acknowledgements/) for their help and inspiration!
---
**Happy converting! 🎉**
| text/markdown | enpasos | matthias.unverzagt@enpasos.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"jax>=0.7.2",
"flax>=0.12.1",
"ml_dtypes>=0.5.1",
"optax>=0.2.4",
"orbax-checkpoint>=0.11.6",
"orbax-export>=0.0.6",
"netron>=8.2.9",
"onnx>=1.20.1",
"onnxruntime>=1.24.1",
"einops>=0.8.1",
"equinox>=0.13.1",
"onnx-ir>=0.1.15",
"huggingface-hub>=0.26.0",
"dm-pix>=0.4.4"
] | [] | [] | [] | [] | poetry/2.1.3 CPython/3.12.5 Linux/6.6.87.2-microsoft-standard-WSL2 | 2026-02-21T06:24:51.841070 | jax2onnx-0.12.1.tar.gz | 553,876 | da/1e/8e283426b032f048e2dc2a05de529244a86619b2297c7256f99f6b4ac692/jax2onnx-0.12.1.tar.gz | source | sdist | null | false | c39ef9a7c2b6812ff3954bb10bf9f12b | 1c32ea38d1df9010267c96d7d2ec1ad4028b70a583c226461e013173ae6d93b2 | da1e8e283426b032f048e2dc2a05de529244a86619b2297c7256f99f6b4ac692 | null | [] | 243 |
2.4 | django-forms-workflows | 0.13.7 | Enterprise-grade, database-driven form builder with approval workflows and external data integration | # Django Forms Workflows
**Enterprise-grade, database-driven form builder with approval workflows and external data integration**
[](https://www.gnu.org/licenses/lgpl-3.0)
[](https://www.python.org/downloads/)
[](https://www.djangoproject.com/)
## Overview
Django Forms Workflows bridges the gap between simple form libraries (like Crispy Forms) and expensive SaaS solutions (like JotForm, Formstack). It provides:
- 📝 **Database-Driven Forms** - Forms stored in database, not code
- 🔄 **Approval Workflows** - Multi-step approval engine with notifications
- 🔌 **External Data Integration** - Pull data from LDAP, databases, APIs
- 🔒 **Enterprise Security** - LDAP/AD authentication, complete audit trails
- 🏠 **Self-Hosted** - No SaaS fees, full data control
- 🎨 **Beautiful UI** - Built on Crispy Forms and Bootstrap
## Key Features
### 🎯 No-Code Form Creation
Business users can create and modify forms through Django Admin without touching code:
- Drag-and-drop field ordering
- 15+ field types (text, select, date, file upload, etc.)
- Validation rules (required, regex, min/max, etc.)
- Conditional field visibility
- Custom help text and placeholders
### 🔄 Powerful Approval Workflows
Built-in multi-step approval engine:
- Sequential or parallel approvals
- Configurable approval logic (any/all approvers)
- Email notifications and reminders
- Complete audit trail
- Approval delegation
### 🔌 Configurable Prefill Sources
Automatically populate form fields from external systems with flexible, reusable configurations:
- **User Model** - Current user's profile data (email, name, username)
- **LDAP/Active Directory** - Enterprise directory attributes (department, title, manager)
- **External Databases** - Pull from any SQL database with custom field mappings
- **REST APIs** - Integrate with external services
- **System Values** - Current date/time, previous submissions
**New in v1.1:** Prefill sources are now configurable database records with:
- ✅ **Dropdown Selection** - Form builders select from pre-configured sources
- ✅ **Custom Field Mappings** - Configure database lookup fields for different environments
- ✅ **Reusable Configurations** - Define once, use across multiple forms
- ✅ **Centralized Management** - All sources managed in Django Admin
Example database prefill configuration:
```python
# Admin → Prefill Sources → Add Prefill Source
Name: Student - First Name
Source Type: Database
DB Schema: dbo
DB Table: STBIOS
DB Column: FIRST_NAME
DB Lookup Field: ID_NUMBER # Column to match against
DB User Field: employee_id # UserProfile field to use for lookup
```
See [Prefill Sources Guide](docs/PREFILL_SOURCES.md) for detailed configuration.
### 🔄 Post-Submission Actions (NEW)
Automatically update external systems with form data after submission or approval:
- **Database Updates** - Write data back to external databases with custom field mappings
- **LDAP Updates** - Update Active Directory attributes
- **API Calls** - Send data to external services via HTTP APIs
- **Custom Handlers** - Execute custom Python code for complex integrations
**Trigger Types:**
- ✅ **On Submit** - Execute immediately when form is submitted
- ✅ **On Approve** - Execute when form is approved
- ✅ **On Reject** - Execute when form is rejected
- ✅ **On Complete** - Execute when workflow is complete
**Features:**
- Conditional execution based on form field values
- Automatic retries with configurable max attempts
- Error handling (fail silently or block submission)
- Execution ordering for dependent actions
- Complete audit logging
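The retry and fail-silently behavior described above can be sketched in plain Python (names and defaults here are illustrative, not the package's API):

```python
import time

def run_with_retries(action, max_attempts=3, fail_silently=True, delay=0.0):
    """Run an action, retrying up to max_attempts times on any exception."""
    for attempt in range(1, max_attempts + 1):
        try:
            return action()
        except Exception:
            if attempt == max_attempts:
                if fail_silently:
                    return None  # swallow the final error instead of blocking
                raise
            time.sleep(delay)

calls = []
def flaky():
    # Fails twice, then succeeds — simulates a transient external system error
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retries(flaky, max_attempts=3)
```

With `fail_silently=False` the final exception propagates, which corresponds to the "block submission" error-handling option.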
Example database update configuration:
```python
# Admin → Post-Submission Actions → Add
Name: Update HR Database
Action Type: Database Update
Trigger: On Approve
DB Field Mappings:
[
{"form_field": "email", "db_column": "EMAIL_ADDRESS"},
{"form_field": "phone", "db_column": "PHONE_NUMBER"}
]
```
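Conceptually, a mapping list like the one above turns form data into a parameterized UPDATE. A minimal sketch (the function, table name, and key column here are hypothetical, not the package's internals):

```python
def build_update_sql(table, mappings, form_data, key_column, key_value):
    """Build a parameterized UPDATE from form-field -> db-column mappings."""
    set_clause = ", ".join(f"{m['db_column']} = %s" for m in mappings)
    params = [form_data[m["form_field"]] for m in mappings]
    sql = f"UPDATE {table} SET {set_clause} WHERE {key_column} = %s"
    return sql, params + [key_value]

sql, params = build_update_sql(
    "EMPLOYEES",  # hypothetical target table
    [{"form_field": "email", "db_column": "EMAIL_ADDRESS"},
     {"form_field": "phone", "db_column": "PHONE_NUMBER"}],
    {"email": "a@example.com", "phone": "555-0100"},
    "EMPLOYEE_ID",
    42,
)
```

Using placeholders (`%s`) rather than string interpolation for values is what keeps such updates safe from SQL injection.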
See [Post-Submission Actions Guide](docs/POST_SUBMISSION_ACTIONS.md) for detailed configuration.
### 🔒 Enterprise-Ready Security
- LDAP/Active Directory authentication
- Role-based permissions
- Complete audit logging (who, what, when, where)
- CSRF protection
- SQL injection prevention
- File upload validation
### 📊 Comprehensive Audit Trail
Track everything for compliance:
- Form creation/modification
- Form submissions
- Approval decisions
- Status changes
- Field value changes
- User actions with IP addresses
## Quick Start
### Installation
```bash
pip install django-forms-workflows
```
### Basic Setup
1. Add to `INSTALLED_APPS`:
```python
INSTALLED_APPS = [
# ...
'crispy_forms',
'crispy_bootstrap5',
'django_forms_workflows',
# ...
]
```
2. Configure settings:
```python
# Crispy Forms
CRISPY_ALLOWED_TEMPLATE_PACKS = "bootstrap5"
CRISPY_TEMPLATE_PACK = "bootstrap5"
# Forms Workflows
FORMS_WORKFLOWS = {
'ENABLE_APPROVALS': True,
'ENABLE_AUDIT_LOG': True,
'ENABLE_FILE_UPLOADS': True,
'MAX_FILE_SIZE': 10 * 1024 * 1024, # 10MB
}
```
3. Run migrations:
```bash
python manage.py migrate django_forms_workflows
```
4. Include URLs:
```python
urlpatterns = [
path('forms/', include('django_forms_workflows.urls')),
]
```
5. Create your first form in Django Admin!
## Architecture
```mermaid
graph TB
subgraph UI["User Interface"]
FB["Form Builder<br/>(Admin)"]
FV["Form Viewer<br/>(End User)"]
AU["Approval UI<br/>(Approvers)"]
end
subgraph Django["Django Application"]
subgraph DSL["Data Source Abstraction Layer"]
LDAP["LDAP<br/>Source"]
DB["Database<br/>Source"]
API["API<br/>Source"]
end
end
subgraph External["External Systems"]
AD["Active<br/>Directory"]
Legacy["Legacy<br/>Databases"]
ExtAPI["External<br/>APIs"]
end
UI --> Django
DSL --> External
style UI fill:#e1f5ff
style Django fill:#fff4e1
style DSL fill:#ffe1f5
style External fill:#e1ffe1
```
## Use Cases
Perfect for:
- **HR Departments** - Employee onboarding, time-off requests, expense reports
- **IT Departments** - Access requests, equipment requests, change management
- **Finance** - Purchase orders, invoice approvals, budget requests
- **Education** - Student applications, course registrations, facility requests
- **Healthcare** - Patient intake, referrals, insurance claims
- **Government** - Permit applications, FOIA requests, citizen services
## Documentation
- [Installation Guide](docs/installation.md)
- [Configuration Guide](docs/configuration.md)
- [Data Sources Guide](docs/data-sources.md)
- [Workflow Guide](docs/workflows.md)
- [API Reference](docs/api.md)
- [Architecture Overview](docs/ARCHITECTURE.md)
## Comparison
| Feature | Django Forms Workflows | Crispy Forms | FormStack | Django-Formtools |
|---------|----------------------|--------------|-----------|------------------|
| Database-driven forms | ✅ | ❌ | ✅ | ❌ |
| No-code form creation | ✅ | ❌ | ✅ | ❌ |
| Self-hosted | ✅ | ✅ | ❌ | ✅ |
| Approval workflows | ✅ | ❌ | ⚠️ | ❌ |
| External data prefill | ✅ | ❌ | ⚠️ | ❌ |
| LDAP/AD integration | ✅ | ❌ | ❌ | ❌ |
| Audit trail | ✅ | ❌ | ✅ | ❌ |
| Open source | ✅ | ✅ | ❌ | ✅ |
## Requirements
- Python 3.10+
- Django 5.1+
- PostgreSQL 12+ (recommended) or MySQL 8.0+
- Celery 5.0+ (for background tasks)
- Redis/Valkey (for Celery broker)
## Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details.
## License
GNU Lesser General Public License v3.0 (LGPLv3) - see [LICENSE](LICENSE) for details.
## Support
- 📖 [Documentation](https://django-forms-workflows.readthedocs.io/)
- 💬 [Discussions](https://github.com/opensensor/django-forms-workflows/discussions)
- 🐛 [Issue Tracker](https://github.com/opensensor/django-forms-workflows/issues)
## Roadmap
### Phase 1: Core (Current)
- [x] Database-driven form definitions
- [x] Dynamic form rendering
- [x] Approval workflows
- [x] LDAP integration
- [x] Database prefill
- [x] Audit logging
### Phase 2: Enhanced UX
- [ ] Form builder UI (drag-and-drop)
- [ ] Conditional field visibility (client-side)
- [ ] File upload validation
- [ ] Form templates/cloning
- [ ] Dashboard analytics
### Phase 3: Advanced Features
- [ ] REST API for form submission
- [ ] Webhook support
- [ ] Custom field types (signature, location, etc.)
- [ ] Advanced reporting
- [ ] Form versioning
### Phase 4: Enterprise
- [ ] Multi-tenancy support
- [ ] SSO integration (SAML, OAuth)
- [ ] Advanced RBAC
- [ ] White-label support
- [ ] Plugin marketplace
## Credits
Built with ❤️ by the Django community.
Special thanks to:
- [Django Crispy Forms](https://github.com/django-crispy-forms/django-crispy-forms)
- [Celery](https://github.com/celery/celery)
- [django-auth-ldap](https://github.com/django-auth-ldap/django-auth-ldap)
| text/markdown | Matt Davis | matt@opensensor.io | null | null | LGPL-3.0-only | django, forms, workflows, approval, form-builder, dynamic-forms, ldap, enterprise, audit, database-driven, sso, saml, oauth2 | [
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"Django<7.0,>=5.1",
"celery>=5.3",
"crispy-bootstrap5>=2.0",
"django-auth-ldap>=4.0; extra == \"ldap\" or extra == \"all\"",
"django-crispy-forms>=2.0",
"google-api-python-client>=2.100; extra == \"gmail\" or extra == \"all\"",
"google-auth>=2.20; extra == \"gmail\" or extra == \"all\"",
"mssql-django>=1.3; extra == \"mssql\" or extra == \"all\"",
"mysqlclient>=2.2; extra == \"mysql\" or extra == \"all\"",
"psycopg2-binary>=2.9; extra == \"postgresql\" or extra == \"all\"",
"pyodbc>=5.0; extra == \"mssql\" or extra == \"all\"",
"python-decouple>=3.8",
"python-ldap>=3.4; extra == \"ldap\" or extra == \"all\"",
"python3-saml>=1.16; extra == \"sso\" or extra == \"saml\" or extra == \"all\"",
"requests>=2.31",
"social-auth-app-django>=5.4; extra == \"sso\" or extra == \"all\"",
"sphinx>=7.0; extra == \"docs\"",
"sphinx-autodoc-typehints>=1.25; extra == \"docs\"",
"sphinx-rtd-theme>=2.0; extra == \"docs\"",
"xhtml2pdf>=0.2.11; extra == \"pdf\" or extra == \"all\""
] | [] | [] | [] | [
"Documentation, https://django-forms-workflows.readthedocs.io/",
"Homepage, https://github.com/opensensor/django-forms-workflows",
"Repository, https://github.com/opensensor/django-forms-workflows"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T06:24:12.238594 | django_forms_workflows-0.13.7.tar.gz | 195,005 | 04/41/8e067e4f9b20ffe600ea60a89fca9518082ca1a4f5c043d2b02d0912dda4/django_forms_workflows-0.13.7.tar.gz | source | sdist | null | false | 2a233a18982a99ee75b97b1932290c08 | 897ea33fc8ecab6ce143259977fcd8e3159e6a61efd6693b073ea61219eb9bbd | 04418e067e4f9b20ffe600ea60a89fca9518082ca1a4f5c043d2b02d0912dda4 | null | [
"LICENSE"
] | 251 |
2.3 | dycw-actions | 0.19.6 | Library of actions | # `actions`
Library of actions
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.3.1",
"dycw-utilities>=0.192.0",
"pydantic>=2.12.5",
"requests>=2.32.5",
"ruamel-yaml>=0.19.1",
"xdg-base-dirs>=6.0.2",
"click==8.3.1; extra == \"cli\"",
"dycw-utilities==0.192.0; extra == \"cli\"",
"pydantic==2.12.5; extra == \"cli\"",
"requests==2.32.5; extra == \"cli\"",
"ruamel-yaml==0.19.1; extra == \"cli\"",
"xdg-base-dirs==6.0.2; extra == \"cli\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T06:23:57.474695 | dycw_actions-0.19.6-py3-none-any.whl | 26,532 | 66/e4/b986cc5faecc580055a4ca5f50123e9945f42b5df67fe5aca5f64bb994a9/dycw_actions-0.19.6-py3-none-any.whl | py3 | bdist_wheel | null | false | eb600a2a74937fb70619efaf52486830 | 76e609923f8fd0a3b0b59f21a9db0f5c90298471f5d0551bef2ec74d94099b84 | 66e4b986cc5faecc580055a4ca5f50123e9945f42b5df67fe5aca5f64bb994a9 | null | [] | 203 |
2.4 | iupitermag | 0.1.3 | Python package with Rust bindings to calculate the magnetic field in Jupiter's magnetosphere. | # iupitermag
## Introduction
iupitermag is a [Python](https://www.python.org/) package written in
[Rust](https://www.rust-lang.org/)
to model Jupiter's magnetic field.

*Jupiter's internal magnetic field intensity at the 1-bar "surface".*
Many **other public codes** do this or something similar:
* [`JupiterMag`](https://github.com/mattkjames7/JupiterMag) -
A Python package that uses the
[`libjupitermag`](https://github.com/mattkjames7/libjupitermag)
C++ library (both written by Matt James).
* [`jovian_jrm09_internal`](https://github.com/marissav06/jovian_jrm09_internal_matlab),
[`con_2020`](https://github.com/marissav06/con2020_matlab) - Codes written for MATLAB and IDL by Marissa Vogt.
* [`con2020`](https://github.com/gabbyprovan/con2020) - Python code by Gabby Provan.
> More details on Jupiter magnetic field models and these codes
> can be found in this paper - Wilson, R.J., Vogt, M.F., Provan, G. et al. **Internal and External Jovian
> Magnetic Fields: Community Code to Serve the Magnetospheres of the Outer Planets
> Community.** Space Sci Rev 219, 15 (2023).
> [https://doi.org/10.1007/s11214-023-00961-3](https://doi.org/10.1007/s11214-023-00961-3)
## Installation
### Installing using wheels on PyPI
If you are on Python 3.12 or 3.13, you can install directly using the wheels hosted
on PyPI.
```
$ pip install iupitermag
```
or
```
$ uv add iupitermag
```
### Installing from source using `uv`
If you are using `uv` as your Python package manager, run the following command after
cloning and changing into this directory.
```
$ uv pip install .
```
### Installing from source using `maturin`
```
$ maturin develop --release
```
## Usage
### Calculating the internal and current sheet fields at a single point.
All positions should be in the IAU_JUPITER coordinate system.
```python
import iupitermag as im
internal_field = im.InternalField("JRM33")
cs_field = im.CurrentSheetField("CON2020")
r = 10.
theta = 0.
phi = 0.
b_int_rtp = internal_field.calc_field(r, theta, phi)
b_ext_rtp = cs_field.calc_field(r, theta, phi)
# Or, you can use the Cartesian form. This calls the spherical version internally.
x = 10.
y = 5.
z = 1.
b_int_xyz = internal_field.calc_field_xyz(x, y, z)
b_ext_xyz = cs_field.calc_field_xyz(x, y, z)
```
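For reference, the Cartesian-to-spherical conversion implied by `calc_field_xyz` can be sketched as below, assuming the standard physics convention (colatitude θ measured from +z, azimuth φ from +x); the package's exact internal convention may differ:

```python
import math

def xyz_to_rtp(x, y, z):
    """Convert Cartesian (x, y, z) to spherical (r, theta, phi)."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)   # colatitude: 0 at the +z pole
    phi = math.atan2(y, x)     # azimuth: 0 along +x, measured toward +y
    return r, theta, phi
```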
### Calculating the internal and current sheet fields for a collection of points.
If you have a collection of points stored as a single numpy array of shape (N, 3),
you can use `map_calc_field` or `parmap_calc_field` (or their corresponding
cartesian versions `map_calc_field_xyz` and `parmap_calc_field_xyz`).
```python
import numpy as np

points = np.zeros((10000, 3))
points[:, 0] = np.random.random_sample((10000,)) * 10 + 5
b_int = internal_field.map_calc_field(points)
b_ext = cs_field.map_calc_field(points)
b_int = internal_field.parmap_calc_field(points)
b_ext = cs_field.parmap_calc_field(points)
points_xyz = points * 1.
b_int_xyz = internal_field.map_calc_field_xyz(points_xyz)
b_ext_xyz = cs_field.map_calc_field_xyz(points_xyz)
b_int_xyz = internal_field.parmap_calc_field_xyz(points_xyz)
b_ext_xyz = cs_field.parmap_calc_field_xyz(points_xyz)
```
Based on some benchmarks, you should find `parmap_calc_field` nearly an order of magnitude
faster than `map_calc_field` or a Python loop over all points calling `calc_field`
repeatedly. `parmap_*` uses Rayon for parallelizing the calculation. Here are the results of
benchmarking different methods to calculate the JRM33 field for different numbers of points.

As you can see, `parmap_` is usually the better option if you have more than 20 or so points. Note
that the Python loop version still calls the Rust code internally to calculate the JRM33 field, so
even that is faster than a pure-Python implementation (not shown above).
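The map/parmap distinction can be illustrated with a stand-in computation (pure Python with threads here; the real package parallelizes in Rust via Rayon, so it is not limited by the GIL):

```python
from concurrent.futures import ThreadPoolExecutor

def calc_field(point):
    # Stand-in for a per-point field evaluation (not the real model)
    r, theta, phi = point
    return 2.0 * r

points = [(10.0, 0.0, 0.0), (15.0, 0.5, 1.0), (20.0, 1.0, 2.0)]

# "map": sequential loop over points
b_map = [calc_field(p) for p in points]

# "parmap": the same work fanned out across workers; results keep input order
with ThreadPoolExecutor(max_workers=4) as pool:
    b_parmap = list(pool.map(calc_field, points))
```

Both variants return results in input order, which is why `map_*` and `parmap_*` are drop-in replacements for each other.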
### Tracing magnetic field lines
`iupitermag` can trace magnetic field lines to Jupiter using `trace_field_to_planet`, which takes
as input a collection of starting points, each of which results in a separate trace. The
coordinates for these starting points should be Cartesian.
```python
internal_field = im.InternalField("JRM33")
cs_field = im.CurrentSheetField("CON2020")
starting_positions_xyz = np.array([
[-10., 0., 0.],
[-15., 0., 0.],
[-20., 0., 0.],
[-25., 0., 0.],
[10., 0., 0.],
[15., 0., 0.],
[20., 0., 0.],
[25., 0., 0.],
])
trace = im.trace_field_to_planet(starting_positions_xyz, internal_field, cs_field)
```

| text/markdown; charset=UTF-8; variant=GFM | null | ysar <31999239+ysar@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"numpy>=2.4.2"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:23:23.537693 | iupitermag-0.1.3.tar.gz | 308,214 | 60/9f/85d8e323035dc1e8275311d623fd62bf7b2521cad634ca5990cdaa310c7e/iupitermag-0.1.3.tar.gz | source | sdist | null | false | 8fa71e2a0aee0e5b8191434544bde186 | de1d60035fde199ef7790d3eed3edfd4abbd10c4299690917db7a6c7cff0079d | 609f85d8e323035dc1e8275311d623fd62bf7b2521cad634ca5990cdaa310c7e | null | [
"LICENSE"
] | 1,301 |
2.4 | agent-integrity-proto | 0.2.0 | Agent Integrity Protocol — real-time thinking block analysis for AI agent alignment | # agent-integrity-proto
Python SDK for the Agent Integrity Protocol — real-time thinking block analysis for AI agent alignment.
## Installation
```bash
pip install agent-integrity-proto
```
## Usage
```python
from aip import check_integrity, build_signal, AdapterRegistry
# Extract thinking block from LLM response
registry = AdapterRegistry()
thinking = registry.extract(response, provider="anthropic")
# Run integrity analysis
checkpoint = check_integrity(
thinking_block=thinking.content,
card=alignment_card,
config={
"agent_id": "my-agent",
"analysis_llm": {
"model": "claude-haiku-4-5-20251001",
"base_url": "https://api.anthropic.com",
"api_key": os.environ["ANTHROPIC_API_KEY"],
"max_tokens": 1024,
},
},
)
# Act on the verdict
if checkpoint.verdict == "clear":
proceed()
else:
escalate(checkpoint.concerns)
```
## API
See the [full documentation](https://github.com/mnemom/aip#readme) and [specification](https://github.com/mnemom/aip/blob/main/docs/SPEC.md).
## Requirements
- Python >= 3.10
## License
Apache 2.0. See [LICENSE](../../LICENSE) for details.
| text/markdown | null | "Mnemom.ai" <dev@mnemom.ai> | null | null | null | agent, ai, alignment, conscience, integrity, llm, safety, thinking | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Security",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27",
"mypy>=1.10; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"respx>=0.22; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/mnemom/aip",
"Documentation, https://github.com/mnemom/aip/blob/main/docs/SPEC.md",
"Repository, https://github.com/mnemom/aip",
"Issues, https://github.com/mnemom/aip/issues",
"Changelog, https://github.com/mnemom/aip/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:23:23.424015 | agent_integrity_proto-0.2.0.tar.gz | 70,144 | b7/e1/99e43c9bc634f195a209e7aa1c0f5c5e13425b56645bc613ed06990c2d0e/agent_integrity_proto-0.2.0.tar.gz | source | sdist | null | false | 00c6beb9bad98956e39d273d22c62209 | fdb9aeab913d122924a36fdd010fb45d69fded017999f6a67efcbd43bb3c082e | b7e199e43c9bc634f195a209e7aa1c0f5c5e13425b56645bc613ed06990c2d0e | Apache-2.0 | [] | 244 |
2.4 | play-launch | 0.6.0 | ROS2 Launch Inspection Tool - Record and replay launch executions for performance analysis | # play_launch
Record, replay, and analyze ROS 2 launch executions with resource monitoring and interactive management.
[](assets/demo.mp4)
## Installation
Install from PyPI:
```bash
pip install play_launch
```
Optional: Enable I/O monitoring (requires sudo):
```bash
play_launch setcap-io-helper
```
## Quick Start
Launch any ROS 2 package with monitoring and Web UI enabled by default:
```bash
play_launch launch demo_nodes_cpp talker_listener.launch.py
```
Access Web UI at `http://127.0.0.1:8080` for real-time node management and log streaming.
The Rust parser is used by default for speed. For maximum compatibility, use `--parser python`.
## Usage
### Launch Files
Replace `ros2 launch` with `play_launch launch`:
```bash
play_launch launch <package> <launch_file> [arguments...]
```
### Single Nodes
Replace `ros2 run` with `play_launch run`:
```bash
play_launch run <package> <executable> [arguments...]
```
### Two-Step Workflow
Record first, replay multiple times:
```bash
# Record
play_launch dump launch <package> <launch_file> [arguments...]
# Replay
play_launch replay
```
## Features
All features enabled by default:
- **Resource monitoring**: CPU, memory, I/O, GPU (2s interval)
- **Diagnostic monitoring**: `/diagnostics` and `/diagnostics_agg` topics
- **Web UI**: Interactive management at `http://127.0.0.1:8080`
- **Container isolation**: Composable nodes run in isolated processes via fork+exec (default)
### Disable Features
Disable specific features:
```bash
play_launch launch <package> <launch_file> --disable-monitoring
play_launch launch <package> <launch_file> --disable-diagnostics
play_launch launch <package> <launch_file> --disable-web-ui
play_launch launch <package> <launch_file> --disable-all
```
### Container Mode
Control how composable nodes are managed (default: `isolated`):
```bash
# Isolated: fork+exec per-node process isolation (default)
play_launch launch <pkg> <file> --container-mode isolated
# Observable: ComponentEvent publishing, shared process
play_launch launch <pkg> <file> --container-mode observable
# Stock: use original container from launch file, no override
play_launch launch <pkg> <file> --container-mode stock
```
### Adjust Monitoring
Change sampling interval (default: 2000ms):
```bash
play_launch launch <package> <launch_file> --monitor-interval-ms 500
```
### Configure Web UI
Change address or port (default: `127.0.0.1:8080`):
```bash
play_launch launch <package> <launch_file> --web-addr 0.0.0.0:8080
```
### Configuration File
Use YAML for advanced control:
```yaml
# config.yaml
monitoring:
enabled: true
sample_interval_ms: 2000
processes:
- node_pattern: "NODE 'rclcpp_components/component_container*"
cpu_affinity: [0, 1]
nice: 5
```
Apply configuration:
```bash
play_launch replay --config config.yaml
```
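The `node_pattern` field above appears to use glob-style wildcards; assuming that, matching can be sketched with Python's stdlib `fnmatch` (the function name here is illustrative, not part of play_launch's API):

```python
import fnmatch

def pattern_matches(node_name: str, node_pattern: str) -> bool:
    """Case-sensitive glob match, as the trailing * in node_pattern suggests."""
    return fnmatch.fnmatchcase(node_name, node_pattern)

# The container pattern from the YAML example matches any container variant:
pattern = "NODE 'rclcpp_components/component_container*"
matched = pattern_matches("NODE 'rclcpp_components/component_container_mt", pattern)
```

Any node whose name matches the pattern would then receive the `cpu_affinity` and `nice` settings from that config entry.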
## Visualization
Generate interactive plots from monitoring data:
```bash
# Plot latest execution
play_launch plot
# Plot specific log directory
play_launch plot --log-dir play_log/2025-10-28_16-17-56
# Plot specific metrics
play_launch plot --metrics cpu memory
# List available metrics
play_launch plot --list-metrics
```
Output saved to `play_log/<timestamp>/plot/`:
- `cpu_timeline.html` - CPU usage over time
- `memory_timeline.html` - Memory usage over time
- `io_timeline.html` - I/O read/write rates
- `cpu_distribution.html` - CPU distribution box plot
- `memory_distribution.html` - Memory distribution box plot
- `statistics.txt` - Top 10 rankings for all metrics
## Web UI Features
- **Node management**: Start/Stop/Restart individual or all nodes
- **Container controls**: Load/Unload composable nodes
- **Real-time logs**: Stream stdout/stderr with log level coloring and filtering
- **Diagnostics panel**: View `/diagnostics` messages with level filtering
- **Status monitoring**: Color-coded node states
- **Auto-restart**: Per-node automatic restart configuration
- **Search & filter**: Find nodes in large deployments
## Output Structure
```
play_log/<timestamp>/
├── node/<node_name>/
│ ├── metadata.json
│ ├── metrics.csv # Resource metrics (when enabled)
│ ├── out/err # Process logs
│ ├── pid/status/cmdline
│ └── params_files/ # ROS parameter files
├── load_node/<name>/
│ └── out/err # Per-composable-node logs (isolated mode)
├── system_stats.csv # System-wide metrics
├── diagnostics.csv # Diagnostic messages (when enabled)
└── plot/ # Generated visualizations
```
## Command Reference
```bash
# Launch (all features enabled by default)
play_launch launch <package> <launch_file> [args...]
play_launch run <package> <executable> [args...]
# Dump and replay
play_launch dump launch <package> <launch_file> [args...]
play_launch replay [--input-file record.json]
# Parser selection (Rust is default)
play_launch launch <pkg> <file> --parser rust # Default, fast
play_launch launch <pkg> <file> --parser python # Maximum compatibility
# Container mode
play_launch launch <pkg> <file> --container-mode isolated # Default
play_launch launch <pkg> <file> --container-mode observable
play_launch launch <pkg> <file> --container-mode stock
# Disable features
play_launch launch <pkg> <file> --disable-monitoring
play_launch launch <pkg> <file> --disable-diagnostics
play_launch launch <pkg> <file> --disable-web-ui
play_launch launch <pkg> <file> --disable-all
play_launch launch <pkg> <file> --disable-respawn
# Enable only specific features
play_launch launch <pkg> <file> --enable monitoring
play_launch launch <pkg> <file> --enable web-ui --enable diagnostics
# Adjust settings
play_launch launch <pkg> <file> --monitor-interval-ms 500
play_launch launch <pkg> <file> --web-addr 0.0.0.0:8080
play_launch launch <pkg> <file> --config config.yaml
# Logging
play_launch launch <pkg> <file> --verbose # Enable INFO level
RUST_LOG=play_launch=debug play_launch launch <pkg> <file> # DEBUG level
# Visualization
play_launch plot
play_launch plot --log-dir <dir>
play_launch plot --metrics cpu memory io gpu
play_launch plot --list-metrics
```
## Development
See [CLAUDE.md](CLAUDE.md) for development guidelines and architecture details.
```bash
# Run checks (clippy + rustfmt + ruff + cpplint + clang-format)
just check
# Format code
just format
# Run tests
just test
```
## License
MIT License. See [LICENSE.txt](LICENSE.txt).
| text/markdown | null | "Lin, Hsiang-Jui" <jerry73204@gmail.com> | null | null | null | ros2, launch, profiling, monitoring, robotics | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Rust",
"Topic :: Software Development :: Testing",
"Topic :: System :: Monitoring"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pyyaml>=6.0",
"lark>=1.2",
"packaging>=24.0",
"plotly>=5.4.1",
"pytest>=8.0; extra == \"dev\"",
"pytest-cov>=4.1; extra == \"dev\"",
"pytest-mock>=3.12; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\"",
"build; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/NEWSLabNTU/play_launch",
"Documentation, https://github.com/NEWSLabNTU/play_launch#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:22:46.820708 | play_launch-0.6.0-py3-none-manylinux_2_35_x86_64.whl | 5,426,281 | 76/80/0ee503ef947d55c2598cc16b847ecb50e5abbbf2bcc1b7e7da7965cdd2d4/play_launch-0.6.0-py3-none-manylinux_2_35_x86_64.whl | py3 | bdist_wheel | null | false | 51b8711b4a43589025bcb80b6e55e12a | 91fbffd51af908cae45613b7afbc9eecfb13a19e96e2a2f195b5695c39cc6271 | 76800ee503ef947d55c2598cc16b847ecb50e5abbbf2bcc1b7e7da7965cdd2d4 | BSD-3-Clause | [
"LICENSE.txt"
] | 132 |
2.4 | expops | 0.1.16.dev0 | MLOps Platform with step-based pipeline execution | # ExpOps
`expops` is a project-based experiment runner: keep each experiment isolated under a workspace, run pipelines, and save run artifacts (with optional tracking/backends).
**[User Guide](https://expops.minima.fit/)**
**Install**:
```bash
pip install expops
```
The installed CLI command is **`expops`**.
## Initial Setup
### Prerequisites
- Git installed and configured
- Access to the project repository
- Required dependencies installed
### First-Time Setup
```bash
# Clone the repository
git clone <repository-url>
cd mlops-platform
# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install the package in editable mode (for development)
pip install -e .
```
## Branch Naming Convention
Use descriptive branch names that follow this pattern:
```
<type>/<short-description>
```
**Types:**
- `feature/` - New features or enhancements
- `bugfix/` - Bug fixes
- `refactor/` - Code refactoring
- `test/` - Adding or updating tests
- `docs/` - Adding or updating documents
**Examples:**
- `feature/model-versioning`
- `bugfix/api-timeout-error`
## Development Workflow
### Step 1: Create a New Branch
Always create a new branch from the latest `main` branch:
```bash
# Update main branch
git checkout main
git pull origin main
# Create and switch to new branch
git checkout -b feature/your-feature-name
```
### Step 2: Make Changes
- Write clean, well-documented code
- Follow the project's coding standards
- Keep changes focused and atomic
- Test your changes locally
### Step 3: Regular Commits
Commit your changes regularly with meaningful messages:
```bash
git add <changed-files>
git commit -m "descriptive commit message"
```
### Step 4: Keep Branch Updated
Regularly sync your branch with main to avoid conflicts:
```bash
git checkout main
git pull origin main
git checkout feature/your-feature-name
git merge main
```
Pushing your branch also triggers the CI/CD pipeline on GitHub Actions, which runs the automated tests.
## Testing Requirements
### Writing Tests
- Write unit tests for new functions and classes
- Place tests in the `tests/` directory
- Name test files as `test_<module_name>.py`
## Merge and Deployment
### Merging
1. Ensure all CI/CD checks pass
2. Rebase if needed to keep history clean
3. Merge using "Squash and merge" or "Rebase and merge"
4. Delete the feature branch after merging
```bash
# After merge, update local main
git checkout main
git pull origin main
git branch -d feature/your-feature-name
```
### Post-Merge
- Monitor for any issues in staging/production
- Update project documentation if needed
- Close related issues
## Publishing to PyPI
The project uses GitHub Actions to automatically build and publish to PyPI when a git tag is pushed.
### Steps to Publish
1. **Update version and create a git tag**:
```bash
# Create a new version tag (e.g., v1.0.0, v1.1.0, v2.0.0)
git tag v<version-number>
# Example:
git tag v1.0.0
```
2. **Push the tag to GitHub**:
```bash
git push origin v<version-number>
# Example:
git push origin v1.0.0
```
| text/markdown | null | null | null | null | GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
| null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"pandas",
"numpy",
"scikit-learn",
"joblib",
"networkx>=2.6",
"PyYAML",
"pydantic>=2",
"dask>=2023.1.0",
"distributed>=2023.1.0",
"lz4==4.0.2",
"lxml>=4.9.0",
"redis>=5.0.0",
"google-cloud-firestore>=2.18.0",
"google-cloud-pubsub>=2.20.0",
"google-cloud-storage>=2.18.0",
"fastapi>=0.110",
"uvicorn>=0.25",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"dask-jobqueue>=0.8.0; extra == \"slurm\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:22:44.526480 | expops-0.1.16.dev0.tar.gz | 485,265 | b7/50/f594c97ecfaed25440cf8c2529b7b44fbf4a579767eb336569a8f8941503/expops-0.1.16.dev0.tar.gz | source | sdist | null | false | c825fa6a8474f470a1bc55a4420f26fa | 96db8e47f7c1643533ab486f4c6f734473adefd6ad176b01a500517684ec712f | b750f594c97ecfaed25440cf8c2529b7b44fbf4a579767eb336569a8f8941503 | null | [
"LICENSE"
] | 228 |
2.4 | runnylib | 0.0.7 | just create file | # This is just a project I made so I can use CSV and JSON with auto file creation
Yeah, that's it. | text/markdown | null | Runny1005 <passakorn.kus@gmail.com> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/Runny1005/createfile",
"Issues, https://github.com/Runny1005/createfile/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T06:22:41.022133 | runnylib-0.0.7.tar.gz | 2,009 | 54/72/926f148cba92f267e49a5c8e362370bc0f71466d41f75a2560bb9885b32e/runnylib-0.0.7.tar.gz | source | sdist | null | false | 86bfe7136f178d6b6799ed08220dedcb | 0565474d4291b73a60d4c1157a7a4a4c059a7a7e2c344f99b000e50c2f5e1feb | 5472926f148cba92f267e49a5c8e362370bc0f71466d41f75a2560bb9885b32e | MIT | [
"LICENSE"
] | 248 |
2.4 | minihost | 0.1.3 | A minimal juce-based plugin host using nanobind | # minihost
Minihost is a headless, JUCE-based audio plugin host that supports VST3, AudioUnit, and LV2 plugins. It provides a C/C++ API for integration and a Python API powered by nanobind.
## Features
- Load VST3 plugins (macOS, Windows, Linux)
- Load AudioUnit plugins (macOS only)
- Load LV2 plugins (macOS, Windows, Linux)
- **Headless mode** (default) - no GUI dependencies, uses JUCE's `juce_audio_processors_headless` module
- **Plugin chaining** - connect multiple plugins in series (synth -> reverb -> limiter)
- **Audio file I/O** via miniaudio + tflac -- read WAV/FLAC/MP3/Vorbis, write WAV (16/24/32-bit) and FLAC (16/24-bit)
- **Real-time audio playback** via miniaudio (cross-platform)
- **Real-time MIDI I/O** via libremidi (cross-platform)
- **Virtual MIDI ports** - create named ports that DAWs can connect to (macOS, Linux)
- **Standalone MIDI input** - monitor raw MIDI messages without a plugin (`MidiIn` class)
- Process audio with sample-accurate parameter automation
- Single and double precision processing
- MIDI input/output support
- Transport info for tempo-synced plugins
- State save/restore for presets and per-program state
- Thread-safe parameter access
- Change notifications (latency, parameter info, program, non-parameter state)
- Parameter gestures for automation bracketing
- Bus layout validation and sidechain support
- Track name/color metadata forwarding to plugins
- Latency and tail time reporting
## Requirements
- CMake 3.20+
- C++17 compiler
- JUCE framework (automatically downloaded if not present)
### Platform-specific
- **macOS**: Xcode command line tools
- **Windows**: Visual Studio 2019+ or MinGW
- **Linux**: Install the following development libraries:
```bash
sudo apt install libasound2-dev libfreetype-dev libfontconfig1-dev \
libwebkit2gtk-4.1-dev libgtk-3-dev libgl-dev libcurl4-openssl-dev
```
## Building
### macOS / Linux
```bash
# Clone the repository
git clone https://github.com/shakfu/minihost.git
cd minihost
# Build (JUCE will be downloaded automatically)
make
# Or with a custom JUCE path
cmake -B build -DJUCE_PATH=/path/to/JUCE
cmake --build build
# Disable headless mode (enables GUI support)
cmake -B build -DMINIHOST_HEADLESS=OFF
cmake --build build
```
### Windows
```powershell
# Clone the repository
git clone https://github.com/shakfu/minihost.git
cd minihost
# Download JUCE
python scripts/download_juce.py
# Configure and build
cmake -B build
cmake --build build --config Release
```
### JUCE Setup
JUCE is downloaded automatically by `make` (macOS/Linux). You can also download it manually:
```bash
# Cross-platform (recommended) - works on Windows, macOS, Linux
python scripts/download_juce.py
# Unix only (bash)
./scripts/download_juce.sh
```
To use a different version or existing installation:
```bash
# Download specific version (macOS/Linux)
JUCE_VERSION=8.0.6 python scripts/download_juce.py
# Download specific version (Windows PowerShell)
$env:JUCE_VERSION="8.0.6"; python scripts/download_juce.py
# Or point to existing JUCE
cmake -B build -DJUCE_PATH=/path/to/your/JUCE
```
## Command Line Interface
The `minihost` command provides a CLI for common plugin operations:
```bash
# Install (from source)
uv sync
# Available commands
minihost --help
usage: minihost [-h] [-r SAMPLE_RATE] [-b BLOCK_SIZE]
{scan,info,params,midi,play,process} ...
Audio plugin hosting CLI
positional arguments:
{scan,info,params,midi,play,process}
Commands
scan Scan directory for plugins
info Show plugin info
params List plugin parameters
midi List or monitor MIDI ports
play Play plugin with real-time audio/MIDI
process Process audio through plugin (offline)
options:
-h, --help show this help message and exit
-r, --sample-rate SAMPLE_RATE
Sample rate in Hz (default: 48000)
-b, --block-size BLOCK_SIZE
Block size in samples (default: 512)
```
### Commands
#### `minihost info` - Show plugin info
```bash
minihost info /path/to/plugin.vst3 # full info (loads plugin)
minihost info /path/to/plugin.vst3 --probe # lightweight metadata only
minihost info /path/to/plugin.vst3 --json # JSON output
```
By default, `info` shows full runtime details (sample rate, channels, latency, buses, presets). Use `--probe` for a fast, metadata-only mode that avoids fully loading the plugin.
#### `minihost scan` - Scan directory for plugins
```bash
minihost scan /Library/Audio/Plug-Ins/VST3/
minihost scan ~/Music/Plugins --json
```
#### `minihost params` - List plugin parameters
```bash
minihost params /path/to/plugin.vst3
minihost params /path/to/plugin.vst3 --json
```
#### `minihost midi` - List or monitor MIDI ports
```bash
minihost midi # list all MIDI ports
minihost midi --json # list as JSON
minihost midi -m 0 # monitor MIDI input port 0
minihost midi --virtual-midi "Monitor" # create virtual port and monitor
```
#### `minihost play` - Play plugin with real-time audio/MIDI
```bash
# Connect to MIDI input port 0
minihost play /path/to/synth.vst3 --midi 0
# Create a virtual MIDI port (macOS/Linux)
minihost play /path/to/synth.vst3 --virtual-midi "My Synth"
```
#### `minihost process` - Process audio/MIDI offline
```bash
# Process audio through effect
minihost process /path/to/effect.vst3 -i input.wav -o output.wav
# With parameter control
minihost process /path/to/effect.vst3 -i input.wav -o output.wav --param "Mix:0.5"
# Render MIDI through synth
minihost process /path/to/synth.vst3 -m song.mid -o output.wav --tail 3.0
# With preset and bit depth
minihost process /path/to/synth.vst3 -m song.mid -o output.wav --preset 5 --bit-depth 16
# Sidechain processing (second -i is sidechain)
minihost process /path/to/compressor.vst3 -i main.wav -i sidechain.wav -o output.wav
```
### Global Options
| Option | Description |
|--------|-------------|
| `-r, --sample-rate` | Sample rate in Hz (default: 48000) |
| `-b, --block-size` | Block size in samples (default: 512) |
## Python API
```bash
uv sync
```
```python
import numpy as np
import minihost
plugin = minihost.Plugin("/path/to/plugin.vst3", sample_rate=48000)
input_audio = np.zeros((2, 512), dtype=np.float32)
output_audio = np.zeros((2, 512), dtype=np.float32)
plugin.process(input_audio, output_audio)
```
### Real-time Audio Playback
```python
import minihost
import time
plugin = minihost.Plugin("/path/to/synth.vst3", sample_rate=48000)
# Use as context manager for automatic start/stop
with minihost.AudioDevice(plugin) as audio:
# Plugin is now producing audio through speakers
# Send MIDI programmatically
audio.send_midi(0x90, 60, 100) # Note on: C4, velocity 100
time.sleep(1)
audio.send_midi(0x80, 60, 0) # Note off
time.sleep(0.5)
# Or manual control
audio = minihost.AudioDevice(plugin)
audio.start()
audio.send_midi(0x90, 64, 80) # E4 note on
time.sleep(0.5)
audio.send_midi(0x80, 64, 0) # E4 note off
audio.stop()
```
### Real-time MIDI I/O
```python
import minihost
# Enumerate available MIDI ports
inputs = minihost.midi_get_input_ports()
outputs = minihost.midi_get_output_ports()
print(f"MIDI Inputs: {inputs}")
print(f"MIDI Outputs: {outputs}")
# Connect MIDI when creating AudioDevice
with minihost.AudioDevice(plugin, midi_input_port=0) as audio:
# MIDI from port 0 is now routed to the plugin
pass
# Or connect dynamically
audio = minihost.AudioDevice(plugin)
audio.connect_midi_input(0)
audio.start()
# ...
audio.disconnect_midi_input()
audio.stop()
# Create virtual MIDI ports (appear in system MIDI, DAWs can connect)
audio = minihost.AudioDevice(plugin)
audio.create_virtual_midi_input("minihost Input")
audio.create_virtual_midi_output("minihost Output")
audio.start()
# Other apps can now send MIDI to "minihost Input"
# and receive MIDI from "minihost Output"
```
### Standalone MIDI Input
Monitor MIDI messages without loading a plugin:
```python
import minihost
def on_midi(data: bytes):
status = data[0]
if status & 0xF0 == 0x90 and data[2] > 0:
print(f"Note On: {data[1]} vel={data[2]}")
# Open hardware MIDI port
with minihost.MidiIn.open(0, on_midi) as midi_in:
input("Press Enter to stop...\n")
# Or create a virtual MIDI port
with minihost.MidiIn.open_virtual("My Monitor", on_midi) as midi_in:
input("Press Enter to stop...\n")
```
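The callback above inspects raw status bytes by hand. As a reference for writing such callbacks, channel-voice messages pack the message type in the high nibble of the status byte and the channel in the low nibble. A minimal decoder sketch in plain Python (no minihost required; the message names and string format here are illustrative, not part of the minihost API):

```python
def decode_midi(data: bytes) -> str:
    """Decode a 3-byte channel-voice MIDI message into a readable string."""
    status, d1, d2 = data[0], data[1], data[2]
    kind = status & 0xF0  # high nibble: message type
    chan = status & 0x0F  # low nibble: channel 0-15
    if kind == 0x90 and d2 > 0:
        return f"ch{chan} note_on pitch={d1} vel={d2}"
    if kind == 0x80 or (kind == 0x90 and d2 == 0):
        # Note-on with velocity 0 is treated as note-off by MIDI convention
        return f"ch{chan} note_off pitch={d1}"
    if kind == 0xB0:
        return f"ch{chan} cc controller={d1} value={d2}"
    return f"ch{chan} status=0x{kind:02X} data=({d1}, {d2})"

print(decode_midi(bytes([0x90, 60, 100])))  # ch0 note_on pitch=60 vel=100
```

This is why the `on_midi` example checks `data[2] > 0`: a note-on with zero velocity must be handled as a note-off.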
### Audio File I/O
```python
import minihost
# Read audio files (WAV, FLAC, MP3, Vorbis)
data, sample_rate = minihost.read_audio("input.wav")
# data shape: (channels, samples), dtype: float32
# Write audio files
minihost.write_audio("output.wav", data, sample_rate, bit_depth=24) # WAV (16/24/32-bit)
minihost.write_audio("output.flac", data, sample_rate, bit_depth=24) # FLAC (16/24-bit)
# Get file info without decoding
info = minihost.get_audio_info("song.wav")
print(f"{info['channels']}ch, {info['sample_rate']}Hz, {info['duration']:.2f}s")
```
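Note that `read_audio` returns planar `(channels, samples)` data, whereas the C layer (see the C API section below) exposes interleaved frames. Converting between the two is a stride reshuffle; a plain-Python sketch for clarity (with numpy you would typically use `arr.reshape(-1, channels).T` instead):

```python
def deinterleave(samples, channels):
    """[L0, R0, L1, R1, ...] -> [[L0, L1, ...], [R0, R1, ...]]"""
    # Take every `channels`-th sample, starting at each channel offset
    return [samples[c::channels] for c in range(channels)]

print(deinterleave([0.1, 0.9, 0.2, 0.8], 2))  # [[0.1, 0.2], [0.9, 0.8]]
```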
### MIDI File Read/Write
```python
import minihost
# Create a new MIDI file
mf = minihost.MidiFile()
mf.ticks_per_quarter = 480
# Add events
mf.add_tempo(0, 0, 120.0) # 120 BPM at tick 0
mf.add_note_on(0, 0, 0, 60, 100) # C4 note on at tick 0
mf.add_note_off(0, 480, 0, 60, 0) # C4 note off at tick 480
# Save to file
mf.save("output.mid")
# Load existing MIDI file
mf2 = minihost.MidiFile()
mf2.load("input.mid")
# Read events
events = mf2.get_events(0) # Get events from track 0
for event in events:
if event['type'] == 'note_on':
print(f"Note {event['pitch']} vel {event['velocity']} at {event['seconds']:.2f}s")
```
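The `seconds` field on each event follows from the tempo map. At a fixed tempo the conversion is `seconds = ticks * 60 / (bpm * ticks_per_quarter)`; a quick arithmetic sketch (independent of minihost) using the values from the example above:

```python
def ticks_to_seconds(ticks: int, ticks_per_quarter: int, bpm: float) -> float:
    """Convert MIDI ticks to seconds at a fixed tempo."""
    return ticks * 60.0 / (bpm * ticks_per_quarter)

# The note-off at tick 480 above is one quarter note: at 120 BPM that is 0.5 s
print(ticks_to_seconds(480, 480, 120.0))  # 0.5
```

Files with tempo changes need the piecewise version of this (sum per tempo segment), which is what the `seconds` field already accounts for.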
### MIDI File Rendering
Render MIDI files through plugins to produce audio output:
```python
import minihost
plugin = minihost.Plugin("/path/to/synth.vst3", sample_rate=48000)
# Render to numpy array
audio = minihost.render_midi(plugin, "song.mid")
print(f"Rendered {audio.shape[1] / 48000:.2f} seconds of audio")
# Render directly to WAV file
samples = minihost.render_midi_to_file(plugin, "song.mid", "output.wav", bit_depth=24)
# Stream blocks for large files or real-time processing
for block in minihost.render_midi_stream(plugin, "song.mid", block_size=512):
# Process each block (shape: channels, block_size)
pass
# Fine-grained control with MidiRenderer class
renderer = minihost.MidiRenderer(plugin, "song.mid")
print(f"Duration: {renderer.duration_seconds:.2f}s")
while not renderer.is_finished:
block = renderer.render_block()
print(f"Progress: {renderer.progress:.1%}")
```
### Plugin Chaining
Chain multiple plugins together for serial processing:
```python
import minihost
import time
# Load plugins (all must have same sample rate)
synth = minihost.Plugin("/path/to/synth.vst3", sample_rate=48000)
reverb = minihost.Plugin("/path/to/reverb.vst3", sample_rate=48000)
limiter = minihost.Plugin("/path/to/limiter.vst3", sample_rate=48000)
# Create chain
chain = minihost.PluginChain([synth, reverb, limiter])
print(f"Total latency: {chain.latency_samples} samples")
print(f"Tail length: {chain.tail_seconds:.2f} seconds")
# Real-time playback through chain
with minihost.AudioDevice(chain) as audio:
audio.send_midi(0x90, 60, 100) # Note on to synth
time.sleep(2)
audio.send_midi(0x80, 60, 0) # Note off
time.sleep(1) # Let reverb tail fade
# Offline processing
import numpy as np
input_audio = np.zeros((2, 512), dtype=np.float32)
output_audio = np.zeros((2, 512), dtype=np.float32)
chain.process(input_audio, output_audio)
# Process with MIDI (MIDI goes to first plugin)
midi_events = [(0, 0x90, 60, 100)]
chain.process_midi(input_audio, output_audio, midi_events)
# Sample-accurate automation across chain
# param_changes: (sample_offset, plugin_index, param_index, value)
param_changes = [
(0, 1, 0, 0.3), # Set reverb param 0 at sample 0
(256, 1, 0, 0.6), # Change reverb param 0 at sample 256
(0, 2, 0, 0.8), # Set limiter param 0 at sample 0
]
chain.process_auto(input_audio, output_audio, midi_events, param_changes)
# Render MIDI file through chain
audio = minihost.render_midi(chain, "song.mid")
minihost.render_midi_to_file(chain, "song.mid", "output.wav")
# Access individual plugins in chain
for i in range(chain.num_plugins):
plugin = chain.get_plugin(i)
print(f"Plugin {i}: {plugin.num_params} params")
```
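Conceptually, the chain feeds each plugin's output into the next plugin's input, so stage order matters (a limiter before a reverb sounds very different from the reverse). A toy sketch of that serial wiring over plain Python callables — illustrative only, not the real `PluginChain` implementation:

```python
def run_chain(stages, block):
    """Feed `block` through each stage in order, like a serial plugin chain."""
    for stage in stages:
        block = stage(block)
    return block

gain   = lambda xs: [x * 0.5 for x in xs]  # stand-in for one plugin's process()
offset = lambda xs: [x + 1.0 for x in xs]  # stand-in for the next plugin

print(run_chain([gain, offset], [2.0, 4.0]))  # [2.0, 3.0]
print(run_chain([offset, gain], [2.0, 4.0]))  # [1.5, 2.5] -- order matters
```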
## C API Usage
```c
#include "minihost.h"
// Load a plugin
char err[256];
MH_Plugin* plugin = mh_open("/path/to/plugin.vst3",
48000.0, // sample rate
512, // max block size
2, 2, // in/out channels
err, sizeof(err));
// Process audio
float* inputs[2] = { in_left, in_right };
float* outputs[2] = { out_left, out_right };
mh_process(plugin, inputs, outputs, 512);
// Process with MIDI
MH_MidiEvent midi[] = {
{ 0, 0x90, 60, 100 }, // Note on at sample 0
{ 256, 0x80, 60, 0 } // Note off at sample 256
};
mh_process_midi(plugin, inputs, outputs, 512, midi, 2);
// Parameter control
int num_params = mh_get_num_params(plugin);
float value = mh_get_param(plugin, 0);
mh_set_param(plugin, 0, 0.5f);
// State save/restore
int size = mh_get_state_size(plugin);
void* state = malloc(size);
mh_get_state(plugin, state, size);
mh_set_state(plugin, state, size);
// Cleanup
mh_close(plugin);
```
### Real-time Audio Playback
```c
#include "minihost_audio.h"
// Open audio device for real-time playback
MH_AudioConfig config = { .sample_rate = 48000, .buffer_frames = 512 };
MH_AudioDevice* audio = mh_audio_open(plugin, &config, err, sizeof(err));
// Start playback
mh_audio_start(audio);
// Plugin is now producing audio through speakers
// Send MIDI, adjust parameters, etc.
// Stop and cleanup
mh_audio_stop(audio);
mh_audio_close(audio);
mh_close(plugin);
```
### Real-time MIDI I/O
```c
#include "minihost_midi.h"
// Enumerate available MIDI ports
int num_inputs = mh_midi_get_num_inputs();
int num_outputs = mh_midi_get_num_outputs();
for (int i = 0; i < num_inputs; i++) {
char name[256];
mh_midi_get_input_name(i, name, sizeof(name));
printf("MIDI Input %d: %s\n", i, name);
}
// Connect MIDI to audio device
MH_AudioConfig config = {
.sample_rate = 48000,
.midi_input_port = 0, // Connect to first MIDI input
.midi_output_port = -1 // No MIDI output
};
MH_AudioDevice* audio = mh_audio_open(plugin, &config, err, sizeof(err));
// Or connect/disconnect dynamically
mh_audio_connect_midi_input(audio, 1);
mh_audio_disconnect_midi_input(audio);
// Create virtual MIDI ports (appear in system MIDI, DAWs can connect)
mh_audio_create_virtual_midi_input(audio, "minihost Input");
mh_audio_create_virtual_midi_output(audio, "minihost Output");
```
### Plugin Chaining
Chain multiple plugins together for processing (e.g., synth -> reverb -> limiter):
```c
#include "minihost_chain.h"
// Load plugins
MH_Plugin* synth = mh_open("/path/to/synth.vst3", 48000, 512, 0, 2, err, sizeof(err));
MH_Plugin* reverb = mh_open("/path/to/reverb.vst3", 48000, 512, 2, 2, err, sizeof(err));
MH_Plugin* limiter = mh_open("/path/to/limiter.vst3", 48000, 512, 2, 2, err, sizeof(err));
// Create chain (all plugins must have same sample rate)
MH_Plugin* plugins[] = { synth, reverb, limiter };
MH_PluginChain* chain = mh_chain_create(plugins, 3, err, sizeof(err));
// Get combined latency
int latency = mh_chain_get_latency_samples(chain);
// Process audio through chain
float* inputs[2] = { in_left, in_right };
float* outputs[2] = { out_left, out_right };
mh_chain_process(chain, inputs, outputs, 512);
// Process with MIDI (MIDI goes to first plugin only)
MH_MidiEvent midi[] = { { 0, 0x90, 60, 100 } };
mh_chain_process_midi_io(chain, inputs, outputs, 512, midi, 1, NULL, 0, NULL);
// Sample-accurate automation across chain
MH_ChainParamChange changes[] = {
{ .sample_offset = 0, .plugin_index = 1, .param_index = 0, .value = 0.3f },
{ .sample_offset = 256, .plugin_index = 1, .param_index = 0, .value = 0.6f },
};
mh_chain_process_auto(chain, inputs, outputs, 512,
NULL, 0, NULL, 0, NULL, changes, 2);
// Real-time playback through chain
MH_AudioConfig config = { .sample_rate = 48000, .buffer_frames = 512 };
MH_AudioDevice* audio = mh_audio_open_chain(chain, &config, err, sizeof(err));
mh_audio_start(audio);
// ...
mh_audio_stop(audio);
mh_audio_close(audio);
// Cleanup
mh_chain_close(chain); // Does not close individual plugins
mh_close(synth);
mh_close(reverb);
mh_close(limiter);
```
### Audio File I/O
Read and write audio files without external dependencies:
```c
#include "minihost_audiofile.h"
// Read any supported format (WAV, FLAC, MP3, Vorbis)
char err[1024];
MH_AudioData* audio = mh_audio_read("input.flac", err, sizeof(err));
if (audio) {
printf("Channels: %u, Frames: %u, Rate: %u\n",
audio->channels, audio->frames, audio->sample_rate);
// audio->data is interleaved float32
mh_audio_data_free(audio);
}
// Write audio file (format selected by extension)
mh_audio_write("output.wav", interleaved_data,
2, num_frames, 48000, 24, err, sizeof(err)); // WAV
mh_audio_write("output.flac", interleaved_data,
2, num_frames, 48000, 24, err, sizeof(err)); // FLAC
// Get file info without decoding
MH_AudioFileInfo info;
mh_audio_get_file_info("song.wav", &info, err, sizeof(err));
printf("Duration: %.2f seconds\n", info.duration);
```
## Thread Safety
- `mh_process`, `mh_process_midi`, `mh_process_midi_io`, `mh_process_auto`: Call from audio thread only (no locking)
- All other functions are thread-safe with internal locking
- Do not call `mh_close` while another thread is using the plugin
## API Reference
Detailed API documentation:
- [C API Reference](docs/api_c.md) -- `minihost.h`, `minihost_audio.h`, `minihost_audiofile.h`, `minihost_chain.h`, `minihost_midi.h`
- [Python API Reference](docs/api_python.md) -- `Plugin`, `PluginChain`, `AudioDevice`, `MidiFile`, `MidiIn`, audio I/O, MIDI rendering, automation, VST3 presets
- [Hosting Guide](docs/hosting_guide.md) -- practical guide with extended examples
## License
GPL-3.0-or-later
| text/markdown | null | Shakeeb Alireza <shakfu@users.noreply.github.com> | null | null | null | audiounit, vst3, plugin, dsp, synthesis, music, juce, nanobind, midi | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Multimedia :: Sound/Audio",
"Topic :: Multimedia :: Sound/Audio :: Sound Synthesis",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.20"
] | [] | [] | [] | [
"Homepage, https://github.com/shakfu/minihost",
"Repository, https://github.com/shakfu/minihost",
"Issues, https://github.com/shakfu/minihost/issues"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-21T06:22:13.322317 | minihost-0.1.3.tar.gz | 23,904,374 | 7d/a9/2c7e3e294302cedc39c7fe3a3d7afbdddcb772b64165681a1e96dc5760ef/minihost-0.1.3.tar.gz | source | sdist | null | false | fd5c40b9d24f83a256bb2d43135dd104 | ccc38fc1ca3ef60bd40f17393628d1b226d0a0c86adae4d739f5352ecee76ea0 | 7da92c7e3e294302cedc39c7fe3a3d7afbdddcb772b64165681a1e96dc5760ef | GPL-3.0-or-later | [] | 1,179 |
2.4 | openrewrite | 8.74.0.dev20260221062044 | OpenRewrite automated refactoring for Python. | # OpenRewrite Python
OpenRewrite automated refactoring for Python source code.
## Installation
```bash
pip install openrewrite
```
## Quick Start
```python
from rewrite.python import PythonParser
from rewrite import ExecutionContext
# Parse Python source code
parser = PythonParser()
ctx = ExecutionContext()
source_files = parser.parse(ctx, "example.py")
# Apply recipes to transform code
# ...
```
## Writing Recipes
```python
from dataclasses import dataclass, field
from rewrite import Recipe, option
from rewrite.python import PythonVisitor
@dataclass
class ChangeImport(Recipe):
"""Changes an import from one module to another."""
old_module: str = field(metadata=option(
display_name="Old module",
description="The module to change imports from",
example="flask"
))
new_module: str = field(metadata=option(
display_name="New module",
description="The module to change imports to",
example="flask_restful"
))
@property
def name(self) -> str:
return "org.openrewrite.python.ChangeImport"
@property
def display_name(self) -> str:
return "Change import"
@property
def description(self) -> str:
return "Changes an import from one module to another."
def editor(self) -> PythonVisitor:
# Implementation...
pass
```
## Documentation
See [docs.openrewrite.org](https://docs.openrewrite.org) for full documentation.
## License
Moderne Source Available License - see [LICENSE.md](../LICENSE.md)
| text/markdown | null | "Moderne Inc." <support@moderne.io> | null | null | Moderne Source Available License | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Code Generators",
"Topic :: Software Development :: Quality Assurance",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cbor2>=5.6.5",
"more_itertools>=10.0.0",
"ty-types>=0.0.18.dev0",
"parso<0.8,>=0.7.1",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-timeout>=2.0.0; extra == \"dev\"",
"tox>=4.0.0; extra == \"dev\"",
"black>=24.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"build>=1.0.0; extra == \"publish\"",
"twine>=5.0.0; extra == \"publish\""
] | [] | [] | [] | [
"Homepage, https://github.com/openrewrite/rewrite",
"Repository, https://github.com/openrewrite/rewrite.git",
"Documentation, https://docs.openrewrite.org",
"Issues, https://github.com/openrewrite/rewrite/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T06:21:56.020708 | openrewrite-8.74.0.dev20260221062044.tar.gz | 238,454 | 64/08/b6340d562ce9142836d57b04730c8d6cc9fae6e98589bb01b94aa9c5bad0/openrewrite-8.74.0.dev20260221062044.tar.gz | source | sdist | null | false | 27ab5d82f6bcadb3b03407fc803827d1 | 88e136d371b187f5de27a8b9c19204c5cb222b6222e40ebf1f38982843a6f0f7 | 6408b6340d562ce9142836d57b04730c8d6cc9fae6e98589bb01b94aa9c5bad0 | null | [] | 229 |
2.4 | hanzo | 0.4.0 | Hanzo AI - Complete AI Infrastructure Platform with CLI, Router, MCP, and Agent Runtime | # Hanzo CLI and Orchestration Tools
[](https://pypi.org/project/hanzo/)
[](https://pypi.org/project/hanzo/)
Core CLI and orchestration tools for the Hanzo AI platform.
## Installation
```bash
pip install hanzo
```
## Features
- **Interactive Chat**: Chat with AI models through CLI
- **Node Management**: Run local AI inference nodes
- **Router Control**: Manage LLM proxy router
- **REPL Interface**: Interactive Python REPL with AI
- **Batch Orchestration**: Orchestrate multiple AI tasks
- **Memory Management**: Persistent conversation memory
## Usage
### CLI Commands
```bash
# Interactive chat
hanzo chat
# Use specific model
hanzo chat --model gpt-4
# Use router (local proxy)
hanzo chat --router
# Use cloud API
hanzo chat --cloud
```
### Node Management
```bash
# Start local node
hanzo node start
# Check status
hanzo node status
# List available models
hanzo node models
# Load specific model
hanzo node load llama2:7b
# Stop node
hanzo node stop
```
### Router Management
```bash
# Start router proxy
hanzo router start
# Check router status
hanzo router status
# List available models
hanzo router models
# View configuration
hanzo router config
# Stop router
hanzo router stop
```
### Interactive REPL
```bash
# Start REPL
hanzo repl
# In REPL:
> /help # Show help
> /models # List models
> /model gpt-4 # Switch model
> /clear # Clear context
> What is Python? # Ask questions
```
## Python API
### Batch Orchestration
```python
from hanzo.batch_orchestrator import BatchOrchestrator
orchestrator = BatchOrchestrator()
results = await orchestrator.run_batch([
"Summarize quantum computing",
"Explain machine learning",
"Define artificial intelligence"
])
```
### Memory Management
```python
from hanzo.memory_manager import MemoryManager
memory = MemoryManager()
memory.add_to_context("user", "What is Python?")
memory.add_to_context("assistant", "Python is...")
context = memory.get_context()
```
### Fallback Handling
```python
from hanzo.fallback_handler import FallbackHandler
handler = FallbackHandler()
result = await handler.handle_with_fallback(
primary_fn=api_call,
fallback_fn=local_inference
)
```
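The fallback pattern itself is simple: await the primary call, and on failure await the fallback. A self-contained sketch of the idea in plain `asyncio` (illustrative only — the real `FallbackHandler` may add retries, logging, or model-specific logic):

```python
import asyncio

async def with_fallback(primary, fallback):
    """Try the primary coroutine; on any exception, run the fallback."""
    try:
        return await primary()
    except Exception:
        return await fallback()

async def flaky_api():
    raise ConnectionError("cloud endpoint unreachable")

async def local_inference():
    return "local result"

print(asyncio.run(with_fallback(flaky_api, local_inference)))  # local result
```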
## Configuration
### Environment Variables
```bash
# API settings
HANZO_API_KEY=your-api-key
HANZO_BASE_URL=https://api.hanzo.ai
# Router settings
HANZO_ROUTER_URL=http://localhost:4000/v1
# Node settings
HANZO_NODE_URL=http://localhost:8000/v1
HANZO_NODE_WORKERS=4
# Model preferences
HANZO_DEFAULT_MODEL=gpt-4
HANZO_FALLBACK_MODEL=llama2:7b
```
### Configuration File
Create `~/.hanzo/config.yaml`:
```yaml
api:
key: your-api-key
base_url: https://api.hanzo.ai
router:
url: http://localhost:4000/v1
auto_start: true
node:
url: http://localhost:8000/v1
workers: 4
models:
- llama2:7b
- mistral:7b
models:
default: gpt-4
fallback: llama2:7b
```
## Architecture
### Components
- **CLI**: Command-line interface (`cli.py`)
- **Chat**: Interactive chat interface (`commands/chat.py`)
- **Node**: Local AI node management (`commands/node.py`)
- **Router**: LLM proxy management (`commands/router.py`)
- **REPL**: Interactive Python REPL (`interactive/repl.py`)
- **Orchestrator**: Batch task orchestration (`batch_orchestrator.py`)
- **Memory**: Conversation memory (`memory_manager.py`)
- **Fallback**: Resilient API handling (`fallback_handler.py`)
### Port Allocation
- **4000**: Router (LLM proxy)
- **8000**: Node (local AI)
- **9550-9553**: Desktop app integration
## Development
### Setup
```bash
cd pkg/hanzo
uv sync --all-extras
```
### Testing
```bash
# Run tests
pytest tests/
# With coverage
pytest tests/ --cov=hanzo
```
### Building
```bash
uv build
```
## License
Apache License 2.0 | text/markdown | null | Hanzo AI <dev@hanzo.ai> | null | null | null | agents, ai, cli, hanzo, llm, local-ai, mcp, private-ai | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"anthropic>=0.25.0",
"click>=8.1.0",
"httpx>=0.23.0",
"openai>=1.0.0",
"prompt-toolkit>=3.0.0",
"pydantic>=2.0.0",
"pyyaml>=6.0",
"qrcode>=7.4.2",
"rich>=13.0.0",
"typer>=0.9.0",
"hanzo-agents>=0.1.0; extra == \"agents\"",
"hanzo-network>=0.1.3; extra == \"agents\"",
"hanzoai>=1.0.0; extra == \"ai\"",
"hanzo-aci>=0.2.8; extra == \"all\"",
"hanzo-agents>=0.1.0; extra == \"all\"",
"hanzo-cli>=0.2.0; extra == \"all\"",
"hanzo-kms>=1.1.0; extra == \"all\"",
"hanzo-mcp>=0.7.0; extra == \"all\"",
"hanzo-memory>=1.0.0; extra == \"all\"",
"hanzo-network>=0.1.3; extra == \"all\"",
"hanzo-repl>=0.1.0; extra == \"all\"",
"hanzoai>=1.0.0; extra == \"all\"",
"hanzo-cli>=0.2.0; extra == \"cli\"",
"hanzo-kms>=1.1.0; extra == \"cli\"",
"croniter>=2.0.0; extra == \"cron\"",
"redis>=5.0.0; extra == \"cron\"",
"hanzo-aci>=0.2.8; extra == \"dev\"",
"motor>=3.3.0; extra == \"documentdb\"",
"pymongo>=4.6.0; extra == \"documentdb\"",
"httpx>=0.23.0; extra == \"functions\"",
"aiobotocore>=2.9.0; extra == \"infra\"",
"botocore>=1.34.0; extra == \"infra\"",
"croniter>=2.0.0; extra == \"infra\"",
"httpx>=0.23.0; extra == \"infra\"",
"meilisearch-python-sdk>=3.0.0; extra == \"infra\"",
"motor>=3.3.0; extra == \"infra\"",
"nats-py>=2.6.0; extra == \"infra\"",
"pymongo>=4.6.0; extra == \"infra\"",
"qdrant-client>=1.7.0; extra == \"infra\"",
"redis>=5.0.0; extra == \"infra\"",
"temporalio>=1.4.0; extra == \"infra\"",
"redis>=5.0.0; extra == \"kv\"",
"hanzo-mcp>=0.7.0; extra == \"mcp\"",
"nats-py>=2.6.0; extra == \"pubsub\"",
"redis>=5.0.0; extra == \"queues\"",
"hanzo-repl>=0.1.0; extra == \"repl\"",
"meilisearch-python-sdk>=3.0.0; extra == \"search\"",
"aiobotocore>=2.9.0; extra == \"storage\"",
"botocore>=1.34.0; extra == \"storage\"",
"temporalio>=1.4.0; extra == \"tasks\"",
"qdrant-client>=1.7.0; extra == \"vector\""
] | [] | [] | [] | [
"Homepage, https://hanzo.ai",
"Repository, https://github.com/hanzoai/python-sdk",
"Documentation, https://docs.hanzo.ai/cli",
"Bug Tracker, https://github.com/hanzoai/python-sdk/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T06:20:20.522181 | hanzo-0.4.0.tar.gz | 242,797 | a5/aa/6815d0a6935a6574a5baa944467c22ad16bfcbcd3a2ff68a5e0e033cae54/hanzo-0.4.0.tar.gz | source | sdist | null | false | 0c3766129f0a8a8081f030188bb70106 | a2bb44a4a173c970593c63da27c96c64af3f4f899313798d2644cc351618eb4f | a5aa6815d0a6935a6574a5baa944467c22ad16bfcbcd3a2ff68a5e0e033cae54 | null | [] | 229 |
2.4 | hanzo-cli | 0.2.0 | Hanzo unified CLI — IAM, KMS, Deploy | # hanzo-cli
Unified CLI for the Hanzo platform — IAM, KMS, and PaaS management.
## Install
```bash
pip install hanzo-cli
```
## Usage
```bash
hanzo login
hanzo whoami
hanzo iam users
hanzo kms list
hanzo paas deploy list
```
| text/markdown | null | Hanzo AI <dev@hanzo.ai> | null | null | MIT | cli, devops, hanzo, iam | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.0",
"hanzo-iam>=1.0.0",
"hanzo-kms>=1.0.0",
"httpx>=0.25.0",
"pyjwt>=2.8.0",
"rich>=13.0",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.5.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://hanzo.ai",
"Repository, https://github.com/hanzoai/python-sdk"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T06:20:11.096125 | hanzo_cli-0.2.0.tar.gz | 18,707 | 77/88/f89c9dcf06b3a526f2b2b828fd77ca352046995212c4504a1eca72fd1017/hanzo_cli-0.2.0.tar.gz | source | sdist | null | false | f710d4d5d7da87e56e96aba05d8473d7 | 15b8144dcf0ae7a98d9737af386af6c629d9813fddd6e83c8ef07bd8dab69099 | 7788f89c9dcf06b3a526f2b2b828fd77ca352046995212c4504a1eca72fd1017 | null | [] | 252 |
2.4 | hanzo-kms | 1.1.0 | Hanzo KMS SDK - Secret management for Python | # Hanzo KMS - Python SDK
Official Python SDK for [Hanzo KMS](https://kms.hanzo.ai) - Secret management for your applications.
## Installation
```bash
pip install hanzo-kms
```
Or with uv:
```bash
uv add hanzo-kms
```
## Quick Start
```python
from hanzo_kms import KMSClient, ClientSettings, AuthenticationOptions, UniversalAuthMethod
# Initialize client
client = KMSClient(ClientSettings(
site_url="https://kms.hanzo.ai",
auth=AuthenticationOptions(
universal_auth=UniversalAuthMethod(
client_id="your-client-id",
client_secret="your-client-secret",
)
)
))
# List all secrets
secrets = client.list_secrets(
project_id="my-project",
environment="production"
)
for secret in secrets:
print(f"{secret.secret_key}: {secret.secret_value}")
# Get a specific secret
db_url = client.get_value(
project_id="my-project",
environment="production",
secret_name="DATABASE_URL"
)
# Inject all secrets into environment
client.inject_env(
project_id="my-project",
environment="production"
)
```
## Environment Variables
The client can be configured via environment variables:
```bash
export HANZO_KMS_URL="https://kms.hanzo.ai"
export HANZO_KMS_CLIENT_ID="your-client-id"
export HANZO_KMS_CLIENT_SECRET="your-client-secret"
```
Then simply:
```python
from hanzo_kms import KMSClient
client = KMSClient() # Uses environment variables
secrets = client.list_secrets("my-project", "production")
```
## Authentication Methods
### Universal Auth (Recommended)
```python
from hanzo_kms import KMSClient, ClientSettings, AuthenticationOptions, UniversalAuthMethod
client = KMSClient(ClientSettings(
auth=AuthenticationOptions(
universal_auth=UniversalAuthMethod(
client_id="...",
client_secret="...",
)
)
))
```
### Kubernetes Auth
For workloads running in Kubernetes:
```python
from hanzo_kms import KMSClient, ClientSettings, AuthenticationOptions, KubernetesAuthMethod
client = KMSClient(ClientSettings(
auth=AuthenticationOptions(
kubernetes=KubernetesAuthMethod(
identity_id="your-identity-id",
# Uses default service account token path
)
)
))
```
### AWS IAM Auth
```python
from hanzo_kms import KMSClient, ClientSettings, AuthenticationOptions, AWSIamAuthMethod
client = KMSClient(ClientSettings(
auth=AuthenticationOptions(
aws_iam=AWSIamAuthMethod(
identity_id="your-identity-id",
)
)
))
```
## API Reference
### KMSClient
| Method | Description |
|--------|-------------|
| `list_secrets(project_id, environment, path="/")` | List all secrets |
| `get_secret(project_id, environment, secret_name)` | Get a single secret |
| `get_value(project_id, environment, secret_name, default=None)` | Get just the value |
| `create_secret(project_id, environment, secret_name, secret_value)` | Create a secret |
| `update_secret(project_id, environment, secret_name, secret_value)` | Update a secret |
| `delete_secret(project_id, environment, secret_name)` | Delete a secret |
| `inject_env(project_id, environment, overwrite=False)` | Inject into os.environ |
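The `inject_env` row above implies a specific merge rule: existing environment variables are kept unless `overwrite=True`. A dict-backed stand-in can illustrate that contract (the `FakeKMS` class here is hypothetical, purely for illustration — it is not part of the SDK):

```python
import os

# Hypothetical in-memory stand-in used only to illustrate the
# documented inject_env(overwrite=False) semantics.
class FakeKMS:
    def __init__(self, secrets):
        self._secrets = secrets  # {name: value}

    def inject_env(self, overwrite=False):
        for key, value in self._secrets.items():
            # By default, existing environment variables win.
            if overwrite or key not in os.environ:
                os.environ[key] = value

os.environ["DATABASE_URL"] = "postgres://local"
client = FakeKMS({"DATABASE_URL": "postgres://prod", "API_TOKEN": "t-123"})

client.inject_env()                # keeps the local DATABASE_URL
assert os.environ["DATABASE_URL"] == "postgres://local"
assert os.environ["API_TOKEN"] == "t-123"

client.inject_env(overwrite=True)  # now the KMS secrets take precedence
assert os.environ["DATABASE_URL"] == "postgres://prod"
```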
## Compatibility
This SDK is compatible with:
- Hanzo KMS (https://kms.hanzo.ai)
- Lux KMS (https://kms.lux.network)
- Infisical (https://infisical.com)
The `InfisicalClient` alias is provided for drop-in compatibility:
```python
from hanzo_kms import InfisicalClient # Same as KMSClient
```
## License
MIT License - see [LICENSE](LICENSE) for details.
| text/markdown | null | Hanzo AI <dev@hanzo.ai> | null | null | MIT | hanzo, kms, lux, secrets, security, vault | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.25.0",
"pydantic>=2.0.0",
"httpx[http2]>=0.25.0; extra == \"async\"",
"mypy>=1.10.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.5.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://kms.hanzo.ai",
"Documentation, https://docs.hanzo.ai/kms",
"Repository, https://github.com/hanzoai/python-sdk"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T06:20:06.542616 | hanzo_kms-1.1.0.tar.gz | 7,553 | 6d/78/c459ae92072e55d94e6b0718d927fb2801c06ea8ef8a5cacc417c237b385/hanzo_kms-1.1.0.tar.gz | source | sdist | null | false | d432a3f51adb70ddb253b0cf5dbb9b85 | 13242266012dcc2a1b48705b48704504409f21c13ec022c32385f952cf4b9f8b | 6d78c459ae92072e55d94e6b0718d927fb2801c06ea8ef8a5cacc417c237b385 | null | [] | 259 |
2.4 | fetchez | 0.4.2 | Geo-Spatial Data Fetching | # 🌍 Fetchez 🐄
**Fetch geospatial data with ease.**
*Fetchez Les Données*
[](https://github.com/continuous-dems/fetchez)
[](LICENSE)
[](https://www.python.org/)
[](https://badge.fury.io/py/fetchez)
[](https://cudem.zulip.org)
**Fetchez** is a lightweight, modular and highly extendable Python library and command-line tool designed to discover and retrieve geospatial data from a wide variety of public repositories. Originally part of the [CUDEM](https://github.com/continuous-dems/cudem) project, Fetchez is now a standalone tool capable of retrieving Bathymetry, Topography, Imagery, and Oceanographic data (and more!) from sources like NOAA, USGS, NASA, and the European Space Agency.
---
### ❓ Why Fetchez?
Geospatial data access is fragmented. You often need one script to scrape a website for tide stations, another to download LiDAR from an S3 bucket, and a third to parse a local directory of shapefiles.
**Fetchez unifies this chaos.**
* **One Command to Fetch Them All:** Whether you need bathymetry, topography, or water levels, the syntax is always the same: `fetchez [module] -R [region]`.
* **Streaming First:** Fetchez is built for the cloud-native era. It prefers streaming data through standard pipes over downloading massive archives to disk.
* **Plugin Architecture:** The core engine is lightweight and agnostic. Data sources are just Python plugins, making it trivial to add support for new APIs or proprietary internal servers without forking the main codebase.
* **Smart caching:** It handles the boring stuff like retries, caching, and checksum verification, so you can get back to the science.
## 🌎 Features
* One command to fetch data from 50+ different modules (SRTM, GMRT, NOAA NOS, USGS 3DEP, Copernicus, etc.).
* Built-in download management handles retries, resume-on-failure, authentication, and mirror switching automatically.
* Seamlessly mix disparate data types (e.g., fetch Stream Gauges (JSON), DEMs (GeoTIFF), and Coastlines (Shapefile) in one project).
* Define automated workflows (Hooks) (e.g., download -> unzip -> reproject -> grid) using Python-based Processing Hooks.
* Save complex processing chains (Presets) as simple reusable flags (e.g., fetchez ... --run-through-waffles).
* Includes "FRED" (Fetchez Remote Elevation Datalist) to index and query remote or local files spatially without hitting slow APIs or maintaining a database.
* Minimal dependencies (`requests`, `tqdm`, `lxml`). Optional `shapely` support for precise spatial filtering.
* Supports user-defined Data Modules *and* Processing Hooks via `~/.fetchez/`.
---
## 🧩 Where does Fetchez fit?
The geospatial ecosystem is full of powerful processing engines, translators, transformers, converters, and the like, but they all assume you already have the data ready to use. Fetchez fills the gap between the internet, your hard drive, and your workflow.
In short: Use Fetchez to get the data so you can crunch the data.
## 📦 Installation
**From Pip/PyPi**
```bash
pip install fetchez
```
**From Source:**
Download and install Git (if you have not already): [git installation](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
```bash
pip install git+https://github.com/continuous-dems/fetchez.git#egg=fetchez
```
Clone and install from source
```bash
git clone https://github.com/continuous-dems/fetchez.git
cd fetchez
pip install .
```
## 💻 CLI Usage
The primary command is `fetchez`.
### Basic Syntax
```bash
fetchez -R <region> <module> [options]
```
### Examples
* Fetch SRTM+ Data for a Bounding Box
```bash
# Region Format: West/East/South/North
fetchez -R -105.5/-104.5/39.5/40.5 srtm_plus
```
* Discover Data Sources
```bash
# View detailed metadata card for a module
fetchez --info gmrt
```
* Fetch Data Using a Place Name
```bash
# Automatically resolves "Boulder, CO" to a bounding box region
fetchez -R loc:"Boulder, CO" copernicus --datatype=1
```
* Advanced Data Pipelines (Hooks)
```bash
# Fetch data, automatically unzip it, and print the final filepath
fetchez -R loc:Miami charts --hook unzip --pipe-path
```
* List Available Modules
```bash
fetchez --modules
```
## 🐍 Python API
Fetchez is designed to be easily integrated into Python workflows.
### Simple Fetching
```python
import fetchez
# Define a region (West, East, South, North)
bbox = (-105.5, -104.5, 39.5, 40.5)
# Initialize a specific fetcher module
# Use the registry to load modules dynamically
SRTM = fetchez.registry.FetchezRegistry.load_module('srtm_plus')
# Configure and Run
fetcher = SRTM(src_region=bbox, verbose=True)
fetcher.run()
# Access Results (Metadata)
for result in fetcher.results:
print(f"Downloaded: {result['dst_fn']}")
print(f"Source URL: {result['url']}")
```
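The CLI's `-R` West/East/South/North string and the Python API's bbox tuple carry the same information, so converting between them is a one-liner each way. A minimal sketch — `parse_region` and `format_region` are hypothetical helpers, not part of the fetchez API:

```python
def parse_region(region: str) -> tuple[float, float, float, float]:
    """Parse a CLI-style West/East/South/North string into a bbox tuple."""
    west, east, south, north = (float(v) for v in region.split("/"))
    return (west, east, south, north)

def format_region(bbox) -> str:
    """Format a bbox tuple back into the -R string form."""
    return "/".join(str(v) for v in bbox)

bbox = parse_region("-105.5/-104.5/39.5/40.5")
assert bbox == (-105.5, -104.5, 39.5, 40.5)
assert format_region(bbox) == "-105.5/-104.5/39.5/40.5"
```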
### Data Discovery
Query the registry to find datasets that match your criteria programmatically.
```python
from fetchez.registry import FetchezRegistry
# Search for global bathymetry datasets
matches = FetchezRegistry.search_modules('bathymetry')
print(f"Found modules: {matches}")
# Get details for a specific module
meta = FetchezRegistry.get_info('copernicus')
print(f"Resolution: {meta.get('resolution')}")
print(f"License: {meta.get('license')}")
```
For modules that rely on file lists (like Copernicus or NCEI), you can interact directly with the local index.
```python
from fetchez import fred
# Load the local index
index = fred.FRED(name='copernicus')
# Search for datasets in a region
results = index.search(
region=(-10, 10, 40, 50),
where=["DataType = '3'"] # Filter for COP-10 (European) data
)
print(f"Found {len(results)} datasets.")
```
## 🪝 Processing Hooks
Fetchez includes a powerful Hook System that allows you to chain actions together. Hooks run in a pipeline, meaning the output of one hook (e.g. unzipping a file) becomes the input for the next (e.g. processing it).
### Common Built-in Hooks:
* `unzip`: Automatically extracts `.zip` files.
* `pipe`: Prints the final absolute path to stdout (useful for piping to GDAL/PDAL).
### Example:
```bash
# Download data.zip
# Extract data.tif (via unzip hook)
# Print /path/to/data.tif (via pipe hook)
fetchez charts --hook unzip --hook pipe
```
You can write your own custom hooks (e.g., to log downloads to a database or trigger a script) and drop them in ~/.fetchez/hooks/. See [CONTRIBUTING.md](https://github.com/continuous-dems/fetchez/blob/main/CONTRIBUTING.md) for details.
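The pipeline behavior described above — each hook receiving the previous hook's output — reduces to a simple fold over the hook list. This toy sketch (the `unzip` and `pipe` stand-ins are illustrative only; real hooks follow the plugin API in CONTRIBUTING.md) shows the shape:

```python
def run_hooks(path, hooks):
    """Feed the fetched file path through each hook in order."""
    for hook in hooks:
        path = hook(path)  # each hook's return value feeds the next hook
    return path

# Two toy hooks standing in for the built-in unzip and pipe hooks.
def unzip(path):
    assert path.endswith(".zip")
    return path[:-4] + ".tif"   # pretend we extracted data.tif

def pipe(path):
    print(path)                 # print the final path for downstream tools
    return path

final = run_hooks("data.zip", [unzip, pipe])
assert final == "data.tif"
```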
## 🔗 Pipeline Presets (Macros)
Tired of typing the same chain of hooks every time? Presets allow you to define reusable workflow macros.
Instead of running this long command:
```bash
fetchez copernicus --hook checksum:algo=sha256 --hook enrich --hook audit:file=log.json
```
You can simply run:
```bash
fetchez copernicus --audit-full
```
### How to use them
Fetchez comes with a few built-in shortcuts (check fetchez --help to see them), but the real power comes from defining your own.
* Initialize your config: Run this command to generate a starter configuration file at `~/.fetchez/presets.json`:
```bash
fetchez --init-presets
```
* Define your workflow: Edit the JSON file to create a named preset. A preset is just a list of hooks with arguments.
```json
{
  "my-clean-workflow": {
    "help": "Unzip files and immediately remove the zip archive.",
    "hooks": [
      {"name": "unzip", "args": {"remove": "true"}},
      {"name": "pipe"}
    ]
  }
}
```
* Run it: Your new preset automatically appears as a CLI flag!
```bash
fetchez charts --my-clean-workflow
```
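Conceptually, a preset flag is just shorthand for its hook list. The expansion into equivalent `--hook name:key=value` arguments (the syntax shown in the `--audit-full` example above) can be sketched like this — `expand_preset` is a hypothetical helper, not the actual fetchez internals:

```python
import json

# The same preset definition as in the presets.json example above.
presets = json.loads("""
{
  "my-clean-workflow": {
    "help": "Unzip files and immediately remove the zip archive.",
    "hooks": [
      {"name": "unzip", "args": {"remove": "true"}},
      {"name": "pipe"}
    ]
  }
}
""")

def expand_preset(name):
    """Turn a preset name into the equivalent --hook CLI arguments."""
    args = []
    for hook in presets[name]["hooks"]:
        spec = hook["name"]
        if hook.get("args"):
            spec += ":" + ",".join(f"{k}={v}" for k, v in hook["args"].items())
        args += ["--hook", spec]
    return args

assert expand_preset("my-clean-workflow") == [
    "--hook", "unzip:remove=true", "--hook", "pipe"
]
```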
## 🗺 Supported Data Sources
Fetchez supports over 50 modules categorized by data type. Run `fetchez --modules` to see the full list.
| Category | Example Modules |
|----|----|
| Topography | srtm_plus, copernicus, nasadem, tnm (USGS), arcticdem |
| Bathymetry | gmrt, emodnet, gebco, multibeam, nos_hydro |
| Oceanography | tides, buoys, mur_sst |
| Reference | osm (OpenStreetMap), vdatum |
| Generic | http (Direct URL), earthdata (NASA) |
## 🛟 Module-Specific Dependencies
Fetchez is designed to be lightweight. The core installation only includes what is strictly necessary to run the engine.
However, some data modules require extra libraries to function (e.g., `boto3` for AWS data, `pyshp` for Shapefiles). You can install these "Extras" automatically using pip:
```bash
# Install support for AWS-based modules (BlueTopo, etc.)
pip install "fetchez[aws]"
# Install support for Vector processing (Shapefiles, etc.)
pip install "fetchez[vector]"
# Install ALL optional dependencies
pip install "fetchez[full]"
```
If you try to run a module without its required dependency, Fetchez will exit with a helpful error message telling you exactly which extra group to install.
## 🐄 Plugins, Hooks & Extensions
Need to fetch data from a specialized local server? Or maybe run a custom script immediately after every download? You don't need to fork the repo!
**Fetchez** is designed to be extendable in two ways:
* **Data Modules** (`~/.fetchez/plugins/`): Add new data sources or APIs.
* **Processing Hooks** (`~/.fetchez/hooks/`): Add new post-processing steps (unzip, convert, log).
Drop your Python scripts into these configuration folders, and they will be automatically registered as commands.
**Quick Start:**
1. Create the folder: `mkdir ~/.fetchez/plugins`
2. Drop a python script there (e.g., `my_data.py`).
3. Run it: `fetchez my_data`
See [CONTRIBUTING.md](https://github.com/continuous-dems/fetchez/blob/main/CONTRIBUTING.md) for a full code example.
## 🛠 Contributing
We welcome contributions! Please see [CONTRIBUTING.md](https://github.com/continuous-dems/fetchez/blob/main/CONTRIBUTING.md) for details on how to register new modules or hooks with our metadata schema.
## 🔱 Disclaimer on Data Persistence
We provide the tools to locate and download data from authoritative public repositories, but we do not host the data ourselves.
Government agencies reorganize websites, migrate APIs (e.g., WCS 1.0 to 2.0), or decommission servers without notice. A module that fetches perfectly today may encounter a 404 tomorrow.
Source datasets are frequently updated, reprocessed, or removed by their custodians. The "best available" data for a region can change overnight.
Remote servers (like NOAA NCEI, USGS, or Copernicus) may experience downtime, throttling, or rate limits that are entirely outside our control.
We strive to keep our modules robust and our index fresh. If you encounter a broken fetch or a changed endpoint, please open an issue. This helps the whole community keep up with the changes!
## ⚖ License
This project is licensed under the MIT License - see the [LICENSE](https://github.com/continuous-dems/fetchez/blob/main/LICENSE) file for details.
Copyright (c) 2010-2026 Regents of the University of Colorado
| text/markdown | null | Matthew Love <matthew.love@colorado.edu>, Christopher Amante <christopher.amante@colorado.edu>, Elliot Lim <elliot.lim@colorado.edu>, Michael MacFerrin <michael.macferrin@colorado.edu> | null | Matthew Love <matthew.love@colorado.edu> | MIT License
Copyright (c) 2010-2026 Regents of the University of Colorado
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | Geospatial | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: GIS"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"lxml",
"pyyaml",
"requests[security]",
"shapely",
"tqdm",
"boto3; extra == \"aws\"",
"mercantile; extra == \"bing\"",
"earthaccess>=0.9.0; extra == \"earthdata\"",
"boto3; extra == \"full\"",
"earthaccess>=0.9.0; extra == \"full\"",
"mercantile; extra == \"full\"",
"pyproj; extra == \"full\"",
"pyshp; extra == \"full\"",
"pystac; extra == \"full\"",
"pystac-client; extra == \"full\"",
"pystac; extra == \"stac\"",
"pystac-client; extra == \"stac\"",
"pyproj; extra == \"vector\"",
"pyshp; extra == \"vector\""
] | [] | [] | [] | [
"Homepage, https://github.com/ciresdem/fetchez",
"Issues, https://github.com/ciresdem/fetchez/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:18:23.685090 | fetchez-0.4.2.tar.gz | 1,376,845 | ca/39/49aa63efd063756cef8952ac50e23fe44c23e411dbacc8ef44e1921a2f6b/fetchez-0.4.2.tar.gz | source | sdist | null | false | 6c926b1d6f9ed41cc2bd7556851115e5 | f47e2424000e6406cf0d38c95920cf22d639de2513d414e31e2b65f505e1cb1b | ca3949aa63efd063756cef8952ac50e23fe44c23e411dbacc8ef44e1921a2f6b | null | [
"AUTHORS.md",
"LICENSE"
] | 249 |
2.4 | airut | 0.13.1 | Sandboxed Claude Code over email and Slack | # Airut
Sandboxed Claude Code over email and Slack. Named "Airut" (Finnish:
herald/messenger).
Send a message — email or Slack — with instructions, and get results back in the
same thread. Starting a new task is as simple as starting a new conversation.
Airut handles everything behind the scenes: workspace creation, container
isolation, network sandboxing, session persistence, and cleanup.
Self-hosted: your code and conversations never leave your infrastructure.
```
You → Email/Slack → Airut → Claude Code (container) → PR → Reply → You
```
## Key Features
- **Zero-friction tasking** — send a message to start a task. No workspace
setup, no session management, no cleanup.
- **Defense-in-depth sandboxing** — container isolation, network allowlist via
proxy, and credential masking limit blast radius when agents run with full
autonomy.
- **Conversation persistence** — reply to continue where you left off. Claude
Code session context is maintained across messages.
- **Task-to-PR foundation** — combined with proper repo configuration
(`CLAUDE.md`, CI tooling, branch protection), enables end-to-end autonomous
workflows where agents push PRs for human review.
- **Email and Slack channels** — authenticate via DMARC (email) or workspace
membership (Slack), with sender authorization per repo.
- **Web dashboard** — monitor running tasks and view network activity logs.
## Quick Start
### Prerequisites
- Linux (dedicated VM recommended, Debian 13 tested)
- [uv](https://docs.astral.sh/uv/), Git, and Podman (rootless)
- At least one channel per repository:
- **Email**: Dedicated email account with IMAP/SMTP access
- **Slack**: Slack workspace with app installation permissions
### Install
```bash
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install Airut from PyPI
uv tool install airut
```
Or install the latest development version from main:
```bash
uv tool install airut --from git+https://github.com/airutorg/airut.git
```
### Configure
```bash
# Generate initial config at ~/.config/airut/airut.yaml
airut init
# Validate config and system dependencies
airut check
```
### Deploy
```bash
# Install and start the systemd service
airut install-service
# Verify it's running
airut check
```
### Update
```bash
airut update
```
## How It Works
Each conversation runs in an isolated container with its own git workspace,
Claude Code session, and sandboxed network. The recommended workflow has agents
push PRs for your review — you review, leave comments, and reply to iterate.
```
You: "Add user authentication"
↓
Agent: works → pushes PR → replies with PR link
↓
You: review PR, leave comments
↓
You: reply "Address the review comments"
↓
Agent: reads comments → fixes → updates PR → replies
↓
You: approve and merge
```
## Sandbox Library
The `airut.sandbox` module is a standalone library for safe containerized
execution of headless Claude Code. It can be used independently of the gateway
to run Claude Code in isolated containers from any Python application — CI
pipelines, automation scripts, custom integrations, or your own agent
orchestrator.
**Core capabilities:**
- **Container lifecycle** — two-layer image build, execution, and cleanup via
Podman or Docker
- **Network isolation** — transparent DNS-spoofing proxy enforcing a domain
allowlist, with no `HTTP_PROXY` env vars or `iptables` rules needed
- **Secret masking** — surrogate credential injection so real secrets never
reach the container, with proxy-side replacement on egress
- **Event streaming** — append-only log of Claude's streaming JSON output, safe
for concurrent reads during execution
- **Outcome classification** — typed `Outcome` enum (success, timeout,
prompt-too-long, session-corrupted, container-failed) so callers match on
outcomes instead of parsing strings
**Quick example:**
```python
from airut.sandbox import Sandbox, SandboxConfig, Mount, ContainerEnv, Outcome
sandbox = Sandbox(SandboxConfig())
sandbox.startup()
image = sandbox.ensure_image(dockerfile, context_files)
task = sandbox.create_task(
execution_context_id="my-run-1",
execution_context_dir=run_dir,
image_tag=image,
mounts=[Mount(host_path=repo, container_path="/workspace")],
env=ContainerEnv(variables={"ANTHROPIC_API_KEY": key}),
timeout_seconds=600,
)
result = task.execute("Fix the failing tests")
if result.outcome == Outcome.SUCCESS:
print(result.response_text)
sandbox.shutdown()
```
See the
[sandbox spec](https://github.com/airutorg/airut/blob/main/spec/sandbox.md) for
full architecture details and API reference.
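The surrogate-credential idea behind secret masking is simple to illustrate: the container only ever sees a placeholder, and the egress proxy swaps in the real value on the way out. A minimal sketch of that scheme — this is not the actual airut implementation, just the idea:

```python
import secrets

class SecretMasker:
    """Map real credentials to surrogates; swap them back on egress."""

    def __init__(self):
        self._real_by_surrogate = {}

    def mask(self, real_value):
        # The container is handed this surrogate instead of the secret.
        surrogate = "MASKED-" + secrets.token_hex(8)
        self._real_by_surrogate[surrogate] = real_value
        return surrogate

    def unmask(self, outbound_text):
        # Proxy-side replacement before the request leaves the sandbox.
        for surrogate, real in self._real_by_surrogate.items():
            outbound_text = outbound_text.replace(surrogate, real)
        return outbound_text

masker = SecretMasker()
fake_key = masker.mask("sk-real-api-key")
request = f"Authorization: Bearer {fake_key}"
assert "sk-real-api-key" not in request              # container never saw it
assert "sk-real-api-key" in masker.unmask(request)   # proxy restores it
```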
## Documentation
Full documentation is available on GitHub:
- [Deployment Guide](https://github.com/airutorg/airut/blob/main/doc/deployment.md)
— installation, configuration, and service management
- [Architecture](https://github.com/airutorg/airut/blob/main/doc/architecture.md)
— system architecture and data flow
- [Security Model](https://github.com/airutorg/airut/blob/main/doc/security.md)
— channel auth, container isolation, credential handling
- [Repo Onboarding](https://github.com/airutorg/airut/blob/main/doc/repo-onboarding.md)
— how to onboard a new repository
- [Agentic Operation](https://github.com/airutorg/airut/blob/main/doc/agentic-operation.md)
— message-to-PR workflow patterns
## Links
- [GitHub Repository](https://github.com/airutorg/airut)
- [Full README](https://github.com/airutorg/airut#readme)
## License
MIT License. See [LICENSE](https://github.com/airutorg/airut/blob/main/LICENSE)
for details.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.15,>=3.13 | [] | [] | [] | [
"msal>=1.31",
"packaging>=21.0",
"platformdirs>=4.0",
"python-dotenv>=1.0",
"pyyaml>=6.0.3",
"slack-bolt>=1.20",
"werkzeug>=3.1.5"
] | [] | [] | [] | [
"Homepage, https://github.com/airutorg/airut",
"Repository, https://github.com/airutorg/airut",
"Documentation, https://github.com/airutorg/airut/tree/main/doc"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:18:17.062765 | airut-0.13.1.tar.gz | 962,869 | 23/ad/fbec66702027f0b62bc7390962b546bf696c739ac4ef49b4dce7b25ea14e/airut-0.13.1.tar.gz | source | sdist | null | false | 0f82b9d6211c2b3584140a69b492c95a | a620cdd96c708a35248703f5cbe443950cdc87f3e2339921f08c0da2ee58b938 | 23adfbec66702027f0b62bc7390962b546bf696c739ac4ef49b4dce7b25ea14e | MIT | [
"LICENSE"
] | 221 |
2.4 | refurb | 2.3.0 | A tool for refurbishing and modernizing Python codebases | # Refurb
A tool for refurbishing and modernizing Python codebases.
## Example
```python
# main.py
for filename in ["file1.txt", "file2.txt"]:
with open(filename) as f:
contents = f.read()
lines = contents.splitlines()
for line in lines:
if not line or line.startswith("# ") or line.startswith("// "):
continue
for word in line.split():
print(f"[{word}]", end="")
print("")
```
Running:
```
$ refurb main.py
main.py:3:17 [FURB109]: Use `in (x, y, z)` instead of `in [x, y, z]`
main.py:4:5 [FURB101]: Use `y = Path(x).read_text()` instead of `with open(x, ...) as f: y = f.read()`
main.py:10:40 [FURB102]: Replace `x.startswith(y) or x.startswith(z)` with `x.startswith((y, z))`
main.py:16:9 [FURB105]: Use `print()` instead of `print("")`
```
## Installing
```
$ pipx install refurb
$ refurb file.py folder/
```
> **Note**
> Refurb must be run on Python 3.10+, though it can check Python 3.7+ code by setting the `--python-version` flag.
## Explanations For Checks
You can use `refurb --explain FURB123`, where `FURB123` is the error code you are trying to look up.
For example:
````
$ refurb --explain FURB123
Don't cast a variable or literal if it is already of that type. For
example:
Bad:
```
name = str("bob")
num = int(123)
```
Good:
```
name = "bob"
num = 123
```
````
An online list of all available checks can be viewed [here](./docs/checks.md).
## Ignoring Errors
Use `--ignore 123` to ignore error 123. The error code can be in the form `FURB123` or `123`.
This flag can be repeated.
> The `FURB` prefix indicates that this is a built-in error. The `FURB` prefix is optional,
> but for all other errors (i.e., `ABC123`), the prefix is required.
You can also use inline comments to disable errors:
```python
x = int(0) # noqa: FURB123
y = list() # noqa
```
Here, `noqa: FURB123` specifically ignores the FURB123 error for that line, and `noqa` ignores
all errors on that line.
You can also specify multiple errors to ignore by separating them with a comma/space:
```python
x = not not int(0) # noqa: FURB114, FURB123
x = not not int(0) # noqa: FURB114 FURB123
```
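The inline-comment rules above — a bare `noqa` ignores everything on the line, while `noqa: A, B` or `noqa: A B` ignores only those codes — can be sketched with a small parser. This is an illustrative sketch, not Refurb's actual implementation:

```python
import re

def noqa_codes(line):
    """Return None (no noqa), "all" (bare noqa), or the set of ignored codes."""
    match = re.search(r"#\s*noqa(?::\s*([A-Z0-9, ]+))?", line)
    if not match:
        return None
    if match.group(1) is None:
        return "all"                       # bare noqa: ignore every error
    # Codes may be separated by commas, spaces, or both.
    return set(re.split(r"[,\s]+", match.group(1).strip()))

assert noqa_codes("x = 1") is None
assert noqa_codes("y = list()  # noqa") == "all"
assert noqa_codes("x = int(0)  # noqa: FURB114, FURB123") == {"FURB114", "FURB123"}
assert noqa_codes("x = int(0)  # noqa: FURB114 FURB123") == {"FURB114", "FURB123"}
```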
## Enabling/Disabling Checks
Certain checks are disabled by default, and need to be enabled first. You can do this using the
`--enable ERR` flag, where `ERR` is the error code of the check you want to enable. A disabled
check differs from an ignored check in that a disabled check will never be loaded, whereas an
ignored check will be loaded, an error will be emitted, and the error will be suppressed.
Use the `--verbose`/`-v` flag to get a full list of enabled checks.
The opposite of `--enable` is `--disable`, which will disable a check. When `--enable` and `--disable`
are both specified via the command line, whichever one comes last will take precedence. When using
`enable` and `disable` via the config file, `disable` will always take precedence.
Use the `--disable-all` flag to disable all checks. This allows you to incrementally `--enable` checks
as you see fit, as opposed to adding a bunch of `--ignore` flags. To use this in the config file,
set `disable_all` to `true`.
Use the `--enable-all` flag to enable all checks by default. This allows you to opt into all checks
that Refurb (and Refurb plugins) have to offer. This is a good option for new codebases. To use this
in a config file, set `enable_all` to `true`.
In the config file, `disable_all`/`enable_all` is applied first, and then the `enable` and `disable`
fields are applied afterwards.
> Note that `disable_all` and `enable_all` are mutually exclusive, both on the command line and in
> the config file. You will get an error if you try to specify both.
You can also disable checks by category using the `#category` syntax. For example, `--disable "#readability"`
will disable all checks with the `readability` category. The same applies for `enable` and `ignore`.
Also, if you disable an entire category you can still explicitly re-enable a check in that category.
> Note that `#readability` is wrapped in quotes because your shell will interpret the `#` as the
> start of a comment.
## Setting Python Version
Use the `--python-version` flag to tell Refurb which version of Python your codebase is using. This
should allow for better detection of language features, and allow for better error messages. The argument
for this flag must be in the form `x.y`, for example, `3.10`.
The syntax for using this in the config file is `python_version = "3.10"`.
When the Python version is unspecified, Refurb uses whatever version your local Python installation uses.
For example, if your `python --version` is `3.11.5`, Refurb uses `3.11`, dropping the `5` patch version.
## Changing Output Formats
By default everything is outputted as plain text:
```
file.py:1:5 [FURB123]: Replace `int(x)` with `x`
```
Here are all of the available formats:
* `text`: The default
* `github`: Print output for use with [GitHub Annotations](https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions)
* More to come!
To change the default format use `--format XYZ` on the command line, or `format = "XYZ"` in the config file.
## Changing Sort Order
By default errors are sorted by filename, then by error code. To change this, use the `--sort XYZ` flag on
the command line, or `sort_by = "XYZ"` in the config file, where `XYZ` is one of the following sort modes:
* `filename`: Sort files in alphabetical order (the default)
* `error`: Sort by error first, then by filename
## Overriding Mypy Flags
This is typically used for development purposes, but can also be used to better fine-tune Mypy from
within Refurb. Any command line arguments after `--` are passed to Mypy. For example:
```
$ refurb files -- --show-traceback
```
This tells Mypy to show a traceback if it crashes.
You can also use this in the config file by assigning an array of values to the `mypy_args` field.
Note that any Mypy arguments passed via the command line arguments will override the `mypy_args`
field in the config file.
## Configuring Refurb
In addition to the command line arguments, you can also add your settings in the `pyproject.toml` file.
For example, the following command line arguments:
```
refurb file.py --ignore 100 --load some_module --quiet
```
Corresponds to the following in your `pyproject.toml` file:
```toml
[tool.refurb]
ignore = [100]
load = ["some_module"]
quiet = true
```
Now all you need to type is `refurb file.py`!
Note that the values in the config file will be merged with the values specified via the
command line. In the case of boolean arguments like `--quiet`, the command line arguments
take precedence. All other arguments (such as `ignore` and `load`) will be combined.
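The merge semantics described above — booleans from the command line win outright, while list-valued settings such as `ignore` and `load` are combined — might look like this (a hypothetical sketch of the rule, not Refurb's internals):

```python
def merge_settings(config, cli):
    """Merge pyproject settings with CLI flags per the rules above."""
    merged = dict(config)
    for key, value in cli.items():
        if isinstance(value, bool):
            merged[key] = value                        # CLI boolean wins
        elif isinstance(value, list):
            merged[key] = config.get(key, []) + value  # lists are combined
        else:
            merged[key] = value
    return merged

config = {"ignore": [100], "load": ["some_module"], "quiet": True}
cli = {"ignore": [123], "quiet": False}
merged = merge_settings(config, cli)
assert merged == {"ignore": [100, 123], "load": ["some_module"], "quiet": False}
```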
You can use the `--config-file` flag to tell Refurb to use a different config file from the
default `pyproject.toml` file. Note that it still must be in the same form as the normal
`pyproject.toml` file.
Click [here](./docs/configs/README.md) to see some example config files.
### Ignore Checks Per File/Folder
If you have a large codebase you might want to ignore errors for certain files or folders,
which allows you to incrementally fix errors as you see fit. To do that, add the following
to your `pyproject.toml` file:
```toml
# these settings will be applied globally
[tool.refurb]
enable_all = true
# these will only be applied to the "src" folder
[[tool.refurb.amend]]
path = "src"
ignore = ["FURB123", "FURB120"]
# these will only be applied to the "src/util.py" file
[[tool.refurb.amend]]
path = "src/util.py"
ignore = ["FURB125", "FURB148"]
```
> Note that only the `ignore` field is available in the `amend` sections. This is because
> a check can only be enabled/disabled for the entire codebase, and cannot be selectively
> enabled/disabled on a per-file basis. Assuming a check is enabled though, you can simply
> `ignore` the errors for the files of your choosing.
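Conceptually, the ignore set for a given file is the global `ignore` list plus the ignores from every `amend` section whose `path` covers that file. A sketch of that resolution (illustrative only, not Refurb's actual code):

```python
from pathlib import PurePosixPath

def ignores_for(filename, global_ignore, amends):
    """Collect ignored codes: global ones plus matching amend sections."""
    ignored = set(global_ignore)
    path = PurePosixPath(filename)
    for amend in amends:
        target = PurePosixPath(amend["path"])
        # An amend applies to the exact file or anything under its folder.
        if path == target or target in path.parents:
            ignored |= set(amend["ignore"])
    return ignored

# The amend sections from the pyproject.toml example above.
amends = [
    {"path": "src", "ignore": ["FURB123", "FURB120"]},
    {"path": "src/util.py", "ignore": ["FURB125", "FURB148"]},
]
assert ignores_for("src/util.py", [], amends) == {
    "FURB123", "FURB120", "FURB125", "FURB148"
}
assert ignores_for("main.py", ["FURB100"], amends) == {"FURB100"}
```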
## Using Refurb With `pre-commit`
You can use Refurb with [pre-commit](https://pre-commit.com/) by adding the following
to your `.pre-commit-config.yaml` file:
```yaml
- repo: https://github.com/dosisod/refurb
rev: REVISION
hooks:
- id: refurb
```
Replacing `REVISION` with a version or SHA of your choosing (or leave it blank to
let `pre-commit` find the most recent one for you).
## Plugins
Installing plugins for Refurb is very easy:
```
$ pip install refurb-plugin-example
```
Where `refurb-plugin-example` is the name of the plugin. Refurb will automatically load
any installed plugins.
To make your own Refurb plugin, see the [`refurb-plugin-example` repository](https://github.com/dosisod/refurb-plugin-example)
for more info.
## Writing Your Own Check
If you want to extend Refurb but don't want to make a full-fledged plugin,
you can easily create a one-off check file with the `refurb gen` command.
> Note that this command uses the `fzf` fuzzy-finder for getting user input,
> so you will need to [install fzf](https://github.com/junegunn/fzf#installation) before continuing.
Here is the basic overview for creating a new check using the `refurb gen` command:
1. First select the node type you want to accept
2. Then type in where you want to save the auto generated file
3. Add your code to the new file
To get an idea of what you need to add to your check, use the `--debug` flag to see the
AST representation for a given file (i.e., `refurb --debug file.py`). Take a look at the
files in the `refurb/checks/` folder for some examples.
Then, to load your new check, use `refurb file.py --load your.path.here`
> Note that when using `--load`, you need to use dots in your argument, just like
> importing a normal python module. If `your.path.here` is a directory, all checks
> in that directory will be loaded. If it is a file, only that file will be loaded.
## Troubleshooting
If Refurb is running slow, use the `--timing-stats` flag to diagnose why:
```
$ refurb file --timing-stats /tmp/stats.json
```
This will output a JSON file with the following information:
* Total time Mypy took to parse the modules (a majority of the time usually).
* Time Mypy spent parsing each module. Useful for finding very large/unused files.
* Time Refurb spent checking each module. These numbers should be very small (less than 100ms).
Larger files naturally take longer to check, but files that take far too long should be
investigated, as an issue might only manifest itself once a file reaches a certain size.
## Disable Color
Color output is enabled by default in Refurb. To disable it, do one of the following:
* Set the `NO_COLOR` env var.
* Use the `--no-color` flag.
* Set `color = false` in the config file.
* Pipe/redirect Refurb output to another program or file.
## Developing / Contributing
### Setup
To set up locally, run:
```
$ git clone https://github.com/dosisod/refurb
$ cd refurb
$ make install
```
Tests can be run all at once using `make`, or you can run each tool on its own using
`make black`, `make flake8`, and so on.
Unit tests can be run with `pytest` or `make test`.
> Since the end-to-end (e2e) tests are slow, they are not run when running `make`.
> You will need to run `make test-e2e` to run them.
### Updating Documentation
We encourage people to update the documentation when they see typos and other issues!
With that in mind though, don't directly modify the `docs/checks.md` file: it is auto-generated
and will be overwritten when new checks are added. The documentation for checks can be updated
by changing the docstrings of the checks themselves. For example, to update `FURB100`,
change the docstring of the `ErrorInfo` class in the `refurb/checks/pathlib/with_suffix.py` file.
You can find the file for a given check by grep-ing for `code = XYZ`, where `XYZ` is the check
you are looking for but with the `FURB` prefix removed.
Alternatively, use the `--verbose` flag with `--explain` to find the filename for a given check. For example:
```
$ refurb --explain FURB123 --verbose
Filename: refurb/checks/readability/no_unnecessary_cast.py
FURB123: no-redundant-cast [readability]
...
```
## Why Does This Exist?
I love doing code reviews: I like taking something and making it better, faster, more
elegant, and so on. Lots of static analysis tools already exist, but none of them seem
to be focused on making code more elegant, more readable, or more modern. That is where
Refurb comes in.
Refurb is heavily inspired by [clippy](https://rust-lang.github.io/rust-clippy/master/index.html),
the built-in linter for Rust.
## What Refurb Is Not
Refurb is not a style/type checker. It is not meant as a first line of defense for
linting and finding bugs; it is meant for making good code even better.
## Comparison To Other Tools
There are already lots of tools out there for linting and analyzing Python code, so
you might be wondering why Refurb exists (skepticism is good!). As mentioned above,
Refurb checks for code which can be made more elegant, something that no other linters
(that I have found) specialize in. Here is a list of similar linters and analyzers,
and how they differ from Refurb:
[Black](https://github.com/psf/black) is focused on the formatting and
styling of code (line length, trailing commas, indentation, and so on). It
does a really good job of making projects that use Black look more or less
the same. It doesn't do more complex things such as type checking or code
smell/anti-pattern detection.
[flake8](https://github.com/pycqa/flake8) is also a linter. It is very extensible
and performs a lot of semantic analysis-related checks as well, such as "unused
variable", "break outside of a loop", and so on. It also checks PEP 8
conformance. Refurb won't try to replace flake8, because chances are you
are already using it anyway.
[Pylint](https://github.com/PyCQA/pylint) has [a lot of checks](https://pylint.pycqa.org/en/latest/user_guide/messages/messages_overview.html)
which cover a lot of ground, but in general they are focused on bad or buggy
code, things you probably didn't mean to do. Refurb assumes that you
know what you are doing, and will try to clean up what is already there as best
it can.
[Mypy](https://github.com/python/mypy), [Pyright](https://github.com/Microsoft/pyright),
[Pyre](https://github.com/facebook/pyre-check), and [Pytype](https://github.com/google/pytype)
are all type checkers: they enforce types, ensure arguments match, check that
functions are called in a type-safe manner, and so on. They do much more than that, but
that is the general idea. Refurb is actually built on top of Mypy, and uses its AST
parser so that it gets good type information.
[pyupgrade](https://github.com/asottile/pyupgrade): Pyupgrade has a lot of good
checks for upgrading your older Python code to the newer syntax, which is really
useful. Where Refurb differs is that Pyupgrade is more focused on upgrading your
code to the newer version, whereas Refurb is more focused on cleaning up and
simplifying what is already there.
In conclusion, Refurb doesn't want you to throw out your old tools, since
they cover different areas of your code, and all serve a different purpose.
Refurb is meant to be used in conjunction with the above tools.
| text/markdown | dosisod | null | null | null | GPL-3.0-only | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Testing",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mypy!=1.11.0,>=1.10.0",
"tomli<3.0.0,>=2.0.1; python_version < \"3.11\""
] | [] | [] | [] | [
"Repository, https://github.com/dosisod/refurb"
] | poetry/2.3.2 CPython/3.12.3 Linux/6.11.0-1018-azure | 2026-02-21T06:16:43.788533 | refurb-2.3.0-py3-none-any.whl | 141,401 | 4c/13/778172b8df831eaa7ba5f0299be1d3705cce74bcd3dd1fdc7cccf002fcc3/refurb-2.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | a5a7cbe925359741d7e591ffb99f5260 | 5ae83b9dd84dca07d8c9b8be207329d2adfaf95c7c8ff0594e9fbd1cbf1841ec | 4c13778172b8df831eaa7ba5f0299be1d3705cce74bcd3dd1fdc7cccf002fcc3 | null | [
"LICENSE"
] | 1,057 |
2.4 | pyg-nightly | 2.8.0.dev20260221 | Graph Neural Network Library for PyTorch | <p align="center">
<img height="150" src="https://raw.githubusercontent.com/pyg-team/pyg_sphinx_theme/master/pyg_sphinx_theme/static/img/pyg_logo_text.svg?sanitize=true" />
</p>
______________________________________________________________________
<div align="center">
[![PyPI Version][pypi-image]][pypi-url]
[![PyPI Download][pypi-download-image]][pypi-download-url]
[![Slack][slack-image]][slack-url]
[![Contributing][contributing-image]][contributing-url]
**[Documentation](https://pytorch-geometric.readthedocs.io)** |
**[PyG 1.0 Paper](https://arxiv.org/abs/1903.02428)** |
**[PyG 2.0 Paper](https://arxiv.org/abs/2507.16991)** |
**[Colab Notebooks](https://pytorch-geometric.readthedocs.io/en/latest/get_started/colabs.html)** |
**[External Resources](https://pytorch-geometric.readthedocs.io/en/latest/external/resources.html)** |
**[OGB Examples](https://github.com/snap-stanford/ogb/tree/master/examples)**
</div>
**PyG** *(PyTorch Geometric)* is a library built upon [PyTorch](https://pytorch.org/) to easily write and train Graph Neural Networks (GNNs) for a wide range of applications related to structured data.
It consists of various methods for deep learning on graphs and other irregular structures, also known as *[geometric deep learning](http://geometricdeeplearning.com/)*, from a variety of published papers.
In addition, it consists of easy-to-use mini-batch loaders for operating on many small and single giant graphs, [multi GPU-support](https://github.com/pyg-team/pytorch_geometric/tree/master/examples/multi_gpu), [`torch.compile`](https://pytorch-geometric.readthedocs.io/en/latest/advanced/compile.html) support, [`DataPipe`](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/datapipe.py) support, a large number of common benchmark datasets (based on simple interfaces to create your own), and helpful transforms, both for learning on arbitrary graphs as well as on 3D meshes or point clouds.
**[Click here to join our Slack community!][slack-url]**
<p align="center">
<a href="https://medium.com/stanford-cs224w"><img style="max-width: 941px" src="https://data.pyg.org/img/cs224w_tutorials.png" /></a>
</p>
______________________________________________________________________
- [Library Highlights](#library-highlights)
- [Quick Tour for New Users](#quick-tour-for-new-users)
- [Architecture Overview](#architecture-overview)
- [Implemented GNN Models](#implemented-gnn-models)
- [Installation](#installation)
## Library Highlights
Whether you are a machine learning researcher or first-time user of machine learning toolkits, here are some reasons to try out PyG for machine learning on graph-structured data.
- **Easy-to-use and unified API**:
All it takes is 10-20 lines of code to get started with training a GNN model (see the next section for a [quick tour](#quick-tour-for-new-users)).
PyG is *PyTorch-on-the-rocks*: It utilizes a tensor-centric API and keeps design principles close to vanilla PyTorch.
If you are already familiar with PyTorch, utilizing PyG is straightforward.
- **Comprehensive and well-maintained GNN models**:
Most of the state-of-the-art Graph Neural Network architectures have been implemented by library developers or authors of research papers and are ready to be applied.
- **Great flexibility**:
Existing PyG models can easily be extended for conducting your own research with GNNs.
Making modifications to existing models or creating new architectures is simple, thanks to its easy-to-use message passing API, and a variety of operators and utility functions.
- **Large-scale real-world GNN models**:
We focus on the need of GNN applications in challenging real-world scenarios, and support learning on diverse types of graphs, including but not limited to: scalable GNNs for graphs with millions of nodes; dynamic GNNs for node predictions over time; heterogeneous GNNs with multiple node types and edge types.
## Quick Tour for New Users
In this quick tour, we highlight the ease of creating and training a GNN model with only a few lines of code.
### Train your own GNN model
In the first glimpse of PyG, we implement the training of a GNN for classifying papers in a citation graph.
For this, we load the [Cora](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.datasets.Planetoid.html) dataset, and create a simple 2-layer GCN model using the pre-defined [`GCNConv`](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GCNConv.html):
```python
import torch
from torch import Tensor
from torch_geometric.nn import GCNConv
from torch_geometric.datasets import Planetoid
dataset = Planetoid(root='.', name='Cora')
class GCN(torch.nn.Module):
def __init__(self, in_channels, hidden_channels, out_channels):
super().__init__()
self.conv1 = GCNConv(in_channels, hidden_channels)
self.conv2 = GCNConv(hidden_channels, out_channels)
def forward(self, x: Tensor, edge_index: Tensor) -> Tensor:
# x: Node feature matrix of shape [num_nodes, in_channels]
# edge_index: Graph connectivity matrix of shape [2, num_edges]
x = self.conv1(x, edge_index).relu()
x = self.conv2(x, edge_index)
return x
model = GCN(dataset.num_features, 16, dataset.num_classes)
```
<details>
<summary>We can now optimize the model in a training loop, similar to the <a href="https://pytorch.org/tutorials/beginner/basics/optimization_tutorial.html#full-implementation">standard PyTorch training procedure</a>.</summary>
```python
import torch.nn.functional as F
data = dataset[0]
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(200):
pred = model(data.x, data.edge_index)
loss = F.cross_entropy(pred[data.train_mask], data.y[data.train_mask])
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
</details>
More information about evaluating final model performance can be found in the corresponding [example](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn.py).
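As a taste of what that example does, the standard evaluation pattern computes argmax predictions over the logits and accuracy restricted to a node mask. The tensors below are synthetic stand-ins for the model output, labels (`data.y`), and `data.test_mask` (a sketch, not the example's exact code):

```python
import torch

# Synthetic stand-ins: 4 nodes, 2 classes.
logits = torch.tensor([[2.0, 0.1],   # would come from model(data.x, data.edge_index)
                       [0.2, 1.5],
                       [3.0, 0.3],
                       [0.9, 0.1]])
y = torch.tensor([0, 1, 1, 1])                       # ground-truth labels (data.y)
test_mask = torch.tensor([True, True, False, True])  # data.test_mask

pred = logits.argmax(dim=-1)                          # predicted class per node
correct = int((pred[test_mask] == y[test_mask]).sum())
acc = correct / int(test_mask.sum())                  # accuracy on test nodes only
print(f"Test accuracy: {acc:.4f}")
```

In the real example, the logits are produced under `torch.no_grad()` after calling `model.eval()`.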
### Create your own GNN layer
In addition to the easy application of existing GNNs, PyG makes it simple to implement custom Graph Neural Networks (see [here](https://pytorch-geometric.readthedocs.io/en/latest/tutorial/create_gnn.html) for the accompanying tutorial).
For example, this is all it takes to implement the [edge convolutional layer](https://arxiv.org/abs/1801.07829) from Wang *et al.*:
$$x_i^{\prime} ~ = ~ \max_{j \in \mathcal{N}(i)} ~ \textrm{MLP}_{\theta} \left( [ ~ x_i, ~ x_j - x_i ~ ] \right)$$
```python
import torch
from torch import Tensor
from torch.nn import Sequential, Linear, ReLU
from torch_geometric.nn import MessagePassing
class EdgeConv(MessagePassing):
def __init__(self, in_channels, out_channels):
super().__init__(aggr="max") # "Max" aggregation.
self.mlp = Sequential(
Linear(2 * in_channels, out_channels),
ReLU(),
Linear(out_channels, out_channels),
)
def forward(self, x: Tensor, edge_index: Tensor) -> Tensor:
# x: Node feature matrix of shape [num_nodes, in_channels]
# edge_index: Graph connectivity matrix of shape [2, num_edges]
return self.propagate(edge_index, x=x) # shape [num_nodes, out_channels]
def message(self, x_j: Tensor, x_i: Tensor) -> Tensor:
# x_j: Source node features of shape [num_edges, in_channels]
# x_i: Target node features of shape [num_edges, in_channels]
edge_features = torch.cat([x_i, x_j - x_i], dim=-1)
return self.mlp(edge_features) # shape [num_edges, out_channels]
```
## Architecture Overview
PyG provides a multi-layer framework that enables users to build Graph Neural Network solutions on both low and high levels.
It comprises the following components:
- The PyG **engine** utilizes the powerful PyTorch deep learning framework with full [`torch.compile`](https://pytorch-geometric.readthedocs.io/en/latest/advanced/compile.html) and [TorchScript](https://pytorch-geometric.readthedocs.io/en/latest/advanced/jit.html) support, as well as additions of efficient CPU/CUDA libraries for operating on sparse data, *e.g.*, [`pyg-lib`](https://github.com/pyg-team/pyg-lib).
- The PyG **storage** handles data processing, transformation and loading pipelines. It is capable of handling and processing large-scale graph datasets, and provides effective solutions for heterogeneous graphs. It further provides a variety of sampling solutions, which enable training of GNNs on large-scale graphs.
- The PyG **operators** bundle essential functionalities for implementing Graph Neural Networks. PyG supports important GNN building blocks that can be combined and applied to various parts of a GNN model, ensuring rich flexibility of GNN design.
- Finally, PyG provides an abundant set of GNN **models**, and examples that showcase GNN models on standard graph benchmarks. Thanks to its flexibility, users can easily build and modify custom GNN models to fit their specific needs.
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/pyg-team/pytorch_geometric/master/docs/source/_figures/architecture.svg?sanitize=true" />
</p>
## Implemented GNN Models
We list currently supported PyG models, layers and operators according to category:
**GNN layers:**
All Graph Neural Network layers are implemented via the **[`nn.MessagePassing`](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.MessagePassing.html)** interface.
A GNN layer specifies how to perform message passing, *i.e.* by designing different message, aggregation and update functions as defined [here](https://pytorch-geometric.readthedocs.io/en/latest/tutorial/create_gnn.html).
These GNN layers can be stacked together to create Graph Neural Network models.
- **[GCNConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GCNConv.html)** from Kipf and Welling: [Semi-Supervised Classification with Graph Convolutional Networks](https://arxiv.org/abs/1609.02907) (ICLR 2017) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn.py)\]
- **[ChebConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.ChebConv.html)** from Defferrard *et al.*: [Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering](https://arxiv.org/abs/1606.09375) (NIPS 2016) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn.py#L36-L37)\]
- **[GATConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GATConv.html)** from Veličković *et al.*: [Graph Attention Networks](https://arxiv.org/abs/1710.10903) (ICLR 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gat.py)\]
<details>
<summary><b>Expand to see all implemented GNN layers...</b></summary>
- **[GCN2Conv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GCN2Conv.html)** from Chen *et al.*: [Simple and Deep Graph Convolutional Networks](https://arxiv.org/abs/2007.02133) (ICML 2020) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn2_cora.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn2_ppi.py)\]
- **[SplineConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SplineConv.html)** from Fey *et al.*: [SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels](https://arxiv.org/abs/1711.08920) (CVPR 2018) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/cora.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/faust.py)\]
- **[NNConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.NNConv.html)** from Gilmer *et al.*: [Neural Message Passing for Quantum Chemistry](https://arxiv.org/abs/1704.01212) (ICML 2017) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/qm9_nn_conv.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/mnist_nn_conv.py)\]
- **[CGConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.CGConv.html)** from Xie and Grossman: [Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.120.145301) (Physical Review Letters 120, 2018)
- **[ECConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.ECConv.html)** from Simonovsky and Komodakis: [Edge-Conditioned Convolution on Graphs](https://arxiv.org/abs/1704.02901) (CVPR 2017)
- **[EGConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.EGConv.html)** from Tailor *et al.*: [Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions](https://arxiv.org/abs/2104.01481) (GNNSys 2021) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/egc.py)\]
- **[GATv2Conv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GATv2Conv.html)** from Brody *et al.*: [How Attentive are Graph Attention Networks?](https://arxiv.org/abs/2105.14491) (ICLR 2022)
- **[TransformerConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.TransformerConv.html)** from Shi *et al.*: [Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification](https://arxiv.org/abs/2009.03509) (CoRR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/unimp_arxiv.py)\]
- **[SAGEConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SAGEConv.html)** from Hamilton *et al.*: [Inductive Representation Learning on Large Graphs](https://arxiv.org/abs/1706.02216) (NIPS 2017) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/reddit.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/ogbn_train.py), [**Example3**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/graph_sage_unsup.py), [**Example4**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/graph_sage_unsup_ppi.py)\]
- **[GraphConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GraphConv.html)** from, *e.g.*, Morris *et al.*: [Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks](https://arxiv.org/abs/1810.02244) (AAAI 2019)
- **[GatedGraphConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GatedGraphConv.html)** from Li *et al.*: [Gated Graph Sequence Neural Networks](https://arxiv.org/abs/1511.05493) (ICLR 2016)
- **[ResGatedGraphConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.ResGatedGraphConv.html)** from Bresson and Laurent: [Residual Gated Graph ConvNets](https://arxiv.org/abs/1711.07553) (CoRR 2017)
- **[GINConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GINConv.html)** from Xu *et al.*: [How Powerful are Graph Neural Networks?](https://arxiv.org/abs/1810.00826) (ICLR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/mutag_gin.py)\]
- **[GINEConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GINEConv.html)** from Hu *et al.*: [Strategies for Pre-training Graph Neural Networks](https://arxiv.org/abs/1905.12265) (ICLR 2020)
- **[ARMAConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.ARMAConv.html)** from Bianchi *et al.*: [Graph Neural Networks with Convolutional ARMA Filters](https://arxiv.org/abs/1901.01343) (CoRR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/arma.py)\]
- **[SGConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SGConv.html)** from Wu *et al.*: [Simplifying Graph Convolutional Networks](https://arxiv.org/abs/1902.07153) (CoRR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/sgc.py)\]
- **[APPNP](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.APPNP.html)** from Klicpera *et al.*: [Predict then Propagate: Graph Neural Networks meet Personalized PageRank](https://arxiv.org/abs/1810.05997) (ICLR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/citation/appnp.py)\]
- **[MFConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.MFConv.html)** from Duvenaud *et al.*: [Convolutional Networks on Graphs for Learning Molecular Fingerprints](https://arxiv.org/abs/1509.09292) (NIPS 2015)
- **[AGNNConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.AGNNConv.html)** from Thekumparampil *et al.*: [Attention-based Graph Neural Network for Semi-Supervised Learning](https://arxiv.org/abs/1803.03735) (CoRR 2017) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/agnn.py)\]
- **[TAGConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.TAGConv.html)** from Du *et al.*: [Topology Adaptive Graph Convolutional Networks](https://arxiv.org/abs/1710.10370) (CoRR 2017) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/tagcn.py)\]
- **[PNAConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.PNAConv.html)** from Corso *et al.*: [Principal Neighbourhood Aggregation for Graph Nets](https://arxiv.org/abs/2004.05718) (CoRR 2020) \[**[Example](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/pna.py)**\]
- **[FAConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.FAConv.html)** from Bo *et al.*: [Beyond Low-Frequency Information in Graph Convolutional Networks](https://arxiv.org/abs/2101.00797) (AAAI 2021)
- **[PDNConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.PDNConv.html)** from Rozemberczki *et al.*: [Pathfinder Discovery Networks for Neural Message Passing](https://arxiv.org/abs/2010.12878) (WWW 2021)
- **[RGCNConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.RGCNConv.html)** from Schlichtkrull *et al.*: [Modeling Relational Data with Graph Convolutional Networks](https://arxiv.org/abs/1703.06103) (ESWC 2018) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/rgcn.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/rgcn_link_pred.py)\]
- **[RGATConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.RGATConv.html)** from Busbridge *et al.*: [Relational Graph Attention Networks](https://arxiv.org/abs/1904.05811) (CoRR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/rgat.py)\]
- **[FiLMConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.FiLMConv.html)** from Brockschmidt: [GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation](https://arxiv.org/abs/1906.12192) (ICML 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/film.py)\]
- **[SignedConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SignedConv.html)** from Derr *et al.*: [Signed Graph Convolutional Network](https://arxiv.org/abs/1808.06354) (ICDM 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/signed_gcn.py)\]
- **[DNAConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.DNAConv.html)** from Fey: [Just Jump: Dynamic Neighborhood Aggregation in Graph Neural Networks](https://arxiv.org/abs/1904.04849) (ICLR-W 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/dna.py)\]
- **[PANConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.PANConv.html)** from Ma *et al.*: [Path Integral Based Convolution and Pooling for Graph Neural Networks](https://arxiv.org/abs/2006.16811) (NeurIPS 2020)
- **[PointNetConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.PointNetConv.html)** (including **[Iterative Farthest Point Sampling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.fps.html)**, dynamic graph generation based on **[nearest neighbor](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.knn_graph.html)** or **[maximum distance](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.radius_graph.html)**, and **[k-NN interpolation](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.unpool.knn_interpolate.html)** for upsampling) from Qi *et al.*: [PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation](https://arxiv.org/abs/1612.00593) (CVPR 2017) and [PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space](https://arxiv.org/abs/1706.02413) (NIPS 2017) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/pointnet2_classification.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/pointnet2_segmentation.py)\]
- **[EdgeConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.EdgeConv.html)** from Wang *et al.*: [Dynamic Graph CNN for Learning on Point Clouds](https://arxiv.org/abs/1801.07829) (CoRR, 2018) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/dgcnn_classification.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/dgcnn_segmentation.py)\]
- **[XConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.XConv.html)** from Li *et al.*: [PointCNN: Convolution On X-Transformed Points](https://arxiv.org/abs/1801.07791) (NeurIPS 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/points/point_cnn.py)\]
- **[PPFConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.PPFConv.html)** from Deng *et al.*: [PPFNet: Global Context Aware Local Features for Robust 3D Point Matching](https://arxiv.org/abs/1802.02669) (CVPR 2018)
- **[GMMConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GMMConv.html)** from Monti *et al.*: [Geometric Deep Learning on Graphs and Manifolds using Mixture Model CNNs](https://arxiv.org/abs/1611.08402) (CVPR 2017)
- **[FeaStConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.FeaStConv.html)** from Verma *et al.*: [FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis](https://arxiv.org/abs/1706.05206) (CVPR 2018)
- **[PointTransformerConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.PointTransformerConv.html)** from Zhao *et al.*: [Point Transformer](https://arxiv.org/abs/2012.09164) (2020)
- **[HypergraphConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.HypergraphConv.html)** from Bai *et al.*: [Hypergraph Convolution and Hypergraph Attention](https://arxiv.org/abs/1901.08150) (CoRR 2019)
- **[GravNetConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GravNetConv.html)** from Qasim *et al.*: [Learning Representations of Irregular Particle-detector Geometry with Distance-weighted Graph Networks](https://arxiv.org/abs/1902.07987) (European Physics Journal C, 2019)
- **[SuperGAT](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SuperGATConv.html)** from Kim and Oh: [How To Find Your Friendly Neighborhood: Graph Attention Design With Self-Supervision](https://openreview.net/forum?id=Wi5KUNlqWty) (ICLR 2021) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/super_gat.py)\]
- **[HGTConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.HGTConv.html)** from Hu *et al.*: [Heterogeneous Graph Transformer](https://arxiv.org/abs/2003.01332) (WWW 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/hetero/hgt_dblp.py)\]
- **[HEATConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.HEATConv.html)** from Mo *et al.*: [Heterogeneous Edge-Enhanced Graph Attention Network For Multi-Agent Trajectory Prediction](https://arxiv.org/abs/2106.07161) (CoRR 2021)
- **[SSGConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SSGConv.html)** from Zhu *et al.*: [Simple Spectral Graph Convolution](https://openreview.net/forum?id=CYO5T-YjWZV) (ICLR 2021)
- **[FusedGATConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.FusedGATConv.html)** from Zhang *et al.*: [Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective](https://proceedings.mlsys.org/paper/2022/file/9a1158154dfa42caddbd0694a4e9bdc8-Paper.pdf) (MLSys 2022)
- **[GPSConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GPSConv.html)** from Rampášek *et al.*: [Recipe for a General, Powerful, Scalable Graph Transformer](https://arxiv.org/abs/2205.12454) (NeurIPS 2022) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/graph_gps.py)\]
</details>
**Pooling layers:**
Graph pooling layers combine the vector representations of a set of nodes in a graph (or a subgraph) into a single vector representation that summarizes the properties of those nodes.
It is commonly applied to graph-level tasks, which require combining node features into a single graph representation.
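As a concrete picture of what such a readout does, here is a plain NumPy scatter-mean over a batch-assignment vector, the non-learned baseline that PyG's `global_mean_pool` implements (the helper below is illustrative, not PyG's code):

```python
import numpy as np

def mean_pool(x, batch, num_graphs):
    """Average node features per graph: x is [num_nodes, dim] and
    batch[i] holds the graph id of node i (as in PyG mini-batching)."""
    out = np.zeros((num_graphs, x.shape[1]))
    counts = np.zeros((num_graphs, 1))
    for feat, g in zip(x, batch):
        out[g] += feat
        counts[g] += 1
    return out / counts

x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
batch = np.array([0, 0, 1])    # nodes 0-1 form graph 0, node 2 forms graph 1
print(mean_pool(x, batch, 2))  # graph 0 -> [2., 3.], graph 1 -> [5., 6.]
```

The learnable pooling layers listed here replace this fixed average with attention weights, clustering assignments, or node selection.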
- **[Top-K Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.TopKPooling.html)** from Gao and Ji: [Graph U-Nets](https://arxiv.org/abs/1905.05178) (ICML 2019), Cangea *et al.*: [Towards Sparse Hierarchical Graph Classifiers](https://arxiv.org/abs/1811.01287) (NeurIPS-W 2018) and Knyazev *et al.*: [Understanding Attention and Generalization in Graph Neural Networks](https://arxiv.org/abs/1905.02850) (ICLR-W 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/proteins_topk_pool.py)\]
- **[DiffPool](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.dense.dense_diff_pool.html)** from Ying *et al.*: [Hierarchical Graph Representation Learning with Differentiable Pooling](https://arxiv.org/abs/1806.08804) (NeurIPS 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/proteins_diff_pool.py)\]
<details>
<summary><b>Expand to see all implemented pooling layers...</b></summary>
- **[Attentional Aggregation](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.aggr.AttentionalAggregation.html)** from Li *et al.*: [Graph Matching Networks for Learning the Similarity of Graph Structured Objects](https://arxiv.org/abs/1904.12787) (ICML 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/global_attention.py)\]
- **[Set2Set](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.aggr.Set2Set.html)** from Vinyals *et al.*: [Order Matters: Sequence to Sequence for Sets](https://arxiv.org/abs/1511.06391) (ICLR 2016) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/set2set.py)\]
- **[Sort Aggregation](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.aggr.SortAggregation.html)** from Zhang *et al.*: [An End-to-End Deep Learning Architecture for Graph Classification](https://www.cse.wustl.edu/~muhan/papers/AAAI_2018_DGCNN.pdf) (AAAI 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/sort_pool.py)\]
- **[MinCut Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.dense.dense_mincut_pool.html)** from Bianchi *et al.*: [Spectral Clustering with Graph Neural Networks for Graph Pooling](https://arxiv.org/abs/1907.00481) (ICML 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/proteins_mincut_pool.py)\]
- **[DMoN Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.dense.DMoNPooling.html)** from Tsitsulin *et al.*: [Graph Clustering with Graph Neural Networks](https://arxiv.org/abs/2006.16904) (CoRR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/proteins_dmon_pool.py)\]
- **[Graclus Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.graclus.html)** from Dhillon *et al.*: [Weighted Graph Cuts without Eigenvectors: A Multilevel Approach](http://www.cs.utexas.edu/users/inderjit/public_papers/multilevel_pami.pdf) (PAMI 2007) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/mnist_graclus.py)\]
- **[Voxel Grid Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.voxel_grid.html)** from, *e.g.*, Simonovsky and Komodakis: [Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs](https://arxiv.org/abs/1704.02901) (CVPR 2017) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/mnist_voxel_grid.py)\]
- **[SAG Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.SAGPooling.html)** from Lee *et al.*: [Self-Attention Graph Pooling](https://arxiv.org/abs/1904.08082) (ICML 2019) and Knyazev *et al.*: [Understanding Attention and Generalization in Graph Neural Networks](https://arxiv.org/abs/1905.02850) (ICLR-W 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/sag_pool.py)\]
- **[Edge Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.EdgePooling.html)** from Diehl *et al.*: [Towards Graph Pooling by Edge Contraction](https://graphreason.github.io/papers/17.pdf) (ICML-W 2019) and Diehl: [Edge Contraction Pooling for Graph Neural Networks](https://arxiv.org/abs/1905.10990) (CoRR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/edge_pool.py)\]
- **[ASAPooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.ASAPooling.html)** from Ranjan *et al.*: [ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations](https://arxiv.org/abs/1911.07979) (AAAI 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/asap.py)\]
- **[PANPooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.PANPooling.html)** from Ma *et al.*: [Path Integral Based Convolution and Pooling for Graph Neural Networks](https://arxiv.org/abs/2006.16811) (NeurIPS 2020)
- **[MemPooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.MemPooling.html)** from Khasahmadi *et al.*: [Memory-Based Graph Networks](https://arxiv.org/abs/2002.09518) (ICLR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/mem_pool.py)\]
- **[Graph Multiset Transformer](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.aggr.GraphMultisetTransformer.html)** from Baek *et al.*: [Accurate Learning of Graph Representations with Graph Multiset Pooling](https://arxiv.org/abs/2102.11533) (ICLR 2021) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/proteins_gmt.py)\]
- **[Equilibrium Aggregation](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.aggr.EquilibriumAggregation.html)** from Bartunov *et al.*: [Equilibrium Aggregation: Encoding Sets via Optimization](https://arxiv.org/abs/2202.12795) (UAI 2022) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/equilibrium_median.py)\]
</details>
**GNN models:**
Our supported GNN models incorporate multiple message passing layers, and users can apply these pre-defined models directly to make predictions on graphs.
Unlike a simple stack of GNN layers, these models may also involve pre-processing, additional learnable parameters, skip connections, graph coarsening, and more.
- **[SchNet](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.SchNet.html)** from Schütt *et al.*: [SchNet: A Continuous-filter Convolutional Neural Network for Modeling Quantum Interactions](https://arxiv.org/abs/1706.08566) (NIPS 2017) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/qm9_pretrained_schnet.py)\]
- **[DimeNet](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.DimeNet.html)** and **[DimeNetPlusPlus](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.DimeNetPlusPlus.html)** from Klicpera *et al.*: [Directional Message Passing for Molecular Graphs](https://arxiv.org/abs/2003.03123) (ICLR 2020) and [Fast and Uncertainty-Aware Directional Message Passing for Non-Equilibrium Molecules](https://arxiv.org/abs/2011.14115) (NeurIPS-W 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/qm9_pretrained_dimenet.py)\]
- **[Node2Vec](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.Node2Vec.html)** from Grover and Leskovec: [node2vec: Scalable Feature Learning for Networks](https://arxiv.org/abs/1607.00653) (KDD 2016) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/node2vec.py)\]
- **[Deep Graph Infomax](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.DeepGraphInfomax.html)** from Veličković *et al.*: [Deep Graph Infomax](https://arxiv.org/abs/1809.10341) (ICLR 2019) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/infomax_transductive.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/infomax_inductive.py)\]
- **Deep Multiplex Graph Infomax** from Park *et al.*: [Unsupervised Attributed Multiplex Network Embedding](https://arxiv.org/abs/1911.06750) (AAAI 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/hetero/dmgi_unsup.py)\]
- **[Masked Label Prediction](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.MaskLabel.html)** from Shi *et al.*: [Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification](https://arxiv.org/abs/2009.03509) (CoRR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/unimp_arxiv.py)\]
- **[PMLP](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.PMLP.html)** from Yang *et al.*: [Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs](https://arxiv.org/abs/2212.09034) (ICLR 2023)
<details>
<summary><b>Expand to see all implemented GNN models...</b></summary>
- **[Jumping Knowledge](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.JumpingKnowledge.html)** from Xu *et al.*: [Representation Learning on Graphs with Jumping Knowledge Networks](https://arxiv.org/abs/1806.03536) (ICML 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/gin.py#L54-L106)\]
- A **[MetaLayer](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.MetaLayer.html)** for building any kind of graph network similar to the [TensorFlow Graph Nets library](https://github.com/deepmind/graph_nets) from Battaglia *et al.*: [Relational Inductive Biases, Deep Learning, and Graph Networks](https://arxiv.org/abs/1806.01261) (CoRR 2018)
- **[MetaPath2Vec](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.MetaPath2Vec.html)** from Dong *et al.*: [metapath2vec: Scalable Representation Learning for Heterogeneous Networks](https://ericdongyx.github.io/papers/KDD17-dong-chawla-swami-metapath2vec.pdf) (KDD 2017) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/hetero/metapath2vec.py)\]
- All variants of **[Graph Autoencoders](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.GAE.html)** and **[Variational Autoencoders](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.VGAE.html)** from:
- [Variational Graph Auto-Encoders](https://arxiv.org/abs/1611.07308) from Kipf and Welling (NIPS-W 2016) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/autoencoder.py)\]
- [Adversarially Regularized Graph Autoencoder for Graph Embedding](https://arxiv.org/abs/1802.04407) from Pan *et al.* (IJCAI 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/argva_node_clustering.py)\]
- [Simple and Effective Graph Autoencoders with One-Hop Linear Models](https://arxiv.org/abs/2001.07614) from Salha *et al.* (ECML 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/autoencoder.py)\]
- **[SEAL](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/seal_link_pred.py)** from Zhang and Chen: [Link Prediction Based on Graph Neural Networks](https://arxiv.org/pdf/1802.09691.pdf) (NeurIPS 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/seal_link_pred.py)\]
- **[RENet](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.RENet.html)** from Jin *et al.*: [Recurrent Event Network for Reasoning over Temporal Knowledge Graphs](https://arxiv.org/abs/1904.05530) (ICLR-W 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/renet.py)\]
- **[GraphUNet](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.GraphUNet.html)** from Gao and Ji: [Graph U-Nets](https://arxiv.org/abs/1905.05178) (ICML 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/graph_unet.py)\]
- **[AttentiveFP](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.AttentiveFP.html)** from Xiong *et al.*: [Pushing the Boundaries of Molecular Representation for Drug Discovery with the Graph Attention Mechanism](https://pubs.acs.org/doi/10.1021/acs.jmedchem.9b00959) (J. Med. Chem. 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/attentive_fp.py)\]
- **[DeepGCN](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.DeepGCNLayer.html)** and the **[GENConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GENConv.html)** from Li *et al.*: [DeepGCNs: Can GCNs Go as Deep as CNNs?](https://arxiv.org/abs/1904.03751) (ICCV 2019) and [DeeperGCN: All You Need to Train Deeper GCNs](https://arxiv.org/abs/2006.07739) (CoRR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/ogbn_proteins_deepgcn.py)\]
- **[RECT](https://pytorch-geometric.readth | text/markdown | null | Matthias Fey <matthias@pyg.org> | null | null | null | deep-learning, pytorch, geometric-deep-learning, graph-neural-networks, graph-convolutional-networks | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp",
"fsspec",
"jinja2",
"numpy",
"psutil>=5.8.0",
"pyparsing",
"requests",
"tqdm",
"xxhash",
"matplotlib; extra == \"benchmark\"",
"networkx; extra == \"benchmark\"",
"pandas; extra == \"benchmark\"",
"protobuf<4.21; extra == \"benchmark\"",
"wandb; extra == \"benchmark\"",
"ipython; extra == \"dev\"",
"matplotlib-inline; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"torch_geometric[test]; extra == \"dev\"",
"scipy; extra == \"full\"",
"scikit-learn; extra == \"full\"",
"ase; extra == \"full\"",
"captum<0.7.0; extra == \"full\"",
"graphviz; extra == \"full\"",
"h5py; extra == \"full\"",
"matplotlib; extra == \"full\"",
"networkx; extra == \"full\"",
"numba<0.60.0; extra == \"full\"",
"opt_einsum; extra == \"full\"",
"pandas; extra == \"full\"",
"pynndescent; extra == \"full\"",
"pytorch-memlab; extra == \"full\"",
"rdflib; extra == \"full\"",
"rdkit; extra == \"full\"",
"scikit-image; extra == \"full\"",
"statsmodels; extra == \"full\"",
"sympy; extra == \"full\"",
"tabulate; extra == \"full\"",
"torch_geometric[graphgym,modelhub]; extra == \"full\"",
"torchmetrics; extra == \"full\"",
"trimesh; extra == \"full\"",
"protobuf<4.21; extra == \"graphgym\"",
"pytorch-lightning; extra == \"graphgym\"",
"yacs; extra == \"graphgym\"",
"huggingface_hub; extra == \"modelhub\"",
"pcst_fast; extra == \"rag\"",
"datasets; extra == \"rag\"",
"transformers; extra == \"rag\"",
"pandas; extra == \"rag\"",
"sentencepiece; extra == \"rag\"",
"accelerate; extra == \"rag\"",
"torchmetrics; extra == \"rag\"",
"onnx; extra == \"test\"",
"onnxruntime; extra == \"test\"",
"onnxscript; extra == \"test\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\""
] | [] | [] | [] | [
"changelog, https://github.com/pyg-team/pytorch_geometric/blob/master/CHANGELOG.md",
"documentation, https://pytorch-geometric.readthedocs.io",
"homepage, https://pyg.org",
"repository, https://github.com/pyg-team/pytorch_geometric.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:16:40.001927 | pyg_nightly-2.8.0.dev20260221.tar.gz | 877,252 | d1/61/833dac4ad78ccb6f59e8455789b8d7609f4f04fd58fb66710d585f31c109/pyg_nightly-2.8.0.dev20260221.tar.gz | source | sdist | null | false | b8a5a383f38a213eecf5bd59f570a44f | ff4be40878c48ecd5f9c05f811c8573f2baec5cab8493f49c3c9e602976eb98c | d161833dac4ad78ccb6f59e8455789b8d7609f4f04fd58fb66710d585f31c109 | MIT | [
"LICENSE"
] | 195 |
2.4 | pulumi-azuread | 6.9.0a1771654110 | A Pulumi package for creating and managing Azure Active Directory (Azure AD) cloud resources. | [](https://github.com/pulumi/pulumi-azuread/actions)
[](https://slack.pulumi.com)
[](https://npmjs.com/package/@pulumi/azuread)
[](https://badge.fury.io/nu/pulumi.azuread)
[](https://pypi.org/project/pulumi-azuread)
[](https://pkg.go.dev/github.com/pulumi/pulumi-azuread/sdk/v6/go)
[](https://github.com/pulumi/pulumi-azuread/blob/master/LICENSE)
# Microsoft Azure Active Directory Resource Provider
The Microsoft Azure AD resource provider for Pulumi lets you use Azure Active Directory resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/). For a streamlined Pulumi walkthrough, including language runtime installation and Azure configuration, click "Get Started" below.
<div>
<a href="https://www.pulumi.com/docs/get-started/azure" title="Get Started">
<img src="https://www.pulumi.com/images/get-started.svg?" width="120">
</a>
</div>
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
    npm install @pulumi/azuread
or `yarn`:
    yarn add @pulumi/azuread
### Python 3
To use from Python, install using `pip`:
    pip install pulumi-azuread
### Go
To use from Go, use `go get` to grab the latest version of the library:
    go get github.com/pulumi/pulumi-azuread/sdk/v6
### .NET
To use from .NET, install using `dotnet add package`:
    dotnet add package Pulumi.Azuread
## Configuration
The following configuration points are available:
- `azuread:clientId` - The Client ID which should be used. This can also be sourced from the `ARM_CLIENT_ID` Environment
Variable.
- `azuread:tenantId` - The Tenant ID which should be used. This can also be sourced from the `ARM_TENANT_ID` Environment
Variable.
- `azuread:clientSecret` - The Client Secret which should be used. This can also be sourced from the `ARM_CLIENT_SECRET`
Environment Variable.
- `azuread:certificatePassword` - The password associated with the Client Certificate. This can also be sourced from
the `ARM_CLIENT_CERTIFICATE_PASSWORD` Environment Variable.
- `azuread:clientCertificatePath` - The path to the Client Certificate associated with the Service Principal which should
be used. This can also be sourced from the `ARM_CLIENT_CERTIFICATE_PATH` Environment Variable.
- `azuread:environment` - The Cloud Environment which should be used. Possible values are `public`, `usgovernment`, `german`, and `china`.
Defaults to `public`. This can also be sourced from the `ARM_ENVIRONMENT` environment variable.
- `azuread:msiEndpoint` - The path to a custom endpoint for Managed Service Identity - in most circumstances this should
be detected automatically. This can also be sourced from the `ARM_MSI_ENDPOINT` Environment Variable.
- `azuread:useMsi` - Should Managed Service Identity be used for Authentication? This can also be sourced from the
`ARM_USE_MSI` Environment Variable. Defaults to `false`.
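For instance, service-principal credentials can be supplied either as stack configuration (secrets are stored encrypted) or as the corresponding environment variables; the GUIDs below are placeholders, not real values:

```shell
# Option 1: store in the stack configuration (use --secret for sensitive values)
pulumi config set azuread:clientId 00000000-0000-0000-0000-000000000000
pulumi config set azuread:tenantId 00000000-0000-0000-0000-000000000000
pulumi config set --secret azuread:clientSecret "my-client-secret"

# Option 2: export the equivalent environment variables
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="my-client-secret"
```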
## Reference
For further information, please visit [the AzureAD provider docs](https://www.pulumi.com/registry/packages/azuread/) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/registry/packages/azuread/api-docs/).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, azuread | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-azuread"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-21T06:16:37.899014 | pulumi_azuread-6.9.0a1771654110.tar.gz | 224,333 | 87/70/36381ed0edd544f0c05e382cc652bbe6284589c2ef665d93fbcf0959b1d1/pulumi_azuread-6.9.0a1771654110.tar.gz | source | sdist | null | false | 1a84d3b859a38ad8a4e0305e7752aaf0 | 104c08e99411d41757f6e9934abda4d65ea245aea64eeb3af5ab45dfd43c3538 | 877036381ed0edd544f0c05e382cc652bbe6284589c2ef665d93fbcf0959b1d1 | null | [] | 216 |
2.4 | silvergrain | 0.2 | Physically-based film grain rendering with GPU acceleration | # SilverGrain
Physically-based film grain rendering for Python.
SilverGrain implements the photographic grain simulation algorithm from [Newson et al. (2017)](https://doi.org/10.5201/ipol.2017.192), making it accessible as a friendly Python library and CLI tool. Unlike simple noise overlays, this approach models grain as a stochastic geometric process based on the actual physics of silver halide crystals in analog film.
The result: realistic, resolution-independent grain that scales cleanly to any output size.
### Example
| Original | SilverGrain (luminance) |
|----------------------------------------|------------------------------------------------------|
|  |  |
## Features
- **Physically accurate**: Models grain using Poisson point processes and Boolean geometry, matching how real film works
- **Resolution independent**: Render at any zoom level—grain structure remains consistent
- **GPU accelerated**: CUDA support for ~750× speedup on typical workloads
- **Flexible processing**: Apply grain to luminance only (preserves color) or per-channel (chromatic grain)
- **Multiple interfaces**: Use as a Python library, single-image CLI, batch processor, or dataset augmentation tool
- **Adjustable strength**: Blend grain with original image for subtle effects
## Installation
```bash
# RECOMMENDED: With GPU acceleration (requires NVIDIA GPU + CUDA)
pip install silvergrain[gpu]
# CPU-only version (deathly slow, you have been warned)
pip install silvergrain
```
## Quick Start
### Command Line
Process a single image:
```bash
silvergrain input.png output.png --grain-radius 0.12
```
Batch process a directory:
```bash
silvergrain-batch images/ processed/ --grain-radius 0.15 --strength 0.8
```
Generate augmented variants:
```bash
silvergrain-augment clean_images/ augmented/ \
--count 10 \
--grain-radius 0.08:0.20 \
--strength 0.7:1.0
```
### Python Library
```python
from PIL import Image
from silvergrain import FilmGrainRenderer
# Load image
image = Image.open("input.png")
# Create renderer with desired grain characteristics
renderer = FilmGrainRenderer(
grain_radius=0.12, # Average grain size (smaller = finer grain)
n_monte_carlo=200, # Quality vs speed tradeoff
device='auto' # Use GPU if available
)
# Apply grain
output = renderer.process_image(image, mode='luminance', strength=1.0)
output.save("output.png")
```
## How It Works
Traditional "film grain" effects simply overlay noise patterns such as Gaussian, color, or luminance noise. SilverGrain does something more interesting: it simulates the actual stochastic geometry of photographic film grain.
### The Physical Model
In real photographic film, light-sensitive silver halide crystals are randomly distributed across the emulsion. When exposed and developed, each crystal that absorbed a photon becomes an opaque grain. The key insight: darker regions have *higher grain density*, not larger grains.
SilverGrain models this using:
1. **Poisson point process**: Grain centers placed randomly with density proportional to pixel brightness
2. **Boolean model**: Each grain is a disk with random radius (optional log-normal distribution)
3. **Monte Carlo convolution**: Simulate the optical filtering of the film-to-print process
This produces grain that:
- Scales correctly across brightness levels (more grain in shadows, less in highlights)
- Remains consistent when zooming or changing resolution
- Exhibits the characteristic "clumpy" structure of real film
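The coupling between brightness and grain density can be made concrete: in the Boolean model, disks of radius r placed at Poisson intensity λ cover a fraction 1 − exp(−λπr²) of the plane, so matching an input gray level u means solving that relation for λ. A minimal sketch (the helper name is ours, not the library's API):

```python
import math

def grain_intensity(u, r, eps=1e-6):
    """Poisson intensity (grains per unit area) so that disks of radius r
    cover, on average, a fraction u of the plane (Boolean model)."""
    u = min(max(u, 0.0), 1.0 - eps)  # guard against u = 1 (infinite density)
    return -math.log(1.0 - u) / (math.pi * r * r)

r = 0.12
for u in (0.2, 0.5, 0.9):
    lam = grain_intensity(u, r)
    coverage = 1.0 - math.exp(-lam * math.pi * r * r)
    print(f"u={u}: {lam:8.1f} grains/px^2, expected coverage {coverage:.2f}")
```

Note how the required intensity grows faster than linearly as u approaches 1.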
### The Practical Implementation
The Monte Carlo approach works by:
1. For each output pixel, take N random samples from a Gaussian-offset neighborhood
2. For each sample, determine if it's covered by a grain (using deterministic per-pixel RNG)
3. Average the results to get the filtered grain value
This is embarrassingly parallel, which makes it ideal for GPU acceleration: the CUDA implementation typically runs hundreds of times faster than the CPU path (~750× in the benchmark below).
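A toy version of that per-pixel loop is sketched below. It is simplified in one important way: the coverage test only draws grains from the sample's own unit cell, whereas the real algorithm also considers grains from neighboring cells. All names here are illustrative, not SilverGrain's internals.

```python
import math
import numpy as np

def covered(sx, sy, lam, r, seed):
    """Deterministic coverage test: seed an RNG from the unit cell containing
    (sx, sy), draw Poisson(lam) grain centers inside it, test disk membership."""
    cx, cy = math.floor(sx), math.floor(sy)
    rng = np.random.default_rng(abs(hash((cx, cy, seed))) % (2 ** 32))
    n = rng.poisson(lam)
    if n == 0:
        return False
    centers = rng.random((n, 2)) + np.array([cx, cy], dtype=float)
    return bool(np.any(((centers - [sx, sy]) ** 2).sum(axis=1) <= r * r))

def render_pixel(x, y, lam, r, sigma_filter=0.8, n_mc=200, seed=0):
    """Monte Carlo estimate of filtered grain coverage at pixel (x, y):
    average n_mc coverage tests at Gaussian-jittered sample points."""
    rng = np.random.default_rng(seed)
    hits = sum(
        covered(x + rng.normal(0, sigma_filter),
                y + rng.normal(0, sigma_filter), lam, r, seed)
        for _ in range(n_mc))
    return hits / n_mc
```

Because the per-cell RNG is re-seeded deterministically, every sample that lands in the same cell sees the same grains, which is what makes the result a consistent geometric texture rather than independent noise.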
## Usage Patterns
### CLI Tools
#### `silvergrain` - Single Image Processing
Apply grain to one image with full control over parameters:
```bash
silvergrain input.png output.png \
--grain-radius 0.15 \
--grain-sigma 0.03 \
--n-monte-carlo 400 \
--mode luminance \
--strength 0.9
```
**Key options:**
- `--grain-radius`: Mean grain size in pixels (0.05-0.3 typical)
- `--grain-sigma`: Grain size variation (0 = uniform, higher = more variation)
- `--sigma-filter`: Anti-aliasing strength (default 0.8)
- `--n-monte-carlo`: Sample count (higher = better quality, slower)
- `--mode`: `luminance` (color-preserving) or `rgb` (per-channel)
- `--strength`: Blend amount 0.0-1.0 (1.0 = full grain)
- `--quality`: Quick preset (`fast`, `balanced`, `high`)
#### `silvergrain-batch` - Batch Processing
Process multiple images with consistent settings:
```bash
# Process entire directory
silvergrain-batch images/ output/ --quality high
# In-place processing with suffix
silvergrain-batch images/ --grain-radius 0.12
# Recursive search
silvergrain-batch images/ output/ --recursive --strength 0.7
```
**Key options:**
- Output directory is optional—omit for in-place mode (adds `-grainy.png` suffix)
- `--recursive`: Search subdirectories
- `--simple`: Minimal progress output
#### `silvergrain-augment` - Dataset Augmentation
Generate randomized variants for training data augmentation:
```bash
silvergrain-augment clean_images/ augmented/ \
--count 20 \
--grain-radius 0.08:0.20 \
--strength 0.7:1.0 \
--mode rand
```
**Key features:**
- Parameter ranges: Use `low:high` syntax for random sampling per variant
- Fixed values: Use single numbers for consistent parameters
- Output structure: Creates `aug_0/`, `aug_1/`, etc. with preserved filenames (perfect for paired dataloaders)
- `--mode rand`: Randomly chooses `luminance` or `rgb` per variant
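One plausible way such `low:high` specs are handled (this parser is illustrative, not the tool's actual code):

```python
import random

def parse_range(spec):
    """Turn 'low:high' into a per-variant sampler; a bare number
    yields a constant sampler."""
    if ":" in spec:
        low, high = (float(part) for part in spec.split(":"))
        return lambda rng: rng.uniform(low, high)
    value = float(spec)
    return lambda rng: value

rng = random.Random(42)
radius = parse_range("0.08:0.20")(rng)  # random draw in [0.08, 0.20]
strength = parse_range("0.9")(rng)      # always 0.9
```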
**Example output structure:**
```
augmented/
├── aug_0/
│ ├── image_001.png
│ └── image_002.png
├── aug_1/
│ ├── image_001.png
│ └── image_002.png
...
```
### Library Usage
#### Basic Rendering
```python
from PIL import Image
from silvergrain import FilmGrainRenderer
renderer = FilmGrainRenderer(grain_radius=0.12, n_monte_carlo=200)
image = Image.open("input.png")
output = renderer.render(image) # Full grain, same resolution
output.save("output.png")
```
#### Luminance vs RGB Modes
```python
# Luminance mode: grain only affects brightness, preserves color
output = renderer.process_image(image, mode='luminance')
# RGB mode: independent grain per channel, can shift colors
output = renderer.process_image(image, mode='rgb')
```
#### Strength Blending
```python
# Subtle grain at 50% strength
output = renderer.process_image(image, strength=0.5)
```
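Strength blending is, presumably, a plain linear interpolation between the original and the fully grained image (our reading of the parameter, not a documented guarantee):

```python
import numpy as np

def blend(original, grained, strength):
    # strength=0.0 -> original, strength=1.0 -> fully grained
    return (1.0 - strength) * original + strength * grained

orig = np.array([100.0, 200.0])
grain = np.array([120.0, 180.0])
print(blend(orig, grain, 0.5))  # midpoint: [110. 190.]
```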
#### GPU Acceleration
```python
# Explicit GPU (raises error if unavailable)
renderer = FilmGrainRenderer(device='gpu', grain_radius=0.12)
# Auto-detect (uses GPU if available, CPU otherwise)
renderer = FilmGrainRenderer(device='auto', grain_radius=0.12)
# Check which device is being used
print(renderer.device) # 'cpu' or 'gpu'
```
#### Zoom and Resolution Control
```python
# Double the output resolution
output = renderer.render(image, zoom=2.0)
# Explicit output size
output = renderer.render(image, output_size=(1920, 1080))
```
#### Quality Presets
```python
# Fast preview
renderer = FilmGrainRenderer(grain_radius=0.12, n_monte_carlo=100)
# Balanced (default)
renderer = FilmGrainRenderer(grain_radius=0.12, n_monte_carlo=200)
# High quality
renderer = FilmGrainRenderer(grain_radius=0.12, n_monte_carlo=400)
```
## Parameter Guide
### `grain_radius` (float, default 0.1)
Average grain radius in pixels. This is the most important parameter for controlling grain appearance.
- **0.05-0.08**: Very fine grain (ISO 100-200 equivalent)
- **0.10-0.15**: Medium grain (ISO 400-800)
- **0.20-0.30**: Heavy grain (ISO 1600-3200)
Smaller values = finer, more subtle grain. Larger = coarser, more visible texture.
### `grain_sigma` (float, default 0.0)
Standard deviation of grain size distribution (log-normal). Controls grain size variation.
- **0.0**: All grains same size (uniform, can look artificial)
- **0.02-0.05**: Subtle variation (realistic)
- **0.1+**: High variation (artistic effect)
Small variation often looks more natural than perfectly uniform grain.
### `sigma_filter` (float, default 0.8)
Gaussian filter strength for anti-aliasing. Simulates the optical blur of projection/viewing.
- **0.5-0.7**: Sharper grain (more texture)
- **0.8-1.0**: Smoother grain (more subtle)
- **1.2+**: Very smooth (soft focus effect)
### `n_monte_carlo` (int, default 800)
Number of samples per pixel. Higher = better quality but slower.
- **100-150**: Fast preview
- **200-300**: Balanced quality/speed
- **400-800**: High quality
- **1000+**: Diminishing returns (mostly overkill)
Quality scales roughly as √N, so doubling sample count gives ~40% quality improvement.
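That √N scaling is easy to verify empirically: the standard deviation of a Monte Carlo coverage estimate halves each time the sample count quadruples.

```python
import numpy as np

# Simulate many Monte Carlo coverage estimates at several sample counts:
# each estimate averages n Bernoulli(p) coverage tests, as in the renderer.
rng = np.random.default_rng(0)
p = 0.3  # true per-sample coverage probability (arbitrary)
for n in (100, 400, 1600):
    estimates = rng.binomial(n, p, size=5000) / n
    print(f"n={n:5d}: std of estimate = {estimates.std():.4f}")
# The std shrinks ~2x for each 4x increase in n, i.e. error ~ 1/sqrt(n).
```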
### `device` (str, default 'auto')
- **'auto'**: Use GPU if available, fall back to CPU
- **'cpu'**: Force CPU rendering (always available)
- **'gpu'**: Force GPU rendering (errors if unavailable)
### `mode` (str, default 'luminance')
- **'luminance'**: Apply grain only to brightness channel (preserves colors, more realistic for color film)
- **'rgb'**: Apply independent grain to R, G, B channels (can shift colors, more film-like color artifacts)
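For intuition, luminance-only processing might look like the following sketch (Rec. 601 luma weights; the channel-rescaling step is our guess at how chroma is preserved, not SilverGrain's actual implementation):

```python
import numpy as np

def grain_luminance_only(rgb, grain_fn, eps=1e-6):
    """rgb: float array [H, W, 3] in [0, 1]; grain_fn maps a 2-D luma
    plane to its grained version (e.g. a renderer's output)."""
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    y_new = grain_fn(y)
    scale = y_new / np.maximum(y, eps)            # per-pixel luma ratio
    return np.clip(rgb * scale[..., None], 0, 1)  # channel ratios (hue) kept

rgb = np.full((4, 4, 3), 0.5)                       # flat mid-gray image
out = grain_luminance_only(rgb, lambda y: y * 0.9)  # darken luma by 10%
```

RGB mode would instead run the grain renderer independently on each channel, so the three noise realizations no longer cancel and color speckle appears.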
### `strength` (float, default 1.0)
Blend factor between original and grained image.
- **0.0**: Original image, no grain
- **0.5**: 50/50 blend
- **1.0**: Full grain effect
## Performance
Benchmark results (1024×1024 gradient, n_monte_carlo=200):
- **CPU**: ~159 seconds (2.6 minutes)
- **GPU (CUDA)**: ~0.21 seconds
**GPU acceleration: ~750× speedup.**
The GPU advantage scales dramatically with:
- Image size (CPU time scales quadratically, GPU stays fast)
- Monte Carlo sample count (more samples = bigger CPU penalty)
- Batch processing (GPU overhead amortizes across images)
## Examples
See `examples/` directory for complete working examples:
- `example_01_basic_grain.py` - Default settings
- `example_02_fine_grain.py` - Subtle, fine grain
- `example_03_heavy_grain.py` - Coarse, heavy grain
- `example_04_luminance_mode.py` - Color-preserving grain
- `example_05_rgb_mode.py` - Per-channel grain
- `example_06_blended_strength.py` - Partial grain blending
- `example_07_gpu_accelerated.py` - GPU rendering
## Technical Background
SilverGrain implements the algorithm described in:
> Alasdair Newson, Julie Delon, and Bruno Galerne. "Realistic Film Grain Rendering."
> *Image Processing On Line*, 7:165–183, 2017.
> https://doi.org/10.5201/ipol.2017.192
The original paper provides a C++ reference implementation. SilverGrain reimplements the ideas directly from the paper in Python with:
- Modern NumPy/Numba architecture
- Optional GPU acceleration via CUDA
- Simplified API for common use cases
- CLI tools for practical workflows
### Key Differences from the Paper
1. **Algorithm selection**: The paper describes both grain-wise and pixel-wise algorithms with automatic selection. SilverGrain currently implements only the pixel-wise approach, which provides good performance across all grain sizes and parallelizes efficiently on GPU.
2. **Default parameters**: Higher default sample counts (n_monte_carlo=800 vs paper's lower values) since GPU acceleration makes this practical.
3. **Color processing**: An explicit `mode` parameter for luminance-only vs per-channel grain.
4. **Blending**: A `strength` parameter for partial grain effects, not in original paper.
## Why Not Just Use Noise?
Simple approaches like adding Gaussian noise or overlaying pre-made grain textures have several problems:
1. **Wrong brightness relationship**: Real grain density increases with exposure (darker = more grain), but noise is uniform
2. **Resolution dependence**: Noise patterns don't scale correctly when resizing
3. **Statistical properties**: Real grain has spatial correlations (clumping) that random noise lacks
4. **Physical implausibility**: Noise distributions don't match actual photographic processes
SilverGrain's physics-based approach produces grain that:
- Scales correctly across brightness levels
- Remains consistent at any resolution
- Exhibits realistic spatial structure
- Matches actual photographic characteristics
## License
[AGPLv3](https://en.wikipedia.org/wiki/GNU_Affero_General_Public_License) - see [LICENSE](./LICENSE) file for details.
## Citation
If you use SilverGrain in research, please cite the original paper:
```bibtex
@article{newson2017realistic,
  title={Realistic Film Grain Rendering},
  author={Newson, Alasdair and Delon, Julie and Galerne, Bruno},
  journal={Image Processing On Line},
  volume={7},
  pages={165--183},
  year={2017},
  doi={10.5201/ipol.2017.192}
}
```
| text/markdown | kjerk@github | null | null | null | AGPL-3.0-or-later | film-grain, image-processing, gpu, monte-carlo, photography, cuda | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Topic :: Multimedia :: Graphics",
"Topic :: Scientific/Engineering :: Image Processing",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"natsort>=8.4.0",
"numba==0.62.1",
"numpy>=2.2.6",
"opencv-python",
"Pillow>=10.0.0",
"rich>=14.0.0",
"numba-cuda[cu12]==0.20.1; extra == \"gpu\"",
"pytest>=6.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/kjerk/silvergrain",
"Repository, https://github.com/kjerk/silvergrain",
"Issues, https://github.com/kjerk/silvergrain/issues",
"Changelog, https://github.com/kjerk/silvergrain/releases"
] | uv/0.9.8 | 2026-02-21T06:16:07.135226 | silvergrain-0.2.tar.gz | 50,768 | 7b/f1/d960caddc6b1e03e57bfdf42fc69df9893c3ab02e254883864897e4a042a/silvergrain-0.2.tar.gz | source | sdist | null | false | 0e73dac18faae7ab73f8313f69585186 | 01220a57c77fbff79748fd8f6a190aa1c709dce4283a18daf1a011cdc4ce5387 | 7bf1d960caddc6b1e03e57bfdf42fc69df9893c3ab02e254883864897e4a042a | null | [
"LICENSE"
] | 224 |
2.4 | g-gremlin-dynamics-mcp | 0.1.0 | Standalone Dynamics 365 / Dataverse MCP server launcher for g-gremlin. | # g-gremlin-dynamics-mcp
Standalone Dynamics 365 / Dataverse MCP launcher for g-gremlin.
This package provides a dedicated `g-gremlin-dynamics-mcp` command so MCP clients can connect to Dataverse tools without calling the broader `g-gremlin` CLI directly.
It delegates to:
- `g-gremlin mcp serve-dynamics` (read/analyze tools only)
- `g-gremlin mcp serve-dynamics --enable-writes` (write tools exposed; all apply calls still require `plan_hash`)
- Optional profile selection: `g-gremlin mcp serve-dynamics --profile <name>`
## Quickstart
```bash
pipx install g-gremlin
pipx install g-gremlin-dynamics-mcp
# Configure Dynamics credentials for g-gremlin
g-gremlin auth set dynamics
```
## Dataverse Requirements
For Dataverse MCP with non-Microsoft clients:
- Enable Dataverse MCP in Power Platform Admin Center (PPAC)
- Add each non-Microsoft MCP client to the Dataverse MCP allowed clients list in PPAC
- Install the local Dataverse MCP proxy tool if needed:
- `dotnet tool install --global Microsoft.PowerPlatform.Dataverse.MCP`
Billing note:
- External AI-agent access can consume Dataverse API/request capacity. Review licensing and limits before enabling autonomous write flows.
## Claude Desktop
```json
{
"mcpServers": {
"g-gremlin-dynamics": {
"command": "g-gremlin-dynamics-mcp"
}
}
}
```
To expose write tools:
```json
{
"mcpServers": {
"g-gremlin-dynamics": {
"command": "g-gremlin-dynamics-mcp",
"args": ["--enable-writes"]
}
}
}
```
## Cursor / Windsurf
Use the same MCP server command in your client config:
```json
{
"mcpServers": {
"g-gremlin-dynamics": {
"command": "g-gremlin-dynamics-mcp"
}
}
}
```
To expose write tools in Cursor/Windsurf, add:
```json
{
"mcpServers": {
"g-gremlin-dynamics": {
"command": "g-gremlin-dynamics-mcp",
"args": ["--enable-writes"]
}
}
}
```
## Health check
```bash
g-gremlin-dynamics-mcp --check
```
Profile-aware startup:
```bash
g-gremlin-dynamics-mcp --profile prod
```
## Development
```bash
git clone https://github.com/mikeheilmann1024/g-gremlin-dynamics-mcp
cd g-gremlin-dynamics-mcp
pip install -e ".[dev]"
pytest
```
## License
MIT
| text/markdown | FoundryOps | null | null | null | null | crm, dataverse, dynamics, g-gremlin, mcp, model-context-protocol | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"g-gremlin>=0.1.14",
"mcp>=1.0",
"packaging>=21.0",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/mikeheilmann1024/g-gremlin-dynamics-mcp",
"Issues, https://github.com/mikeheilmann1024/g-gremlin-dynamics-mcp/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-21T06:15:54.968561 | g_gremlin_dynamics_mcp-0.1.0.tar.gz | 5,656 | fb/19/989e9318bddd90ccf29476249bfdb2aecb33bb67b2f55047b0b728710078/g_gremlin_dynamics_mcp-0.1.0.tar.gz | source | sdist | null | false | 7f9b318ebd05c7e173ea22e9d90c495e | 3604d45e34f1d7f1aa384541bbf98d2842e1ea3159b67d2bdd46f643637d1cba | fb19989e9318bddd90ccf29476249bfdb2aecb33bb67b2f55047b0b728710078 | MIT | [
"LICENSE"
] | 242 |
2.4 | pmtvs-embedding | 0.3.2 | Time-delay embedding for dynamical systems analysis (4 functions, 1 Rust-accelerated) | # pmtvs-embedding
Time-delay embedding for dynamical systems analysis.
## Installation
```bash
pip install pmtvs-embedding
```
## Functions
- `delay_embedding(signal, dim, tau)` - Construct time-delay embedding matrix
- `optimal_embedding_dimension(signal, tau, max_dim, threshold)` - Cao's method
- `mutual_information_delay(signal, max_lag, n_bins)` - Find optimal delay
- `false_nearest_neighbors(signal, tau, max_dim)` - FNN method
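To make the shapes concrete, here is a minimal NumPy sketch of what a time-delay embedding computes (illustrative only — the packaged `delay_embedding` may differ in edge handling and options):

```python
import numpy as np

def delay_embed(signal, dim, tau):
    """Row i of the embedding is [x[i], x[i+tau], ..., x[i+(dim-1)*tau]]."""
    signal = np.asarray(signal)
    n_rows = len(signal) - (dim - 1) * tau  # rows that fit without overrunning the end
    return np.column_stack(
        [signal[j * tau : j * tau + n_rows] for j in range(dim)]
    )

x = np.arange(10)                  # 0..9
emb = delay_embed(x, dim=3, tau=2)  # shape (6, 3); first row [0, 2, 4], last [5, 7, 9]
```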
## Rust Acceleration
1 of 4 functions has Rust implementation (~12x speedup for delay_embedding).
Disable with `PMTVS_USE_RUST=0`.
## License
PolyForm Strict 1.0.0 with Additional Terms.
- **Students & individual researchers:** Free. Cite us.
- **Funded research labs (grants > $100K):** Academic Research License required. [Contact us](mailto:licensing@pmtvs.dev).
- **Commercial use:** Commercial License required. [Contact us](mailto:licensing@pmtvs.dev).
See [LICENSE](LICENSE) for full terms.
| text/markdown; charset=UTF-8; variant=GFM | pmtvs contributors | null | null | null | PolyForm-Strict-1.0.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Rust",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.20",
"pytest>=7.0; extra == \"dev\"",
"pytest-benchmark>=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/pmtvs/pmtvs",
"Repository, https://github.com/pmtvs/pmtvs"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T06:15:38.890190 | pmtvs_embedding-0.3.2.tar.gz | 12,101 | 9a/f6/db774c21599528edf335cf2f9bbfedf742df4cd142276a802bbd32264414/pmtvs_embedding-0.3.2.tar.gz | source | sdist | null | false | ab75257a6afd283ca4fd32ab63a5b4c5 | a9f08789ccad98acfc4bbe985e6102728b4432ebc03867c7c47ebddceff58b98 | 9af6db774c21599528edf335cf2f9bbfedf742df4cd142276a802bbd32264414 | null | [] | 154 |
2.4 | pmtvs-distance | 0.3.2 | Distance metrics for signal comparison (4 functions, 3 Rust-accelerated) | # pmtvs-distance
Distance metrics for signal comparison.
## Installation
```bash
pip install pmtvs-distance
```
## Functions
- `euclidean_distance(x, y)` - Euclidean (L2) distance
- `cosine_distance(x, y)` - Cosine distance (1 - cosine similarity)
- `cosine_similarity(x, y)` - Cosine similarity
- `manhattan_distance(x, y)` - Manhattan (L1) distance
- `dtw_distance(x, y, window=None)` - Dynamic Time Warping distance
- `earth_movers_distance(x, y)` - Earth mover's (Wasserstein) distance
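For intuition about the least obvious of these, here is a compact pure-Python/NumPy DTW sketch (unconstrained, absolute-difference local cost — the packaged `dtw_distance` additionally supports a `window` constraint and is Rust-accelerated):

```python
import numpy as np

def dtw(x, y):
    """Unconstrained dynamic time warping distance (illustrative, O(n*m))."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible predecessor paths.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because warping can repeat samples, `dtw([1, 2, 3], [1, 2, 2, 3])` is 0 even though the sequences have different lengths — the property that makes DTW useful for misaligned signals.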
## Rust Acceleration
3 of 6 functions have Rust implementations (~8x speedup).
Disable with `PMTVS_USE_RUST=0`.
## License
PolyForm Strict 1.0.0 with Additional Terms.
- **Students & individual researchers:** Free. Cite us.
- **Funded research labs (grants > $100K):** Academic Research License required. [Contact us](mailto:licensing@pmtvs.dev).
- **Commercial use:** Commercial License required. [Contact us](mailto:licensing@pmtvs.dev).
See [LICENSE](LICENSE) for full terms.
| text/markdown; charset=UTF-8; variant=GFM | pmtvs contributors | null | null | null | PolyForm-Strict-1.0.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Rust",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.20",
"pytest>=7.0; extra == \"dev\"",
"pytest-benchmark>=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/pmtvs/pmtvs",
"Repository, https://github.com/pmtvs/pmtvs"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T06:15:37.913566 | pmtvs_distance-0.3.2.tar.gz | 11,404 | b8/60/861694093599d601f416b36abb972a59d9f0705cf968e40b657f438d7519/pmtvs_distance-0.3.2.tar.gz | source | sdist | null | false | 7ac8ad9b9e723ea40979ee17d43e2a79 | a2b5fd5b319486d89c5573900dda43c1b273184a22867e201d0a1f4acb37a463 | b860861694093599d601f416b36abb972a59d9f0705cf968e40b657f438d7519 | null | [] | 154 |
2.4 | ralph-runner | 0.1.0 | Outer-loop orchestrator for Claude Code — run iterative, self-improving coding sessions with automatic progress tracking, verification, and cost reporting. | # ralph-runner
**Outer-loop orchestrator for Claude Code** — run iterative, self-improving coding sessions with automatic progress tracking, verification, and cost reporting.
ralph-runner spawns repeated Claude Code sessions in a loop, feeding each one the accumulated progress from prior iterations. Claude works on your task, writes progress notes, and ralph-runner verifies the result, tracks costs, and decides whether to continue or stop. The result is autonomous, multi-hour coding runs that converge on a solution.
## How It Works
```
┌─────────────────────────────────────────────────┐
│ ralph-runner │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Iter 1 │──▶│ Iter 2 │──▶│ Iter N │ │
│ │ Claude │ │ Claude │ │ Claude │ │
│ │ Session │ │ Session │ │ Session │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌──────────────────────────────────────────┐ │
│ │ progress.md │ │
│ │ (shared memory across iterations) │ │
│ └──────────────────────────────────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌──────────────────────────────────────────┐ │
│ │ verify command (optional) │ │
│ │ e.g. make test, pytest │ │
│ └──────────────────────────────────────────┘ │
└─────────────────────────────────────────────────┘
```
Each iteration is a fresh Claude Code session that reads the progress file, continues the work, and updates the file. Between iterations, ralph-runner runs your verification command, tracks token usage and costs, detects timeouts and crashes, and enforces minimum iteration counts before accepting completion.
## Installation
### As a CLI tool
```bash
pip install ralph-runner
```
Or install from source:
```bash
git clone https://github.com/gzxultra/ralph-runner.git
cd ralph-runner
pip install -e .
```
### As a Claude Code plugin
Install directly from the repo — no pip needed for plugin use:
```bash
# Add the marketplace
/plugin marketplace add gzxultra/ralph-runner
# Install the plugin
/plugin install ralph-runner@ralph-runner
```
Or install from a local clone:
```bash
claude plugin add ./ralph-runner
# or
claude --plugin-dir ./ralph-runner
```
### Prerequisites
- **Python 3.11+**
- **Claude Code CLI** (`claude`) installed and authenticated — see [Claude Code docs](https://docs.anthropic.com/en/docs/claude-code)
## Quick Start
### From the command line
```bash
# Basic run — 10 iterations on a task
ralph-runner --prompt "Refactor the auth module to use JWT tokens"
# With verification
ralph-runner --prompt "Fix the failing tests in src/api/" \
--verify "pytest tests/ -x"
# Shorter run with internet access
ralph-runner --prompt "Add rate limiting to the API" \
--iterations 5 --max-iterations 15 --internet
```
### From within Claude Code (plugin)
```
/ralph-runner:run Refactor the auth module to use JWT tokens
/ralph-runner:run --verify "pytest" Fix the failing tests
/ralph-runner:status
/ralph-runner:resume
```
## Usage
```
ralph-runner --prompt PROMPT [OPTIONS]
```
### Required
| Flag | Description |
|------|-------------|
| `--prompt TEXT` | The task description for Claude to work on |
### Options
| Flag | Default | Description |
|------|---------|-------------|
| `--iterations N` | `10` | Minimum iterations before completion is accepted |
| `--max-iterations N` | `50` | Hard stop after this many iterations |
| `--verify CMD` | _(none)_ | Shell command to run after each iteration (exit 0 = pass) |
| `--mode {afk,hitl}` | `afk` | `afk` = fully autonomous, `hitl` = pause between iterations |
| `--model NAME` | `sonnet` | Claude model to use |
| `--timeout SECS` | `900` | Hard timeout per iteration (15 min) |
| `--idle-timeout SECS` | `120` | Kill iteration if no output for this long |
| `--internet` | off | Enable web access for Claude sessions |
| `--plan / --no-plan` | `--plan` | Create a plan.md file in the first iteration |
| `--resume DIR` | _(none)_ | Resume a previous run from its output directory |
| `--debug` | off | Save prompts and enable verbose stderr logging |
## Modes
### AFK Mode (default)
Fully autonomous. Claude runs with `bypassPermissions` — no confirmation prompts. Ideal for overnight or background runs on trusted codebases.
```bash
ralph-runner --prompt "Migrate the database schema to v2" --mode afk
```
### HITL Mode (Human-in-the-Loop)
Pauses after each iteration so you can review progress before continuing. Press Enter to continue or `q` to quit.
```bash
ralph-runner --prompt "Redesign the dashboard layout" --mode hitl
```
## Verification
The `--verify` flag runs a command after each iteration. If it exits 0, the iteration passes; otherwise it fails. ralph-runner tracks pass/fail trends and blocks completion if verification fails.
```bash
# Run tests
ralph-runner --prompt "Fix all type errors" --verify "mypy src/"
# Run a test suite
ralph-runner --prompt "Implement the search feature" --verify "pytest tests/test_search.py"
# Chain multiple checks
ralph-runner --prompt "Clean up the codebase" \
--verify "ruff check src/ && mypy src/ && pytest tests/"
```
The verification trend is displayed as a convergence sequence:
```
✓✓✗✓✓ (4/5 passed, converging)
```
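The trend string itself is easy to derive from a list of pass/fail booleans; a hypothetical sketch (not ralph-runner's actual implementation — function name and format are illustrative):

```python
def trend_string(results):
    """Render a pass/fail history like the display above (hypothetical helper)."""
    marks = "".join("✓" if ok else "✗" for ok in results)
    passed = sum(results)
    return f"{marks} ({passed}/{len(results)} passed)"

line = trend_string([True, True, False, True, True])  # "✓✓✗✓✓ (4/5 passed)"
```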
## Output
All run artifacts are saved to `~/.ralph-runner/runs/<timestamp>-<prefix>/`:
| File | Description |
|------|-------------|
| `progress.md` | Cumulative progress notes from all iterations |
| `plan.md` | Task plan created in the first iteration |
| `stats.json` | Machine-readable stats (costs, tokens, timing) |
| `summary.md` | LLM-generated summary of accomplishments |
| `iter-N.jsonl` | Raw Claude stream output for iteration N |
| `iter-N.txt` | Claude's text output for iteration N |
### Resuming
If a run is interrupted, resume it:
```bash
ralph-runner --resume ~/.ralph-runner/runs/20260220-143000-fix-auth/ \
--prompt "Fix the auth module"
```
## Live Display
ralph-runner shows a rich terminal display during execution:
- **Spinner** during Claude connection and MCP initialization
- **Live tool calls** — see what Claude is doing in real time (reading files, running commands, editing code)
- **Heartbeat** every 30 seconds showing elapsed time
- **Iteration summary** with status, duration, cost, and token counts
- **Verification results** with pass/fail trends
- **Final summary** with total cost, token breakdown by model, and an LLM-generated accomplishment summary
## Claude Code Plugin
ralph-runner ships as a **Claude Code plugin**, so you can invoke it directly from within Claude Code sessions. The repository doubles as both a pip-installable Python package and a Claude Code plugin marketplace.
### Install from the marketplace
```bash
# Add the marketplace (one-time)
/plugin marketplace add gzxultra/ralph-runner
# Install the plugin
/plugin install ralph-runner@ralph-runner
# Update to latest version
/plugin marketplace update
```
### Install from local clone
```bash
# Option A: add as a plugin
claude plugin add ./ralph-runner
# Option B: load for a single session
claude --plugin-dir ./ralph-runner
```
### Available skills and commands
| Command | Description |
|---------|-------------|
| `/ralph-runner:run <task>` | Launch an outer-loop orchestration session |
| `/ralph-runner:status` | Check progress, costs, and results of runs |
| `/ralph-runner:resume` | Resume an interrupted run |
### Example plugin usage
```
> /ralph-runner:run Fix all failing tests and get to 100% pass rate
> /ralph-runner:run --verify "pytest" Refactor the database layer
> /ralph-runner:status
> /ralph-runner:resume
```
## Architecture
```
ralph-runner/
├── .claude-plugin/
│ ├── plugin.json # Plugin manifest (name, version, metadata)
│ └── marketplace.json # Marketplace catalog for distribution
├── skills/
│ ├── run/SKILL.md # Skill: launch an orchestration session
│ └── status/SKILL.md # Skill: inspect run progress and costs
├── commands/
│ ├── run.md # Slash command: /ralph-runner:run
│ ├── status.md # Slash command: /ralph-runner:status
│ └── resume.md # Slash command: /ralph-runner:resume
├── settings.json # Default permission grants
├── src/ralph_runner/
│ ├── cli.py # CLI entry point and main orchestration loop
│ ├── runner.py # Core iteration engine — spawns and monitors Claude
│ ├── prompt.py # Prompt construction with progress injection
│ ├── verify.py # Verification command runner and trend tracking
│ ├── stats.py # Stats, progress files, and summary generation
│ ├── display.py # Terminal colors, spinners, formatting
│ ├── tools.py # Tool-call description formatting
│ └── models.py # Data classes (IterationResult)
├── tests/
│ └── test_basics.py # Unit tests
├── pyproject.toml # Python package configuration
├── LICENSE # MIT
└── README.md
```
## License
MIT — see [LICENSE](LICENSE).
| text/markdown | ralph-runner contributors | null | null | null | null | ai, automation, claude, claude-code, coding-agent, orchestrator, outer-loop | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Testing",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/gzxultra/ralph-runner",
"Repository, https://github.com/gzxultra/ralph-runner",
"Issues, https://github.com/gzxultra/ralph-runner/issues"
] | twine/6.2.0 CPython/3.11.0rc1 | 2026-02-21T06:14:22.157104 | ralph_runner-0.1.0.tar.gz | 4,696,313 | 36/5d/4d19eb2580c199deec212866dc55fc5bd9340180da9e15712f76e2f8d527/ralph_runner-0.1.0.tar.gz | source | sdist | null | false | 7d94d37db011c01bf80c1e70137d0a1c | 5d3226cd1ce42ddb15a9d3dda1928239cca674daa0167b36d29fc7594d862df5 | 365d4d19eb2580c199deec212866dc55fc5bd9340180da9e15712f76e2f8d527 | MIT | [
"LICENSE"
] | 213 |
2.4 | reduced-3dgs | 1.14.1 | Refactored code for the paper "Reducing the Memory Footprint of 3D Gaussian Splatting" | # Reduced-3DGS: Memory Footprint Reduction for 3D Gaussian Splatting (Python Package Version)
This repository contains the **refactored Python code for [Reduced-3DGS](https://github.com/graphdeco-inria/reduced-3dgs)**. It is forked from commit [13e7393af8ecd83d69197dec7e4c891b333a7c1c](https://github.com/graphdeco-inria/reduced-3dgs/tree/13e7393af8ecd83d69197dec7e4c891b333a7c1c). The original code has been **refactored to follow the standard Python package structure**, while **maintaining the same algorithms as the original version**.
## Features
* [x] Code organized as a standard Python package
* [x] Pruning
* [x] SH Culling
* [x] Vector quantization by K-Means
## Prerequisites
* [PyTorch](https://pytorch.org/) (v2.4 or higher recommended)
* [CUDA Toolkit](https://developer.nvidia.com/cuda-12-4-0-download-archive) (12.4 recommended, should match with PyTorch version)
* (Optional) [cuML](https://github.com/rapidsai/cuml) for faster vector quantization
(Optional) If you have trouble with [`gaussian-splatting`](https://github.com/yindaheng98/gaussian-splatting), try to install it from source:
```sh
pip install wheel setuptools
pip install --upgrade git+https://github.com/yindaheng98/gaussian-splatting.git@master --no-build-isolation
```
## PyPI Install
```shell
pip install --upgrade reduced-3dgs
```
or
build latest from source:
```shell
pip install wheel setuptools
pip install --upgrade git+https://github.com/yindaheng98/reduced-3dgs.git@main --no-build-isolation
```
### Development Install
```shell
git clone --recursive https://github.com/yindaheng98/reduced-3dgs
cd reduced-3dgs
pip install scikit-learn
pip install --target . --upgrade --no-deps .
```
## Quick Start
1. Download the dataset (T&T+DB COLMAP dataset, size 650MB):
```shell
wget https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/input/tandt_db.zip -P ./data
unzip data/tandt_db.zip -d data/
```
2. Train 3DGS with densification, pruning, and SH culling (same as original 3DGS)
```shell
python -m reduced_3dgs.train -s data/truck -d output/truck -i 30000 --mode densify-prune-shculling
```
3. Render 3DGS
```shell
python -m gaussian_splatting.render -s data/truck -d output/truck -i 30000 --mode densify
```
> 💡 Note: This repository does not include code for creating datasets.
> If you wish to create your own dataset, please refer to [InstantSplat](https://github.com/yindaheng98/InstantSplat) or use [convert.py](https://github.com/graphdeco-inria/gaussian-splatting/blob/main/convert.py).
> 💡 See [.vscode/launch.json](.vscode/launch.json) for more examples. Refer to [reduced_3dgs.train](reduced_3dgs/train.py) and [gaussian_splatting.render](gaussian_splatting/render.py) for full options.
## API Usage
This project heavily depends on [`gaussian-splatting`](https://github.com/yindaheng98/gaussian-splatting) and only provides some enhanced Trainers and Gaussian models. Therefore, before starting, please refer to [`gaussian-splatting`](https://github.com/yindaheng98/gaussian-splatting) to understand the key concepts about Gaussian models, Dataset, Trainers, and how to use them.
### Pruning
`BasePruningTrainer` prunes the trainer at specified training steps.
```python
from reduced_3dgs.pruning import BasePruningTrainer
trainer = BasePruningTrainer(
gaussians,
scene_extent=dataset.scene_extent(),
dataset=dataset,
prune_from_iter=1000,
prune_until_iter=15000,
prune_interval=100,
... # see reduced_3dgs/pruning/trainer.py for full options
)
```
`BasePrunerInDensifyTrainer` integrates pruning with densification.
```python
from reduced_3dgs.pruning import BasePrunerInDensifyTrainer
trainer = BasePrunerInDensifyTrainer(
gaussians,
scene_extent=dataset.scene_extent(),
dataset=dataset,
mercy_from_iter=3000,
mercy_until_iter=20000,
mercy_interval=100,
densify_from_iter=500,
densify_until_iter=15000,
densify_interval=100,
densify_grad_threshold=0.0002,
densify_opacity_threshold=0.005,
prune_from_iter=1000,
prune_until_iter=15000,
prune_interval=100,
prune_screensize_threshold=20,
... # see reduced_3dgs/pruning/trainer.py for full options
)
```
### SH Culling
`VariableSHGaussianModel` is the 3DGS model that assigns each 3D Gaussian a different SH degree.
```python
from reduced_3dgs.shculling import VariableSHGaussianModel
gaussians = VariableSHGaussianModel(sh_degree).to(device)
```
`BaseSHCullingTrainer` culls the SH degree of each 3D Gaussian at specified training steps.
```python
from reduced_3dgs.shculling import BaseSHCullingTrainer
trainer = BaseSHCullingTrainer(
gaussians,
scene_extent=dataset.scene_extent(),
dataset=dataset,
cull_at_steps=[15000],
... # see reduced_3dgs/shculling/trainer.py for full options
)
```
### Quantization
`VectorQuantizer` is the basic quantization operator:
```python
gaussians.load_ply("output/truck")
from reduced_3dgs.quantization import VectorQuantizer
quantizer = VectorQuantizer(gaussians, num_clusters=256)
quantizer.save_quantized("output/truck-quantized")
quantizer.load_quantized("output/truck-quantized")
```
`BaseVectorQuantizeTrainer` quantizes the model at specified training steps.
```python
from reduced_3dgs.quantization import BaseVectorQuantizeTrainer
trainer = BaseVectorQuantizeTrainer(
gaussians,
spatial_lr_scale=dataset.scene_extent(),
dataset=dataset,
num_clusters=256,
quantizate_from_iter=5000,
quantizate_until_iter=30000,
quantizate_interval=1000,
... # see reduced_3dgs/quantization/trainer.py for full options
)
```
`VectorQuantizeTrainerWrapper` is a wrapper that integrates the quantization step into any Trainer:
```python
trainer = VectorQuantizeTrainerWrapper(
trainer,
num_clusters=num_clusters,
num_clusters_rotation_re=num_clusters_rotation_re,
num_clusters_rotation_im=num_clusters_rotation_im,
num_clusters_opacity=num_clusters_opacity,
num_clusters_scaling=num_clusters_scaling,
num_clusters_features_dc=num_clusters_features_dc,
num_clusters_features_rest=num_clusters_features_rest,
quantizate_from_iter=quantizate_from_iter,
quantizate_until_iter=quantizate_until_iter,
quantizate_interval=quantizate_interval,
)
if load_quantized:
trainer.quantizer.load_quantized(load_quantized)
# see reduced_3dgs/train.py
```
#### Quantized PLY Format
> 💡 See [reduced_3dgs/quantization/quantizer.py](reduced_3dgs/quantization/quantizer.py) for the code to save and load quantized PLY files.
The `save_quantized` function will produce a point cloud stored in a `.ply` format.
The layout of this file is one row per primitive, containing a series of parameters in the `vertex` element, namely
* 3 floats for position (`x`,`y`,`z`)
* 3 floats for normal (`nx`,`ny`,`nz`)
* 1 uint for the real part of the rotation quaternion (`rot_re`)
* 1 uint for the imaginary part of the rotation quaternion (`rot_im`)
* 1 uint for opacity (`opacity`)
* 3 uint for scaling (`scale`)
* 1 uint for DC color (`f_dc`)
* 3 uint for SH coefficients (`f_rest_0`, `f_rest_1`, `f_rest_2`)
The codebook quantization introduces some additional changes. For different parameters, you can set different lengths of the codebook. Each attribute's codebook will be stored in different elements. The codebooks are ordered as follows:
* `codebook_rot_re` element contains 1 float for the real part of the rotation quaternion (`rot_re`)
* `codebook_rot_im` element contains 3 floats for the 3 imaginary parts of the rotation quaternion (`rot_im_0`, `rot_im_1`, `rot_im_2`)
* `codebook_opacity` element contains 1 float for the opacity (`opacity`)
* `codebook_scaling` element contains 3 floats for the 3 parameters of scale (`scaling_0`, `scaling_1`, `scaling_2`)
* `codebook_f_dc` element contains 3 floats for the 3 DC color parameters (`f_dc_0`, `f_dc_1`, `f_dc_2`)
* 3 elements `codebook_f_rest_<SH degree>` contains floats for SH coefficients of 3 SH degrees (`f_rest_<SH degree>_<SH coefficients at this degree>`).
SH degree 1 has 3 coefficients `f_rest_0_<0,1,2>` in `codebook_f_rest_0`,
SH degree 2 has 5 coefficients `f_rest_1_<0,1,2,3,4>` in `codebook_f_rest_1`,
SH degree 3 has 7 coefficients `f_rest_2_<0,1,2,3,4,5,6>` in `codebook_f_rest_2`.
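Decoding such a file is a per-attribute table lookup: each primitive stores a `uint` index, and the matching codebook element stores the float entries. A minimal NumPy sketch with hypothetical arrays (mirroring the `codebook_opacity` element and the per-primitive `opacity` field, not the package's actual loader):

```python
import numpy as np

# Hypothetical data: a 4-entry opacity codebook plus one index per primitive.
codebook_opacity = np.array([0.1, 0.35, 0.7, 0.95], dtype=np.float32)
opacity_index = np.array([2, 0, 3, 3, 1], dtype=np.uint32)

# Dequantization is a fancy-indexing lookup into the codebook.
opacity = codebook_opacity[opacity_index]  # one float32 per primitive
```

Multi-column codebooks (e.g. `codebook_scaling` with three floats per entry) work the same way, with the lookup returning one row per primitive.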
# Reducing the Memory Footprint of 3D Gaussian Splatting
Panagiotis Papantonakis, Georgios Kopanas, Bernhard Kerbl, Alexandre Lanvin, George Drettakis<br>
| [Webpage](https://repo-sam.inria.fr/fungraph/reduced_3dgs/) | [Full Paper](https://repo-sam.inria.fr/fungraph/reduced_3dgs/reduced_3DGS_i3d.pdf) | [Datasets (TODO)](TODO) | [Video](https://youtu.be/EnKE-d7eMds?si=xWElEPf4JgwOAmbB&t=48) | [Other GRAPHDECO Publications](http://www-sop.inria.fr/reves/publis/gdindex.php) | [FUNGRAPH project page](https://fungraph.inria.fr) | <br>

This repository contains the code of the paper "Reducing the Memory Footprint of 3D Gaussian Splatting", which can be found [here](https://repo-sam.inria.fr/fungraph/reduced_3dgs/).
We also provide the configurations to train the models mentioned in the paper,
as well as the evaluation script that produces the results.
<a href="https://www.inria.fr/"><img height="100" src="assets/logo_inria.png"> </a>
<a href="https://univ-cotedazur.eu/"><img height="100" src="assets/logo_uca.png"> </a>
<a href="https://team.inria.fr/graphdeco/"> <img style="width:90%; padding-right: 15px;" src="assets/logo_graphdeco.png"></a>
Abstract: *3D Gaussian splatting provides excellent visual quality for novel view synthesis, with fast training and real-time rendering; unfortunately, the memory requirements of this method for storage and transmission are
unreasonably high. We first analyze the reasons for this, identifying three main areas where storage can
be reduced: the number of 3D Gaussian primitives used to represent a scene, the number of coefficients for
the spherical harmonics used to represent directional radiance, and the precision required to store Gaussian
primitive attributes. We present a solution to each of these issues. First, we propose an efficient, resolution-aware primitive pruning approach, reducing the primitive count by half. Second, we introduce an adaptive
adjustment method to choose the number of coefficients used to represent directional radiance for each
Gaussian primitive, and finally a codebook-based quantization method, together with a half-float representation
for further memory reduction. Taken together, these three components result in a ×27 reduction in overall size
on disk on the standard datasets we tested, along with a ×1.7 speedup in rendering speed. We demonstrate
our method on standard datasets and show how our solution results in significantly reduced download times
when using the method on a mobile device.*
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title">BibTeX</h2>
<pre><code>@Article{papantonakisReduced3DGS,
author = {Papantonakis, Panagiotis and Kopanas, Georgios and Kerbl, Bernhard and Lanvin, Alexandre and Drettakis, George},
title = {Reducing the Memory Footprint of 3D Gaussian Splatting},
journal = {Proceedings of the ACM on Computer Graphics and Interactive Techniques},
number = {1},
volume = {7},
month = {May},
year = {2024},
url = {https://repo-sam.inria.fr/fungraph/reduced_3dgs/}
}</code></pre>
</div>
</section>
| text/markdown | yindaheng98 | yindaheng98@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | https://github.com/yindaheng98/reduced-3dgs | null | null | [] | [] | [] | [
"gaussian-splatting>=2.3.0",
"scikit-learn"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.11 | 2026-02-21T06:14:20.957198 | reduced_3dgs-1.14.1-cp310-cp310-win_amd64.whl | 1,563,240 | 9f/f7/2e90943ea365ef855c793ab089f38480f5999a731cbee073c74b6070d8c5/reduced_3dgs-1.14.1-cp310-cp310-win_amd64.whl | cp310 | bdist_wheel | null | false | b1d79ac4f3674dd05b7bf4df5181ca19 | d2c3391963861c0d6344971f4fcbdf3246e5b6f981ff4c0db726cb8f370dc0e7 | 9ff72e90943ea365ef855c793ab089f38480f5999a731cbee073c74b6070d8c5 | null | [
"LICENSE.md"
] | 299 |
2.4 | drp-cli | 0.2.14 | Drop text and files from the command line — get a link instantly. | # drp 
Drop files or paste text, get a link instantly.
**[Live →](https://drp.vicnas.me)**
```bash
pipx install drp-cli
drp setup && drp up "hello world"
```
## Deploy
> ⚠️ Self-hosting for personal or internal use only — see [LICENSE](LICENSE).
[](https://railway.com?referralCode=ZIdvo-)
1. Fork → Railway → connect repo → add PostgreSQL
2. Set env vars (see below)
3. Start command:
```
python manage.py collectstatic --noinput && python manage.py migrate && gunicorn project.wsgi --bind 0.0.0.0:$PORT --workers 17 --worker-class gthread --threads 2
```
4. Create superuser via Railway shell: `python manage.py createsuperuser`
## Environment variables
| Variable | Required | Description |
|---|---|---|
| `SECRET_KEY` | ✓ | Django secret key |
| `DOMAIN` | ✓ | e.g. `hello.me` |
| `DB_URL` | ✓ | PostgreSQL connection string (Railway injects this) |
| `B2_KEY_ID` | ✓ | Backblaze B2 application key ID |
| `B2_APP_KEY` | ✓ | Backblaze B2 application key secret |
| `B2_BUCKET_NAME` | ✓ | e.g. `drp-files` |
| `B2_ENDPOINT_URL` | ✓ | e.g. `https://s3.us-east-005.backblazeb2.com` |
| `ADMIN_EMAIL` | — | Shown on error pages |
| `RESEND_API_KEY` | — | Transactional email via Resend |
| `DEFAULT_FROM_EMAIL` | — | Defaults to `noreply@{DOMAIN}` |
| `LEMONSQUEEZY_API_KEY` | — | Billing via Lemon Squeezy |
| `LEMONSQUEEZY_SIGNING_SECRET` | — | Webhook signature verification |
| `LEMONSQUEEZY_STORE_ID` | — | Lemon Squeezy store ID |
| `LEMONSQUEEZY_STARTER_VARIANT_ID` | — | Starter plan variant ID |
| `LEMONSQUEEZY_PRO_VARIANT_ID` | — | Pro plan variant ID |
| `ADSENSE_CLIENT` | — | e.g. `ca-pub-xxxxxxxxxxxxxxxx` — enables AdSense |
| `ADSENSE_SLOT` | — | AdSense slot ID |
| `DEBUG` | — | `True` only for local dev, never in production |
## Run locally
```bash
pip install -r requirements.txt
python manage.py migrate
python manage.py runserver
```
Set env vars in your shell or a `.env` file:
```bash
SECRET_KEY=any-random-string
DEBUG=True
# B2 vars required for file uploads/downloads
B2_KEY_ID=...
B2_APP_KEY=...
B2_BUCKET_NAME=drp-files
B2_ENDPOINT_URL=https://s3.us-east-005.backblazeb2.com
```
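For local runs without extra tooling, a `.env` file like the one above can be loaded with a few lines of standard-library Python. This is a minimal sketch; the project lists `python-dotenv` in its dev extras, which handles quoting, export prefixes, and other edge cases properly:

```python
import os

def load_dotenv_minimal(path=".env"):
    """Naively load KEY=VALUE lines into os.environ, skipping blanks and comments.

    Mirrors dotenv's default behaviour of not overriding variables
    that are already set in the environment.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Call `load_dotenv_minimal()` before `python manage.py runserver` so settings pick the values up from the environment.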
## Plans
| | Free | Starter ($3/mo) | Pro ($8/mo) |
|---|---|---|---|
| Max file | 200 MB | 1 GB | 5 GB |
| Storage | — | 5 GB | 20 GB |
| Locked drops | ✗ | ✓ | ✓ |
| Renewable | ✗ | ✓ | ✓ |
## License
Server: source-available, personal/internal use only.
See [LICENSE](LICENSE).
CLI (`cli/`): MIT.
| text/markdown | Vic Nas | null | null | null | MIT | cli, file-sharing, pastebin, drops | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests>=2.28",
"argcomplete>=3.1; extra == \"completion\"",
"pytest; extra == \"dev\"",
"pytest-django; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"django>=5.0; extra == \"dev\"",
"python-dotenv; extra == \"dev\"",
"dj-database-url; extra == \"dev\"",
"whitenoise; extra == \"dev\"",
"gunicorn; extra == \"dev\"",
"requests; extra == \"dev\"",
"psycopg2-binary>=2.9; extra == \"dev\"",
"resend; extra == \"dev\"",
"boto3; extra == \"dev\"",
"markdown; extra == \"dev\"",
"argcomplete>=3.1; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://drp.vicnas.me",
"Repository, https://github.com/vicnasdev/drp",
"Issues, https://github.com/vicnasdev/drp/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:13:40.658550 | drp_cli-0.2.14.tar.gz | 45,118 | 37/4a/11134d711b7a86b7675935875be0d30cd20139b842f8fb73c577364deee7/drp_cli-0.2.14.tar.gz | source | sdist | null | false | 0677ede74fd8e89cabe8600b6fc2de02 | 1119059985215f1a45b6b7769719dbc53297c0f576050061c629687e88e198c3 | 374a11134d711b7a86b7675935875be0d30cd20139b842f8fb73c577364deee7 | null | [
"LICENSE"
] | 214 |
2.4 | seqmat | 0.1.49 | Lightning-fast gene manipulation and analysis library. | # SeqMat
**Lightning-fast genomic sequence matrix library with mutation tracking**
SeqMat is a comprehensive Python library for genomic sequence analysis, providing efficient tools for working with genes, transcripts, and genomic sequences. Features include full mutation tracking, splicing analysis, and high-performance sequence manipulation.
## Key Features
- **Fast Sequence Operations**: Vectorized genomic sequence matrix with efficient slicing and manipulation
- **Comprehensive Mutation Tracking**: SNPs, insertions, deletions with full history and conflict detection
- **Gene & Transcript Analysis**: Load and analyze gene structures, exons, introns, and splice sites
- **Conservation Integration**: Built-in support for conservation scores and sequence analysis
- **Multi-organism Support**: Human (hg38) and mouse (mm39) genome support
- **Data Inspection Tools**: Comprehensive utilities to explore available data, gene counts, and biotypes
- **Command-line Interface**: Full CLI for data management and inspection
- **Expression Data**: Integration with GTEx tissue expression data
- **Flexible Installation**: Core functionality with optional bioinformatics dependencies
## Installation
### Basic Installation
```bash
pip install seqmat
```
### With Bioinformatics Features
```bash
# For protein translation
pip install seqmat[bio]
# For GTF parsing and genomics data setup
pip install seqmat[genomics]
# Install everything
pip install seqmat[all]
```
### From Source
```bash
git clone https://github.com/yourusername/seqmat.git
cd seqmat
pip install -e .
```
## Quick Start
### 1. Basic Sequence Operations
```python
from seqmat import SeqMat
# Create a sequence
seq = SeqMat("ATCGATCGATCG", name="my_sequence")
print(f"Length: {len(seq)}") # Length: 12
print(f"Sequence: {seq.seq}") # Sequence: ATCGATCGATCG
# Apply mutations
seq.apply_mutations([
(3, "C", "G"), # SNP: C->G at position 3
(6, "-", "AAA"), # Insertion: insert AAA at position 6
(10, "TC", "-") # Deletion: delete TC at position 10
])
print(f"Mutated: {seq.seq}")
print(f"Mutations: {len(seq.mutations)}")
```
### 2. Working with Genomic Coordinates
```python
import numpy as np
# Create sequence with genomic coordinates
indices = np.arange(1000, 1012) # Positions 1000-1011
seq = SeqMat("ATCGATCGATCG", indices=indices, name="chr1:1000-1011")
# Slice by genomic position
subseq = seq[1003:1008] # Extract positions 1003-1007
print(f"Subsequence: {subseq.seq}")
# Access single positions
base = seq[1005] # Get base at position 1005
print(f"Base at 1005: {base['nt'].decode()}")
```
### 3. Advanced Mutation Operations
```python
# Multiple mutations with validation
mutations = [
(1002, "T", "A"), # SNP
(1005, "-", "GGG"), # Insertion
(1008, "GAT", "-"), # Deletion
(1003, "CG", "AT") # Complex substitution
]
seq.apply_mutations(mutations)
# Mutation history
for mut in seq.mutations:
print(f"{mut['type']} at {mut['pos']}: {mut['ref']} -> {mut['alt']}")
# Check for conflicts (automatically validated)
conflicting_muts = [
(1003, "C", "T"),
(1003, "G", "A") # Overlaps with previous
]
# This will warn about conflicts and skip invalid mutations
```
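The conflict detection described above can be pictured as an interval-overlap test over the reference spans each mutation touches. This is a sketch of the idea under simplified assumptions (insertions anchor to a single position), not SeqMat's actual implementation:

```python
def mutation_span(pos, ref):
    """Reference positions a mutation occupies; insertions ('-') touch one anchor position."""
    length = len(ref) if ref != "-" else 1
    return pos, pos + length - 1

def find_conflicts(mutations):
    """Return index pairs of mutations whose reference spans overlap."""
    spans = [mutation_span(pos, ref) for pos, ref, _alt in mutations]
    conflicts = []
    for i in range(len(spans)):
        for j in range(i + 1, len(spans)):
            (s1, e1), (s2, e2) = spans[i], spans[j]
            if s1 <= e2 and s2 <= e1:  # closed intervals overlap
                conflicts.append((i, j))
    return conflicts

# The two mutations at position 1003 above collide:
print(find_conflicts([(1003, "C", "T"), (1003, "G", "A")]))  # [(0, 1)]
```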
### 4. Sequence Transformations
```python
# Complement and reverse complement
complement = seq.complement()
rev_comp = seq.clone()
rev_comp.reverse_complement()
print(f"Original: {seq.seq}")
print(f"Complement: {complement.seq}")
print(f"Rev comp: {rev_comp.seq}")
# Remove regions (e.g., splice out introns)
introns = [(1003, 1005), (1008, 1009)]
spliced = seq.remove_regions(introns)
print(f"Spliced: {spliced.seq}")
```
### 5. Working with Genes (requires genomics data setup)
```python
from seqmat import Gene, setup_genomics_data
# One-time setup (downloads reference data)
setup_genomics_data("/path/to/data", organism="hg38")
# Load a gene
kras = Gene.from_file("KRAS", organism="hg38")
print(kras) # Gene: KRAS, ID: ENSG00000133703, Chr: 12, Transcripts: 8
# Access transcripts
primary = kras.transcript() # Primary transcript
all_transcripts = list(kras) # All transcripts
# Analyze gene structure
acceptors, donors = kras.splice_sites()
print(f"Unique splice sites: {len(acceptors)} acceptors, {len(donors)} donors")
```
### 6. Transcript Analysis
```python
# Get a transcript
transcript = kras.transcript()
print(f"Transcript: {transcript.transcript_id}")
print(f"Protein coding: {transcript.protein_coding}")
# Access exon/intron structure
print(f"Exons: {len(transcript.exons)}")
print(f"Introns: {len(transcript.introns)}")
# Generate mature mRNA (spliced)
transcript.generate_mature_mrna()
print(f"Mature mRNA length: {len(transcript.mature_mrna)} bp")
# Generate protein (if protein-coding and BioPython available)
if transcript.protein_coding:
protein = transcript.generate_protein()
print(f"Protein length: {len(transcript.protein)} aa")
```
### 7. Loading from FASTA Files
```python
# Direct FASTA loading
genomic_seq = SeqMat.from_fasta_file(
"chr12.fasta",
"chr12",
start=25398284,
end=25398384
)
# Apply mutations to genomic sequence
genomic_seq.apply_mutations([
(25398290, "G", "A"), # Pathogenic variant
(25398300, "-", "T") # Novel insertion
])
```
## Data Setup and Configuration
SeqMat uses a flexible configuration system to manage organism data and file paths. This section explains how to set up data for existing organisms and add support for new ones.
### Quick Setup for Supported Organisms
For the built-in organisms (hg38, mm39), setup is straightforward:
```python
from seqmat import setup_genomics_data
# Download and setup human genome data
setup_genomics_data("/path/to/your/data", organism="hg38")
# Download and setup mouse genome data
setup_genomics_data("/path/to/your/data", organism="mm39")
```
Or via command line:
```bash
# Setup human data
seqmat setup --path /path/to/your/data --organism hg38
# Setup mouse data
seqmat setup --path /path/to/your/data --organism mm39 --force
```
### Configuration System
SeqMat stores configuration in `~/.seqmat/config.json`. This file contains:
- **Organism paths**: Where each organism's data is stored
- **Default organism**: Which organism to use when none specified
- **Directory structure**: Customizable folder names
- **Data source URLs**: Where to download reference data
#### Configuration File Structure
```json
{
"default_organism": "hg38",
"directory_structure": {
"chromosomes": "chromosomes",
"annotations": "annotations"
},
"hg38": {
"BASE": "/path/to/data/hg38",
"CHROM_SOURCE": "/path/to/data/hg38/chromosomes",
"MRNA_PATH": "/path/to/data/hg38/annotations",
"fasta": "/path/to/data/hg38/chromosomes"
},
"mm39": {
"BASE": "/path/to/data/mm39",
"CHROM_SOURCE": "/path/to/data/mm39/chromosomes",
"MRNA_PATH": "/path/to/data/mm39/annotations",
"fasta": "/path/to/data/mm39/chromosomes"
}
}
```
### Directory Structure
When you run `setup_genomics_data`, it creates this directory structure:
```
/your/data/path/
├── hg38/ # Organism directory
│ ├── chromosomes/ # FASTA files (configurable)
│ │ ├── chr1.fasta
│ │ ├── chr2.fasta
│ │ └── ...
│ ├── annotations/ # Gene/transcript data (configurable)
│ │ ├── mrnas_ENSG00000133703_KRAS.pkl
│ │ ├── mrnas_ENSG00000141510_TP53.pkl
│ │ └── ...
│ ├── temp/ # Temporary download files
│ ├── conservation.pkl # Conservation scores
│ └── gtex_expression.gct.gz # Expression data (human only)
└── mm39/ # Mouse data (similar structure)
├── chromosomes/
├── annotations/
└── ...
```
### Adding Support for New Organisms
To add a new organism (e.g., dm6 for Drosophila), you have several options:
#### Option 1: Modify Configuration (Recommended)
1. **Update the default organism data** by editing your config or using the API:
```python
from seqmat.config import load_config, save_config
# Load current config
config = load_config()
# Add new organism with data source URLs
config['dm6'] = {
'name': 'Drosophila melanogaster (Fruit fly)',
'urls': {
'fasta': 'https://hgdownload.soe.ucsc.edu/goldenPath/dm6/bigZips/dm6.fa.gz',
'gtf': 'https://ftp.ensembl.org/pub/release-109/gtf/drosophila_melanogaster/Drosophila_melanogaster.BDGP6.32.109.gtf.gz'
}
}
# Save updated config
save_config(config)
# Now you can use it
setup_genomics_data("/path/to/data", organism="dm6")
```
#### Option 2: Extend Default Data Sources
For permanent additions, modify `seqmat/config.py`:
```python
# In seqmat/config.py, add to DEFAULT_ORGANISM_DATA:
DEFAULT_ORGANISM_DATA = {
'hg38': { ... },
'mm39': { ... },
'dm6': {
'name': 'Drosophila melanogaster (Fruit fly)',
'urls': {
'fasta': 'https://hgdownload.soe.ucsc.edu/goldenPath/dm6/bigZips/dm6.fa.gz',
'gtf': 'https://ftp.ensembl.org/pub/release-109/gtf/drosophila_melanogaster/Drosophila_melanogaster.BDGP6.32.109.gtf.gz'
}
}
}
```
#### Option 3: Manual Setup for Custom Data
For completely custom data sources:
```python
# 1. Create directory structure manually
import os
from pathlib import Path
base_path = Path("/path/to/data/custom_genome")
base_path.mkdir(exist_ok=True)
(base_path / "chromosomes").mkdir(exist_ok=True)
(base_path / "annotations").mkdir(exist_ok=True)
# 2. Place your FASTA files in chromosomes/ directory
# Your files: chr1.fasta, chr2.fasta, etc.
# 3. Update configuration
from seqmat.config import load_config, save_config
config = load_config()
config['custom_genome'] = {
'BASE': str(base_path),
'CHROM_SOURCE': str(base_path / 'chromosomes'),
'MRNA_PATH': str(base_path / 'annotations'),
'fasta': str(base_path / 'chromosomes')
}
save_config(config)
# 4. Process your GTF file (if you have gene annotations)
from seqmat.utils import process_gtf_annotations
process_gtf_annotations(
gtf_file="/path/to/your.gtf",
output_dir=str(base_path / 'annotations'),
organism="custom_genome"
)
```
### Data Sources and URLs
SeqMat downloads data from these default sources:
**Human (hg38):**
- FASTA: UCSC Genome Browser (latest hg38)
- Annotations: Ensembl release-111 GTF
- Conservation: Pre-computed conservation scores
- Expression: GTEx v8 median TPM data
**Mouse (mm39):**
- FASTA: UCSC Genome Browser (mm39)
- Annotations: Ensembl release-112 GTF
### Customizing Directory Structure
You can customize folder names by modifying the directory structure config:
```python
from seqmat.config import load_config, save_config
config = load_config()
config['directory_structure'] = {
'chromosomes': 'genomes', # Custom name for FASTA directory
'annotations': 'gene_data' # Custom name for annotation directory
}
save_config(config)
# Future setups will use these custom names
setup_genomics_data("/path/to/data", organism="hg38")
```
### Configuration Management
**View current configuration:**
```python
from seqmat.config import load_config, get_available_organisms, get_default_organism
print("Default organism:", get_default_organism())
print("Available organisms:", get_available_organisms())
print("Full config:", load_config())
```
**Reset to defaults:**
```python
from seqmat.config import DEFAULT_SETTINGS, save_config
save_config(DEFAULT_SETTINGS.copy())
```
**Change default organism:**
```python
from seqmat.config import load_config, save_config
config = load_config()
config['default_organism'] = 'mm39' # Switch default to mouse
save_config(config)
```
### Troubleshooting Setup
**Common issues:**
1. **"Organism not configured"** - Run setup first or check config
2. **Download failures** - Check internet connection and URLs
3. **Permission errors** - Ensure write access to data directory
4. **Disk space** - Human genome data requires ~4GB, mouse ~3GB
**Debugging:**
```python
# Check what's configured
from seqmat import list_available_organisms, print_data_summary
print("Configured organisms:", list_available_organisms())
print_data_summary() # Detailed status
# Verify file paths
from seqmat.config import get_organism_config
config = get_organism_config("hg38")
print("Paths:", config)
```
### Configuration Best Practices
**For single-user systems:**
- Use the default `~/.seqmat/config.json` location
- Set up organisms as needed with `setup_genomics_data()`
**For multi-user or shared systems:**
- Create a shared data directory: `/shared/genomics_data/`
- Point all users' configs to the same data paths
- Consider using environment variables for paths
**For ephemeral environments (e.g. Run.ai, Docker):**
- Store `config.json` in your persistent storage directory.
- Set `SEQMAT_CONFIG_DIR` to that directory so SeqMat finds the config even when the rest of the filesystem is ephemeral: `export SEQMAT_CONFIG_DIR=/storage/nicolaslynn/data/seqmat`
**For development/testing:**
- Use separate config files or directories
- Set `default_organism` to your most-used organism
- Keep test data in separate locations
**Configuration sharing:**
```python
# Export configuration to share with others
from seqmat.config import load_config
import json
config = load_config()
with open('seqmat_config_template.json', 'w') as f:
json.dump(config, f, indent=2)
```
This setup downloads and organizes:
- **Reference genome sequences** (FASTA) → `chromosomes/` directory
- **Gene annotations** (GTF/processed) → `annotations/` directory
- **Conservation scores** → organism root directory
- **Expression data** → organism root directory (human only)
## Data Inspection and Management
### Python API
Once data is set up, you can inspect what's available:
```python
from seqmat import (
list_supported_organisms, list_available_organisms,
get_organism_info, list_gene_biotypes, count_genes,
get_gene_list, search_genes, print_data_summary
)
# Check supported and configured organisms
print("Supported:", list_supported_organisms()) # ['hg38', 'mm39']
print("Configured:", list_available_organisms()) # Organisms with data
# Get detailed organism information
info = get_organism_info('hg38')
print(f"Gene types: {info['data_available']['biotypes']}")
print(f"Chromosomes: {len(info['data_available']['chromosomes'])}")
# Explore gene biotypes and counts
biotypes = list_gene_biotypes('hg38') # ['protein_coding', 'lncRNA', ...]
counts = count_genes('hg38') # {'protein_coding': 19234, 'lncRNA': 7805, ...}
# Get gene lists
protein_genes = get_gene_list('hg38', 'protein_coding', limit=10)
print(f"First 10 protein-coding genes: {protein_genes}")
# Search for specific genes
results = search_genes('hg38', 'KRAS') # Find genes matching 'KRAS'
kras_genes = search_genes('hg38', 'K', biotype='protein_coding', limit=5)
# Print comprehensive summary
print_data_summary() # Formatted overview of all data
```
### Command Line Interface
SeqMat provides a comprehensive CLI for data management. The CLI automatically detects available organisms from your configuration:
```bash
# Install data for supported organisms
seqmat setup --path /your/data/path --organism hg38
seqmat setup --path /your/data/path --organism mm39 --force
# The CLI will show all configured organisms as choices
seqmat setup --help # Shows: --organism {hg38,mm39,dm6,...}
# Check what organisms are supported/configured
seqmat organisms
# Get comprehensive data summary
seqmat summary
# List gene biotypes for an organism
seqmat biotypes --organism hg38
# Count genes by biotype
seqmat count --organism hg38 # All biotypes
seqmat count --organism hg38 --biotype protein_coding # Specific biotype
# List genes
seqmat list --organism hg38 --biotype protein_coding --limit 20
# Search for genes by name
seqmat search --organism hg38 --query KRAS
seqmat search --organism hg38 --query K --biotype protein_coding --limit 10
# Get detailed organism information
seqmat info --organism hg38
```
### Example CLI Output
```bash
$ seqmat summary
🧬 SeqMat Genomics Data Summary
========================================
📊 Total: 2 organisms, 15 biotypes, 47,832 genes
🌍 Supported Organisms:
hg38: Homo sapiens (Human) - ✅ Configured
mm39: Mus musculus (Mouse) - ✅ Configured
📁 HG38 Data:
Gene Types:
protein_coding: 19,234 genes
lncRNA: 7,805 genes
pseudogene: 14,723 genes
...
Chromosomes: 25 available (chr1, chr2, chr3, chr4, chr5...)
📁 MM39 Data:
Gene Types:
protein_coding: 21,815 genes
lncRNA: 8,032 genes
...
```
## API Reference
### SeqMat Class
**Core Methods:**
- `SeqMat(nucleotides, indices=None, name="wild_type")`: Create sequence
- `SeqMat.from_fasta_file(path, chrom, start, end)`: Load from FASTA
- `apply_mutations(mutations)`: Apply SNPs/indels
- `clone(start=None, end=None)`: Create copy
- `remove_regions(regions)`: Remove specified intervals
- `complement()`: Get complement sequence
- `reverse_complement()`: Reverse complement in place
**Properties:**
- `seq`: Current sequence string
- `reference_seq`: Original reference sequence
- `index`: Genomic coordinates
- `mutations`: List of applied mutations
- `mutated_positions`: Set of mutated positions
### Gene Class
- `Gene.from_file(gene_name, organism=None)`: Load gene from database (uses default organism if None)
- `transcript(tid=None)`: Get transcript by ID or primary
- `splice_sites()`: Get all splice site positions
- `primary_transcript`: Primary transcript ID
### Transcript Class
- `generate_pre_mrna()`: Create pre-mRNA sequence
- `generate_mature_mrna()`: Create spliced mRNA
- `generate_protein()`: Translate to protein (requires BioPython)
- `exons`/`introns`: Genomic coordinates
- `protein_coding`: Boolean flag
### Configuration Management
**Core Functions:**
- `load_config()`: Load configuration from `~/.seqmat/config.json`
- `save_config(config)`: Save configuration to file
- `get_default_organism()`: Get default organism from config
- `get_available_organisms()`: Get list of all configured organisms
- `get_organism_config(organism=None)`: Get file paths for organism (uses default if None)
- `get_organism_info(organism)`: Get organism metadata including URLs
- `get_directory_config()`: Get customizable directory structure
**Configuration Examples:**
```python
from seqmat.config import *
# Check current settings
print("Default:", get_default_organism()) # 'hg38'
print("Available:", get_available_organisms()) # ['hg38', 'mm39']
config = get_organism_config('hg38') # File paths
# Modify settings
config = load_config()
config['default_organism'] = 'mm39'
save_config(config)
```
### Data Inspection Utilities
**Organism Management:**
- `list_supported_organisms()`: Get all supported organisms (dynamic from config)
- `list_available_organisms()`: Get configured organisms
- `get_organism_info(organism)`: Detailed organism information
- `setup_genomics_data(basepath, organism=None, force=False)`: Download and setup data (uses default organism if None)
**Gene Discovery:**
- `list_gene_biotypes(organism)`: Get available gene types
- `count_genes(organism, biotype=None)`: Count genes by type
- `get_gene_list(organism, biotype, limit=None)`: List gene names
- `search_genes(organism, query, biotype=None, limit=10)`: Search genes by name
**Data Summary:**
- `data_summary()`: Complete data overview (programmatic)
- `print_data_summary()`: Formatted data summary (human-readable)
### Command Line Interface
**Setup Commands:**
- `seqmat setup --path PATH --organism {dynamic_list} [--force]`: Organism choices automatically detected from config
- `seqmat organisms`: List organism status and availability
**Exploration Commands:**
- `seqmat summary`: Data overview for all configured organisms
- `seqmat info --organism ORG`: Detailed organism info
- `seqmat biotypes --organism ORG`: List gene biotypes
- `seqmat count --organism ORG [--biotype TYPE]`: Count genes
- `seqmat list --organism ORG --biotype TYPE [--limit N]`: List genes
- `seqmat search --organism ORG --query PATTERN [--biotype TYPE] [--limit N]`: Search genes
**Note**: All CLI commands automatically use your configured default organism when `--organism` is omitted, and available organism choices are dynamically loaded from your configuration.
## Performance
SeqMat is optimized for performance:
- **Vectorized operations**: NumPy-based sequence operations
- **Memory efficient**: Structured arrays for sequence storage
- **Fast slicing**: O(1) genomic coordinate access
- **Conflict detection**: Efficient mutation validation
- **Lazy loading**: Sequences loaded on demand
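The structured-array storage and coordinate slicing mentioned above can be illustrated with plain NumPy. The field names here are illustrative, not SeqMat's internal layout:

```python
import numpy as np

# One record per base: nucleotide byte plus its genomic coordinate.
dtype = np.dtype([("nt", "S1"), ("index", np.int64)])
seq = np.array(list(zip("ATCG", range(1000, 1004))), dtype=dtype)

# Vectorized coordinate lookup: a boolean mask instead of a Python loop.
mask = (seq["index"] >= 1001) & (seq["index"] <= 1002)
print(seq["nt"][mask].tobytes().decode())  # "TC"
```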
## Dependencies
**Core (always required):**
- numpy >= 1.20.0, <3 (tested with NumPy 2.x)
- pandas >= 1.3.0 (use a build compatible with your NumPy version; for NumPy 2, prefer pandas ≥2.2 and pyarrow ≥14)
- pysam >= 0.19.0
- requests >= 2.26.0
- tqdm >= 4.62.0
**Optional:**
- biopython >= 1.79 (for protein translation)
- gtfparse >= 1.2.0 (for genomics data setup)
## Examples
Check the `examples/` directory for:
- Basic sequence manipulation
- Mutation analysis workflows
- Gene structure analysis
- Comparative genomics
- Performance benchmarks
## Contributing
Contributions welcome! Please see CONTRIBUTING.md for guidelines.
## License
MIT License - see LICENSE file for details.
## Citation
If you use SeqMat in your research:
```
SeqMat: Lightning-fast genomic sequence matrix library
[Your Name], 2024
GitHub: https://github.com/yourusername/seqmat
```
## Support
- **Documentation**: https://seqmat.readthedocs.io
- **Issues**: https://github.com/yourusername/seqmat/issues
- **Discussions**: https://github.com/yourusername/seqmat/discussions
| text/markdown | Nicolas Lynn Vila | nicolasalynn@gmail.com | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/nicolasalynn/seqmat | null | >=3.10 | [] | [] | [] | [
"numpy<3,>=1.20.0",
"pandas>=2.0.0",
"pysam>=0.19.0",
"requests>=2.26.0",
"tqdm>=4.62.0",
"biopython>=1.79",
"gtfparse>=1.2.0",
"platformdirs>=3.0.0",
"lmdb>=1.4.0; extra == \"lmdb\"",
"pytest>=6.0; extra == \"dev\"",
"pytest-cov>=2.0; extra == \"dev\"",
"black>=21.0; extra == \"dev\"",
"flake8>=3.9; extra == \"dev\"",
"mypy>=0.910; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.10 | 2026-02-21T06:12:01.043475 | seqmat-0.1.49.tar.gz | 50,316 | 95/68/62a833804a8967db4f3e662bcbbdb7cc0f77b7ea4afffccd3cc7be7d8180/seqmat-0.1.49.tar.gz | source | sdist | null | false | 4e3f64f93050cfc03b76819cc0e643b6 | cf4b9b4457b53defa96f7386d2e3c9e983082635ef3c09724b9611c1280ec023 | 956862a833804a8967db4f3e662bcbbdb7cc0f77b7ea4afffccd3cc7be7d8180 | null | [
"LICENSE"
] | 245 |
2.1 | paytechuz | 0.3.51 | Unified Python package for Uzbekistan payment gateways (Payme, Click, Uzum, Paynet) |
# paytechuz
[](https://badge.fury.io/py/paytechuz)
[](https://pypi.org/project/paytechuz/)
[](https://pay-tech.uz)
[](https://opensource.org/licenses/MIT)
PayTechUZ is a unified payment library for integrating with popular payment systems in Uzbekistan. It provides a simple and consistent interface for working with Payme, Click, Uzum, Paynet, and Octo payment gateways.
📖 **[Complete Documentation](https://pay-tech.uz)** | 🚀 **[Quick Start Guide](https://pay-tech.uz/quickstart)**
## Features
- **Unified API**: Consistent interface for multiple payment providers
- **Secure**: Built-in security features for payment processing
- **Framework Integration**: Native support for Django and FastAPI
- **Webhook Handling**: Easy-to-use webhook handlers for payment notifications
- **Transaction Management**: Automatic transaction tracking and management
- **Extensible**: Easy to add new payment providers
## Installation
### Basic Installation
```bash
pip install paytechuz
```
### Framework-Specific Installation
```bash
# For Django
pip install paytechuz[django]
# For FastAPI
pip install paytechuz[fastapi]
# For Flask
pip install paytechuz[flask]
```
## API Key Configuration
**Important:** PayTechUZ requires a valid license API key for license validation. Set it in your shell environment or `.env` file:
```bash
# Set your license API key as an environment variable
export PAYTECH_LICENSE_API_KEY="your-license-api-key-here"
```
To obtain a production license API key, please visit **[https://pay-tech.uz/console](https://pay-tech.uz/console)** or contact **@muhammadali_me** on Telegram.
## Quick Start
> 💡 **Need help?** Check out our [complete documentation](https://pay-tech.uz) for detailed guides and examples.
### Generate Payment Links
```python
from paytechuz.gateways.payme import PaymeGateway
from paytechuz.gateways.click import ClickGateway
from paytechuz.gateways.uzum.client import UzumGateway
from paytechuz.gateways.paynet import PaynetGateway
from paytechuz.gateways.octo import OctoGateway
# Initialize Payme gateway
payme = PaymeGateway(
payme_id="your_payme_id",
payme_key="your_payme_key",
is_test_mode=True # Set to False in production environment
)
# Initialize Click gateway
click = ClickGateway(
service_id="your_service_id",
merchant_id="your_merchant_id",
merchant_user_id="your_merchant_user_id",
secret_key="your_secret_key",
is_test_mode=True # Set to False in production environment
)
# Initialize Uzum gateway (Biller/open-service)
uzum = UzumGateway(
service_id="your_service_id", # Uzum Service ID
is_test_mode=True # Set to False in production environment
)
# Initialize Paynet gateway
paynet = PaynetGateway(
merchant_id="your_merchant_id", # Paynet Merchant ID (accepts both str and int)
is_test_mode=False # Set to True for testing
)
# Initialize Octo gateway
octo = OctoGateway(
octo_shop_id=123, # Octo Shop ID
octo_secret="your_octo_secret", # Octo Secret Key
notify_url="https://example.com/payments/webhook/octo/", # Callback URL
is_test_mode=True # Set to False in production
)
# Generate payment links
payme_link = payme.create_payment(
id="order_123",
amount=150000, # amount in UZS
return_url="https://example.com/return",
account_field_name="id" # Payme-specific: field name for account ID (default: "order_id")
)
# Note: account_field_name is only used for Payme and specifies the field name
# that will be used in the payment URL (e.g., ac.id=123).
# Other payment gateways (Click, Uzum, Paynet) don't use this parameter.
click_link = click.create_payment(
id="order_123",
amount=150000, # amount in UZS
description="Test payment",
return_url="https://example.com/return"
)
# Generate Uzum Biller payment URL
# URL format: https://www.uzumbank.uz/open-service?serviceId=...&order_id=...&amount=...&redirectUrl=...
uzum_link = uzum.create_payment(
id="order_123", # Order ID (order_id parameter)
amount=100000, # amount in som (will be converted to tiyin)
return_url="https://example.com/callback" # redirectUrl parameter
)
# Result: https://www.uzumbank.uz/open-service?serviceId=your_service_id&order_id=order_123&amount=10000000&redirectUrl=https%3A%2F%2Fexample.com%2Fcallback
# Generate Paynet payment URL
# URL format: https://app.paynet.uz/?m={merchant_id}&c={payment_id}&a={amount}
paynet_link = paynet.create_payment(
id="order_123", # Payment ID (c parameter)
amount=15000000 # amount in tiyin (optional, a parameter) - 150000 som = 15000000 tiyin
)
# Result: https://app.paynet.uz/?m=your_merchant_id&c=order_123&a=15000000
# Or without amount (amount will be configured on Paynet's side)
paynet_link_no_amount = paynet.create_payment(id="order_123")
# Result: https://app.paynet.uz/?m=your_merchant_id&c=order_123
# Create Octo payment (one-stage, auto_capture)
octo_link = octo.create_payment(
id="order_123",
amount=50000, # amount in UZS
return_url="https://example.com/payment/done/",
description="Order #123",
)
# Redirect user to octo_link
```
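For reference, the som-to-tiyin conversion used in the Uzum and Paynet amounts above is a plain ×100 scaling (the helper name is ours, not part of the library):

```python
def som_to_tiyin(amount_som: int) -> int:
    """1 som = 100 tiyin, matching the Uzum/Paynet amounts shown above."""
    return int(round(amount_som * 100))
```

For example, `som_to_tiyin(150000)` returns `15000000`, the tiyin value passed to Paynet above.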
### Important Notes
#### Payme `account_field_name` Parameter
The `account_field_name` parameter sets the field name used in the Payme payment URL (e.g. `ac.id=123`); the default is `"order_id"`.
**Example**:
```python
# Using default account_field_name = "order_id"
payme_link = payme.create_payment(
id="123",
amount=150000,
return_url="https://example.com/return"
)
# Using custom account_field_name
payme_link = payme.create_payment(
id="123",
amount=150000,
return_url="https://example.com/return",
account_field_name="id"
)
```
**Note**: Other payment gateways (Click, Uzum, Paynet) do not use the `account_field_name` parameter.
#### Paynet Payment Gateway
Paynet uses a unique URL-based payment system:
- **URL Format**: `https://app.paynet.uz/?m={merchant_id}&c={payment_id}&a={amount}`
- **merchant_id**: Accepts both `str` and `int` types (automatically converted to string)
- **amount**: Optional parameter in tiyin. If provided, it will be included in the URL as `a` parameter
- **No return_url**: Paynet does NOT support return URL parameter
- **Mobile-first**: Payment is completed in the Paynet mobile app
- Desktop users: QR code is displayed to scan
- Mobile users: Direct link to open Paynet app
- **Webhooks**: Payment status updates are handled through JSON-RPC 2.0 webhooks
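That URL format can be sketched as a small builder (an illustration of the documented format, not the library's internal code):

```python
from urllib.parse import urlencode

def build_paynet_url(merchant_id, payment_id, amount=None):
    """Build a Paynet payment URL: https://app.paynet.uz/?m=...&c=...[&a=...]"""
    # merchant_id may be str or int; the gateway converts it to string.
    params = {"m": str(merchant_id), "c": str(payment_id)}
    if amount is not None:
        params["a"] = str(amount)  # amount in tiyin, optional
    return f"https://app.paynet.uz/?{urlencode(params)}"
```

Omitting `amount` yields a URL without the `a` parameter, in which case the amount is configured on Paynet's side.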
### Django Integration
1. Create Order model:
```python
# models.py
from django.db import models
from django.utils import timezone
class Order(models.Model):
STATUS_CHOICES = (
('pending', 'Pending'),
('paid', 'Paid'),
('cancelled', 'Cancelled'),
('delivered', 'Delivered'),
)
product_name = models.CharField(max_length=255)
amount = models.DecimalField(max_digits=12, decimal_places=2)
status = models.CharField(max_length=20, choices=STATUS_CHOICES, default='pending')
created_at = models.DateTimeField(default=timezone.now)
def __str__(self):
return f"{self.id} - {self.product_name} ({self.amount})"
```
2. Add to `INSTALLED_APPS` and configure settings:
```python
# settings.py
INSTALLED_APPS = [
# ...
'paytechuz.integrations.django',
]
PAYTECHUZ = {
'PAYME': {
'PAYME_ID': 'your_payme_id',
'PAYME_KEY': 'your_payme_key',
'ACCOUNT_MODEL': 'your_app.models.Order', # For example: 'orders.models.Order'
'ACCOUNT_FIELD': 'id',
'AMOUNT_FIELD': 'amount',
'ONE_TIME_PAYMENT': True,
},
'CLICK': {
'SERVICE_ID': 'your_service_id',
'MERCHANT_ID': 'your_merchant_id',
'MERCHANT_USER_ID': 'your_merchant_user_id',
'SECRET_KEY': 'your_secret_key',
'ACCOUNT_MODEL': 'your_app.models.Order',
'ACCOUNT_FIELD': 'id',
'COMMISSION_PERCENT': 0.0,
'ONE_TIME_PAYMENT': True,
},
'UZUM': {
'SERVICE_ID': 'your_service_id', # Uzum Service ID for Biller URL
'USERNAME': 'your_uzum_username', # For webhook Basic Auth
'PASSWORD': 'your_uzum_password', # For webhook Basic Auth
'ACCOUNT_MODEL': 'your_app.models.Order',
'ACCOUNT_FIELD': 'order_id', # or 'id'
'AMOUNT_FIELD': 'amount',
'ONE_TIME_PAYMENT': True,
},
'PAYNET': {
'SERVICE_ID': 'your_paynet_service_id',
'USERNAME': 'your_paynet_username',
'PASSWORD': 'your_paynet_password',
'ACCOUNT_MODEL': 'your_app.models.Order',
'ACCOUNT_FIELD': 'id',
'AMOUNT_FIELD': 'amount',
'ONE_TIME_PAYMENT': True,
},
'OCTO_BANK': {
'OCTO_SHOP_ID': 42125, # Octo Shop ID
'OCTO_SECRET': 'your_octo_secret', # Octo Secret Key
'OCTO_UNIQUE_KEY': 'your_octo_unique_key', # Required in production, provided by Octo team
'NOTIFY_URL': 'https://example.com/payments/webhook/octo/', # Callback URL
'ACCOUNT_MODEL': 'your_app.models.Order',
'ACCOUNT_FIELD': 'id',
'AMOUNT_FIELD': 'amount',
'ONE_TIME_PAYMENT': True,
'TEST_MODE': True, # Set to False in production (enables signature verification)
},
}
```
> **Note:** The `IS_TEST_MODE` parameter is configured when creating payment gateways (e.g., `PaymeGateway`, `ClickGateway`), not in webhook settings. Webhooks receive requests on the same URL regardless of test or production environment.
3. Create webhook handlers:
```python
# views.py
from paytechuz.integrations.django.views import (
BasePaymeWebhookView,
BaseClickWebhookView,
BaseUzumWebhookView,
BasePaynetWebhookView,
BaseOctoWebhookView,
)
from .models import Order
class PaymeWebhookView(BasePaymeWebhookView):
def successfully_payment(self, params, transaction):
order = Order.objects.get(id=transaction.account_id)
order.status = 'paid'
order.save()
def cancelled_payment(self, params, transaction):
order = Order.objects.get(id=transaction.account_id)
order.status = 'cancelled'
order.save()
def get_check_data(self, params, account): # optional
# Return additional data for CheckPerformTransaction (fiscal receipt)
return {
"additional": {"first_name": account.first_name, "balance": account.balance},
"detail": {
"receipt_type": 0,
"shipping": {"title": "Yetkazib berish", "price": 10000},
"items": [
{
"discount": 0,
"title": account.product_name,
"price": int(account.amount * 100),
"count": 1,
"code": "00001",
"units": 1,
"vat_percent": 0,
"package_code": "123456"
}
]
}
}
class ClickWebhookView(BaseClickWebhookView):
def successfully_payment(self, params, transaction):
order = Order.objects.get(id=transaction.account_id)
order.status = 'paid'
order.save()
def cancelled_payment(self, params, transaction):
order = Order.objects.get(id=transaction.account_id)
order.status = 'cancelled'
order.save()
class UzumWebhookView(BaseUzumWebhookView):
def successfully_payment(self, params, transaction):
order = Order.objects.get(id=transaction.account_id)
order.status = 'paid'
order.save()
def cancelled_payment(self, params, transaction):
order = Order.objects.get(id=transaction.account_id)
order.status = 'cancelled'
order.save()
def get_check_data(self, params, account):
# Return additional data for check/create/status/confirm actions
# Example: returning user's full name
return {
"fio": {
"value": "Ivanov Ivan"
}
}
class PaynetWebhookView(BasePaynetWebhookView):
def successfully_payment(self, params, transaction):
order = Order.objects.get(id=transaction.account_id)
order.status = 'paid'
order.save()
def cancelled_payment(self, params, transaction):
order = Order.objects.get(id=transaction.account_id)
order.status = 'cancelled'
order.save()
def get_check_data(self, params, account):  # optional
# Return additional data for GetInformation
order = Order.objects.get(id=account.id)
# You can return any key-value pairs
return {
"fields": {
"first_name": order.user.first_name,
"balance": order.user.balance
}
}
class OctoWebhookView(BaseOctoWebhookView):
def successfully_payment(self, params, transaction):
order = Order.objects.get(id=transaction.account_id)
order.status = 'paid'
order.save()
def cancelled_payment(self, params, transaction):
order = Order.objects.get(id=transaction.account_id)
order.status = 'cancelled'
order.save()
```
4. Add webhook URLs to `urls.py`:
```python
# urls.py
from django.urls import path
from .views import PaymeWebhookView, ClickWebhookView, UzumWebhookView, PaynetWebhookView, OctoWebhookView
urlpatterns = [
# ...
path('payments/webhook/payme/', PaymeWebhookView.as_view(), name='payme_webhook'),
path('payments/webhook/click/', ClickWebhookView.as_view(), name='click_webhook'),
path('payments/webhook/uzum/<str:action>/', UzumWebhookView.as_view(), name='uzum_webhook'),
path('payments/webhook/paynet/', PaynetWebhookView.as_view(), name='paynet_webhook'),
path('payments/webhook/octo/', OctoWebhookView.as_view(), name='octo_webhook'),
]
```
### FastAPI Integration
1. Set up database models:
```python
from datetime import datetime, timezone
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine, Column, Integer, String, Float, DateTime
from paytechuz.integrations.fastapi import Base as PaymentsBase
from paytechuz.integrations.fastapi.models import run_migrations
# Create database engine
SQLALCHEMY_DATABASE_URL = "sqlite:///./payments.db"
engine = create_engine(SQLALCHEMY_DATABASE_URL)
# Create base declarative class
Base = declarative_base()
# Create Order model
class Order(Base):
__tablename__ = "orders"
id = Column(Integer, primary_key=True, index=True)
product_name = Column(String, index=True)
amount = Column(Float)
status = Column(String, default="pending")
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
# Create payment tables using run_migrations
run_migrations(engine)
# Create Order table
Base.metadata.create_all(bind=engine)
# Create session
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
```
2. Create webhook handlers:
```python
from fastapi import FastAPI, Request, Depends
from sqlalchemy.orm import Session
from paytechuz.integrations.fastapi import PaymeWebhookHandler, ClickWebhookHandler
app = FastAPI()
# Dependency to get the database session
def get_db():
db = SessionLocal()
try:
yield db
finally:
db.close()
class CustomPaymeWebhookHandler(PaymeWebhookHandler):
def successfully_payment(self, params, transaction):
# Handle successful payment
order = self.db.query(Order).filter(Order.id == transaction.account_id).first()
order.status = "paid"
self.db.commit()
def cancelled_payment(self, params, transaction):
# Handle cancelled payment
order = self.db.query(Order).filter(Order.id == transaction.account_id).first()
order.status = "cancelled"
self.db.commit()
class CustomClickWebhookHandler(ClickWebhookHandler):
def successfully_payment(self, params, transaction):
# Handle successful payment
order = self.db.query(Order).filter(Order.id == transaction.account_id).first()
order.status = "paid"
self.db.commit()
def cancelled_payment(self, params, transaction):
# Handle cancelled payment
order = self.db.query(Order).filter(Order.id == transaction.account_id).first()
order.status = "cancelled"
self.db.commit()
@app.post("/payments/payme/webhook")
async def payme_webhook(request: Request, db: Session = Depends(get_db)):
handler = CustomPaymeWebhookHandler(
db=db,
payme_id="your_merchant_id",
payme_key="your_merchant_key",
account_model=Order,
account_field='id',
amount_field='amount'
)
return await handler.handle_webhook(request)
@app.post("/payments/click/webhook")
async def click_webhook(request: Request, db: Session = Depends(get_db)):
handler = CustomClickWebhookHandler(
db=db,
service_id="your_service_id",
secret_key="your_secret_key",
account_model=Order,
account_field='id',
one_time_payment=True
)
return await handler.handle_webhook(request)
```
📖 **Documentation:** [pay-tech.uz](https://pay-tech.uz)
💬 **Support:** [Telegram](https://t.me/paytechuz)
## License
This project is licensed under the MIT License - see the LICENSE file for details.
| text/markdown | Muhammadali Akbarov | Muhammadali Akbarov <muhammadali17abc@gmail.com> | null | null | MIT | paytechuz, payme, click, uzum, paynet, uzbekistan, payment, gateway, payment-gateway, payment-processing, django, flask, fastapi | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://github.com/Muhammadali-Akbarov/paytechuz | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/Muhammadali-Akbarov/paytechuz"
] | twine/6.1.0 CPython/3.12.9 | 2026-02-21T06:11:12.013862 | paytechuz-0.3.51.tar.gz | 67,171 | c0/41/6ae22ba95bfa5a5f4f0240e21b6b68e9c3f146178a528e9c62381425f5c4/paytechuz-0.3.51.tar.gz | source | sdist | null | false | 982c7aa987abad91bbbced4cba0f225b | 6a9fde30cc49dc83567f4320d5c5a47377b24a6650c33d35428e470b204044b3 | c0416ae22ba95bfa5a5f4f0240e21b6b68e9c3f146178a528e9c62381425f5c4 | null | [] | 212 |
2.4 | sanatan-verse-sdk | 0.30.9 | Python SDK for creating verse-based content sites with AI translations, multimedia (images, audio), semantic search, RAG-grounded Puranic context, and deployment | # Sanatan Verse SDK - Python SDK for Spiritual Verse Collections
Complete toolkit for generating rich multimedia content for spiritual text collections (Hanuman Chalisa, Sundar Kaand, etc.)
## Features
- **🔄 Complete Workflow**: Generate media and embeddings from canonical sources - all in one command
- **📖 Canonical Sources**: Local YAML files ensure text accuracy and quality
- **🎨 AI Images**: Generate themed images with DALL-E 3
- **🎵 Audio Pronunciation**: Full and slow-speed audio with ElevenLabs
- **🔍 Semantic Search**: Vector embeddings for intelligent verse discovery
- **📚 Multi-Collection**: Organized support for multiple verse collections
- **🎨 Theme System**: Customizable visual styles (modern, traditional, kids-friendly, etc.)
## Quick Start
### New Project Setup (Recommended)
```bash
# 1. Install
pip install sanatan-verse-sdk
# 2. Create project with collection templates
verse-init --project-name my-verse-project --collection hanuman-chalisa
cd my-verse-project
# 3. Configure API keys
cp .env.example .env
# Edit .env and add your API keys from:
# - OpenAI: https://platform.openai.com/api-keys
# - ElevenLabs: https://elevenlabs.io/app/settings/api-keys
# 4. Add canonical Devanagari text
# Edit data/verses/hanuman-chalisa.yaml with actual verse text
# 5. Validate setup
verse-validate
# 6. Generate multimedia content
verse-generate --collection hanuman-chalisa --verse 1
```
**What you get**: Verse file, AI-generated image, audio (full + slow speed), and search embeddings!
### Existing Project
```bash
# Validate and fix structure
verse-validate --fix
# Generate content
verse-generate --collection hanuman-chalisa --verse 15
# Check status
verse-status --collection hanuman-chalisa
```
### Advanced Usage
```bash
# Multiple collections at once
verse-init --collection hanuman-chalisa --collection sundar-kaand
# Custom number of sample verses
verse-init --collection my-collection --num-verses 10
# Generate specific components only
verse-generate --collection sundar-kaand --verse 3 --image
verse-generate --collection sundar-kaand --verse 3 --audio
# Skip embeddings update (faster)
verse-generate --collection hanuman-chalisa --verse 15 --no-update-embeddings
```
### What Gets Generated
Each verse generation creates:
- 🎨 **Image**: `images/{collection}/{theme}/verse-01.png` (DALL-E 3)
- 🎵 **Audio (full)**: `audio/{collection}/verse-01-full.mp3` (ElevenLabs)
- 🎵 **Audio (slow)**: `audio/{collection}/verse-01-slow.mp3` (0.75x speed)
- 🔍 **Embeddings**: `data/embeddings.json` (for semantic search)
**Text Source**: Canonical Devanagari text from `data/verses/{collection}.yaml` ([Local Verses Guide](docs/local-verses.md))
## Installation
```bash
pip install sanatan-verse-sdk
```
## Commands
### Project Setup
- **[verse-init](docs/commands/verse-init.md)** - Initialize new project with recommended structure
- **[verse-validate](docs/commands/verse-validate.md)** - Validate project structure and configuration
### Content Generation
- **[verse-generate](docs/commands/verse-generate.md)** - Complete orchestrator for verse content (text fetching, multimedia generation, embeddings)
- **[verse-translate](docs/commands/verse-translate.md)** - Translate verses into multiple languages (Hindi, Spanish, French, etc.)
- **[verse-images](docs/commands/verse-images.md)** - Generate images using DALL-E 3
- **[verse-audio](docs/commands/verse-audio.md)** - Generate audio pronunciations using ElevenLabs
- **[verse-embeddings](docs/commands/verse-embeddings.md)** - Generate vector embeddings for semantic search ([multi-collection guide](docs/multi-collection.md))
### Puranic Context
- **[verse-index-sources](docs/commands/verse-index-sources.md)** - Index Puranic source texts (PDFs, TXTs) into episodes and embeddings for RAG retrieval
- **[verse-puranic-context](docs/commands/verse-puranic-context.md)** - Generate Puranic context boxes for verses (RAG-grounded or GPT-4o free recall)
### Project Management
- **[verse-add](docs/commands/verse-add.md)** - Add new verse entries to collections (supports multi-chapter formats)
- **[verse-status](docs/commands/verse-status.md)** - Check status, completion, and validate text against canonical source
- **[verse-sync](docs/commands/verse-sync.md)** - Sync verse text with canonical source (fix mismatches)
- **[verse-deploy](docs/commands/verse-deploy.md)** - Deploy Cloudflare Worker for API proxy
## Configuration
Copy the example environment file and add your API keys:
```bash
cp .env.example .env
# Edit .env and add your API keys
```
See the [Usage Guide](docs/usage.md) for detailed information on project structure, workflows, batch processing, and cost optimization.
## Documentation
- **[Usage Guide](docs/usage.md)** - Project setup, workflows, batch processing, and best practices
- **[Local Verses Guide](docs/local-verses.md)** - Using local YAML files for verse text
- **[Chapter-Based Formats](docs/chapter-based-formats.md)** - Multi-chapter collections (Bhagavad Gita, etc.)
- **[Command Reference](docs/README.md)** - Detailed documentation for all commands
- **[Development Guide](docs/development.md)** - Setup and contributing to verse-sdk
- **[Troubleshooting](docs/troubleshooting.md)** - Common issues and solutions
- **[Multi-Collection Guide](docs/multi-collection.md)** - Working with multiple collections
- **[Publishing Guide](docs/publishing.md)** - For maintainers
## Example Project
[Hanuman GPT](https://github.com/sanatan-learnings/hanuman-gpt) - Multi-collection project with Hanuman Chalisa, Sundar Kaand, and Sankat Mochan Hanumanashtak
## Requirements
- Python 3.8+
- OpenAI API key (for text/images/embeddings)
- ElevenLabs API key (for audio)
## License
MIT License - See [LICENSE](LICENSE) file for details
## Support
- [GitHub Issues](https://github.com/sanatan-learnings/sanatan-verse-sdk/issues)
- [Documentation](docs/README.md)
- [Troubleshooting Guide](docs/troubleshooting.md)
| text/markdown | Sanatan Learnings | arun.gupta@gmail.com | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | https://github.com/sanatan-learnings/sanatan-verse-sdk | null | >=3.8 | [] | [] | [] | [
"openai>=1.0.0",
"elevenlabs>=1.0.0",
"requests>=2.31.0",
"Pillow>=10.0.0",
"python-dotenv>=1.0.0",
"PyYAML>=6.0.0",
"sentence-transformers>=2.2.0",
"torch>=2.0.0",
"beautifulsoup4>=4.12.0",
"boto3>=1.34.0",
"pdfplumber>=0.10.0",
"numpy>=1.24.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-21T06:09:45.841047 | sanatan_verse_sdk-0.30.9.tar.gz | 100,149 | 9e/16/a59b604aba83538f5c6e2cb02ebb52eea3a92ad302da8eda5a575db9bd4d/sanatan_verse_sdk-0.30.9.tar.gz | source | sdist | null | false | 8e8ac3fa72c185e8c9c52ace3219b06e | 5a4e3779c0cfe5f2cbdbf1165287bb729815c20a5bd479b1e82048ce10528fa9 | 9e16a59b604aba83538f5c6e2cb02ebb52eea3a92ad302da8eda5a575db9bd4d | null | [
"LICENSE"
] | 207 |
2.4 | rapid-pe | 0.1.2.dev20260221 | RapidPE: The original low-latency gravitational wave parameter estimation code. | RapidPE was the first piece of software written for rapidly measuring the
parameters of compact binary mergers observed via gravitational waves. It
leverages properties of general relativity in order to minimize the number of
simulations needed, thereby reducing the dominant cost of parameter estimation.
To install, run::
$ pip install rapid-pe
| null | null | null | null | null | GPL-2+ | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Astronomy",
"Topic :: Scientific/Engineering :: Physics"
] | [] | https://git.ligo.org/rapidpe-rift/rapidpe/ | null | null | [] | [] | [] | [
"bilby",
"h5py",
"healpy",
"lalsuite",
"ligo.skymap",
"lscsoft-glue",
"matplotlib",
"numpy",
"python-ligo-lw<1.9,>=1.8.1",
"scikit-learn",
"scipy",
"six"
] | [] | [] | [] | [
"Bug Tracker, https://git.ligo.org/rapidpe-rift/rapidpe/-/issues/",
"Source Code, https://git.ligo.org/rapidpe-rift/rapidpe/"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-21T06:09:25.828620 | rapid_pe-0.1.2.dev20260221.tar.gz | 100,411 | 25/22/d42c23d51f37dbf86c2dfb28d08da01a731c73a5701ae0bc2ba8928141f6/rapid_pe-0.1.2.dev20260221.tar.gz | source | sdist | null | false | ab705deea05f297f05a7c5c7cfc9c04c | 8ed54d743a1d823c04cf6d967c35f568d23c17cced00a31204a7df8e9987f5d8 | 2522d42c23d51f37dbf86c2dfb28d08da01a731c73a5701ae0bc2ba8928141f6 | null | [
"COPYING"
] | 197 |
2.4 | dank-mids | 4.20.199 | Multicall batching middleware for asynchronous scripts using web3.py | # Dank Mids
[](https://pypi.org/project/dank-mids)
[](https://pypistats.org/packages/dank-mids)
Dank Mids is an EVM RPC batching library that helps reduce the number of HTTP requests to a node, saving time and resources. It automatically collects eth_call requests into [multicalls](https://github.com/makerdao/multicall#multicall-) and bundles all RPC calls together into [jsonrpc](https://www.jsonrpc.org/specification#batch) [batch](https://geth.ethereum.org/docs/interacting-with-geth/rpc/batch) requests.
##### tl;dr: it's fast as fuck.

The goal of this tool is to reduce the workload on RPC nodes and allow users to make calls to their preferred node more efficiently. This optimization is especially useful for developers writing scripts that perform large-scale blockchain analysis, as it can save development time and resources.

### Why is Dank so fast?
There are a number of optimizations that went into making Dank the fastest way to pull rpc data to Python.
1. Implemented (mostly) in C.
2. Bypasses the default formatters in [web3.py](https://github.com/ethereum/web3.py)
3. JSON encoding and decoding is handled by [msgspec](https://jcristharif.com/msgspec/). All responses are decoded to specialized [msgspec.Struct](https://jcristharif.com/msgspec/structs.html) objects defined in the [evmspec](https://github.com/BobTheBuidler/evmspec) library.
4. We use my C-compiled [faster-eth-abi](https://github.com/BobTheBuidler/faster-eth-abi/tree/master) and [faster-eth-utils](https://github.com/BobTheBuidler/faster-eth-utils/tree/master) instead of the original python implementations [eth-abi](https://github.com/ethereum/eth-abi) and [eth-utils](https://github.com/ethereum/eth-utils).
5. Responses are decoded on a JIT (just-in-time) basis, meaning individual task cancellation works as expected even when response data is received as part of a larger batch.
6. more stuff I'll write down later...
### Batching Flow
This diagram shows how requests move from user calls into Dank Mids queues, then through batch execution and response spoofing.
```mermaid
flowchart TD
A[User code<br/>await w3.eth.call / other RPC] --> B[DankMiddlewareController.__call__]
B -->|eth_call| C[eth_call request]
B -->|other RPC| D[RPCRequest]
C -->|multicall compatible| E[pending_eth_calls<br/>block to Multicall]
C -->|no multicall| D
D --> F[pending_rpc_calls<br/>JSONRPCBatch queue]
E --> G[RPCRequest.get_response<br/>triggers execute_batch when needed]
F --> G
G --> H[DankMiddlewareController.execute_batch]
H --> I[DankBatch<br/>multicalls + rpc_calls]
I --> J[DankBatch.coroutines]
J -->|large multicall| K[Multicall.get_response]
J -->|small multicall split| L[JSONRPCBatch]
J -->|rpc calls| L
K --> M[_requester.post<br/>eth_call to multicall contract]
M --> N[Multicall.spoof_response<br/>split results to eth_call futures]
L --> O[JSONRPCBatch.post<br/>build JSON-RPC batch payload]
O --> P[_requester.post batch<br/>+ decode responses]
P --> Q[JSONRPCBatch.spoof_response<br/>match by id, resolve futures]
N --> R[User awaiters resolve]
Q --> R
```
Notes:
- Batches can start early when the queue is full (`_Batch.append` -> `controller.early_start`).
- Otherwise, the first waiter to need results will trigger `execute_batch` from `RPCRequest.get_response`.
### Installation
To install Dank Mids, use pip:
`pip install dank-mids`
### Development and Contributing
This repository uses `pre-commit` for local commit-time checks.
Setup and usage instructions live in [`CONTRIBUTING.md`](./CONTRIBUTING.md).
### Benchmark
We've included a [benchmark script](./examples/benchmark.py) that compares the time it takes to fetch the pool tokens (token0 and token1) for each pool on Sushiswap on Ethereum mainnet. To run it, first install the repo with `poetry install` and then run the benchmark with `brownie run examples/benchmark`.
```
Running 'examples/benchmark.py::main'...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4213/4213 [08:50<00:00, 7.95it/s]
brownie sync end: 2025-04-14 21:21:35.531099
brownie sync took: 0:08:50.212665
brownie 4 threads start: 2025-04-14 21:21:35.548373
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4213/4213 [08:31<00:00, 8.23it/s]
brownie 4 threads end: 2025-04-14 21:30:08.065397
brownie 4 threads took: 0:08:32.517024
brownie 16 threads start: 2025-04-14 21:30:08.086342
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4213/4213 [08:26<00:00, 8.32it/s]
brownie 16 threads end: 2025-04-14 21:38:38.141635
brownie 16 threads took: 0:08:30.055293
dank start: 2025-04-14 21:38:38.161024
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4213/4213 [00:55<00:00, 75.49it/s]
dank end: 2025-04-14 21:39:33.982835
dank took: 0:00:55.821811
```
As you can see, dank_mids allowed us to save 7 minutes and 34 seconds vs brownie with 16 threads. That's an 89% reduction in runtime, or about 9x as fast as brownie!
### Usage with web3.py
The primary entry point for using Dank Mids is `setup_dank_w3_from_sync`. This function takes a sync Web3 instance and wraps it for async use. If you are using dank_mids with eth-brownie, you can simply import the premade `dank_web3` object instead.
Example usage of Dank Mids with web3py:
```python
from dank_mids.helpers import setup_dank_w3_from_sync
dank_web3 = setup_dank_w3_from_sync(w3)
# OR
from dank_mids import dank_web3
# Then:
random_block = await dank_web3.eth.get_block(123)
```
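Because batching happens at the transport layer, plain `asyncio` concurrency in user code is all Dank Mids needs to bundle the underlying requests. A minimal sketch (the `fetch_blocks` helper is ours; only `eth.get_block` comes from the example above):

```python
import asyncio

async def fetch_blocks(w3, block_numbers):
    # Firing the calls concurrently lets Dank Mids collect the underlying
    # eth_getBlockByNumber requests into JSON-RPC batches; awaiting them
    # one at a time in a loop would defeat the batching.
    return await asyncio.gather(*(w3.eth.get_block(n) for n in block_numbers))
```

Pass the `dank_web3` object from the snippet above as `w3`.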
### Usage with eth-brownie
- [Dank Brownie Example Commented Code](./examples/dank_brownie_example.py)
### Usage with ape
- COMING SOON: Dank Mids will also work with [ape](https://github.com/ApeWorX/ape).
### Observability (Retry Events)
Dank Mids exposes a retry observer API for capturing retry decisions, plus a stats
observer for collector metrics. Internal emit points are deferred to a follow-up PR.
For the full explanation and examples (structured logging, Prometheus, Sentry/Stats),
see `docs/retry_observer.rst`.
### Testimonials
[Yearn](https://yearn.finance) big brain [Tonkers Kuma](https://github.com/tonkers-kuma) had this to say:

### Notes
You can also set `DANK_MIDS_DEMO_MODE=True` to see a visual representation of the batching in real time on your console.
| text/markdown | BobTheBuidler | bobthebuidlerdefi@gmail.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"aiofiles",
"aiolimiter<1.3,>=1.2",
"cchecksum<1,>=0.0.3",
"eth-retry<1,>=0.3.6",
"eth-typing>=4.2.0",
"evmspec<1,>=0.4.6",
"ez-a-sync<0.35,>=0.34.0",
"faster-eth-abi>=5.2.12",
"faster-eth-utils>=5.3.11",
"faster-hexbytes<2,>1",
"multicall<1,>=0.6.2",
"typed-envs==0.2.4",
"web3!=5.29.*,!=5.30.*,!=5.31.1,!=5.31.2,<8,>=5.27"
] | [] | [] | [] | [
"Documentation, https://BobTheBuidler.github.io/dank_mids",
"Homepage, https://github.com/BobTheBuidler/dank_mids",
"Repository, https://github.com/BobTheBuidler/dank_mids"
] | poetry/2.3.2 CPython/3.12.12 Linux/6.11.0-1018-azure | 2026-02-21T06:08:19.920041 | dank_mids-4.20.199-py3-none-any.whl | 13,614,946 | 91/c0/9718294dc2a5365305feae3e96634065772f284619064a0b1a7bb71ec7e7/dank_mids-4.20.199-py3-none-any.whl | py3 | bdist_wheel | null | false | 665910bb3ebbe277e31e81e670685534 | 707716ab46e26c9d955c14c2790311b49302a17cbcb5849ade5e777eb6458685 | 91c09718294dc2a5365305feae3e96634065772f284619064a0b1a7bb71ec7e7 | null | [
"LICENSE"
] | 336 |
2.4 | personaut | 0.3.3 | A Python SDK for creating and simulating AI personas with emotional states, personality traits, and memory | # Personaut PDK
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/Apache-2.0)
[](https://github.com/pypa/hatch)
A Python SDK for creating and simulating AI personas with emotional states, personality traits, memories, and relationships.
## Overview
Personaut PDK enables you to create rich, psychologically-grounded AI personas that can participate in:
- **Conversations**: Multi-party dialogues with realistic emotional dynamics
- **Surveys**: Persona-driven questionnaire responses
- **Outcome Analysis**: Simulation-based prediction of behavioral outcomes
- **Live Interactions**: Real-time chat with modality-specific interfaces
## Features
- 🎭 **36 Emotional States** - Fine-grained emotional modeling across 6 categories
- 🧠 **17 Personality Traits** - Based on the 16PF psychological model
- 📍 **Situational Facts** - Structured context with 8 categories and LLM extraction
- 💾 **Vector Memory** - Semantic memory retrieval with trust-gated access
- 🔗 **Relationships** - Trust-based relationship dynamics
- 🎯 **Triggers & Masks** - Context-aware behavioral modifications
- 🚀 **Live Server** - FastAPI backend + Flask UI for interactive sessions
## Installation
```bash
pip install personaut
```
### Optional Dependencies
```bash
# For Gemini model provider
pip install personaut[gemini]
# For AWS Bedrock provider
pip install personaut[bedrock]
# For live interaction server
pip install personaut[server]
# For development
pip install personaut[dev]
# Install everything
pip install personaut[all]
```
## Quick Start
### Creating an Individual
```python
import personaut
# Create an individual with personality traits and emotional state
sarah = personaut.create_individual(
name="Sarah",
traits={"warmth": 0.8, "dominance": 0.4, "sensitivity": 0.7},
emotional_state={"cheerful": 0.6, "creative": 0.5},
)
# Update emotional state
sarah.change_emotion("anxious", 0.3)
# Access traits
print(sarah.get_high_traits()) # [('warmth', 0.8), ...]
# Get dominant emotion (respects active mask)
print(sarah.get_dominant_emotion()) # ('cheerful', 0.6)
```
### Using Masks for Contextual Behavior
```python
from personaut.masks import PROFESSIONAL_MASK
# Add a professional mask
sarah.add_mask(PROFESSIONAL_MASK)
sarah.activate_mask("professional")
# Emotional state is now filtered through the mask
modified_state = sarah.get_emotional_state() # Suppresses strong emotions
# Get raw state without mask
raw_state = sarah.get_raw_emotional_state()
```
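Conceptually, a mask can be thought of as a set of per-emotion offsets applied to the raw state and clamped to the [0, 1] range. A minimal hypothetical sketch of that filtering (the names and behavior here are assumptions for illustration, not the actual personaut internals):

```python
# Hypothetical sketch: a mask maps emotion names to offsets that are
# added to the raw intensities, with the result clamped to [0, 1].
def apply_mask(raw_state: dict, modifications: dict) -> dict:
    return {
        emotion: min(1.0, max(0.0, value + modifications.get(emotion, 0.0)))
        for emotion, value in raw_state.items()
    }

raw = {"angry": 0.75, "content": 0.3}
professional = {"angry": -0.5, "content": 0.2}
print(apply_mask(raw, professional))  # {'angry': 0.25, 'content': 0.5}
```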
### Running a Conversation Simulation
```python
import personaut
# Create individuals
sarah = personaut.create_individual(name="Sarah")
mike = personaut.create_individual(name="Mike")
# Create situation
situation = personaut.create_situation(
modality=personaut.types.modality.TEXT_MESSAGE,
description='Catching up after a long time'
)
# Create and run simulation
simulation = personaut.create_simulation(
situation=situation,
individuals=[sarah, mike],
type=personaut.simulations.types.CONVERSATION,
style=personaut.simulations.styles.SCRIPT
)
simulation.run(num=5, dir='./output/')
```
### Situational Facts
```python
from personaut.facts import FactExtractor, LLMFactExtractor
# Regex-based extraction (fast, deterministic)
extractor = FactExtractor()
ctx = extractor.extract(
"A busy coffee shop in downtown Miami around 3pm. "
"80% capacity with a line of 5 people."
)
print(ctx.get_value("venue_type")) # "coffee shop"
print(ctx.get_value("capacity_percent")) # 80
# LLM-based extraction (richer, more nuanced)
llm_extractor = LLMFactExtractor(llm_client=your_client)
ctx = await llm_extractor.extract(
"We grabbed coffee at the corner spot. Super packed, great vibe."
)
# Generate embedding text
print(ctx.to_embedding_text())
```
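To illustrate what deterministic, regex-based extraction can look like, here is a hypothetical sketch (the real `FactExtractor` covers far more categories and patterns):

```python
import re

# Hypothetical sketch of regex-based fact extraction: each fact name is
# paired with a pattern whose first group captures the value.
PATTERNS = {
    "capacity_percent": re.compile(r"(\d+)%\s+capacity"),
    "line_length": re.compile(r"line of (\d+) people"),
}

def extract_facts(text: str) -> dict:
    facts = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            facts[name] = int(match.group(1))
    return facts

facts = extract_facts("80% capacity with a line of 5 people.")
print(facts)  # {'capacity_percent': 80, 'line_length': 5}
```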
### Live Interactive Chat
```python
import personaut
from personaut.server import LiveInteractionServer
# Create server and add individual
server = LiveInteractionServer()
server.add_individual(sarah)
# Start server
server.start(api_port=8000, ui_port=5000)
# Access:
# - UI: http://localhost:5000
# - API: http://localhost:8000/docs
```
## Emotional System
The emotion system models 36 discrete emotions organized into 6 categories:
| Category | Emotions |
|----------|----------|
| **Anger/Mad** | hostile, hurt, angry, selfish, hateful, critical |
| **Sad/Sadness** | guilty, ashamed, depressed, lonely, bored, apathetic |
| **Fear/Scared** | rejected, confused, submissive, insecure, anxious, helpless |
| **Joy/Happiness** | excited, sensual, energetic, cheerful, creative, hopeful |
| **Powerful/Confident** | proud, respected, appreciated, important, faithful, satisfied |
| **Peaceful/Calm** | content, thoughtful, intimate, loving, trusting, nurturing |
```python
from personaut.emotions import EmotionalState, ANXIOUS, HOPEFUL
state = EmotionalState()
state.change_emotion(ANXIOUS, 0.6)
state.change_emotion(HOPEFUL, 0.8)
# Query dominant emotion
dominant, value = state.get_dominant() # ('hopeful', 0.8)
```
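A rough mental model for this API is a dictionary of intensities with clamped updates. A hypothetical sketch (not the actual `EmotionalState` implementation):

```python
# Hypothetical sketch of an emotional-state container: intensities live
# in [0, 1], updates are deltas, and the dominant emotion is the max.
class SimpleEmotionalState:
    def __init__(self):
        self.values = {}  # emotion name -> intensity in [0, 1]

    def change_emotion(self, name: str, delta: float) -> None:
        current = self.values.get(name, 0.0)
        self.values[name] = min(1.0, max(0.0, current + delta))

    def get_dominant(self):
        return max(self.values.items(), key=lambda kv: kv[1])

state = SimpleEmotionalState()
state.change_emotion("anxious", 0.6)
state.change_emotion("hopeful", 0.8)
print(state.get_dominant())  # ('hopeful', 0.8)
```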
## Personality Traits
Based on the 16PF model with 17 traits that influence emotional transitions:
```python
import personaut
individual = personaut.create_individual(name="Sarah")
# High warmth = more approachable, friendly
individual.set_trait("warmth", 0.8)
# High emotional stability = less reactive to stress
individual.set_trait("emotional_stability", 0.7)
```
## Memory System
Store and retrieve memories with emotional context and trust-gated access:
```python
from personaut.memory import (
create_individual_memory,
create_shared_memory,
create_private_memory,
InMemoryVectorStore,
search_memories,
)
# Individual memory with situational context
memory = create_individual_memory(
owner_id="sarah_123",
description="Met Alex at the coffee shop",
context=ctx, # SituationalContext from facts extraction
salience=0.8,
)
# Shared memory between multiple people
shared = create_shared_memory(
description="Team dinner at the Italian restaurant",
participant_ids=["sarah_123", "mike_456"],
perspectives={
"sarah_123": "Great food, but Mike was late",
"mike_456": "Traffic was terrible",
},
)
# Private memory with trust threshold
private = create_private_memory(
owner_id="sarah_123",
description="My anxiety about the presentation",
trust_threshold=0.8, # Only shared with high-trust individuals
)
# Store and search memories
store = InMemoryVectorStore()
store.store(memory, embedding=[...]) # Vector from embedding model
results = search_memories(
store=store,
query="coffee meetings",
embed_func=my_embed_function,
trust_level=0.5, # Filters private memories
)
```
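A rough sketch of how trust-gated search might combine similarity ranking with a trust filter (hypothetical code, not the actual `search_memories` implementation):

```python
import math

# Hypothetical sketch: rank memories by cosine similarity to the query
# embedding, but hide memories whose trust threshold exceeds the
# caller's trust level.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(memories, query_vec, trust_level, top_k=3):
    visible = [m for m in memories if trust_level >= m.get("trust_threshold", 0.0)]
    visible.sort(key=lambda m: cosine(m["embedding"], query_vec), reverse=True)
    return visible[:top_k]

memories = [
    {"description": "coffee with Alex", "embedding": [1.0, 0.0]},
    {"description": "presentation anxiety", "embedding": [0.9, 0.1],
     "trust_threshold": 0.8},
]
results = search(memories, query_vec=[1.0, 0.0], trust_level=0.5)
print([m["description"] for m in results])  # ['coffee with Alex']
```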
## Masks and Triggers
Masks modify emotional expression based on context. Triggers activate responses when conditions are met:
```python
from personaut.masks import (
create_mask,
PROFESSIONAL_MASK,
STOIC_MASK,
)
from personaut.triggers import (
create_emotional_trigger,
create_situational_trigger,
)
# Apply professional mask in office settings
if PROFESSIONAL_MASK.should_trigger("Going to an office meeting"):
    modified_state = PROFESSIONAL_MASK.apply(emotional_state)
    # Suppresses anger, boosts composure
# Create custom mask
interview_mask = create_mask(
name="interview",
emotional_modifications={"anxious": -0.3, "content": 0.2},
trigger_situations=["interview", "formal"],
)
# Emotional trigger: activate stoic mask when anxiety is high
anxiety_trigger = create_emotional_trigger(
description="High anxiety response",
rules=[{"emotion": "anxious", "threshold": 0.8, "operator": ">"}],
response=STOIC_MASK,
)
if anxiety_trigger.check(emotional_state):
    calmed_state = anxiety_trigger.fire(emotional_state)
# Situational trigger: increase anxiety in dark spaces
dark_trigger = create_situational_trigger(
description="Dark space phobia",
keywords=["dark", "basement", "cave"],
response={"anxious": 0.3, "helpless": 0.2},
)
```
## Relationships
Model trust dynamics between individuals and query relationship networks:
```python
from personaut.relationships import (
create_relationship,
RelationshipNetwork,
get_trust_level,
TrustLevel,
)
# Create relationship with asymmetric trust
rel = create_relationship(
individual_ids=["sarah", "mike"],
trust={"sarah": 0.8, "mike": 0.5}, # Sarah trusts Mike more
history="Roommates in college for 2 years",
relationship_type="friends",
)
# Query trust
trust = rel.get_trust("sarah", "mike") # 0.8
mutual = rel.get_mutual_trust("sarah", "mike") # 0.65
# Update trust after an event
rel.update_trust("mike", "sarah", 0.2, "helped during crisis")
# Check trust level
level = rel.get_trust_level("sarah", "mike")
if level == TrustLevel.HIGH:
    ...  # Sarah will share private memories with Mike

# Build a relationship network
network = RelationshipNetwork()
network.add_relationship(rel)
network.add_relationship(create_relationship(["mike", "carol"], trust={"mike": 0.7, "carol": 0.6}))
# Find connection path
path = network.find_path("sarah", "carol") # ['sarah', 'mike', 'carol']
path_trust = network.calculate_path_trust(path) # Trust decays along path
```
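One simple way to model trust decay along a path is multiplicative decay over each hop. A hypothetical sketch (not the actual `calculate_path_trust` implementation):

```python
# Hypothetical sketch: trust along a connection path decays
# multiplicatively with every hop.
def path_trust(pairwise_trust, path):
    trust = 1.0
    for a, b in zip(path, path[1:]):
        trust *= pairwise_trust[(a, b)]
    return trust

edges = {("sarah", "mike"): 0.8, ("mike", "carol"): 0.5}
print(path_trust(edges, ["sarah", "mike", "carol"]))  # 0.4
```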
## Situations
Define the context for simulations with modality, location, and structured context:
```python
from datetime import datetime
from personaut.situations import (
create_situation,
SituationContext,
create_environment_context,
)
from personaut.types.modality import Modality
# Create a situation
situation = create_situation(
modality=Modality.IN_PERSON,
description="Meeting at a coffee shop to discuss a project",
time=datetime.now(),
location="Miami, FL",
context={"atmosphere": "relaxed"},
)
# Query modality characteristics
if situation.is_synchronous():
    # Real-time communication expected
    traits = situation.get_modality_traits()
    print(f"Visual cues: {traits['visual_cues']}")
# Build structured context with validation
ctx = create_environment_context(
lighting="dim",
noise_level="quiet",
indoor=True,
private=True,
)
result = ctx.validate()
if result.valid:
    # Context data meets schema requirements
    pass
```
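Schema validation of a structured context can be as simple as checking each field against a set of allowed values. A hypothetical sketch (the real `SituationContext` validation is richer):

```python
# Hypothetical sketch: validate known context fields against allowed
# values; unknown fields pass through untouched.
ALLOWED = {
    "lighting": {"bright", "dim", "dark"},
    "noise_level": {"quiet", "moderate", "loud"},
}

def validate(context: dict):
    errors = [
        f"{key}: {value!r} not in {sorted(ALLOWED[key])}"
        for key, value in context.items()
        if key in ALLOWED and value not in ALLOWED[key]
    ]
    return (len(errors) == 0, errors)

valid, errors = validate({"lighting": "dim", "noise_level": "quiet", "indoor": True})
print(valid)  # True
```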
## Documentation
| Document | Description |
|----------|-------------|
| [PERSONAS.md](PERSONAS.md) | Main agent guidelines |
| [docs/EMOTIONS.md](docs/EMOTIONS.md) | Emotion system reference |
| [docs/TRAITS.md](docs/TRAITS.md) | Trait system reference |
| [docs/FACTS.md](docs/FACTS.md) | Situational context and fact extraction |
| [docs/MEMORY.md](docs/MEMORY.md) | Memory system and vector storage |
| [docs/PROMPTS.md](docs/PROMPTS.md) | Prompt generation |
| [docs/SIMULATIONS.md](docs/SIMULATIONS.md) | Simulation types |
| [docs/LIVE_INTERACTIONS.md](docs/LIVE_INTERACTIONS.md) | Server architecture |
| [docs/STYLE_GUIDE.md](docs/STYLE_GUIDE.md) | Code conventions |
## Development
```bash
# Clone repository
git clone https://github.com/personaut/python-pdk.git
cd python-pdk
# Install with dev dependencies
pip install hatch
hatch shell
# Run tests
hatch test
# Run linters
hatch fmt
# Type check
hatch run type
```
See [CONTRIBUTING.md](CONTRIBUTING.md) for full development guidelines.
## Requirements
- Python 3.10+
- See [pyproject.toml](pyproject.toml) for dependencies
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
| text/markdown | Personaut Team | null | null | null | null | agents, ai, emotional-intelligence, llm, personality, personas, simulation | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.25.0",
"numpy>=1.24.0",
"pydantic>=2.0",
"sqlalchemy>=2.0",
"sqlite-vec>=0.1.0",
"anthropic>=0.18.0; extra == \"all\"",
"boto3>=1.28.0; extra == \"all\"",
"commitizen>=3.12.0; extra == \"all\"",
"fastapi>=0.104.0; extra == \"all\"",
"flask>=3.0.0; extra == \"all\"",
"google-generativeai>=0.3.0; extra == \"all\"",
"hypothesis>=6.90.0; extra == \"all\"",
"jinja2>=3.1.0; extra == \"all\"",
"mypy>=1.7.0; extra == \"all\"",
"openai>=1.0.0; extra == \"all\"",
"pre-commit>=3.5.0; extra == \"all\"",
"pytest-asyncio>=0.21.0; extra == \"all\"",
"pytest-cov>=4.1.0; extra == \"all\"",
"pytest>=7.4.0; extra == \"all\"",
"ruff>=0.1.0; extra == \"all\"",
"uvicorn[standard]>=0.24.0; extra == \"all\"",
"websockets>=12.0; extra == \"all\"",
"anthropic>=0.18.0; extra == \"anthropic\"",
"boto3>=1.28.0; extra == \"bedrock\"",
"commitizen>=3.12.0; extra == \"dev\"",
"hypothesis>=6.90.0; extra == \"dev\"",
"mypy>=1.7.0; extra == \"dev\"",
"pre-commit>=3.5.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"google-generativeai>=0.3.0; extra == \"gemini\"",
"openai>=1.0.0; extra == \"openai\"",
"fastapi>=0.104.0; extra == \"server\"",
"flask>=3.0.0; extra == \"server\"",
"jinja2>=3.1.0; extra == \"server\"",
"uvicorn[standard]>=0.24.0; extra == \"server\"",
"websockets>=12.0; extra == \"server\""
] | [] | [] | [] | [
"Documentation, https://github.com/personaut/python-pdk#readme",
"Repository, https://github.com/personaut/python-pdk",
"Issues, https://github.com/personaut/python-pdk/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T06:06:35.976121 | personaut-0.3.3.tar.gz | 469,458 | 8e/da/6daa4f0aab7d9d29dd7eedb9f225ff7439d6a52a82d0c195eefc903c715b/personaut-0.3.3.tar.gz | source | sdist | null | false | 0b9c0072886677284af9ddc0740c799c | f447dfd9ddab80bd95a3c0392719a68f4a60114cfde6d93014f7ee6028228fd3 | 8eda6daa4f0aab7d9d29dd7eedb9f225ff7439d6a52a82d0c195eefc903c715b | Apache-2.0 | [
"LICENSE"
] | 205 |
2.3 | regdiffusion | 0.2.1 | Gene Regulatory Networks Inference using diffusion model | # RegDiffusion <a href="https://tuftsbcb.github.io/RegDiffusion/"><img src="https://raw.githubusercontent.com/TuftsBCB/RegDiffusion/master/docs/_static/rd_logo_horizontal.png" align="right" alt="logo" width="200" height = "56" style = "border: none; float: right;"></a>
[](https://pepy.tech/project/regdiffusion)
[](https://pepy.tech/project/regdiffusion)

RegDiffusion is a very fast unsupervised regulatory network inference algorithm (in the same family as GENIE3 and GRNBoost2) based on a probabilistic diffusion model. It scales well with the number of genes and can rapidly (under 5 minutes) predict biologically verifiable links from large single-cell RNA-seq data with 14,000+ genes.
```
Zhu H, Slonim D. From Noise to Knowledge: Diffusion Probabilistic Model-Based Neural Inference of Gene Regulatory Networks. J Comput Biol. 2024 Nov;31(11):1087-1103. doi: 10.1089/cmb.2024.0607. Epub 2024 Oct 10. PMID: 39387266; PMCID: PMC11698671.
```
## Installation
RegDiffusion is on pypi.
```
pip install regdiffusion
```
Check out [this tutorial](https://tuftsbcb.github.io/RegDiffusion/quick_tour.html) for a quick tour of how to use RegDiffusion! If you would like to integrate results from RegDiffusion into the SCENIC pipeline, checkout [this tutorial](https://tuftsbcb.github.io/RegDiffusion/downstream_with_pyscenic.html).
## New in v0.2
- **Memory-efficient mode**: Set `memory_efficient=True` in `RegDiffusionTrainer` to reduce peak GPU memory by ~45%, making it easier to work with large gene sets on consumer GPUs (You can now fit 20k genes on a 16GB GPU).
- **Sparse matrix support**: `RegDiffusionTrainer` now accepts scipy sparse matrices directly (e.g., `adata.X`), enabling training on datasets with 1M+ cells without excessive memory usage.

## Inferred Networks from RegDiffusion
Here are two examples of networks inferred by RegDiffusion. The networks are consistent with the existing literature and across datasets.

## Inference Speed
Inference on networks with 15,000 genes takes under 5 minutes on an A100 GPU. In contrast, previous VAE-based models take more than 4 hours on the same device. Even if you don't have access to those fancy GPU cards, RegDiffusion still works: inference on the same large network takes roughly 3 hours on a mid-range 12-core CPU.
## CLI tool
regdiffusion now ships with a CLI tool. It takes a raw count matrix as input (unlike the main API, which expects log-transformed data) and returns a table of inferred edges.
```
usage: regdiffusion [-h] [--output OUTPUT] [--top_gene_percentile TOP_GENE_PERCENTILE] [--k K] [--workers WORKERS] input
Infer a gene regulatory network (GRN) from a single-cell count dataset.
positional arguments:
input Input single-cell count dataset file (CSV or H5AD format).
options:
-h, --help show this help message and exit
--output OUTPUT Output file path for the edgelist (CSV). Default: rd_grn.csv
--top_gene_percentile TOP_GENE_PERCENTILE
Percentile cutoff to filter weak edges (e.g., 50 for the top 50%). Default: 50
--k K Number of edges per gene to extract (-1 for all edges). Default: -1
--workers WORKERS Number of workers to use for edgelist extraction. Default: 4
```
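The `--top_gene_percentile` option keeps only the strongest edges. One way such a cutoff could work, sketched in plain Python (hypothetical; the actual implementation may differ):

```python
# Hypothetical sketch of percentile-based edge filtering: keep edges
# whose absolute weight is at or above the percentile cutoff.
def filter_edges(edges, top_percentile):
    # edges: list of (source, target, weight)
    weights = sorted(abs(w) for _, _, w in edges)
    cut_index = int(len(weights) * (100 - top_percentile) / 100)
    cutoff = weights[cut_index]
    return [e for e in edges if abs(e[2]) >= cutoff]

edges = [("g1", "g2", 0.9), ("g1", "g3", 0.1), ("g2", "g3", 0.5), ("g3", "g1", 0.05)]
print(filter_edges(edges, top_percentile=50))  # [('g1', 'g2', 0.9), ('g2', 'g3', 0.5)]
```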
## Citation
If you find our package useful, consider citing our paper! =)
```
@article{zhu2024noise,
title={From Noise to Knowledge: Diffusion Probabilistic Model-Based Neural Inference of Gene Regulatory Networks},
author={Zhu, Hao and Slonim, Donna},
journal={Journal of Computational Biology},
volume={31},
number={11},
pages={1087--1103},
year={2024},
publisher={Mary Ann Liebert, Inc., publishers 140 Huguenot Street, 3rd Floor New~…}
}
``` | text/markdown | null | Hao Zhu <haozhu233@gmail.com>, Donna Slonim <donna.slonim@tufts.edu> | null | Hao Zhu <haozhu233@gmail.com> | null | null | [
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | null | [] | [
"regdiffusion"
] | [] | [
"numpy>=1.16.5",
"pandas>=1.1.1",
"torch",
"tqdm",
"scanpy",
"scikit-learn",
"h5py",
"pyvis"
] | [] | [] | [] | [
"Home, https://github.com/TuftsBCB/RegDiffusion"
] | python-requests/2.32.3 | 2026-02-21T06:06:15.301789 | regdiffusion-0.2.1.tar.gz | 13,055,114 | 68/dc/d53a79f33108091cce020599bc7499d428bb1b47373e0e4bf04c2f59e477/regdiffusion-0.2.1.tar.gz | source | sdist | null | false | f9131c363ffa4cdc22f5696127de5dcf | 28d3ba556df1ca2bd05ecd134470881ede654365a78558af0f6cd7b8be38743f | 68dcd53a79f33108091cce020599bc7499d428bb1b47373e0e4bf04c2f59e477 | null | [] | 209 |
2.4 | shafikul-cli | 1.0.0 | CLI tool for FastAPI scaffolding with router, models, database, and templates. | # Shafikul CLI
**Shafikul CLI** is a command-line tool for quickly scaffolding **FastAPI projects**.
It generates routers, models, database connections, and HTML templates with a proper folder structure, and automatically updates `main.py` and the `.env` file.
---
## Features
- ✅ Create FastAPI routers, models, and database modules
- ✅ Auto-generate `main.py` with imports and DB setup
- ✅ Auto-generate or update `.env` file
- ✅ Create HTML templates (`index.html` or custom names)
- ✅ Interactive CLI (numeric and text options)
- ✅ Colored console output
- ✅ Version and About commands (`--version` / `--about`)
---
## Installation
Install from PyPI:
```bash
pip install shafikul-cli
```
Or Locally (editable):
```sh
git clone https://github.com/build-with-shafikul/shafikul_cli.git
cd shafikul-cli
pip install -e .
```
## Show version
```sh
shafikul --version
```
## About CLI
```sh
shafikul --about
```
## Create resources
```sh
shafikul create app
```
### Interactive Options

- `1`: router
- `2`: models
- `3`: database
- `4`: html

You can select by number or by name.

Example for an HTML template:

```sh
shafikul create app html
Enter file name default [index.html]: home.html
```

Database creation also auto-updates `main.py` and `.env`.
# Project Structure
```sh
project_root/
├── app/
│ ├── models.py
│ └── database.py
├── router/
│ └── users.py
├── templates/
│ └── index.html
├── main.py
└── .env
```
# Development
If you want to contribute:

1. Fork the repo
2. Create a feature branch: `git checkout -b feature-name`
3. Commit your changes: `git commit -m "Add feature"`
4. Push to the branch: `git push origin feature-name`
5. Open a Pull Request
# License
`GPL-3.0 license`
# Author
Md Shafikul Islam
<a href="https://github.com/build-with-shafikul" target="_blank">GitHub ↗</a>
| text/markdown | Md Shafikul Islam | buildwithshafikul@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/build-with-shafikul/shafikul_cli | null | >=3.9 | [] | [] | [] | [
"typer",
"rich",
"keyboard",
"pyautogui",
"pyperclip"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.7 | 2026-02-21T06:04:35.763127 | shafikul_cli-1.0.0.tar.gz | 20,653 | 2f/82/43a73e477b7e66d9dd46f454c6c8e22f409ddf680da26e81afedd497ea0b/shafikul_cli-1.0.0.tar.gz | source | sdist | null | false | 9d5f4d8bb6db4b6ccdf5f2dcb95955ec | 14c76740ea39b64e1386e3dd2a6cdc70600507fdc1601ced7f50e472a98d0ab4 | 2f8243a73e477b7e66d9dd46f454c6c8e22f409ddf680da26e81afedd497ea0b | null | [
"LICENSE"
] | 209 |
2.4 | transformer-engine-cu13 | 2.12.0 | Transformer acceleration library | ..
Copyright (c) 2022-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
See LICENSE for license information.
|License|
Transformer Engine
==================
`Quickstart <#examples>`_ | `Installation <#installation>`_ | `User Guide <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html>`_ | `Examples <https://github.com/NVIDIA/TransformerEngine/tree/main/examples>`_ | `FP8 Convergence <#fp8-convergence>`_ | `Integrations <#integrations>`_ | `Release notes <https://docs.nvidia.com/deeplearning/transformer-engine/documentation-archive.html>`_
Latest News
===========
* [11/2025] `NVIDIA Blackwell Architecture Sweeps MLPerf Training v5.1 Benchmarks <https://developer.nvidia.com/blog/nvidia-blackwell-architecture-sweeps-mlperf-training-v5-1-benchmarks/>`_
* [11/2025] `Scale Biology Transformer Models with PyTorch and NVIDIA BioNeMo Recipes <https://developer.nvidia.com/blog/scale-biology-transformer-models-with-pytorch-and-nvidia-bionemo-recipes/>`_
* [11/2025] `FP8 Training of Large-Scale RL Models <https://lmsys.org/blog/2025-11-25-fp8-rl/>`_
* [09/2025] `Pretraining Large Language Models with NVFP4 <https://www.arxiv.org/pdf/2509.25149>`_
* [09/2025] `Native FP8 Mixed Precision Training for Ling 2.0, Open Sourced! <https://huggingface.co/blog/im0qianqian/ling-mini-2-fp8-mixed-precision-training-solution>`_
* [09/2025] `Faster Training Throughput in FP8 Precision with NVIDIA NeMo <https://developer.nvidia.com/blog/faster-training-throughput-in-fp8-precision-with-nvidia-nemo/>`_
* [08/2025] `How we built DeepL's next-generation LLMs with FP8 for training and inference <https://www.deepl.com/en/blog/tech/next-generation-llm-fp8-training>`_
* [08/2025] `NVFP4 Trains with Precision of 16-bit and Speed and Efficiency of 4-bit <https://developer.nvidia.com/blog/nvfp4-trains-with-precision-of-16-bit-and-speed-and-efficiency-of-4-bit/>`_
`Previous News <#previous-news>`_
What is Transformer Engine?
===========================
.. overview-begin-marker-do-not-remove
Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including
using 8-bit floating point (FP8) precision on Hopper, Ada, and Blackwell GPUs, to provide better
performance with lower memory utilization in both training and inference. TE provides a collection
of highly optimized building blocks for popular Transformer architectures and an automatic mixed
precision-like API that can be used seamlessly with your framework-specific code. TE also includes a
framework agnostic C++ API that can be integrated with other deep learning libraries to enable FP8
support for Transformers.
As the number of parameters in Transformer models continues to grow, training and inference for
architectures such as BERT, GPT and T5 become very memory and compute-intensive. Most deep learning
frameworks train with FP32 by default. This is not essential, however, to achieve full accuracy for
many deep learning models. Using mixed-precision training, which combines single-precision (FP32)
with lower precision (e.g. FP16) format when training a model, results in significant speedups with
minimal differences in accuracy as compared to FP32 training. FP8 precision was introduced with the Hopper GPU
architecture, offering improved performance over FP16 with no degradation in accuracy. Although all major
deep learning frameworks support FP16, FP8 support is not available natively in frameworks today.
TE addresses the problem of FP8 support by providing APIs that integrate with popular Large Language
Model (LLM) libraries. It provides a Python API consisting of modules to easily build a Transformer
layer as well as a framework-agnostic library in C++ including structs and kernels needed for FP8
support. Modules provided by TE internally maintain scaling factors and other values needed for FP8
training, greatly simplifying mixed precision training for users.
Highlights
==========
* Easy-to-use modules for building Transformer layers with FP8 support
* Optimizations (e.g. fused kernels) for Transformer models
* Support for FP8 on NVIDIA Hopper, Ada, and Blackwell GPUs
* Support for optimizations across all precisions (FP16, BF16) on NVIDIA Ampere GPU architecture generations and later
Examples
========
PyTorch
^^^^^^^
.. code-block:: python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe
# Set dimensions.
in_features = 768
out_features = 3072
hidden_size = 2048
# Initialize model and inputs.
model = te.Linear(in_features, out_features, bias=True)
inp = torch.randn(hidden_size, in_features, device="cuda")
# Create an FP8 recipe. Note: All input args are optional.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)
# Enable autocasting for the forward pass
with te.autocast(enabled=True, recipe=fp8_recipe):
    out = model(inp)

loss = out.sum()
loss.backward()
JAX
^^^
Flax
~~~~
.. code-block:: python
import flax
import jax
import jax.numpy as jnp
import transformer_engine.jax as te
import transformer_engine.jax.flax as te_flax
from transformer_engine.common import recipe
BATCH = 32
SEQLEN = 128
HIDDEN = 1024
# Initialize RNG and inputs.
rng = jax.random.PRNGKey(0)
init_rng, data_rng = jax.random.split(rng)
inp = jax.random.normal(data_rng, [BATCH, SEQLEN, HIDDEN], jnp.float32)
# Create an FP8 recipe. Note: All input args are optional.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)
# Enable autocasting for the forward pass
with te.autocast(enabled=True, recipe=fp8_recipe):
    model = te_flax.DenseGeneral(features=HIDDEN)

    def loss_fn(params, other_vars, inp):
        out = model.apply({'params': params, **other_vars}, inp)
        return jnp.mean(out)

    # Initialize models.
    variables = model.init(init_rng, inp)
    other_variables, params = flax.core.pop(variables, 'params')

    # Construct the forward and backward function
    fwd_bwd_fn = jax.value_and_grad(loss_fn, argnums=(0, 1))

    for _ in range(10):
        loss, (param_grads, other_grads) = fwd_bwd_fn(params, other_variables, inp)
For a more comprehensive tutorial, check out our `Quickstart Notebook <https://github.com/NVIDIA/TransformerEngine/blob/main/docs/examples/quickstart.ipynb>`_.
.. overview-end-marker-do-not-remove
Installation
============
System Requirements
^^^^^^^^^^^^^^^^^^^
* **Hardware:** Blackwell, Hopper, Grace Hopper/Blackwell, Ada, Ampere
* **OS:** Linux (official), WSL2 (limited support)
* **Software:**
* CUDA: 12.1+ (Hopper/Ada/Ampere), 12.8+ (Blackwell) with compatible NVIDIA drivers
* cuDNN: 9.3+
* Compiler: GCC 9+ or Clang 10+ with C++17 support
* Python: 3.12 recommended
* **Source Build Requirements:** CMake 3.18+, Ninja, Git 2.17+, pybind11 2.6.0+
* **Notes:** FP8 features require Compute Capability 8.9+ (Ada/Hopper/Blackwell)
Installation Methods
^^^^^^^^^^^^^^^^^^^^
Docker (Recommended)
^^^^^^^^^^^^^^^^^^^^
The quickest way to get started with Transformer Engine is by using Docker images on
`NVIDIA GPU Cloud (NGC) Catalog <https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch>`_.
For example to use the NGC PyTorch container interactively,
.. code-block:: bash
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:25.08-py3
For example to use the NGC JAX container interactively,
.. code-block:: bash
docker run --gpus all -it --rm nvcr.io/nvidia/jax:25.08-py3
Here 25.08 (corresponding to the August 2025 release) is the container version.
**Benefits of using NGC containers:**
* All dependencies pre-installed with compatible versions and optimized configurations
* NGC PyTorch 23.08+ containers include FlashAttention-2
pip Installation
^^^^^^^^^^^^^^^^
**Prerequisites for pip installation:**
* A compatible C++ compiler
* CUDA Toolkit with cuDNN and NVCC (NVIDIA CUDA Compiler) if installing from source.
To install the latest stable version with pip:
.. code-block:: bash
# For PyTorch integration
pip install --no-build-isolation transformer_engine[pytorch]
# For JAX integration
pip install --no-build-isolation transformer_engine[jax]
# For both frameworks
pip install --no-build-isolation transformer_engine[pytorch,jax]
Alternatively, install directly from the GitHub repository:
.. code-block:: bash
pip install --no-build-isolation git+https://github.com/NVIDIA/TransformerEngine.git@stable
When installing from GitHub, you can explicitly specify frameworks using the environment variable:
.. code-block:: bash
NVTE_FRAMEWORK=pytorch,jax pip install --no-build-isolation git+https://github.com/NVIDIA/TransformerEngine.git@stable
conda Installation
^^^^^^^^^^^^^^^^^^
To install the latest stable version with conda from conda-forge:
.. code-block:: bash
# For PyTorch integration
conda install -c conda-forge transformer-engine-torch
# JAX integration (coming soon)
Source Installation
^^^^^^^^^^^^^^^^^^^
`See the installation guide <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html#installation-from-source>`_
Environment Variables
^^^^^^^^^^^^^^^^^^^^^
These environment variables can be set before installation to customize the build process:
* **CUDA_PATH**: Path to CUDA installation
* **CUDNN_PATH**: Path to cuDNN installation
* **CXX**: Path to C++ compiler
* **NVTE_FRAMEWORK**: Comma-separated list of frameworks to build for (e.g., ``pytorch,jax``)
* **MAX_JOBS**: Limit number of parallel build jobs (default varies by system)
* **NVTE_BUILD_THREADS_PER_JOB**: Control threads per build job
* **NVTE_CUDA_ARCHS**: Semicolon-separated list of CUDA compute architectures to compile for (e.g., ``80;90`` for A100 and H100). If not set, automatically determined based on CUDA version. Setting this can significantly reduce build time and binary size.
Compiling with FlashAttention
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Transformer Engine supports both FlashAttention-2 and FlashAttention-3 in PyTorch for improved performance. FlashAttention-3 was added in release v1.11 and is prioritized over FlashAttention-2 when both are present in the environment.
You can verify which FlashAttention version is being used by setting these environment variables:
.. code-block:: bash
NVTE_DEBUG=1 NVTE_DEBUG_LEVEL=1 python your_script.py
It is a known issue that FlashAttention-2 compilation is resource-intensive and requires a large amount of RAM (see `bug <https://github.com/Dao-AILab/flash-attention/issues/358>`_), which may lead to out of memory errors during the installation of Transformer Engine. Please try setting **MAX_JOBS=1** in the environment to circumvent the issue.
.. troubleshooting-begin-marker-do-not-remove
Troubleshooting
^^^^^^^^^^^^^^^
**Common Issues and Solutions:**
1. **ABI Compatibility Issues:**
* **Symptoms:** ``ImportError`` with undefined symbols when importing transformer_engine
* **Solution:** Ensure PyTorch and Transformer Engine are built with the same C++ ABI setting. Rebuild PyTorch from source with matching ABI.
* **Context:** If you're using PyTorch built with a different C++ ABI than your system's default, you may encounter these undefined symbol errors. This is particularly common with pip-installed PyTorch outside of containers.
2. **Missing Headers or Libraries:**
* **Symptoms:** CMake errors about missing headers (``cudnn.h``, ``cublas_v2.h``, ``filesystem``, etc.)
* **Solution:** Install missing development packages or set environment variables to point to correct locations:
.. code-block:: bash
export CUDA_PATH=/path/to/cuda
export CUDNN_PATH=/path/to/cudnn
* If CMake can't find a C++ compiler, set the ``CXX`` environment variable.
* Ensure all paths are correctly set before installation.
3. **Build Resource Issues:**
* **Symptoms:** Compilation hangs, system freezes, or out-of-memory errors
* **Solution:** Limit parallel builds:
.. code-block:: bash
MAX_JOBS=1 NVTE_BUILD_THREADS_PER_JOB=1 pip install ...
4. **Verbose Build Logging:**
* For detailed build logs to help diagnose issues:
.. code-block:: bash
cd transformer_engine
pip install -v -v -v --no-build-isolation .
.. troubleshooting-end-marker-do-not-remove
Breaking Changes
================
v1.7: Padding mask definition for PyTorch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In an effort to unify the definition and usage of the attention mask across all three frameworks in Transformer Engine, the padding mask semantics changed in our PyTorch implementation: `True` previously meant that the corresponding position is *included* in attention; it now means that the position is *excluded*. Since v1.7, all attention mask types follow the same definition: `True` masks out the corresponding position and `False` includes that position in the attention calculation.
For example:

.. code-block:: python

  # For a batch of 3 sequences where the `a`s, `b`s and `c`s are the useful
  # tokens and the `0`s are the padding tokens,
  [a, a, a, 0, 0,
   b, b, 0, 0, 0,
   c, c, c, c, 0]

  # the padding mask for this batch before v1.7 is
  [ True,  True,  True, False, False,
    True,  True, False, False, False,
    True,  True,  True,  True, False]

  # and for v1.7 onwards it should be
  [False, False, False,  True,  True,
   False, False,  True,  True,  True,
   False, False, False, False,  True]
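For code written against the pre-v1.7 convention, a saved padding mask can be migrated by elementwise negation. A minimal sketch with plain Python lists (with ``torch`` tensors the equivalent is simply ``new_mask = ~old_mask``):

```python
# Pre-v1.7 convention: True = include this position in attention.
old_mask = [
    [True,  True,  True,  False, False],
    [True,  True,  False, False, False],
    [True,  True,  True,  True,  False],
]

# v1.7+ convention: True = mask this position out, so negate elementwise.
new_mask = [[not keep for keep in row] for row in old_mask]

print(new_mask[0])  # [False, False, False, True, True]
```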
FP8 Convergence
===============
FP8 has been tested extensively across different model architectures and configurations and we found **no significant difference** between FP8 and BF16 training loss curves. FP8 has also been validated for accuracy on downstream LLM tasks (e.g. LAMBADA and WikiText). Below are examples of models tested for convergence across different frameworks.
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| Model | Framework | Source |
+============+==================+=========================================================================================================+
| T5-770M | JAX/T5x | https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/t5x#convergence-and-performance|
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| MPT-1.3B | Mosaic Composer | https://www.mosaicml.com/blog/coreweave-nvidia-h100-part-1 |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-5B | JAX/Paxml | https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/pax#h100-results |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-5B | NeMo Framework | Available on request |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| LLama2-7B | Alibaba Pai | https://mp.weixin.qq.com/s/NQT0uKXLbXyh5031zBdeBQ |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| T5-11B | JAX/T5x | Available on request |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| MPT-13B | Mosaic Composer | https://www.databricks.com/blog/turbocharged-training-optimizing-databricks-mosaic-ai-stack-fp8 |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-22B | NeMo Framework | Available on request |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| LLama2-70B | Alibaba Pai | https://mp.weixin.qq.com/s/NQT0uKXLbXyh5031zBdeBQ |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-175B | JAX/Paxml | https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/pax#h100-results |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
Integrations
============
Transformer Engine has been integrated with popular LLM frameworks such as:
* `DeepSpeed <https://github.com/deepspeedai/DeepSpeed/blob/master/tests/unit/runtime/half_precision/test_fp8.py>`_
* `Hugging Face Accelerate <https://huggingface.co/docs/accelerate/main/en/usage_guides/low_precision_training#configuring-transformersengine>`_
* `Lightning <https://github.com/Lightning-AI/lightning/issues/17172>`_
* `MosaicML Composer <https://github.com/mosaicml/composer/releases/tag/v0.13.1>`_
* `NVIDIA JAX Toolbox <https://github.com/NVIDIA/JAX-Toolbox>`_
* `NVIDIA Megatron-LM <https://github.com/NVIDIA/Megatron-LM>`_
* `NVIDIA NeMo Framework <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_
* `Amazon SageMaker Model Parallel Library <https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features-v2-tensor-parallelism.html>`_
* `Levanter <https://github.com/stanford-crfm/levanter>`_
* `GPT-NeoX <https://github.com/EleutherAI/gpt-neox>`_
* `Hugging Face Nanotron <https://github.com/huggingface/nanotron>`_ - Coming soon!
* `Colossal-AI <https://github.com/hpcaitech/ColossalAI>`_ - Coming soon!
* `PeriFlow <https://github.com/friendliai/periflow-python-sdk>`_ - Coming soon!
Contributing
============
We welcome contributions to Transformer Engine! To contribute and open pull requests,
follow the guidelines outlined in the `<CONTRIBUTING.rst>`_ guide.
Papers
======
* `Attention original paper <https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf>`_
* `Megatron-LM tensor parallel <https://arxiv.org/pdf/1909.08053.pdf>`_
* `Megatron-LM sequence parallel <https://arxiv.org/pdf/2205.05198.pdf>`_
* `FP8 Formats for Deep Learning <https://arxiv.org/abs/2209.05433>`_
Videos
======
* `Stable and Scalable FP8 Deep Learning Training on Blackwell | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc24-s62457/>`__
* `Blackwell Numerics for AI | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc25-s72458/>`_
* `Building LLMs: Accelerating Pretraining of Foundational Models With FP8 Precision | GTC 2025 <https://www.nvidia.com/gtc/session-catalog/?regcode=no-ncid&ncid=no-ncid&tab.catalogallsessionstab=16566177511100015Kus&search=zoho#/session/1726152813607001vnYK>`_
* `From FP8 LLM Training to Inference: Language AI at Scale | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc25-s72799/>`_
* `What's New in Transformer Engine and FP8 Training | GTC 2024 <https://www.nvidia.com/en-us/on-demand/session/gtc24-s62457/>`_
* `FP8 Training with Transformer Engine | GTC 2023 <https://www.nvidia.com/en-us/on-demand/session/gtcspring23-s51393>`_
* `FP8 for Deep Learning | GTC 2023 <https://www.nvidia.com/en-us/on-demand/session/gtcspring23-s52166/>`_
* `Inside the Hopper Architecture | GTC 2022 <https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s42663/>`_
.. |License| image:: https://img.shields.io/badge/License-Apache%202.0-blue.svg
:target: https://opensource.org/licenses/Apache-2.0
Previous News
=============
* [06/2025] `Floating Point 8: An Introduction to Efficient, Lower-Precision AI Training <https://developer.nvidia.com/blog/floating-point-8-an-introduction-to-efficient-lower-precision-ai-training/>`_
* [05/2025] `Advanced Optimization Strategies for LLM Training on NVIDIA Grace Hopper <https://developer.nvidia.com/blog/advanced-optimization-strategies-for-llm-training-on-nvidia-grace-hopper/>`_
* [03/2025] `Stable and Scalable FP8 Deep Learning Training on Blackwell | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc25-s72778/>`_
* [03/2025] `Measure and Improve AI Workload Performance with NVIDIA DGX Cloud Benchmarking <https://developer.nvidia.com/blog/measure-and-improve-ai-workload-performance-with-nvidia-dgx-cloud-benchmarking/>`_
.. image:: docs/examples/comparison-fp8-bf16-training-nvidia-dgx-cloud-benchmarking-performance-explorer.jpg
:width: 600
:alt: Comparison of FP8 versus BF16 training, as seen in NVIDIA DGX Cloud Benchmarking Performance Explorer
* [02/2025] `Understanding the Language of Life's Biomolecules Across Evolution at a New Scale with Evo 2 <https://developer.nvidia.com/blog/understanding-the-language-of-lifes-biomolecules-across-evolution-at-a-new-scale-with-evo-2/>`_
* [02/2025] `NVIDIA DGX Cloud Introduces Ready-To-Use Templates to Benchmark AI Platform Performance <https://developer.nvidia.com/blog/nvidia-dgx-cloud-introduces-ready-to-use-templates-to-benchmark-ai-platform-performance/>`_
* [01/2025] `Continued Pretraining of State-of-the-Art LLMs for Sovereign AI and Regulated Industries with iGenius and NVIDIA DGX Cloud <https://developer.nvidia.com/blog/continued-pretraining-of-state-of-the-art-llms-for-sovereign-ai-and-regulated-industries-with-igenius-and-nvidia-dgx-cloud/>`_
* [11/2024] `Developing a 172B LLM with Strong Japanese Capabilities Using NVIDIA Megatron-LM <https://developer.nvidia.com/blog/developing-a-172b-llm-with-strong-japanese-capabilities-using-nvidia-megatron-lm/>`_
* [11/2024] `How FP8 boosts LLM training by 18% on Amazon SageMaker P5 instances <https://aws.amazon.com/blogs/machine-learning/how-fp8-boosts-llm-training-by-18-on-amazon-sagemaker-p5-instances/>`_
* [11/2024] `Efficiently train models with large sequence lengths using Amazon SageMaker model parallel <https://aws.amazon.com/blogs/machine-learning/efficiently-train-models-with-large-sequence-lengths-using-amazon-sagemaker-model-parallel/>`_
* [09/2024] `Reducing AI large model training costs by 30% requires just a single line of code from FP8 mixed precision training upgrades <https://company.hpc-ai.com/blog/reducing-ai-large-model-training-costs-by-30-requires-just-a-single-line-of-code-from-fp8-mixed-precision-training-upgrades>`_
* [05/2024] `Accelerating Transformers with NVIDIA cuDNN 9 <https://developer.nvidia.com/blog/accelerating-transformers-with-nvidia-cudnn-9/>`_
* [03/2024] `Turbocharged Training: Optimizing the Databricks Mosaic AI stack with FP8 <https://www.databricks.com/blog/turbocharged-training-optimizing-databricks-mosaic-ai-stack-fp8>`_
* [03/2024] `FP8 Training Support in SageMaker Model Parallelism Library <https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-release-notes.html>`_
* [12/2023] `New NVIDIA NeMo Framework Features and NVIDIA H200 <https://developer.nvidia.com/blog/new-nvidia-nemo-framework-features-and-nvidia-h200-supercharge-llm-training-performance-and-versatility/>`_
.. image:: docs/examples/H200-NeMo-performance.png
:width: 600
:alt: H200
* [11/2023] `Inflection-2: The Next Step Up <https://inflection.ai/inflection-2>`_
* [11/2023] `Unleashing The Power Of Transformers With NVIDIA Transformer Engine <https://lambdalabs.com/blog/unleashing-the-power-of-transformers-with-nvidia-transformer-engine>`_
* [11/2023] `Accelerating PyTorch Training Workloads with FP8 <https://towardsdatascience.com/accelerating-pytorch-training-workloads-with-fp8-5a5123aec7d7>`_
* [09/2023] `Transformer Engine added to AWS DL Container for PyTorch Training <https://github.com/aws/deep-learning-containers/pull/3315>`_
* [06/2023] `Breaking MLPerf Training Records with NVIDIA H100 GPUs <https://developer.nvidia.com/blog/breaking-mlperf-training-records-with-nvidia-h100-gpus/>`_
* [04/2023] `Benchmarking Large Language Models on NVIDIA H100 GPUs with CoreWeave (Part 1) <https://www.mosaicml.com/blog/coreweave-nvidia-h100-part-1>`_
| text/x-rst | null | null | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10.0 | [] | [] | [] | [
"importlib-metadata>=1.0",
"pydantic",
"packaging",
"pytest>=8.2.1; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T06:04:23.070451 | transformer_engine_cu13-2.12.0-py3-none-manylinux_2_28_x86_64.whl | 189,804,753 | a8/cb/e252e8cbbfa4dc856be79f914a75c0b78561b08273d489e1e397fc525fca/transformer_engine_cu13-2.12.0-py3-none-manylinux_2_28_x86_64.whl | py3 | bdist_wheel | null | false | ea1dade3b8ed018beae4f562757d6fbc | cc63e1592ebebbd8ac000e4f5fb64e96a072c5a99d9850110b8d39dc9b7eeeca | a8cbe252e8cbbfa4dc856be79f914a75c0b78561b08273d489e1e397fc525fca | null | [
"LICENSE"
] | 187 |
2.4 | transformer-engine-cu12 | 2.12.0 | Transformer acceleration library | ..
Copyright (c) 2022-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
See LICENSE for license information.
|License|
Transformer Engine
==================
`Quickstart <#examples>`_ | `Installation <#installation>`_ | `User Guide <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html>`_ | `Examples <https://github.com/NVIDIA/TransformerEngine/tree/main/examples>`_ | `FP8 Convergence <#fp8-convergence>`_ | `Integrations <#integrations>`_ | `Release notes <https://docs.nvidia.com/deeplearning/transformer-engine/documentation-archive.html>`_
Latest News
===========
* [11/2025] `NVIDIA Blackwell Architecture Sweeps MLPerf Training v5.1 Benchmarks <https://developer.nvidia.com/blog/nvidia-blackwell-architecture-sweeps-mlperf-training-v5-1-benchmarks/>`_
* [11/2025] `Scale Biology Transformer Models with PyTorch and NVIDIA BioNeMo Recipes <https://developer.nvidia.com/blog/scale-biology-transformer-models-with-pytorch-and-nvidia-bionemo-recipes/>`_
* [11/2025] `FP8 Training of Large-Scale RL Models <https://lmsys.org/blog/2025-11-25-fp8-rl/>`_
* [09/2025] `Pretraining Large Language Models with NVFP4 <https://www.arxiv.org/pdf/2509.25149>`_
* [09/2025] `Native FP8 Mixed Precision Training for Ling 2.0, Open Sourced! <https://huggingface.co/blog/im0qianqian/ling-mini-2-fp8-mixed-precision-training-solution>`_
* [09/2025] `Faster Training Throughput in FP8 Precision with NVIDIA NeMo <https://developer.nvidia.com/blog/faster-training-throughput-in-fp8-precision-with-nvidia-nemo/>`_
* [08/2025] `How we built DeepL's next-generation LLMs with FP8 for training and inference <https://www.deepl.com/en/blog/tech/next-generation-llm-fp8-training>`_
* [08/2025] `NVFP4 Trains with Precision of 16-bit and Speed and Efficiency of 4-bit <https://developer.nvidia.com/blog/nvfp4-trains-with-precision-of-16-bit-and-speed-and-efficiency-of-4-bit/>`_
`Previous News <#previous-news>`_
What is Transformer Engine?
===========================
.. overview-begin-marker-do-not-remove
Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including
using 8-bit floating point (FP8) precision on Hopper, Ada, and Blackwell GPUs, to provide better
performance with lower memory utilization in both training and inference. TE provides a collection
of highly optimized building blocks for popular Transformer architectures and an automatic mixed
precision-like API that can be used seamlessly with your framework-specific code. TE also includes a
framework agnostic C++ API that can be integrated with other deep learning libraries to enable FP8
support for Transformers.
As the number of parameters in Transformer models continues to grow, training and inference for
architectures such as BERT, GPT and T5 become very memory- and compute-intensive. Most deep learning
frameworks train with FP32 by default, but full FP32 precision is not essential to achieve full
accuracy for many deep learning models. Mixed-precision training, which combines single-precision
(FP32) with a lower-precision format (e.g. FP16), delivers significant speedups with minimal
differences in accuracy compared to pure FP32 training. The Hopper GPU architecture introduced FP8
precision, which offers improved performance over FP16 with no degradation in accuracy. Although all
major deep learning frameworks support FP16, FP8 support is not yet available natively in frameworks
today.
TE addresses the problem of FP8 support by providing APIs that integrate with popular Large Language
Model (LLM) libraries. It provides a Python API consisting of modules to easily build a Transformer
layer as well as a framework-agnostic library in C++ including structs and kernels needed for FP8
support. Modules provided by TE internally maintain scaling factors and other values needed for FP8
training, greatly simplifying mixed precision training for users.
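The scaling-factor bookkeeping can be illustrated with a small sketch. This is not TE's internal code; the ``fp8_scale`` helper and its ``margin`` handling below are illustrative (inspired by the ``margin`` argument of the ``DelayedScaling`` recipe), using the maximum finite values of the two FP8 formats:

```python
# Largest finite values of the two FP8 formats (and FP16 for comparison).
E4M3_MAX = 448.0     # 4 exponent bits, 3 mantissa bits: more precision
E5M2_MAX = 57344.0   # 5 exponent bits, 2 mantissa bits: more range
FP16_MAX = 65504.0

def fp8_scale(amax: float, fmt_max: float = E4M3_MAX, margin: int = 0) -> float:
    """Scale factor mapping a tensor's absolute maximum into FP8 range.

    `margin` reserves extra headroom in powers of two, similar in spirit
    to the `margin` argument of TE's DelayedScaling recipe (illustrative).
    """
    return fmt_max / (amax * 2.0 ** margin)

# A tensor whose largest magnitude is 7.0 is scaled up by 64x before the
# cast to E4M3, so it uses more of the format's narrow dynamic range.
print(fp8_scale(7.0))  # 64.0
```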
Highlights
==========
* Easy-to-use modules for building Transformer layers with FP8 support
* Optimizations (e.g. fused kernels) for Transformer models
* Support for FP8 on NVIDIA Hopper, Ada, and Blackwell GPUs
* Support for optimizations across all precisions (FP16, BF16) on NVIDIA Ampere and later GPU architectures
Examples
========
PyTorch
^^^^^^^
.. code-block:: python

  import torch
  import transformer_engine.pytorch as te
  from transformer_engine.common import recipe

  # Set dimensions.
  in_features = 768
  out_features = 3072
  hidden_size = 2048

  # Initialize model and inputs.
  model = te.Linear(in_features, out_features, bias=True)
  inp = torch.randn(hidden_size, in_features, device="cuda")

  # Create an FP8 recipe. Note: All input args are optional.
  fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

  # Enable autocasting for the forward pass
  with te.autocast(enabled=True, recipe=fp8_recipe):
      out = model(inp)

  loss = out.sum()
  loss.backward()
JAX
^^^
Flax
~~~~
.. code-block:: python

  import flax
  import jax
  import jax.numpy as jnp
  import transformer_engine.jax as te
  import transformer_engine.jax.flax as te_flax
  from transformer_engine.common import recipe

  BATCH = 32
  SEQLEN = 128
  HIDDEN = 1024

  # Initialize RNG and inputs.
  rng = jax.random.PRNGKey(0)
  init_rng, data_rng = jax.random.split(rng)
  inp = jax.random.normal(data_rng, [BATCH, SEQLEN, HIDDEN], jnp.float32)

  # Create an FP8 recipe. Note: All input args are optional.
  fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

  # Enable autocasting for the forward pass
  with te.autocast(enabled=True, recipe=fp8_recipe):
      model = te_flax.DenseGeneral(features=HIDDEN)

      def loss_fn(params, other_vars, inp):
          out = model.apply({'params': params, **other_vars}, inp)
          return jnp.mean(out)

      # Initialize models.
      variables = model.init(init_rng, inp)
      other_variables, params = flax.core.pop(variables, 'params')

      # Construct the forward and backward function
      fwd_bwd_fn = jax.value_and_grad(loss_fn, argnums=(0, 1))

      for _ in range(10):
          loss, (param_grads, other_grads) = fwd_bwd_fn(params, other_variables, inp)
For a more comprehensive tutorial, check out our `Quickstart Notebook <https://github.com/NVIDIA/TransformerEngine/blob/main/docs/examples/quickstart.ipynb>`_.
.. overview-end-marker-do-not-remove
Installation
============
System Requirements
^^^^^^^^^^^^^^^^^^^
* **Hardware:** Blackwell, Hopper, Grace Hopper/Blackwell, Ada, Ampere
* **OS:** Linux (official), WSL2 (limited support)
* **Software:**
* CUDA: 12.1+ (Hopper/Ada/Ampere), 12.8+ (Blackwell) with compatible NVIDIA drivers
* cuDNN: 9.3+
* Compiler: GCC 9+ or Clang 10+ with C++17 support
* Python: 3.12 recommended
* **Source Build Requirements:** CMake 3.18+, Ninja, Git 2.17+, pybind11 2.6.0+
* **Notes:** FP8 features require Compute Capability 8.9+ (Ada/Hopper/Blackwell)
Installation Methods
^^^^^^^^^^^^^^^^^^^^
Docker (Recommended)
^^^^^^^^^^^^^^^^^^^^
The quickest way to get started with Transformer Engine is by using Docker images on
`NVIDIA GPU Cloud (NGC) Catalog <https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch>`_.
For example, to use the NGC PyTorch container interactively:

.. code-block:: bash

  docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:25.08-py3

and to use the NGC JAX container interactively:

.. code-block:: bash

  docker run --gpus all -it --rm nvcr.io/nvidia/jax:25.08-py3

Here 25.08 is the container version, corresponding to the August 2025 release.
**Benefits of using NGC containers:**
* All dependencies pre-installed with compatible versions and optimized configurations
* NGC PyTorch 23.08+ containers include FlashAttention-2
pip Installation
^^^^^^^^^^^^^^^^
**Prerequisites for pip installation:**
* A compatible C++ compiler
* CUDA Toolkit with cuDNN and NVCC (NVIDIA CUDA Compiler) if installing from source.
To install the latest stable version with pip:
.. code-block:: bash

  # For PyTorch integration
  pip install --no-build-isolation transformer_engine[pytorch]

  # For JAX integration
  pip install --no-build-isolation transformer_engine[jax]

  # For both frameworks
  pip install --no-build-isolation transformer_engine[pytorch,jax]
Alternatively, install directly from the GitHub repository:
.. code-block:: bash

  pip install --no-build-isolation git+https://github.com/NVIDIA/TransformerEngine.git@stable
When installing from GitHub, you can explicitly specify frameworks with the ``NVTE_FRAMEWORK`` environment variable:

.. code-block:: bash

  NVTE_FRAMEWORK=pytorch,jax pip install --no-build-isolation git+https://github.com/NVIDIA/TransformerEngine.git@stable
conda Installation
^^^^^^^^^^^^^^^^^^
To install the latest stable version with conda from conda-forge:
.. code-block:: bash

  # For PyTorch integration
  conda install -c conda-forge transformer-engine-torch

  # JAX integration (coming soon)
Source Installation
^^^^^^^^^^^^^^^^^^^
`See the installation guide <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html#installation-from-source>`_
Environment Variables
^^^^^^^^^^^^^^^^^^^^^
These environment variables can be set before installation to customize the build process:
* **CUDA_PATH**: Path to CUDA installation
* **CUDNN_PATH**: Path to cuDNN installation
* **CXX**: Path to C++ compiler
* **NVTE_FRAMEWORK**: Comma-separated list of frameworks to build for (e.g., ``pytorch,jax``)
* **MAX_JOBS**: Limit number of parallel build jobs (default varies by system)
* **NVTE_BUILD_THREADS_PER_JOB**: Control threads per build job
* **NVTE_CUDA_ARCHS**: Semicolon-separated list of CUDA compute architectures to compile for (e.g., ``80;90`` for A100 and H100). If not set, automatically determined based on CUDA version. Setting this can significantly reduce build time and binary size.
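Taken together, a source build might pin its environment like this. This is a hypothetical invocation, not the only supported one; adjust the paths and the architecture list to your system:

```shell
# Point the build at specific CUDA/cuDNN installations (example paths).
export CUDA_PATH=/usr/local/cuda
export CUDNN_PATH=/usr/local/cudnn

# Build only the PyTorch bindings, only for A100 (sm80) and H100 (sm90).
export NVTE_FRAMEWORK=pytorch
export NVTE_CUDA_ARCHS="80;90"

# Keep memory use in check on smaller build machines.
MAX_JOBS=2 NVTE_BUILD_THREADS_PER_JOB=1 \
  pip install --no-build-isolation transformer_engine[pytorch]
```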
Compiling with FlashAttention
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Transformer Engine supports both FlashAttention-2 and FlashAttention-3 in PyTorch for improved performance. FlashAttention-3 was added in release v1.11 and is prioritized over FlashAttention-2 when both are present in the environment.
You can verify which FlashAttention version is being used by setting these environment variables:
.. code-block:: bash

  NVTE_DEBUG=1 NVTE_DEBUG_LEVEL=1 python your_script.py
It is a known issue that FlashAttention-2 compilation is resource-intensive and requires a large amount of RAM (see `bug <https://github.com/Dao-AILab/flash-attention/issues/358>`_), which may lead to out-of-memory errors during the installation of Transformer Engine. Setting **MAX_JOBS=1** in the environment works around the issue.
.. troubleshooting-begin-marker-do-not-remove
Troubleshooting
^^^^^^^^^^^^^^^
**Common Issues and Solutions:**
1. **ABI Compatibility Issues:**
* **Symptoms:** ``ImportError`` with undefined symbols when importing transformer_engine
* **Solution:** Ensure PyTorch and Transformer Engine are built with the same C++ ABI setting. Rebuild PyTorch from source with matching ABI.
* **Context:** If you're using PyTorch built with a different C++ ABI than your system's default, you may encounter these undefined symbol errors. This is particularly common with pip-installed PyTorch outside of containers.
2. **Missing Headers or Libraries:**
* **Symptoms:** CMake errors about missing headers (``cudnn.h``, ``cublas_v2.h``, ``filesystem``, etc.)
* **Solution:** Install missing development packages or set environment variables to point to correct locations:
.. code-block:: bash

  export CUDA_PATH=/path/to/cuda
  export CUDNN_PATH=/path/to/cudnn
* If CMake can't find a C++ compiler, set the ``CXX`` environment variable.
* Ensure all paths are correctly set before installation.
3. **Build Resource Issues:**
* **Symptoms:** Compilation hangs, system freezes, or out-of-memory errors
* **Solution:** Limit parallel builds:
.. code-block:: bash

  MAX_JOBS=1 NVTE_BUILD_THREADS_PER_JOB=1 pip install ...
4. **Verbose Build Logging:**
* For detailed build logs to help diagnose issues:
.. code-block:: bash

  cd transformer_engine
  pip install -v -v -v --no-build-isolation .
.. troubleshooting-end-marker-do-not-remove
Breaking Changes
================
v1.7: Padding mask definition for PyTorch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In an effort to unify the definition and usage of the attention mask across all three frameworks in Transformer Engine, the padding mask semantics changed in our PyTorch implementation: `True` previously meant that the corresponding position is *included* in attention; it now means that the position is *excluded*. Since v1.7, all attention mask types follow the same definition: `True` masks out the corresponding position and `False` includes that position in the attention calculation.
For example:

.. code-block:: python

  # For a batch of 3 sequences where the `a`s, `b`s and `c`s are the useful
  # tokens and the `0`s are the padding tokens,
  [a, a, a, 0, 0,
   b, b, 0, 0, 0,
   c, c, c, c, 0]

  # the padding mask for this batch before v1.7 is
  [ True,  True,  True, False, False,
    True,  True, False, False, False,
    True,  True,  True,  True, False]

  # and for v1.7 onwards it should be
  [False, False, False,  True,  True,
   False, False,  True,  True,  True,
   False, False, False, False,  True]
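For code written against the pre-v1.7 convention, a saved padding mask can be migrated by elementwise negation. A minimal sketch with plain Python lists (with ``torch`` tensors the equivalent is simply ``new_mask = ~old_mask``):

```python
# Pre-v1.7 convention: True = include this position in attention.
old_mask = [
    [True,  True,  True,  False, False],
    [True,  True,  False, False, False],
    [True,  True,  True,  True,  False],
]

# v1.7+ convention: True = mask this position out, so negate elementwise.
new_mask = [[not keep for keep in row] for row in old_mask]

print(new_mask[0])  # [False, False, False, True, True]
```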
FP8 Convergence
===============
FP8 has been tested extensively across different model architectures and configurations and we found **no significant difference** between FP8 and BF16 training loss curves. FP8 has also been validated for accuracy on downstream LLM tasks (e.g. LAMBADA and WikiText). Below are examples of models tested for convergence across different frameworks.
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| Model | Framework | Source |
+============+==================+=========================================================================================================+
| T5-770M | JAX/T5x | https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/t5x#convergence-and-performance|
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| MPT-1.3B | Mosaic Composer | https://www.mosaicml.com/blog/coreweave-nvidia-h100-part-1 |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-5B | JAX/Paxml | https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/pax#h100-results |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-5B | NeMo Framework | Available on request |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| LLama2-7B | Alibaba Pai | https://mp.weixin.qq.com/s/NQT0uKXLbXyh5031zBdeBQ |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| T5-11B | JAX/T5x | Available on request |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| MPT-13B | Mosaic Composer | https://www.databricks.com/blog/turbocharged-training-optimizing-databricks-mosaic-ai-stack-fp8 |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-22B | NeMo Framework | Available on request |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| LLama2-70B | Alibaba Pai | https://mp.weixin.qq.com/s/NQT0uKXLbXyh5031zBdeBQ |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-175B | JAX/Paxml | https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/pax#h100-results |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
Integrations
============
Transformer Engine has been integrated with popular LLM frameworks such as:
* `DeepSpeed <https://github.com/deepspeedai/DeepSpeed/blob/master/tests/unit/runtime/half_precision/test_fp8.py>`_
* `Hugging Face Accelerate <https://huggingface.co/docs/accelerate/main/en/usage_guides/low_precision_training#configuring-transformersengine>`_
* `Lightning <https://github.com/Lightning-AI/lightning/issues/17172>`_
* `MosaicML Composer <https://github.com/mosaicml/composer/releases/tag/v0.13.1>`_
* `NVIDIA JAX Toolbox <https://github.com/NVIDIA/JAX-Toolbox>`_
* `NVIDIA Megatron-LM <https://github.com/NVIDIA/Megatron-LM>`_
* `NVIDIA NeMo Framework <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_
* `Amazon SageMaker Model Parallel Library <https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features-v2-tensor-parallelism.html>`_
* `Levanter <https://github.com/stanford-crfm/levanter>`_
* `GPT-NeoX <https://github.com/EleutherAI/gpt-neox>`_
* `Hugging Face Nanotron <https://github.com/huggingface/nanotron>`_ - Coming soon!
* `Colossal-AI <https://github.com/hpcaitech/ColossalAI>`_ - Coming soon!
* `PeriFlow <https://github.com/friendliai/periflow-python-sdk>`_ - Coming soon!
Contributing
============
We welcome contributions to Transformer Engine! To contribute and open pull requests,
follow the guidelines outlined in the `<CONTRIBUTING.rst>`_ guide.
Papers
======
* `Attention original paper <https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf>`_
* `Megatron-LM tensor parallel <https://arxiv.org/pdf/1909.08053.pdf>`_
* `Megatron-LM sequence parallel <https://arxiv.org/pdf/2205.05198.pdf>`_
* `FP8 Formats for Deep Learning <https://arxiv.org/abs/2209.05433>`_
Videos
======
* `Stable and Scalable FP8 Deep Learning Training on Blackwell | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc24-s62457/>`__
* `Blackwell Numerics for AI | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc25-s72458/>`_
* `Building LLMs: Accelerating Pretraining of Foundational Models With FP8 Precision | GTC 2025 <https://www.nvidia.com/gtc/session-catalog/?regcode=no-ncid&ncid=no-ncid&tab.catalogallsessionstab=16566177511100015Kus&search=zoho#/session/1726152813607001vnYK>`_
* `From FP8 LLM Training to Inference: Language AI at Scale | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc25-s72799/>`_
* `What's New in Transformer Engine and FP8 Training | GTC 2024 <https://www.nvidia.com/en-us/on-demand/session/gtc24-s62457/>`_
* `FP8 Training with Transformer Engine | GTC 2023 <https://www.nvidia.com/en-us/on-demand/session/gtcspring23-s51393>`_
* `FP8 for Deep Learning | GTC 2023 <https://www.nvidia.com/en-us/on-demand/session/gtcspring23-s52166/>`_
* `Inside the Hopper Architecture | GTC 2022 <https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s42663/>`_
.. |License| image:: https://img.shields.io/badge/License-Apache%202.0-blue.svg
:target: https://opensource.org/licenses/Apache-2.0
Previous News
=============
* [06/2025] `Floating Point 8: An Introduction to Efficient, Lower-Precision AI Training <https://developer.nvidia.com/blog/floating-point-8-an-introduction-to-efficient-lower-precision-ai-training/>`_
* [05/2025] `Advanced Optimization Strategies for LLM Training on NVIDIA Grace Hopper <https://developer.nvidia.com/blog/advanced-optimization-strategies-for-llm-training-on-nvidia-grace-hopper/>`_
* [03/2025] `Stable and Scalable FP8 Deep Learning Training on Blackwell | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc25-s72778/>`_
* [03/2025] `Measure and Improve AI Workload Performance with NVIDIA DGX Cloud Benchmarking <https://developer.nvidia.com/blog/measure-and-improve-ai-workload-performance-with-nvidia-dgx-cloud-benchmarking/>`_
.. image:: docs/examples/comparison-fp8-bf16-training-nvidia-dgx-cloud-benchmarking-performance-explorer.jpg
:width: 600
:alt: Comparison of FP8 versus BF16 training, as seen in NVIDIA DGX Cloud Benchmarking Performance Explorer
* [02/2025] `Understanding the Language of Life's Biomolecules Across Evolution at a New Scale with Evo 2 <https://developer.nvidia.com/blog/understanding-the-language-of-lifes-biomolecules-across-evolution-at-a-new-scale-with-evo-2/>`_
* [02/2025] `NVIDIA DGX Cloud Introduces Ready-To-Use Templates to Benchmark AI Platform Performance <https://developer.nvidia.com/blog/nvidia-dgx-cloud-introduces-ready-to-use-templates-to-benchmark-ai-platform-performance/>`_
* [01/2025] `Continued Pretraining of State-of-the-Art LLMs for Sovereign AI and Regulated Industries with iGenius and NVIDIA DGX Cloud <https://developer.nvidia.com/blog/continued-pretraining-of-state-of-the-art-llms-for-sovereign-ai-and-regulated-industries-with-igenius-and-nvidia-dgx-cloud/>`_
* [11/2024] `Developing a 172B LLM with Strong Japanese Capabilities Using NVIDIA Megatron-LM <https://developer.nvidia.com/blog/developing-a-172b-llm-with-strong-japanese-capabilities-using-nvidia-megatron-lm/>`_
* [11/2024] `How FP8 boosts LLM training by 18% on Amazon SageMaker P5 instances <https://aws.amazon.com/blogs/machine-learning/how-fp8-boosts-llm-training-by-18-on-amazon-sagemaker-p5-instances/>`_
* [11/2024] `Efficiently train models with large sequence lengths using Amazon SageMaker model parallel <https://aws.amazon.com/blogs/machine-learning/efficiently-train-models-with-large-sequence-lengths-using-amazon-sagemaker-model-parallel/>`_
* [09/2024] `Reducing AI large model training costs by 30% requires just a single line of code from FP8 mixed precision training upgrades <https://company.hpc-ai.com/blog/reducing-ai-large-model-training-costs-by-30-requires-just-a-single-line-of-code-from-fp8-mixed-precision-training-upgrades>`_
* [05/2024] `Accelerating Transformers with NVIDIA cuDNN 9 <https://developer.nvidia.com/blog/accelerating-transformers-with-nvidia-cudnn-9/>`_
* [03/2024] `Turbocharged Training: Optimizing the Databricks Mosaic AI stack with FP8 <https://www.databricks.com/blog/turbocharged-training-optimizing-databricks-mosaic-ai-stack-fp8>`_
* [03/2024] `FP8 Training Support in SageMaker Model Parallelism Library <https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-release-notes.html>`_
* [12/2023] `New NVIDIA NeMo Framework Features and NVIDIA H200 <https://developer.nvidia.com/blog/new-nvidia-nemo-framework-features-and-nvidia-h200-supercharge-llm-training-performance-and-versatility/>`_
.. image:: docs/examples/H200-NeMo-performance.png
:width: 600
:alt: H200
* [11/2023] `Inflection-2: The Next Step Up <https://inflection.ai/inflection-2>`_
* [11/2023] `Unleashing The Power Of Transformers With NVIDIA Transformer Engine <https://lambdalabs.com/blog/unleashing-the-power-of-transformers-with-nvidia-transformer-engine>`_
* [11/2023] `Accelerating PyTorch Training Workloads with FP8 <https://towardsdatascience.com/accelerating-pytorch-training-workloads-with-fp8-5a5123aec7d7>`_
* [09/2023] `Transformer Engine added to AWS DL Container for PyTorch Training <https://github.com/aws/deep-learning-containers/pull/3315>`_
* [06/2023] `Breaking MLPerf Training Records with NVIDIA H100 GPUs <https://developer.nvidia.com/blog/breaking-mlperf-training-records-with-nvidia-h100-gpus/>`_
* [04/2023] `Benchmarking Large Language Models on NVIDIA H100 GPUs with CoreWeave (Part 1) <https://www.mosaicml.com/blog/coreweave-nvidia-h100-part-1>`_
| text/x-rst | null | null | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10.0 | [] | [] | [] | [
"importlib-metadata>=1.0",
"pydantic",
"packaging",
"pytest>=8.2.1; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T06:04:19.627226 | transformer_engine_cu12-2.12.0-py3-none-manylinux_2_28_aarch64.whl | 311,258,513 | ab/1a/3170941ab13fb230aaadb0b27b31ae3430d5cddb7a41ea0ee4891b0d15df/transformer_engine_cu12-2.12.0-py3-none-manylinux_2_28_aarch64.whl | py3 | bdist_wheel | null | false | 2b0c02011798638c771af84fb7eff87e | 4a9764526581adc8968aab21e22681a4690847c9e087a39df8ea21ceca0a4fe5 | ab1a3170941ab13fb230aaadb0b27b31ae3430d5cddb7a41ea0ee4891b0d15df | null | [
"LICENSE"
] | 326 |
2.4 | transformer-engine | 2.12.0 | Transformer acceleration library | ..
Copyright (c) 2022-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
See LICENSE for license information.
|License|
Transformer Engine
==================
`Quickstart <#examples>`_ | `Installation <#installation>`_ | `User Guide <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html>`_ | `Examples <https://github.com/NVIDIA/TransformerEngine/tree/main/examples>`_ | `FP8 Convergence <#fp8-convergence>`_ | `Integrations <#integrations>`_ | `Release notes <https://docs.nvidia.com/deeplearning/transformer-engine/documentation-archive.html>`_
Latest News
===========
* [11/2025] `NVIDIA Blackwell Architecture Sweeps MLPerf Training v5.1 Benchmarks <https://developer.nvidia.com/blog/nvidia-blackwell-architecture-sweeps-mlperf-training-v5-1-benchmarks/>`_
* [11/2025] `Scale Biology Transformer Models with PyTorch and NVIDIA BioNeMo Recipes <https://developer.nvidia.com/blog/scale-biology-transformer-models-with-pytorch-and-nvidia-bionemo-recipes/>`_
* [11/2025] `FP8 Training of Large-Scale RL Models <https://lmsys.org/blog/2025-11-25-fp8-rl/>`_
* [09/2025] `Pretraining Large Language Models with NVFP4 <https://www.arxiv.org/pdf/2509.25149>`_
* [09/2025] `Native FP8 Mixed Precision Training for Ling 2.0, Open Sourced! <https://huggingface.co/blog/im0qianqian/ling-mini-2-fp8-mixed-precision-training-solution>`_
* [09/2025] `Faster Training Throughput in FP8 Precision with NVIDIA NeMo <https://developer.nvidia.com/blog/faster-training-throughput-in-fp8-precision-with-nvidia-nemo/>`_
* [08/2025] `How we built DeepL's next-generation LLMs with FP8 for training and inference <https://www.deepl.com/en/blog/tech/next-generation-llm-fp8-training>`_
* [08/2025] `NVFP4 Trains with Precision of 16-bit and Speed and Efficiency of 4-bit <https://developer.nvidia.com/blog/nvfp4-trains-with-precision-of-16-bit-and-speed-and-efficiency-of-4-bit/>`_
`Previous News <#previous-news>`_
What is Transformer Engine?
===========================
.. overview-begin-marker-do-not-remove
Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including
using 8-bit floating point (FP8) precision on Hopper, Ada, and Blackwell GPUs, to provide better
performance with lower memory utilization in both training and inference. TE provides a collection
of highly optimized building blocks for popular Transformer architectures and an automatic mixed
precision-like API that can be used seamlessly with your framework-specific code. TE also includes a
framework agnostic C++ API that can be integrated with other deep learning libraries to enable FP8
support for Transformers.
As the number of parameters in Transformer models continues to grow, training and inference for
architectures such as BERT, GPT and T5 become very memory- and compute-intensive. Most deep learning
frameworks train with FP32 by default, yet FP32 is not essential to reach full accuracy for many
deep learning models. Mixed-precision training, which combines single-precision (FP32) with a lower
precision format (e.g. FP16) when training a model, yields significant speedups with minimal
differences in accuracy as compared to FP32 training. The Hopper GPU architecture introduced FP8
precision, which offers improved performance over FP16 with no degradation in accuracy. Although all
major deep learning frameworks support FP16, FP8 support is not yet available natively in frameworks
today.
TE addresses the problem of FP8 support by providing APIs that integrate with popular Large Language
Model (LLM) libraries. It provides a Python API consisting of modules to easily build a Transformer
layer as well as a framework-agnostic library in C++ including structs and kernels needed for FP8
support. Modules provided by TE internally maintain scaling factors and other values needed for FP8
training, greatly simplifying mixed precision training for users.
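The scaling-factor bookkeeping mentioned above can be pictured with a small sketch. This is a simplified model of delayed scaling, not TE's actual implementation (which lives in CUDA kernels and handles many more details): each tensor tracks a history of recent absolute-maximum (amax) values, and the FP8 scale is derived from the largest amax in that window, shifted by a margin.

```python
# Simplified sketch of delayed FP8 scaling (illustrative only).
from collections import deque

FP8_E4M3_MAX = 448.0  # largest representable magnitude in the E4M3 format

class DelayedScale:
    def __init__(self, history_len=16, margin=0):
        self.amax_history = deque(maxlen=history_len)
        self.margin = margin
        self.scale = 1.0

    def update(self, tensor_abs_max):
        # Record this step's amax, then derive the scale from the largest
        # amax seen in the history window, leaving `margin` bits of headroom.
        self.amax_history.append(tensor_abs_max)
        amax = max(self.amax_history)
        if amax > 0:
            self.scale = FP8_E4M3_MAX / (amax * 2 ** self.margin)

s = DelayedScale()
s.update(10.0)
print(s.scale)  # 448.0 / 10.0 = 44.8
```

Tensors are then multiplied by ``scale`` before the cast to FP8 so their dynamic range fills the narrow FP8 format, and divided by it again after the FP8 matmul.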
Highlights
==========
* Easy-to-use modules for building Transformer layers with FP8 support
* Optimizations (e.g. fused kernels) for Transformer models
* Support for FP8 on NVIDIA Hopper, Ada, and Blackwell GPUs
* Support for optimizations across all precisions (FP16, BF16) on NVIDIA Ampere and later GPU architectures
Examples
========
PyTorch
^^^^^^^
.. code-block:: python

  import torch
  import transformer_engine.pytorch as te
  from transformer_engine.common import recipe

  # Set dimensions.
  in_features = 768
  out_features = 3072
  hidden_size = 2048

  # Initialize model and inputs.
  model = te.Linear(in_features, out_features, bias=True)
  inp = torch.randn(hidden_size, in_features, device="cuda")

  # Create an FP8 recipe. Note: All input args are optional.
  fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

  # Enable autocasting for the forward pass
  with te.autocast(enabled=True, recipe=fp8_recipe):
      out = model(inp)

  loss = out.sum()
  loss.backward()
JAX
^^^
Flax
~~~~
.. code-block:: python

  import flax
  import jax
  import jax.numpy as jnp
  import transformer_engine.jax as te
  import transformer_engine.jax.flax as te_flax
  from transformer_engine.common import recipe

  BATCH = 32
  SEQLEN = 128
  HIDDEN = 1024

  # Initialize RNG and inputs.
  rng = jax.random.PRNGKey(0)
  init_rng, data_rng = jax.random.split(rng)
  inp = jax.random.normal(data_rng, [BATCH, SEQLEN, HIDDEN], jnp.float32)

  # Create an FP8 recipe. Note: All input args are optional.
  fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

  # Enable autocasting for the forward pass
  with te.autocast(enabled=True, recipe=fp8_recipe):
      model = te_flax.DenseGeneral(features=HIDDEN)

      def loss_fn(params, other_vars, inp):
          out = model.apply({'params': params, **other_vars}, inp)
          return jnp.mean(out)

      # Initialize models.
      variables = model.init(init_rng, inp)
      other_variables, params = flax.core.pop(variables, 'params')

      # Construct the forward and backward function
      fwd_bwd_fn = jax.value_and_grad(loss_fn, argnums=(0, 1))

      for _ in range(10):
          loss, (param_grads, other_grads) = fwd_bwd_fn(params, other_variables, inp)
For a more comprehensive tutorial, check out our `Quickstart Notebook <https://github.com/NVIDIA/TransformerEngine/blob/main/docs/examples/quickstart.ipynb>`_.
.. overview-end-marker-do-not-remove
Installation
============
System Requirements
^^^^^^^^^^^^^^^^^^^
* **Hardware:** Blackwell, Hopper, Grace Hopper/Blackwell, Ada, Ampere
* **OS:** Linux (official), WSL2 (limited support)
* **Software:**
* CUDA: 12.1+ (Hopper/Ada/Ampere), 12.8+ (Blackwell) with compatible NVIDIA drivers
* cuDNN: 9.3+
* Compiler: GCC 9+ or Clang 10+ with C++17 support
* Python: 3.12 recommended
* **Source Build Requirements:** CMake 3.18+, Ninja, Git 2.17+, pybind11 2.6.0+
* **Notes:** FP8 features require Compute Capability 8.9+ (Ada/Hopper/Blackwell)
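As a quick sanity check against the FP8 note above, compute capability can be compared as a (major, minor) pair. The helper below is an illustrative sketch; on a CUDA-enabled PyTorch install, ``torch.cuda.get_device_capability()`` returns this pair for the current device.

```python
def supports_fp8(major: int, minor: int) -> bool:
    """FP8 features require compute capability 8.9+ (Ada, Hopper, Blackwell)."""
    # Tuple comparison handles (major, minor) ordering correctly:
    # (9, 0) >= (8, 9) is True, (8, 6) >= (8, 9) is False.
    return (major, minor) >= (8, 9)

# On a CUDA machine you would feed in torch.cuda.get_device_capability():
#   major, minor = torch.cuda.get_device_capability()
print(supports_fp8(9, 0))   # Hopper (H100)  -> True
print(supports_fp8(8, 6))   # Ampere (A40)   -> False
```

GPUs below compute capability 8.9 can still use TE's FP16/BF16 optimizations; only the FP8 path is gated on this check.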
Installation Methods
^^^^^^^^^^^^^^^^^^^^
Docker (Recommended)
^^^^^^^^^^^^^^^^^^^^
The quickest way to get started with Transformer Engine is by using Docker images on
`NVIDIA GPU Cloud (NGC) Catalog <https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch>`_.
For example, to use the NGC PyTorch container interactively:
.. code-block:: bash

  docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:25.08-py3
For example, to use the NGC JAX container interactively:
.. code-block:: bash

  docker run --gpus all -it --rm nvcr.io/nvidia/jax:25.08-py3
Here, 25.08 (the August 2025 release) is the container version.
**Benefits of using NGC containers:**
* All dependencies pre-installed with compatible versions and optimized configurations
* NGC PyTorch 23.08+ containers include FlashAttention-2
pip Installation
^^^^^^^^^^^^^^^^
**Prerequisites for pip installation:**
* A compatible C++ compiler
* CUDA Toolkit with cuDNN and NVCC (NVIDIA CUDA Compiler) if installing from source.
To install the latest stable version with pip:
.. code-block:: bash

  # For PyTorch integration
  pip install --no-build-isolation transformer_engine[pytorch]

  # For JAX integration
  pip install --no-build-isolation transformer_engine[jax]

  # For both frameworks
  pip install --no-build-isolation transformer_engine[pytorch,jax]
Alternatively, install directly from the GitHub repository:
.. code-block:: bash

  pip install --no-build-isolation git+https://github.com/NVIDIA/TransformerEngine.git@stable
When installing from GitHub, you can explicitly specify frameworks using the environment variable:
.. code-block:: bash

  NVTE_FRAMEWORK=pytorch,jax pip install --no-build-isolation git+https://github.com/NVIDIA/TransformerEngine.git@stable
conda Installation
^^^^^^^^^^^^^^^^^^
To install the latest stable version with conda from conda-forge:
.. code-block:: bash

  # For PyTorch integration
  conda install -c conda-forge transformer-engine-torch

  # JAX integration (coming soon)
Source Installation
^^^^^^^^^^^^^^^^^^^
`See the installation guide <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html#installation-from-source>`_
Environment Variables
^^^^^^^^^^^^^^^^^^^^^
These environment variables can be set before installation to customize the build process:
* **CUDA_PATH**: Path to CUDA installation
* **CUDNN_PATH**: Path to cuDNN installation
* **CXX**: Path to C++ compiler
* **NVTE_FRAMEWORK**: Comma-separated list of frameworks to build for (e.g., ``pytorch,jax``)
* **MAX_JOBS**: Limit number of parallel build jobs (default varies by system)
* **NVTE_BUILD_THREADS_PER_JOB**: Control threads per build job
* **NVTE_CUDA_ARCHS**: Semicolon-separated list of CUDA compute architectures to compile for (e.g., ``80;90`` for A100 and H100). If not set, automatically determined based on CUDA version. Setting this can significantly reduce build time and binary size.
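As a sketch of how these variables combine, the snippet below configures a PyTorch-only build for A100 and H100 with capped parallelism. The paths and values are illustrative placeholders, not recommendations; the ``pip install --no-build-isolation`` command from the pip section above would then pick them up.

```shell
# Illustrative build configuration -- adjust every path/value for your system.
export CUDA_PATH=/usr/local/cuda
export CUDNN_PATH=/usr/local/cudnn
export CXX=/usr/bin/g++
export NVTE_FRAMEWORK=pytorch        # build only the PyTorch extension
export NVTE_CUDA_ARCHS="80;90"       # A100 + H100: smaller binary, faster build
export MAX_JOBS=4                    # cap parallel jobs on memory-limited hosts
echo "TE build: framework=$NVTE_FRAMEWORK archs=$NVTE_CUDA_ARCHS jobs=$MAX_JOBS"
```

Restricting ``NVTE_CUDA_ARCHS`` to the architectures you actually deploy on is usually the single biggest lever for build time.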
Compiling with FlashAttention
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Transformer Engine supports both FlashAttention-2 and FlashAttention-3 in PyTorch for improved performance. FlashAttention-3 was added in release v1.11 and is prioritized over FlashAttention-2 when both are present in the environment.
You can verify which FlashAttention version is being used by setting these environment variables:
.. code-block:: bash

  NVTE_DEBUG=1 NVTE_DEBUG_LEVEL=1 python your_script.py
It is a known issue that FlashAttention-2 compilation is resource-intensive and requires a large amount of RAM (see `bug <https://github.com/Dao-AILab/flash-attention/issues/358>`_), which may lead to out of memory errors during the installation of Transformer Engine. Please try setting **MAX_JOBS=1** in the environment to circumvent the issue.
.. troubleshooting-begin-marker-do-not-remove
Troubleshooting
^^^^^^^^^^^^^^^
**Common Issues and Solutions:**
1. **ABI Compatibility Issues:**
* **Symptoms:** ``ImportError`` with undefined symbols when importing transformer_engine
* **Solution:** Ensure PyTorch and Transformer Engine are built with the same C++ ABI setting. Rebuild PyTorch from source with matching ABI.
* **Context:** If you're using PyTorch built with a different C++ ABI than your system's default, you may encounter these undefined symbol errors. This is particularly common with pip-installed PyTorch outside of containers.
2. **Missing Headers or Libraries:**
* **Symptoms:** CMake errors about missing headers (``cudnn.h``, ``cublas_v2.h``, ``filesystem``, etc.)
* **Solution:** Install missing development packages or set environment variables to point to correct locations:
.. code-block:: bash

  export CUDA_PATH=/path/to/cuda
  export CUDNN_PATH=/path/to/cudnn
* If CMake can't find a C++ compiler, set the ``CXX`` environment variable.
* Ensure all paths are correctly set before installation.
3. **Build Resource Issues:**
* **Symptoms:** Compilation hangs, system freezes, or out-of-memory errors
* **Solution:** Limit parallel builds:
.. code-block:: bash

  MAX_JOBS=1 NVTE_BUILD_THREADS_PER_JOB=1 pip install ...
4. **Verbose Build Logging:**
* For detailed build logs to help diagnose issues:
.. code-block:: bash

  cd transformer_engine
  pip install -v -v -v --no-build-isolation .
.. troubleshooting-end-marker-do-not-remove
Breaking Changes
================
v1.7: Padding mask definition for PyTorch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In an effort to unify the definition and usage of the attention mask across all three frameworks in Transformer Engine, the padding-mask convention in our PyTorch implementation has been inverted: `True` previously meant that the corresponding position was *included* in attention, but now means that position is *excluded*. Since v1.7, all attention mask types follow the same definition, where `True` masks out the corresponding position and `False` includes that position in the attention calculation.
An example of this change is,
.. code-block:: python

  # for a batch of 3 sequences where `a`s, `b`s and `c`s are the useful tokens
  # and `0`s are the padding tokens,
  [a, a, a, 0, 0,
   b, b, 0, 0, 0,
   c, c, c, c, 0]

  # the padding mask for this batch before v1.7 is,
  [ True,  True,  True, False, False,
    True,  True, False, False, False,
    True,  True,  True,  True, False]

  # and for v1.7 onwards it should be,
  [False, False, False,  True,  True,
   False, False,  True,  True,  True,
   False, False, False, False,  True]
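Migrating a pre-v1.7 mask is an elementwise logical NOT. The sketch below uses plain Python lists for clarity; with a boolean ``torch.Tensor`` mask the equivalent operation is ``~mask``.

```python
# Pre-v1.7 convention: True = keep the position in attention.
old_mask = [
    [True,  True,  True,  False, False],
    [True,  True,  False, False, False],
    [True,  True,  True,  True,  False],
]

# v1.7+ convention: True = mask the position out. Migration is elementwise NOT.
new_mask = [[not keep for keep in row] for row in old_mask]

print(new_mask[0])  # [False, False, False, True, True]
```

Applying the NOT twice round-trips back to the original mask, so the conversion is safe to apply exactly once at the boundary where legacy masks enter your code.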
FP8 Convergence
===============
FP8 has been tested extensively across different model architectures and configurations and we found **no significant difference** between FP8 and BF16 training loss curves. FP8 has also been validated for accuracy on downstream LLM tasks (e.g. LAMBADA and WikiText). Below are examples of models tested for convergence across different frameworks.
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| Model | Framework | Source |
+============+==================+=========================================================================================================+
| T5-770M | JAX/T5x | https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/t5x#convergence-and-performance|
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| MPT-1.3B | Mosaic Composer | https://www.mosaicml.com/blog/coreweave-nvidia-h100-part-1 |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-5B | JAX/Paxml | https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/pax#h100-results |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-5B | NeMo Framework | Available on request |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| LLama2-7B | Alibaba Pai | https://mp.weixin.qq.com/s/NQT0uKXLbXyh5031zBdeBQ |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| T5-11B | JAX/T5x | Available on request |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| MPT-13B | Mosaic Composer | https://www.databricks.com/blog/turbocharged-training-optimizing-databricks-mosaic-ai-stack-fp8 |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-22B | NeMo Framework | Available on request |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| LLama2-70B | Alibaba Pai | https://mp.weixin.qq.com/s/NQT0uKXLbXyh5031zBdeBQ |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-175B | JAX/Paxml | https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/pax#h100-results |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
Integrations
============
Transformer Engine has been integrated with popular LLM frameworks such as:
* `DeepSpeed <https://github.com/deepspeedai/DeepSpeed/blob/master/tests/unit/runtime/half_precision/test_fp8.py>`_
* `Hugging Face Accelerate <https://huggingface.co/docs/accelerate/main/en/usage_guides/low_precision_training#configuring-transformersengine>`_
* `Lightning <https://github.com/Lightning-AI/lightning/issues/17172>`_
* `MosaicML Composer <https://github.com/mosaicml/composer/releases/tag/v0.13.1>`_
* `NVIDIA JAX Toolbox <https://github.com/NVIDIA/JAX-Toolbox>`_
* `NVIDIA Megatron-LM <https://github.com/NVIDIA/Megatron-LM>`_
* `NVIDIA NeMo Framework <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_
* `Amazon SageMaker Model Parallel Library <https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features-v2-tensor-parallelism.html>`_
* `Levanter <https://github.com/stanford-crfm/levanter>`_
* `GPT-NeoX <https://github.com/EleutherAI/gpt-neox>`_
* `Hugging Face Nanotron <https://github.com/huggingface/nanotron>`_ - Coming soon!
* `Colossal-AI <https://github.com/hpcaitech/ColossalAI>`_ - Coming soon!
* `PeriFlow <https://github.com/friendliai/periflow-python-sdk>`_ - Coming soon!
Contributing
============
We welcome contributions to Transformer Engine! To contribute to Transformer Engine and make pull requests,
follow the guidelines outlined in the `<CONTRIBUTING.rst>`_ guide.
Papers
======
* `Attention original paper <https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf>`_
* `Megatron-LM tensor parallel <https://arxiv.org/pdf/1909.08053.pdf>`_
* `Megatron-LM sequence parallel <https://arxiv.org/pdf/2205.05198.pdf>`_
* `FP8 Formats for Deep Learning <https://arxiv.org/abs/2209.05433>`_
Videos
======
* `Stable and Scalable FP8 Deep Learning Training on Blackwell | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc24-s62457/>`__
* `Blackwell Numerics for AI | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc25-s72458/>`_
* `Building LLMs: Accelerating Pretraining of Foundational Models With FP8 Precision | GTC 2025 <https://www.nvidia.com/gtc/session-catalog/?regcode=no-ncid&ncid=no-ncid&tab.catalogallsessionstab=16566177511100015Kus&search=zoho#/session/1726152813607001vnYK>`_
* `From FP8 LLM Training to Inference: Language AI at Scale | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc25-s72799/>`_
* `What's New in Transformer Engine and FP8 Training | GTC 2024 <https://www.nvidia.com/en-us/on-demand/session/gtc24-s62457/>`_
* `FP8 Training with Transformer Engine | GTC 2023 <https://www.nvidia.com/en-us/on-demand/session/gtcspring23-s51393>`_
* `FP8 for Deep Learning | GTC 2023 <https://www.nvidia.com/en-us/on-demand/session/gtcspring23-s52166/>`_
* `Inside the Hopper Architecture | GTC 2022 <https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s42663/>`_
.. |License| image:: https://img.shields.io/badge/License-Apache%202.0-blue.svg
:target: https://opensource.org/licenses/Apache-2.0
Previous News
=============
* [06/2025] `Floating Point 8: An Introduction to Efficient, Lower-Precision AI Training <https://developer.nvidia.com/blog/floating-point-8-an-introduction-to-efficient-lower-precision-ai-training/>`_
* [05/2025] `Advanced Optimization Strategies for LLM Training on NVIDIA Grace Hopper <https://developer.nvidia.com/blog/advanced-optimization-strategies-for-llm-training-on-nvidia-grace-hopper/>`_
* [03/2025] `Stable and Scalable FP8 Deep Learning Training on Blackwell | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc25-s72778/>`_
* [03/2025] `Measure and Improve AI Workload Performance with NVIDIA DGX Cloud Benchmarking <https://developer.nvidia.com/blog/measure-and-improve-ai-workload-performance-with-nvidia-dgx-cloud-benchmarking/>`_
.. image:: docs/examples/comparison-fp8-bf16-training-nvidia-dgx-cloud-benchmarking-performance-explorer.jpg
:width: 600
:alt: Comparison of FP8 versus BF16 training, as seen in NVIDIA DGX Cloud Benchmarking Performance Explorer
* [02/2025] `Understanding the Language of Life's Biomolecules Across Evolution at a New Scale with Evo 2 <https://developer.nvidia.com/blog/understanding-the-language-of-lifes-biomolecules-across-evolution-at-a-new-scale-with-evo-2/>`_
* [02/2025] `NVIDIA DGX Cloud Introduces Ready-To-Use Templates to Benchmark AI Platform Performance <https://developer.nvidia.com/blog/nvidia-dgx-cloud-introduces-ready-to-use-templates-to-benchmark-ai-platform-performance/>`_
* [01/2025] `Continued Pretraining of State-of-the-Art LLMs for Sovereign AI and Regulated Industries with iGenius and NVIDIA DGX Cloud <https://developer.nvidia.com/blog/continued-pretraining-of-state-of-the-art-llms-for-sovereign-ai-and-regulated-industries-with-igenius-and-nvidia-dgx-cloud/>`_
* [11/2024] `Developing a 172B LLM with Strong Japanese Capabilities Using NVIDIA Megatron-LM <https://developer.nvidia.com/blog/developing-a-172b-llm-with-strong-japanese-capabilities-using-nvidia-megatron-lm/>`_
* [11/2024] `How FP8 boosts LLM training by 18% on Amazon SageMaker P5 instances <https://aws.amazon.com/blogs/machine-learning/how-fp8-boosts-llm-training-by-18-on-amazon-sagemaker-p5-instances/>`_
* [11/2024] `Efficiently train models with large sequence lengths using Amazon SageMaker model parallel <https://aws.amazon.com/blogs/machine-learning/efficiently-train-models-with-large-sequence-lengths-using-amazon-sagemaker-model-parallel/>`_
* [09/2024] `Reducing AI large model training costs by 30% requires just a single line of code from FP8 mixed precision training upgrades <https://company.hpc-ai.com/blog/reducing-ai-large-model-training-costs-by-30-requires-just-a-single-line-of-code-from-fp8-mixed-precision-training-upgrades>`_
* [05/2024] `Accelerating Transformers with NVIDIA cuDNN 9 <https://developer.nvidia.com/blog/accelerating-transformers-with-nvidia-cudnn-9/>`_
* [03/2024] `Turbocharged Training: Optimizing the Databricks Mosaic AI stack with FP8 <https://www.databricks.com/blog/turbocharged-training-optimizing-databricks-mosaic-ai-stack-fp8>`_
* [03/2024] `FP8 Training Support in SageMaker Model Parallelism Library <https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-release-notes.html>`_
* [12/2023] `New NVIDIA NeMo Framework Features and NVIDIA H200 <https://developer.nvidia.com/blog/new-nvidia-nemo-framework-features-and-nvidia-h200-supercharge-llm-training-performance-and-versatility/>`_
.. image:: docs/examples/H200-NeMo-performance.png
:width: 600
:alt: H200
* [11/2023] `Inflection-2: The Next Step Up <https://inflection.ai/inflection-2>`_
* [11/2023] `Unleashing The Power Of Transformers With NVIDIA Transformer Engine <https://lambdalabs.com/blog/unleashing-the-power-of-transformers-with-nvidia-transformer-engine>`_
* [11/2023] `Accelerating PyTorch Training Workloads with FP8 <https://towardsdatascience.com/accelerating-pytorch-training-workloads-with-fp8-5a5123aec7d7>`_
* [09/2023] `Transformer Engine added to AWS DL Container for PyTorch Training <https://github.com/aws/deep-learning-containers/pull/3315>`_
* [06/2023] `Breaking MLPerf Training Records with NVIDIA H100 GPUs <https://developer.nvidia.com/blog/breaking-mlperf-training-records-with-nvidia-h100-gpus/>`_
* [04/2023] `Benchmarking Large Language Models on NVIDIA H100 GPUs with CoreWeave (Part 1) <https://www.mosaicml.com/blog/coreweave-nvidia-h100-part-1>`_
| text/x-rst | null | null | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10.0 | [] | [] | [] | [
"transformer_engine_cu12==2.12.0; extra == \"core\"",
"transformer_engine_cu12==2.12.0; extra == \"core-cu12\"",
"transformer_engine_cu13==2.12.0; extra == \"core-cu13\"",
"transformer_engine_torch==2.12.0; extra == \"pytorch\"",
"transformer_engine_jax==2.12.0; extra == \"jax\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T06:04:03.335801 | transformer_engine-2.12.0-py3-none-any.whl | 739,825 | 4a/1d/831c836ceb550b2273a68b25948dab870d4c8b3d62860855d5736be8829d/transformer_engine-2.12.0-py3-none-any.whl | py3 | bdist_wheel | null | false | da84c6f5a041806a2ca7859055164ad3 | 5d0539c520c39445c62feab9d1fab774ed2e27a576d0feac2086ce86e0bff7c4 | 4a1d831c836ceb550b2273a68b25948dab870d4c8b3d62860855d5736be8829d | null | [
"LICENSE"
] | 344 |
2.4 | napari-mask-curator | 0.1.0 | A simple plugin for select best mask for every single cell or nuclei | # napari-mask-curator
[](https://github.com/yaohualee1215/napari-mask-curator/raw/main/LICENSE)
[](https://pypi.org/project/napari-mask-curator)
[](https://python.org)
[](https://github.com/yaohualee1215/napari-mask-curator/actions)
[](https://codecov.io/gh/yaohualee1215/napari-mask-curator)
[](https://napari-hub.org/plugins/napari-mask-curator)
[](https://napari.org/stable/plugins/index.html)
[](https://github.com/copier-org/copier)
A simple plugin for selecting the best mask for every single cell or nucleus
----------------------------------
This [napari] plugin was generated with [copier] using the [napari-plugin-template].
## Installation
You can install `napari-mask-curator` via [pip]:
```
pip install napari-mask-curator
```
If napari is not already installed, you can install `napari-mask-curator` with napari and Qt via:
```
pip install "napari-mask-curator[all]"
```
## Contributing
Contributions are very welcome. Tests can be run with [tox]; please ensure
the coverage at least stays the same before you submit a pull request.
## License
Distributed under the terms of the [BSD-3] license,
"napari-mask-curator" is free and open source software.
## Issues
If you encounter any problems, please [file an issue] along with a detailed description.
[napari]: https://github.com/napari/napari
[copier]: https://copier.readthedocs.io/en/stable/
[BSD-3]: http://opensource.org/licenses/BSD-3-Clause
[napari-plugin-template]: https://github.com/napari/napari-plugin-template
[tox]: https://tox.readthedocs.io/en/latest/
[pip]: https://pypi.org/project/pip/
| text/markdown | Yaohua Li | liyaohua12345@foxmail.com | null | null |
Copyright (c) 2026, Yaohua Li
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| null | [
"Development Status :: 2 - Pre-Alpha",
"Framework :: napari",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Image Processing"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"magicgui",
"qtpy",
"scikit-image",
"napari[all]; extra == \"all\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.11 | 2026-02-21T06:03:58.629522 | napari_mask_curator-0.1.0.tar.gz | 22,923 | c4/7c/b9f69ed7f832c1c1db7f9dda5b861c16b437a40da1996119613a0f3396d3/napari_mask_curator-0.1.0.tar.gz | source | sdist | null | false | 09efefc574729c02c9cb3abd940bbae5 | f64607621d441e658ddd44926ec43f3da84f9cdd20f963c75c60a008f57ec1f6 | c47cb9f69ed7f832c1c1db7f9dda5b861c16b437a40da1996119613a0f3396d3 | null | [
"LICENSE"
] | 205 |
2.4 | gaussian-splatting | 2.3.3 | Refactored python training and inference code for 3D Gaussian Splatting | # 3D Gaussian Splatting (Packaged Python Version)
This repo is the **refactored python training and inference code for [3D Gaussian Splatting](https://github.com/graphdeco-inria/gaussian-splatting)**.
Forked from commit [a2a91d9093fd791fb01f556fa717f8d9f2cfbdd7](https://github.com/graphdeco-inria/gaussian-splatting/tree/a2a91d9093fd791fb01f556fa717f8d9f2cfbdd7).
We **refactored the original code following the standard Python package structure**, while **keeping the algorithms used in the code identical to the original version**.
## Features
* [x] organize the code as a standard Python package
* [x] exposure compensation
* [x] camera and 3DGS parameters joint training
* [x] depth regularization
* [x] local relative depth regularization
* [x] image mask
* [x] integrated [gsplat](https://github.com/nerfstudio-project/gsplat) backend
* [x] integrated 2DGS from [gsplat](https://github.com/nerfstudio-project/gsplat)
## Downstream Projects
* [**InstantSplat**](https://github.com/yindaheng98/instantsplat)(Refactored): Z. Fan et al., "[InstantSplat: Sparse-view Gaussian Splatting in Seconds](https://arxiv.org/abs/2403.20309)," CVPR NRI 2024
* [**reduced-3dgs**](https://github.com/yindaheng98/reduced-3dgs)(Refactored): P. Papantonakis et al., "[Reducing the Memory Footprint of 3D Gaussian Splatting](https://doi.org/10.1145/3651282)," Proceedings of the ACM on Computer Graphics and Interactive Techniques, vol. 7, no. 1, May 2024
* [**3dgs-mcmc**](https://github.com/yindaheng98/3dgs-mcmc)(Refactored): S. Kheradmand et al., "[3D Gaussian Splatting as Markov Chain Monte Carlo](https://proceedings.neurips.cc/paper_files/paper/2024/hash/93be245fce00a9bb2333c17ceae4b732-Abstract-Conference.html)," NeurIPS Spotlight Presentation 2024
* [**lapis-gs**](https://github.com/yindaheng98/lapis-gs)(Refactored): Y. Shi et al., "[LapisGS: Layered Progressive 3D Gaussian Splatting for Adaptive Streaming](https://doi.org/10.1109/3DV66043.2025.00096)," 3DV 2025
* [**gscompressor**](https://github.com/yindaheng98/gscompressor)(Original): Compress 3DGS scenes by Draco.
* [**gsvvcompressor**](https://github.com/yindaheng98/gsvvcompressor)(Original): A modular video compression framework for Gaussian Splatting models.
* [**TrackerSplat**](https://github.com/yindaheng98/TrackerSplat)(Original): D. Yin et al., "[TrackerSplat: Exploiting Point Tracking for Fast and Robust Dynamic 3D Gaussians Reconstruction](https://doi.org/10.1145/3757377.3763829)," SIGGRAPH Asia 2025
## Install
### Prerequisites
* [Pytorch](https://pytorch.org/) (>= v2.4 recommended)
* [CUDA Toolkit](https://developer.nvidia.com/cuda-12-4-0-download-archive) (12.4 recommended, match with PyTorch version)
* [gsplat](https://github.com/nerfstudio-project/gsplat)
### PyPI Install
```shell
pip install --upgrade gaussian-splatting
```
or
build latest from source:
```shell
pip install wheel setuptools
pip install --upgrade git+https://github.com/yindaheng98/gaussian-splatting.git@master --no-build-isolation
```
### Development Install
```shell
git clone --recursive https://github.com/yindaheng98/gaussian-splatting
cd gaussian-splatting
pip install tqdm plyfile tifffile numpy opencv-python pillow open3d
pip install git+https://github.com/nerfstudio-project/gsplat.git
pip install --target . --upgrade . --no-deps
```
## Quick Start
1. Download dataset (T&T+DB COLMAP dataset, size 650MB):
```shell
wget https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/input/tandt_db.zip -P ./data
unzip data/tandt_db.zip -d data/
```
2. Train 3DGS with densification (same as the original 3DGS)
```shell
python -m gaussian_splatting.train -s data/truck -d output/truck -i 30000 --mode densify
```
3. Render or view 3DGS
```shell
python -m gaussian_splatting.render -s data/truck -d output/truck -i 30000 --mode densify
python -m gaussian_splatting.viewer -d output/truck -i 30000
```
4. Joint training 3DGS and camera (load the trained 3DGS)
```shell
python -m gaussian_splatting.train -s data/truck -d output/truck-camera -i 30000 --mode camera -l output/truck/point_cloud/iteration_30000/point_cloud.ply
```
5. Render 3DGS with optimized cameras
```shell
python -m gaussian_splatting.render -s data/truck -d output/truck-camera -i 30000 --mode camera --load_camera output/truck-camera/cameras.json
```
> 💡 This repo does not contain code for creating datasets.
> If you want to create your own dataset, please refer to [InstantSplat](https://github.com/yindaheng98/InstantSplat) or use [convert.py](https://github.com/graphdeco-inria/gaussian-splatting/blob/main/convert.py).
> 💡 See [.vscode/launch.json](.vscode/launch.json) for more examples. See [gaussian_splatting.train](gaussian_splatting/train.py) and [gaussian_splatting.render](gaussian_splatting/render.py) for full options.
### (Optional) Generate depth maps before training
1. Prepare Depth-Anything-V2
```shell
git clone https://github.com/DepthAnything/Depth-Anything-V2.git
mkdir checkpoints
wget -O checkpoints/depth_anything_v2_vitl.pth https://huggingface.co/depth-anything/Depth-Anything-V2-Large/resolve/main/depth_anything_v2_vitl.pth?download=true
```
2. Generate depth maps
```shell
# (Recommended) save depth map as floating-point tiff file
python tools/run_depth_anything_v2.py --encoder vitl --img-path data/truck/images --outdir data/truck/depths
# (not recommended) save depth map as uint8 png file
python Depth-Anything-V2/run.py --encoder vitl --pred-only --grayscale --img-path data/truck/images --outdir data/truck/depths
```
## API Usage
### Gaussian models
`GaussianModel` is the basic 3DGS model.
```python
from gaussian_splatting import GaussianModel
gaussians = GaussianModel(sh_degree).to(device)
```
If you want camera–3DGS joint training, use `CameraTrainableGaussianModel`; its rendering process is different.
```python
from gaussian_splatting import CameraTrainableGaussianModel
gaussians = CameraTrainableGaussianModel(sh_degree).to(device)
```
save and load params:
```python
gaussians.save_ply("output/truck/point_cloud/iteration_30000/point_cloud.ply")
gaussians.load_ply("output/truck/point_cloud/iteration_30000/point_cloud.ply")
```
Initialize 3DGS with a sparse point cloud extracted by COLMAP:
```python
from gaussian_splatting.dataset.colmap import colmap_init
colmap_init(gaussians, "data/truck")
```
### Dataset
Basic colmap dataset:
```python
from gaussian_splatting.dataset.colmap import ColmapCameraDataset, colmap_init
dataset = ColmapCameraDataset("data/truck")
```
Save cameras to JSON and load a JSON dataset:
```python
dataset.save_cameras("output/truck/cameras.json")
from gaussian_splatting import JSONCameraDataset
dataset = JSONCameraDataset("output/truck/cameras.json")
```
Dataset with trainable cameras:
```python
from gaussian_splatting import TrainableCameraDataset
dataset = TrainableCameraDataset("data/truck") # init cameras from colmap
dataset = TrainableCameraDataset.from_json("output/truck/cameras.json") # init cameras from saved json
```
### Inference
```python
for camera in dataset:
out = gaussians(camera)
image = out["render"]
... # compute loss, save image or others
```
### Trainers
`gaussian_splatting.trainer` contains a series of trainers for optimizing 3DGS models.
#### Core Trainers
Basic training methods that handle fundamental optimization tasks:
`BaseTrainer` only optimizes the 3DGS parameters, without densification or camera optimization.
```python
from gaussian_splatting.trainer import BaseTrainer
trainer = BaseTrainer(
gaussians,
scene_extent=dataset.scene_extent(),
... # see gaussian_splatting/trainer/base.py for full options
)
```
`BaseDensificationTrainer` optimizes the 3DGS parameters with densification.
```python
from gaussian_splatting.trainer import BaseDensificationTrainer
trainer = BaseDensificationTrainer(
gaussians,
scene_extent=dataset.scene_extent(),
... # see gaussian_splatting/trainer/densifier/densifier.py for full options
)
```
`BaseCameraTrainer` jointly optimizes the 3DGS parameters and cameras, without densification.
```python
from gaussian_splatting.trainer import BaseCameraTrainer
trainer = BaseCameraTrainer(
gaussians,
scene_extent=dataset.scene_extent(),
dataset=dataset,
... # see gaussian_splatting/trainer/camera_trainable.py for full options
)
```
`BaseDepthTrainer` optimizes the 3DGS parameters with depth regularization.
```python
from gaussian_splatting.trainer import BaseDepthTrainer
trainer = BaseDepthTrainer(
gaussians,
scene_extent=dataset.scene_extent(),
... # see gaussian_splatting/trainer/base.py for full options
)
```
`DepthCameraTrainer` integrates `BaseCameraTrainer` with depth regularization.
```python
from gaussian_splatting.trainer import DepthCameraTrainer
trainer = DepthCameraTrainer(
gaussians,
scene_extent=dataset.scene_extent(),
dataset=dataset,
... # see gaussian_splatting/trainer/depth.py for full options
)
```
#### Enhanced Trainers
The Gaussian Splatting paper also introduces two tricks, "opacity reset" and "lifting SH"; both are included here.
The basic trainers can be combined with opacity reset and lifting SH. For example:
`BaseOpacityResetDensificationTrainer` integrates `BaseDensificationTrainer` with opacity reset.
```python
from gaussian_splatting.trainer import BaseOpacityResetDensificationTrainer
trainer = BaseOpacityResetDensificationTrainer(
gaussians,
scene_extent=dataset.scene_extent(),
... # see gaussian_splatting/trainer/combinations.py for full options
)
```
`DepthOpacityResetDensificationTrainer` integrates `BaseOpacityResetDensificationTrainer` with depth regularization.
```python
from gaussian_splatting.trainer import DepthOpacityResetDensificationTrainer
trainer = DepthOpacityResetDensificationTrainer(
gaussians,
scene_extent=dataset.scene_extent(),
... # see gaussian_splatting/trainer/combinations.py for full options
)
```
`BaseSHLiftOpacityResetDensificationTrainer` integrates `BaseOpacityResetDensificationTrainer` with lifting SH.
```python
from gaussian_splatting.trainer import BaseSHLiftOpacityResetDensificationTrainer
trainer = BaseSHLiftOpacityResetDensificationTrainer(
gaussians,
scene_extent=dataset.scene_extent(),
... # see gaussian_splatting/trainer/combinations.py for full options
)
```
`DepthSHLiftOpacityResetDensificationTrainer` integrates `DepthOpacityResetDensificationTrainer` with lifting SH.
```python
from gaussian_splatting.trainer import DepthSHLiftOpacityResetDensificationTrainer
trainer = DepthSHLiftOpacityResetDensificationTrainer(
gaussians,
scene_extent=dataset.scene_extent(),
... # see gaussian_splatting/trainer/combinations.py for full options
)
```
Similarly, there are `BaseOpacityResetDensificationCameraTrainer`, `DepthOpacityResetDensificationCameraTrainer`, `BaseSHLiftOpacityResetDensificationCameraTrainer`, and `DepthSHLiftOpacityResetDensificationCameraTrainer`, which integrate the above with camera training.
For more, please refer to [`train.py`](./gaussian_splatting/train.py) and [`trainer/combinations.py`](./gaussian_splatting/trainer/combinations.py).
### Training
To use any trainer:
```python
for camera in dataset:
loss, out = trainer.step(camera)
```
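Putting the pieces together, a full run is an epoch loop over the dataset. The sketch below is illustrative (not code from the repo): the `run_training` helper, epoch count, and output path are assumptions; it only wires up the `trainer.step(camera)` and `save_ply` calls shown above.

```python
# Hedged sketch: a minimal multi-epoch training loop combining the dataset,
# trainer, and Gaussian model from the sections above. Names and defaults
# here are illustrative, not the package's API.
def run_training(trainer, dataset, gaussians, epochs=30,
                 out_ply="output/truck/point_cloud/point_cloud.ply"):
    total_loss = 0.0
    for _ in range(epochs):
        for camera in dataset:
            loss, _render = trainer.step(camera)  # one optimization step
            total_loss += float(loss)
    gaussians.save_ply(out_ply)                   # persist trained parameters
    return total_loss
```

In practice you would also decay the learning rate and checkpoint intermediate iterations, as `gaussian_splatting.train` does.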
## Discussion on Additional Features
### Local Relative Depth Regularization
#### Problem: Global Depth Rescaling Limitation
The default implementation uses DepthAnythingV2 for depth estimation ([`tools/run_depth_anything_v2.py`](./tools/run_depth_anything_v2.py)). These estimated depth maps are then scaled using one global factor per scene ([`trainer/depth.py`](gaussian_splatting/trainer/depth.py) here, or [`utils/make_depth_scale.py`](https://github.com/graphdeco-inria/gaussian-splatting/blob/21301643a4354d6e24495c0df5a85354af8bd2be/utils/make_depth_scale.py) in the original repo).
However, this approach suffers from local inaccuracies due to limitations inherent in monocular depth predictions.
As demonstrated below, monocular depth estimation with global rescaling frequently introduces local distortions (for instance, a person is pouring wine, but the spout is not positioned directly above the wine glass):

Using globally scaled depth alone results in artifacts and incorrectly placed surfaces during rendering:

Overlaying the rendered depth map with the DepthAnythingV2-estimated depth map manifests these shortcomings clearly. While the background walls and foreground table approximately match the ground truth, depth estimates for the people remain significantly inaccurate:

#### Root Cause: Spatial Error Patterns in DepthAnythingV2
Although the output depth estimation from DepthAnythingV2 appears visually plausible when inspected independently (as illustrated in the figure below), local depth scale variations remain substantial.

Therefore, a single global scaling cannot adequately account for these local discrepancies.
#### Solution: Local relative depth regularization
Considering that DepthAnythingV2 produces relatively accurate local depth relationships, this repo introduces a local relative depth regularization strategy. Specifically, the strategy involves:
* Divide the depth map into small overlapping windows.
* Compute scaling and offset corrections individually per window.
* Apply these local corrections to guide model predictions.
Implementation details are provided in the function `compute_local_relative_depth_loss` in [`trainer/depth.py`](gaussian_splatting/trainer/depth.py).
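The windowed scale-and-offset fit can be sketched in NumPy. This is a simplified illustration of the idea, not the repo's `compute_local_relative_depth_loss`; the window size, stride, and least-squares formulation are assumptions for the sketch.

```python
import numpy as np

def local_relative_depth_loss(rendered, estimated, win=32, stride=16):
    """Per overlapping window, fit scale a and offset b so that
    a * estimated + b best matches rendered (least squares), then
    penalize the remaining residual. Averages over all windows."""
    H, W = rendered.shape
    losses = []
    for y in range(0, max(H - win, 0) + 1, stride):
        for x in range(0, max(W - win, 0) + 1, stride):
            r = rendered[y:y + win, x:x + win].ravel()
            e = estimated[y:y + win, x:x + win].ravel()
            # Solve [e, 1] @ [a, b]^T ≈ r for the local scale/offset.
            A = np.stack([e, np.ones_like(e)], axis=1)
            (a, b), *_ = np.linalg.lstsq(A, r, rcond=None)
            losses.append(np.mean((a * e + b - r) ** 2))
    return float(np.mean(losses))
```

Because each window gets its own (a, b), a depth map that is locally consistent with the estimate incurs near-zero loss even when no single global scale fits the whole scene.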
The resulting improvements are clearly visible, with significantly reduced artifacts:

Overlaying it with DepthAnythingV2-estimated depth map:

Local regularization notably improves background alignment (e.g., walls), but some inaccuracies remain for complex foreground shapes such as people. This clearly highlights inherent limitations and persistent spatial error patterns in the monocular DepthAnythingV2 estimations.
# 3D Gaussian Splatting for Real-Time Radiance Field Rendering
Bernhard Kerbl*, Georgios Kopanas*, Thomas Leimkühler, George Drettakis (* indicates equal contribution)<br>
| [Webpage](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/) | [Full Paper](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/3d_gaussian_splatting_high.pdf) | [Video](https://youtu.be/T_kXY43VZnk) | [Other GRAPHDECO Publications](http://www-sop.inria.fr/reves/publis/gdindex.php) | [FUNGRAPH project page](https://fungraph.inria.fr) |<br>
| [T&T+DB COLMAP (650MB)](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/input/tandt_db.zip) | [Pre-trained Models (14 GB)](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/pretrained/models.zip) | [Viewers for Windows (60MB)](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/binaries/viewers.zip) | [Evaluation Images (7 GB)](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/evaluation/images.zip) |<br>

This repository contains the official authors' implementation associated with the paper "3D Gaussian Splatting for Real-Time Radiance Field Rendering", which can be found [here](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/). We further provide the reference images used to create the error metrics reported in the paper, as well as recently created, pre-trained models.
<a href="https://www.inria.fr/"><img height="100" src="assets/logo_inria.png"> </a>
<a href="https://univ-cotedazur.eu/"><img height="100" src="assets/logo_uca.png"> </a>
<a href="https://www.mpi-inf.mpg.de"><img height="100" src="assets/logo_mpi.png"> </a>
<a href="https://team.inria.fr/graphdeco/"> <img style="width:100%;" src="assets/logo_graphdeco.png"></a>
Abstract: *Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates. We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and importantly allow high-quality real-time (≥ 30 fps) novel-view synthesis at 1080p resolution. First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows realtime rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.*
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title">BibTeX</h2>
<pre><code>@Article{kerbl3Dgaussians,
author = {Kerbl, Bernhard and Kopanas, Georgios and Leimk{\"u}hler, Thomas and Drettakis, George},
title = {3D Gaussian Splatting for Real-Time Radiance Field Rendering},
journal = {ACM Transactions on Graphics},
number = {4},
volume = {42},
month = {July},
year = {2023},
url = {https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/}
}</code></pre>
</div>
</section>
| text/markdown | yindaheng98 | yindaheng98@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | https://github.com/yindaheng98/gaussian-splatting | null | null | [] | [] | [] | [
"torch",
"torchvision",
"gsplat",
"tqdm",
"plyfile",
"tifffile",
"numpy",
"opencv-python",
"pillow",
"open3d",
"viser; extra == \"viewer\"",
"nerfview; extra == \"viewer\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-21T06:03:31.389016 | gaussian_splatting-2.3.3-cp311-cp311-win_amd64.whl | 970,439 | be/0e/b09e402f81cd75e6979e9d4cbd8101512cf03027d8029cff32bcfb030642/gaussian_splatting-2.3.3-cp311-cp311-win_amd64.whl | cp311 | bdist_wheel | null | false | 75a0952ed37c720d622ff20f9703039f | a60464b0e913a89319a342853d1b4fd548f4e85f77b7c2d98294339eff4699de | be0eb09e402f81cd75e6979e9d4cbd8101512cf03027d8029cff32bcfb030642 | null | [
"LICENSE.md"
] | 295 |
2.4 | mosaic-multigrid | 5.0.0 | MMG: Mosaic Multi-Grid, research-grade multi-agent gridworld environments for reproducible RL experiments | # mosaic_multigrid
**Multi-agent gridworld environments for reproducible RL experiments.**
<p align="center">
<img src="https://github.com/Abdulhamid97Mousa/mosaic_multigrid/raw/main/figures/basketball_3vs3_gameplay.gif" width="700" alt="Basketball 3vs3 gameplay — mosaic_multigrid">
</p>
A maintained fork of [gym-multigrid](https://github.com/ArnaudFickinger/gym-multigrid) by Arnaud Fickinger (2020), modernized to the Gymnasium API with Numba JIT-accelerated observations and reproducible seeding.
<p align="center">
<img src="https://github.com/Abdulhamid97Mousa/mosaic_multigrid/raw/main/figures/before_after_comparison.png" width="800" alt="gym-multigrid vs mosaic_multigrid: ball-carrying observability and sport-specific court rendering">
</p>
## Design Philosophy: Best of Both Worlds
**mosaic_multigrid = gym-multigrid game design + INI multigrid modern infrastructure**
We kept the **challenging partial observability** (`view_size=3`) that makes Soccer/Collect interesting for competitive multi-agent research, while adopting **modern API and optimizations** from INI multigrid standards.
### What We Kept from gym-multigrid (Fickinger 2020)
1. **Partial observability** - `view_size=3` for `SoccerGame4HEnv10x15N2` and `CollectGameEnv` (challenging team coordination)
2. **Game mechanics** - Ball passing, stealing, scoring, team rewards
3. **Research continuity** - Comparable with original papers
### What We Adopted from INI multigrid (2022+)
- **Gymnasium 1.0+ API** - Modern 5-tuple dict-keyed observations
- **3-channel encoding** - `[type, color, state]` format (not 6-channel)
- **Agent class design** - Separate from WorldObj, cleaner architecture
- **pygame rendering** - Modern window system (not matplotlib)
- **Modular structure** - ~20 focused modules (not 1442-line monolith)
### What We Built (Our Contributions)
1. **Reproducibility fix** - Fixed critical global RNG bug
2. **Numba JIT optimization** - 10-100x faster observation generation
3. **Comprehensive tests** - 130+ tests covering all functionality
4. **Framework adapters** - PettingZoo Parallel and AEC (Agent Environment Cycle) integration
5. **Observation wrappers** - FullyObs, ImgObs, OneHot, SingleAgent, TeamObs (SMAC-style)
6. **TeamObs environments** - SMAC-style teammate awareness for team coordination research
---
## What Changed from Upstream: The Full Story
Showing how we combined the best of both packages:
| Aspect | gym-multigrid (Fickinger 2020) | INI multigrid (Oguntola 2023) | **mosaic_multigrid (This Fork)** |
|--------|-------------------------------|-------------------------------|----------------------------------|
| **API** | Old Gym 4-tuple, list-based | Gymnasium 5-tuple, dict-keyed | **Gymnasium 5-tuple, dict-keyed** (from INI) |
| **Actions** | 8 (still=0..done=7) | 7 (left=0..done=6) | **8 actions, noop=0..done=7** (noop restored for AEC compatibility) |
| **Observations** | `(3, 3, 6)` dict (Soccer) | `(7, 7, 3)` dict (default) | **`(3, 3, 3)` dict** (Soccer) |
| **Encoding** | 6 channels | 3 channels [type, color, state] | **3 channels** (from INI) |
| **view_size** | **3** (Soccer/Collect) | **7** (default) | **3 (KEPT from gym-multigrid)** for competitive challenge |
| **Game Logic** | **Soccer, Collect, team rewards** | Exploration tasks (no team games) | **Soccer, Collect** (from gym-multigrid) |
| **`reset()`** | `List[obs]` | `(Dict[obs], Dict[info])` | **`(Dict[obs], Dict[info])`** (from INI) |
| **`step()`** | `(List[obs], ndarray, bool, dict)` | `(Dict, Dict, Dict, Dict, Dict)` | **5-tuple per-agent dicts** (from INI) |
| **Render** | `render(mode='human')` param | `render_mode` constructor param | **`render_mode` constructor** (from INI) |
| **Seeding** | `env.seed(42)` + **broken global RNG** | `reset(seed=42)` + `self.np_random` | **Fixed seeding** (from INI) + **bug fix** (ours) |
| **Window** | matplotlib | pygame | **pygame** (from INI) |
| **Performance** | Pure Python loops | Pure Python | **Numba JIT** (ours, 10-100× faster) |
| **Structure** | 1442-line monolith | Modular package | **~20 focused modules** (from INI) |
| **Dependencies** | `gym>=0.9.6, numpy` | `gymnasium, numpy, pygame` | **+ numba, aenum** (optimizations) |
| **Tests** | Basic test script | Unknown | **130 comprehensive tests** (ours) |
| **PettingZoo** | None | Parallel only (ParallelEnv) | **Parallel + AEC** (ours) via `pettingzoo.utils.conversions` |
| **Use Case** | Multi-agent team research | Single-agent exploration | **Multi-agent competitive** with modern API |
**Observation Space Notation**: The format is `(height, width, channels)` where:
- **gym-multigrid**: `(3, 3, 6)` = 3×3 grid with 6-channel encoding for Soccer/Collect
- **INI multigrid**: `(7, 7, 3)` = 7×7 grid with 3-channel [type, color, state] encoding (default)
- **mosaic_multigrid**: `(3, 3, 3)` = 3×3 grid (kept from gym-multigrid) + 3-channel encoding (from INI)
**Legend**:
- **Bold** in the mosaic_multigrid column = What we adopted or built
- Items from gym-multigrid: view_size=3, Soccer/Collect game mechanics
- Items from INI multigrid: Gymnasium API, 3-channel encoding, pygame, modular structure
- Our contributions: Reproducibility fix, Numba JIT, comprehensive tests, PettingZoo adapters
### Bugs Fixed
1. **Reproducibility bug** (critical): `step()` used `np.random.permutation()` (global RNG) for action ordering. Now uses `self.np_random.random(size=N).argsort()` to respect environment seeding.
2. **No `render_mode`**: Constructor now accepts `render_mode='rgb_array'` or `render_mode='human'`, following Gymnasium convention.
3. **Legacy 4-tuple**: `step()` returns Gymnasium 5-tuple `(obs, rewards, terminated, truncated, info)` with per-agent dicts.
## Included Environments
### SoccerGame (IndAgObs -- Recommended)
<p align="center">
<img src="https://github.com/Abdulhamid97Mousa/mosaic_multigrid/raw/main/figures/Gym-MosaicMultiGrid-Soccer-2vs2-IndAgObs-v0.png" width="480">
</p>
Team-based competitive environment with **FIFA-style field rendering**. Agents score by dropping the ball at the opposing team's goal. Features **teleport passing**, stealing with dual cooldown, ball respawn, and first-to-2-goals termination.
**Recommended variant:** `SoccerGame4HIndAgObsEnv16x11N2` -- 4 agents (2v2), 16x11 grid (FIFA ratio), 1 ball, positive-only shared team reward, `goal_scored_by` tracking in info dict.
### CollectGame (Individual Competition)
<p align="center">
<img src="https://github.com/Abdulhamid97Mousa/mosaic_multigrid/raw/main/figures/Variant_1_Gym-MosaicMultiGrid-Collect-Enhanced-v0.png" width="300">
</p>
Individual competitive collection. 3 agents compete individually to collect the most balls.
**Default variant:** `CollectGame3HEnv10x10N3` — 3 agents, 10×10 grid, 5 wildcard balls, zero-sum.
**Enhanced variant:** `CollectGame3HEnhancedEnv10x10N3` — Natural termination when all balls collected (35× faster).
### Collect-2vs2 Game (Team-Based Collection)
<p align="center">
<img src="https://github.com/Abdulhamid97Mousa/mosaic_multigrid/raw/main/figures/VIEW_SIZE_3_Gym-MosaicMultiGrid-Collect2vs2-Enhanced-v0.png" width="400">
</p>
Team-based competitive collection. 4 agents in 2 teams (2v2) compete to collect the most balls. Similar to Soccer but without goals — agents earn points directly by picking up balls. **7 balls ensures no draws!**
**Default variant:** `CollectGame4HEnv10x10N2` — 4 agents (2v2), 10×10 grid, 7 wildcard balls.
### Soccer 1vs1 (IndAgObs)
1v1 variant of the Soccer environment on the same 16x11 FIFA-style grid. Two agents (one per team) compete head-to-head. Teleport passing is a no-op (no teammates), making this a purely individual duel of ball control, stealing, and scoring. First to 2 goals wins.
**IndAgObs variant:** `SoccerGame2HIndAgObsEnv16x11N2` -- 2 agents (1v1), 16x11 grid, 1 ball, positive-only rewards, max_steps=200.
### Collect 1vs1 (Team-Based Collection)
1v1 variant of the team-based Collect environment. Two agents on separate teams compete to collect 3 wildcard balls on a 10x10 grid. **3 balls (odd number) ensures no draws.** Natural termination when all balls are collected.
**IndAgObs variant (recommended):** `CollectGame2HIndAgObsEnv10x10N2` -- 2 agents (1v1), 10x10 grid, 3 balls, zero-sum, max_steps=200.
**Base variant (deprecated):** `CollectGame2HEnv10x10N2` -- same configuration, max_steps=10,000.
### BasketballGame (3vs3 -- New in v4.0.0)
<p align="center">
<img src="https://github.com/Abdulhamid97Mousa/mosaic_multigrid/raw/main/figures/basketball_3vs3_render.png" width="480">
</p>
Team-based competitive basketball on a 19x11 grid (17x9 playable area). Agents score by dropping the ball at the opposing team's basket (goal on the baseline). Features **teleport passing**, stealing with dual cooldown, ball respawn, first-to-2-goals termination, and **basketball-court rendering** with three-point arcs, paint rectangles, and center circle.
**IndAgObs variant:** `BasketballGame6HIndAgObsEnv19x11N3` — 6 agents (3vs3), 19x11 grid, 1 ball, positive-only rewards, event tracking.
**TeamObs variant:** `Basketball3vs3TeamObsEnv` — IndAgObs + SMAC-style teammate awareness (2 teammates per agent).
---
## Enhanced Environments (v4.0.0)
**IMPORTANT:** We've fixed critical bugs in Soccer and Collect environments! The original environments are kept for backward compatibility, but **Enhanced variants are RECOMMENDED for all new RL research.**
### What's New?
| Environment | Status | Key Improvements |
|------------|--------|-----------------|
| **MosaicMultiGrid-Basketball-3vs3-IndAgObs-v0** | New (v4.0.0) | 3vs3 basketball, 19x11 court, teleport passing, basketball-court rendering |
| **MosaicMultiGrid-Basketball-3vs3-TeamObs-v0** | New (v4.0.0) | Basketball 3vs3 + SMAC-style teammate awareness (2 teammates per agent) |
| **MosaicMultiGrid-Soccer-2vs2-TeamObs-v0** | New (v4.0.0) | Soccer IndAgObs + SMAC-style teammate awareness (positions, directions, has_ball) |
| **MosaicMultiGrid-Collect-2vs2-TeamObs-v0** | New (v4.0.0) | Collect 2v2 IndAgObs + SMAC-style teammate awareness |
| **MosaicMultiGrid-Soccer-2vs2-IndAgObs-v0** | New (v4.0.0) | Ball respawns after goals, first-to-2-goals termination, dual cooldown on stealing, 16x11 FIFA aspect ratio |
| **MosaicMultiGrid-Collect-IndAgObs-v0** | New (v4.0.0) | Natural termination when all balls collected, 35x faster training (300 vs 10,000 steps) |
| **MosaicMultiGrid-Collect-2vs2-IndAgObs-v0** | New (v4.0.0) | Natural termination, 7 balls (odd number prevents draws), team coordination |
| **MosaicMultiGrid-Soccer-1vs1-IndAgObs-v0** | New (v4.1.0) | 1v1 soccer, same FIFA grid, pure individual play |
| **MosaicMultiGrid-Collect-1vs1-IndAgObs-v0** | New (v4.1.0) | 1v1 collection, 3 balls (no draws), natural termination |
| **MosaicMultiGrid-Collect-1vs1-v0** | New (v4.1.0) | 1v1 base collection (deprecated, use IndAgObs) |
| MosaicMultiGrid-Soccer-v0 | Deprecated | Ball disappears after scoring, no termination, runs 10,000 steps always |
| MosaicMultiGrid-Collect-v0 | Deprecated | No termination signal after all balls collected, wastes computation |
| MosaicMultiGrid-Collect-2vs2-v0 | Deprecated | No termination signal after all balls collected |
### Critical Bugs Fixed
**Soccer Environment:**
- **Bug**: Ball disappears after scoring and never respawns -> **`FIXED:`** Ball respawns at a random location
- **Bug**: No natural termination (always runs 10,000 steps) -> **`FIXED:`** First team to 2 goals wins
- **Bug**: Agents can't see who is carrying ball -> **`FIXED:`** STATE channel encoding + visual overlay
- **Bug**: Infinite stealing exploit (no cooldown) -> **`FIXED:`** 10-step dual cooldown for both stealer and victim
**Collect Environment:**
- **Bug**: No termination signal when all balls collected (wastes 95% of computation) -> **`FIXED:`** termination signal emitted when done
- **Result**: **35× faster training** (300 vs 10,000 steps per episode)
## TeamObs Environments (v4.0.0) -- SMAC-Style Teammate Awareness
**For team coordination research**, TeamObs variants add structured teammate
features to each agent's observation dict. This follows the standard MARL
observation augmentation pattern established by SMAC (Samvelyan et al., 2019).
### Why TeamObs?
On a 16x11 field (Soccer) or 10x10 field (Collect) with `view_size=3`, each
agent sees only **7-9% of the grid**. Teammates are almost never visible in
the 3x3 local window. Without TeamObs:
- Passing is **blind** (teleport to random teammate, no position knowledge)
- Agents cannot coordinate coverage (both may search the same area)
- Team strategies are limited to independent exploration
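The visibility figures above can be checked with quick arithmetic. This sketch assumes Soccer's playable interior (walls excluded) is 14x9 inside the 16x11 grid; that interior size is an assumption of the example, not stated by the library.

```python
# Fraction of the field visible in one view_size x view_size window.
def coverage(view_size, width, height):
    return (view_size * view_size) / (width * height)

soccer = coverage(3, 14, 9)    # 9/126 -> ~7% of the Soccer interior
collect = coverage(3, 10, 10)  # 9/100 -> 9% of the Collect grid
print(f"Soccer: {soccer:.1%}, Collect: {collect:.1%}")
```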
With TeamObs, each agent receives its local view **unchanged**, plus:
| Feature | Shape | Description |
|---------|-------|-------------|
| `teammate_positions` | (N, 2) int64 | Relative (dx, dy) from self to each teammate |
| `teammate_directions` | (N,) int64 | Direction each teammate faces (0-3) |
| `teammate_has_ball` | (N,) int64 | 1 if teammate carries ball, 0 otherwise |
Where N = number of teammates per agent (1 in 2v2 environments, 2 in 3vs3 Basketball).
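As an illustration of what the three feature arrays contain, they can be derived from global state like this. `teammate_features` is a hypothetical helper for this sketch, not code from the package:

```python
# Derive TeamObs-style features from global state: relative positions,
# facing directions, and carry flags, mirroring the table above.
def teammate_features(self_pos, teammates):
    """teammates: list of dicts with 'pos', 'direction', 'has_ball'."""
    positions = [(t['pos'][0] - self_pos[0], t['pos'][1] - self_pos[1])
                 for t in teammates]
    directions = [t['direction'] for t in teammates]
    has_ball = [1 if t['has_ball'] else 0 for t in teammates]
    return positions, directions, has_ball

# Agent at (3, 4) with one teammate at (7, 6), facing down, carrying the ball
pos, dirs, ball = teammate_features((3, 4), [
    {'pos': (7, 6), 'direction': 1, 'has_ball': True},
])
print(pos, dirs, ball)  # [(4, 2)] [1] [1]
```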
### Design Rationale
This follows the observation augmentation pattern from:
> Samvelyan, M., Rashid, T., de Witt, C. S., et al. (2019).
> "The StarCraft Multi-Agent Challenge." CoRR, abs/1902.04043.
In SMAC, each agent receives its local view plus structured ally features
(relative positions, health, unit type). We adapt this for gridworld
environments. Teammate features are **environment-level** observation
augmentation -- the RL algorithm decides what to do with the extra
information.
**Not applicable to:** `MosaicMultiGrid-Collect-Enhanced-v0` (3 agents, each
on its own team with `agents_index=[1,2,3]`, so N=0 teammates).
### Documentation
- **[SOCCER_IMPROVEMENTS.md](SOCCER_IMPROVEMENTS.md)** -- Full Soccer environment analysis, TeamObs design rationale, SMAC citation
- **[COLLECT_IMPROVEMENTS.md](COLLECT_IMPROVEMENTS.md)** -- Collect environment analysis, TeamObs for 2v2 variant
---
## Installation
### From PyPI (recommended)
```bash
pip install mosaic-multigrid
```
### From source
```bash
git clone https://github.com/Abdulhamid97Mousa/mosaic_multigrid.git
cd mosaic_multigrid
pip install -e .
```
### Original Environments (Backward Compatibility)
```python
import gymnasium as gym

# Original Soccer: ball disappears after scoring, no termination (always 10,000 steps)
env = gym.make('MosaicMultiGrid-Soccer-v0', render_mode='rgb_array')

# Original Collect: no termination after all balls are collected (always 10,000 steps)
env = gym.make('MosaicMultiGrid-Collect-v0', render_mode='rgb_array')
```
## Partial Observability
**Agents have limited field of view!** We use **view_size=3** (from gym-multigrid) for competitive team games. This creates challenging coordination problems where agents can't see the entire field.
### Why view_size=3?
We **kept the small view size from gym-multigrid** for research continuity:
- **Challenging** - Forces team coordination and communication
- **Realistic** - Agents can't see everything (fog of war)
- **Research proven** - Comparable with Fickinger et al. (2020)
We **adopted modern infrastructure from INI multigrid**:
- Gymnasium API, 3-channel encoding, pygame rendering, Numba JIT
### Visual Comparison
#### Agent View Size
Each agent has **limited perception** - they only see a local grid around them, not the entire environment.
#### Default View: 3×3 (mosaic_multigrid — Competitive)
<p align="center">
<img src="https://github.com/Abdulhamid97Mousa/mosaic_multigrid/raw/main/figures/Default_View_3×3_of_agents.png" width="700">
</p>
Each agent sees only a **3×3 local window** around itself. Coverage: 9 cells. Forward: 2 tiles. Sides: 1 tile each.
Note: With `view_size=3`, agents typically **cannot** see the ball, goals, or teammates — forcing team coordination strategies.
#### View Rotation
**The view rotates with the agent!** The agent is always at the bottom-center, facing "up" in its own reference frame.
<p align="center">
<img src="https://github.com/Abdulhamid97Mousa/mosaic_multigrid/raw/main/figures/Agent_facing_RIGHT_VIEW_ROTATION.png" width="340">
<img src="https://github.com/Abdulhamid97Mousa/mosaic_multigrid/raw/main/figures/Agent_facing_DOWN_VIEW_ROTATION.png" width="340">
</p>
<p align="center">
<img src="https://github.com/Abdulhamid97Mousa/mosaic_multigrid/raw/main/figures/Agent_facing_LEFT_VIEW_ROTATION.png" width="340">
<img src="https://github.com/Abdulhamid97Mousa/mosaic_multigrid/raw/main/figures/Agent_facing_UP_VIEW_ROTATION.png" width="340">
</p>
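The egocentric rotation can be sketched with plain quarter-turns. The mapping from facing direction to number of clockwise turns below is an assumption for illustration, not the library's implementation:

```python
# Rotate a local window so the agent's facing direction maps to "up".
# Directions follow the doc's convention: 0=right, 1=down, 2=left, 3=up.
def rot90_cw(grid):
    """Rotate a list-of-rows grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def egocentric(grid, direction):
    """Apply quarter-turns until the facing direction points up (assumed mapping)."""
    turns = {0: 1, 1: 2, 2: 3, 3: 0}[direction]
    for _ in range(turns):
        grid = rot90_cw(grid)
    return grid

window = [[1, 2],
          [3, 4]]
print(egocentric(window, 3))  # facing up: unchanged -> [[1, 2], [3, 4]]
print(egocentric(window, 0))  # one CW turn -> [[3, 1], [4, 2]]
```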
### Configurable View Size
```python
from mosaic_multigrid.envs import SoccerGameEnv
# Default: 3×3 (competitive challenge)
env = SoccerGameEnv(view_size=3, ...)
obs, _ = env.reset()
print(obs[0]['image'].shape) # (3, 3, 3)
# Match INI multigrid: 7×7 (easier)
env = SoccerGameEnv(view_size=7, ...)
obs, _ = env.reset()
print(obs[0]['image'].shape) # (7, 7, 3)
```
### Observation Format (Enhanced Multi-Agent Encoding)
- `obs[agent_id]['image']` shape: `(view_size, view_size, 3)`
- **Channel 0: Object TYPE** (wall, ball, goal, agent, etc.)
- **Channel 1: Object COLOR** (red, blue, green team colors, etc.)
- **Channel 2: Object STATE** - Context-dependent encoding:
- **For doors**: 0=open, 1=closed, 2=locked (standard MiniGrid)
- **For agents**: 0-3 OR 100-103
- `0-3`: Agent direction (right/down/left/up) when **NOT carrying ball**
- `100-103`: Agent direction **+ ball carrying flag** (e.g., 101 = down + has ball)
- **For other objects**: 0 (unused)
- `obs[agent_id]['direction']`: int (0=right, 1=down, 2=left, 3=up)
- `obs[agent_id]['mission']`: Mission string
**The agent is always at the bottom-center of its view**, looking forward. The view rotates with the agent's direction.
#### Ball Carrying Observability Enhancement
**Key Feature**: Agents can now see when **other agents are carrying the ball**!
This solves a critical observability limitation in the original 3-channel encoding:
```python
# Example: red agent observing a green agent that is carrying the ball.
# The observed cell encodes:
#   obs[red_agent]['image'][1, 0, :] == [Type.agent, Color.green, 101]
# STATE=101 means: facing DOWN (1) + carrying the ball (+100)
# Decoding:
state = 101
has_ball = (state >= 100)  # True
direction = state % 100    # 1 (down)
```
**Why this works**:
- Soccer and Collect have **NO doors** (door states 0-2 are unused)
- We repurpose the unused STATE channel space with offset 100
- No conflicts: door states (0-2), agent direction (0-3), agent+ball (100-103) are all separate
- **Zero memory overhead** - still 3 channels, still uint8 values
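The encoding rule above can be captured in two small helpers. This is a sketch mirroring the documented convention, not code from the package:

```python
# STATE-channel convention: 0-3 = direction only, 100-103 = direction + ball.
def encode_agent_state(direction, has_ball):
    return direction + (100 if has_ball else 0)

def decode_agent_state(state):
    return state % 100, state >= 100  # (direction, has_ball)

assert encode_agent_state(1, True) == 101   # facing down + has ball
assert decode_agent_state(101) == (1, True)
assert decode_agent_state(3) == (3, False)  # facing up, no ball
```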
**Before this fix**:
- Agents could NOT see if others had the ball
- Required memory architectures (LSTM) to track ball possession
- Made stealing/defense strategies nearly impossible
**After this fix**:
- Agents CAN see who has the ball in their view
- Enables reactive defense strategies without memory
- Faster training, better decision-making
**See [PARTIAL_OBSERVABILITY.md](PARTIAL_OBSERVABILITY.md) for detailed visual diagrams and comparison with INI multigrid.**
### Reproducibility
```python
from mosaic_multigrid.envs import SoccerGame4HEnv10x15N2

# Same seed → identical trajectories (reproducibility bug is fixed)
for trial in range(2):
    env = SoccerGame4HEnv10x15N2(render_mode='rgb_array')
    obs, _ = env.reset(seed=42)
    for step in range(100):
        actions = {i: 3 for i in range(4)}  # all forward (forward=3 in the v2 action space)
        obs, *_ = env.step(actions)
    # obs will be identical across trials
```
## Episode Termination & Truncation
Understanding when and how episodes end is crucial for training RL agents. Following the Gymnasium API standard, MOSAIC multigrid distinguishes between **terminated** (natural end condition achieved) and **truncated** (time limit reached).
### Terminology
- **Terminated**: Episode ends naturally when the goal/objective is achieved (e.g., reaching a goal cell, achieving a win condition)
- **Truncated**: Episode ends due to reaching the maximum step limit without achieving the objective
- **max_steps**: Maximum number of environment steps before truncation (default: 10,000 for all MOSAIC games)
### Environment-Specific Criteria
#### Soccer Enhanced (MosaicMultiGrid-Soccer-Enhanced-v0) RECOMMENDED
| Criterion | Condition |
|-----------|-----------|
| **Terminated** | When any team scores 2 goals (first-to-win) |
| **Truncated** | When `max_steps = 200` is reached (configurable) |
| **Winning Condition** | First team to score `goals_to_win` (default: 2) wins |
| **Scoring Mechanism** | Drop ball at opponent's ObjectGoal: +1 shared by the scoring team (positive-only, no penalty to opponents) |
| **Event Tracking** | `goal_scored_by`, `passes_completed`, `steals_completed` in the info dict for credit assignment |
| **Ball Respawn** | Ball respawns at a random location after each goal |
| **Episode Length** | Variable (terminates when a team wins, or truncates at 200 steps) |
| **Cooldown** | 10-step dual cooldown on stealing (both stealer and victim) |
**Design rationale**: Enhanced Soccer provides **natural termination** when a team wins, significantly reducing training time (~50x faster). Ball respawns after each goal to keep gameplay continuous. Rewards are positive-only (following SMAC convention), with `goal_scored_by` and `passes_completed` metadata for credit assignment and assist chain analysis.
```python
import gymnasium as gym

env = gym.make('MosaicMultiGrid-Soccer-Enhanced-v0')
obs, _ = env.reset(seed=42)
for step in range(200):
    actions = {i: agent_policy(obs[i]) for i in range(4)}
    obs, rewards, terminated, truncated, info = env.step(actions)
    if terminated[0]:  # Team scored 2 goals!
        # Determine winner from final rewards
        team1_total = sum(rewards[i] for i in [0, 1])
        team2_total = sum(rewards[i] for i in [2, 3])
        winner = "Team 1 (Green)" if team1_total > 0 else "Team 2 (Red)"
        print(f"{winner} wins! Episode finished in {step} steps")
        break
    if truncated[0]:  # Time limit reached
        print("Time limit reached. Determine winner by cumulative score.")
        break
```
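The 10-step dual cooldown described in the table can be sketched as a standalone tracker. This is illustrative only; `CooldownTracker` is not part of the package:

```python
# After a steal, both the stealer and the victim are locked out of
# stealing for 10 environment steps.
STEAL_COOLDOWN = 10

class CooldownTracker:
    def __init__(self, num_agents):
        self.remaining = {i: 0 for i in range(num_agents)}

    def can_steal(self, agent):
        return self.remaining[agent] == 0

    def record_steal(self, stealer, victim):
        self.remaining[stealer] = STEAL_COOLDOWN
        self.remaining[victim] = STEAL_COOLDOWN

    def tick(self):  # call once per environment step
        for i in self.remaining:
            self.remaining[i] = max(0, self.remaining[i] - 1)

t = CooldownTracker(4)
t.record_steal(stealer=0, victim=2)
assert not t.can_steal(0) and not t.can_steal(2)  # both on cooldown
for _ in range(10):
    t.tick()
assert t.can_steal(0) and t.can_steal(2)  # both recovered after 10 steps
```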
---
#### Soccer Original (MosaicMultiGrid-Soccer-v0) DEPRECATED
| Criterion | Condition |
|-----------|-----------|
| **Terminated** | NEVER - no natural termination |
| **Truncated** | When `max_steps = 10,000` is reached |
| **Winning Condition** | Team with higher cumulative score when truncation occurs |
| **Scoring Mechanism** | Drop ball at opponent's ObjectGoal: +1 to scoring team, -1 to other team (zero-sum) |
| **Episode Length** | Always exactly 10,000 steps (fixed-length competitive game) |
**Design rationale**: Soccer deliberately uses only truncation (no termination) to create **fixed-length competitive matches**. Winner is determined by final score.
```python
import gymnasium as gym

env = gym.make('MosaicMultiGrid-Soccer-v0')
obs, _ = env.reset(seed=42)
cumulative_rewards = {i: 0 for i in range(4)}
for step in range(10000):
    actions = {i: agent_policy(obs[i]) for i in range(4)}
    obs, rewards, terminated, truncated, info = env.step(actions)
    for i in range(4):
        cumulative_rewards[i] += rewards[i]
    # terminated[i] is always False (no natural termination)
    # truncated[i] becomes True at step 10,000
    if truncated[0]:  # All agents truncate simultaneously
        # Determine winner: sum rewards by team
        team1_score = cumulative_rewards[0] + cumulative_rewards[1]  # agents 0,1
        team2_score = cumulative_rewards[2] + cumulative_rewards[3]  # agents 2,3
        winner = "Team 1" if team1_score > team2_score else "Team 2"
        print(f"Game Over! Winner: {winner}")
        break
```
---
#### Collect Enhanced (MosaicMultiGrid-Collect-Enhanced-v0) RECOMMENDED
| Criterion | Condition |
|-----------|-----------|
| **Terminated** | When all 5 balls are collected |
| **Truncated** | When `max_steps = 300` is reached (configurable) |
| **Winning Condition** | Agent with the highest cumulative reward when the episode ends |
| **Scoring Mechanism** | Pickup wildcard ball (index=0): +1 to the agent, -1 to all other agents (zero-sum) |
| **Episode Length** | Variable (typically 100-300 steps; terminates when all balls are collected) |
| **Training Speedup** | **35× faster** than the original (300 vs 10,000 steps) |
**Design rationale**: Enhanced Collect terminates naturally when all balls are collected, eliminating the bug where episodes ran for 10,000 steps with nothing to do. This creates a **35× training speedup** and provides clear termination signals for RL agents.
```python
import gymnasium as gym

env = gym.make('MosaicMultiGrid-Collect-Enhanced-v0')
obs, _ = env.reset(seed=42)
cumulative_rewards = {i: 0 for i in range(3)}
for step in range(300):
    actions = {i: agent_policy(obs[i]) for i in range(3)}
    obs, rewards, terminated, truncated, info = env.step(actions)
    for i in range(3):
        cumulative_rewards[i] += rewards[i]
    if terminated[0]:  # All 5 balls collected!
        winner = max(cumulative_rewards, key=cumulative_rewards.get)
        print(f"Agent {winner} wins! Episode finished in {step} steps")
        print(f"Final scores: {cumulative_rewards}")
        break
```
---
#### Collect Enhanced 2vs2 (MosaicMultiGrid-Collect-2vs2-Enhanced-v0) RECOMMENDED
| Criterion | Condition |
|-----------|-----------|
| **Terminated** | When all 7 balls are collected |
| **Truncated** | When `max_steps = 400` is reached (configurable) |
| **Winning Condition** | Team with highest cumulative score when episode ends |
| **Scoring Mechanism** | Pickup wildcard ball: +1 to entire team, -1 to opponent team (zero-sum) |
| **Episode Length** | Variable (150-400 steps typically) |
| **Ball Count** | 7 balls (ODD number prevents draws!) |
| **Team Assignment** | agents_index=[1, 1, 2, 2] → Team 1 (agents 0,1) vs Team 2 (agents 2,3) |
```python
import gymnasium as gym

env = gym.make('MosaicMultiGrid-Collect-2vs2-Enhanced-v0')
obs, _ = env.reset(seed=42)
for step in range(400):
    actions = {i: agent_policy(obs[i]) for i in range(4)}
    obs, rewards, terminated, truncated, info = env.step(actions)
    if terminated[0]:  # All 7 balls collected!
        team1_score = sum(rewards[i] for i in [0, 1])
        team2_score = sum(rewards[i] for i in [2, 3])
        winner = "Team 1 (Green)" if team1_score > team2_score else "Team 2 (Red)"
        print(f"{winner} wins!")
        break
```
---
#### Soccer 1vs1 (MosaicMultiGrid-Soccer-1vs1-IndAgObs-v0)
| Criterion | Condition |
|-----------|-----------|
| **Terminated** | When any agent scores 2 goals (first-to-win) |
| **Truncated** | When `max_steps = 200` is reached (configurable) |
| **Winning Condition** | First agent to score `goals_to_win` (default: 2) wins |
| **Scoring Mechanism** | Drop ball at opponent's goal: +1 to scorer (positive-only, no penalty to opponent) |
| **Ball Respawn** | Ball respawns at random location after each goal |
| **Episode Length** | Variable (terminates when agent wins, or truncates at 200 steps) |
| **Passing** | Teleport pass is a no-op (no teammates) -- drop always places ball on ground |
```python
import gymnasium as gym

env = gym.make('MosaicMultiGrid-Soccer-1vs1-IndAgObs-v0')
obs, _ = env.reset(seed=42)
for step in range(200):
    actions = {i: agent_policy(obs[i]) for i in range(2)}
    obs, rewards, terminated, truncated, info = env.step(actions)
    if terminated[0]:  # An agent scored 2 goals!
        winner = "Agent 0 (Green)" if rewards[0] > 0 else "Agent 1 (Red)"
        print(f"{winner} wins! Episode finished in {step} steps")
        break
    if truncated[0]:  # Time limit reached
        print("Time limit reached. Determine winner by cumulative score.")
        break
```
---
#### Collect 1vs1 (MosaicMultiGrid-Collect-1vs1-IndAgObs-v0)
| Criterion | Condition |
|-----------|-----------|
| **Terminated** | When all 3 balls are collected |
| **Truncated** | When `max_steps = 200` (configurable) |
| **Winning Condition** | Agent with highest cumulative reward when episode ends |
| **Scoring Mechanism** | Pickup wildcard ball: +1 to agent, -1 to opponent (zero-sum) |
| **Episode Length** | Variable (terminates when all 3 balls collected, or truncates at 200 steps) |
| **Ball Count** | 3 balls (ODD number prevents draws!) |
| **Team Assignment** | agents_index=[1, 2] -- each agent is its own team |
```python
import gymnasium as gym

env = gym.make('MosaicMultiGrid-Collect-1vs1-IndAgObs-v0')
obs, _ = env.reset(seed=42)
cumulative_rewards = {i: 0 for i in range(2)}
for step in range(200):
    actions = {i: agent_policy(obs[i]) for i in range(2)}
    obs, rewards, terminated, truncated, info = env.step(actions)
    for i in range(2):
        cumulative_rewards[i] += rewards[i]
    if terminated[0]:  # All 3 balls collected!
        winner = max(cumulative_rewards, key=cumulative_rewards.get)
        print(f"Agent {winner} wins! Episode finished in {step} steps")
        print(f"Final scores: {cumulative_rewards}")
        break
```
---
#### CollectGame Original (MosaicMultiGrid-Collect-v0) DEPRECATED
| Criterion | Condition |
|-----------|-----------|
| **Terminated** | NEVER - no natural termination |
| **Truncated** | When `max_steps = 10,000` is reached |
| **Winning Condition** | Agent with highest cumulative reward when truncation occurs |
| **Scoring Mechanism** | Pickup wildcard ball (index=0): +1 to agent, -1 to all other agents (zero-sum) |
| **Episode Length** | Always exactly 10,000 steps |
| **Ball Consumption** | 5 wildcard balls total - episode continues even after all balls collected |
**Design rationale**: Individual competition with zero-sum rewards creates a competitive environment where one agent's gain is another's loss. Episodes run for fixed duration regardless of ball availability.
```python
import gymnasium as gym

env = gym.make('MosaicMultiGrid-Collect-v0')
obs, _ = env.reset(seed=42)
cumulative_rewards = {i: 0 for i in range(3)}
balls_collected = {i: 0 for i in range(3)}
for step in range(10000):
    actions = {i: agent_policy(obs[i]) for i in range(3)}
    obs, rewards, terminated, truncated, info = env.step(actions)
    for i in range(3):
        cumulative_rewards[i] += rewards[i]
        if rewards[i] > 0:  # Ball collected
            balls_collected[i] += 1
    # Even after all 5 balls are collected, the episode continues until step 10,000
    if truncated[0]:
        winner = max(cumulative_rewards, key=cumulative_rewards.get)
        print(f"Winner: Agent {winner}")
        print(f"Balls collected: {balls_collected}")
        print(f"Final scores: {cumulative_rewards}")
        break
```
#### Collect-2vs2 Game (MosaicMultiGrid-Collect-2vs2-v0) - 4 Agents, Team Competition
| Criterion | Condition |
|-----------|-----------|
| **Terminated** | NEVER - no natural termination |
| **Truncated** | When `max_steps = 10,000` is reached |
| **Winning Condition** | Team with higher cumulative score when truncation occurs |
| **Scoring Mechanism** | Pickup wildcard ball (index=0): +1 to team, -1 to other team (zero-sum) |
| **Episode Length** | Always exactly 10,000 steps |
| **Ball Consumption** | 7 wildcard balls (ODD number prevents draws!) - episode continues after collection |
| **Team Assignment** | agents_index=[1, 1, 2, 2] → Team 1 (agents 0,1) vs Team 2 (agents 2,3) |
**Design rationale**: Using 7 balls (odd number) mathematically guarantees no draws (one team must collect ≥4, other ≤3). Fixed-length episodes with team-based zero-sum rewards create strategic team coordination challenges.
```python
import gymnasium as gym

env = gym.make('MosaicMultiGrid-Collect-2vs2-v0')
obs, _ = env.reset(seed=42)
cumulative_rewards = {i: 0 for i in range(4)}
for step in range(10000):
    actions = {i: agent_policy(obs[i]) for i in range(4)}
    obs, rewards, terminated, truncated, info = env.step(actions)
    for i in range(4):
        cumulative_rewards[i] += rewards[i]
    if truncated[0]:
        team1_score = cumulative_rewards[0] + cumulative_rewards[1]
        team2_score = cumulative_rewards[2] + cumulative_rewards[3]
        # With 7 balls and zero-sum rewards, the team scores can never be equal
        winner = "Team 1 (Green)" if team1_score > team2_score else "Team 2 (Red)"
        print(f"Winner: {winner}")
        print(f"Team scores (zero-sum): {team1_score} vs {team2_score}")
        break
```
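The no-draw guarantee from the design rationale can be checked exhaustively under the zero-sum reward model (each ball counts +1 for its collector's team and -1 for the other):

```python
# With an odd ball count and zero-sum rewards, team totals can never tie.
def team_scores(balls_team1, total_balls):
    balls_team2 = total_balls - balls_team1
    return balls_team1 - balls_team2, balls_team2 - balls_team1

for b in range(8):  # every possible split of 7 balls
    s1, s2 = team_scores(b, 7)
    assert s1 != s2  # odd total -> a tie is impossible
print("7 balls: no split produces a draw")
```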
### Comparison with MiniGrid
**MiniGrid** environments typically use **both termination and truncation**:
- **Terminated**: When agent reaches the green goal square (`step_on_goal = True`)
- **Truncated**: When `max_steps` reached (default varies: 100-1000 steps)
- **Episode length**: Variable (ends as soon as goal is reached)
**Original MOSAIC multigrid** (the deprecated `-v0` variants) used a different design philosophy:
- **Terminated**: never set in the original competitive games
- **Truncated**: always at `max_steps = 10,000`
- **Episode length**: fixed (always runs the full duration)
- **Rationale**: fixed-length competitive matches where the winner is determined by score, not by "first to finish". The Enhanced variants documented above restore natural termination when a win condition is met.
### Implementation Details (base.py)
```python
from itertools import repeat

def step(self, actions):
    self.step_count += 1
    rewards = self.handle_actions(actions)
    observations = self.gen_obs()
    # Termination: per-agent terminated flags
    # (never set in the original Soccer/Collect - always False)
    terminations = dict(enumerate(self.agent_states.terminated))
    # Truncation: time limit reached
    truncated = self.step_count >= self.max_steps
    truncations = dict(enumerate(repeat(truncated, self.num_agents)))
    info = {}  # abridged: per-agent metadata omitted here
    return observations, rewards, terminations, truncations, info
```
The original Soccer and Collect environments **never call** `on_success()` or `on_failure()` callbacks, so `agent.state.terminated` remains `False` throughout the episode; only truncation ends these episodes. The Enhanced variants set termination explicitly when their win condition is met.
### Configuring max_steps
```python
import gymnasium as gym

from mosaic_multigrid.envs import SoccerGameEnv, CollectGameEnv
# Default: 10,000 steps
env = SoccerGameEnv()
# Custom: 1,000 steps for faster training
env = SoccerGameEnv(max_steps=1000)
# Via gym.make with kwargs
env = gym.make('MosaicMultiGrid-Soccer-v0', max_steps=5000)
```
## Architecture
```
mosaic_multigrid/
├── base.py                 # MultiGridEnv (Gymnasium-compliant base)
├── core/
│   ├── constants.py        # Type, Color, State, Direction enums
│   ├── actions.py          # Action enum (8 actions: noop=0..done=7)
│   ├── world_object.py     # WorldObj numpy-subclass + all object types
│   ├── agent.py            # Agent + AgentState (vectorized, team_index)
│   ├── grid.py             # Grid (numpy state + world_objects cache)
│   └── mission.py          # Mission + MissionSpace
├── utils/
│   ├── enum.py             # IndexedEnum (aenum-based, dynamically extensible)
│   ├── rendering.py        # Tile rendering (fill_coords, downsample, etc.)
│   ├── random.py           # RandomMixin (seeded RNG utilities)
│   ├── obs.py              # Numba JIT observation generation (hot path)
│   └── misc.py             # front_pos, PropertyAlias
├── envs/
│   ├── soccer_game.py      # SoccerGameEnv + variants
│   ├── collect_game.py     # CollectGameEnv + variants
│   └── basketball_game.py  # BasketballGameEnv + 3vs3 variants
├── rendering/
│   └── basketball.py       # Basketball court renderer (arcs, paint, center circle)
├── wrappers.py             # FullyObs, ImgObs, OneHotObs, SingleAgent, TeamObs
├── pettingzoo/             # PettingZoo Parallel + AEC adapters
└── rllib/                  # Ray RLlib MultiAgentEnv adapter
```
### Core Design Decisions
**Agent-not-in-grid**: Agents are NOT stored on the grid (following multigrid-ini). Agent positions are tracked via `AgentState.pos`. The observation generator inserts agents into the observation grid dynamically. This avoids grid corruption when agents overlap.
**numpy subclass pattern**: `WorldObj(np.ndarray)` and `AgentState(np.ndarray)` — domain objects ARE their numerical encoding. No serialization overhead.
**team_index separation**: `agent.index` (unique identity) vs `agent.team_index` (team membership). The original code conflated these — your agent index WAS your team.
**Numba JIT**: All observation generation functions use `@nb.njit(cache=True)`. Enum values are extracted to plain `int` constants at module level because Numba cannot access Python enum attributes.
## Action Space
### Action Enum Comparison
| Action | Upstream (Fickinger 2020) | mosaic_multigrid v1 | **mosaic_multigrid v2 (this fork)** | multigrid-ini (Oguntola 2023) |
|--------|---:|---:|---:|---:|
| noop | -- (was `still`) | -- | **0** | -- |
| still | **0** | -- | -- | -- |
| left | 1 | **0** | **1** | **0** |
| right | 2 | **1** | **2** | **1** |
| forward| 3 | **2** | **3** | **2** |
| pickup | 4 | **3** | **4** | **3** |
| drop | 5 | **4** | **5** | **4** |
| toggle | 6 | **5** | **6** | **5** |
| done | 7 | **6** | **7** | **6** |
| **Total** | **8** | **7** | **8** | **7** |
**Why `noop` was added (AEC + Parallel API compatibility):**
In AEC (Agent-Environment Cycle) mode, only one agent acts per physics step.
All other agents must still submit a *valid* action so the environment can advance,
but they must not change state. Without a dedicated no-op:
- The previous action 0 was `left` (turn left).
- Non-acting agents would silently rotate on every step — corrupting the episode
and invalidating any comparison between AEC and Parallel results.
`noop=0` is the fix. This design is directly inspired by **MeltingPot** (Google DeepMind),
which uses `NOOP=0` for the same reason. The `done` action (index 7) signals intentional
task completion and is semantically different — both cause no physical movement, but only
`noop` should be used by non-acting agents in AEC mode.
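A minimal sketch of how one AEC turn maps onto the parallel interface, with every non-acting agent submitting `noop=0` (`aec_actions` is a hypothetical helper, not part of the package):

```python
# Build the full action dict for one AEC turn: the acting agent's chosen
# action plus noop for everyone else, so no other agent changes state.
NOOP = 0

def aec_actions(acting_agent, action, num_agents):
    actions = {i: NOOP for i in range(num_agents)}
    actions[acting_agent] = action
    return actions

print(aec_actions(2, 3, 4))  # {0: 0, 1: 0, 2: 3, 3: 0}
```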
> **Migration note (v1 → v2):** All action indices shifted **up by 1**.
> Any pre-trained policy or hardcoded action index from v1 will need updating:
> `left=0→1 right=1→2 forward=2→3 pickup=3→4 drop=4→5 toggle=5→6 done=6→7`
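For migrating v1 policies, the shift can be expressed as a lookup table (a convenience sketch, not an API provided by the package):

```python
# v1 -> v2 remap from the migration note: every index shifts up by one.
V1_TO_V2 = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7}

def migrate_action(v1_action):
    """Translate a v1 policy's action index to the v2 action space."""
    return V1_TO_V2[v1_action]

assert migrate_action(2) == 3  # v1 forward -> v2 forward
```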
## Observation Space
Each agent receives a partial observation dict:
```python
{
'image': np.ndarray, # (view_size, view_size, 3) — [Type, Color, State] per cell
'direction': int, # Agent facing direction (0=right, 1=down, 2=left, 3=up)
'mission': str, # Mission string
}
```
The default `view_size=3` gives each agent a 3x3 partial view (matching our competitive game design). Each cell encodes 3 values (Type index, Color index, State index), down from 6 in the original.
## Wrappers
| Wrapper | Description |
|---------|-------------|
| `FullyObsWrapper` | Full grid observation instead of partial agent view |
| `ImgObsWrapper` | Returns only the image array (drops direction/mission) |
| `OneHotObsWrapper` | One-hot encodes the observation image (Numba JIT) |
| `SingleAgentWrapper` | Unwraps multi-agent dict for single-agent use |
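To illustrate what a one-hot transform does to a single `[Type, Color, State]` cell, here is a plain-Python sketch. The vocabulary sizes are placeholders, not the library's real dimensions; note that the STATE channel in this fork can reach 103, so a real encoder would need a correspondingly larger state vocabulary:

```python
# One-hot encode each channel of a (type, color, state) cell separately.
def one_hot(value, size):
    vec = [0] * size
    vec[value] = 1
    return vec

cell = (2, 1, 0)    # (type, color, state) indices for one cell
sizes = (11, 6, 4)  # assumed vocabulary sizes for this sketch
encoded = [one_hot(v, s) for v, s in zip(cell, sizes)]
print(sum(len(v) for v in encoded))  # 21 values per cell
```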
## Framework Adapters
### PettingZoo (Parallel + AEC)
mosaic_multigrid supports both PettingZoo stepping paradigms:
- **Parallel API** ([docs](https://pettingzoo.farama.org/api/parallel/)): All agents submit actions simultaneously via a single `step(actions_dict)` call. This is the native mode for mosaic_multigrid.
- **AEC API** ([docs](https://pettingzoo.farama.org/api/aec/)): Agents take turns sequentially via `agent_iter()`. Internally, this converts the Parallel env using PettingZoo's `parallel_to_aec()` utility -- actions are buffered until every agent has acted, then forwarded to the underlying parallel env in one batch.
For background on PettingZoo's multi-agent API design, see [Terry et al. (2021)](https://arxiv.org/abs/2009.14471).
#### Parallel API (simultaneous stepping)
```python
from mosaic_multigrid.envs import SoccerGame4HEnv10x15N2
from mosaic_multigrid.pettingzoo i | text/markdown | Abdulhamid Mousa | Abdulhamid Mousa <abdulhamid97mousa@gmail.com> | null | null | Apache-2.0 | reinforcement-learning, multi-agent, gymnasium, gridworld, soccer, basketball, competitive | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | https://github.com/Abdulhamid97Mousa/mosaic_multigrid | null | >=3.8 | [] | [] | [] | [
"aenum>=1.3.0",
"numba>=0.53.0",
"numpy>=1.18.0",
"gymnasium>=0.26",
"pygame>=2.2.0",
"ray[rllib]>=2.0; extra == \"rllib\"",
"pettingzoo>=1.22; extra == \"pettingzoo\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-timeout>=2.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Abdulhamid97Mousa/mosaic_multigrid",
"Repository, https://github.com/Abdulhamid97Mousa/mosaic_multigrid"
] | twine/6.2.0 CPython/3.11.0rc1 | 2026-02-21T06:02:10.880537 | mosaic_multigrid-5.0.0.tar.gz | 133,348 | 03/5d/5e19d7160c10e298f8a3d40f73ccb046bcc0f77599c61b5892ad4639e7d1/mosaic_multigrid-5.0.0.tar.gz | source | sdist | null | false | c97479cfc848b8a65e261ba770438b6f | 5e6ca5f9f28d68538e0fc49778d25a98b76948e65c1f1d23d123f5fa028a1a19 | 035d5e19d7160c10e298f8a3d40f73ccb046bcc0f77599c61b5892ad4639e7d1 | null | [
"LICENSE"
] | 204 |
2.4 | upload-socials | 1.2.0 | A Python library to automate video uploads to YouTube. | # 🚀 upload-socials
Streamline your content workflow by automating video uploads to YouTube with this Python library. Now supports automated monetization settings!
---
## ⚠️ Prerequisites
This is a GUI automation library and **requires a specific setup to function**:
1. **Screen Resolution:** Your screen resolution should ideally be **1920x1080**. The browser window should be maximized during the process.
2. **Reference Images:** GUI automation works by finding images on the screen. You **must** provide your own folders of reference images (e.g., screenshots of the 'Upload', 'Next', and 'Publish' buttons).
3. **Platform Language:** The YouTube Studio website must be set to **English**.
This library is best suited for developers comfortable with tuning GUI automation scripts.
## Installation
```bash
pip install upload-socials
```
## Setup: Reference Images
You must create a directory (e.g., `assets/youtube`) containing screenshots of the YouTube Studio interface. The script looks for `.png` files with specific names.
**Basic Upload Images:**
* `create.png`, `uploadvids.png`, `select.png`
* `filename.png` (The file input field in Windows Explorer)
* `title.png`, `tell.png` (Description box)
* `thumbnail.png`
* `showmore.png`, `tags.png`
* `next.png`, `publish.png`
* `processing.png` (The "SD/HD processing" checkmark at the end)
**Monetization Images (Required if `monetization=True`):**
* `monetizeselect.png` (The monetization tab/dropdown)
* `monetizeon.png` (Radio button to turn On)
* `monetizedone.png` (Done button inside dropdown)
* `monetizeactive.png`
* `monetizenone.png` (The "None of the above" checkbox for ad suitability)
* `monetizesubmit.png` (Submit rating button)
* `monetizepublic.png` (Public visibility radio button)
* `publish2.png` (Secondary publish button if layout differs)
## Quick Start
```python
from upload_socials import upload_youtube
# 1. Define the path to your screenshots folder
YT_IMAGE_PATH = r"C:\my_automation_images\youtube"

# 2. Define video details
video_file = r"C:\path\to\my_awesome_video.mp4"
video_title = "My First Automated Upload!"
video_desc = "This was uploaded using the upload-socials Python library."
thumb_file = r"C:\path\to\my_thumbnail.png"
video_tags = "python,automation,coding"

# 3. Call the function
upload_youtube(
    filepath=video_file,
    title=video_title,
    image_path=YT_IMAGE_PATH,
    description=video_desc,
    thumbnail=thumb_file,
    tags=video_tags,
    monetization=True,  # Set to True to enable ads and submit the rating
)
```
## API Reference
### `upload_youtube(...)`
Automates the upload process to YouTube Studio.
| Parameter | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| `filepath` | `str` | **Required** | Absolute path to the video file. |
| `title` | `str` | **Required** | The title of the YouTube video. |
| `image_path` | `str` | `.../uploadyt` | Path to the folder containing your `.png` reference screenshots. |
| `description` | `str` | `None` | The text for the video description. |
| `channelurl` | `str` | `studio.youtube.com` | The starting URL. |
| `thumbnail` | `str` | `None` | Absolute path to a thumbnail image. |
| `tags` | `str` | `None` | Comma-separated string of tags. |
| `monetization`| `bool` | `False` | If `True`, performs steps to enable monetization, select "None of the above" for suitability, and submit rating. |
## Tips for Success
* **Wait Times:** The script uses `optimiseWait` to look for images. If your internet is slow, ensure your screenshots are accurate so the script doesn't time out.
* **Mouse Control:** Do not touch the mouse or keyboard while the script is running.
* **Smart Paste:** This library uses a smart pasting method to handle emojis and special characters in titles and descriptions automatically.
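Because a single missing screenshot can stall the whole run, it can help to verify the reference images up front. The helper below is a suggestion, not part of the library; it checks for the monetization-related screenshots listed above:

```python
from pathlib import Path

# Hypothetical pre-flight check: confirm the reference screenshots exist
# before starting an upload, so optimiseWait doesn't time out mid-run.
REQUIRED_IMAGES = [
    "monetizenone.png",
    "monetizesubmit.png",
    "monetizepublic.png",
    "publish2.png",
]

def missing_screenshots(image_path, required=REQUIRED_IMAGES):
    """Return the names of required screenshots not found in image_path."""
    folder = Path(image_path)
    return [name for name in required if not (folder / name).is_file()]

if __name__ == "__main__":
    missing = missing_screenshots(r"C:\my_automation_images\youtube")
    if missing:
        print("Missing screenshots:", ", ".join(missing))
```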
## License
This project is licensed under the MIT License.
| text/markdown | AMAMazing | alexmalone489@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/AMAMazing/upload-socials | null | >=3.6 | [] | [] | [] | [
"pyautogui",
"pywin32",
"optimisewait",
"smartpaste"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.6 | 2026-02-21T06:00:13.050343 | upload_socials-1.2.0.tar.gz | 75,044 | 07/08/8cb89c12dd63788a22cab5bf14d65390bb47d97a809e6302908d8370047c/upload_socials-1.2.0.tar.gz | source | sdist | null | false | 44c705d08475cc5d628952a44407be48 | 16160b31935eebbde8c6292f94415545f10f468a61f47c5fb1f93941bf4cc31c | 07088cb89c12dd63788a22cab5bf14d65390bb47d97a809e6302908d8370047c | null | [
"LICENSE"
] | 207 |
2.4 | animuz-core | 0.1.4 | Core shared utilities for Animuz RAG system - LLM clients, pipelines, vector DB, and document ingestion | # animuz-core
Core shared utilities for Animuz RAG (Retrieval-Augmented Generation) system.
## Features
- **LLM Clients**: OpenAI, Anthropic Claude, Ollama
- **RAG Pipelines**: Simple and Agentic RAG implementations
- **Vector Database**: Qdrant integration with hybrid search (dense + sparse)
- **Embedding Clients**: Multiple providers (local server, Modal, S3/SageMaker)
- **Document Ingestion**: Azure Document Intelligence, Unstructured, PDF extraction, structured text parsing
- **CloudWatch Logging**: Structured JSON logging with watchtower
## Requirements
- Python >= 3.10
## Installation
Install the core package (minimal dependencies only):
```bash
pip install animuz-core
```
Then install only the extras you need:
```bash
# Single extra
pip install animuz-core[openai]
# Multiple extras
pip install animuz-core[openai,qdrant,aws]
# Everything
pip install animuz-core[all]
```
Works with uv too:
```bash
uv pip install animuz-core[openai,qdrant]
```
### Available Extras
| Extra | What it installs | Use when you need |
| ----------- | ------------------------------------------------- | --------------------------------------------------- |
| `openai` | `openai` | OpenAI GPT models |
| `anthropic` | `anthropic` | Anthropic Claude models |
| `ollama` | `ollama` | Local LLMs via Ollama |
| `qdrant` | `qdrant-client` | Qdrant vector database |
| `aws` | `boto3`, `aiobotocore`, `watchtower`, `sagemaker` | S3, SageMaker embeddings, CloudWatch logging |
| `azure` | `azure-ai-documentintelligence` | Azure Document Intelligence for PDF ingestion |
| `ingest` | `unstructured-client`, `PyMuPDF` | Document parsing (Unstructured API, PDF extraction) |
| `fastapi` | `fastapi` | Streaming SSE endpoints |
| `all` | All of the above | Everything |
| `dev` | `all` + `pytest`, `black`, `ruff`, `mypy` | Development and testing |
## Usage
### Unified RAG API
```python
from animuz_core import RAG, RAGConfig, tool
@tool(description="Lookup a user by ID")
def lookup_user(user_id: str) -> str:
return f"User {user_id}"
config = RAGConfig.from_env().with_defaults()
rag = RAG(config=config, tools=[lookup_user])
# (run inside an async function, e.g. via asyncio.run)
await rag.add_doc("docs/intro.md", user_chat_id="demo")
response = await rag.chat("What is this project?", user_chat_id="demo")
```
Example script:
```bash
python scripts/quickstart_rag.py
```
### LLM Clients
```python
from animuz_core.genai import OpenAIAgentClient, AnthropicAgentClient, OLlamaClient
# OpenAI agent with tool use
agent = OpenAIAgentClient(tools=my_tools)
response = await agent.chat(messages, model="gpt-4o")
# Anthropic agent with tool use
agent = AnthropicAgentClient(tools=my_tools)
response = await agent.chat(messages, model="claude-sonnet-4-20250514")
# Ollama (local)
client = OLlamaClient()
response = await client.chat(messages, model="llama3")
```
### RAG Pipelines
```python
from animuz_core.pipelines import AgenticRAG, SimpleRAG
# Agentic RAG - LLM decides when to call the retriever
pipeline = AgenticRAG(
llm=agent,
embedding_client=embedding_client,
qdrant_client=qdrant_client,
)
result = await pipeline.run(query="What is RAG?", user_chat_id="tenant-123")
# Simple RAG - always retrieves then generates
pipeline = SimpleRAG(
llm=client,
embedding_client=embedding_client,
qdrant_client=qdrant_client,
)
result = await pipeline.run(query="What is RAG?", user_chat_id="tenant-123")
```
### Vector Database
```python
from animuz_core.vectordb import QdrantDBClient
client = QdrantDBClient()
# Hybrid search with multi-tenant isolation
results = await client.search(
dense_vector=dense_vec,
sparse_vector=sparse_vec,
user_chat_id="tenant-123",
limit=5,
)
```
### Embedding
```python
from animuz_core.embedding import EmbeddingClient, ModalEmbeddingClient, S3EmbeddingClient
# Local embedding server
client = EmbeddingClient()
dense, sparse = await client.embed("Some text to embed")
# Modal-hosted embeddings
client = ModalEmbeddingClient()
# S3/SageMaker embeddings
client = S3EmbeddingClient()
```
### Document Ingestion
```python
from animuz_core.ingest import AzureDocAiClient, MyUnstructuredClient, Structured
# Azure Document Intelligence (PDFs)
azure_client = AzureDocAiClient()
text = await azure_client.extract("document.pdf")
# Unstructured API
unstructured = MyUnstructuredClient()
chunks = await unstructured.ingest("document.docx")
# Structured text (txt, md, csv)
structured = Structured()
chunks = structured.split("document.txt")
```
### Top-level Imports
All main classes are re-exported from the package root:
```python
from animuz_core import (
RAG, RAGConfig, tool,
AgenticRAG, SimpleRAG,
OpenAIAgentClient, OpenAILLMClient, AnthropicClient, AnthropicAgentClient, OLlamaClient,
QdrantDBClient,
EmbeddingClient,
AzureDocAiClient, MyUnstructuredClient, Structured,
)
```
## Development
```bash
# Clone and install in editable mode with dev dependencies
git clone <repo-url>
cd animuz-core
pip install -e ".[dev]"
# Run tests
pytest tests/
# Run integration tests (requires external services + env vars)
pytest -m integration tests/integration/
pytest -m integration tests/integration/test_e2e_rag_wrapper_simple.py
# Format
black src/
ruff check src/
```
## Publishing to PyPI
1. Bump the version in `pyproject.toml` and `__init__.py`.
2. Build the package:
```bash
uv pip install --upgrade build # python -m pip install --upgrade build
uv run python -m build # python -m build
```
3. (Optional) Verify the artifacts:
```bash
uv pip install --upgrade twine # python -m pip install --upgrade twine
uv run python -m twine check dist/* # python -m twine check dist/*
```
4. Upload to TestPyPI first:
```bash
uv run python -m twine upload -r testpypi dist/* # python -m twine upload -r testpypi dist/*
```
5. Upload to PyPI:
```bash
uv run python -m twine upload dist/* # python -m twine upload dist/*
```
Notes:
- Create a PyPI API token and set `TWINE_USERNAME=__token__` and `TWINE_PASSWORD=<your-token>`.
- If you upload to TestPyPI, install with `pip install -i https://test.pypi.org/simple animuz-core` to verify.
### Integration Test Setup (Qdrant)
Use Docker Compose to run Qdrant locally:
```bash
docker compose -f docker-compose-qdrant.yml up -d qdrant
```
Then set the Qdrant env vars (example):
```bash
export QDRANT_HOST=localhost
export QDRANT_PORT=6333
```
## Environment Variables
The package reads configuration from environment variables (loaded via `python-dotenv`):
| Variable | Used by |
| ------------------------------------------------------ | --------------------------- |
| `OPENAI_API_KEY` | OpenAI client |
| `ANTHROPIC_API_KEY` | Anthropic client |
| `QDRANT_HOST`, `QDRANT_PORT`, `QDRANT_COLLECTION_NAME` | Qdrant client |
| `QDRANT_CLOUD_API_KEY` | Qdrant Cloud |
| `EMBEDDING_HOST`, `EMBEDDING_PORT` | Embedding client |
| `AZURE_DOCAI_KEY`, `AZURE_DOCAI_ENDPOINT` | Azure Document Intelligence |
| `UNSTRUCTURED_ENDPOINT`, `UNSTRUCTURED_API_KEY` | Unstructured client |
| `S3_BUCKET_NAME`, `S3_DOWNLOAD_DIR` | S3 operations |
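For example, the Qdrant variables above might be read with fallbacks like this. The helper and the default collection name are hypothetical, shown only to illustrate the pattern; the actual animuz-core config loading may differ:

```python
import os

# Hypothetical helper illustrating how the documented Qdrant variables
# could be read with sensible defaults (defaults here are assumptions).
def qdrant_settings(env=os.environ) -> dict:
    return {
        "host": env.get("QDRANT_HOST", "localhost"),
        "port": int(env.get("QDRANT_PORT", "6333")),
        "collection": env.get("QDRANT_COLLECTION_NAME", "documents"),
    }

settings = qdrant_settings({"QDRANT_HOST": "qdrant.internal"})
```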
## @tool decorator API
```python
from animuz_core import tool
@tool(description="Search documents")
async def rag(query: str, user_chat_id: str) -> str:
...
```
```python
agent = Agent(model="gpt-4o", tools=[rag])
response = await agent.chat(messages, system_prompt="You are helpful.")
```
## License
MIT
| text/markdown | null | Animuz Team <dev@animuz.com> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"httpx>=0.25.0",
"aiohttp>=3.9.0",
"requests>=2.31.0",
"langchain-text-splitters>=0.0.1",
"openai>=1.0.0; extra == \"openai\"",
"anthropic>=0.39.0; extra == \"anthropic\"",
"ollama>=0.1.0; extra == \"ollama\"",
"qdrant-client>=1.7.0; extra == \"qdrant\"",
"boto3>=1.28.0; extra == \"aws\"",
"aiobotocore>=2.7.0; extra == \"aws\"",
"watchtower>=3.0.0; extra == \"aws\"",
"sagemaker>=2.200.0; extra == \"aws\"",
"azure-ai-documentintelligence>=1.0.0b2; extra == \"azure\"",
"unstructured-client>=0.11.0; extra == \"ingest\"",
"PyMuPDF>=1.23.0; extra == \"ingest\"",
"fastapi>=0.104.0; extra == \"fastapi\"",
"animuz-core[anthropic,aws,azure,fastapi,ingest,ollama,openai,qdrant]; extra == \"all\"",
"animuz-core[all]; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.11 | 2026-02-21T05:59:13.614085 | animuz_core-0.1.4.tar.gz | 59,622 | c5/57/2768e6fb79de0ae6fb319e2ae6d0bb9a374c6a67c6951177d2f43ef37081/animuz_core-0.1.4.tar.gz | source | sdist | null | false | 9f69ddcbd6ac0c540aaab3e71df20b61 | 4e564fad14db63a113a0c6d16b5ab34565c3e861283050e3a66e895e3996b0bd | c5572768e6fb79de0ae6fb319e2ae6d0bb9a374c6a67c6951177d2f43ef37081 | null | [
"LICENSE"
] | 220 |
2.4 | zyndai-agent | 0.2.3 | A multi-framework AI agent SDK for the Zynd AI Network. Supports LangChain, LangGraph, CrewAI, PydanticAI, and custom agents with a unified invoke() interface. Provides Identity Management (Polygon ID/DID), Agent Discovery & Search, HTTP Webhook and MQTT Communication, and x402 Micropayments. | # ZyndAI Agent SDK
A powerful Python SDK that enables AI agents to communicate securely and discover each other on the ZyndAI Network. Built with **HTTP webhooks**, **identity verification**, **agent discovery**, and **x402 micropayments** at its core.
## Features
- **Auto-Provisioning**: Agents are automatically created and registered on first run
- **Smart Agent Discovery**: Search and discover agents using semantic keyword matching
- **HTTP Webhook Communication**: Async and sync request/response patterns with embedded Flask server
- **x402 Micropayments**: Built-in support for pay-per-use API endpoints
- **Multi-Framework Support**: Works with LangChain, LangGraph, CrewAI, and PydanticAI
- **Decentralized Identity**: Secure agent identity via Polygon ID credentials
## Installation
```bash
pip install zyndai-agent
```
Or install from source:
```bash
git clone https://github.com/Zynd-AI-Network/zyndai-agent.git
cd zyndai-agent
pip install -r requirements.txt
```
## Quick Start
### 1. Get Your API Key
1. Visit [dashboard.zynd.ai](https://dashboard.zynd.ai)
2. Connect your wallet and create an account
3. Get your **API Key** from the dashboard
### 2. Environment Setup
Create a `.env` file:
```env
ZYND_API_KEY=your_api_key_from_dashboard
OPENAI_API_KEY=your_openai_api_key
TAVILY_API_KEY=your_tavily_api_key # Optional, for search
```
### 3. Create Your First Agent
The SDK automatically provisions your agent identity on first run:
```python
from zyndai_agent.agent import AgentConfig, ZyndAIAgent
from zyndai_agent.message import AgentMessage
from dotenv import load_dotenv
import os
load_dotenv()
# Configure your agent
agent_config = AgentConfig(
name="My First Agent",
description="A helpful assistant agent",
capabilities={
"ai": ["nlp"],
"protocols": ["http"],
"services": ["general_assistance"]
},
webhook_host="0.0.0.0",
webhook_port=5000,
registry_url="https://registry.zynd.ai",
api_key=os.environ["ZYND_API_KEY"]
)
# Initialize - auto-creates agent identity on first run
agent = ZyndAIAgent(agent_config=agent_config)
print(f"Agent ID: {agent.agent_id}")
print(f"Webhook URL: {agent.webhook_url}")
print(f"Payment Address: {agent.pay_to_address}")
# Handle incoming messages
def message_handler(message: AgentMessage, topic: str):
print(f"Received: {message.content}")
agent.set_response(message.message_id, "Hello! I received your message.")
agent.add_message_handler(message_handler)
# Keep the process alive so the webhook server can receive messages
import time
while True:
    time.sleep(1)
```
## Agent Discovery
Find agents using semantic keyword search:
```python
# Search for agents by capabilities
agents = agent.search_agents_by_capabilities(
capabilities=["stock comparison", "financial analysis"],
top_k=5
)
for found_agent in agents:
print(f"Name: {found_agent['name']}")
print(f"Description: {found_agent['description']}")
print(f"Webhook: {found_agent['httpWebhookUrl']}")
# Or search with keyword
agents = agent.search_agents_by_keyword("stock analysis", limit=10)
```
## Agent-to-Agent Communication
### Connect and Send Messages
```python
# Find and connect to another agent
agents = agent.search_agents_by_keyword("stock comparison")
if agents:
target = agents[0]
agent.connect_agent(target)
# Send a message
agent.send_message("Compare AAPL and GOOGL stocks")
```
### Synchronous Request/Response
For immediate responses, use the sync endpoint:
```python
import requests
from zyndai_agent.message import AgentMessage
# Create message
message = AgentMessage(
content="What is the weather today?",
sender_id=agent.agent_id,
message_type="query",
sender_did=agent.identity_credential
)
# Send to sync endpoint (waits for response)
response = requests.post(
"http://localhost:5001/webhook/sync",
json=message.to_dict(),
timeout=60
)
result = response.json()
print(result["response"])
```
## x402 Micropayments
### Enable Payments on Your Agent
Charge for your agent's services:
```python
agent_config = AgentConfig(
name="Premium Stock Agent",
description="Stock analysis with real-time data",
capabilities={"ai": ["financial_analysis"], "protocols": ["http"]},
webhook_host="0.0.0.0",
webhook_port=5001,
registry_url="https://registry.zynd.ai",
api_key=os.environ["ZYND_API_KEY"],
price="$0.01" # Charge $0.01 per request
)
agent = ZyndAIAgent(agent_config=agent_config)
# x402 payment middleware is automatically enabled
```
### Pay for Other Agent Services
The SDK automatically handles x402 payments:
```python
# Use the x402 processor for paid requests
response = agent.x402_processor.post(
"http://paid-agent:5001/webhook/sync",
json=message.to_dict()
)
# Payment is handled automatically!
```
### Access Paid APIs
```python
# Make requests to any x402-protected API
response = agent.x402_processor.get(
"https://api.premium-data.com/stock",
params={"symbol": "AAPL"}
)
print(response.json())
```
## Complete Example: Stock Comparison Agents
### Stock Comparison Agent (Paid Service)
```python
# stock_agent.py
from zyndai_agent.agent import AgentConfig, ZyndAIAgent
from zyndai_agent.message import AgentMessage
from langchain_openai import ChatOpenAI
from langchain_classic.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.tools.tavily_search import TavilySearchResults
import os
agent_config = AgentConfig(
name="Stock Comparison Agent",
description="Professional stock comparison and financial analysis",
capabilities={
"ai": ["nlp", "financial_analysis"],
"protocols": ["http"],
"services": ["stock_comparison", "market_research"]
},
webhook_host="0.0.0.0",
webhook_port=5003,
registry_url="https://registry.zynd.ai",
api_key=os.environ["ZYND_API_KEY"],
price="$0.0001", # Charge per request
config_dir=".agent-stock" # Separate identity
)
agent = ZyndAIAgent(agent_config=agent_config)
# Setup LangChain
llm = ChatOpenAI(model="gpt-3.5-turbo")
search = TavilySearchResults(max_results=3)
prompt = ChatPromptTemplate.from_messages([
("system", "You are a stock analysis expert. Use search for current data."),
MessagesPlaceholder(variable_name="chat_history"),
("human", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad")
])
executor = AgentExecutor(
agent=create_tool_calling_agent(llm, [search], prompt),
tools=[search],
verbose=True
)
def handler(message: AgentMessage, topic: str):
result = executor.invoke({"input": message.content, "chat_history": []})
agent.set_response(message.message_id, result["output"])
agent.add_message_handler(handler)
print(f"Stock Agent running at {agent.webhook_url}")
print(f"Price: $0.0001 per request")
import time
while True:
    time.sleep(1)  # keep the process alive without burning CPU
```
### User Agent (Client)
```python
# user_agent.py
from zyndai_agent.agent import AgentConfig, ZyndAIAgent
from zyndai_agent.message import AgentMessage
import os
agent_config = AgentConfig(
name="User Agent",
description="Interactive assistant for stock research",
capabilities={"ai": ["nlp"], "protocols": ["http"]},
webhook_host="0.0.0.0",
webhook_port=5004,
registry_url="https://registry.zynd.ai",
api_key=os.environ["ZYND_API_KEY"],
config_dir=".agent-user" # Separate identity
)
agent = ZyndAIAgent(agent_config=agent_config)
# Find stock comparison agent
agents = agent.search_agents_by_keyword("stock comparison")
if not agents:
print("No stock agent found")
exit()
target = agents[0]
print(f"Found: {target['name']}")
print(f"Webhook: {target['httpWebhookUrl']}")
# Interactive loop
while True:
question = input("\nYou: ").strip()
if question.lower() == "exit":
break
# Create message
msg = AgentMessage(
content=question,
sender_id=agent.agent_id,
message_type="query",
sender_did=agent.identity_credential
)
# Send with automatic payment via x402
sync_url = target['httpWebhookUrl'].replace('/webhook', '/webhook/sync')
response = agent.x402_processor.post(sync_url, json=msg.to_dict(), timeout=60)
if response.status_code == 200:
print(f"\nAgent: {response.json()['response']}")
else:
print(f"Error: {response.status_code}")
```
## Supported AI Frameworks
The SDK supports multiple AI agent frameworks with a unified `invoke()` method:
### LangChain
```python
from zyndai_agent.agent import ZyndAIAgent
from langchain_classic.agents import AgentExecutor, create_tool_calling_agent
agent_executor = AgentExecutor(agent=..., tools=[...])
zynd_agent.set_langchain_agent(agent_executor)
# Use unified invoke
response = zynd_agent.invoke("Compare AAPL and GOOGL")
```
### LangGraph
```python
from langgraph.graph import StateGraph, MessagesState
graph = StateGraph(MessagesState)
# ... build graph ...
compiled = graph.compile()
zynd_agent.set_langgraph_agent(compiled)
response = zynd_agent.invoke("Analyze market trends")
```
### CrewAI
```python
from crewai import Agent, Task, Crew
crew = Crew(agents=[...], tasks=[...])
zynd_agent.set_crewai_agent(crew)
response = zynd_agent.invoke("Research stock performance")
```
### PydanticAI
```python
from pydantic_ai import Agent
pydantic_agent = Agent(model="openai:gpt-4")
zynd_agent.set_pydantic_ai_agent(pydantic_agent)
response = zynd_agent.invoke("What is the current price of TSLA?")
```
### Custom Agent
```python
def my_agent(input_text: str) -> str:
return f"Processed: {input_text}"
zynd_agent.set_custom_agent(my_agent)
response = zynd_agent.invoke("Hello")
```
See `examples/http/` for complete examples of each framework.
## Configuration Options
| Parameter | Type | Description |
|-----------|------|-------------|
| `name` | `str` | Agent display name |
| `description` | `str` | Agent description (used for discovery) |
| `capabilities` | `dict` | Agent capabilities for semantic search |
| `webhook_host` | `str` | Host to bind webhook server (default: "0.0.0.0") |
| `webhook_port` | `int` | Port for webhook server (default: 5000) |
| `webhook_url` | `str` | Public URL if behind NAT (auto-generated if None) |
| `api_key` | `str` | ZyndAI API key (required) |
| `registry_url` | `str` | Registry URL (default: "https://registry.zynd.ai") |
| `price` | `str` | Price per request for x402 (e.g., "$0.01") |
| `config_dir` | `str` | Custom config directory for agent identity |
## Webhook Endpoints
When your agent starts, these endpoints are available:
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/webhook` | POST | Async message reception (fire-and-forget) |
| `/webhook/sync` | POST | Sync message with response (30s timeout) |
| `/health` | GET | Health check |
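A quick way to confirm an agent is up is to poll its `/health` endpoint. A minimal sketch, assuming the endpoint answers with HTTP 200 when healthy:

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_healthy(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if the agent's /health endpoint answers with HTTP 200."""
    try:
        with urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

print(is_healthy("http://localhost:5000"))
```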
## Multiple Agents
Run multiple agents by using different `config_dir` values:
```python
# Agent 1
agent1_config = AgentConfig(
name="Agent 1",
webhook_port=5001,
config_dir=".agent-1",
...
)
# Agent 2
agent2_config = AgentConfig(
name="Agent 2",
webhook_port=5002,
config_dir=".agent-2",
...
)
```
## Legacy MQTT Support
The SDK also supports MQTT communication for backward compatibility. Configure with `mqtt_broker_url` instead of `webhook_port`. See the `examples/mqtt/` directory for MQTT examples.
## Network Endpoints
- **Registry**: `https://registry.zynd.ai`
- **Dashboard**: `https://dashboard.zynd.ai`
## Support
- **GitHub Issues**: [Report bugs](https://github.com/Zynd-AI-Network/zyndai-agent/issues)
- **Documentation**: [docs.zynd.ai](https://docs.zynd.ai)
- **Email**: zyndainetwork@gmail.com
- **Twitter**: [@ZyndAI](https://x.com/ZyndAI)
## License
MIT License - see [LICENSE](LICENSE) for details.
---
**Get started:** `pip install zyndai-agent`
| text/markdown | null | Swapnil Shinde <swapnilshinde9382@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"base58>=2.1.1",
"cryptography>=46.0.5",
"eth-account>=0.13.7",
"flask>=3.1.3",
"langchain>=1.2.10",
"paho-mqtt>=2.1.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"requests>=2.31.0",
"x402[evm,flask,requests]>=2.1.0",
"pytest>=9.0.0; extra == \"dev\""
] | [] | [] | [] | [] | uv/0.9.8 | 2026-02-21T05:58:55.816758 | zyndai_agent-0.2.3.tar.gz | 40,589 | 06/bd/3bead29f374bc6bc61ca11322b4ffa6c064106ca1a15e977626d4caff394/zyndai_agent-0.2.3.tar.gz | source | sdist | null | false | 637f81f5762708636fd54b8f6c6a2298 | bdc1236b49560e5939a24280e6760160a4f2cd24fc285cc3f8d82577df543776 | 06bd3bead29f374bc6bc61ca11322b4ffa6c064106ca1a15e977626d4caff394 | null | [] | 247 |
2.1 | RecTools | 0.18.0 | An easy-to-use Python library for building recommendation systems | # RecTools
[](https://pypi.org/project/rectools)
[](https://pypi.org/project/rectools)
[](https://rectools.readthedocs.io)
[](https://github.com/MobileTeleSystems/RecTools/blob/main/LICENSE)
[](https://app.codecov.io/gh/MobileTeleSystems/RecTools)
[](https://github.com/MobileTeleSystems/RecTools/actions/workflows/test.yml?query=branch%3Amain++)
[](https://github.com/MobileTeleSystems/RecTools/graphs/contributors)
[](https://pepy.tech/project/rectools)
[](https://t.me/RecTools_Support)
<p align="center">
<a href="https://rectools.readthedocs.io/en/stable/">Documentation</a> |
<a href="https://github.com/MobileTeleSystems/RecTools/tree/main/examples">Examples</a> |
<a href="https://github.com/MobileTeleSystems/RecTools/tree/main/examples/tutorials">Tutorials</a> |
<a href="https://github.com/MobileTeleSystems/RecTools/blob/main/CONTRIBUTING.rst">Contributing</a> |
<a href="https://github.com/MobileTeleSystems/RecTools/releases">Releases</a> |
<a href="https://github.com/orgs/MobileTeleSystems/projects/1">Developers Board</a>
</p>
RecTools is an easy-to-use Python library which makes the process of building recommender systems easier and
faster than ever before.
## ✨ Highlights: HSTU model released! ✨
**The HSTU architecture from ["Actions Speak Louder than Words..."](https://arxiv.org/abs/2402.17152) is now available in RecTools as `HSTUModel`:**
- Fully compatible with our `fit` / `recommend` paradigm and requires NO special data processing
- Supports context-aware recommendations when Relative Time Bias is enabled
- Supports all loss options, item embedding options, category features utilization and other common modular functionality of RecTools transformer models
- In [HSTU tutorial](examples/tutorials/transformers_HSTU_tutorial.ipynb) we show that original metrics reported for HSTU on public Movielens datasets may actually be **underestimated**
- Configurable, customizable, callback-friendly, checkpoints-included, logs-out-of-the-box, custom-validation-ready, multi-gpu-compatible! See [Transformers Advanced Training User Guide](examples/tutorials/transformers_advanced_training_guide.ipynb) and [Transformers Customization Guide](examples/tutorials/transformers_customization_guide.ipynb)
## ✨ Highlights: RecTools framework at ACM RecSys'25 ✨
**RecTools implementations are featured in ACM RecSys'25: ["eSASRec: Enhancing Transformer-based Recommendations in a Modular Fashion"](https://www.arxiv.org/abs/2508.06450):**
- The article presents a systematic benchmark of Transformer modifications using RecTools models. It offers a detailed evaluation of training objectives, Transformer architectures, loss functions, and negative sampling strategies in realistic, production-like settings
- We introduce a new SOTA baseline, **eSASRec**, which combines SASRec’s training objective with LiGR Transformer layers and Sampled Softmax loss, forming a simple yet powerful recipe
- **eSASRec** shows 23% boost over SOTA models, such as ActionPiece, on academic benchmarks
- [LiGR](https://arxiv.org/pdf/2502.03417) Transformer layers used in **eSASRec** are now in RecTools
Please note that we always compare the quality of our implementations to academic papers results. [Public benchmarks for transformer models SASRec and BERT4Rec](https://github.com/blondered/bert4rec_repro?tab=readme-ov-file#rectools-transformers-benchmark-results) show that RecTools implementations achieve highest scores on multiple datasets compared to other published results.
## Get started
Prepare data with
```shell
wget https://files.grouplens.org/datasets/movielens/ml-1m.zip
unzip ml-1m.zip
```
```python
import pandas as pd
from rectools import Columns
from rectools.dataset import Dataset
from rectools.models import SASRecModel
# Read the data
ratings = pd.read_csv(
"ml-1m/ratings.dat",
sep="::",
    engine="python",  # "::" is a multi-character separator
header=None,
names=[Columns.User, Columns.Item, Columns.Weight, Columns.Datetime],
)
# Create dataset
dataset = Dataset.construct(ratings)
# Fit model
model = SASRecModel(n_factors=64, epochs=100, loss="sampled_softmax")
model.fit(dataset)
# Make recommendations
recos = model.recommend(
users=ratings[Columns.User].unique(),
dataset=dataset,
k=10,
filter_viewed=True,
)
```
## Installation
RecTools is on PyPI, so you can use `pip` to install it.
```
pip install rectools
```
The default version doesn't contain all the dependencies, because some of them are needed only for specific functionality. Available user extensions are the following:
- `lightfm`: adds wrapper for LightFM model,
- `torch`: adds models based on neural nets,
- `visuals`: adds visualization tools,
- `nmslib`: adds fast ANN recommenders,
- `catboost`: adds CatBoost as a reranker for `CandidateRankingModel`.
Install extension:
```
pip install rectools[extension-name]
```
Install all extensions:
```
pip install rectools[all]
```
## Recommender Models
The table below lists recommender models that are available in RecTools.
| Model | Type | Description (🎏 for user/item features, 🔆 for warm inference, ❄️ for cold inference support) | Tutorials & Benchmarks |
|---------------------|----|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------|
| HSTU | Neural Network | `rectools.models.HSTUModel` - Sequential model with unidirectional pointwise aggregated attention mechanism, incorporating relative attention bias from positional and temporal information, introduced in ["Actions Speak Louder than Words..."](https://arxiv.org/pdf/2402.17152), combined with "Shifted Sequence" training objective as in original public benchmarks<br>🎏 | 📓 [HSTU Theory & Practice](examples/tutorials/transformers_HSTU_tutorial.ipynb) <br> 📕 [Transformers Theory & Practice](examples/tutorials/transformers_tutorial.ipynb)<br> 📗 [Advanced training guide](examples/tutorials/transformers_advanced_training_guide.ipynb) <br> 🚀 [Top performance on public datasets](examples/tutorials/transformers_HSTU_tutorial.ipynb)
| SASRec | Neural Network | `rectools.models.SASRecModel` - Transformer-based sequential model with unidirectional attention mechanism and "Shifted Sequence" training objective. <br> For eSASRec variant specify `rectools.models.nn.transformers.ligr.LiGRLayers` for `transformer_layers_type` and `sampled_softmax` for `loss` <br>🎏 | 📕 [Transformers Theory & Practice](examples/tutorials/transformers_tutorial.ipynb)<br> 📗 [Advanced training guide](examples/tutorials/transformers_advanced_training_guide.ipynb) <br> 📘 [Customization guide](examples/tutorials/transformers_customization_guide.ipynb) <br> 🚀 [Top performance on public benchmarks](https://github.com/blondered/bert4rec_repro?tab=readme-ov-file#rectools-transformers-benchmark-results) |
| BERT4Rec | Neural Network | `rectools.models.BERT4RecModel` - Transformer-based sequential model with bidirectional attention mechanism and "MLM" (masked item) training objective <br>🎏 | 📕 [Transformers Theory & Practice](examples/tutorials/transformers_tutorial.ipynb)<br> 📗 [Advanced training guide](examples/tutorials/transformers_advanced_training_guide.ipynb) <br> 📘 [Customization guide](examples/tutorials/transformers_customization_guide.ipynb) <br> 🚀 [Top performance on public benchmarks](https://github.com/blondered/bert4rec_repro?tab=readme-ov-file#rectools-transformers-benchmark-results) |
| [implicit](https://github.com/benfred/implicit) ALS Wrapper | Matrix Factorization | `rectools.models.ImplicitALSWrapperModel` - Alternating Least Squares Matrix Factorization algorithm for implicit feedback. <br>🎏 | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#Implicit-ALS)<br> 🚀 [50% boost to metrics with user & item features](examples/5_benchmark_iALS_with_features.ipynb) |
| [implicit](https://github.com/benfred/implicit) BPR-MF Wrapper | Matrix Factorization | `rectools.models.ImplicitBPRWrapperModel` - Bayesian Personalized Ranking Matrix Factorization algorithm. | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#Bayesian-Personalized-Ranking-Matrix-Factorization-(BPR-MF)) |
| [implicit](https://github.com/benfred/implicit) ItemKNN Wrapper | Nearest Neighbours | `rectools.models.ImplicitItemKNNWrapperModel` - Algorithm that calculates item-item similarity matrix using distances between item vectors in user-item interactions matrix | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#ItemKNN) |
| [LightFM](https://github.com/lyst/lightfm) Wrapper | Matrix Factorization | `rectools.models.LightFMWrapperModel` - Hybrid matrix factorization algorithm which utilises user and item features and supports a variety of losses.<br>🎏 🔆 ❄️ | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#LightFM)<br>🚀 [10-25 times faster inference with RecTools](examples/6_benchmark_lightfm_inference.ipynb)|
| EASE | Linear Autoencoder | `rectools.models.EASEModel` - Embarrassingly Shallow Autoencoders implementation that explicitly calculates dense item-item similarity matrix | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#EASE) |
| PureSVD | Matrix Factorization | `rectools.models.PureSVDModel` - Truncated Singular Value Decomposition of user-item interactions matrix | 📙 [Theory & Practice](https://rectools.readthedocs.io/en/latest/examples/tutorials/baselines_extended_tutorial.html#PureSVD) |
| DSSM | Neural Network | `rectools.models.DSSMModel` - Two-tower neural model that learns user and item embeddings, utilising their explicit features and training with a triplet loss.<br>🎏 🔆 | - |
| Popular | Heuristic | `rectools.models.PopularModel` - Classic baseline which computes popularity of items and also accepts params like time window and type of popularity computation.<br>❄️ | - |
| Popular in Category | Heuristic | `rectools.models.PopularInCategoryModel` - Model that computes popularity within each category and applies a mixing strategy to increase Diversity.<br>❄️ | - |
| Random | Heuristic | `rectools.models.RandomModel` - Simple random algorithm useful to benchmark Novelty, Coverage, etc.<br>❄️ | - |
- All of the models follow the same interface. **No exceptions.**
- No need to manually create sparse matrices, torch dataloaders or id mappings. Preparing data for models is as simple as `dataset = Dataset.construct(interactions_df)`
- Fitting any model is as simple as `model.fit(dataset)`
- For getting recommendations `filter_viewed` and `items_to_recommend` options are available
- For item-to-item recommendations use `recommend_to_items` method
- For feeding user/item features to model just specify dataframes when constructing `Dataset`. [Check our example](examples/4_dataset_with_features.ipynb)
- For warm / cold inference just provide all required ids in `users` or `target_items` parameters of `recommend` or `recommend_to_items` methods and make sure you have features in the dataset for warm users/items. **Nothing else is needed, everything works out of the box.**
- Our models can be initialized from configs and have useful methods like `get_config`, `get_params`, `save`, `load`. Common functions `model_from_config`, `model_from_params` and `load_model` are available. [Check our example](examples/9_model_configs_and_saving.ipynb)
## Extended validation tools
### `calc_metrics` for classification, ranking, "beyond-accuracy", DQ, popularity bias and between-model metrics
[User guide](https://github.com/MobileTeleSystems/RecTools/blob/main/examples/3_metrics.ipynb) | [Documentation](https://rectools.readthedocs.io/en/stable/features.html#metrics)
### `DebiasConfig` for debiased metrics calculation
[User guide](https://github.com/MobileTeleSystems/RecTools/blob/main/examples/8_debiased_metrics.ipynb) | [Documentation](https://rectools.readthedocs.io/en/stable/api/rectools.metrics.debias.DebiasConfig.html)
### `cross_validate` for model metrics comparison
[User guide](https://github.com/MobileTeleSystems/RecTools/blob/main/examples/2_cross_validation.ipynb)
### `VisualApp` for model recommendations comparison
<img src="https://recsysart.ru/images/visual_app.gif" width=500>
[Example](https://github.com/MobileTeleSystems/RecTools/blob/main/examples/7_visualization.ipynb) | [Demo](https://recsysart.ru/voila/) | [Documentation](https://rectools.readthedocs.io/en/stable/api/rectools.visuals.visual_app.VisualApp.html)
### `MetricsApp` for metrics trade-off analysis
<img src="https://recsysart.ru/images/metrics_app.gif" width=600>
[Example](https://github.com/MobileTeleSystems/RecTools/blob/main/examples/2_cross_validation.ipynb) |
[Documentation](https://rectools.readthedocs.io/en/stable/api/rectools.visuals.metrics_app.MetricsApp.html)
## Contribution
[Contributing guide](CONTRIBUTING.rst)
To install all requirements
- you must have `python3` and `poetry` installed
- make sure you have no active virtual environments (deactivate conda `base` if applicable)
- run
```
make install
```
For autoformatting run
```
make format
```
For linters check run
```
make lint
```
For tests run
```
make test
```
For coverage run
```
make coverage
```
To remove virtual environment run
```
make clean
```
## RecTools Team
- [Emiliy Feldman](https://github.com/feldlime) [Maintainer]
- [Daria Tikhonovich](https://github.com/blondered) [Maintainer]
- [Andrey Semenov](https://github.com/In48semenov)
- [Mike Sokolov](https://github.com/mikesokolovv)
- [Maya Spirina](https://github.com/spirinamayya)
- [Grigoriy Gusarov](https://github.com/Gooogr)
- [Aki Ariga](https://github.com/chezou)
- [Nikolay Undalov](https://github.com/nsundalov)
- [Aleksey Kuzin](https://github.com/teodor-r)
Previous contributors: [Ildar Safilo](https://github.com/irsafilo) [ex-Maintainer], [Daniil Potapov](https://github.com/sharthZ23) [ex-Maintainer], [Alexander Butenko](https://github.com/iomallach), [Igor Belkov](https://github.com/OzmundSedler), [Artem Senin](https://github.com/artemseninhse), [Mikhail Khasykov](https://github.com/mkhasykov), [Julia Karamnova](https://github.com/JuliaKup), [Maxim Lukin](https://github.com/groundmax), [Yuri Ulianov](https://github.com/yukeeul), [Egor Kratkov](https://github.com/jegorus), [Azat Sibagatulin](https://github.com/azatnv), [Vadim Vetrov](https://github.com/Waujito)
| text/markdown | Emiliy Feldman | feldlime@yandex.ru | Emiliy Feldman | feldlime@yandex.ru | Apache-2.0 | recsys, recommendation systems, machine learning, AI, personalization | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: Unix",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://github.com/MobileTeleSystems/RecTools | null | <3.14,>=3.9 | [] | [] | [] | [
"attrs<24.0.0,>=19.1.0",
"catboost<2.0.0,>=1.1.1; extra == \"catboost\" or extra == \"all\"",
"cupy-cuda12x<14.0.0,>=13.3.0; (sys_platform != \"darwin\" and python_version < \"3.13\") and (extra == \"cupy\" or extra == \"all\")",
"cupy-cuda12x<14.0.0,>=13.4.0; (sys_platform != \"darwin\" and python_version >= \"3.13\") and (extra == \"cupy\" or extra == \"all\")",
"fastrlock<0.9.0,>=0.8.3; sys_platform != \"darwin\"",
"implicit<0.8.0,>=0.7.1; python_version < \"3.10\"",
"ipywidgets<8.2,>=7.7; extra == \"visuals\" or extra == \"all\"",
"nbformat>=4.2.0; extra == \"visuals\" or extra == \"all\"",
"nmslib<3.0.0,>=2.0.4; python_version < \"3.11\" and (extra == \"nmslib\" or extra == \"all\")",
"nmslib-metabrainz<3.0.0,>=2.1.3; (python_version >= \"3.11\" and python_version < \"3.13\") and (extra == \"nmslib\" or extra == \"all\")",
"numpy<2.0.0,>=1.22; python_version < \"3.12\"",
"numpy<2.0.0,>=1.26; python_version == \"3.12\"",
"numpy<3.0.0,>=2.1.0; python_version >= \"3.13\"",
"pandas<3.0.0,>=1.5.0; python_version < \"3.13\"",
"pandas<3.0.0,>=2.2.3; python_version >= \"3.13\"",
"plotly<6.0.0,>=5.22.0; extra == \"visuals\" or extra == \"all\"",
"pm-implicit<0.8.0,>=0.7.3; python_version >= \"3.10\"",
"pydantic<3.0.0,>=2.8.2",
"pydantic-core<3.0.0,>=2.20.1",
"pytorch-lightning<=2.5.2,>=1.6.0; python_version < \"3.13\" and (extra == \"torch\" or extra == \"all\")",
"pytorch-lightning<=2.5.2,>=2.5.1; python_version >= \"3.13\" and (extra == \"torch\" or extra == \"all\")",
"rectools-lightfm<2.0.0,>=1.17.3; extra == \"lightfm\" or extra == \"all\"",
"scipy<1.13,>=1.10.1; python_version < \"3.10\"",
"scipy<2.0.0,>=1.14.1; python_version >= \"3.10\"",
"torch<2.3.0,>=1.6.0; (sys_platform == \"darwin\" and platform_machine == \"x86_64\" and python_version < \"3.13\") and (extra == \"torch\" or extra == \"all\")",
"torch<3.0.0,>=1.6.0; python_version < \"3.13\" and (extra == \"torch\" or extra == \"all\")",
"torch<3.0.0,>=2.6.0; python_version >= \"3.13\" and (extra == \"torch\" or extra == \"all\")",
"tqdm<5.0.0,>=4.27.0",
"typeguard<5.0.0,>=4.1.0",
"typing-extensions<5.0.0,>=4.12.2"
] | [] | [] | [] | [
"Documentation, https://rectools.readthedocs.io",
"Repository, https://github.com/MobileTeleSystems/RecTools"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:58:45.505747 | rectools-0.18.0.tar.gz | 160,747 | 1f/7a/2c2f1148790a9260dbe8fcfc00973df88fb86e496d4b2e32ebbe265612e6/rectools-0.18.0.tar.gz | source | sdist | null | false | 50441d2ef9d39ce808a14af4a851bbbe | 509d7cea0231dc5d30ccbee90303868cd0a00cef80c3063211424c9f7e6de386 | 1f7a2c2f1148790a9260dbe8fcfc00973df88fb86e496d4b2e32ebbe265612e6 | null | [] | 0 |
2.4 | know-cli | 0.7.5 | Context Intelligence for AI Coding Agents — smart, token-budgeted code context | # know — 3x Fewer Tokens for AI Coding Agents
> Your AI agent wastes tokens reading code it doesn't need. **know** gives it exactly what it needs, in 3 tiers.
[](https://pypi.org/project/know-cli/)
[](https://pypi.org/project/know-cli/)
[](LICENSE)
---
## The Problem
AI agents dump entire files into context. Every `grep` match pulls in thousands of tokens of irrelevant code. Repeated queries re-read the same functions.
**Result:** Slow. Expensive. Your agent burns through token budget reading imports and boilerplate it doesn't need.
## The Solution — Map, Context, Deep
**know** is a 3-tier context engine. Agents start broad and zoom in, paying only for what they need.
| Tier | Command | Tokens/result | Use case |
|------|---------|---------------|----------|
| **Map** | `know map "query"` | ~50 | Orient: what exists? |
| **Context** | `know context "query" --session S` | ~300-500 | Investigate: relevant code bodies |
| **Deep** | `know deep "function_name"` | ~1500 | Surgical: function + callers + callees |
Session dedup means the second query never re-sends code from the first.
---
## Benchmarks
**farfield** (764 files, 2487 functions, production TypeScript+Python monorepo)
### Token efficiency: 3-tier vs single-context vs Grep+Read (8 scenarios)
| Scenario | Grep+Read | v0.6 context-only | v0.7 3-tier | v0.7 vs v0.6 |
|---|---|---|---|---|
| WebSocket handling | 106,257 | 5,976 | 3,549 | **-41%** |
| Auth & API keys | 145,690 | 4,063 | 3,569 | **-12%** |
| Model routing | 125,453 | 5,271 | 3,251 | **-38%** |
| Error handling | 45,205 | 7,517 | 4,226 | **-44%** |
| Database storage | 180,329 | 5,479 | 3,138 | **-43%** |
| Billing | 41,789 | 5,424 | 3,295 | **-39%** |
| LLM providers | 176,184 | 5,085 | 3,184 | **-37%** |
| Agent execution | 175,338 | 2,250 | 2,445 | +9% |
| **Total** | **996,245** | **41,065** | **26,657** | **-35%** |
v0.7 3-tier = `know map` → `know context --session` → `know deep`
- **37.4x** fewer tokens than grep+read
- **35%** fewer tokens than v0.6 context-only
### Live head-to-head agent benchmark
Two Claude Opus agents answered 3 identical questions about the farfield codebase. One used `know` CLI, the other used grep+read.
| Metric | Agent with `know` | Agent with Grep+Read |
|---|---|---|
| Tool calls | **28** | 40 |
| Total tokens | **98,593** | 107,060 |
| Duration | 215s | 206s |
| Answer quality | Equivalent | Equivalent |
**30% fewer tool calls. 8% fewer tokens. Same answer quality.**
---
## Quick Start
```bash
pip install know-cli
cd your-project
know init
```
### The 3-Tier Workflow
```bash
# 1. Orient — what functions exist for "billing"?
know map "billing"
# 2. Investigate — get ranked code bodies (with session tracking)
know --json context "billing subscription" --budget 4000 --session auto
# Returns session_id: "a1b2c3d4"
# 3. Go deep — one function + its callers/callees
know --json deep "check_cloud_access" --budget 3000
# Follow-up queries skip already-seen code
know --json context "payment processing" --budget 4000 --session a1b2c3d4
```
---
## Commands
### Map — Orient Before Reading
```bash
know map "billing subscription" # What exists?
know --json map "auth" --limit 30 # JSON for agents
know map "config" --type function # Filter by type
```
Returns signatures + first-line docstrings. No bodies. ~50 tokens per result.
### Context — Ranked Code Bodies
```bash
know context "fix the auth bug" --budget 8000
know --json context "query" --budget 4000 --session auto
echo "refactor config" | know context --budget 6000
```
Finds relevant functions across the codebase. Token-budgeted. Optionally deduplicates across queries with `--session`.
### Deep — Function + Dependencies
```bash
know deep "check_cloud_access" --budget 3000
know --json deep "BillingService.process_payment"
know --json deep "service.py:check_cloud_access"
```
Returns the function body + what it calls (callees) + what calls it (callers), all within budget. Handles ambiguous names, budget overflow, and missing call graphs.
### Memory — Cross-Session Knowledge
```bash
know remember "Auth uses JWT with Redis session store"
know recall "how does auth work?"
```
Memories are automatically included in `know context` results.
### All Commands
| Command | Description |
|---------|-------------|
| `know map "query"` | Lightweight signature search |
| `know context "query"` | Smart, budgeted code context |
| `know deep "name"` | Function + callers + callees |
| `know search "query"` | Semantic code search |
| `know remember "text"` | Store a memory |
| `know recall "query"` | Recall memories |
| `know signatures [file]` | Function/class signatures |
| `know related <file>` | Import deps and dependents |
| `know callers <function>` | What calls this function |
| `know callees <chunk>` | What this function calls |
| `know next-file "query"` | Best file for a query |
| `know graph <file>` | Import graph visualization |
| `know status` | Project health check |
| `know stats` | Usage statistics |
| `know diff --since "1w"` | Architectural changes over time |
| `know mcp serve` | Start MCP server |
| `know init` | Initialize know in project |
### Global Flags
| Flag | Description |
|------|-------------|
| `--json` | Machine-readable JSON output |
| `--quiet` | Minimal output |
| `--verbose` | Detailed output |
| `--time` | Show execution time |
---
## Works With
| Tool | Integration |
|------|-------------|
| **Claude Code** | Agent skill — Claude uses know automatically |
| **Claude Desktop** | MCP server: `know mcp serve` |
| **Cursor** | MCP server or CLI |
| **Any CLI agent** | Pipe-friendly: `know --json context "query"` |
---
## How It Works
```
Your Query → know context
├─ FTS5 Search (BM25F field weighting)
│ └─ Finds relevant functions/classes
├─ Ranking Pipeline
│ ├─ File category demotion (test/vendor/generated)
│ ├─ Import graph importance boost
│ ├─ Git recency boost
│ └─ File-path match boost
├─ Context Expansion
│ └─ Module imports, parent classes, adjacent chunks
├─ Session Dedup (optional)
│ └─ Skips chunks already returned in this session
├─ Knowledge Base
│ └─ Injects cross-session memories
└─ Token Budget Allocator
└─ 60% code | 15% imports | 15% summaries | 10% overview
```
All processing is **local**. No data leaves your machine.
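As an illustration of the final allocator step, here is a hypothetical sketch of the fixed-ratio split shown above (the function name and rounding policy are assumptions, not know's actual implementation):

```python
# Hypothetical sketch of the 60/15/15/10 token budget split described above.
def split_budget(total: int) -> dict:
    ratios = {"code": 0.60, "imports": 0.15, "summaries": 0.15, "overview": 0.10}
    alloc = {bucket: int(total * r) for bucket, r in ratios.items()}
    # Assign any rounding remainder to the code bucket so the full budget is used
    alloc["code"] += total - sum(alloc.values())
    return alloc

print(split_budget(4000))
# {'code': 2400, 'imports': 600, 'summaries': 600, 'overview': 400}
```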
---
## Installation
```bash
# Core (CLI + context engine + memory)
pip install know-cli
# With semantic search
pip install know-cli[search]
# With MCP server
pip install know-cli[mcp]
# Everything
pip install know-cli[search,mcp]
```
**Requirements:** Python 3.10+
---
## Configuration
```bash
know init # Creates .know/config.yaml
```
```yaml
project:
name: my-project
description: "A web application"
languages:
- python
include_paths:
- src/
exclude_paths:
- tests/fixtures/
```
## Architecture
```
.know/
config.yaml # Project configuration
daemon.db # SQLite database (chunks, memories, imports, sessions)
```
Single SQLite database with FTS5 and WAL mode. Background daemon for sub-100ms latency. Falls back to direct DB access when daemon is unavailable.
## Contributing
```bash
git clone https://github.com/sushilk1991/know-cli
cd know-cli
pip install -e ".[dev,search,mcp]"
pytest tests/ -v
```
## License
MIT
| text/markdown | null | Sushil Kumar <sushilk.1991@gmail.com> | null | null | null | ai, cli, codebase, coding-agent, context, embeddings, llm, mcp, tokens | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Documentation",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.1.0",
"fastembed>=0.3.0",
"numpy>=1.24.0",
"pathspec>=0.11.0",
"pyyaml>=6.0",
"rich>=13.0.0",
"tiktoken>=0.5.0",
"tree-sitter-c>=0.23.0",
"tree-sitter-go>=0.23.0",
"tree-sitter-java>=0.23.0",
"tree-sitter-javascript>=0.23.0",
"tree-sitter-python>=0.23.0",
"tree-sitter-ruby>=0.23.0",
"tree-sitter-rust>=0.23.0",
"tree-sitter-typescript>=0.23.0",
"tree-sitter>=0.24.0",
"watchdog>=3.0.0",
"xxhash>=3.0.0",
"anthropic>=0.8.0; extra == \"ai\"",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mcp>=1.0.0; extra == \"mcp\""
] | [] | [] | [] | [
"Homepage, https://github.com/sushilk1991/know-cli",
"Repository, https://github.com/sushilk1991/know-cli"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T05:58:36.562849 | know_cli-0.7.5.tar.gz | 208,608 | 33/a8/d4e4da373fa68134879e4bee400ab02b05aec0fd368f8356b8988531130e/know_cli-0.7.5.tar.gz | source | sdist | null | false | 532315519869585dc16f7bef5092a067 | 80e9f58a0551089f6b39a445cc6c14bf5c5f9aa4d10710b34271c021979a345d | 33a8d4e4da373fa68134879e4bee400ab02b05aec0fd368f8356b8988531130e | MIT | [] | 219 |
2.4 | remoterf-host | 0.1.24 | RemoteRF host-side control package | # RemoteRF Host (hostrf) — Linux Setup
## HostRF Installation
This guide was written and verified for Ubuntu Server/Desktop 24.04 LTS.
### 0) If Raspberry Pi
Raspberry Pi Imager → Install Ubuntu Server 24.04 LTS → Boot Raspberry Pi from SD card.
### 1) System Prerequisites (APT)
```bash
sudo apt update
sudo apt install -y curl ca-certificates bzip2 git build-essential
sudo apt install -y libusb-1.0-0 udev
```
Optional: confirm architecture:
```bash
uname -m
```
* `x86_64` → Intel/AMD
* `aarch64` → ARM64 (Raspberry Pi 64-bit, some servers)
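If you prefer not to match the architecture by hand, a small helper (an assumed convenience, not part of the official instructions) can derive the installer name from `uname -m`; the download itself still uses the curl commands below:

```shell
# Pick the Miniconda installer name that matches this machine's architecture.
ARCH="$(uname -m)"
case "$ARCH" in
  x86_64)  INSTALLER="Miniconda3-latest-Linux-x86_64.sh" ;;
  aarch64) INSTALLER="Miniconda3-latest-Linux-aarch64.sh" ;;
  *) echo "Unsupported architecture: $ARCH" >&2; exit 1 ;;
esac
echo "Selected: $INSTALLER"
# Then download it, e.g.:
# curl -fsSLO "https://repo.anaconda.com/miniconda/$INSTALLER"
```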
---
### 2) Install Miniconda
### 2.1 Download the installer
#### x86_64
```bash
cd /tmp
curl -fsSLO https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
```
#### ARM64 (aarch64)
```bash
cd /tmp
curl -fsSLO https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
```
### 2.2 Install (non-interactive, recommended)
#### x86_64
```bash
bash Miniconda3-latest-Linux-x86_64.sh -b -p "$HOME/miniconda3"
```
#### ARM64 (aarch64)
```bash
bash Miniconda3-latest-Linux-aarch64.sh -b -p "$HOME/miniconda3"
```
### 2.3 Enable conda in your current shell
```bash
source "$HOME/miniconda3/etc/profile.d/conda.sh"
conda --version
```
> If you want conda available automatically in new terminals:
>
> ```bash
> "$HOME/miniconda3/bin/conda" init bash
> source ~/.bashrc
> ```
### 2.4 Install **mamba** (default solver)
```bash
conda install -n base -c conda-forge -y mamba
mamba --version
```
You may need to accept the Anaconda Terms of Service first:
```bash
conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/main
conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/r
```
### 3) Create the Environment (using mamba, which is faster)
```bash
mamba create -n hostrf -y -c conda-forge -c defaults python=3.10 pip setuptools wheel grpcio protobuf python-dotenv numpy scipy libiio pylibiio libusb
conda activate hostrf
python -m pip install -U pip
python -m pip install pyadi-iio remoterf-host
```
```bash
sudo reboot now
```
## HostRF Config
Run the below for a comprehensive overview:
```bash
hostrf --help
```
To get hostrf up and running:
### 1) Point to a RemoteRF server
HostRF requires a RemoteRF server that is already set up. See RemoteRF-Server for additional details.
`hostrf` stores its config **repo-locally** under: `./.config/`
```bash
hostrf --config --addr <host:port>
# example:
hostrf -c -a 164.97.201.67:5000
hostrf -c -a --show
```
### 2) Set host parameters
Set hostname:
```bash
hostrf --config --host <hostname>
# example:
hostrf -c -h host_0
```
### 3) Connect devices (Adalm Pluto)
To connect plutos to the server:
```bash
iio_info -s
```
If the Pluto doesn't show up there, check whether it appears with elevated privileges:
```bash
sudo iio_info -s
```
If it only appears with `sudo`, fix the USB permissions by running the following, then reboot:
```bash
sudo groupadd -f plugdev
sudo usermod -aG plugdev "$USER"
sudo tee /etc/udev/rules.d/53-adi-usb.rules >/dev/null <<'EOF'
# Analog Devices USB hardware (e.g. ADALM-Pluto)
SUBSYSTEM=="usb", ATTR{idVendor}=="0456", MODE="0660", GROUP="plugdev"
EOF
sudo udevadm control --reload-rules
sudo udevadm trigger
sudo reboot now
```
After rebooting, run `iio_info -s` again and look for a line like `hw_serial: 104473`. Note this serial for each device.
Add the Pluto to the device list. Note that the `device_id` (an integer) must be **globally** unique:
```bash
hostrf --device --add --pluto <id>:<device_name>:<hw_serial>
# Example:
hostrf -d -a --pluto 10:hostrf_pluto_0:104473
```
Remove device:
```bash
hostrf -d --remove <id>
```
Show devices:
```bash
hostrf -d --show
```
Clear all device config:
```bash
hostrf -d --wipe
```
| text/markdown | Ethan Ge | null | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"grpcio",
"protobuf",
"numpy",
"prompt_toolkit",
"python-dotenv",
"grpcio-tools"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-21T05:58:22.694945 | remoterf_host-0.1.24.tar.gz | 29,640 | cb/30/d8f7ab0071ddce079b5ba4d163e451e98f5de0c0e061df8004fa37cd461d/remoterf_host-0.1.24.tar.gz | source | sdist | null | false | 4e8dab445c526c32241c972253992e3c | e888c5219ad871de35658759423459c88f13dfe2fda8594882660862921f6b19 | cb30d8f7ab0071ddce079b5ba4d163e451e98f5de0c0e061df8004fa37cd461d | null | [] | 208 |
2.1 | prefig | 0.5.9.dev20260221055812 | An authoring system for mathematical diagrams | # PreFigure
PreFigure is a Python package for authoring mathematical diagrams. Following the [PreTeXt](https://pretextbook.org/) paradigm, an author writes an XML description of a diagram that PreFigure converts into an image file suitable for including in a text document. By default, PreFigure will create an SVG image that can be included in, say, an HTML document. However, PreFigure prioritizes the creation of accessible diagrams so that annotations can be added that enable a screen reader to easily navigate the diagram. Tactile diagrams can also be created from the same XML source.
PreFigure diagrams can now be authored inside a PreTeXt document. More information, including detailed documentation, is available from the [PreFigure homepage](https://prefigure.org).
## Using PreFigure
You may author and compile PreFigure diagrams in any of the following environments:
1. You may use the [Prefigure Playground](https://davidaustinm.github.io/prefigure/) and download SVG files directly from the playground.
2. PreFigure is available in a [GitHub Codespace](https://github.com/davidaustinm/prefigure-codespace). This is a free, cloud-based platform that takes care of all the installation details and creates a fully configured working environment. Follow the instructions on that page to create your codespace and then get started authoring diagrams.
3. PreFigure may be installed locally as a Python package following the instructions in the **Local Installation** section below.
## Local Installation
PreFigure may be installed locally as a Python package in the usual way using `pip`. However, there are a few additional details that require your attention.
1. PreFigure assumes Python version 3.8.5 or higher. You may check your local Python version with one of the two commands below
```
python -V
```
```
python3 -V
```
2. You are encouraged to install `liblouis`, which enables the creation of non-mathematical braille labels in tactile diagrams. PreFigure can still create diagrams without this package installed though you will see a non-fatal warning message when you compile a tactile diagram and any requested labels will not appear in the diagram.
On a Linux machine, use your package manager to install `python3-louis`. Ubuntu users can use
```
apt install python3-louis
```
while on a Mac, you will want
```
brew install liblouis
```
Alternatively, you can install `liblouis` following [these instructions](https://liblouis.io/downloads/).
Within a Python interpreter, you should then be able to `import louis` without an error.
3. You are encouraged to install an [additional library](https://pycairo.readthedocs.io/en/latest/getting_started.html) to support the `pycairo` package. This may not be essential for your local machine, but there is no harm in performing this step. The `pycairo` package is needed to produce labels having plain text (rather than mathematics). If you are not able to install `pycairo`, you will still be able to build PreFigure diagrams, but any labels with plain text will not appear.
4. You are now ready to install PreFigure with
```
pip install prefig[pycairo]
```
If this fails, it is due to the `pycairo` dependency so you can instead install PreFigure without `pycairo` using
```
pip install prefig
```
5. You will need a local installation of `node` and `npm` to produce mathematical labels. (The `node` installation includes `npm`.) This is a simple process, but you should search for the instructions for your operating system. On an Ubuntu machine, it's as easy as
```
apt install nodejs
```
6. For building PDF versions of diagrams, you will need to install `rsvg-convert`, which PreFigure uses to convert SVGs into PDFs. On Ubuntu, you can say
```
apt install librsvg2-bin
```
while Mac users can use
```
brew install librsvg
```
7. Once installed, the command `prefig init` will install MathJax and the Braille29 font needed for tactile diagrams. If you do not perform this step, MathJax will be automatically installed when you first build a diagram with mathematical labels.
## Usage
Once PreFigure is installed, help is available with
```
prefig --help
```
or, say,
```
prefig build --help
```
Details of a requested operation may be obtained using the `-v` and `-vv` flags. For instance,
```
prefig -vv build foo.xml
```
will print debugging information to the terminal.
Here is a summary of PreFigure commands.
1. PreFigure source files can be compiled into SVG images using one of the following two commands, with the first command creating a regular SVG file while the second produces a tactile version of the diagram.
```
prefig build foo.xml
```
```
prefig build -f tactile foo.xml
```
By default, the output appears in `output/foo.svg` and `output/foo.xml`, where the XML output contains the annotations used by a screen reader. If PreFigure is called from within a PreTeXt document, then the annotations will appear in `foo-annotations.xml`.
2. To view the resulting diagram, use either
```
prefig view foo
```
```
prefig view -i foo
```
The first command will open the diagram in a browser using the `diagcess` library, which enables a reader to explore the annotations interactively. The second command ignores the annotations and simply opens the SVG diagram in a browser.
3. Once a diagram has been compiled, you may create a PDF using
```
prefig pdf foo
```
Adding the `-b` switch will build the diagram from PreFigure source before the PDF is formed.
4. Similarly,
```
prefig png foo
```
creates a PNG. Add the `-b` switch to `build` the diagram first.
5. To validate PreFigure source against the PreFigure XML schema, use
```
prefig validate foo.xml
```
You may wish to perform the following steps to set up your authoring environment (these are automatically performed in a codespace):
1. To initialize your local installation, use
```
prefig init
```
which will use `npm` to install some MathJax modules. It will also install the Braille29 font needed for tactile diagrams. If the MathJax modules are not installed when you attempt to build a diagram, PreFigure will attempt to install them when you build your first diagram.
2. You may install a set of examples for exploration in the current directory using
```
prefig examples
```
3. You may initialize a new PreFigure project in the current directory using
```
prefig new
```
This copies the `diagcess` tools and a default publication file into the current directory and creates a `source` directory in which to author diagrams.
## Acknowledgements
[Volker Sorge](https://www.birmingham.ac.uk/staff/profiles/computer-science/academic-staff/sorge-volker) has provided crucial support for this project as well as access to the diagcess library for navigating an image with a screen reader.
The MathJax module `mj-sre-page.js` included with this distribution was created by [Davide Cervone](https://www.math.union.edu/~dpvc/) and Volker Sorge.
Thanks also to the PreTeXt community, and especially [Rob Beezer](http://buzzard.ups.edu/), for support and inspiration. This project was developed with support from the [UTMOST Project](https://utmost.aimath.org/).
## License
PreFigure is distributed with a GPL license.
| text/markdown | David Austin | david.austin.m@gmail.com | null | null | GPL-3.0-or-later | null | [
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://prefigure.org | null | <4.0,>=3.10 | [] | [] | [] | [
"shapely<3.0.0,>=2.0.6",
"lxml<7,>=6",
"click<9.0.0,>=8.1.7",
"networkx<3.0,>=2.5",
"scipy<2.0,>=1.8; python_version < \"3.13\"",
"scipy<2.0.0,>=1.14.1; python_version >= \"3.13\"",
"numpy<2.0,>=1.26; python_version < \"3.13\"",
"numpy<3.0.0,>=2.1.0; python_version >= \"3.13\"",
"pycairo<2.0,>=1.20; extra == \"text\"",
"click-log<0.5.0,>=0.4.0"
] | [] | [] | [] | [
"Repository, https://github.com/davidaustinm/prefigure"
] | poetry/1.8.3 CPython/3.10.19 Linux/6.11.0-1018-azure | 2026-02-21T05:58:15.217001 | prefig-0.5.9.dev20260221055812.tar.gz | 134,863 | e2/55/768d22ffd83fb6ef4a0ce96c8ab574fea1aa1e9eeaf97fc3339fb020024e/prefig-0.5.9.dev20260221055812.tar.gz | source | sdist | null | false | 75832a116de4167289ba93e76d6cd857 | 00b52b94254ccb8e12d30a791f12b5aea3105f14b1add44f0cd4b86839777f5d | e255768d22ffd83fb6ef4a0ce96c8ab574fea1aa1e9eeaf97fc3339fb020024e | null | [] | 186 |
2.4 | kaggle-notebook-deploy | 0.1.2 | A CLI tool to deploy Kaggle Notebooks with git push via GitHub Actions | # kaggle-notebook-deploy
A CLI tool to deploy Kaggle Notebooks by simply running `git push`.
Manage your Kaggle Notebook code on GitHub and set up an automated deployment workflow via GitHub Actions.
## Workflow
```
Edit notebook → git push → GitHub Actions → Upload to Kaggle → Submit in browser
```
## Installation
```bash
pip install kaggle-notebook-deploy
```
## Quick Start
### 1. Set up repository
```bash
# Generate GitHub Actions workflow and .gitignore
kaggle-notebook-deploy init-repo
```
Generated files:
- `.github/workflows/kaggle-push.yml` — workflow for pushing to Kaggle
- `scripts/setup-credentials.sh` — credential setup script
- `.gitignore` entries for Kaggle-related files
### 2. Set GitHub Secrets
```bash
gh secret set KAGGLE_USERNAME
gh secret set KAGGLE_KEY
```
### 3. Create a competition directory
```bash
# Basic
kaggle-notebook-deploy init titanic
# GPU-enabled, public notebook
kaggle-notebook-deploy init march-machine-learning-mania-2026 --gpu --public
```
Generated files:
- `<slug>/kernel-metadata.json` — Kaggle kernel metadata
- `<slug>/<slug>-baseline.ipynb` — baseline notebook
### 4. Develop and deploy
```bash
# Edit the notebook
vim titanic/titanic-baseline.ipynb
# Validate
kaggle-notebook-deploy validate titanic
# Push directly from local
kaggle-notebook-deploy push titanic
# Or via GitHub Actions
git add titanic/ && git commit -m "Add titanic baseline" && git push
gh workflow run kaggle-push.yml -f notebook_dir=titanic
```
## Commands
### `kaggle-notebook-deploy init <competition-slug>`
Generate a competition directory from a template.
| Option | Description |
|---|---|
| `-u, --username` | Kaggle username (default: read from `~/.kaggle/kaggle.json`) |
| `-t, --title` | Notebook title (default: auto-generated from slug) |
| `--gpu` | Enable GPU |
| `--internet` | Enable internet (not recommended for code competitions) |
| `--public` | Create as public notebook |
### `kaggle-notebook-deploy init-repo`
Set up GitHub Actions workflow and related files.
| Option | Description |
|---|---|
| `-f, --force` | Overwrite existing files |
### `kaggle-notebook-deploy validate [directory]`
Validate `kernel-metadata.json`.
Checks:
- Required fields exist
- `id` format is `username/slug`
- `code_file` exists on disk
- Valid values for `language` and `kernel_type`
- Consistency between `enable_internet` and `competition_sources`
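For reference, a `kernel-metadata.json` that would pass these checks might look like the following sketch (illustrative values; field names follow the Kaggle kernel metadata format, which encodes booleans as strings):
```json
{
  "id": "your-username/titanic-baseline",
  "title": "Titanic Baseline",
  "code_file": "titanic-baseline.ipynb",
  "language": "python",
  "kernel_type": "notebook",
  "is_private": "true",
  "enable_gpu": "false",
  "enable_internet": "false",
  "competition_sources": ["titanic"],
  "dataset_sources": [],
  "kernel_sources": []
}
```
Note the `id` uses the `username/slug` form and `enable_internet` is `"false"`, consistent with the code-competition constraints described below.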
### `kaggle-notebook-deploy push [directory]`
Push a notebook to Kaggle (internally runs `kaggle kernels push`).
| Option | Description |
|---|---|
| `--skip-validate` | Skip validation |
| `--dry-run` | Print the command without executing |
## Notes
### Code competition constraints
- `enable_internet: false` is required (setting it to `true` disables submission)
- API-based submit is not available — browser submit is required
- `kaggle kernels push` resets Kaggle Secrets bindings; re-attach W&B keys etc. via the web UI after each push
### Data path differences
| Source | Mount path |
|---|---|
| `competition_sources` | `/kaggle/input/competitions/<slug>/` |
| `dataset_sources` | `/kaggle/input/<slug>/` |
Note that `competition_sources` data is mounted under **`competitions/`** subdirectory, not directly under `/kaggle/input/`. Hardcoding `/kaggle/input/<slug>/` will cause `FileNotFoundError`.
**Recommended pattern** — auto-detect the data directory in your notebook:
```python
from pathlib import Path

INPUT_ROOT = Path('/kaggle/input')

# Find the actual data location instead of hardcoding the path
DATA_DIR = None
for p in INPUT_ROOT.rglob('your-expected-file.csv'):
    DATA_DIR = p.parent
    break

if DATA_DIR is None:
    # Print the directory structure for debugging
    for p in sorted(INPUT_ROOT.iterdir()):
        print(f'{p.name}/')
        if p.is_dir():
            for sub in sorted(p.iterdir())[:5]:
                print(f'  {sub.name}')
    raise FileNotFoundError('Data directory not found.')
```
### NaN handling for missing feature columns
When building features, some columns may be entirely `NaN` (e.g., a ranking system not available for Women's tournaments). `fillna(median)` does not help when the median itself is `NaN`. Always chain a fallback:
```python
X = df[feat_cols].fillna(df[feat_cols].median()).fillna(0)
```
## License
MIT
| text/markdown | yasunorim | null | null | null | null | ci-cd, deploy, github-actions, kaggle, notebook | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.0",
"kaggle>=1.6.0",
"pyyaml>=6.0",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/yasumorishima/kaggle-notebook-deploy",
"Repository, https://github.com/yasumorishima/kaggle-notebook-deploy"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:58:08.628723 | kaggle_notebook_deploy-0.1.2.tar.gz | 14,418 | 4d/3d/c8c61fb9c93f65200c94e6a4d5113859f384017f8136e47a98196e38559a/kaggle_notebook_deploy-0.1.2.tar.gz | source | sdist | null | false | 31b7dd8fdb42d4881bfb9da488cdbad5 | c93149c29890c8627c0c76461051e993334fe970882d63e11c3bd2db1cc9c16b | 4d3dc8c61fb9c93f65200c94e6a4d5113859f384017f8136e47a98196e38559a | MIT | [
"LICENSE"
] | 199 |
2.4 | captcha-url-reader | 1.0.1 | Simple Python package: pass CAPTCHA image URL and get extracted text | # captcha-url-reader
A simple package: pass a CAPTCHA image URL and it returns the extracted text.
## Install
```bash
pip install captcha-url-reader
```
## Usage
```python
from captcha_image_reader import read_captcha_from_url
# Default mode (recommended for Amazon-style captchas)
captcha_text = read_captcha_from_url("https://images-na.ssl-images-amazon.com/captcha/sgkknrsj/Captcha_iwrdailhkf.jpg")
if captcha_text:
print(f"CAPTCHA text extracted: {captcha_text}")
else:
print("No text extracted from image URL.")
```
## Overlap-heavy captcha mode
Use the forced overlap mode only when characters are merged or overlapping and the default mode is inaccurate.
```python
from captcha_image_reader import read_captcha_from_url
captcha_text = read_captcha_from_url(
"https://2captcha.com/dist/web/assets/captcha-rn1S3orp.jpg",
force_overlap_risk=True,
)
```
## When to use which mode
- Use default mode for clean or mostly non-overlapping text (for example, most Amazon captchas).
- Use `force_overlap_risk=True` only when characters are merged and default extraction is wrong.
## Example scripts
- Default/Amazon style: `examples/read_amazon_default.py`
- Overlap-heavy style: `examples/read_overlap_captcha.py`
Run them directly:
```bash
./.venv/bin/python examples/read_amazon_default.py
./.venv/bin/python examples/read_overlap_captcha.py
```
## GPU behavior
- Uses GPU first by default.
- If GPU is not available or fails, automatically falls back to CPU.
| text/markdown | Arif Shah | null | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"easyocr>=1.7.1",
"opencv-python>=4.9.0",
"Pillow>=10.2.0",
"numpy>=1.26.0",
"requests>=2.31.0",
"pytest>=8.0.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.4 | 2026-02-21T05:57:46.266727 | captcha_url_reader-1.0.1.tar.gz | 6,362 | b4/f4/31442ff90a47c4d94ac5d070fa3fe9b39e614573ac67e5ae31634692552f/captcha_url_reader-1.0.1.tar.gz | source | sdist | null | false | eb36964fea8f8f7db98563646e16c3cb | c9ea81ee131cd87707072505b1e6be95a358398eae3151ec8d795693ef09d083 | b4f431442ff90a47c4d94ac5d070fa3fe9b39e614573ac67e5ae31634692552f | null | [] | 213 |
2.4 | ambers | 0.2.6 | Pure Rust SPSS .sav/.zsav reader with Polars DataFrame output | # ambers
<p align="center">
<img src="https://raw.githubusercontent.com/albertxli/ambers/main/images/ambers-banner-v2.svg" alt="ambers banner" width="900">
</p>
[](https://crates.io/crates/ambers)
[](https://pypi.org/project/ambers/)
[](LICENSE)
Pure Rust SPSS `.sav`/`.zsav` reader — Arrow-native, zero C dependencies.
## Features
- Read `.sav` (bytecode) and `.zsav` (zlib) files
- Arrow `RecordBatch` output — zero-copy to Polars, DataFusion, DuckDB
- Rich metadata: variable labels, value labels, missing values, MR sets, measure levels
- Lazy reader via `scan_sav()` — returns Polars LazyFrame with projection and row limit pushdown
- No PyArrow dependency — uses Arrow PyCapsule Interface for zero-copy transfer
- The fastest SPSS reader in our benchmarks — up to 3.6x faster than polars_readstat, up to 16x faster than pyreadstat
- Python + Rust dual API from a single crate
## Installation
**Python:**
```bash
pip install ambers
```
**Rust:**
```bash
cargo add ambers
```
## Quick Start
### Python
```python
import ambers as am
# Eager read — data + metadata
df, meta = am.read_sav("survey.sav")
# Lazy read — returns Polars LazyFrame
lf, meta = am.scan_sav("survey.sav")
df = lf.select(["Q1", "Q2", "age"]).head(1000).collect()
# Explore metadata
meta.summary()
meta.describe("Q1")
meta.value("Q1")
# Read metadata only (fast, skips data)
meta = am.read_sav_metadata("survey.sav")
```
### Rust
```rust
use ambers::{read_sav, read_sav_metadata};
// Read data + metadata
let (batch, meta) = read_sav("survey.sav")?;
println!("{} rows, {} cols", batch.num_rows(), meta.number_columns);
// Read metadata only
let meta = read_sav_metadata("survey.sav")?;
println!("{}", meta.label("Q1").unwrap_or("(no label)"));
```
## Metadata API (Python)
| Method | Description |
|--------|-------------|
| `meta.summary()` | Formatted overview: file info, type distribution, annotations |
| `meta.describe("Q1")` | Deep-dive into a single variable (or list of variables) |
| `meta.diff(other)` | Compare two metadata objects, returns `MetaDiff` |
| `meta.label("Q1")` | Variable label |
| `meta.value("Q1")` | Value labels dict |
| `meta.format("Q1")` | SPSS format string (e.g. `"F8.2"`, `"A50"`) |
| `meta.measure("Q1")` | Measurement level (`"nominal"`, `"ordinal"`, `"scale"`) |
| `meta.schema` | Full metadata as a nested Python dict |
All variable-name methods raise `KeyError` for unknown variables.
## Streaming Reader (Rust)
```rust
let mut scanner = ambers::scan_sav("survey.sav")?;
scanner.select(&["age", "gender"])?;
scanner.limit(1000);
while let Some(batch) = scanner.next_batch()? {
println!("Batch: {} rows", batch.num_rows());
}
```
## Performance
### Eager Read
All readers return a Polars DataFrame. Best of 3–5 runs (with warmup) on Windows 11, Python 3.13, 24-core machine.
| File | Size | Rows | Cols | ambers | polars_readstat | pyreadstat | vs prs | vs pyreadstat |
|------|------|-----:|-----:|-------:|----------------:|-----------:|-------:|--------------:|
| test_1 (bytecode) | 0.2 MB | 1,500 | 75 | < 0.01s | < 0.01s | 0.011s | — | — |
| test_2 (bytecode) | 147 MB | 22,070 | 677 | **0.286s** | 0.897s | 3.524s | **3.1x** | **12x** |
| test_3 (uncompressed) | 1.1 GB | 79,066 | 915 | **0.322s** | 1.150s | 4.918s | **3.6x** | **15x** |
| test_4 (uncompressed) | 0.6 MB | 201 | 158 | **0.002s** | 0.003s | 0.012s | **1.5x** | **6x** |
| test_5 (uncompressed) | 0.6 MB | 203 | 136 | **0.002s** | 0.003s | 0.016s | **1.5x** | **8x** |
| test_6 (uncompressed) | 5.4 GB | 395,330 | 916 | **1.600s** | 1.752s | 25.214s | **1.1x** | **16x** |
- **Faster than polars_readstat on all tested files** — 1.1–3.6x faster
- **6–16x faster than pyreadstat** across all file sizes
- No PyArrow dependency — uses Arrow PyCapsule Interface for zero-copy transfer
### Lazy Read with Pushdown
`scan_sav()` returns a Polars LazyFrame. Unlike eager reads, it only reads the data you ask for:
| File (size) | Full collect | Select 5 cols | Head 1000 rows | Select 5 + head 1000 |
|-------------|------------:|-------------:|--------------:|--------------------:|
| test_2 (147 MB, 22K × 677) | 0.903s | 0.363s (2.5x) | 0.181s (5.0x) | **0.157s (5.7x)** |
| test_3 (1.1 GB, 79K × 915) | 0.700s | 0.554s (1.3x) | 0.020s (35x) | **0.012s (58x)** |
| test_6 (5.4 GB, 395K × 916) | 3.062s | 2.343s (1.3x) | 0.022s (139x) | **0.013s (236x)** |
On the 5.4 GB file, selecting 5 columns and 1000 rows completes in **13ms** — 236x faster than reading the full dataset.
## License
[MIT](LICENSE)
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | MIT | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"polars>=1.3"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:57:28.602769 | ambers-0.2.6.tar.gz | 72,457 | 17/a5/bace6f222e58a3bb7e8b1f94577eba2290fdb9101b9b6848127ff3c239da/ambers-0.2.6.tar.gz | source | sdist | null | false | 4d309b108b31c63ba6661da23e948bf9 | 6dae3673da0f89d5769adba442c5a1b5a98d5e9d7db364225c15b3e0d7b69bb3 | 17a5bace6f222e58a3bb7e8b1f94577eba2290fdb9101b9b6848127ff3c239da | null | [
"LICENSE"
] | 642 |
2.4 | pmtvs-topology | 0.3.2 | Topological data analysis primitives | # pmtvs-topology
Topological data analysis primitives.
## Functions
- `distance_matrix(points)` - Pairwise distances
- `persistent_homology_0d(points)` - 0D persistence (connected components)
- `betti_numbers(persistence, threshold)` - Count features at threshold
- `persistence_entropy(persistence)` - Entropy of lifetimes
- `persistence_landscape(persistence, k)` - k-th landscape
- `bottleneck_distance(p1, p2)` - Distance between diagrams
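To give a flavor of the API, here is a hypothetical pure-Python sketch of what `distance_matrix` might compute (not the packaged implementation):

```python
import math

def distance_matrix(points):
    """Pairwise Euclidean distances as a symmetric nested list."""
    n = len(points)
    dm = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])  # Euclidean distance
            dm[i][j] = dm[j][i] = d
    return dm
```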
## License
PolyForm Strict 1.0.0 with Additional Terms.
- **Students & individual researchers:** Free. Cite us.
- **Funded research labs (grants > $100K):** Academic Research License required. [Contact us](mailto:licensing@pmtvs.dev).
- **Commercial use:** Commercial License required. [Contact us](mailto:licensing@pmtvs.dev).
See [LICENSE](LICENSE) for full terms.
| text/markdown | pmtvs contributors | null | null | null | PolyForm-Strict-1.0.0 | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.20",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T05:57:15.015347 | pmtvs_topology-0.3.2-py3-none-any.whl | 8,416 | 4f/5a/85b6b6b27e172462f13ebccbb1bb67cc8d902fddf5655a9e92bf05aaa3e9/pmtvs_topology-0.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | ebc935ec00f2923a2ca7fa35a07cb920 | f4e556b1f0112082fca2d5e6d8871a4d77814dc70a510ed02258c63ce38bce02 | 4f5a85b6b6b27e172462f13ebccbb1bb67cc8d902fddf5655a9e92bf05aaa3e9 | null | [
"LICENSE"
] | 87 |
2.4 | pmtvs-tests | 0.3.2 | Statistical hypothesis testing primitives | # pmtvs-tests
Statistical hypothesis testing primitives.
## Installation
```bash
pip install pmtvs-tests
```
## Functions
### Bootstrap Methods
- `bootstrap_mean(data)` - Bootstrap distribution of mean
- `bootstrap_confidence_interval(data, statistic)` - Bootstrap CI
- `bootstrap_ci(data, statistic)` - Bootstrap CI (alias)
- `bootstrap_std(data)` - Bootstrap standard error
- `block_bootstrap_ci(data, statistic)` - Block bootstrap for dependent data
### Non-parametric Tests
- `permutation_test(x, y)` - Two-sample permutation test
- `surrogate_test(signal, statistic)` - Surrogate data test
- `runs_test(signal)` - Runs test (randomness)
- `mann_kendall_test(signal)` - Trend test
- `mannwhitney_test(sample1, sample2)` - Mann-Whitney U test
- `kruskal_test(*samples)` - Kruskal-Wallis H-test
### Parametric Tests
- `t_test(sample)` - One-sample t-test
- `t_test_paired(sample1, sample2)` - Paired t-test
- `t_test_independent(sample1, sample2)` - Independent t-test
- `f_test(sample1, sample2)` - F-test for variance equality
- `chi_squared_test(observed)` - Chi-squared goodness-of-fit
- `anova(*samples)` - One-way ANOVA
- `shapiro_test(sample)` - Shapiro-Wilk normality test
- `levene_test(*samples)` - Levene's variance equality test
### Stationarity Tests
- `adf_test(signal)` - Augmented Dickey-Fuller
- `stationarity_test(signal)` - Rolling-window stationarity
- `kpss_test(signal)` - KPSS test
- `phillips_perron_test(signal)` - Phillips-Perron unit root test
- `trend(signal)` - Linear trend estimation
- `changepoints(signal)` - CUSUM changepoint detection
### Spectral Tests
- `marchenko_pastur_test(eigenvalues)` - Marchenko-Pastur law test
- `arch_test(data)` - Engle's ARCH test for heteroscedasticity
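As a flavor of the pure-Python backend, a hypothetical sketch of `permutation_test` on the difference of means (the packaged implementation may differ):

```python
import random

def permutation_test(x, y, n_perm=2000, seed=0):
    """Two-sample permutation test on the difference of means; returns a p-value."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled samples
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            count += 1
    # Add-one correction keeps the p-value strictly positive
    return (count + 1) / (n_perm + 1)
```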
## Backend
Pure Python implementation. Requires scipy >= 1.7.
## License
PolyForm Strict 1.0.0 with Additional Terms.
- **Students & individual researchers:** Free. Cite us.
- **Funded research labs (grants > $100K):** Academic Research License required. [Contact us](mailto:licensing@pmtvs.dev).
- **Commercial use:** Commercial License required. [Contact us](mailto:licensing@pmtvs.dev).
See [LICENSE](LICENSE) for full terms.
| text/markdown | pmtvs contributors | null | null | null | PolyForm-Strict-1.0.0 | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.20",
"scipy>=1.7",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T05:57:14.188918 | pmtvs_tests-0.3.2-py3-none-any.whl | 13,273 | 5d/31/a04db399d928d5bacbab24d1b6aa77fac525b0b3fcc82c77aba9c3c4fb4e/pmtvs_tests-0.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | fec833c8f745338c1890f8da0a8c5b95 | f4cecf137ee03d6724053472472e9ee7bdfa3a1fb8382e3129e21a3c753cc8f8 | 5d31a04db399d928d5bacbab24d1b6aa77fac525b0b3fcc82c77aba9c3c4fb4e | null | [
"LICENSE"
] | 89 |
2.4 | pmtvs-spectral | 0.3.2 | Spectral analysis primitives (pure Python) | # pmtvs-spectral
Spectral analysis primitives.
## Installation
```bash
pip install pmtvs-spectral
```
## Functions
- `power_spectral_density(signal, fs)` - Welch's method PSD
- `dominant_frequency(signal, fs)` - Peak frequency
- `spectral_entropy(signal, fs)` - Flatness measure
- `spectral_centroid(signal, fs)` - Center of mass
- `spectral_bandwidth(signal, fs)` - Spread around centroid
- `spectral_rolloff(signal, fs)` - Energy threshold frequency
- `spectral_flatness(signal, fs)` - Wiener entropy
- `harmonic_ratio(signal, fs)` - Harmonic-to-noise ratio
- `total_harmonic_distortion(signal, fs)` - THD
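For illustration, a hypothetical pure-Python sketch of `spectral_centroid` using a naive one-sided DFT (the packaged implementation may differ):

```python
import cmath

def spectral_centroid(signal, fs):
    """Center of mass of the magnitude spectrum, in Hz (naive O(n^2) DFT)."""
    n = len(signal)
    mags, freqs = [], []
    for k in range(n // 2 + 1):  # one-sided spectrum
        coeff = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        mags.append(abs(coeff))
        freqs.append(k * fs / n)
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0
```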
## Backend
Pure Python implementation.
## License
PolyForm Strict 1.0.0 with Additional Terms.
- **Students & individual researchers:** Free. Cite us.
- **Funded research labs (grants > $100K):** Academic Research License required. [Contact us](mailto:licensing@pmtvs.dev).
- **Commercial use:** Commercial License required. [Contact us](mailto:licensing@pmtvs.dev).
See [LICENSE](LICENSE) for full terms.
| text/markdown | pmtvs contributors | null | null | null | PolyForm-Strict-1.0.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.20",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/pmtvs/pmtvs",
"Repository, https://github.com/pmtvs/pmtvs"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T05:57:12.866473 | pmtvs_spectral-0.3.2-py3-none-any.whl | 7,736 | 12/ad/fd1dce71040a1c88314022bec0e72cb43724c3c98686af380278f233f291/pmtvs_spectral-0.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | fd37374127d1beb17f1180208f0e0ecb | 2d8111c6174bd708d91a184030ae73b97f443a5e828828f023d854df02661811 | 12adfd1dce71040a1c88314022bec0e72cb43724c3c98686af380278f233f291 | null | [
"LICENSE"
] | 88 |
2.4 | pmtvs-regression | 0.3.2 | Pairwise regression primitives (5 functions, pure Python) | # pmtvs-regression
Pairwise regression and signal arithmetic primitives.
## Installation
```bash
pip install pmtvs-regression
```
## Functions
- `linear_regression(sig_a, sig_b)` - OLS linear regression (slope, intercept, R², std error)
- `ratio(sig_a, sig_b)` - Element-wise ratio with epsilon protection
- `product(sig_a, sig_b)` - Element-wise product
- `difference(sig_a, sig_b)` - Element-wise difference
- `sum_signals(sig_a, sig_b)` - Element-wise sum
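As a flavor of the pure-Python backend, a hypothetical sketch of `linear_regression` (returning slope, intercept, and R²; the packaged function also reports a standard error):

```python
def linear_regression(sig_a, sig_b):
    """OLS fit of sig_b on sig_a: returns (slope, intercept, r_squared)."""
    n = len(sig_a)
    mean_a = sum(sig_a) / n
    mean_b = sum(sig_b) / n
    ss_aa = sum((a - mean_a) ** 2 for a in sig_a)
    ss_ab = sum((a - mean_a) * (b - mean_b) for a, b in zip(sig_a, sig_b))
    ss_bb = sum((b - mean_b) ** 2 for b in sig_b)
    slope = ss_ab / ss_aa
    intercept = mean_b - slope * mean_a
    # R^2 is the squared Pearson correlation for simple OLS
    r_squared = (ss_ab * ss_ab) / (ss_aa * ss_bb) if ss_bb else 1.0
    return slope, intercept, r_squared
```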
## Backend
Pure Python implementation.
## License
PolyForm Strict 1.0.0 with Additional Terms.
- **Students & individual researchers:** Free. Cite us.
- **Funded research labs (grants > $100K):** Academic Research License required. [Contact us](mailto:licensing@pmtvs.dev).
- **Commercial use:** Commercial License required. [Contact us](mailto:licensing@pmtvs.dev).
See [LICENSE](LICENSE) for full terms.
| text/markdown | pmtvs contributors | null | null | null | PolyForm-Strict-1.0.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.20",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/pmtvs/pmtvs",
"Repository, https://github.com/pmtvs/pmtvs"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T05:57:12.118106 | pmtvs_regression-0.3.2-py3-none-any.whl | 6,380 | ac/e4/8ba4759da2de8cd8c3d95fdd38c78b7812b796c99ad370ddad5cb49c2a63/pmtvs_regression-0.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 2343963cda5694aae7c318a41e5c3662 | 8714eb09bdd4915a4dc28338ff7b6adc63b779fcde2e2248996e33fbdbcf5ec6 | ace48ba4759da2de8cd8c3d95fdd38c78b7812b796c99ad370ddad5cb49c2a63 | null | [
"LICENSE"
] | 90 |
2.4 | pmtvs-network | 0.3.2 | Network analysis primitives | # pmtvs-network
Network analysis primitives.
## Installation
```bash
pip install pmtvs-network
```
## Functions
### Centrality
- `degree_centrality(adjacency)` - Degree centrality
- `betweenness_centrality(adjacency)` - Betweenness centrality
- `closeness_centrality(adjacency)` - Closeness centrality
### Structure
- `clustering_coefficient(adjacency)` - Local clustering
- `average_path_length(adjacency)` - Mean shortest path
- `density(adjacency)` - Graph density
- `connected_components(adjacency)` - Find components
- `adjacency_from_correlation(corr, threshold)` - Build network from correlation
### Community Detection
- `modularity(adjacency, communities)` - Newman-Girvan modularity
- `community_detection(adjacency, method)` - Louvain, spectral, or label propagation
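To illustrate the adjacency-matrix convention, a hypothetical pure-Python sketch of `density` and `degree_centrality` for an undirected graph (not the packaged implementation):

```python
def density(adjacency):
    """Undirected graph density: edges present / edges possible."""
    n = len(adjacency)
    if n < 2:
        return 0.0
    edges = sum(1 for i in range(n) for j in range(i + 1, n) if adjacency[i][j])
    return 2.0 * edges / (n * (n - 1))

def degree_centrality(adjacency):
    """Degree of each node normalized by n - 1."""
    n = len(adjacency)
    return [sum(1 for j in range(n) if j != i and adjacency[i][j]) / (n - 1)
            for i in range(n)]
```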
## Backend
Pure Python implementation.
## License
PolyForm Strict 1.0.0 with Additional Terms.
- **Students & individual researchers:** Free. Cite us.
- **Funded research labs (grants > $100K):** Academic Research License required. [Contact us](mailto:licensing@pmtvs.dev).
- **Commercial use:** Commercial License required. [Contact us](mailto:licensing@pmtvs.dev).
See [LICENSE](LICENSE) for full terms.
| text/markdown | pmtvs contributors | null | null | null | PolyForm-Strict-1.0.0 | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.20",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T05:57:11.347967 | pmtvs_network-0.3.2-py3-none-any.whl | 9,918 | 40/e3/8fc00d927b03225fbe36cce9e81835809269f1f8ceb2ca77cda0d2080c42/pmtvs_network-0.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | c364e80c000d0f8d2181719ac1b24031 | 3e4c3bbccbe5c96c00deca9d76caa3c73e0443b0b7173842bf20066cccd73e38 | 40e38fc00d927b03225fbe36cce9e81835809269f1f8ceb2ca77cda0d2080c42 | null | [
"LICENSE"
] | 88 |