metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | janaf | 1.2.1 | Python wrapper for NIST-JANAF Thermochemical Tables | # py-janaf: Python wrapper for NIST-JANAF Thermochemical Tables
[](https://pypi.python.org/project/janaf/)
[](https://pypi.org/project/janaf/)
## Features
* Search compounds
* Parse a table as `polars.DataFrame`
* Fix some data
* [Fix missing sign](https://github.com/n-takumasa/py-janaf/commit/7f56ce84bb65c90dd4ecd2efdca2d6f8fe1243b5)
* NOTE: Assuming the sign is consistent with the sign immediately following
* [Fix missing tab](https://github.com/n-takumasa/py-janaf/commit/196c788c792bb672f339d073a0d21c610fabff53)
* NOTE: Based on [PDF files](https://janaf.nist.gov/pdf/JANAF-FourthEd-1998-Carbon.pdf#page=83)
* [Ignore comment-like lines](https://github.com/n-takumasa/py-janaf/commit/d99b942fa8848eed8b8308cf9a50c1411a6f14bf)
## Usage
```bash
pip install janaf
```
```pycon
>>> import polars as pl
>>> import janaf
>>> table = janaf.search(formula="CO2$")
>>> table.name
'Carbon Dioxide (CO2)'
>>> table.formula
'C1O2(g)'
>>> table.df
shape: (62, 9)
┌────────┬────────┬─────────┬──────────────┬───┬───────────┬───────────┬─────────┬──────┐
│ T(K) ┆ Cp ┆ S ┆ -[G-H(Tr)]/T ┆ … ┆ delta-f H ┆ delta-f G ┆ log Kf ┆ Note │
│ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 ┆ f64 ┆ ┆ f64 ┆ f64 ┆ f64 ┆ str │
╞════════╪════════╪═════════╪══════════════╪═══╪═══════════╪═══════════╪═════════╪══════╡
│ 0.0 ┆ 0.0 ┆ 0.0 ┆ inf ┆ … ┆ -393.151 ┆ -393.151 ┆ inf ┆ null │
│ 100.0 ┆ 29.208 ┆ 179.009 ┆ 243.568 ┆ … ┆ -393.208 ┆ -393.683 ┆ 205.639 ┆ null │
│ 200.0 ┆ 32.359 ┆ 199.975 ┆ 217.046 ┆ … ┆ -393.404 ┆ -394.085 ┆ 102.924 ┆ null │
│ 298.15 ┆ 37.129 ┆ 213.795 ┆ 213.795 ┆ … ┆ -393.522 ┆ -394.389 ┆ 69.095 ┆ null │
│ 300.0 ┆ 37.221 ┆ 214.025 ┆ 213.795 ┆ … ┆ -393.523 ┆ -394.394 ┆ 68.67 ┆ null │
│ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … │
│ 5600.0 ┆ 64.588 ┆ 373.709 ┆ 316.947 ┆ … ┆ -416.794 ┆ -386.439 ┆ 3.605 ┆ null │
│ 5700.0 ┆ 64.68 ┆ 374.853 ┆ 317.953 ┆ … ┆ -417.658 ┆ -385.89 ┆ 3.536 ┆ null │
│ 5800.0 ┆ 64.772 ┆ 375.979 ┆ 318.944 ┆ … ┆ -418.541 ┆ -385.324 ┆ 3.47 ┆ null │
│ 5900.0 ┆ 64.865 ┆ 377.087 ┆ 319.92 ┆ … ┆ -419.445 ┆ -384.745 ┆ 3.406 ┆ null │
│ 6000.0 ┆ 64.957 ┆ 378.178 ┆ 320.882 ┆ … ┆ -420.372 ┆ -384.148 ┆ 3.344 ┆ null │
└────────┴────────┴─────────┴──────────────┴───┴───────────┴───────────┴─────────┴──────┘
>>> table.df.filter(pl.col("T(K)").is_close(298.15)).item(0, "delta-f H") # kJ/mol
-393.522
```
## Credit
Following files are distributed in [NIST-JANAF Tables](https://janaf.nist.gov/):
* [py-janaf/src/janaf/janaf.json](https://github.com/n-takumasa/py-janaf/blob/main/src/janaf/janaf.json)
* [py-janaf/src/janaf/data/](https://github.com/n-takumasa/py-janaf/tree/main/src/janaf/data)
### [NIST-JANAF Tables - Credits](https://janaf.nist.gov/janbanr.html)
```plain
NIST Standard Reference Database 13
NIST JANAF THERMOCHEMICAL TABLES 1985
Version 1.0
Data compiled and evaluated by
M.W. Chase, Jr., C.A. Davies, J.R. Downey, Jr.
D.J. Frurip, R.A. McDonald, and A.N. Syverud
Distributed by
Standard Reference Data Program
National Institute of Standards and Technology
Gaithersburg, MD 20899
Copyright 1986 by
the U.S. Department of Commerce
on behalf of the United States. All rights reserved.
DISCLAIMER: NIST uses its best efforts to deliver a high quality copy of
the Database and to verify that the data contained therein have been
selected on the basis of sound scientific judgement. However, NIST makes
no warranties to that effect, and NIST shall not be liable for any damage
that may result from errors or omissions in the Database.
```
| text/markdown | null | Takumasa Nakamura <n.takumasa@gmail.com> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language... | [] | null | null | >=3.9 | [] | [] | [] | [
"polars>=1.32.0",
"xarray>=2023.1.0; extra == \"all\"",
"xarray>=2023.1.0; extra == \"xarray\""
] | [] | [] | [] | [
"Repository, https://github.com/n-takumasa/py-janaf"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:55:18.256082 | janaf-1.2.1.tar.gz | 2,064,105 | 7b/9b/564c2b14ee1204b91f99ef51cd2843187b7227a65051eec166e0f9619bde/janaf-1.2.1.tar.gz | source | sdist | null | false | 579960f652931e93b92a9de84e13bd75 | 8f348fd3048290dc75cbe085d117f0ca3215cd6937513e98de23fedbc7707ee6 | 7b9b564c2b14ee1204b91f99ef51cd2843187b7227a65051eec166e0f9619bde | MIT | [
"LICENSE"
] | 12,308 |
2.4 | infrahub-sdk | 1.19.0b0 | Python Client to interact with Infrahub |
<!-- markdownlint-disable -->

<!-- markdownlint-restore -->
# Infrahub by OpsMill
[Infrahub](https://github.com/opsmill/infrahub) by [OpsMill](https://opsmill.com) is taking a new approach to Infrastructure Management by providing a new generation of datastore to organize and control all the data that defines how an infrastructure should run.
At its heart, Infrahub is built on 3 fundamental pillars:
- **Powerful Schema**: that's easily extensible
- **Unified Version Control**: for data and files
- **Data Synchronization**: with traceability and ownership
## Infrahub SDK
The Infrahub Python SDK greatly simplifies how you can interact with Infrahub programmatically.
More information can be found in the [Infrahub Python SDK Documentation](https://docs.infrahub.app/python-sdk/introduction).
## Installation
The Infrahub SDK can be installed using the pip package installer. It is recommended to install the SDK into a virtual environment.
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install infrahub-sdk
```
### Installing optional extras
Extras can be installed as part of the Python SDK and are not installed by default.
#### ctl
The ctl extra provides the `infrahubctl` command, which allows you to interact with an Infrahub instance.
```bash
pip install 'infrahub-sdk[ctl]'
```
#### tests
The tests extra provides all the components for the testing framework of Transforms, Queries and Checks.
```bash
pip install 'infrahub-sdk[tests]'
```
#### all
Installs infrahub-sdk together with all the extras.
```bash
pip install 'infrahub-sdk[all]'
```
### Development setup with UV
If you're developing the SDK and using UV for dependency management, you can install specific dependency groups:
```bash
# Install development dependencies
uv sync --all-groups --all-extras
# Install specific groups
uv sync --group tests # Testing dependencies only
uv sync --group lint # Linting dependencies only
uv sync --group ctl # CLI dependencies only
uv sync --all-groups --all-extras # All optional dependencies
```
| text/markdown | null | OpsMill <info@opsmill.com> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"dulwich>=0.21.4",
"graphql-core<3.3,>=3.1",
"httpx>=0.20",
"netutils>=1.0.0",
"pydantic!=2.0.1,!=2.1.0,<3.0.0,>=2.0.0",
"pydantic-settings>=2.0",
"tomli>=1.1.0; python_version < \"3.11\"",
"ujson>=5",
"whenever<0.10.0,>=0.9.3",
"ariadne-codegen==0.15.3; extra == \"all\"",
"click==8.1.*; extra =... | [] | [] | [] | [
"Homepage, https://opsmill.com",
"Repository, https://github.com/opsmill/infrahub-sdk-python",
"Documentation, https://docs.infrahub.app/python-sdk/introduction"
] | uv/0.9.8 | 2026-02-18T14:55:16.814510 | infrahub_sdk-1.19.0b0.tar.gz | 820,349 | f2/05/5b32f7fce1bd1d1d2501de0e340aa82eade1980a48c63b7646ad53e038e2/infrahub_sdk-1.19.0b0.tar.gz | source | sdist | null | false | 804d26c94a00cd65e4410f64d64eeb83 | 9ad80bee724540f1fc11626e913bd57195804781d22e361aa68fbfca0a0442fe | f2055b32f7fce1bd1d1d2501de0e340aa82eade1980a48c63b7646ad53e038e2 | null | [
"LICENSE.txt"
] | 228 |
2.4 | notebook-checker | 0.1.3 | A package to check notebooks for the use of globals inside functions and to check student progress on assignments |
# Notebook Checker
A collection of additional checks to run in student notebooks.
## Globals checker
Checks notebook files for the use of globals inside functions. The checker
parses each cell's AST as it is run, and injects code into the AST to ensure
that when a function is called, that any free variables are only of the type
callable, type or module. If a function tries to access any other type of
variable, i.e. stored data, outside of the function's scope, then the checker
logs an error message, informing the user.
This check can be started with the magic function: `%start_checks`
## Student logger
Logs student progress while the notebook is being exectuted, including current
cell ID, timestamp at which the cell is executed, and the cell contents.
This check can be started with the magic function: `%register_student`
| text/markdown | null | TF Doolan <t.f.doolan@uva.nl> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"ipython",
"webdavclient3",
"cryptography"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T14:54:38.951190 | notebook_checker-0.1.3.tar.gz | 7,994 | f9/4e/9eae6d9413dc541f69b541e6a9a192811b2b1e10ba268aa62dc909bd7ccf/notebook_checker-0.1.3.tar.gz | source | sdist | null | false | 8d09a0c592965640ae082292cf7c212a | ab829f3b167011c32c8842c395641adb8c476c90279cd04d73a0a3520fc70d45 | f94e9eae6d9413dc541f69b541e6a9a192811b2b1e10ba268aa62dc909bd7ccf | MIT | [
"LICENSE"
] | 303 |
2.4 | llm-llm-ctx-mgr | 0.1.0 | A middleware layer for managing, budgeting, and optimizing LLM context windows. | # Project: `llm-ctx-mgr - llm context manager (engineering)`
### **1. Background & Problem Statement**
Large Language Models (LLMs) are moving from simple "Prompt Engineering" (crafting a single query) to "Context Engineering" (managing a massive ecosystem of retrieved documents, tools, and history).
The current problem is **Context Pollution**:
1. **Overloading:** RAG (Retrieval Augmented Generation) pipelines often dump too much data, exceeding token limits.
2. **Noise:** Duplicate or irrelevant information confuses the model and increases hallucination rates.
3. **Formatting Chaos:** Different models (Claude vs. Llama vs. GPT) require different formatting (XML vs. Markdown vs. Plain Text), leading to messy, hard-to-maintain string concatenation code.
4. **Black Box:** Developers rarely see exactly what "context" was sent to the LLM until after a failure occurs.
**The Solution:** `llm-ctx-mgr` acts as a **middleware layer** for the LLM pipeline. It creates a structured, optimized, and budget-aware "context payload" before it reaches the model.
---
### **2. Architecture: Where It Fits**
The package sits strictly between the **Retrieval/Agent Layer** (e.g., LangChain, LlamaIndex) and the **Execution Layer** (the LLM API).
#### **Diagram: The "Before" (Standard Pipeline)**
*Without `llm-ctx-mgr`, retrieval is messy and often truncated arbitrarily.*
```mermaid
graph LR
A[User Query] --> B[LangChain Retriever]
B --> C{Result: 15 Docs}
C -->|Raw Dump| D[LLM Context Window]
D -->|Token Limit Exceeded!| E[Truncated/Error]
```
#### **Diagram: The "After" (With `llm-ctx-mgr`)**
*With your package, the context is curated, prioritized, and formatted.*
```mermaid
graph LR
A[User Query] --> B[LangChain Retriever]
B --> C[Raw Data: 15 Docs + History]
C --> D[**llm-ctx-mgr**]
subgraph "Your Middleware"
D --> E["1. Token Budgeting"]
E --> F["2. Semantic Pruning"]
F --> G["3. Formatting (XML/JSON)"]
end
G --> H[Optimized Prompt]
H --> I[LLM API]
```
---
### **3. Key Features & Tools**
Here is the breakdown of the 4 core modules, the features they provide, and the libraries powering them.
#### **Module A: The Budget Controller (`budget`)**
* **Goal:** Ensure the context never exceeds the model's limit (e.g., 8192 tokens) while keeping the most important information.
* **Feature:** `PriorityQueue`. Users assign a priority (Critical, High, Medium, Low) to every piece of context. If the budget is full, "Low" items are dropped first.
* **Supported Providers & Tools:**
* **OpenAI** (`gpt-4`, `o1`, `o3`, etc.): **`tiktoken`** — fast, local token counting.
* **HuggingFace** (`meta-llama/...`, `mistralai/...`): **`tokenizers`** — for open-source models.
* **Google** (`gemini-2.0-flash`, `gemma-...`): **`google-genai`** — API-based `count_tokens`.
* **Anthropic** (`claude-sonnet-4-20250514`, etc.): **`anthropic`** — API-based `count_tokens`.
* **Installation (pick what you need):**
```bash
pip install llm-ctx-mgr[openai] # tiktoken
pip install llm-ctx-mgr[huggingface] # tokenizers
pip install llm-ctx-mgr[google] # google-genai
pip install llm-ctx-mgr[anthropic] # anthropic
pip install llm-ctx-mgr[all] # everything
```
#### **Module B: The Semantic Pruner (`prune`)**
* **Goal:** Remove redundancy. If three retrieved documents say "Python is great," keep only the best one.
* **Features:**
* **`Deduplicator` (block-level):** Calculates cosine similarity between context blocks and removes duplicate blocks. Among duplicates, the highest-priority block is kept.
* **`Deduplicator.deduplicate_chunks()` (chunk-level):** Splits a single block's content by separator (e.g. `\n\n`), deduplicates the chunks internally, and reassembles the cleaned content. Ideal for RAG results where multiple retrieved chunks within one block are semantically redundant.
* **Tools:**
* **`FastEmbed`**: Lightweight embedding generation (CPU-friendly, no heavy PyTorch needed).
* **`Numpy`**: For efficient vector math (dot products).
* **Installation:**
```bash
pip install llm-ctx-mgr[prune]
```
#### **Module C: Context Distillation (`distill`)**
* **Goal:** Compress individual blocks by removing non-essential tokens (e.g., reduces a 5000-token document to 2500 tokens) using a small ML model.
* **Feature:** `Compressor`. Uses **LLMLingua-2** (small BERT-based token classifier) to keep only the most important words.
* **Tools:**
* **`llmlingua`**: Microsoft's library for prompt compression.
* **`onnxruntime`** / **`transformers`**: For running the small BERT model.
* **Installation:**
```bash
pip install llm-ctx-mgr[distill]
```
#### **Module D: The Formatter (`format`)**
* **Goal:** Adapt the text structure to the specific LLM being used without changing the data.
* **Feature:** `ModelAdapter`.
* *Claude Mode:* Wraps data in XML tags (`<doc id="1">...</doc>`).
* *Llama Mode:* Uses specific Markdown headers or `[INST]` tags.
* **Tools:**
* **`Jinja2`**: For powerful, logic-based string templates.
* **`Pydantic`**: To enforce strict schema validation on the input data.
#### **Module E: Observability (`inspect`)**
* **Goal:** Let the developer see exactly what is happening.
* **Feature:** `ContextVisualizer` and `Snapshot`. Prints a colored bar chart of token usage to the terminal and saves the final prompt to a JSON file for debugging.
* **Tools:**
* **`Rich`**: For beautiful terminal output and progress bars.
---
### **4. Installation & Usage Guide**
#### **Installation**
```bash
pip install llm-ctx-mgr[all]
```
#### **Feature A: Budgeting & Priority Pruning**
*Ensure your context fits the token limit by prioritizing critical information.*
```python
from context_manager import ContextEngine, ContextBlock
from context_manager.strategies import PriorityPruning
# 1. Initialize Engine with a token limit
engine = ContextEngine(
model="gpt-4",
token_limit=4000,
pruning_strategy=PriorityPruning()
)
# 2. Add Critical Context (System Prompts) - NEVER dropped
engine.add(ContextBlock(
content="You are a helpful AI assistant.",
role="system",
priority="critical"
))
# 3. Add High Priority Context (User History) - Dropped only if critical fills budget
engine.add(ContextBlock(
content="User: Explain quantum computing.",
role="history",
priority="high"
))
# 4. Add Medium/Low Priority (RAG Docs) - Dropped first
docs = ["Quantum computing uses qubits...", "Quantum mechanics is...", "Cake recipes..."]
for doc in docs:
engine.add(ContextBlock(
content=doc,
role="rag_context",
priority="medium"
))
# 5. Compile - Triggers budgeting and pruning
final_prompt = engine.compile()
print(f"Final token count: {engine.compiled_tokens}")
```
#### **Feature B: Semantic Pruning (Deduplication)**
*Remove duplicate or highly similar content to save space and reduce noise.*
```python
from context_manager import ContextEngine, ContextBlock
from context_manager.prune import Deduplicator
# 1. Initialize Deduplicator (uses FastEmbed by default)
dedup = Deduplicator(threshold=0.85)
# 2. Initialize Engine with Deduplicator
engine = ContextEngine(
model="gpt-4",
token_limit=4000,
deduplicator=dedup
)
# 3. Add duplicate content (simulating RAG retrieval)
# The second block will be detected as a duplicate and removed/merged
engine.add(ContextBlock(
content="Python was created by Guido van Rossum.",
role="rag_context",
priority="medium"
))
engine.add(ContextBlock(
content="Guido van Rossum created the Python language.",
role="rag_context",
priority="low" # Lower priority duplicate is dropped
))
# 4. Compile - Deduplication happens before budgeting
final_prompt = engine.compile()
```
#### **Feature C: Context Distillation (Compression)**
*Compress long documents using LLMLingua to keep essential information within budget.*
```python
from context_manager import ContextEngine, ContextBlock, Priority
from context_manager.distill import LLMLinguaCompressor
# 1. Initialize Compressor (loads small local model)
compressor = LLMLinguaCompressor(
model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
device_map="cpu"
)
# 2. Initialize Engine
engine = ContextEngine(
model="gpt-4",
token_limit=2000,
compressor=compressor
)
# 3. Add a long document marked for compression
long_text = "..." * 1000 # Very long text
engine.add(ContextBlock(
content=long_text,
role="rag_context",
priority=Priority.HIGH,
can_compress=True # <--- Triggers compression for this block
))
# 4. Compile - Compression happens first, then deduplication, then budgeting
final_prompt = engine.compile()
```
### **5. Roadmap for Development**
1. **v0.1 (MVP):** `tiktoken` counting and `PriorityPruning`. (Done)
2. **v0.2 (Structure):** `Jinja2` templates for formatting. (Done)
3. **v0.3 (Smarts):** `FastEmbed` for semantic deduplication. (Done)
4. **v0.4 (Vis):** `Rich` terminal visualization. (Done)
5. **v0.5 (Distill):** `LLMLingua` integration for context compression. (Done)
6. **v0.6 (Next):** Streaming support and advanced caching strategies.
This design gives you a clear path to building a high-value tool that solves a specific, painful problem for AI engineers.
| text/markdown | Adipta Martulandi | null | null | null | null | ai, budget, context, llm, token | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engi... | [] | null | null | >=3.10 | [] | [] | [] | [
"anthropic>=0.40; extra == \"all\"",
"fastembed>=0.4; extra == \"all\"",
"google-genai>=1.0; extra == \"all\"",
"llmlingua>=0.2.2; extra == \"all\"",
"numpy>=1.24; extra == \"all\"",
"tiktoken>=0.7; extra == \"all\"",
"tokenizers>=0.19; extra == \"all\"",
"anthropic>=0.40; extra == \"anthropic\"",
"... | [] | [] | [] | [
"Homepage, https://github.com/adiptamartulandi/llm-ctx-mgr",
"Repository, https://github.com/adiptamartulandi/llm-ctx-mgr"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T14:54:25.572415 | llm_llm_ctx_mgr-0.1.0.tar.gz | 18,685 | d2/3f/5f6ffcf1a1a0a52b98c32dac3dc3118263871fb19194241b40156cfacc1e/llm_llm_ctx_mgr-0.1.0.tar.gz | source | sdist | null | false | 99294ddea8c5d6b7260c31256a351615 | 59e9f7781fb96b20827c01270e8297d1a7919502fc823c5fe0014930f2ddad03 | d23f5f6ffcf1a1a0a52b98c32dac3dc3118263871fb19194241b40156cfacc1e | MIT | [] | 261 |
2.4 | ros2-unbag | 1.2.3 | A ROS 2 tool for exporting bags to human readable files. Supports pluggable export routines to handle any message type. | <img src="ros2_unbag/ui/assets/badge.svg" height=130 align="right">
# *ros2 unbag* - fast ROS 2 bag export for any format
<p align="center">
<img src="https://img.shields.io/github/license/ika-rwth-aachen/ros2_unbag"/>
<a href="https://github.com/ika-rwth-aachen/ros2_unbag/actions/workflows/build_docker.yml"><img src="https://github.com/ika-rwth-aachen/ros2_unbag/actions/workflows/build_docker.yml/badge.svg"/></a>
<a href="https://pypi.org/project/ros2-unbag/"><img src="https://img.shields.io/pypi/v/ros2-unbag?label=PyPI"/></a>
<img alt="GitHub Repo stars" src="https://img.shields.io/pypi/dm/ros2-unbag">
</p>
*ros2 unbag* is a powerful ROS 2 tool featuring an **intuitive GUI** and **flexible CLI** for extracting topics from `.db3` or `.mcap` bag files into formats like CSV, JSON, PCD, images, and more.
- **🎨 Intuitive GUI interface** for interactive bag exploration and export configuration
- **⚙️ Full-featured ROS 2 CLI plugin**: `ros2 unbag <args>` for automation and scripting
- **🔌 Pluggable export routines** enable export of any message to any type
- **🔧 Custom processors** to filter, transform or enrich messages
- **⏱️ Time‐aligned resampling** (`last` | `nearest`)
- **🚀 Multi‐process** export with adjustable CPU usage
- **💾 JSON config** saving/loading for repeatable workflows
## Table of Contents
- [Introduction](#introduction)
- [Installation](#installation)
- [Prerequisites](#prerequisites)
- [From PyPI (via pip)](#from-pypi-via-pip)
- [From Source](#from-source)
- [Docker](#docker)
- [Quick Start](#quick-start)
- [GUI Mode](#gui-mode-recommended-for-first-time-users)
- [CLI Mode](#cli-mode-for-automation--scripting)
- [Documentation](#documentation)
- [Export Routines](docs/EXPORT_ROUTINES.md) - Built-in and custom export formats
- [Processors](docs/PROCESSORS.md) - Message transformation and filtering
- [Advanced Usage](docs/ADVANCED_USAGE.md) - Config files, resampling, CPU tuning
- [Acknowledgements](#acknowledgements)
## Introduction
<p align="center">
<a href="ros2_unbag/ui/assets/GUI.png">
<img src="ros2_unbag/ui/assets/GUI.png" alt="ros2 unbag GUI" style="max-width: 600px; width: 100%; height: auto;"/>
</a>
</p>
The integrated GUI makes it easy to visualize your bag structure, select topics, configure export formats, set up processor chains, and manage resampling—all through an interactive interface. For automation and scripting workflows, the full-featured CLI provides the same capabilities with command-line arguments or JSON configuration files.
It comes with export routines for [all message types](docs/EXPORT_ROUTINES.md) (sensor data, point clouds, images). You need a special file format or message type? Add your [own export plugin](docs/EXPORT_ROUTINES.md#custom-export-routines) for any ROS 2 message or format, and chain [custom processors](docs/PROCESSORS.md) to filter, transform or enrich messages (e.g. drop fields, compute derived values, remap frames).
Optional [resampling](docs/ADVANCED_USAGE.md#resampling) synchronizes your data streams around a chosen master topic—aligning each other topic either to its last‑known sample (“last”) or to the temporally closest sample (“nearest”)—so you get a consistent sample count in your exports.
For high‑throughput workflows, *ros2 unbag* can spawn multiple worker processes and lets you [tune CPU usage](docs/ADVANCED_USAGE.md#cpu-utilization). Your topic selections, processor chains, export parameters and resampling mode (last or nearest) can be saved to and loaded from a [JSON configuration](docs/ADVANCED_USAGE.md#configuration-files), ensuring reproducibility across runs.
Whether you prefer the **GUI for interactive exploration** or `ros2 unbag <args>` for automated pipelines, you have a flexible, extensible way to turn bag files into the data you need.
## Installation
### Prerequisites
Make sure you have a working ROS 2 installation (e.g., Humble, Iron, Jazzy, or newer) and that your environment is sourced:
```bash
source /opt/ros/<distro>/setup.bash
```
Replace `<distro>` with your ROS 2 distribution.
Install the required apt dependencies:
```bash
sudo apt update
sudo apt install libxcb-cursor0 libxcb-shape0 libxcb-icccm4 libxcb-keysyms1 libxkbcommon-x11-0
```
### From PyPI (via pip)
```bash
pip install ros2-unbag
```
### From source
```bash
git clone https://github.com/ika-rwth-aachen/ros2_unbag.git
cd ros2_unbag
pip install .
```
### Docker
You can skip local installs by running our ready‑to‑go Docker image:
```bash
docker pull ghcr.io/ika-rwth-aachen/ros2_unbag:latest
```
This image comes with ROS 2 Jazzy and *ros2 unbag* preinstalled. To launch it:
1. Clone or download the `docker/docker-compose.yml` in this repo.
2. Run:
```bash
docker-compose -f docker/docker-compose.yml up
```
3. If you need the GUI, first enable X11 forwarding on your host (at your own risk!):
```bash
xhost +local:
```
Then start the container as above—the GUI will appear on your desktop.
## Quick Start
*ros2 unbag* offers both an **intuitive GUI** for interactive workflows and a **powerful CLI** for automation and scripting.
### GUI Mode (Recommended for First-Time Users)
Launch the interactive graphical interface:
```bash
ros2 unbag
```
### CLI Mode (For Automation & Scripting)
Run the CLI tool by calling *ros2 unbag* with a path to a rosbag and an export config, consisting of one or more topic:format:[subdirectory] combinations:
```bash
ros2 unbag <path_to_rosbag> --export </topic:format[:subdir]>…
```
Alternatively you can load a config file. In this case you do not need any `--export` flag:
```bash
ros2 unbag <path_to_rosbag> --config <config.json>
```
the structure of config files is described in [here](./docs/ADVANCED_USAGE.md#configuration-file-structure).
In addition to these required flags, there are some optional flags. See the table below, for all possible flags:
| Flag | Value/Format | Description | Usage | Default |
| --------------------------- | ---------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------- | -------------- |
| **`bag`** | `<path>` | Path to ROS 2 bag file (`.db3` or `.mcap`). | CLI mode (required) | – |
| **`-e, --export`** | `/topic:format[:subdir]` | Topic → format export spec. Repeatable. | CLI mode (required or `--config`) | – |
| **`-o, --output-dir`** | `<directory>` | Base directory for all exports. | Optional | `.` |
| **`--naming`** | `<pattern>` | Filename pattern. Supports `%name`, `%index`, `%timestamp`, `%master_timestamp` (when resampling), and strftime (e.g. `%Y-%m-%d_%H-%M-%S`) using ROS timestamps | Optional | `%name_%index` |
| **`--resample`** | `/master:association[,discard_eps]`. | Time‑align to master topic. `association` = `last` or `nearest`; `nearest` needs a numeric `discard_eps`. | Optional | – |
| **`-p, --processing`** | `/topic:processor[:arg=value,…]` | Pre‑export processor spec; repeat to build ordered chains (executed in the order provided). | Optional | – |
| **`--cpu-percentage`** | `<float>` | % of cores for parallel export (0–100). Use `0` for single‑threaded. | Optional | `80.0` |
| **`--config`** | `<config.json>` | JSON config file path. Overrides all other args (except `bag`). | Optional | – |
| **`--gui`** | (flag) | Launch Qt GUI. If no `bag`/`--export`/`--config`, GUI is auto‑started. | Optional | `false` |
| **`--use-routine`** | `<file.py>` | Load a routine for this run only (no install). | Optional | – |
| **`--use-processor`** | `<file.py>` | Load a processor for this run only (no install). | Optional | – |
| **`--install-routine`** | `<file.py>` | Copy & register custom export routine. | Standalone | – |
| **`--install-processor`** | `<file.py>` | Copy & register custom processor. | Standalone | – |
| **`--uninstall-routine`** | (flag) | Interactive removal of an installed routine. | Standalone | - |
| **`--uninstall-processor`** | (flag) | Interactive removal of an installed processor. | Standalone | - |
| **`--help`** | (flag) | Show usage information and exit. | Standalone | - |
Example:
```bash
ros2 unbag rosbag/rosbag.mcap
--output-dir /docker-ros/ws/example/ --export /lidar/point_cloud:pointcloud/pcd:lidar --export /radar/point_cloud:pointcloud/pcd:radar --resample /lidar/point_cloud:last,0.2
```
⚠️ If you specify the `--config` option (e.g., `--config configs/my_config.json`), the tool will load all export settings from the given JSON configuration file. In this case, all other command-line options except `<path_to_rosbag>` are ignored, and the export process is fully controlled by the config file. The `<path_to_rosbag>` is always required in CLI use.
## Documentation
For detailed information on advanced features, see the following guides:
- **[Export Routines](docs/EXPORT_ROUTINES.md)** - Complete guide to built-in export formats (images, point clouds, CSV, JSON, etc.) and creating custom export routines
- **[Processors](docs/PROCESSORS.md)** - Message transformation and filtering, including processor chains and custom processors
- **[Advanced Usage](docs/ADVANCED_USAGE.md)** - Configuration files, resampling strategies, and CPU utilization tuning
## Acknowledgements
This research is accomplished within the following research projects:
| Project | Funding Source | |
|---------|----------------|:----:|
| <a href="https://www.ika.rwth-aachen.de/de/kompetenzen/projekte/automatisiertes-fahren/4-cad.html"><img src="https://www.ika.rwth-aachen.de/images/projekte/4cad/4cad-logo.svg" alt="4-CAD" height="40"/></a> | Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) DFG Proj. Nr. 503852364 | <p align="center"><img src="https://www.ika.rwth-aachen.de/images/foerderer/dfg.svg" height="50"/></p> |
| <a href="https://iexoddus-project.eu/"><img src="https://www.ika.rwth-aachen.de/images/projekte/iexoddus/iEXODDUS%20Logo%20color.svg" alt="iEXXODUS" height="40"/></a> | Funded by the European Union’s Horizon Europe Research and Innovation Programme under Grant Agreement No 101146091 | <p align="center"><img src="https://www.ika.rwth-aachen.de/images/foerderer/eu.svg" height="50"/></p> |
| <a href="https://synergies-ccam.eu/"><img src="https://www.ika.rwth-aachen.de/images/projekte/synergies/SYNERGIES_Logo%201.png" alt="SYNERGIES" height="40"/></a> | Funded by the European Union’s Horizon Europe Research and Innovation Programme under Grant Agreement No 101146542 | <p align="center"><img src="https://www.ika.rwth-aachen.de/images/foerderer/eu.svg" height="50"/></p> |
## Notice
> [!IMPORTANT]
> This repository is open-sourced and maintained by the [**Institute for Automotive Engineering (ika) at RWTH Aachen University**](https://www.ika.rwth-aachen.de/).
> We cover a wide variety of research topics within our [*Vehicle Intelligence & Automated Driving*](https://www.ika.rwth-aachen.de/en/competences/fields-of-research/vehicle-intelligence-automated-driving.html) domain.
> If you would like to learn more about how we can support your automated driving or robotics efforts, feel free to reach out to us!
> :email: ***opensource@ika.rwth-aachen.de***
| text/markdown | null | Lukas Ostendorf <lukas.ostendorf@ika.rwth-aachen.de> | null | Lukas Ostendorf <lukas.ostendorf@ika.rwth-aachen.de> | MIT License
Copyright (c) 2025 Institute for Automotive Engineering (ika), RWTH Aachen University
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | ros2, rosbag, robotics, perception, ros2-export | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy==1.26.4",
"opencv-python-headless==4.11.0.86",
"pypcd4==1.3.0",
"PySide6==6.9.1",
"pyyaml==6.0.2",
"tqdm==4.67.1",
"pytest>=7.0; extra == \"test\"",
"pytest-cov>=4.0; extra == \"test\""
] | [] | [] | [] | [
"homepage, https://github.com/ika-rwth-aachen/ros2_unbag"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-18T14:54:24.028131 | ros2_unbag-1.2.3.tar.gz | 625,372 | 69/ce/0b108b0b4908b95516925e3785131164443238a1244875128a5de9f41a05/ros2_unbag-1.2.3.tar.gz | source | sdist | null | false | 115c232f2a59ee3627a927b7709d14c3 | 88a31b5165d3f7ff935e1556dcfa264a87d5fda3cbbaa7bf128c52753732739f | 69ce0b108b0b4908b95516925e3785131164443238a1244875128a5de9f41a05 | null | [
"LICENSE",
"LICENSE-3rdparty"
] | 289 |
2.4 | circuitpython-keymanager | 1.1.1 | Tools to manage notes in musical applications. Includes note priority, arpeggiation, and sequencing. | Introduction
============
.. image:: https://readthedocs.org/projects/circuitpython-keymanager/badge/?version=latest
:target: https://circuitpython-keymanager.readthedocs.io/
:alt: Documentation Status
.. image:: https://img.shields.io/discord/327254708534116352.svg
:target: https://adafru.it/discord
:alt: Discord
.. image:: https://github.com/relic-se/CircuitPython_KeyManager/workflows/Build%20CI/badge.svg
:target: https://github.com/relic-se/CircuitPython_KeyManager/actions
:alt: Build Status
.. image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json
:target: https://github.com/astral-sh/ruff
:alt: Code Style: Ruff
Tools to manage notes in musical applications. Includes note priority, arpeggiation, and sequencing.
Dependencies
=============
This driver depends on:
* `Adafruit CircuitPython <https://github.com/adafruit/circuitpython>`_
Please ensure all dependencies are available on the CircuitPython filesystem.
This is easily achieved by downloading
`the Adafruit library and driver bundle <https://circuitpython.org/libraries>`_
or individual libraries can be installed using
`circup <https://github.com/adafruit/circup>`_.
Installing from PyPI
=====================
.. note:: This library is not available on PyPI yet. Install documentation is included
as a standard element. Stay tuned for PyPI availability!
On supported GNU/Linux systems like the Raspberry Pi, you can install the driver locally `from
PyPI <https://pypi.org/project/circuitpython-keymanager/>`_.
To install for current user:
.. code-block:: shell
pip3 install circuitpython-keymanager
To install system-wide (this may be required in some cases):
.. code-block:: shell
sudo pip3 install circuitpython-keymanager
To install in a virtual environment in your current project:
.. code-block:: shell
mkdir project-name && cd project-name
python3 -m venv .venv
source .env/bin/activate
pip3 install circuitpython-keymanager
Installing to a Connected CircuitPython Device with Circup
==========================================================
Make sure that you have ``circup`` installed in your Python environment.
Install it with the following command if necessary:
.. code-block:: shell
pip3 install circup
With ``circup`` installed and your CircuitPython device connected use the
following command to install:
.. code-block:: shell
circup install relic_keymanager
Or the following command to update an existing version:
.. code-block:: shell
circup update
Usage Example
=============
.. code-block:: python
from relic_keymanager import Keyboard
keyboard = Keyboard()
keyboard.on_voice_press = lambda voice: print(f"Pressed: {voice.note.notenum:d}")
keyboard.on_voice_release = lambda voice: print(f"Released: {voice.note.notenum:d}")
for i in range(1, 4):
keyboard.append(i)
for i in range(3, 0, -1):
keyboard.remove(i)
Documentation
=============
API documentation for this library can be found on `Read the Docs <https://circuitpython-keymanager.readthedocs.io/>`_.
For information on building library documentation, please check out
`this guide <https://learn.adafruit.com/creating-and-sharing-a-circuitpython-library/sharing-our-docs-on-readthedocs#sphinx-5-1>`_.
Contributing
============
Contributions are welcome! Please read our `Code of Conduct
<https://github.com/relic-se/CircuitPython_KeyManager/blob/HEAD/CODE_OF_CONDUCT.md>`_
before contributing to help this project stay welcoming.
| text/x-rst | null | Cooper Dalrymple <me@dcdalrymple.com> | null | null | MIT | adafruit, blinka, circuitpython, micropython, relic_keymanager, synthesizer, music, keyboard, arpeggiator, sequencer, midi, notes | [
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Embedded Systems",
"Topic :: System :: Hardware",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3"
] | [] | null | null | null | [] | [] | [] | [
"Adafruit-Blinka",
"asyncio",
"Adafruit-CircuitPython-Debouncer; extra == \"optional\""
] | [] | [] | [] | [
"Homepage, https://github.com/relic-se/CircuitPython_KeyManager"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T14:54:05.782134 | circuitpython_keymanager-1.1.1.tar.gz | 31,389 | de/3b/032ee6b01dddfdecc1c2dfa7b3232383a47ed6f08067bb647c45b8e624a6/circuitpython_keymanager-1.1.1.tar.gz | source | sdist | null | false | e8120b4df2eb1cef4ca7c17fda8162bf | cd0074260b1ced9386bcfb0711b159aebb7fca5de828b43e535d883660d085c8 | de3b032ee6b01dddfdecc1c2dfa7b3232383a47ed6f08067bb647c45b8e624a6 | null | [
"LICENSE"
] | 282 |
2.4 | nima_io | 0.4.2 | A project to read microscopy files. | # NImA-io
[](https://pypi.org/project/nima_io/)
[](https://github.com/darosio/nima_io/actions/workflows/ci.yml)
[](https://codecov.io/gh/darosio/nima_io)
[](https://darosio.github.io/nima_io/)
This is a helper library designed for reading microscopy data supported by
[Bioformats](https://www.openmicroscopy.org/bio-formats/) using Python. The
package also includes a command-line interface for assessing differences between
images.
## Features / Description
Despite the comprehensive python-bioformats package, Bioformats reading in
Python is not flawless. To assess correct reading and performance, I gathered a
set of test input files from real working data and established various
approaches for reading them:
1. Utilizing the external "showinf" and parsing the generated XML metadata.
1. Employing out-of-the-box python-bioformats.
1. Leveraging bioformats through the Java API.
1. Combining python-bioformats with Java for metadata (Download link: bio-formats 5.9.2).
At present, Solution No. 4 appears to be the most effective.
It's important to note that FEI files are not 100% OME compliant, and
understanding OME metadata can be challenging. For instance, metadata.getXXX is
sometimes equivalent to
metadata.getRoot().getImage(i).getPixels().getPlane(index).
The use of parametrized tests enhances clarity and consistency. The approach of
returning a wrapper to a Bioformats reader enables memory-mapped (a la memmap)
operations.
Notebooks are included in the documentation tutorials to aid development and
illustrate usage. Although there was an initial exploration of the TileStitch
Java class, the decision was made to implement TileStitcher in Python.
Future improvements can be implemented in the code, particularly for the
multichannel OME standard example, which currently lacks obj or resolutionX
metadata. Additionally, support for various instrument, experiment, or plate
metadata can be considered in future updates.
## Installation
System requirements:
- maven
### From PyPI
Using pip:
```
pip install nima_io
```
### Recommended: Using pipx
For isolated installation (recommended):
```
pipx install nima_io
```
### Shell Completion
#### Bash
```bash
_IMGDIFF_COMPLETE=bash_source imgdiff > ~/.local/bin/imgdiff-complete.bash
source ~/.local/bin/imgdiff-complete.bash
# Add to your ~/.bashrc to make it permanent:
echo 'source ~/.local/bin/ingdiff-complete.bash' >> ~/.bashrc
```
#### Fish:
```bash
_IMGDIFF_COMPLETE=fish_source imgdiff | source
# Add to fish config to make it permanent:
_IMGDIFF_COMPLETE=fish_source imgdiff > ~/.config/fish/completions/imgdiff.fish
```
## Usage
Docs: https://{{ cookiecutter.project_slug }}.readthedocs.io/
### CLI
```bash
imgdiff --help
```
### Python
```python
from nima_io import read
```
## Development
Requires Python `uv`.
With uv:
```bash
# one-time
pre-commit install
# dev tools and deps
uv sync --group dev
# lint/test
uv run ruff check . (or: make lint)
uv run pytest -q (or: make test)
```
### Update and initialize submodules
```
git submodule update --init --recursive
```
Navigate to the tests/data/ directory:
```
cd tests/data/
git co master
```
Configure Git Annex for SSH caching:
```
git config annex.sshcaching true
```
Pull the necessary files using Git Annex:
```
git annex pull
```
These commands set up the development environment and fetch the required data for testing.
Modify tests/data.filenames.txt and tests/data.filenames.md5 as needed and run:
```
cd tests
./data.filenames.sh
```
We use Renovate to keep dependencies current.
## Dependency updates (Renovate)
Enable Renovate:
1. Install the GitHub App: https://github.com/apps/renovate (Settings → Integrations → GitHub Apps → Configure → select this repo/org).
1. This repo includes a `renovate.json` policy. Renovate will open a “Dependency Dashboard” issue and PRs accordingly.
Notes:
- Commit style: `build(deps): bump <dep> from <old> to <new>`
- Pre-commit hooks are grouped and labeled; Python version bumps in `pyproject.toml` are disabled by policy.
Migrating from Dependabot:
- You may keep “Dependabot alerts” ON for vulnerability visibility, but disable Dependabot security PRs.
## Template updates (Cruft)
This project is linked to its Cookiecutter template with Cruft.
- Check for updates: `cruft check`
- Apply updates: `cruft update -y` (resolve conflicts, then commit)
CI runs a weekly job to open a PR when template updates are available.
First-time setup if you didn’t generate with Cruft:
```bash
pipx install cruft # or: pip install --user cruft
cruft link --checkout main https://github.com/darosio/cookiecutter-python.git
```
Notes:
- The CI workflow skips if `.cruft.json` is absent.
- If you maintain a stable template branch (e.g., `v1`), link with `--checkout v1`. You can also update within that line using `cruft update -y --checkout v1`.
## License
We use a shared copyright model that enables all contributors to maintain the
copyright on their contributions.
All code is licensed under the terms of the [revised BSD license](LICENSE.txt).
## Contributing
If you are interested in contributing to the project, please read our
[contributing](https://darosio.github.io/nima_io/references/contributing.html)
and
[development environment](https://darosio.github.io/nima_io/references/development.html)
guides, which outline the guidelines and conventions that we follow for
contributing code, documentation, and other resources.
| text/markdown | null | Daniele Arosio <darosio@duck.com> | null | null | null | Bioimage, Image Analysis, Metadata, Open Microscopy, Tiled Images | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python... | [] | null | null | >=3.11 | [] | [] | [] | [
"bioio-bioformats>=1.3.2",
"bioio-lif>=1.4.2",
"bioio-ome-tiff>=1.4.0",
"bioio-tifffile>=1.3.0",
"bioio>=3.2.0",
"click>=8.3.1",
"numpy>=2.4.2",
"ome-types>=0.6.3",
"xarray>=2026.2.0",
"autodocsumm>=0.2.14; extra == \"docs\"",
"bioio; extra == \"docs\"",
"ipykernel>=7.1.0; extra == \"docs\"",
... | [] | [] | [] | [
"Bug Tracker, https://github.com/darosio/nima_io/issues",
"Changelog, https://github.com/darosio/nima_io/blob/main/CHANGELOG.md",
"Documentation, https://nima-io.readthedocs.io",
"Github releases, https://github.com/darosio/nima_io/releases",
"Homepage, https://github.com/darosio/nima_io"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T14:53:08.989878 | nima_io-0.4.2.tar.gz | 334,298 | 05/e5/54849e1c37be6c205478edcec396c6615dd05ede18e32af758a042fd44f5/nima_io-0.4.2.tar.gz | source | sdist | null | false | f72b11e09dc69e557d93fece580431e3 | ad712ef689fa8173427cc44a1510cf72f608ac757c8285b3a0ee21cf1e3094fb | 05e554849e1c37be6c205478edcec396c6615dd05ede18e32af758a042fd44f5 | BSD-3-Clause | [
"LICENSE.txt"
] | 0 |
2.4 | pimadesp | 0.0.4 | TODO | # pimadesp
TODO
| text/markdown | null | Kayce Basques <kaycebasques@gmail.com> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"sphinx",
"pagefind[extended]>=1.0.0; extra == \"pagefind\"",
"pytest; extra == \"test\"",
"pytest-playwright==0.7.2; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/kaycebasques/pimadesp",
"Issues, https://github.com/kaycebasques/pimadesp/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T14:51:57.801779 | pimadesp-0.0.4.tar.gz | 212,738 | 25/54/a264f41fb1b1d1f33b844469dcee6c89b698cfff6dd5d3bb35e5a4efa5fd/pimadesp-0.0.4.tar.gz | source | sdist | null | false | 3bac4061c11631d7bf0e41bcb6d7eb72 | 65fb38c2c9f276336b344e2c486ed679965f65385551971969edd1af8d38d929 | 2554a264f41fb1b1d1f33b844469dcee6c89b698cfff6dd5d3bb35e5a4efa5fd | Apache-2.0 | [] | 257 |
2.4 | mcp-ambari-api | 3.6.3 | Model Context Protocol (MCP) server for Apache Ambari API integration. Provides comprehensive tools for managing Hadoop clusters including service operations, configuration management, status monitoring, and request tracking. | # MCP Ambari API - Apache Hadoop Cluster Management Automation
> **🚀 Automate Apache Ambari operations with AI/LLM**: Conversational control for Hadoop cluster management, service monitoring, configuration inspection, and precise Ambari Metrics queries via Model Context Protocol (MCP) tools.
---
[](https://opensource.org/licenses/MIT)


[](https://smithery.ai/server/@call518/mcp-ambari-api)
[](https://mseep.ai/app/2fd522d4-863d-479d-96f7-e24c7fb531db)
[](https://www.buymeacoffee.com/call518)
[](https://github.com/call518/MCP-Ambari-API/actions/workflows/pypi-publish.yml)


---
## Architecture & Internal (DeepWiki)
[](https://deepwiki.com/call518/MCP-Ambari-API)
---
## 📋 Overview
**MCP Ambari API** is a powerful Model Context Protocol (MCP) server that enables seamless Apache Ambari cluster management through natural language commands. Built for DevOps engineers, data engineers, and system administrators who work with Hadoop ecosystems.
### Features
- ✅ **Interactive Ambari Operations Hub** – Provides an MCP-based foundation for querying and managing services through natural language instead of console or UI interfaces.
- ✅ **Real-time Cluster Visibility** – Comprehensive view of key metrics including service status, host details, alert history, and ongoing requests in a single interface.
- ✅ **Metrics Intelligence Pipeline** – Dynamically discovers and filters AMS appIds and metric names, connecting directly to time-series analysis workflows.
- ✅ **Automated Operations Workflow** – Consolidates repetitive start/stop operations, configuration checks, user queries, and request tracking into consistent scenarios.
- ✅ **Built-in Operational Reports** – Instantly delivers dfsadmin-style HDFS reports, service summaries, and capacity metrics through LLM or CLI interfaces.
- ✅ **Safety Guards and Guardrails** – Requires user confirmation before large-scale operations and provides clear guidance for risky commands through prompt templates.
- ✅ **LLM Integration Optimization** – Includes natural language examples, parameter mapping, and usage guides to ensure stable AI agent operations.
- ✅ **Flexible Deployment Models** – Supports stdio/streamable-http transport, Docker Compose, and token authentication for deployment across development and production environments.
- ✅ **Performance-Oriented Caching Architecture** – Built-in AMS metadata cache and request logging ensure fast responses even in large-scale clusters.
- ✅ **Scalable Code Architecture** – Asynchronous HTTP, structured logging, and modularized tool layers enable easy addition of new features.
- ✅ **Production-Validated** – Based on tools validated in test Ambari clusters, ready for immediate use in production environments.
- ✅ **Diversified Deployment Channels** – Available through PyPI packages, Docker images, and other preferred deployment methods.
### Docuement for Airflow REST-API
- [Ambari API Documents](https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md)
## Topics
`apache-ambari` `hadoop-cluster` `mcp-server` `cluster-automation` `devops-tools` `big-data` `infrastructure-management` `ai-automation` `llm-tools` `python-mcp`
---
## Example Queries - Cluster Info/Status
### [Go to More Example Queries](./src/mcp_ambari_api/prompt_template.md#9-example-queries)
---

---

---
## 🚀 QuickStart Guide /w Docker
> **Note:** The following instructions assume you are using the `streamable-http` mode for MCP Server.
### Flow Diagram of Quickstart/Tutorial

### 1. Prepare Ambari Cluster (Test Target)
To set up a Ambari Demo cluster, follow the guide at: [Install Ambari 3.0 with Docker](https://medium.com/@call518/install-ambari-3-0-with-docker-297a8bb108c8)

### 2. Run Docker-Compose
Start the `MCP-Server`, `MCPO`(MCP-Proxy for OpenAPI), and `OpenWebUI`.
1. Ensure Docker and Docker Compose are installed on your system.
1. Clone this repository and navigate to its root directory.
1. **Set up environment configuration:**
```bash
# Copy environment template and configure your settings
cp .env.example .env
# Edit .env with your Ambari cluster information
```
1. **Configure your Ambari connection in `.env` file:**
```bash
# Ambari cluster connection
AMBARI_HOST=host.docker.internal
AMBARI_PORT=7070
AMBARI_USER=admin
AMBARI_PASS=admin
AMBARI_CLUSTER_NAME=TEST-AMBARI
# Ambari Metrics (AMS) collector
AMBARI_METRICS_HOST=host.docker.internal
AMBARI_METRICS_PORT=16188
AMBARI_METRICS_PROTOCOL=http
AMBARI_METRICS_TIMEOUT=15
# (Optional) Enable authentication for streamable-http mode
# Recommended for production environments
REMOTE_AUTH_ENABLE=false
REMOTE_SECRET_KEY=your-secure-secret-key-here
```
1. Run:
```bash
docker-compose up -d
```
- OpenWebUI will be available at: `http://localhost:${DOCKER_EXTERNAL_PORT_OPENWEBUI}` (default: 3001)
- The MCPO-Proxy will be accessible at: `http://localhost:${DOCKER_EXTERNAL_PORT_MCPO_PROXY}` (default: 8001)
- The MCPO API Docs: `http://localhost:${DOCKER_EXTERNAL_PORT_MCPO_PROXY}/mcp-ambari-api/docs`

### 3. Registering the Tool in OpenWebUI
> 📌 **Note**: Web-UI configuration instructions are based on OpenWebUI **v0.6.22**. Menu locations and settings may differ in newer versions.
1. logging in to OpenWebUI with an admin account
1. go to "Settings" → "Tools" from the top menu.
1. Enter the `mcp-ambari-api` Tool address (e.g., `http://localhost:8000/mcp-ambari-api`) to connect MCP Tools with your Ambari cluster.
### 4. More Examples: Using MCP Tools to Query Ambari Cluster
Below is an example screenshot showing how to query the Ambari cluster using MCP Tools in OpenWebUI:
#### Example Query - Cluster Configuration Review & Recommendations

#### Example Query - Restart HDFS Service


---
## 📈 Metrics & Trends
- **Terminology quick reference**
- **appId**: Ambari Metrics Service groups every metric under an application identifier (e.g., `namenode`, `datanode`, `ambari_server`, `HOST`). Think of it as the component or service emitting that timeseries.
- **metric name**: The fully qualified string Ambari uses for each timeseries (e.g., `jvm.JvmMetrics.MemHeapUsedM`, `dfs.datanode.BytesWritten`). Exact names are required when querying AMS.
- `list_common_metrics_catalog`: keyword search against the live metadata-backed metric catalog (cached locally). Use `search="heap"` or similar to narrow suggestions before running a time-series query.
_Example_: “Show the heap-related metrics available for the NameNode appId.”
- `list_ambari_metric_apps`: list discovered AMS `appId` values, optionally including metric counts; pass `refresh=true` or `limit` to control output.
_Example_: “List every appId currently exposed by AMS.”
- The natural-language query “AMS에서 사용 가능한 appId 목록만 보여줘” maps to `list_ambari_metric_apps` and returns the exact identifiers you can copy into other tools.
- `list_ambari_metrics_metadata`: raw AMS metadata explorer (supports `app_id`, `metric_name_filter`, `host_filter`, `search`, adjustable `limit`, default 50).
_Example_: “Give me CPU-related metric metadata under HOST.”
- `query_ambari_metrics`: fetch time-series data; the tool auto-selects curated metric names, falls back to metadata search when needed, and honors Ambari's default precision unless you explicitly supply `precision="SECONDS"`, etc.
_Examples_: “Plot the last 30 minutes of `jvm.JvmMetrics.MemHeapUsedM` for the NameNode.” / “Compare `jvm.JvmMetrics.MemHeapUsedM` for DataNode hosts `bigtop-hostname0.demo.local` and `bigtop-hostname1.demo.local` over the past 30 minutes.”
- `hdfs_dfadmin_report`: produce a DFSAdmin-style capacity/DataNode summary (mirrors `hdfs dfsadmin -report`).
**Live Metric Catalog (via AMS metadata)**
- Metric names are discovered on demand from `/ws/v1/timeline/metrics/metadata` and cached for quick reuse.
- Use `list_common_metrics_catalog` or the `ambari-metrics://catalog/all` resource (append `?refresh=true` to bypass the cache) to inspect the latest `appId → metric` mapping. Query `ambari-metrics://catalog/apps` to list appIds or `ambari-metrics://catalog/<appId>` for a single app.
- Typical appIds include `ambari_server`, `namenode`, `datanode`, `nodemanager`, `resourcemanager`, and `HOST`, but the list adapts to whatever the Ambari Metrics service advertises in your cluster.
---
## 🔍 Ambari Metrics Query Requirements (Exact-Match Workflow)
Recent updates removed natural-language metric guessing in favor of deterministic, catalog-driven lookups. Keep the following rules in mind when you (or an LLM agent) call `query_ambari_metrics`:
1. **Always pass an explicit `app_id`.** If it is missing or unsupported, the tool returns a list of valid appIds and aborts so you can choose one manually.
2. **Specify exact metric names.** Use `list_common_metrics_catalog(app_id="<target>", search="keyword")`, `list_ambari_metric_apps` (to discover appIds), or the `ambari-metrics://catalog/<appId>` resource to browse the live per-app metric set and copy the identifier (e.g., `jvm.JvmMetrics.MemHeapUsedM`).
3. **Host-scope behavior**: When `hostnames` is omitted the API returns cluster-wide aggregates. Provide one or more hosts (comma-separated) to focus on specific nodes.
4. **No fuzzy matches.** The server now calls Ambari exactly as requested. If the metric is wrong or empty, Ambari will simply return no datapoints—double-check the identifier via `/ws/v1/timeline/metrics/metadata`.
Example invocation:
```plaintext
query_ambari_metrics(
metric_names="jvm.JvmMetrics.MemHeapUsedM",
app_id="nodemanager",
duration="1h",
group_by_host=true
)
```
For multi-metric lookups, pass a comma-separated list of exact names. Responses document any auto-applied host filters so you can copy/paste them into subsequent requests.
---
## 🐛 Usage & Configuration
This MCP server supports two connection modes: **stdio** (traditional) and **streamable-http** (Docker-based). You can configure the transport mode using CLI arguments or environment variables.
**Configuration Priority:** CLI arguments > Environment variables > Default values
### CLI Arguments
- `--type` (`-t`): Transport type (`stdio` or `streamable-http`) - Default: `stdio`
- `--host`: Host address for HTTP transport - Default: `127.0.0.1`
- `--port` (`-p`): Port number for HTTP transport - Default: `8000`
- `--auth-enable`: Enable Bearer token authentication for streamable-http mode - Default: `false`
- `--secret-key`: Secret key for Bearer token authentication (required when auth enabled)
### Environment Variables
| Variable | Description | Default | Project Default |
|----------|-------------|---------|-----------------|
| `PYTHONPATH` | Python module search path for MCP server imports | - | `/app/src` |
| `MCP_LOG_LEVEL` | Server logging verbosity (DEBUG, INFO, WARNING, ERROR) | `INFO` | `INFO` |
| `FASTMCP_TYPE` | MCP transport protocol (stdio for CLI, streamable-http for web) | `stdio` | `streamable-http` |
| `FASTMCP_HOST` | HTTP server bind address (0.0.0.0 for all interfaces) | `127.0.0.1` | `0.0.0.0` |
| `FASTMCP_PORT` | HTTP server port for MCP communication | `8000` | `8000` |
| `REMOTE_AUTH_ENABLE` | Enable Bearer token authentication for streamable-http mode<br/>**Default: false** (if undefined, empty, or null) | `false` | `false` |
| `REMOTE_SECRET_KEY` | Secret key for Bearer token authentication<br/>**Required when REMOTE_AUTH_ENABLE=true** | - | `your-secret-key-here` |
| `AMBARI_HOST` | Ambari server hostname or IP address | `127.0.0.1` | `host.docker.internal` |
| `AMBARI_PORT` | Ambari server port number | `8080` | `8080` |
| `AMBARI_USER` | Username for Ambari server authentication | `admin` | `admin` |
| `AMBARI_PASS` | Password for Ambari server authentication | `admin` | `admin` |
| `AMBARI_CLUSTER_NAME` | Name of the target Ambari cluster | `TEST-AMBARI` | `TEST-AMBARI` |
| `DOCKER_EXTERNAL_PORT_OPENWEBUI` | Host port mapping for Open WebUI container | `8080` | `3001` |
| `DOCKER_EXTERNAL_PORT_MCP_SERVER` | Host port mapping for MCP server container | `8080` | `18001` |
| `DOCKER_EXTERNAL_PORT_MCPO_PROXY` | Host port mapping for MCPO proxy container | `8000` | `8001` |
**Note**: `AMBARI_CLUSTER_NAME` serves as the default target cluster for operations when no specific cluster is specified. All environment variables can be configured via the `.env` file.
**Transport Selection Logic:**
**Configuration Priority:** CLI arguments > Environment variables > Default values
**Transport Selection Logic:**
- **CLI Priority**: `--type streamable-http --host 0.0.0.0 --port 18001`
- **Environment Priority**: `FASTMCP_TYPE=streamable-http FASTMCP_HOST=0.0.0.0 FASTMCP_PORT=18001`
- **Legacy Support**: `FASTMCP_PORT=18001` (automatically enables streamable-http mode)
- **Default**: `stdio` mode when no configuration is provided
### Environment Setup
```bash
# 1. Clone the repository
git clone https://github.com/call518/MCP-Ambari-API.git
cd MCP-Ambari-API
# 2. Set up environment configuration
cp .env.example .env
# 3. Configure your Ambari connection in .env file
AMBARI_HOST=your-ambari-host
AMBARI_PORT=your-ambari-port
AMBARI_USER=your-username
AMBARI_PASS=your-password
AMBARI_CLUSTER_NAME=your-cluster-name
```
---
## 🔐 Security & Authentication
### Bearer Token Authentication
For `streamable-http` mode, this MCP server supports Bearer token authentication to secure remote access. This is especially important when running the server in production environments.
#### Configuration
**Enable Authentication:**
```bash
# In .env file
REMOTE_AUTH_ENABLE=true
REMOTE_SECRET_KEY=your-secure-secret-key-here
```
**Or via CLI:**
```bash
python -m mcp_ambari_api --type streamable-http --auth-enable --secret-key your-secure-secret-key-here
```
#### Security Levels
1. **stdio mode** (Default): Local-only access, no authentication needed
2. **streamable-http + REMOTE_AUTH_ENABLE=false/undefined**: Remote access without authentication ⚠️ **NOT RECOMMENDED for production**
3. **streamable-http + REMOTE_AUTH_ENABLE=true**: Remote access with Bearer token authentication ✅ **RECOMMENDED for production**
> **🔒 Default Policy**: `REMOTE_AUTH_ENABLE` defaults to `false` if undefined, empty, or null. This ensures the server starts even without explicit authentication configuration.
#### Client Configuration
When authentication is enabled, MCP clients must include the Bearer token in the Authorization header:
```json
{
"mcpServers": {
"mcp-ambari-api": {
"type": "streamable-http",
"url": "http://your-server:8000/mcp",
"headers": {
"Authorization": "Bearer your-secure-secret-key-here"
}
}
}
}
```
#### Security Best Practices
- **Always enable authentication** when using streamable-http mode in production
- **Use strong, randomly generated secret keys** (32+ characters recommended)
- **Use HTTPS** when possible (configure reverse proxy with SSL/TLS)
- **Restrict network access** using firewalls or network policies
- **Rotate secret keys regularly** for enhanced security
- **Monitor access logs** for unauthorized access attempts
#### Error Handling
When authentication fails, the server returns:
- **401 Unauthorized** for missing or invalid tokens
- **Detailed error messages** in JSON format for debugging
---
### Method 1: Local MCP (transport="stdio")
```json
{
"mcpServers": {
"mcp-ambari-api": {
"command": "uvx",
"args": ["--python", "3.12", "mcp-ambari-api"],
"env": {
"AMBARI_HOST": "host.docker.internal",
"AMBARI_PORT": "8080",
"AMBARI_USER": "admin",
"AMBARI_PASS": "admin",
"AMBARI_CLUSTER_NAME": "TEST-AMBARI",
"MCP_LOG_LEVEL": "INFO"
}
}
}
}
```
### Method 2: Remote MCP (transport="streamable-http")
**On MCP-Client Host:**
```json
{
"mcpServers": {
"mcp-ambari-api": {
"type": "streamable-http",
"url": "http://localhost:18001/mcp"
}
}
}
```
**With Bearer Token Authentication (Recommended for production):**
```json
{
"mcpServers": {
"mcp-ambari-api": {
"type": "streamable-http",
"url": "http://localhost:18001/mcp",
"headers": {
"Authorization": "Bearer your-secure-secret-key-here"
}
}
}
}
```
---
## Example usage: Claude-Desktop
**claude_desktop_config.json**
```json
{
"mcpServers": {
"mcp-ambari-api": {
"command": "uvx",
"args": ["--python", "3.12", "mcp-ambari-api"],
"env": {
"AMBARI_HOST": "localhost",
"AMBARI_PORT": "7070",
"AMBARI_USER": "admin",
"AMBARI_PASS": "admin",
"AMBARI_CLUSTER_NAME": "TEST-AMBARI",
"MCP_LOG_LEVEL": "INFO"
}
}
}
}
```

**(Option) Configure Multiple Ambari Cluster**
```json
{
"mcpServers": {
"Ambari-Cluster-A": {
"command": "uvx",
"args": ["--python", "3.12", "mcp-ambari-api"],
"env": {
"AMBARI_HOST": "a.foo.com",
"AMBARI_PORT": "8080",
"AMBARI_USER": "admin-user",
"AMBARI_PASS": "admin-pass",
"AMBARI_CLUSTER_NAME": "AMBARI-A",
"MCP_LOG_LEVEL": "INFO"
}
},
"Ambari-Cluster-B": {
"command": "uvx",
"args": ["--python", "3.12", "mcp-ambari-api"],
"env": {
"AMBARI_HOST": "b.bar.com",
"AMBARI_PORT": "8080",
"AMBARI_USER": "admin-user",
"AMBARI_PASS": "admin-pass",
"AMBARI_CLUSTER_NAME": "AMBARI-B",
"MCP_LOG_LEVEL": "INFO"
}
}
}
}
```
**Remote Access with Authentication (Claude Desktop):**
```json
{
"mcpServers": {
"mcp-ambari-api-remote": {
"type": "streamable-http",
"url": "http://your-server-ip:18001/mcp",
"headers": {
"Authorization": "Bearer your-secure-secret-key-here"
}
}
}
}
```
---
## 🎯 Core Features & Capabilities
### Service Operations
- **Hadoop Service Management**: Start, stop, restart HDFS, YARN, Spark, HBase, and more
- **Bulk Operations**: Control all cluster services simultaneously
- **Status Monitoring**: Real-time service health and performance tracking
### Configuration Management
- **Unified Config Tool**: Single interface for all configuration types (yarn-site, hdfs-site, etc.)
- **Bulk Configuration**: Export and manage multiple configurations with filtering
- **Configuration Validation**: Syntax checking and validation before applying changes
### Monitoring & Alerting
- **Real-time Alerts**: Current and historical cluster alerts with filtering
- **Request Tracking**: Monitor long-running operations with detailed progress
- **Host Monitoring**: Hardware metrics, component states, and resource utilization
### Administration
- **User Management**: Check cluster user administration
- **Host Management**: Node registration, component assignments, and health monitoring
---
## Available MCP Tools
This MCP server provides the following tools for Ambari cluster management:
### Cluster Management
- `get_cluster_info` - Retrieve basic cluster information and status
- `get_active_requests` - List currently active/running operations
- `get_request_status` - Check status and progress of specific requests
### Service Management
- `get_cluster_services` - List all services with their status
- `get_service_status` - Get detailed status of a specific service
- `get_service_components` - List components and host assignments for a service
- `get_service_details` - Get comprehensive service information
- `start_service` - Start a specific service
- `stop_service` - Stop a specific service
- `restart_service` - Restart a specific service
- `start_all_services` - Start all services in the cluster
- `stop_all_services` - Stop all services in the cluster
- `restart_all_services` - Restart all services in the cluster
### Configuration Tools
- `dump_configurations` - Unified configuration tool (replaces `get_configurations`, `list_configurations`, and the former internal `dump_all_configurations`). Supports:
- Single type: `dump_configurations(config_type="yarn-site")`
- Bulk summary: `dump_configurations(summarize=True)`
- Filter by substring (type or key): `dump_configurations(filter="memory")`
- Service filter (narrow types by substring): `dump_configurations(service_filter="yarn", summarize=True)`
- Keys only (no values): `dump_configurations(include_values=False)`
- Limit number of types: `dump_configurations(limit=10, summarize=True)`
> Breaking Change: `get_configurations` and `list_configurations` were removed in favor of this single, more capable tool.
### Host Management
- `list_hosts` - List all hosts in the cluster
- `get_host_details` - Get detailed information for specific or all hosts (includes component states, hardware metrics, and service assignments)
### User Management
- `list_users` - List all users in the Ambari system with their usernames and API links
- `get_user` - Get detailed information about a specific user including:
- Basic profile (ID, username, display name, user type)
- Status information (admin privileges, active status, login failures)
- Authentication details (LDAP user status, authentication sources)
- Group memberships, privileges, and widget layouts
### Alert Management
- `get_alerts_history` - **Unified alert tool** for both current and historical alerts:
- **Current mode** (`mode="current"`): Retrieve current/active alerts with real-time status
- Current alert states across cluster, services, or hosts
- Maintenance mode filtering (ON/OFF)
- Summary formats: basic summary and grouped by definition
- Detailed alert information including timestamps and descriptions
- **History mode** (`mode="history"`): Retrieve historical alert events from the cluster
- Scope filtering: cluster-wide, service-specific, or host-specific alerts
- Time range filtering: from/to timestamp support
- Pagination support for large datasets
- **Common features** (both modes):
- State filtering: CRITICAL, WARNING, OK, UNKNOWN alerts
- Definition filtering: filter by specific alert definition names
- Multiple output formats: detailed, summary, compact
- Unified API for consistent alert querying experience
---
## 🤝 Contributing & Support
### How to Contribute
- 🐛 **Report Bugs**: [GitHub Issues](https://github.com/call518/MCP-Ambari-API/issues)
- 💡 **Request Features**: [Feature Requests](https://github.com/call518/MCP-Ambari-API/issues)
- 🔧 **Submit PRs**: [Contributing Guidelines](https://github.com/call518/MCP-Ambari-API/blob/main/CONTRIBUTING.md)
- 📖 **Improve Docs**: Help make documentation better
### Technologies Used
- **Language**: Python 3.12
- **Framework**: Model Context Protocol (MCP)
- **API**: Apache Ambari REST API
- **Transport**: stdio (local) and streamable-http (remote)
- **Deployment**: Docker, Docker Compose, PyPI
### Dev Env.
- WSL2(networkingMode = bridged) + Docker-Desktop
- `.wslconfig`: tested with `networkingMode = bridged`
- Python 3.12 venv
```bash
### Option-1: with uv
uv venv --python 3.12 --seed
### Option-2: with pip
python3.12 -m venv .venv
source .venv/bin/activate
pip install -U pip
```
---
## 🛠️ Adding Custom Tools
After you've thoroughly explored the existing functionality, you might want to add your own custom tools for specific monitoring or management needs. This MCP server is designed for easy extensibility.
### Step-by-Step Guide
#### 1. **Add Helper Functions (Optional)**
Add reusable data functions to `src/mcp_ambari_api/functions.py`:
```python
async def get_your_custom_data(target_resource: str = None) -> List[Dict[str, Any]]:
"""Your custom data retrieval function."""
# Example implementation - adapt to your Ambari service
endpoint = f"/clusters/{AMBARI_CLUSTER_NAME}/your_custom_endpoint"
if target_resource:
endpoint += f"/{target_resource}"
response_data = await make_ambari_request(endpoint)
if response_data is None or "items" not in response_data:
return []
return response_data["items"]
```
#### 2. **Create Your MCP Tool**
Add your tool function to `src/mcp_ambari_api/mcp_main.py`:
```python
@mcp.tool()
@log_tool
async def get_your_custom_analysis(limit: int = 50, target_name: Optional[str] = None) -> str:
"""
[Tool Purpose]: Brief description of what your tool does
[Core Functions]:
- Feature 1: Data aggregation and analysis
- Feature 2: Resource monitoring and insights
- Feature 3: Performance metrics and reporting
[Required Usage Scenarios]:
- When user asks "your specific analysis request"
- Your business-specific monitoring needs
Args:
limit: Maximum results (1-100)
target_name: Target resource/service name (optional)
Returns:
Formatted analysis results (success: formatted data, failure: English error message)
"""
try:
limit = max(1, min(limit, 100)) # Always validate input
results = await get_your_custom_data(target_resource=target_name)
if not results:
return f"No custom analysis data found{' for ' + target_name if target_name else ''}."
# Apply limit
limited_results = results[:limit]
# Format output
result_lines = [
f"Custom Analysis Results{' for ' + target_name if target_name else ''}",
"=" * 50,
f"Found: {len(limited_results)} items (total: {len(results)})",
""
]
for i, item in enumerate(limited_results, 1):
# Customize this formatting based on your data structure
name = item.get("name", "Unknown")
status = item.get("status", "N/A")
result_lines.append(f"[{i}] {name}: {status}")
return "\n".join(result_lines)
except Exception as e:
return f"Error: Exception occurred while retrieving custom analysis - {str(e)}"
```
#### 3. **Update Imports**
Add your helper function to the imports section in `src/mcp_ambari_api/mcp_main.py`:
```python
from mcp_ambari_api.functions import (
format_timestamp,
format_single_host_details,
make_ambari_request,
# ... existing imports ...
get_your_custom_data, # Add your new function here
)
```
#### 4. **Update Prompt Template (Recommended)**
Add your tool description to `src/mcp_ambari_api/prompt_template.md` for better AI recognition:
```markdown
### Custom Analysis Tools
**get_your_custom_analysis**
- "Show me custom analysis results"
- "Get custom analysis for target_name"
- "Display custom monitoring data"
- 📋 **Features**: Custom data aggregation, resource monitoring, performance insights
```
#### 5. **Test Your Tool**
```bash
# Local testing with MCP Inspector
./run-mcp-inspector-local.sh
# Or test with Docker environment
docker-compose up -d
docker-compose logs -f mcp-server
# Test with natural language queries:
# "Show me custom analysis results"
# "Get custom analysis for my_target"
```
### Important Notes
- **Always use `@mcp.tool()` and `@log_tool` decorators** for proper registration and logging
- **Follow the existing error handling patterns** - return English error messages starting with "Error:"
- **Use `make_ambari_request()` function** for all Ambari API calls to ensure consistent authentication and error handling
- **Validate all input parameters** before using them in API calls
- **Test thoroughly** with both valid and invalid inputs
### Example Use Cases
- **Custom service health checks** beyond standard Ambari monitoring
- **Specialized configuration validation** for your organization's standards
- **Custom alert aggregation** and reporting formats
- **Integration with external monitoring systems** via Ambari data
- **Automated compliance checking** for cluster configurations
---
## ❓ Frequently Asked Questions
### Q: What Ambari versions are supported?
**A**: Ambari 2.7+ is recommended. Earlier versions may work but are not officially tested.
### Q: Can I use this with cloud-managed Hadoop clusters?
**A**: Yes, as long as Ambari API endpoints are accessible, it works with on-premise, cloud, and hybrid deployments.
### Q: How do I troubleshoot connection issues?
**A**: Check your `AMBARI_HOST`, `AMBARI_PORT`, and network connectivity. Enable debug logging with `MCP_LOG_LEVEL=DEBUG`.
### Q: How does this compare to Ambari Web UI?
**A**: This provides programmatic access via AI/LLM commands, perfect for automation, scripting, and integration with modern DevOps workflows.
---
## Contributing
🤝 **Got ideas? Found bugs? Want to add cool features?**
We're always excited to welcome new contributors! Whether you're fixing a typo, adding a new monitoring tool, or improving documentation - every contribution makes this project better.
**Ways to contribute:**
- 🐛 Report issues or bugs
- 💡 Suggest new Ambari monitoring features
- 📝 Improve documentation
- 🚀 Submit pull requests
- ⭐ Star the repo if you find it useful!
**Pro tip:** The codebase is designed to be super friendly for adding new tools. Check out the existing `@mcp.tool()` functions in `mcp_main.py` and follow the [Adding Custom Tools](#️-adding-custom-tools) guide above.
---
## 📄 License
This project is licensed under the MIT License.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp>=3.12.15",
"fastmcp>=2.12.3"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:51:21.324650 | mcp_ambari_api-3.6.3.tar.gz | 88,554 | c9/f4/e8d97bd49ebf18c93b3770ee24003d28f722c0e5208bbb4eb7d25ace188b/mcp_ambari_api-3.6.3.tar.gz | source | sdist | null | false | 50f5a941accc1f2d148fd94f845fbc3b | 7508e3caf09b611ac4e90501b369b71cafbe6a3172c537bb6fba27a929913d60 | c9f4e8d97bd49ebf18c93b3770ee24003d28f722c0e5208bbb4eb7d25ace188b | null | [
"LICENSE"
] | 261 |
2.1 | aws-cdk.cloud-assembly-schema | 52.1.0 | Schema for the protocol between CDK framework and CDK CLI | # Cloud Assembly Schema
This module is part of the [AWS Cloud Development Kit](https://github.com/aws/aws-cdk) project.
## Cloud Assembly
The *Cloud Assembly* is the output of the synthesis operation. It is produced as part of the
[`cdk synth`](https://github.com/aws/aws-cdk-cli/tree/main/packages/aws-cdk#cdk-synthesize)
command, or the [`app.synth()`](https://github.com/aws/aws-cdk/blob/main/packages/aws-cdk-lib/core/lib/stage.ts#L219) method invocation.
Its essentially a set of files and directories, one of which is the `manifest.json` file. It defines the set of instructions that are
needed in order to deploy the assembly directory.
> For example, when `cdk deploy` is executed, the CLI reads this file and performs its instructions:
>
> * Build container images.
> * Upload assets.
> * Deploy CloudFormation templates.
Therefore, the assembly is how the CDK class library and CDK CLI (or any other consumer) communicate. To ensure compatibility
between the assembly and its consumers, we treat the manifest file as a well defined, versioned schema.
## Schema
This module contains the typescript structs that comprise the `manifest.json` file, as well as the
generated [*json-schema*](./schema/cloud-assembly.schema.json).
## Versioning
The schema version is specified my the major version of the package release. It follows semantic versioning, but with a small twist.
When we add instructions to the assembly, they are reflected in the manifest file and the *json-schema* accordingly.
Every such instruction, is crucial for ensuring the correct deployment behavior. This means that to properly deploy a cloud assembly,
consumers must be aware of every such instruction modification.
For this reason, every change to the schema, even though it might not strictly break validation of the *json-schema* format,
is considered `major` version bump. All changes that do not impact the schema are considered a `minor` version bump.
## How to consume
If you'd like to consume the [schema file](./schema/cloud-assembly.schema.json) in order to do validations on `manifest.json` files,
simply download it from this repo and run it against standard *json-schema* validators, such as [jsonschema](https://www.npmjs.com/package/jsonschema).
Consumers must take into account the `major` version of the schema they are consuming. They should reject cloud assemblies
with a `major` version that is higher than what they expect. While schema validation might pass on such assemblies, the deployment integrity
cannot be guaranteed because some instructions will be ignored.
> For example, if your consumer was built when the schema version was 2.0.0, you should reject deploying cloud assemblies with a
> manifest version of 3.0.0.
## Contributing
See [Contribution Guide](./CONTRIBUTING.md)
| text/markdown | Amazon Web Services | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
... | [] | https://github.com/aws/aws-cdk | null | ~=3.9 | [] | [] | [] | [
"jsii<2.0.0,>=1.121.0",
"publication>=0.0.3",
"typeguard==2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/aws/aws-cdk-cli"
] | twine/6.1.0 CPython/3.14.3 | 2026-02-18T14:51:20.006951 | aws_cdk_cloud_assembly_schema-52.1.0.tar.gz | 210,492 | af/18/62d7e324a44673043d272f945fbefc2507165fb4854165e519a2d8f811b1/aws_cdk_cloud_assembly_schema-52.1.0.tar.gz | source | sdist | null | false | cfa53f2c228ca6652d4f8c0a7a140412 | 44bd39d03835ccab7a08030ee08e616033435cda3a19b80fce54693566abb15d | af1862d7e324a44673043d272f945fbefc2507165fb4854165e519a2d8f811b1 | null | [] | 0 |
2.4 | elfen | 1.3.0 | ELFEN - Efficient Linguistic Feature Extraction for Natural Language Datasets | # ELFEN - Efficient Linguistic Feature Extraction for Natural Language Datasets
This Python package provides efficient linguistic feature extraction for text datasets (i.e. datasets with N text instances, in a tabular structure).
For further information, check the [GitHub repository](https://github.com/mmmaurer/elfen) and the [documentation](https://elfen.readthedocs.io)
## Using spacy models
If you want to use the spacy backbone, you will need to download the respective model, e.g. "en_core_web_sm":
```bash
python -m spacy download en_core_web_sm
```
## Usage of third-party resources usable in this package
For the full functionality, some external resources are necessary. While most of them are downloaded and located automatically, some have to be loaded manually.
### WordNet features
To use wordnet features, download open multilingual wordnet using:
```bash
python -m wn download omw:1.4
```
Note that for some languages, you will need to install another wordnet collection. For example, for German, you can use the following command:
```bash
python -m wn download odenet:1.4
```
For more information on the available wordnet collections, consult the [wn package documentation](https://wn.readthedocs.io/en/latest/guides/lexicons.html).
### Emotion lexicons
The emotion lexicons used in this package have to be downloaded manually due to licensing restrictions.
After downloading, the extracted folders have to be placed in the respective directories.
To do so, download the intensity lexicons from the [NRC Emotion Intensity Lexicon page](https://saifmohammad.com/WebPages/AffectIntensity.htm), the association lexicons from the [NRC Emotion Association Lexicon page](https://saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm) and the vad lexicons from the [NRC VAD Lexicon page](https://saifmohammad.com/WebPages/nrc-vad.html). Note that for the VAD lexicon, you will have to use the version 1. The newer version 2.1 is not integrated in elfen yet.
To use them in elfen, find the `elfen_resources` directory in your local elfen installation (for example with pip):
```
python -m pip show elfen
```
Then, the `elfen_resources` directory should be located in the same directory as the `elfen` package directory.
Create the following subdirectories if they do not exist yet:
- `elfen_resources/Emotion/Sentiment`
- `elfen_resources/Emotion/VAD`
- `elfen_resources/Emotion/Intensity`
Then, place the downloaded extracted zip folders in the respective directories:
- Place the extracted zip folder of the NRC Emotion Intensity Lexicon in `elfen_resources/Emotion/Intensity/`
- Place the extracted zip folder of the NRC Emotion Association Lexicon in `elfen_resources/Emotion/Sentiment/`
- Place the extracted zip folder of the NRC VAD Lexicon in `elfen_resources/Emotion/VAD/`
### Licences of lexicons
The extraction of psycholinguistic, emotion/lexicon and semantic features relies on third-party resources such as lexicons.
Please refer to the original author's licenses and conditions for usage, and cite them if you use the resources through this package in your analyses.
For an overview which features use which resource, and how to export all third-party resource references in a `bibtex` string, consult the [documentation](https://elfen.readthedocs.io).
## Multiprocessing and limiting the numbers of cores used
The underlying dataframe library, polars, uses all available cores by default.
If you are working on a shared server, you may want to consider limiting the resources available to polars.
To do that, you will have to set the ``POLARS_MAX_THREADS`` variable in your shell, e.g.:
```bash
export POLARS_MAX_THREADS=8
```
## Acknowledgements
While all feature extraction functions in this package are written from scratch, the choice of features in the readability and lexical richness feature areas (partially) follows the [`readability`](https://github.com/andreasvc/readability) and [`lexicalrichness`](https://github.com/LSYS/LexicalRichness) Python packages.
We use the [`wn`](https://github.com/goodmami/wn) Python package to extract Open Multilingual Wordnet synsets.
## Citation
If you use this package in your work, for now, please cite
```bibtex
@misc{maurer-2025-elfen,
author = {Maurer, Maximilian},
title = {ELFEN - Efficient Linguistic Feature Extraction for Natural Language Datasets},
year = {2025},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/mmmaurer/elfen}},
}
```
| text/markdown | null | Maximilian Maurer <mmmaurer@pm.me> | null | null | null | null | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | <=3.12.11,>=3.10 | [] | [] | [] | [
"polars>=1.10.0",
"numpy>=1.26.4",
"scipy>=1.14.1",
"spacy>=3.7.5",
"spacy_syllables>=3.0.2",
"stanza>=1.8.2",
"Requests>=2.32.3",
"wn>=0.9.5",
"fastexcel>=0.12.0",
"spacy-transformers>=1.3.5",
"pyarrow>=19.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/mmmaurer/elfen",
"Issues, https://github.com/mmmaurer/elfen/issues"
] | twine/6.2.0 CPython/3.12.11 | 2026-02-18T14:50:34.452908 | elfen-1.3.0.tar.gz | 60,746 | 64/07/f6c0a39a374ef34aef07eae6f6f2fc6710d05cca3299826358e04e675445/elfen-1.3.0.tar.gz | source | sdist | null | false | ef62502173a619da72973f615fc01557 | 0aa9c67acbae0f4df651c3dd58bfd12e3b955200da3b6fb2fdc65c03a547fabb | 6407f6c0a39a374ef34aef07eae6f6f2fc6710d05cca3299826358e04e675445 | null | [
"LICENSE"
] | 255 |
2.4 | gluestick | 3.0.4 | ETL utility functions built for the hotglue iPaaS platform | gluestick [](https://travis-ci.org/hotgluexyz/gluestick)
=============
A small Python module containing quick utility functions for standard ETL processes.
## Installation ##
```
pip install gluestick
```
## Links ##
* [Source]
* [Wiki]
* [Issues]
* [Slack]
## License ##
[MIT]
## Dependencies ##
* NumPy
* Pandas
## Contributing ##
This project is maintained by the [hotglue] team. We welcome contributions from the
community via issues and pull requests.
If you wish to chat with our team, feel free to join our [Slack]!
[Source]: https://github.com/hotgluexyz/gluestick
[Wiki]: https://github.com/hotgluexyz/gluestick/wiki
[Issues]: https://github.com/hotgluexyz/gluestick/issues
[MIT]: https://tldrlegal.com/license/mit-license
[hotglue]: https://hotglue.xyz
[Slack]: https://bit.ly/2KBGGq1
| text/markdown | hotglue | hello@hotglue.xyz | null | null | MIT | null | [] | [] | https://github.com/hotgluexyz/gluestick | null | null | [] | [] | [] | [
"singer-python>=4.0.0",
"numpy>=1.4",
"pandas>=1.2.5",
"pyarrow>=8.0.0",
"pytz>=2022.6",
"polars==1.34.0",
"pydantic==2.5.3"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.4 | 2026-02-18T14:50:30.128527 | gluestick-3.0.4.tar.gz | 30,486 | a9/ae/04a2ba7264891f72f7373d76ab32e61d93619f6731396a8b8b7e7c80ed5f/gluestick-3.0.4.tar.gz | source | sdist | null | false | 5dce97d6218acbe5911d89fafb2f24b0 | 9d130ebeb5d0b7378db3ef9dbd0319e5f89ed18fd4007fd2aa21ae6cf08e1b5b | a9ae04a2ba7264891f72f7373d76ab32e61d93619f6731396a8b8b7e7c80ed5f | null | [
"LICENSE"
] | 287 |
2.4 | mezon-sdk | 1.6.19 | Mezon Python SDK - A Python implementation of the Mezon TypeScript SDK | # Mezon SDK Python
[](https://badge.fury.io/py/mezon-sdk)
[](https://pypi.org/project/mezon-sdk/)
[](https://opensource.org/licenses/Apache-2.0)
A Python SDK for building bots and applications on the Mezon platform. Async-first, type-safe, and production-ready.
## Installation
```bash
pip install mezon-sdk
```
## Quick Start
```python
import asyncio
from mezon import MezonClient
from mezon.models import ChannelMessageContent
from mezon.protobuf.api import api_pb2
client = MezonClient(
client_id="YOUR_BOT_ID",
api_key="YOUR_API_KEY",
)
async def handle_message(message: api_pb2.ChannelMessage):
if message.sender_id == client.client_id:
return
channel = await client.channels.fetch(message.channel_id)
await channel.send(content=ChannelMessageContent(t="Hello!"))
client.on_channel_message(handle_message)
async def main():
await client.login()
await asyncio.Event().wait()
asyncio.run(main())
```
## Documentation
**Full documentation:** [https://docs.laptrinhai.id.vn/](https://docs.laptrinhai.id.vn)
- [Installation Guide](https://docs.laptrinhai.id.vn/getting-started/installation/)
- [Quick Start](https://docs.laptrinhai.id.vn/getting-started/quickstart/)
- [API Reference](https://docs.laptrinhai.id.vn/api-reference/client/)
- [Examples](https://docs.laptrinhai.id.vn/examples/basic-bot/)
## Features
- Async/await native with `asyncio`
- Real-time WebSocket with auto-reconnection
- Type-safe with Pydantic models
- Event-driven architecture
- Interactive messages (buttons, forms)
- Token sending support
- Message caching with SQLite
## Links
- [PyPI Package](https://pypi.org/project/mezon-sdk/)
- [GitHub Repository](https://github.com/phuvinh010701/mezon-sdk-python)
- [Issue Tracker](https://github.com/phuvinh010701/mezon-sdk-python/issues)
- [Changelog](https://phuvinh010701.github.io/mezon-sdk-python/changelog/)
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"pydantic>=2.12.3",
"aiohttp>=3.9.0",
"websockets>=12.0",
"pyjwt>=2.8.0",
"aiosqlite>=0.20.0",
"mmn-sdk==1.0.1",
"aiolimiter>=1.2.1",
"tenacity>=9.1.2",
"protobuf>=6.33.2"
] | [] | [] | [] | [
"Homepage, https://github.com/phuvinh010701/mezon-sdk-python",
"Issues, https://github.com/phuvinh010701/mezon-sdk-python/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:49:22.503212 | mezon_sdk-1.6.19.tar.gz | 144,806 | bc/0d/1f861e1635b8f9ff0dedd6bfdfd37d56e512b66a2fb6e36c43fa30d85785/mezon_sdk-1.6.19.tar.gz | source | sdist | null | false | 1274777e5df4db9c4b96afd817d59ea2 | 3a00541f40ab419746fcb44cb2901d1214fe4caa0577c7a3365f594e9d9cb10a | bc0d1f861e1635b8f9ff0dedd6bfdfd37d56e512b66a2fb6e36c43fa30d85785 | Apache-2.0 | [
"LICENSE"
] | 254 |
2.4 | prelude-cli-beta | 1472 | For interacting with the Prelude SDK | # Prelude CLI
Interact with the full range of features in Prelude Detect, organized by:
- IAM: manage your account
- Build: write and maintain your collection of security tests
- Detect: schedule security tests for your endpoints
## Quick start
```bash
pip install prelude-cli
prelude --help
prelude --interactive
```
## Documentation
https://docs.preludesecurity.com/docs/prelude-cli
| text/markdown | Prelude Research | support@preludesecurity.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/preludeorg | null | >=3.10 | [] | [] | [] | [
"prelude-sdk-beta==1472",
"click>8",
"rich",
"python-dateutil",
"pyyaml"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:48:02.879986 | prelude_cli_beta-1472.tar.gz | 18,642 | 47/37/ff509baa50f2a26d65952bd36404f09855304e51623de3fd5150a317d013/prelude_cli_beta-1472.tar.gz | source | sdist | null | false | fa738af5357ccead7729531d4b592300 | 44d5af136afee438bc69116fd56a48d52453e2a16e8b0b731b9656d376272003 | 4737ff509baa50f2a26d65952bd36404f09855304e51623de3fd5150a317d013 | null | [
"LICENSE"
] | 264 |
2.4 | prelude-sdk-beta | 1472 | For interacting with the Prelude API | # Prelude SDK
Interact with the Prelude Service API via Python.
> The prelude-cli utility wraps around this SDK to provide a rich command line experience.
Install this package to write your own tooling that works with Build or Detect functionality.
- IAM: manage your account
- Build: write and maintain your collection of security tests
- Detect: schedule security tests for your endpoints
## Quick start
```bash
pip install prelude-sdk
```
## Documentation
TBD
## Testing
To test the Python SDK and Probes, run the following commands from the python/sdk/ directory:
```bash
pip install -r tests/requirements.txt
pytest tests --api https://api.preludesecurity.com --email <EMAIL>
```
| text/markdown | Prelude Research | support@preludesecurity.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/preludeorg | null | >=3.10 | [] | [] | [] | [
"requests"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:47:55.700868 | prelude_sdk_beta-1472.tar.gz | 29,478 | c2/e2/ba64997f6554a2bb057eeaacec431f4a216c036ee4b653e30ffa634d33a2/prelude_sdk_beta-1472.tar.gz | source | sdist | null | false | f0b4702bf1d2f3b4f8d7f89bb5a28c12 | 96e395334682515299dee8f0dff672cdf248f9f157c37503686ff853695dcc61 | c2e2ba64997f6554a2bb057eeaacec431f4a216c036ee4b653e30ffa634d33a2 | null | [
"LICENSE"
] | 278 |
2.4 | utils-flask-sqlalchemy-geo | 0.3.4 | Python lib of tools for Flask and SQLAlchemy (extension geometry) | ## Librairie "outil géographique" pour SQLAlchemy et Flask
Cette librairie fournit des outils pour faciliter le développement avec Flask et SQLAlchemy.
Elle vient compléter la libraire [Utils-Flask-SQLAlchemy](https://github.com/PnX-SI/Utils-Flask-SQLAlchemy) en y ajoutant des fonctionnalités liées aux objets géographiques.
Package Python disponible sur https://pypi.org/project/utils-flask-sqlalchemy-geo/.
- **Les serialisers**
Le décorateur de classe `@geoserializable` permet la sérialisation JSON d'objets Python issus des classes SQLAlchemy. Il rajoute dynamiquement une méthode `as_geofeature()` aux classes qu'il décore. Cette méthode transforme l'objet de la classe en dictionnaire en transformant les types Python non compatibles avec le format JSON. Pour cela, elle se base sur les types des colonnes décrits dans le modèle SQLAlchemy. Cette methode permet d'obtenir un objet de type `geofeature`.
**Utilisation**
- ``utils_flask_sqla_geo.serializers.geoserializable``
Décorateur pour les modèles SQLA : Ajoute une méthode as_geofeature qui
retourne un dictionnaire serialisable sous forme de Feature geojson.
Fichier définition modèle :
from geonature.utils.env import DB
from utils_flask_sqla_geo.serializers import geoserializable
@geoserializable
class MyModel(DB.Model):
__tablename__ = 'bla'
...
fichier utilisation modele :
instance = DB.session.query(MyModel).get(1)
result = instance.as_geofeature()
Le décorateur de classe `@shapeserializable` permet la création de shapefiles issus des classes SQLAlchemy:
- Ajoute une méthode `as_list` qui retourne l'objet sous forme de tableau
(utilisé pour créer des shapefiles)
- Ajoute une méthode de classe `as_shape` qui crée des shapefiles à partir
des données passées en paramètre
Le décorateur de classe `@geofileserializable` permet la création de shapefiles ou geopackage issus des classes SQLAlchemy:
- Ajoute une méthode `as_list` qui retourne l'objet sous forme de tableau
(utilisé pour créer des shapefiles)
- Ajoute une méthode de classe `as_geofile` qui crée des shapefiles ou des geopackage à partir des données passées en paramètre
**Utilisation**
- `utils_flask_sqla_geo.serializers.shapeserializable`
Fichier définition modèle :
from geonature.utils.env import DB
from utils_flask_sqla_geo.serializers import shapeserializable
@shapeserializable
@geofileserializable
class MyModel(DB.Model):
__tablename__ = 'bla'
...
Fichier utilisation modele :
data = DB.session.query(MyShapeserializableClass).all()
MyShapeserializableClass.as_shape(
geom_col='geom_4326',
srid=4326,
data=data,
dir_path=str(ROOT_DIR / 'backend/static/shapefiles'),
file_name=file_name
)
# OU
MyShapeserializableClass.as_geofile(
export_format="shp",
geom_col='geom_4326',
srid=4326,
data=data,
dir_path=str(ROOT_DIR / 'backend/static/shapefiles'),
file_name=file_name
)
- **La classe FionaShapeService pour générer des shapesfiles**
- `utils_flask_sqla_geo.utilsgeometry.FionaShapeService`
- `utils_flask_sqla_geo.utilsgeometry.FionaGpkgService`
Classes utilitaires pour crer des shapefiles ou des geopackages.
Les classes contiennent 3 méthode de classe:
- `create_fiona_struct()`: crée la structure des fichers exports
- Pour les shapefiles : 3 shapefiles (point, ligne, polygone) à partir des colonens et de la geom passé en paramètre
- `create_feature()`: ajoute un enregistrement au(x) fichier(s)
- `save_files()`: sauvegarde le(s) fichier(s) et crer un zip pour les shapefiles qui ont au moin un enregistrement
data = DB.session.query(MySQLAModel).all()
for d in data:
FionaShapeService.create_fiona_struct(
db_cols=db_cols,
srid=current_app.config['LOCAL_SRID'],
dir_path=dir_path,
file_name=file_name,
col_mapping=current_app.config['SYNTHESE']['EXPORT_COLUMNS']
)
FionaShapeService.create_feature(row_as_dict, geom)
FionaShapeService.save_files()
- **Les GenericTableGeo et les GenericQueryGeo**
Ces classes héritent des classes `GenericTable` et `GenericQuery` et permettent de gérer le données de type géométrie.
| text/markdown | null | null | Parcs nationaux des Écrins et des Cévennes | geonature@ecrins-parcnational.fr | null | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Operating System :: OS Independent"
] | [] | https://github.com/PnX-SI/Utils-Flask-SQLAlchemy-Geo | null | null | [] | [] | [] | [
"sqlalchemy<2",
"fiona>=1.8.13.post1",
"geoalchemy2>=0.4.0",
"shapely>=1.8.5.post1",
"utils-flask-sqlalchemy>=0.4.2",
"marshmallow",
"marshmallow_sqlalchemy",
"geojson",
"pytest; extra == \"tests\"",
"flask-sqlalchemy; extra == \"tests\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T14:47:30.581681 | utils_flask_sqlalchemy_geo-0.3.4.tar.gz | 32,088 | 80/ae/c336ad10261513abd5869300bb290a9d6f49bb50a83e5d246d3c64a6d404/utils_flask_sqlalchemy_geo-0.3.4.tar.gz | source | sdist | null | false | f477b8f4a72a43d7e8ec34451c8f900d | 557bf17ba56ab0b7cacabbbd515fca5b4354d00698c4c4aeac8e442345ae4e94 | 80aec336ad10261513abd5869300bb290a9d6f49bb50a83e5d246d3c64a6d404 | null | [
"LICENSE"
] | 304 |
2.4 | arklex | 0.1.18 | The official Python library for the arklex API | # 🧠 Arklex AI · Agent-First Framework
<div align="center">

**Build, deploy, and scale intelligent AI agents with enterprise-grade reliability**
[](https://github.com/arklexai/Agent-First-Organization/releases)
[](https://pypi.org/project/arklex)
[](https://pypi.org/project/arklex)
[](LICENSE.md)
[](https://discord.gg/kJkefzkRg5)

🚀 [Quick Start](#-get-started-in-5-minutes) • 📚 [Documentation](https://arklexai.github.io/Agent-First-Organization/) • 💡 [Examples](./examples/)
</div>
---
## 🚀 Get Started in 5 Minutes
### Install & Setup
```bash
# Install
pip install arklex
# Create .env file
echo "OPENAI_API_KEY=your_key_here" > .env
# Clone the repository (required if you want to run example/test scripts)
git clone https://github.com/arklexai/Agent-First-Organization.git
cd Agent-First-Organization
pip install -e .
# Test your API keys (recommended)
python test_api_keys.py
# Create your first agent
python create.py \
--config ./examples/customer_service/customer_service_config.json \
--output-dir ./examples/customer_service \
--llm_provider openai \
--model gpt-4o
# Run agent
python run.py \
--input-dir ./examples/customer_service \
--llm_provider openai \
--model gpt-4o
# For evaluation and testing, you can also use the model API server:
# 1. Start the model API server (defaults to OpenAI with "gpt-4o-mini" model):
python model_api.py --input-dir ./examples/customer_service
# 2. Run evaluation (in a separate terminal):
python eval.py --model_api http://127.0.0.1:8000/eval/chat \
--config "examples/customer_service/customer_service_config.json" \
--documents_dir "examples/customer_service" \
--model "claude-3-haiku-20240307" \
--llm_provider "anthropic" \
--task "all"
```
---
## 📺 Learn by Example
### ▶️ Build a Customer Service Agent in 20 Minutes
<p align="center">
<a href="https://youtu.be/y1P2Ethvy0I" target="_blank">
<img src="https://markdown-videos-api.jorgenkh.no/url?url=https%3A%2F%2Fyoutu.be%2Fy1P2Ethvy0I" alt="Watch the video" width="600px" />
</a>
</p>
👉 **Explore the full tutorial:** [Customer Service Agent Walkthrough](https://arklexai.github.io/Agent-First-Organization/docs/tutorials/customer-service)
---
## ⚡ Key Features
- **🚀 90% Faster Development** — Deploy agents in days, not months
- **🧠 Agent-First Design** — Purpose-built for multi-agent orchestration
- **🔌 Model Agnostic** — OpenAI, Anthropic, Gemini, and more
- **📊 Built-in Evaluation** — Comprehensive testing suite
- **🛡️ Enterprise Security** — Authentication and rate limiting
- **⚡ Production Ready** — Monitoring, logging, auto-scaling
---
## 🏗️ Architecture
```mermaid
graph TB
A[Task Graph] --> B[Orchestrator]
B --> C[Workers]
B --> D[Tools]
C --> E[RAG Worker]
C --> F[Database Worker]
C --> G[Custom Workers]
D --> I[API Tools]
D --> J[External Tools]
```
**Core Components:**
- **Task Graph** — Declarative DAG workflows
- **Orchestrator** — Runtime engine with state management
- **Workers** — RAG, database, web automation
- **Tools** — Shopify, HubSpot, Google Calendar integrations
---
## 💡 Use Cases
| **Domain** | **Capabilities** |
|------------|------------------|
| **Customer service for e-commerce** | RAG chatbots, ticket routing, support workflows |
| **E-commerce such as shopify sales and customer** | Order management, inventory tracking, recommendations |
| **Business Process** | Scheduling, CRM operations, document processing |
---
## 📚 Examples
| **Example** | **Description** | **Complexity** |
|-------------|-----------------|----------------|
| [Customer Service](./examples/customer_service/) | RAG-powered support | ⭐⭐ |
| [Shopify Integration](./examples/shopify/) | E-commerce management | ⭐⭐⭐ |
| [HubSpot CRM](./examples/hubspot/) | Contact management | ⭐⭐⭐ |
| [Calendar Booking](./examples/calendar/) | Scheduling system | ⭐⭐ |
| [Human-in-the-Loop](./examples/hitl_server/) | Interactive workflows | ⭐⭐⭐⭐ |
---
## 🔧 Configuration
**Requirements:** Python 3.10+, API keys
```env
# Required: Choose one or more LLM providers
OPENAI_API_KEY=your_key_here
# OR ANTHROPIC_API_KEY=your_key_here
# OR GOOGLE_API_KEY=your_key_here
# Optional: Enhanced features
MILVUS_URI=your_milvus_uri
MYSQL_USERNAME=your_username
TAVILY_API_KEY=your_tavily_key
```
**Testing API Keys:**
After adding your API keys to the `.env` file, run the test script to verify they work correctly:
```bash
# Test all configured API keys
python test_api_keys.py
# Test specific providers only
python test_api_keys.py --providers openai gemini
python test_api_keys.py --providers openai anthropic
```
---
## 📖 Documentation
- 📚 **[Full Documentation](https://arklexai.github.io/Agent-First-Organization/)**
- 🚀 **[Quick Start](docs/QUICKSTART.md)**
- 🛠️ **[API Reference](docs/API.md)**
- 🏗️ **[Architecture](docs/ARCHITECTURE.md)**
- 🚀 **[Deployment](docs/DEPLOYMENT.md)**
---
## 🤝 Community
- 🐛 [Report Issues](https://github.com/arklexai/Agent-First-Organization/issues)
- 💬 [Discord](https://discord.gg/kJkefzkRg5)
- 🐦 [Twitter](https://twitter.com/arklexai)
- 💼 [LinkedIn](https://www.linkedin.com/company/arklex)
- 📧 [Email Support](mailto:support@arklex.ai)
---
## 📄 License
Arklex AI is released under the **MIT License**. See [LICENSE](LICENSE.md) for details.
This means you can:
- ✅ Use Arklex AI for commercial projects
- ✅ Modify and distribute the code
- ✅ Use it in proprietary applications
- ✅ Sell applications built with Arklex AI
The only requirement is that you include the original license and copyright notice.
---
## 🙏 Acknowledgments
Thanks to all our contributors and the open-source community for making this project possible!
### 🌟 Contributors
<a href="https://github.com/arklexai/Agent-First-Organization/graphs/contributors">
<img src="https://contributors-img.web.app/image?repo=arklexai/Agent-First-Organization" />
</a>
### 🤝 Open Source Dependencies
Arklex AI builds on the shoulders of giants:
- **LangChain** — LLM framework and tooling
- **FastAPI** — Modern web framework
- **Pydantic** — Data validation
- **SQLAlchemy** — Database ORM
- **Milvus** — Vector database
- **And many more...**
---
<div align="center">
**Made with ❤️ by the Arklex AI Team**
[Website](https://arklex.ai) • [Documentation](https://arklexai.github.io/Agent-First-Organization/) • [GitHub](https://github.com/arklexai/Agent-First-Organization) • [Discord](https://discord.gg/kJkefzkRg5) • [LinkedIn](https://www.linkedin.com/company/arklex)
</div>
| text/markdown | null | "Arklex.AI" <support@arklex.ai> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"email-validator<3.0.0,>=2.2.0",
"faiss-cpu<2.0.0,>=1.10.0",
"fastapi-cli<1.0.0,>=0.0.5",
"fastapi<1.0.0,>=0.115.3",
"google-generativeai<1.0.0,>=0.8.0",
"greenlet<4.0.0,>=3.1.1",
"httptools<1.0.0,>=0.6.4",
"janus<2.0.0,>=1.0.0",
"jinja2<4.0.0,>=3.1.0",
"langchain-community<1.0.0,>=0.4.0",
"lang... | [] | [] | [] | [
"Homepage, https://github.com/arklexai/Agent-First-Organization",
"Issues, https://github.com/arklexai/Agent-First-Organization/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:47:17.139764 | arklex-0.1.18.tar.gz | 5,901,464 | 9c/ab/afed556f3549f223a48117606aca0a804065e5de64bd343520e02a897499/arklex-0.1.18.tar.gz | source | sdist | null | false | 7674daf3bc4111bbb6a193f70a5eaa5b | 9adc23f222e29b4077c906ec1f2b4a44488ee785a2c2b2c5e501334cb2a736f6 | 9cabafed556f3549f223a48117606aca0a804065e5de64bd343520e02a897499 | MIT | [
"LICENSE.md"
] | 358 |
2.4 | playmolecule | 2.6.25 | PlayMolecule python API | # Server-less PlayMolecule
Look up the docs
pip install playmolecule google-cloud-storage==1.35.0
| text/markdown | null | Acellera <info@acellera.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"natsort",
"jobqueues",
"requests",
"safezipfile",
"google-cloud-storage",
"google-cloud-artifact-registry>=1.18.0",
"docker>=7.1.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Acellera/playmolecule",
"Bug Tracker, https://github.com/Acellera/playmolecule/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:47:04.147044 | playmolecule-2.6.25.tar.gz | 2,727,330 | fa/3d/03a2d14685aedd85d39133990c147e00914951166e6fef62206ea3e549e0/playmolecule-2.6.25.tar.gz | source | sdist | null | false | f2a7194591f7011e8bd44cea3035c48e | a6ee935e3760990fbf2f5a52096afcf3bdbc7f5314218b1bc8b847861351db12 | fa3d03a2d14685aedd85d39133990c147e00914951166e6fef62206ea3e549e0 | null | [
"LICENSE"
] | 314 |
2.1 | ewah | 0.9.23 | An ELT with airflow helper module: Ewah | # ewah
Ewah: ELT With Airflow Helper - Classes and functions to make apache airflow life easier.
Functions to create all DAGs required for ELT using only a simple config file.
## DWHs Implemented
- Snowflake
- PostgreSQL
- Bigquery
## Operators
EWAH currently supports the following operators:
- Aircall
- BigQuery
- DynamoDB
- Facebook (partially, so far: ads insights; incremental only)
- FX Rates (from Yahoo Finance)
- Google Ads
- Google Analytics (incremental only)
- Google Maps (location data from an address)
- Google Sheets
- Hubspot
- Mailchimp
- Mailingwork
- MongoDB
- MySQL
- OracleSQL
- Pipedrive
- PostgreSQL / Redshift
- Recurly
- S3 (for CSV or JSON files stored in an S3 bucket, e.g. from Kinesis Firehose)
- Salesforce
- Shopify
- Stripe
- Zendesk
### Universal operator arguments
The following arguments are accepted by all operators, unless explicitly stated otherwise:
| argument | required | type | default | description |
| --- | --- | --- | --- | --- |
| source_conn_id | yes | string | n.a. | name of the airflow connection with source credentials |
| dwh_engine | yes | string | n.a. | DWH type - e.g. postgres - usually |
| dwh_conn_id | yes | string | n.a. | name of the airflow connection of the DWH |
| target_table_name | implicit | string | n.a. | name of the table in the DWH; the target table name is the name given in the table config |
| target_schema_name | yes | string | name of the schema in the DWH where the table will live |
| target_schema_name_suffix | no | string | `_next` | when loading new data, how to suffix the schema name during the loading process |
| target_database_name | yes for Snowflake DWH | string | n.a. | name of the database (only for Snowflake, illegal argument for non-Snowflake DWHs) |
| drop_and_replace | no | boolean | same as DAG-level setting | whether a table is loading as full refresh or incrementally. Normally set by the DAG level config. Incremental loads can overwrite this setting to fully refresh some small tables (e.g. if they are small and have no `updated_at` column) |
| primary_key | operator-dependent | string or list of strings | n.a. | name of the primary key column(s); if given, EWAH will set the column as primary key in the DWH and use it when applicable during upsert operations |
| add_metadata | no | boolean | True | some operators may add metadata to the tables; this behavior can be turned off (e.g. shop name for the shopify operator) |
### Operator: Google Ads
These arguments are specific to the Google Ads operator.
| argument | required | type | default | description |
| --- | --- | --- | --- | --- |
| fields | yes | dict | n.a. | most important argument; excludes metrics; detailed below |
| metrics | yes | list of strings | n.a. | list of all metrics to load, must load at least one metric |
| resource | yes | string | n.a. | name of the report, e.g. `keyword_view` |
| client_id | yes | string | n.a. | 10-digit number, often written with hyphens, e.g. `123-123-1234` (acceptable with or without hyphens) |
| conditions | no | list of strings | n.a. | list of strings of condition to include in the query, all conditions will be combined using `AND` operator |
| data_from | no | datetime, timedelta or airflow-template-string | data_interval_start of task instance | start date of particular airflow task instance OR timedelta -> calculate delta from data_until |
| data_until | no | datetime or airflow-template-string | data_interval_end of task instance | get data from google_ads until this point |
#### arguments: fields and metrics
the `fields` and `metrics` arguments are the most important for this operator. The `metrics` are separated from `fields` because the `fields` are simultaneously the updated_on_columns. When creating the google ads query, they are combined. The `metrics` argument is simply a list of metrics to be requested from Google Ads. The `fields` argument is a bit more complex, due to the nature of Google Ad's API. It is essentially a nested json.
Because the query may look something like this:
```sql
SELECT
campaign.id
, campaign.name
, ad_group_criterion.criterion_id
, ad_group_criterion.keyword.text
, ad_group_criterion.keyword.match_type
, segments.date
, metrics.impressions
, metrics.clicks
, metrics.cost_micros
FROM keyword_view
WHERE segments.date BETWEEN 2020-08-01 AND 2020-08-08
```
i.e., there are nested object structures, the `fields` structure must reflect the same. Take a look at the example config for the correct configuration of abovementioned Google Ads query. Note in addition, that the fields will be uploaded with the same names to the DWH, excepts that the periods will be replaced by underscored. i.e., the table `keyword_view_data` in the example below will have the columns `campaign_id`, `ad_group_criterion_keyword_text`, etc.
Finally, note that `segments.date` is always required in the `fields` argument.
### Oracle operator particularities
The Oracle operator utilizes the `cx_Oracle` python library. To make it work, you need to install additional packages, see [here](https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html#installing-cx-oracle-on-linux) for details.
#### Example
Sample configuration in `dags.yaml` file:
```yaml
EL_Google_Ads:
incremental: True
el_operator: google_ads
target_schema_name: raw_google_ads
operator_config:
general_config:
source_conn_id: google_ads
client_id: "123-123-1234"
# conditions: none
incremental_config:
data_from: !!python/object/apply:datetime.timedelta
- 3 # days as time interval before the execution date of the DAG
# only use this for normal runs! backfills: use execution date context instead
tables:
keyword_view_data:
resource: keyword_view
fields:
campaign:
- id
- name
ad_group_criterion:
- keyword:
- text
- match_type
- criterion_id
segments:
- date
metrics:
- impressions
- clicks
- cost_micros
```
## Philosophy
This package strictly follows an ELT Philosophy:
- Business value is created by infusing business logic into the data and making great analyses and usable data available to stakeholders, not by building data pipelines
- Airflow solely orchestrates loading raw data into a central DWH
- Data is either loaded as full refresh (all data at every load) or incrementally, exploiting airflow's catchup and execution logic
- The only additional DAGs are dbt DAGs and utility DAGs
- Within that DWH, each data source lives in its own schema (e.g. `raw_salesforce`)
- Irrespective of full refresh or incremental loading, DAGs always load into a separate schema (e.g. `raw_salesforce_next`) and at the end replace the schema with the old data with the schema with the new data, to avoid data corruption due to errors in DAG execution
- Any data transformation is defined using SQL, ideally using [dbt](https://github.com/fishtown-analytics/dbt)
- Seriously, dbt is awesome, give it a shot!
- *(Non-SQL) Code contains no transformations*
## Usage
In your airflow Dags folder, define the DAGs by invoking either the incremental loading or full refresh DAG factory. The incremental loading DAG factory returns three DAGs in a tuple, make sure to call it like so: `dag1, dag2, dag3 = dag_factory_incremental_loading()` or add the dag IDs to your namespace like so:
```python
dags = dag_factory_incremental_loading()
for dag in dags:
globals()[dag._dag_id] = dag
```
Otherwise, airflow will not recognize the DAGs. Most arguments should be self-explanatory. The two noteworthy arguments are `el_operator` and `operator_config`.
The former must be a child object of `ewah.operators.base.EWAHBaseOperator`. Ideally, The required operator is already available for your use. Please feel free to fork and commit your own operators to this project! The latter is a dictionary containing the entire configuration of the operator. This is where you define what tables to load, how to load them, if loading specific columns only, and any other detail related to your EL job.
### Full refresh factory
A `filename.py` file in your airflow/dags folder may look something like this:
```python
from ewah.utils.dag_factory_full_refresh import dag_factory_drop_and_replace
from ewah.constants import EWAHConstants as EC
from ewah.operators.postgres import EWAHPostgresOperator
from datetime import datetime, timedelta
dag = dag_factory_drop_and_replace(
dag_name='EL_production_postgres_database', # Name of the DAG
dwh_engine=EC.DWH_ENGINE_POSTGRES, # Implemented DWH Engine
dwh_conn_id='dwh', # Airflow connection ID with connection details to the DWH
el_operator=EWAHPostgresOperator, # Ewah Operator (or custom child class of EWAHBaseOperator)
target_schema_name='raw_production', # Name of the raw schema where data will end up in the DWH
target_schema_suffix='_next', # suffix of the schema containing the data before replacing the production data schema with the temporary loading schema
# target_database_name='raw', # Only Snowflake
start_date=datetime(2019, 10, 23), # As per airflow standard
schedule_interval=timedelta(hours=1), # Only timedelta is allowed!
default_args={ # Default args for DAG as per airflow standard
'owner': 'Data Engineering',
'retries': 1,
'retry_delay': timedelta(minutes=5),
'email_on_retry': False,
'email_on_failure': True,
'email': ['email@address.com'],
},
operator_config={
'general_config': {
'source_conn_id': 'production_postgres',
'source_schema_name': 'public',
},
'tables': {
'table_name':{},
# ...
# Additional optional kwargs at the table level:
# primary_key
# + any operator specific arguments
},
},
)
```
For all kwargs of the operator config, the general config can be overwritten by supplying specific kwargs at the table level.
### Configure all DAGs in a single YAML file
Standard data loading DAGs should be just a configuration. Thus, you can
configure the DAGs using a simple YAML file. Your `dags.py` file in your
`$AIRFLOW_HOME/dags` folder may then look like that, and nothing more:
```python
import os
from airflow import DAG # This module must be imported for airflow to see DAGs
from airflow.configuration import conf
from ewah.dag_factories import dags_from_yml_file
folder = os.environ.get('AIRFLOW__CORE__DAGS_FOLDER', None)
folder = folder or conf.get("core", "dags_folder")
dags = dags_from_yml_file(folder + os.sep + 'dags.yml', True, True)
for dag in dags: # Must add the individual DAGs to the global namespace
globals()[dag._dag_id] = dag
```
And the YAML file may look like this:
```YAML
---
base_config: # applied to all DAGs unless overwritten
dwh_engine: postgres
dwh_conn_id: dwh
airflow_conn_id: airflow
start_date: 2019-10-23 00:00:00+00:00
schedule_interval: !!python/object/apply:datetime.timedelta
- 0 # days
- 3600 # seconds
schedule_interval_backfill: !!python/object/apply:datetime.timedelta
- 7
schedule_interval_future: !!python/object/apply:datetime.timedelta
- 0
- 3600
additional_task_args:
retries: 1
retry_delay: !!python/object/apply:datetime.timedelta
- 0
- 300
email_on_retry: False
email_on_failure: True
email: ['me+airflowerror@mail.com']
el_dags:
EL_Production: # equals the name of the DAG
incremental: False
el_operator: postgres
target_schema_name: raw_production
operator_config:
general_config:
source_conn_id: production_postgres
source_schema_name: public
tables:
users:
source_table_name: Users
transactions:
source_table_name: UserTransactions
source_schema_name: transaction_schema # Overwrite general_config args as needed
EL_Facebook:
incremental: True
el_operator: fb
start_date: 2019-07-01 00:00:00+00:00
target_schema_name: raw_facebook
operator_config:
general_config:
source_conn_id: facebook
account_ids:
- 123
- 987
data_from: '{{ data_interval_start }}' # Some fields allow airflow templating, depending on the operator
data_until: '{{ data_interval_end }}'
level: ad
tables:
ads_data_age_gender:
insight_fields:
- adset_id
- adset_name
- campaign_name
- campaign_id
- spend
breackdowns:
- age
- gender
...
```
## Developing EWAH locally with Docker
It is easy to develop EWAH with Docker. Here's how:
* Step 1: clone the repository locally
* Step 2: configure your local secrets
* In `ewah/airflow/docker/secrets`, you should at least have a file called `secret_airflow_connections.yml` which looks exactly like `ewah/airflow/docker/airflow_connections.yml` and contains any additional credentials you may need during development -> this file can also contain connections defined in the `airflow_connections.yml`, in which case your connections will overwrite them.
* You can add any other files in this folder that you may need, e.g. private key files
* For example, you can add a file called `google_service_acc.json` that contains service account credentials that can be loaded as extra into an airflow connection used for a Google Analytics DAG
* Sample files are shown below
* Step 3: run `docker-compose up` to start the postgres, webserver and scheduler containers
* You can stop them with `CTRL+C`
* If you want to free up the ports again or if you are generally done developing, you should additionally run `docker-compose down`
* Step 4: Make any changes you wish to files in `ewah/ewah/` or `airflow/dags/`
* If you add new dependencies, makes sure to add them in the `ewah/setup.py` file; you may need to restart the containers to include the new dependencies in the ewah installations running in your containers
* Step 5: Commit, Push and open a PR
* When pushing, feel free to include configurations in your `ewah/airflow/dags/dags.yml` that utilize your new feature, if applicable
##### Sample `secret_airflow_connections.yml`
```yaml
---
connections:
- id: ssh_tunnel
host: [SSH Server IP]
port: [SSH Port] # usually 22
login: [SSH username]
password: [private key password OR SSH server password]
extra: !text_from_file /opt/airflow/docker/secrets/ssh_rsa_private # example, with a file called `ssh_rsa_private` in your `ewah/airflow/docker/secrets/` folder
- id: shopware
host: [host] # "localhost" if tunnelling into the server running the MySQL database
schema: [database_name]
port: 3306
login: [username]
password: [password]
- id: google_service_account
extra: !text_from_file /opt/airflow/docker/secrets/google_service_acc.json # example, with a file called `google_service_acc.json` in your `ewah/airflow/docker/secrets/` folder
...
```
##### Sample `google_service_acc.json`
```json
{"client_secrets": {
"type": "service_account",
"project_id": "my-project-id",
"private_key_id": "abcdefghij1234567890abcdefghij1234567890",
"private_key": "-----BEGIN PRIVATE KEY-----\n[...]\n-----END PRIVATE KEY-----\n",
"client_email": "xxx@my-project-id.iam.gserviceaccount.com",
"client_id": "012345678901234567890",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/xxx%40my-project-id.iam.gserviceaccount.com"
}}
```
## Using EWAH with Astronomer
To avoid all devops troubles, it is particularly easy to use EWAH with astronomer.
Your astronomer project requires the following:
- add `ewah` to the `requirements.txt`
- add `libstdc++` to the `packages.txt`
- have a `dags.py` file and a `dags.yml` file in your dags folder
- in production, you may need to request your airflow metadata postgres database password from the support for incremental loading DAGs
| text/markdown | Bijan Soltani | bijan.soltani+ewah@gemmaanalytics.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License"
] | [] | https://gemmaanalytics.com/ | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.8.18 | 2026-02-18T14:47:01.731933 | ewah-0.9.23.tar.gz | 143,969 | 38/68/394a501e5b5fde055544b341c541fcbcc815269c2667a55f7eabe9704689/ewah-0.9.23.tar.gz | source | sdist | null | false | 5d6961e8c05992fc61c49195655ee7db | 701f9c1d90d47bcd7227e50014c68364ca0dc57c954f048f2628616135bd8d47 | 3868394a501e5b5fde055544b341c541fcbcc815269c2667a55f7eabe9704689 | null | [] | 281 |
2.4 | fs-mcp | 1.23.2 | A secure MCP filesystem server with Stdio and Web UI modes. | # fs-mcp 📂
**Universal, Provider-Agnostic Filesystem MCP Server**
*Works with Claude, Gemini, GPT — zero configuration required.*
---
https://github.com/user-attachments/assets/132acdd9-014c-4ba0-845a-7db74644e655
## 💡 Why This Exists
MCP (Model Context Protocol) is incredible, but connecting AI agents to filesystems hits real-world walls:
| Problem | fs-mcp Solution |
|---------|-----------------|
| **Container Gap** — Stdio doesn't work across Docker boundaries | HTTP server by default — connect from anywhere |
| **Token Waste** — Agents dump entire files to find one function | Smart `grep → read` pattern with section hints |
| **Schema Hell** — Gemini silently corrupts nested object schemas | Auto-transforms schemas at runtime — just works |
| **Blind Overwrites** — One hallucination wipes your `main.py` | Human-in-the-loop review with VS Code diff |
**fs-mcp** is a Python-based server built on `fastmcp` that treats **efficiency**, **safety**, and **universal compatibility** as first-class citizens.
---
## 🚀 Core Value
### 1. Agent-First Efficiency
Tools are designed to minimize token usage and maximize context quality:
```mermaid
flowchart LR
A["grep_content('def calculate')"] --> B["Returns: Line 142<br/>(hint: function ends L158)"]
B --> C["read_files(start=142, end=158)"]
C --> D["17 lines instead of 5000"]
style D fill:#90EE90
```
- **Section Hints**: `grep_content` tells you where functions/classes end
- **Pattern Reading**: `read_files` with `read_to_next_pattern` extracts complete blocks
- **Token-Efficient Errors**: Fuzzy match suggestions instead of file dumps (90% savings)
### 2. Human-in-the-Loop Safety
The `propose_and_review` tool opens a VS Code diff for every edit:
```mermaid
sequenceDiagram
participant Agent
participant Server
participant Human
Agent->>Server: propose_and_review(edits)
Server->>Human: Opens VS Code diff
alt Approve
Human->>Server: Add double newline + Save
Server->>Agent: "APPROVE"
Agent->>Server: commit_review()
else Modify
Human->>Server: Edit directly + Save
Server->>Agent: "REVIEW" + your changes
Agent->>Agent: Incorporate feedback
end
```
**Safety features:**
- Full overwrites require explicit `OVERWRITE_FILE` sentinel
- Batch edits with `edits=[]` for multiple changes in one call
- Session-based workflow prevents race conditions
### 3. Universal Provider Compatibility
**The problem:** Gemini silently corrupts JSON Schema `$ref` references — nested objects like `FileReadRequest` degrade to `STRING`, breaking tool calls.
**The fix:** fs-mcp automatically transforms all schemas to Gemini-compatible format at startup. No configuration needed.
```
Before (broken): "items": {"$ref": "#/$defs/FileReadRequest"}
↓ Gemini sees this as ↓
"items": {"type": "STRING"} ❌
After (fs-mcp): "items": {"type": "object", "properties": {...}} ✅
```
This "lowest common denominator" approach means **the same server works with Claude, Gemini, and GPT** without any provider-specific code.
---
## ⚡ Quick Start
### Run Instantly
```bash
# One command — launches Web UI (8123) + HTTP Server (8124)
uvx fs-mcp .
```
### Selective Launch
```bash
# HTTP only (headless / Docker / CI)
fs-mcp --no-ui .
# UI only (local testing)
fs-mcp --no-http .
```
### Docker
```bash
# In your Dockerfile or entrypoint
uvx fs-mcp --no-ui --http-host 0.0.0.0 --http-port 8124 /app
```
---
## 🔌 Configuration
### Claude Desktop (Stdio)
```json
{
"mcpServers": {
"fs-mcp": {
"command": "uvx",
"args": ["fs-mcp", "/path/to/your/project"]
}
}
}
```
### OpenCode / Other HTTP Clients
Point your MCP client to `http://localhost:8124/mcp/` (SSE transport).
---
## 🧰 The Toolbox
### Discovery & Reading
| Tool | Purpose |
|------|---------|
| `grep_content` | Regex search with **section hints** — knows where functions end |
| `read_files` | Multi-file read with `head`/`tail`, line ranges, or `read_to_next_pattern` |
| `directory_tree` | Recursive JSON tree (auto-excludes `.git`, `.venv`, `node_modules`) |
| `search_files` | Glob pattern file discovery |
| `get_file_info` | Metadata + token estimate + chunking recommendations |
### Editing (Human-in-the-Loop)
| Tool | Purpose |
|------|---------|
| `propose_and_review` | **Safe editing** — VS Code diff, batch edits, fuzzy match suggestions |
| `commit_review` | Finalize approved changes |
### Structured Data
| Tool | Purpose |
|------|---------|
| `query_json` | JQ queries on large JSON files (bounded output) |
| `query_yaml` | YQ queries on YAML files |
### Utilities
| Tool | Purpose |
|------|---------|
| `list_directory_with_sizes` | Detailed listing with formatted sizes |
| `list_allowed_directories` | Show security-approved paths |
| `create_directory` | Create directories |
| `read_media_file` | Base64 encode images/audio for vision models |
### Analysis
| Tool | Purpose |
|------|---------|
| `analyze_gsd_work_log` | Semantic analysis of GSD-Lite project logs |
---
## 🏗️ Architecture
```
src/fs_mcp/
├── server.py # Tool definitions + schema transforms
├── gemini_compat.py # JSON Schema → Gemini-compatible
├── edit_tool.py # propose_and_review logic
├── web_ui.py # Streamlit dashboard
└── gsd_lite_analyzer.py
scripts/schema_compat/ # CLI for schema validation
tests/ # pytest suite (including Gemini compat CI guard)
```
**Key dependency:** `ripgrep` (`rg`) must be installed for `grep_content`.
---
## 📊 Why Token Efficiency Matters
| Scenario | Without fs-mcp | With fs-mcp |
|----------|----------------|-------------|
| Find a function | Read entire file (5000 tokens) | grep + targeted read (200 tokens) |
| Edit mismatch error | Dump file + error (6000 tokens) | Fuzzy suggestions (500 tokens) |
| Explore large JSON | Load entire file (10000 tokens) | JQ query (100 tokens) |
**Result:** 10-50x reduction in context usage for common operations.
---
## 🧪 Testing
```bash
# Run all tests
uv run pytest
# Run Gemini compatibility guard (fails if schemas break)
uv run pytest tests/test_gemini_schema_compat.py
```
---
## 📜 License & Credits
Built with ❤️ for the MCP community by **luutuankiet**.
Powered by [FastMCP](https://github.com/jlowin/fastmcp), [Pydantic](https://docs.pydantic.dev/), and [Streamlit](https://streamlit.io/).
**Now go build some agents.** 🚀 | text/markdown | null | luutuankiet <luutuankiet.ftu2@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"distro",
"fastapi>=0.128.0",
"fastmcp==2.14.3",
"google-genai>=1.56.0",
"httpx>=0.28.1",
"jsonref>=1.1.0",
"pydantic>=2.0",
"pyfiglet",
"streamlit-js-eval>=0.1.5",
"streamlit>=1.30.0",
"tiktoken>=0.12.0",
"toml"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:46:34.766303 | fs_mcp-1.23.2.tar.gz | 631,833 | b4/72/f59102b2dd276d26603ab410afa2a7e0f2f7969e5ff74c323d83d0918421/fs_mcp-1.23.2.tar.gz | source | sdist | null | false | 876c258a069bdc88690924cb4f06d48c | d39dadfc93101ef994af2192d62a3f3e41b8aacd713461515ae581a536918245 | b472f59102b2dd276d26603ab410afa2a7e0f2f7969e5ff74c323d83d0918421 | null | [] | 266 |
2.1 | lanctools | 0.2.0 | Tools for working with phased local ancestry data stored in the `.lanc` file format | # lanctools
Tools for working with phased local ancestry data stored in the `.lanc` file format, as defined by Admix-kit [Hou et al., 2024].
`lanctools` is designed to provide **fast local ancestry queries** and convenient conversion from external formats (e.g., FLARE [Browning et al., 2023] and RFMix [Maples et al., 2013]). It focuses on efficient access to `.lanc` data and is **not** intended to replace the full functionality of Admix-kit.
## Features
- Efficient random access to phased local ancestry data
- Local ancestry-masked genotype queries
- Conversion from FLARE and RFMix output to `.lanc` format
- Python API and command-line interface (CLI)
## Installation
```bash
pip install lanctools
```
## References
- Hou, K. et al. Admix-kit: an integrated toolkit and pipeline for genetic analyses of admixed populations. Bioinformatics 40, btae148 (2024). [paper](https://doi-org.libproxy.lib.unc.edu/10.1093/bioinformatics/btae148) [software](https://github.com/KangchengHou/admix-kit)
- Browning, S. R., Waples, R. K. & Browning, B. L. Fast, accurate local ancestry inference with FLARE. Am J Hum Genet 110, 326–335 (2023). [paper](https://doi.org/10.1016/j.ajhg.2022.12.010) [software](https://github.com/browning-lab/flare)
- Maples, B. K., Gravel, S., Kenny, E. E. & Bustamante, C. D. RFMix: A Discriminative Modeling Approach for Rapid and Robust Local-Ancestry Inference. Am J Hum Genet 93, 278–288 (2013). [paper](<https://doi.org/10.1016/j.ajhg.2013.06.020>) [software](https://github.com/slowkoni/rfmix)
| text/markdown | null | Franklin Ockerman <frank.ockerman@gmail.com> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"numba>=0.63.1",
"numpy>=2.2.6",
"pandas>=2.3.3",
"pgenlib>=0.93.0",
"typer>=0.20.1"
] | [] | [] | [] | [
"Homepage, https://github.com/frankp-0/lanctools",
"Repository, https://github.com/frankp-0/lanctools",
"Issues, https://github.com/frankp-0/lanctools/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T14:45:45.802106 | lanctools-0.2.0-cp313-cp313-musllinux_1_2_i686.whl | 1,238,597 | 0e/ad/b33059f72334371080b88b781e932ed082d45753d56a2445346058567977/lanctools-0.2.0-cp313-cp313-musllinux_1_2_i686.whl | cp313 | bdist_wheel | null | false | 1de45aea1f5499d63b69ef0d05074686 | e1ccf2cd9a574e462d7a900f05c91e3ed4aa0bef236ad67d0b58d3adb227cd6e | 0eadb33059f72334371080b88b781e932ed082d45753d56a2445346058567977 | null | [] | 1,673 |
2.4 | nucleation | 0.1.163 | A high-performance Minecraft schematic parser and utility library | # Nucleation
**Nucleation** is a high-performance Minecraft schematic engine written in Rust with full support for **Rust**, **WebAssembly/JavaScript**, **Python**, and **FFI-based integrations** like **PHP** and **C**.
> Built for performance, portability, and parity across ecosystems.
---
[](https://crates.io/crates/nucleation)
[](https://www.npmjs.com/package/nucleation)
[](https://pypi.org/project/nucleation)
---
## Features
### Core
- **Multi-format support**: `.schematic`, `.litematic`, `.nbt`, `.mcstructure`, and more
- **Memory-safe Rust core** with zero-copy deserialization
- **Cross-platform**: Linux, macOS, Windows (x86_64 + ARM64)
- **Multi-language**: Rust, JavaScript/TypeScript (WASM), Python, C/FFI
### Schematic Building
- **SchematicBuilder**: Create schematics with ASCII art and Unicode characters
- **Compositional Design**: Build circuits hierarchically from smaller components
- **Unicode Palettes**: Visual circuit design with intuitive characters (`→`, `│`, `█`, etc.)
- **Template System**: Define circuits in human-readable text format
- **CLI Tool**: Build schematics from command line (`schematic-builder`)
### Circuit Simulation
- **Redstone simulation** via MCHPRS integration (optional `simulation` feature)
- **TypedCircuitExecutor**: High-level API with typed inputs/outputs (Bool, U32, Ascii, etc.)
- **CircuitBuilder**: Fluent API for streamlined executor creation
- **DefinitionRegion**: Advanced region manipulation with boolean ops, filtering, and connectivity analysis
- **Custom IO**: Inject and monitor signal strengths at specific positions
- **Execution Modes**: Fixed ticks, until condition, until stable, until change
- **State Management**: Stateless, stateful, or manual tick control
### 3D Mesh Generation
- **Resource pack loading** from ZIP files or raw bytes
- **GLB/glTF export** for standard 3D viewers and engines
- **USDZ export** for Apple AR Quick Look
- **Raw mesh export** for custom rendering pipelines (positions, normals, UVs, colors, indices)
- **Per-region and per-chunk meshing** for large schematics
- **Greedy meshing** to reduce triangle count
- **Occlusion culling** to skip fully hidden blocks
- **Ambient occlusion** with configurable intensity
- **Resource pack querying** — list/get/add blockstates, models, and textures
### Developer Experience
- **Bracket notation** for blocks: `"minecraft:lever[facing=east,powered=false]"`
- **Feature parity** across all language bindings
- **Comprehensive documentation** in [`docs/`](docs/)
- Seamless integration with [Cubane](https://github.com/Nano112/cubane)
---
## Installation
### Rust
```bash
cargo add nucleation
```
### JavaScript / TypeScript (WASM)
```bash
npm install nucleation
```
### Python
```bash
pip install nucleation
```
### C / PHP / FFI
Download prebuilt `.so` / `.dylib` / `.dll` from [Releases](https://github.com/Schem-at/Nucleation/releases)
or build locally using:
```bash
./build-ffi.sh
```
---
## Quick Examples
### Loading and Saving Schematics
#### Rust
```rust
use nucleation::UniversalSchematic;
let bytes = std::fs::read("example.litematic")?;
let mut schematic = UniversalSchematic::new("my_schematic");
schematic.load_from_data(&bytes)?;
println!("{:?}", schematic.get_info());
```
#### JavaScript (WASM)
```ts
import { SchematicParser } from "nucleation";
const bytes = await fetch("example.litematic").then((r) => r.arrayBuffer());
const parser = new SchematicParser();
await parser.fromData(new Uint8Array(bytes));
console.log(parser.getDimensions());
```
#### Python
```python
from nucleation import Schematic
with open("example.litematic", "rb") as f:
data = f.read()
schem = Schematic("my_schematic")
schem.load_from_bytes(data)
print(schem.get_info())
```
### Building Schematics with ASCII Art
```rust
use nucleation::SchematicBuilder;
// Use Unicode characters for visual circuit design!
let circuit = SchematicBuilder::new()
.from_template(r#"
# Base layer
ccc
ccc
# Logic layer
─→─
│█│
"#)
.build()?;
// Save as litematic
let bytes = nucleation::litematic::to_litematic(&circuit)?;
std::fs::write("circuit.litematic", bytes)?;
```
**Available in Rust, JavaScript, and Python!** See [SchematicBuilder Guide](docs/guide/schematic-builder.md).
### Compositional Circuit Design
```rust
// Build a basic gate
let and_gate = create_and_gate();
// Use it in a larger circuit
let half_adder = SchematicBuilder::new()
.map_schematic('A', and_gate) // Use entire schematic as palette entry!
.map_schematic('X', xor_gate)
.layers(&[&["AX"]]) // Place side-by-side
.build()?;
// Stack multiple copies
let four_bit_adder = SchematicBuilder::new()
.map_schematic('F', full_adder)
.layers(&[&["FFFF"]]) // 4 full-adders in a row
.build()?;
```
See [4-Bit Adder Example](docs/examples/4-bit-adder.md) for a complete hierarchical design.
### CLI Tool
```bash
# Build schematic from text template
cat circuit.txt | schematic-builder -o circuit.litematic
# From file
schematic-builder -i circuit.txt -o circuit.litematic
# Choose format
schematic-builder -i circuit.txt -o circuit.schem --format schem
# Export as mcstructure
schematic-builder -i circuit.txt -o circuit.mcstructure --format mcstructure
```
---
## Advanced Examples
### Setting Blocks with Properties
```js
const schematic = new SchematicWrapper();
schematic.set_block(
0,
1,
0,
"minecraft:lever[facing=east,powered=false,face=floor]"
);
schematic.set_block(
5,
1,
0,
"minecraft:redstone_wire[power=15,east=side,west=side]"
);
```
[More in `examples/rust.md`](examples/rust.md)
### Redstone Circuit Simulation
```js
const simWorld = schematic.create_simulation_world();
simWorld.on_use_block(0, 1, 0); // Toggle lever
simWorld.tick(2);
simWorld.flush();
const isLit = simWorld.is_lit(15, 1, 0); // Check if lamp is lit
```
### High-Level Typed Executor
```rust
use nucleation::{TypedCircuitExecutor, IoType, Value, ExecutionMode};
// Create executor with typed IO
let mut executor = TypedCircuitExecutor::new(world, inputs, outputs);
// Execute with typed values
let mut input_values = HashMap::new();
input_values.insert("a".to_string(), Value::Bool(true));
input_values.insert("b".to_string(), Value::Bool(true));
let result = executor.execute(
input_values,
ExecutionMode::FixedTicks { ticks: 100 }
)?;
// Get typed output
let output = result.outputs.get("result").unwrap();
assert_eq!(*output, Value::Bool(true)); // AND gate result
```
**Supported types**: `Bool`, `U8`, `U16`, `U32`, `I8`, `I16`, `I32`, `Float32`, `Ascii`, `Array`, `Matrix`, `Struct`
See [TypedCircuitExecutor Guide](docs/guide/typed-executor.md) for execution modes, state management, and more.
### 3D Mesh Generation
```rust
use nucleation::{UniversalSchematic, meshing::{MeshConfig, ResourcePackSource}};
// Load schematic and resource pack
let schematic = UniversalSchematic::from_litematic_bytes(&schem_data)?;
let pack = ResourcePackSource::from_file("resourcepack.zip")?;
// Configure meshing
let config = MeshConfig::new()
.with_greedy_meshing(true)
.with_cull_occluded_blocks(true);
// Generate GLB mesh
let result = schematic.to_mesh(&pack, &config)?;
std::fs::write("output.glb", &result.glb_data)?;
// Or USDZ for AR
let usdz = schematic.to_usdz(&pack, &config)?;
// Or raw mesh data for custom rendering
let raw = schematic.to_raw_mesh(&pack, &config)?;
println!("Vertices: {}, Triangles: {}", raw.vertex_count(), raw.triangle_count());
```
---
## Development
```bash
# Build the Rust core
cargo build --release
# Build with simulation support
cargo build --release --features simulation
# Build with meshing support
cargo build --release --features meshing
# Build WASM module (includes simulation)
./build-wasm.sh
# Build Python bindings locally
maturin develop --features python
# Build FFI libs
./build-ffi.sh
# Run tests
cargo test
cargo test --features simulation
cargo test --features meshing
./test-wasm.sh # WASM tests with simulation
# Pre-push verification (recommended before pushing)
./pre-push.sh # Runs all checks that CI runs
```
---
## Documentation
### 📖 Language-Specific Documentation
Choose your language for complete API reference and examples:
- **[Rust Documentation](docs/rust/)** - Complete Rust API reference
- **[JavaScript/TypeScript Documentation](docs/javascript/)** - WASM API for web and Node.js
- **[Python Documentation](docs/python/)** - Python bindings API
- **[C/FFI Documentation](examples/ffi.md)** - C-compatible FFI for PHP, Go, etc.
### 📚 Shared Guides
These guides apply to all languages:
- [SchematicBuilder Guide](docs/shared/guide/schematic-builder.md) - ASCII art and compositional design
- [TypedCircuitExecutor Guide](docs/shared/guide/typed-executor.md) - High-level circuit simulation
- [Circuit API Guide](docs/shared/guide/circuit-api.md) - CircuitBuilder and DefinitionRegion
- [Unicode Palette Reference](docs/shared/unicode-palette.md) - Visual circuit characters
### 🎯 Quick Links
- [Main Documentation Index](docs/) - Overview and comparison
- [Examples Directory](examples/) - Working code examples
---
## License
Licensed under the **GNU AGPL-3.0-only**.
See [`LICENSE`](./LICENSE) for full terms.
Made by [@Nano112](https://github.com/Nano112)
| text/markdown; charset=UTF-8; variant=GFM | null | Nano <nano@schem.at> | null | null | AGPL-3.0-only | minecraft, schematic, parser, voxel | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Pr... | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | maturin/1.12.2 | 2026-02-18T14:43:55.166502 | nucleation-0.1.163.tar.gz | 15,387,929 | 5c/d4/a5030fa50522cec11bcf4685e68a07cbd60eada7eeb9e21d4d58918b6f51/nucleation-0.1.163.tar.gz | source | sdist | null | false | b16456c78fa5d9107262e068418e7bbf | 7b4d5c86d41d37b25f0c2492881950a878ef5484b0ae3c0f7679a3629c4920a0 | 5cd4a5030fa50522cec11bcf4685e68a07cbd60eada7eeb9e21d4d58918b6f51 | null | [] | 261 |
2.4 | wheezy.web | 3.2.1 | A lightweight, high performance, high concurrency WSGI web framework with the key features to build modern, efficient web | # wheezy.web
[](https://github.com/akornatskyy/wheezy.web/actions/workflows/tests.yml)
[](https://coveralls.io/github/akornatskyy/wheezy.web?branch=master)
[](https://wheezyweb.readthedocs.io/en/latest/?badge=latest)
[](https://badge.fury.io/py/wheezy.web)
[wheezy.web](https://pypi.org/project/wheezy.web/) is a lightweight,
[high performance](https://mindref.blogspot.com/2012/09/python-fastest-web-framework.html),
high concurrency [WSGI](http://www.python.org/dev/peps/pep-3333) web
framework with the key features to *build modern, efficient web*:
- MVC architectural pattern
([push](http://en.wikipedia.org/wiki/Web_application_framework#Push-based_vs._pull-based)-based).
- Functionality includes
[routing](https://github.com/akornatskyy/wheezy.routing),
[model update/validation](https://github.com/akornatskyy/wheezy.validation),
[authentication/authorization](https://github.com/akornatskyy/wheezy.security),
[content](https://wheezyhttp.readthedocs.io/en/latest/userguide.html#content-cache)
[caching](https://github.com/akornatskyy/wheezy.caching) with
[dependency](https://wheezycaching.readthedocs.io/en/latest/userguide.html#cachedependency),
xsrf/resubmission protection, AJAX+JSON, i18n (gettext),
middlewares, and more.
- Template engine agnostic (integration with
[jinja2](http://jinja.pocoo.org),
[mako](http://www.makotemplates.org),
[tenjin](http://www.kuwata-lab.com/tenjin/) and
[wheezy.template](https://github.com/akornatskyy/wheezy.template)) plus
[html widgets](https://github.com/akornatskyy/wheezy.html).
It is optimized for performance, well tested and documented.
Resources:
- [source code](https://github.com/akornatskyy/wheezy.web),
[examples](https://github.com/akornatskyy/wheezy.web/tree/master/demos)
([live](http://wheezy.pythonanywhere.com)) and
[issues](https://github.com/akornatskyy/wheezy.web/issues)
tracker are available on
[github](https://github.com/akornatskyy/wheezy.web)
- [documentation](https://wheezyweb.readthedocs.io/en/latest/)
## Install
[wheezy.web](https://pypi.org/project/wheezy.web/) requires
[python](https://www.python.org) version 3.10+. It is independent of operating
system. You can install it from [pypi](https://pypi.org/project/wheezy.web/)
site:
```sh
pip install -U wheezy.web
```
If you run into any issue or have comments, go ahead and add on
[github](https://github.com/akornatskyy/wheezy.web).
| text/markdown | null | Andriy Kornatskyy <andriy.kornatskyy@live.com> | null | null | null | wsgi, web, handler, static, template, mako, tenjin, jinja2, routing, middleware, caching, transforms | [
"Environment :: Web Environment",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"P... | [] | null | null | >=3.10 | [] | [] | [] | [
"wheezy.core>=3.2.3",
"wheezy.caching>=3.2.1",
"wheezy.html>=3.2.1",
"wheezy.http>=3.2.3",
"wheezy.routing>=3.2.1",
"wheezy.security>=3.2.2",
"wheezy.validation>=3.2.1",
"Cython>=3.0; extra == \"cython\"",
"setuptools>=61.0; extra == \"cython\"",
"mako>=1.3.10; extra == \"mako\"",
"tenjin>=1.1.1... | [] | [] | [] | [
"Homepage, https://github.com/akornatskyy/wheezy.web",
"Source, https://github.com/akornatskyy/wheezy.web",
"Issues, https://github.com/akornatskyy/wheezy.web/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T14:43:16.701260 | wheezy_web-3.2.1.tar.gz | 13,335 | c6/13/2c2fae154df844906d9582a85d6ed2287694d03a348ecf4607eb280e8def/wheezy_web-3.2.1.tar.gz | source | sdist | null | false | c8951ca518819f1d5c8b85b669bd9b3a | 5c37101e50e77d3c5197f2a960cb3b42cb6602669d19ac1d23c47e515765caea | c6132c2fae154df844906d9582a85d6ed2287694d03a348ecf4607eb280e8def | MIT | [
"LICENSE"
] | 0 |
2.1 | fcio | 0.9.2 | FlashCam File Format (FCIO) reader for python. | # Installation
Run `python3 -m pip install fcio` to install from the pypi repository.
# Description
`fcio-py` provides a read-only wrapper around the `fcio.c` io library used in `fc250b` based digitizer systems.
The wrapper exposes the `fcio.c` memory fields as closely as possible to standard c-structs using numpy ndarrays or scalars where applicable.
For convenience all supported fcio records are exposed as iterable properties of the base `FCIO` class to preselect records of interest.
# Usage
## Simple code example
The following example opens an fcio file and prints some basic event content to stdout:
```python
from fcio import fcio_open
filename = 'path/to/an/fcio/file'
with fcio_open(filename, extended=True) as io:
print("#evtno run_time utc_unix_sec utc_unix_nsec ntraces bl_mean bl_std")
for event in io.events:
print(f"{event.eventnumber} {event.run_time:.09f} {event.utc_unix_sec} {event.utc_unix_nsec} {event.trace_list.size} {event.fpga_baseline.mean():.1f} {event.fpga_baseline.std():.2f}")
```
## Differences to C usage
- `fcio-py` codifies the assumption that a `FCIOConfig` record must be available and skips all previous records on opening
- reading of zstd or gzip compressed files is possible using suprocesses. This requires `zstd` or `gzip` to be available. If a file ends in `.zst` or `.gz` respectively and the `compression` parameter is default, this will happen automatically.
# Development
Development is best done in a local environment, e.g. using `venv`:
```
# create local environment:
export MY_ENV=fcio_dev
python3 -m venv $MY_ENV
# activate the environment
source $MY_ENV/bin/activate
```
This library depends on `meson-python/meson` as build tool and `Cython`/`numpy` to wrap the `c`-sources. These should be installed automatically wenn running `python3 -m build`.
To allow a more traditional workflow a thin `Makefile` is available which wraps the `python3` and `meson` specific commands.
| text/markdown | null | Simon Sailer <simon.sailer@mpi-hd.mpg.de> | null | null | MPL-2.0 | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
"Operating System :: MacOS",
"Operating System :: PO... | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.23.5"
] | [] | [] | [] | [
"Homepage, https://github.com/FlashCam/fcio-py",
"Repository, https://github.com/FlashCam/fcio-py.git",
"Issues, https://github.com/FlashCam/fcio-py/issues",
"Changelog, https://github.com/FlashCam/fcio-py/blob/main/CHANGELOG.md",
"Documentation, https://flashcam.github.io/fcio-py/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:43:11.090300 | fcio-0.9.2.tar.gz | 159,600 | d4/a2/83e0268587f29c50f352c2afecb28d22134951105ed35562285647fd1973/fcio-0.9.2.tar.gz | source | sdist | null | false | a36a653cf54331c45cfbc71eb3ee2a7c | 152ce42d07543d00bbfd93a5312c440a5e0f06ba64d2164dd12f03a73afe44f6 | d4a283e0268587f29c50f352c2afecb28d22134951105ed35562285647fd1973 | null | [] | 1,212 |
2.4 | fastapi-silk | 1.0.4 | SQL profiler middleware for FastAPI | # FastAPI-Silk
<p align="center">
<strong>Lightweight SQL profiling middleware for FastAPI + SQLAlchemy</strong><br />
Track query count, database time, and total request time per request.
</p>
<p align="center">
<a href="https://pypi.org/project/fastapi-silk/">
<img alt="PyPI" src="https://img.shields.io/pypi/v/fastapi-silk" />
</a>
<a href="https://github.com/Nikolaev3Artem/fastapi-silk/actions/workflows/ci.yml">
<img alt="CI" src="https://github.com/Nikolaev3Artem/fastapi-silk/actions/workflows/ci.yml/badge.svg" />
</a>
<img alt="Python" src="https://img.shields.io/badge/python-3.8%2B-blue" />
<a href="./LICENSE">
<img alt="License" src="https://img.shields.io/github/license/Nikolaev3Artem/fastapi-silk" />
</a>
</p>
<p align="center">
<a href="#installation">Installation</a> |
<a href="#quick-start">Quick Start</a> |
<a href="#how-it-works">How It Works</a> |
<a href="#development">Development</a>
</p>
## Why FastAPI-Silk
| Capability | Details |
| --- | --- |
| SQL instrumentation | `setup_sql_profiler(engine)` hooks into SQLAlchemy engine events (`before_cursor_execute` / `after_cursor_execute`) so SQL executed through that engine is captured per request. |
| Request-level metrics | Adds `X-DB-Queries`, `X-DB-Time`, and `X-Total-Time` response headers. |
| Slow query visibility | Logs queries slower than `0.1s` to stdout for quick diagnostics. |
| Context isolation | Uses `contextvars` for per-request query storage. |
| Minimal setup | One profiler setup call + one middleware registration. |
## Installation
```bash
pip install fastapi-silk
```
PyPI: https://pypi.org/project/fastapi-silk/
## Quick Start
```python
from fastapi import FastAPI
from sqlalchemy import create_engine, text
from fastapi_silk import SQLDebugMiddleware, setup_sql_profiler
app = FastAPI()
engine = create_engine("sqlite:///./app.db")
# Profiles SQL that goes through this engine
setup_sql_profiler(engine)
app.add_middleware(SQLDebugMiddleware)
@app.get("/health")
def health() -> dict[str, bool]:
with engine.connect() as conn:
conn.execute(text("SELECT 1"))
return {"ok": True}
```
Example response headers:
```http
X-DB-Queries: 1
X-DB-Time: 0.0012s
X-Total-Time: 0.0049s
```
## How It Works
```mermaid
flowchart LR
A[Incoming request] --> B[SQLDebugMiddleware starts request timer]
B --> C[Endpoint runs SQL through profiled SQLAlchemy Engine]
C --> D[setup_sql_profiler listeners capture query start/end]
D --> E[Query data stored in request-local context]
E --> F[Middleware sets X-DB-Queries, X-DB-Time, X-Total-Time]
```
## Requirements
| Item | Requirement |
| --- | --- |
| Python | `>=3.8` (CI runs `3.10` through `3.14`) |
| Framework | FastAPI |
| Database layer | SQLAlchemy `Engine` |
## Code Convention / Style
- Use **Ruff** for linting and formatting.
- Use **MyPy** (strict mode) for type checks.
- Keep changes small and typed where possible.
## Repository Layout
```text
fastapi-silk/
|- src/fastapi_silk/
| |- middleware.py
| |- profiler.py
| `- storage.py
|- tests/
| `- profiler/test_profiler.py
|- .github/workflows/
| |- ci.yml
| `- publish.yml
|- pyproject.toml
`- Makefile
```
## Development
Install dev dependencies and run checks:
```bash
uv sync --locked --all-extras --dev
make ci
python -m pytest
```
`make ci` runs:
- Ruff lint/format checks
- MyPy strict type checks
## Contributing
1. Create a branch from `development` (for example, `feature/<name>` or `fix/<name>`).
2. Keep the pull request focused on a single change.
3. Add or update tests when behavior changes.
4. Run checks locally (`make ci` and `python -m pytest`).
5. Open a PR with a clear summary of what changed, why, and how it was tested.
## Scope
FastAPI-Silk focuses on SQL profiling and request timing headers.
It does not provide a built-in dashboard UI.
## License
GNU General Public License v3.0 (GPL-3.0). See [LICENSE](./LICENSE).
| text/markdown | ItzMaze | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"fastapi",
"sqlalchemy"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:42:49.584639 | fastapi_silk-1.0.4.tar.gz | 98,600 | 9b/82/f9411eb41b27d0e0a7bbbb42fa484b599c701714e12d3ac6848251d537f8/fastapi_silk-1.0.4.tar.gz | source | sdist | null | false | 30a662c49b1ca393d78418951f2ca31d | 49294077ef0427fc4d274851bb792d353f6950d82673be961d831f6b2e84277b | 9b82f9411eb41b27d0e0a7bbbb42fa484b599c701714e12d3ac6848251d537f8 | null | [
"LICENSE"
] | 248 |
2.4 | cyberwave-robot-format | 0.1.2 | Universal robot description schema and format converters for Cyberwave | # Cyberwave Robot Format
Universal robot description schema and format converters for Cyberwave.
## Overview
This package provides:
- **Universal Schema**: A canonical representation for robotic assets (`CommonSchema`)
- **Format Importers**: Parse URDF, MJCF into the universal schema
- **Format Exporters**: Export universal schema to URDF, MJCF
- **Validation**: Schema validation and consistency checks
## Structure
```
cyberwave_robot_format/
├── schema.py # Core schema definitions (CommonSchema, Link, Joint, etc.)
├── core.py # Base classes for parsers/exporters
├── urdf/ # URDF parser and exporter
├── mjcf/ # MJCF (MuJoCo) parser and exporter
├── mesh/ # Mesh processing utilities
├── math_utils.py # Math utilities (Vector3, Quaternion, etc.)
└── utils.py # General utilities
```
## Usage
### Parse URDF
```python
from cyberwave_robot_format import CommonSchema
from cyberwave_robot_format.urdf import URDFParser
# Parse a URDF file
parser = URDFParser()
schema = parser.parse("path/to/robot.urdf")
# Validate the schema
issues = schema.validate()
if issues:
print("Validation issues:", issues)
# Access robot components
for link in schema.links:
print(f"Link: {link.name}, mass: {link.mass}")
for joint in schema.joints:
print(f"Joint: {joint.name}, type: {joint.type}")
```
### Parse MJCF (MuJoCo)
```python
from cyberwave_robot_format.mjcf import MJCFParser
# Parse a MuJoCo XML file
parser = MJCFParser()
schema = parser.parse("path/to/robot.xml")
# Access actuators
for actuator in schema.actuators:
print(f"Actuator: {actuator.name}, joint: {actuator.joint}")
```
### Export to MJCF
```python
from cyberwave_robot_format.mjcf import MJCFExporter
# Export schema to MuJoCo format
exporter = MJCFExporter()
exporter.export(schema, "output/robot.xml")
```
### Cloud-Native Scene Export
Export complete scenes with meshes to ZIP files, supporting cloud storage and in-memory conversion:
```python
from cyberwave_robot_format.mjcf import export_mujoco_zip_cloud
from cyberwave_robot_format.urdf import export_urdf_zip_cloud
# Cloud-safe resolver with in-memory DAE→OBJ conversion
def s3_resolver(filename: str) -> tuple[str, bytes] | None:
"""Download from S3 and convert in memory."""
mesh_bytes = s3.get_object(Bucket='meshes', Key=filename)['Body'].read()
if filename.endswith('.dae'):
obj_bytes = convert_dae_to_obj_in_memory(mesh_bytes)
return (filename.replace('.dae', '.obj'), obj_bytes)
return (Path(filename).name, mesh_bytes)
# Export with cloud resolver (mesh_resolver is required)
mujoco_zip = export_mujoco_zip_cloud(
schema,
s3_resolver,
strict_missing_meshes=True # Fail fast on missing meshes
)
urdf_zip = export_urdf_zip_cloud(schema, s3_resolver)
```
## Development
Install in editable mode:
```bash
pip install -e .
```
Run tests:
```bash
pytest
```
## Acknowledgments
This project incorporates portions of code from
[https://github.com/thanhndv212/robot_format_converter](Robot Format Converter) (Apache 2.0 licensed).
Original repository:
https://github.com/thanhndv212/robot_format_converter
We thank the original authors for their initial work.
```bibtex
@software{robot_format_converter,
author = {Nguyen, Thanh},
title = {Robot Format Converter: Universal Robot Description Format Converter},
year = {2025},
url = {https://github.com/thanhndv212/robot_format_converter},
version = {1.0.0}
}
```
| text/markdown | null | Cyberwave Team <info@cyberwave.com> | null | null | Apache-2.0 | robotics, urdf, mjcf, sdf, robot-description, format-conversion | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.20.0",
"lxml>=4.6.0",
"defusedxml>=0.7.0",
"trimesh>=3.0.0",
"pycollada>=0.7.0",
"resolve-robotics-uri-py>=0.3.0",
"cattrs>=23.0.0",
"pytest>=6.0; extra == \"dev\"",
"pytest-cov>=2.10.0; extra == \"dev\"",
"black>=21.0.0; extra == \"dev\"",
"isort>=5.0.0; extra == \"dev\"",
"mypy>=0.... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:42:40.195813 | cyberwave_robot_format-0.1.2.tar.gz | 64,768 | 55/58/715e0550c60b759df997b34d425bad7e300282a00d8fc3f756c0fbca27ee/cyberwave_robot_format-0.1.2.tar.gz | source | sdist | null | false | 383b9c2e854f107da0194058c2c26086 | f8dadadd3a139849df50ea7d79fa86b7abbcebb00678aae6444dad2de9767bd8 | 5558715e0550c60b759df997b34d425bad7e300282a00d8fc3f756c0fbca27ee | null | [
"LICENSE",
"NOTICE"
] | 625 |
2.4 | udata-hydra-csvapi | 0.3.2.dev6 | API for CSV converted by udata-hydra | 
# Tabular API
[](https://app.circleci.com/pipelines/github/datagouv/api-tabular)
[](https://opensource.org/licenses/MIT)
An API service that provides RESTful access to CSV or tabular data converted by [Hydra](https://github.com/datagouv/hydra). This service provides a REST API to access PostgreSQL database tables containing CSV data, offering HTTP querying capabilities, pagination, and data streaming for CSV or tabular resources.
This service is mainly used, developed and maintained by [data.gouv.fr](https://data.gouv.fr) - the France Open Data platform.
The production API is deployed on data.gouv.fr infrastructure at [`https://tabular-api.data.gouv.fr/api`](https://tabular-api.data.gouv.fr/api). See the [product documentation](https://www.data.gouv.fr/dataservices/api-tabulaire-data-gouv-fr-beta/) (in French) for usage details and the [technical documentation](https://tabular-api.data.gouv.fr/api/doc) for API reference.
## 🛠️ Installation & Setup
### 📋 Requirements
- **Python** >= 3.11, < 3.14
- **[uv](https://docs.astral.sh/uv/)** for dependency management
- **Docker & Docker Compose**
### 🧪 Run with a test database
1. **Start the Infrastructure**
Start the test CSV database and test PostgREST container:
```shell
docker compose --profile test up -d
```
The `--profile test` flag tells Docker Compose to start the PostgREST and PostgreSQL services for the test CSV database. This starts PostgREST on port 8080, connecting to the test CSV database. You can access the raw PostgREST API on http://localhost:8080.
2. **Launch the main API proxy**
Install dependencies and start the proxy services:
```shell
uv sync
uv run adev runserver -p8005 api_tabular/tabular/app.py # Api related to apified CSV files by udata-hydra (dev server)
uv run adev runserver -p8006 api_tabular/metrics/app.py # Api related to udata's metrics (dev server)
```
**Note:** For production, use gunicorn with aiohttp worker:
```shell
# Tabular API (port 8005)
uv run gunicorn api_tabular.tabular.app:app_factory \
--bind 0.0.0.0:8005 \
--worker-class aiohttp.GunicornWebWorker \
--workers 4 \
--access-logfile -
# Metrics API (port 8006)
uv run gunicorn api_tabular.metrics.app:app_factory \
--bind 0.0.0.0:8006 \
--worker-class aiohttp.GunicornWebWorker \
--workers 4 \
--access-logfile -
```
The main API provides a controlled layer over PostgREST - exposing PostgREST directly would be too permissive, so this adds a security and access control layer.
3. **Test the API**
Query the API using a `resource_id`. Several test resources are available in the fake database:
- **`aaaaaaaa-1111-bbbb-2222-cccccccccccc`** - Main test resource with 1000 rows
- **`aaaaaaaa-5555-bbbb-6666-cccccccccccc`** - Resource with database indexes
- **`dddddddd-7777-eeee-8888-ffffffffffff`** - Resource allowed for aggregation
- **`aaaaaaaa-9999-bbbb-1010-cccccccccccc`** - Resource with indexes and aggregation allowed
### 🏭 Run with a real Hydra database
To use the API with a real database served by [Hydra](https://github.com/datagouv/hydra) instead of the fake test database:
1. **Start the real Hydra CSV database locally:**
First, you need to have Hydra CSV database running locally. See the [Hydra repository](https://github.com/datagouv/hydra) for instructions on how to set it up. Make sure the Hydra CSV database is accessible on `localhost:5434`.
2. **Start PostgREST pointing to your local Hydra database:**
```shell
docker compose --profile hydra up -d
```
The `--profile hydra` flag tells Docker Compose to start the PostgREST service configured for a local real Hydra CSV database (instead of the test one provided by the docker compose in this repo). By default, this starts PostgREST on port 8080. You can customize the port using the `PGREST_PORT` environment variable:
```shell
# Use default port 8080
docker compose --profile hydra up -d
# Use custom port (e.g., 8081)
PGREST_PORT=8081 docker compose --profile hydra up -d
```
3. **Configure the API to use it:**
```shell
# If using default port 8080
export PGREST_ENDPOINT="http://localhost:8080"
# If using custom port (e.g., 8081)
export PGREST_ENDPOINT="http://localhost:8081"
```
4. **Start the API services:**
```shell
uv sync
uv run adev runserver -p8005 api_tabular/tabular/app.py # Dev server
uv run adev runserver -p8006 api_tabular/metrics/app.py # Dev server
```
**Note:** For production, use gunicorn with aiohttp worker:
```shell
# Tabular API (port 8005)
uv run gunicorn api_tabular.tabular.app:app_factory \
--bind 0.0.0.0:8005 \
--worker-class aiohttp.GunicornWebWorker \
--workers 4 \
--access-logfile -
# Metrics API (port 8006)
uv run gunicorn api_tabular.metrics.app:app_factory \
--bind 0.0.0.0:8006 \
--worker-class aiohttp.GunicornWebWorker \
--workers 4 \
--access-logfile -
```
5. **Use real resource IDs** from your Hydra database instead of the test IDs.
**Note:** Make sure your Hydra CSV database is accessible and the database schema matches the expected structure. The test database uses the `csvapi` schema, while real Hydra databases typically use the `public` schema.
## 📚 API Documentation
### Resource Endpoints
#### Get Resource Metadata
```http
GET /api/resources/{resource_id}/
```
Returns basic information about the resource including creation date, URL, and available endpoints.
**Example:**
```shell
curl http://localhost:8005/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/
```
**Response:**
```json
{
"created_at": "2023-04-21T22:54:22.043492+00:00",
"url": "https://data.gouv.fr/datasets/example/resources/fake.csv",
"links": [
{
"href": "/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/profile/",
"type": "GET",
"rel": "profile"
},
{
"href": "/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/data/",
"type": "GET",
"rel": "data"
},
{
"href": "/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/swagger/",
"type": "GET",
"rel": "swagger"
}
]
}
```
#### Get Resource Profile
```http
GET /api/resources/{resource_id}/profile/
```
Returns the CSV profile information (column types, headers, etc.) generated by [csv-detective](https://github.com/datagouv/csv-detective).
**Example:**
```shell
curl http://localhost:8005/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/profile/
```
**Response:**
```json
{
"profile": {
"header": [
"id",
"score",
"decompte",
"is_true",
"birth",
"liste"
]
},
"...": "..."
}
```
#### Get Resource Data
```http
GET /api/resources/{resource_id}/data/
```
Returns the actual data with support for filtering, sorting, and pagination.
**Example:**
```shell
curl http://localhost:8005/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/data/
```
**Response:**
```json
{
"data": [
{
"__id": 1,
"id": " 8c7a6452-9295-4db2-b692-34104574fded",
"score": 0.708,
"decompte": 90,
"is_true": false,
"birth": "1949-07-16",
"liste": "[0]"
},
...
],
"links": {
"profile": "http://localhost:8005/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/profile/",
"swagger": "http://localhost:8005/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/swagger/",
"next": "http://localhost:8005/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/data/?page=2&page_size=20",
"prev": null
},
"meta": {
"page": 1,
"page_size": 20,
"total": 1000
}
}
```
#### Get Resource Data as CSV
```http
GET /api/resources/{resource_id}/data/csv/
```
Streams the data directly as a CSV file for download.
#### Get Resource Data as JSON
```http
GET /api/resources/{resource_id}/data/json/
```
Streams the data directly as a JSON file for download.
#### Get Swagger Documentation
```http
GET /api/resources/{resource_id}/swagger/
```
Returns OpenAPI/Swagger documentation specific to this resource.
### Query Operators
The data endpoint can be queried with the following operators as query string (replacing `column_name` with the name of an actual column), if the column type allows it (see the swagger for each column's allowed parameters):
#### Filtering Operators
```
# exact
column_name__exact=value
# differs
column_name__differs=value
# is `null`
column_name__isnull
# is not `null`
column_name__isnotnull
# contains
column_name__contains=value
# does not contain (value does not contain)
column_name__notcontains=value
# in (value in list)
column_name__in=value1,value2,value3
# notin (value not in list)
column_name__notin=value1,value2,value3
# less
column_name__less=value
# greater
column_name__greater=value
# strictly less
column_name__strictly_less=value
# strictly greater
column_name__strictly_greater=value
```
#### Sorting
```
# sort by column
column_name__sort=asc
column_name__sort=desc
```
#### Aggregation Operators
> ⚠️ **WARNING**: Aggregation requests are only available for resources that are listed in the `ALLOW_AGGREGATION` list of the config file, which can be seen at the `/api/aggregation-exceptions/` endpoint, and on columns that have an index.
```
# group by values
column_name__groupby
# count values
column_name__count
# mean / average
column_name__avg
# minimum
column_name__min
# maximum
column_name__max
# sum
column_name__sum
```
> **Note**: Passing an aggregation operator (`count`, `avg`, `min`, `max`, `sum`) returns a column that is named `<column_name>__<operator>` (for instance: `?birth__groupby&score__sum` will return a list of dicts with the keys `birth` and `score__sum`).
> ⚠️ **WARNING**: columns that contain **JSON** objects (see the `profile` to know which ones do) **do not support filtering nor aggregation** for now, except `isnull` and `isnotnull`.
#### Pagination
```
page=1 # Page number (default: 1)
page_size=20 # Items per page (default: 20, max: 50)
```
#### Column Selection
```
columns=col1,col2,col3 # Select specific columns only
```
### Example Queries
#### Basic Filtering
```shell
curl http://localhost:8005/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/data/?score__greater=0.9&decompte__exact=13
```
**Returns:**
```json
{
"data": [
{
"__id": 52,
"id": " 5174f26d-d62b-4adb-a43a-c3b6288fa2f6",
"score": 0.985,
"decompte": 13,
"is_true": false,
"birth": "1980-03-23",
"liste": "[0]"
},
{
"__id": 543,
"id": " 8705df7c-8a6a-49e2-9514-cf2fb532525e",
"score": 0.955,
"decompte": 13,
"is_true": true,
"birth": "1965-02-06",
"liste": "[0, 1, 2]"
}
],
"links": {
"profile": "http://localhost:8005/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/profile/",
"swagger": "http://localhost:8005/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/swagger/",
"next": null,
"prev": null
},
"meta": {
"page": 1,
"page_size": 20,
"total": 2
}
}
```
#### Aggregation with Filtering
With filters and aggregators (filtering is always done **before** aggregation, no matter the order in the parameters):
```shell
curl http://localhost:8005/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/data/?decompte__groupby&birth__less=1996&score__avg
```
i.e. `decompte` and average of `score` for all rows where `birth<="1996"`, grouped by `decompte`, returns:
```json
{
"data": [
{
"decompte": 55,
"score__avg": 0.7123333333333334
},
{
"decompte": 27,
"score__avg": 0.6068888888888889
},
{
"decompte": 23,
"score__avg": 0.4603333333333334
},
...
]
}
```
#### Pagination
```shell
curl http://localhost:8005/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/data/?page=2&page_size=30
```
#### Column Selection
```shell
curl http://localhost:8005/api/resources/aaaaaaaa-1111-bbbb-2222-cccccccccccc/data/?columns=id,score,birth
```
### Metrics API
The metrics service provides similar functionality for system metrics:
```shell
# Get metrics data
curl http://localhost:8006/api/{model}/data/
# Get metrics as CSV
curl http://localhost:8006/api/{model}/data/csv/
```
### Health Check
```shell
# Main API health
curl http://localhost:8005/health/
# Metrics API health
curl http://localhost:8006/health/
```
## ⚙️ Configuration
Configuration is handled through TOML files and environment variables. The default configuration is in `api_tabular/config_default.toml`.
### Key Configuration Options
| Option | Default | Description |
|--------|---------|-------------|
| `PGREST_ENDPOINT` | `http://localhost:8080` | PostgREST server URL |
| `SERVER_NAME` | `localhost:8005` | Server name for URL generation |
| `SCHEME` | `http` | URL scheme (http/https) |
| `SENTRY_DSN` | `None` | Sentry DSN for error reporting (optional) |
| `PAGE_SIZE_DEFAULT` | `20` | Default page size |
| `PAGE_SIZE_MAX` | `50` | Maximum allowed page size |
| `BATCH_SIZE` | `50000` | Batch size for streaming |
| `DOC_PATH` | `/api/doc` | Swagger documentation path |
| `ALLOW_AGGREGATION` | `["dddddddd-7777-eeee-8888-ffffffffffff", "aaaaaaaa-9999-bbbb-1010-cccccccccccc"]` | List of resource IDs allowed for aggregation |
### Environment Variables
You can override any configuration value using environment variables:
```shell
export PGREST_ENDPOINT="http://my-postgrest:8080"
export PAGE_SIZE_DEFAULT=50
export SENTRY_DSN="https://your-sentry-dsn"
```
Once the containers are up and running, you can directly query PostgREST on:
`<PGREST_ENDPOINT>/<table_name>?<filters>`
like for example:
`http://localhost:8080/eb7a008177131590c2f1a2ca0?decompte=eq.10`
### Custom Configuration File
Create a `config.toml` file in the project root or set the `CSVAPI_SETTINGS` environment variable:
```shell
export CSVAPI_SETTINGS="/path/to/your/config.toml"
```
## 🧪 Testing
This project uses [pytest](https://pytest.org/) for testing with async support and mocking capabilities. You must have the two test containers running for the tests to run (see [### 🧪 Run with a test database](#-run-with-a-test-database) for setup instructions).
### Running Tests
```shell
# Run all tests
uv run pytest
# Run specific test file
uv run pytest tests/test_api.py
# Run tests with verbose output
uv run pytest -v
# Run tests and show print statements
uv run pytest -s
```
### Tests Structure
- **`tests/test_api.py`** - API endpoint tests (actually pings the running API)
- **`tests/test_config.py`** - Configuration loading tests
- **`tests/test_query.py`** - Query building and processing tests
- **`tests/test_swagger.py`** - Swagger documentation tests (actually pings the running API)
- **`tests/test_utils.py`** - Utility function tests
- **`tests/conftest.py`** - Test fixtures and configuration
### CI/CD Testing
Tests are automatically run in CI/CD. See [`.circleci/config.yml`](.circleci/config.yml) for the complete CI/CD configuration.
## 🤝 Contributing
### 🧹 Code Linting, Formatting and Type Checking
This project follows PEP 8 style guidelines using [Ruff](https://astral.sh/ruff/) for linting and formatting, and [ty](https://docs.astral.sh/ty/) for type checking. **Either running these commands manually or installing the pre-commit hook is required before submitting contributions.**
```shell
# Lint (including import sorting) and format code
uv run ruff check --fix && uv run ruff format
# Type check (ty)
uv run ty check
```
By default `ty check` checks the project root; pass paths to check specific files or directories. See the [ty CLI reference](https://docs.astral.sh/ty/reference/cli/) for options.
### 🔗 Pre-commit Hooks
This repository uses a [pre-commit](https://pre-commit.com/) hook which lint and format code before each commit. **Installing the pre-commit hook is required for contributions.**
**Install pre-commit hooks:**
```shell
uv run pre-commit install
```
The pre-commit hook that automatically:
- Check YAML syntax
- Fix end-of-file issues
- Remove trailing whitespace
- Check for large files
- Run Ruff linting and formatting
### 🧪 Running Tests
**Pull requests cannot be merged unless all CI/CD tests pass.**
Tests are automatically run on every pull request and push to main branch. See [`.circleci/config.yml`](.circleci/config.yml) for the complete CI/CD configuration, and the [🧪 Testing](#-testing) section above for detailed testing commands.
### 🏷️ Releases and versioning
The release process uses the [`tag_version.sh`](tag_version.sh) script to create git tags, GitHub releases and update [CHANGELOG.md](CHANGELOG.md) automatically. Package version numbers are automatically derived from git tags using [setuptools_scm](https://github.com/pypa/setuptools_scm), so no manual version updates are needed in `pyproject.toml`.
**Prerequisites**: [GitHub CLI](https://cli.github.com/) must be installed and authenticated, and you must be on the main branch with a clean working directory.
```bash
# Create a new release
./tag_version.sh <version>
# Example
./tag_version.sh 2.5.0
# Dry run to see what would happen
./tag_version.sh 2.5.0 --dry-run
```
The script automatically:
- Extracts commits since the last tag and formats them for CHANGELOG.md
- Identifies breaking changes (commits with `!:` in the subject)
- Creates a git tag and pushes it to the remote repository
- Creates a GitHub release with the changelog content
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🆘 Support
- **Issues**: [GitHub Issues](https://github.com/datagouv/api-tabular/issues)
- **Discussion**: Use the discussion section at the end of the [production API page](https://www.data.gouv.fr/dataservices/api-tabulaire-data-gouv-fr-beta/)
- **Contact Form**: [Support form](https://support.data.gouv.fr/)
## 🌐 Production Resources
- **Production API**: [`https://tabular-api.data.gouv.fr/api`](https://tabular-api.data.gouv.fr/api)
- **Product Documentation**: [API tabulaire data.gouv.fr (beta)](https://www.data.gouv.fr/dataservices/api-tabulaire-data-gouv-fr-beta/) (in French)
- **Technical Documentation**: [Swagger/OpenAPI docs](https://tabular-api.data.gouv.fr/api/doc)
| text/markdown | null | "data.gouv.fr" <opendatateam@data.gouv.fr> | null | null | null | null | [] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"aiohttp<4.0.0,>=3.13.3",
"aiohttp-cors<1.0.0,>=0.8.1",
"aiohttp-swagger<2.0.0,>=1.0.16",
"gunicorn<24.0.0,>=23.0.0",
"sentry-sdk<3.0.0,>=2.49.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"13","id":"trixie","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T14:42:29.073306 | udata_hydra_csvapi-0.3.2.dev6-py3-none-any.whl | 24,137 | 75/8a/dae89af552f2c96973820d64eb7d8838905ad4dadebc2b28d92e608d6249/udata_hydra_csvapi-0.3.2.dev6-py3-none-any.whl | py3 | bdist_wheel | null | false | 78135ffbc1a548b2cfdaecd40720095e | eafeaf9a7a80283f7c0d1ca3436ca34f3242e31dc4756929c63499d547ca6460 | 758adae89af552f2c96973820d64eb7d8838905ad4dadebc2b28d92e608d6249 | MIT | [
"LICENSE"
] | 81 |
2.4 | docling-serve | 1.13.0 | Running Docling as a service | <p align="center">
<a href="https://github.com/docling-project/docling-serve">
<img loading="lazy" alt="Docling" src="https://github.com/docling-project/docling-serve/raw/main/docs/assets/docling-serve-pic.png" width="30%"/>
</a>
</p>
# Docling Serve
Running [Docling](https://github.com/docling-project/docling) as an API service.
📚 [Docling Serve documentation](./docs/README.md)
- Learning how to [configure the webserver](./docs/configuration.md)
- Get to know all [runtime options](./docs/usage.md) of the API
- Explore useful [deployment examples](./docs/deployment.md)
- And more
> [!NOTE]
> **Migration to the `v1` API.** Docling Serve now has a stable v1 API. Read more on the [migration to v1](./docs/v1_migration.md).
## Getting started
Install the `docling-serve` package and run the server.
```bash
# Using the python package
pip install "docling-serve[ui]"
docling-serve run --enable-ui
# Using container images, e.g. with Podman
podman run -p 5001:5001 -e DOCLING_SERVE_ENABLE_UI=1 quay.io/docling-project/docling-serve
```
The server is available at
- API <http://127.0.0.1:5001>
- API documentation <http://127.0.0.1:5001/docs>
- UI playground <http://127.0.0.1:5001/ui>

Try it out with a simple conversion:
```bash
curl -X 'POST' \
'http://localhost:5001/v1/convert/source' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"sources": [{"kind": "http", "url": "https://arxiv.org/pdf/2501.17887"}]
}'
```
### Container Images
The following container images are available for running **Docling Serve** with different hardware and PyTorch configurations:
#### 📦 Distributed Images
| Image | Description | Architectures | Size |
|-------|-------------|----------------|------|
| [`ghcr.io/docling-project/docling-serve`](https://github.com/docling-project/docling-serve/pkgs/container/docling-serve) <br> [`quay.io/docling-project/docling-serve`](https://quay.io/repository/docling-project/docling-serve) | Base image with all packages installed from the official PyPI index. | `linux/amd64`, `linux/arm64` | 4.4 GB (arm64) <br> 8.7 GB (amd64) |
| [`ghcr.io/docling-project/docling-serve-cpu`](https://github.com/docling-project/docling-serve/pkgs/container/docling-serve-cpu) <br> [`quay.io/docling-project/docling-serve-cpu`](https://quay.io/repository/docling-project/docling-serve-cpu) | CPU-only variant, using `torch` from the PyTorch CPU index. | `linux/amd64`, `linux/arm64` | 4.4 GB |
| [`ghcr.io/docling-project/docling-serve-cu128`](https://github.com/docling-project/docling-serve/pkgs/container/docling-serve-cu128) <br> [`quay.io/docling-project/docling-serve-cu128`](https://quay.io/repository/docling-project/docling-serve-cu128) | CUDA 12.8 build with `torch` from the cu128 index. | `linux/amd64` | 11.4 GB |
| [`ghcr.io/docling-project/docling-serve-cu130`](https://github.com/docling-project/docling-serve/pkgs/container/docling-serve-cu130) <br> [`quay.io/docling-project/docling-serve-cu130`](https://quay.io/repository/docling-project/docling-serve-cu130) | CUDA 13.0 build with `torch` from the cu130 index. | `linux/amd64`, `linux/arm64` | TBD |
> [!IMPORTANT]
> **CUDA Image Tagging Policy**
>
> CUDA-specific images (`-cu128`, `-cu130`) follow PyTorch's CUDA version support lifecycle and are tagged differently from base images:
>
> - **Base images** (`docling-serve`, `docling-serve-cpu`): Tagged with `latest` and `main` for convenience
> - **CUDA images** (`docling-serve-cu*`): **Only tagged with explicit versions** (e.g., `1.12.0`) and `main`
>
> **Why?** CUDA versions are deprecated over time as PyTorch adds support for newer CUDA releases. To avoid accidentally pulling deprecated CUDA versions, CUDA images intentionally exclude the `latest` tag. Always use explicit version tags like:
>
> ```bash
> # ✅ Recommended: Explicit version
> docker pull quay.io/docling-project/docling-serve-cu130:1.12.0
>
> # ❌ Not available for CUDA images
> docker pull quay.io/docling-project/docling-serve-cu130:latest
> ```
#### 🚫 Not Distributed
An image for AMD ROCm 6.3 (`docling-serve-rocm`) is supported but **not published** due to its large size.
To build it locally:
```bash
git clone --branch main git@github.com:docling-project/docling-serve.git
cd docling-serve/
make docling-serve-rocm-image
```
For deployment using Docker Compose, see [docs/deployment.md](docs/deployment.md).
Coming soon: `docling-serve-slim` images will reduce the size by skipping the model weights download.
### Demonstration UI
An easy to use UI is available at the `/ui` endpoint.


## Get help and support
Please feel free to connect with us using the [discussion section](https://github.com/docling-project/docling/discussions).
## Contributing
Please read [Contributing to Docling Serve](https://github.com/docling-project/docling-serve/blob/main/CONTRIBUTING.md) for details.
## References
If you use Docling in your projects, please consider citing the following:
```bib
@techreport{Docling,
author = {Docling Contributors},
month = {1},
title = {Docling: An Efficient Open-Source Toolkit for AI-driven Document Conversion},
url = {https://arxiv.org/abs/2501.17887},
eprint = {2501.17887},
doi = {10.48550/arXiv.2501.17887},
version = {2.0.0},
year = {2025}
}
```
## License
The Docling Serve codebase is under MIT license.
## IBM ❤️ Open Source AI
Docling has been brought to you by IBM.
| text/markdown | null | Michele Dolfi <dol@zurich.ibm.com>, Guillaume Moutier <gmoutier@redhat.com>, Anil Vishnoi <avishnoi@redhat.com>, Panos Vagenas <pva@zurich.ibm.com>, Christoph Auer <cau@zurich.ibm.com>, Peter Staar <taa@zurich.ibm.com> | null | Michele Dolfi <dol@zurich.ibm.com>, Anil Vishnoi <avishnoi@redhat.com>, Panos Vagenas <pva@zurich.ibm.com>, Christoph Auer <cau@zurich.ibm.com>, Peter Staar <taa@zurich.ibm.com> | MIT | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Typing :: Typed",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"... | [] | null | null | >=3.10 | [] | [] | [] | [
"docling~=2.38",
"docling-core>=2.45.0",
"docling-jobkit[kfp,rq,vlm]<2.0.0,>=1.11.0",
"fastapi[standard]<0.130.0",
"httpx~=0.28",
"pydantic~=2.10",
"pydantic-settings~=2.4",
"python-multipart<0.1.0,>=0.0.14",
"typer~=0.12",
"uvicorn[standard]<1.0.0,>=0.29.0",
"websockets<17.0,>=14.0",
"scalar-... | [] | [] | [] | [
"Homepage, https://github.com/docling-project/docling-serve",
"Repository, https://github.com/docling-project/docling-serve",
"Issues, https://github.com/docling-project/docling-serve/issues",
"Changelog, https://github.com/docling-project/docling-serve/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:42:04.207825 | docling_serve-1.13.0.tar.gz | 48,136 | 41/6c/2ed10d17e4c7bad396e7f1c9fca969ec83f02ceaf3d5ab1f36a08c837dea/docling_serve-1.13.0.tar.gz | source | sdist | null | false | 039fe27fae6b2f7c1f7f28d948688d7e | 07057c44e2469a3f319e24b01049ba37410f52be4dd9c4169fc834cd2feca3c9 | 416c2ed10d17e4c7bad396e7f1c9fca969ec83f02ceaf3d5ab1f36a08c837dea | null | [
"LICENSE"
] | 872 |
2.4 | torchrir | 0.8.1 | PyTorch-based room impulse response (RIR) simulation toolkit for static and dynamic scenes. | # TorchRIR
A PyTorch-based room impulse response (RIR) simulation toolkit with a clean API and GPU support.
This project has been developed with substantial assistance from Codex.
> [!WARNING]
> TorchRIR is under active development and may contain bugs or breaking changes.
> Please validate results for your use case.
If you find bugs or have feature requests, please open an issue.
Contributions are welcome.
## Installation
```bash
pip install torchrir
```
## Library Comparison
| Feature | `torchrir` | `gpuRIR` | `pyroomacoustics` | `rir-generator` |
|---|---|---|---|---|
| 🎯 Dynamic Sources | ✅ | 🟡 Single moving source | 🟡 Manual loop | ❌ |
| 🎤 Dynamic Microphones | ✅ | ❌ | 🟡 Manual loop | ❌ |
| 🖥️ CPU | ✅ | ❌ | ✅ | ✅ |
| 🧮 CUDA | ✅ | ✅ | ❌ | ❌ |
| 🍎 MPS | ✅ | ❌ | ❌ | ❌ |
| 📊 Scene Plot | ✅ | ❌ | ✅ | ❌ |
| 🎞️ Dynamic Scene GIF | ✅ | ❌ | 🟡 Manual animation script | ❌ |
| 🗂️ Dataset Build | ✅ | ❌ | ✅ | ❌ |
| 🎛️ Signal Processing | ❌ Scope out | ❌ | ✅ | ❌ |
| 🧱 Non-shoebox Geometry | 🚧 Candidate | ❌ | ✅ | ❌ |
| 🌐 Geometric Acoustics | 🚧 Candidate | ❌ | ✅ | ❌ |
Legend: `✅` native support, `🟡` manual setup, `🚧` candidate (not yet implemented), `❌` unavailable
For detailed notes and equations, see
[Read the Docs: Library Comparisons](https://torchrir.readthedocs.io/en/latest/comparisons.html).
## CUDA CI (GitHub Actions)
- CUDA tests run in `.github/workflows/cuda-ci.yml` on a self-hosted runner with labels:
`self-hosted`, `linux`, `x64`, `cuda`.
- The workflow validates installation via `uv sync --group test`, checks `torch.cuda.is_available()`,
runs `tests/test_device_parity.py` with `-k cuda`, and then tries to install
`gpuRIR` from GitHub.
- If `gpuRIR` installs successfully, the workflow runs `tests/test_compare_gpurir.py`
(static + dynamic RIR comparisons). If installation fails, those comparison tests
are skipped without failing the whole CUDA CI job.
## Examples
- `examples/static.py`: fixed sources and microphones with configurable mic count (default: binaural).
`uv run python examples/static.py --plot`
- `examples/dynamic_src.py`: moving sources, fixed microphones.
`uv run python examples/dynamic_src.py --plot`
- `examples/dynamic_mic.py`: fixed sources, moving microphones.
`uv run python examples/dynamic_mic.py --plot`
- `examples/cli.py`: unified CLI for static/dynamic scenes with JSON/YAML configs.
`uv run python examples/cli.py --mode static --plot`
- `examples/build_dynamic_dataset.py`: small dynamic dataset generation script (CMU ARCTIC / LibriSpeech; fixed room/mics, randomized source motion).
`uv run python examples/build_dynamic_dataset.py --dataset cmu_arctic --num-scenes 4 --num-sources 2`
- `torchrir.datasets.dynamic_cmu_arctic`: oobss-compatible dynamic CMU ARCTIC builder CLI.
`python -m torchrir.datasets.dynamic_cmu_arctic --cmu-root datasets/cmu_arctic --n-scenes 2 --overwrite-dataset`
- `examples/benchmark_device.py`: CPU/GPU benchmark for RIR simulation.
`uv run python examples/benchmark_device.py --dynamic`
## Dataset Notices
- For dataset attribution and redistribution notes, see
[THIRD_PARTY_DATASETS.md](THIRD_PARTY_DATASETS.md).
## Dataset API Quick Guide
- `torchrir.datasets.CmuArcticDataset(root, speaker=..., download=...)`
- Accepted `speaker`: `aew`, `ahw`, `aup`, `awb`, `axb`, `bdl`, `clb`, `eey`, `fem`, `gka`, `jmk`, `ksp`, `ljm`, `lnh`, `rms`, `rxr`, `slp`, `slt`
- Invalid `speaker` raises `ValueError`.
- Missing local files with `download=False` raises `FileNotFoundError`.
- `torchrir.datasets.LibriSpeechDataset(root, subset=..., speaker=..., download=...)`
- Accepted `subset`: `dev-clean`, `dev-other`, `test-clean`, `test-other`, `train-clean-100`, `train-clean-360`, `train-other-500`
- Invalid `subset` raises `ValueError`.
- Missing subset/speaker paths with `download=False` raise `FileNotFoundError`.
- `torchrir.datasets.build_dynamic_cmu_arctic_dataset(...)`
- Builds oobss-compatible scene folders with `mixture.wav`, `source_XX.wav`, `metadata.json`, and `source_info.json`.
- Static layout images (`room_layout_2d.png`, `room_layout_3d.png`) and optional layout videos (`room_layout_2d.mp4`, `room_layout_3d.mp4`) are generated, with source-index annotations by default.
- Default behavior includes `n_sources=3`, moving speed range `0.3-0.8 m/s`, and motion profile ratios `0-35%`, `35-65%`, `65-100%`.
- Local-only (no download) example:
```python
from pathlib import Path
from torchrir.datasets import CmuArcticDataset, LibriSpeechDataset
cmu = CmuArcticDataset(Path("datasets/cmu_arctic"), speaker="bdl", download=False)
libri = LibriSpeechDataset(
Path("datasets/librispeech"),
subset="train-clean-100",
speaker="103",
download=False,
)
```
- Full dataset usage details, expected directory layout, and invalid-input handling:
[Read the Docs: Datasets](https://torchrir.readthedocs.io/en/latest/datasets.html)
## Core API Overview
- Geometry: `Room`, `Source`, `MicrophoneArray`
- Scene models: `StaticScene`, `DynamicScene` (`Scene` is deprecated)
- Static RIR: `torchrir.sim.simulate_rir`
- Dynamic RIR: `torchrir.sim.simulate_dynamic_rir`
- Simulator object: `torchrir.sim.ISMSimulator(max_order=..., tmax=... | nsample=...)`
- Dynamic convolution: `torchrir.signal.DynamicConvolver`
- Audio I/O:
- wav-specific: `torchrir.io.load_wav`, `torchrir.io.save_wav`, `torchrir.io.info_wav`
- backend-supported formats: `torchrir.io.load_audio`, `torchrir.io.save_audio`, `torchrir.io.info_audio`
- metadata-preserving: `torchrir.io.AudioData`, `torchrir.io.load_audio_data`
- Metadata export: `torchrir.io.build_metadata`, `torchrir.io.save_metadata_json`
## Module Layout (for contributors)
- `torchrir.sim`: simulation backends (ISM implementation lives under `torchrir.sim.ism`)
- `torchrir.signal`: convolution utilities and dynamic convolver
- `torchrir.geometry`: array geometries, sampling, trajectories
- `torchrir.viz`: plotting and GIF/MP4 animation helpers
- Default plot style follows SciencePlots Grid (`science` + `grid`).
- `torchrir.models`: room/scene/result data models
- `torchrir.io`: audio I/O and metadata serialization (`*_wav` for wav-only, `*_audio` for backend-supported formats)
- `torchrir.util`: shared math/tensor/device helpers
- `torchrir.logging`: logging utilities
- `torchrir.config`: simulation configuration objects
## Design Notes
- Scene typing is explicit: use `StaticScene` for fixed geometry and `DynamicScene` for trajectory-based simulation.
- `DynamicScene` accepts tensor-like trajectories (e.g., lists) and normalizes them to tensors internally.
- `Scene` remains as a backward-compatibility wrapper and emits `DeprecationWarning`.
- `Scene.validate()` performs validation without emitting additional deprecation warnings.
- `ISMSimulator` fails fast when `max_order` or `tmax` conflicts with the provided `SimulationConfig`.
- Model dataclasses are frozen, but tensor payloads remain mutable (shallow immutability).
- `torchrir.load` / `torchrir.save` and `torchrir.io.load` / `save` / `info` are deprecated compatibility aliases.
```python
from torchrir import MicrophoneArray, Room, Source
from torchrir.sim import simulate_rir
from torchrir.signal import DynamicConvolver
room = Room.shoebox(size=[6.0, 4.0, 3.0], fs=16000, beta=[0.9] * 6)
sources = Source.from_positions([[1.0, 2.0, 1.5]])
mics = MicrophoneArray.from_positions([[2.0, 2.0, 1.5]])
rir = simulate_rir(room=room, sources=sources, mics=mics, max_order=6, tmax=0.3)
# For dynamic scenes, compute rirs with torchrir.sim.simulate_dynamic_rir and convolve:
# y = DynamicConvolver(mode="trajectory").convolve(signal, rirs)
```
For detailed documentation:
[Read the Docs](https://torchrir.readthedocs.io/en/latest/)
## Future Work
- Advanced room geometry pipeline beyond shoebox rooms (e.g., irregular polygons/meshes and boundary handling).
Motivation: [pyroomacoustics#393](https://github.com/LCAV/pyroomacoustics/issues/393), [pyroomacoustics#405](https://github.com/LCAV/pyroomacoustics/issues/405)
- General reflection/path capping controls (e.g., first-K, strongest-K, or energy-threshold-based path selection).
Motivation: [pyroomacoustics#338](https://github.com/LCAV/pyroomacoustics/issues/338)
- Microphone hardware response modeling (frequency response, sensitivity, and self-noise).
Motivation: [pyroomacoustics#394](https://github.com/LCAV/pyroomacoustics/issues/394)
- Near-field speech source modeling for more realistic close-talk scenarios.
Motivation: [pyroomacoustics#417](https://github.com/LCAV/pyroomacoustics/issues/417)
- Integrated 3D spatial response visualization (e.g., array/directivity beam-pattern rendering).
Motivation: [pyroomacoustics#397](https://github.com/LCAV/pyroomacoustics/issues/397)
## Related Libraries
- [gpuRIR](https://github.com/DavidDiazGuerra/gpuRIR)
- [Cross3D](https://github.com/DavidDiazGuerra/Cross3D)
- [pyroomacoustics](https://github.com/LCAV/pyroomacoustics)
- [rir-generator](https://github.com/audiolabs/rir-generator)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.12,>=3.11 | [] | [] | [] | [
"numpy>=2.2.6",
"torch>=2.10.0",
"scipy>=1.14.0",
"soundfile>=0.13.1",
"scienceplots>=2.1.1"
] | [] | [] | [] | [
"Repository, https://github.com/taishi-n/torchrir"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T14:41:00.670377 | torchrir-0.8.1-py3-none-any.whl | 81,922 | 75/57/792b08562dad09d54723834dea0b935bab43b09a99e893ae4b0428413842/torchrir-0.8.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 9e1aa50027eb3cec5406e00243233500 | 3932714910c3aa397c693154281392cc7063ca228cf8f0e4e86a3b41fc959245 | 7557792b08562dad09d54723834dea0b935bab43b09a99e893ae4b0428413842 | null | [
"LICENSE",
"NOTICE"
] | 244 |
2.4 | vmx-aps | 3.0.0 | APS command line wrapper | # vmx-aps - APS command line wrapper
`vmx-aps` is a command-line utility for interacting with the [XTD Guardsquare](https://www.guardsquare.com/xtd) API. It simplifies operations like uploading apps, checking versions, and managing protections using a simple CLI.
---
## What's new in 3.0.0
- The next features are only available for android protection
- Added commands to handle XTD configuration
- Added optional parameter to the "protect" command:
- --build-configuration
**Note:** The full changelog is included in the distribution (`CHANGELOG.md`).
---
## 🛠️ Installation
```bash
$ pip install vmx-aps
```
---
## 🚀 Usage
All commands require an API key file (JSON format) obtained from the XTD portal -> Settings -> API Key Manager.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json <command> [args...]
```
Example:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get_version
{
"apkdefender": {
"preanalysis-defaults": "20250401.json",
"raven-apkdefender": "v4.8.3_20250401",
"raven-template": "20250401.json",
"sail": "1.46.4",
"version": "4.8.3"
},
"apsdefenders": "2025.12.3-prod",
"iosdefender": {
"sail": "1.46.4",
"version": "7.2.1"
},
"version": "2025.12.0-prod"
}
```
---
## 🔐 Authentication
All requests are authenticated via an API key in a local JSON file. The file must contain:
```json
{
"appClientId": "your-app-client-id",
"appClientSecret": "your-app-client-secret",
"encodedKey": "your-encoded-key"
}
```
The CLI tool reads only the value of `encodedKey`. You can also pass that value as an argument directly using `--api-key` or `-a`:
```bash
$ vmx-aps -a "your-encoded-key" get_version
{
"apkdefender": {
"preanalysis-defaults": "20250401.json",
"raven-apkdefender": "v4.8.3_20250401",
"raven-template": "20250401.json",
"sail": "1.46.4",
"version": "4.8.3"
},
"apsdefenders": "2025.12.3-prod",
"iosdefender": {
"sail": "1.46.4",
"version": "7.2.1"
},
"version": "2025.12.0-prod"
}
```
---
## 📚 Available Commands
### `protect`
Performs app protection on a given mobile app binary (Android or iOS). This is a high-level command that uploads an app, runs protection, waits for completion, and downloads the protected binary.
⚠️ This process may take **several minutes** to complete.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json protect --file path/to/app.apk
```
#### 🔧 Options
- `--file` **(required)**
Path to the input binary. Supported formats:
- Android: `.apk`, `.aab`
- iOS: zipped `.xcarchive` folder
- `--subscription-type` _(optional)_
Specifies the subscription type.
Choices: `["APPSHIELD_PLATFORM", "COUNTERSPY_PLATFORM", "XTD_PLATFORM"]`
- `--signing-certificate` _(optional see **NOTE**)_
Path to a PEM-encoded signing certificate file, used for Android signature verification or certificate pinning.
**NOTE**: this is mandatory for Android protection.
- `--secondary-signing-certificate` _(optional)_
Path to a PEM-encoded secondary signing certificate file, used for Android signature verification or certificate pinning.
- `--mapping-file` _(optional)_
Path to the Android R8/ProGuard mapping file for symbol preservation during obfuscation.
- `--build-protection-configuration` _(optional)_
Path to the build protection configuration JSON file.
- `--build-certificate-pinning-configuration` _(optional)_
Path to the JSON file with the build certificate pinning configuration.
- `--build-configuration` _(optional see **NOTE**)_
Path to the JSON file with XTD configurations.
**NOTE**: Only for Android protection.
#### ✅ Example
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json protect \
--file ~/test-app.apk \
--subscription-type XTD_PLATFORM \
--signing-certificate ~/cert.pem \
--secondary-signing-certificate ~/secondary-cert.pem \
--mapping-file ~/proguard.map \
--build-protection-configuration ~/buildProtConfig.json \
--build-certificate-pinning-configuration ~/certPinningConfig.json \
--build-configuration ~/xtd-conf.json
```
---
### `list-applications`
Lists applications associated with your XTD account. You can filter the results by application ID, group, or subscription type.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json list-applications
```
#### 🔧 Options
- `--application-id` _(optional)_
If provided, returns details for a specific application matching the given ID.
- `--group` _(optional)_
Filters applications to those belonging to the specified group.
- `--subscription-type` _(optional)_
Specifies the subscription type.
Choices: `["APPSHIELD_PLATFORM", "COUNTERSPY_PLATFORM", "XTD_PLATFORM"]`
#### ✅ Examples
List all applications:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json list-applications
```
List a specific application:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json list-applications --application-id c034f8be-b41d-4799-ab3b-e96f2e60c2ae
```
List applications from a group:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json list-applications --group my-group
```
List applications with a specific subscription type:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json list-applications --subscription-type XTD_PLATFORM
```
---
### `add-application`
Registers a new application with your XTD account. You must specify the platform, application name, and package ID. You can also set access restrictions and grouping options.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json add-application --os android --name "My App" --package-id com.example.myapp
```
#### 🔧 Options
- `--os` **(required)**
Target operating system.
Choices: `android`, `ios`
- `--name` **(required)**
Friendly display name for the application.
- `--package-id` **(required)**
The app's unique package ID (e.g., `com.example.myapp`).
- `--group` _(optional)_
Application group identifier, useful for managing related apps.
- `--subscription-type` _(optional)_
Specifies the subscription type.
Choices: `["APPSHIELD_PLATFORM", "COUNTERSPY_PLATFORM", "XTD_PLATFORM"]`
- `--private` _(optional)_
Restrict visibility and access to this app. Implies both `--no-upload` and `--no-delete`.
- `--no-upload` _(optional)_
Prevent other users from uploading new versions of this app.
- `--no-delete` _(optional)_
Prevent other users from deleting builds of this app.
#### ✅ Examples
Add a public Android app:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json add-application \
--os android \
--name "My App" \
--package-id com.example.myapp
```
Add a private iOS app to a group:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json add-application \
--os ios \
--name "iOS Secure App" \
--package-id com.example.iosapp \
--group enterprise-apps \
--private
```
---
### `update-application`
Updates the properties of an existing application registered in your XTD account. You can change the app's name and its access permissions.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json update-application --application-id 12345 --name "New App Name"
```
#### 🔧 Options
- `--application-id` **(required)**
ID of the application to be updated. This value cannot be changed.
- `--name` **(required)**
New friendly name for the application.
- `--private` _(optional)_
Restrict the application from being visible to other users. Implies `--no-upload` and `--no-delete`.
- `--no-upload` _(optional)_
Prevent other users from uploading new builds for this app.
- `--no-delete` _(optional)_
Prevent other users from deleting builds for this app.
#### ✅ Examples
Update app name:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json update-application \
--application-id 12345 \
--name "My Renamed App"
```
Update app name and make it private:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json update-application \
--application-id 12345 \
--name "Secure App" \
--private
```
---
### `delete-application`
Deletes an application from your XTD account, including **all associated builds**. This operation is irreversible.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-application --application-id 12345
```
#### 🔧 Options
- `--application-id` **(required)**
The ID of the application you want to delete.
#### ⚠️ Warning
This operation will permanently delete:
- The application record
- All uploaded builds for the application
Make sure you have backups or exports of important data before running this command.
#### ✅ Example
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-application --application-id 12345
```
---
### `set-signing-certificate`
Sets the signing certificate for a specific Android application. The certificate must be in PEM format.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-signing-certificate --application-id 12345 --file cert.pem
```
#### 🔧 Options
- `--application-id` **(required)**
The ID of the application to update.
- `--file` **(required)**
Path to the PEM-encoded Android certificate file.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-signing-certificate \
--application-id 12345 \
--file ~/certs/my-cert.pem
```
---
### `delete-signing-certificate`
Deletes the signing certificate for a specific Android application.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-signing-certificate --application-id 12345
```
#### 🔧 Options
- `--application-id` **(required)**
The ID of the application to update.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-signing-certificate \
--application-id 12345
```
---
### `set-secondary-signing-certificate`
Sets the secondary signing certificate for a specific Android application. The certificate must be in PEM format.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-secondary-signing-certificate --application-id 12345 --file secondary-cert.pem
```
#### 🔧 Options
- `--application-id` **(required)**
The ID of the application to update.
- `--file` **(required)**
Path to the PEM-encoded Android (secondary) certificate file.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-secondary-signing-certificate \
--application-id 12345 \
--file ~/certs/my-secondary-cert.pem
```
---
### `delete-secondary-signing-certificate`
Deletes the secondary signing certificate for a specific Android application.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-secondary-signing-certificate --application-id 12345
```
#### 🔧 Options
- `--application-id` **(required)**
The ID of the application to update.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-secondary-signing-certificate \
--application-id 12345
```
---
### `set-mapping-file`
Associates an R8/ProGuard mapping file with a specific Android build. This improves symbol readability and debugging of protected builds.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-mapping-file --build-id 98765 --file proguard.map
```
#### 🔧 Options
- `--build-id` **(required)**
ID of the Android build to associate the mapping file with.
- `--file` **(required)**
Path to the R8/ProGuard mapping file.
#### ✅ Example
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-mapping-file \
--build-id 98765 \
--file ~/builds/proguard.map
```
---
### `set-protection-configuration`
Sets the protection configuration for an application.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-protection-configuration --application-id 12345 --file protConfig.json
```
#### 🔧 Options
- `--application-id` **(required)**
The ID of the application to update.
- `--file` **(required)**
Path to the protection configuration JSON file.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-protection-configuration \
--application-id 12345 \
--file ~/protConfig.json
```
---
### `delete-protection-configuration`
Deletes the protection configuration for an application.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-protection-configuration --application-id 12345
```
#### 🔧 Options
- `--application-id` **(required)**
The ID of the application to update.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-protection-configuration \
--application-id 12345
```
---
### `set-[application|build]-configuration`
Sets the XTD configuration for application or build.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-application-configuration --application-id 12345 --file xtd-config.json
or
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-build-configuration --build-id 12345 --file xtd-config.json
```
#### 🔧 Options
- `--application-id` **(required)** __if using set-application-configuration__
The ID of the application to update.
- `--build-id` **(required)** __if using set-build-configuration__
The ID of the build to update.
- `--file` **(required)**
Path to the JSON file with the pinned certificate(s).
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-application-configuration \
--application-id 12345 \
--file ~/certPinningConfig.json
```
---
### `get-[application|build]-configuration`
Fetch the XTD configuration for application or build.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-application-configuration --application-id 12345
or
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-build-configuration --build-id 12345
```
#### 🔧 Options
- `--application-id` **(required)** __if using get-application-configuration__
The ID of the application to query.
- `--build-id` **(required)** __if using get-build-configuration__
The ID of the build to query.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-application-configuration \
--application-id 12345
```
---
### `delete-[application|build]-configuration`
Delete the XTD configuration for application or build.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-application-configuration --application-id 12345
or
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-build-configuration --build-id 12345
```
#### 🔧 Options
- `--application-id` **(required)** __if using get-application-configuration__
The ID of the application to query.
- `--build-id` **(required)** __if using get-build-configuration__
The ID of the build to query.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-build-configuration \
--build-id 12345
```
---
### `set-certificate-pinning-configuration`
Sets the certificate pinning configuration for an application.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-certificate-pinning-configuration --application-id 12345 --file certPinningConfig.json
```
#### 🔧 Options
- `--application-id` **(required)**
The ID of the application to update.
- `--file` **(required)**
Path to the JSON file with the pinned certificate(s).
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-certificate-pinning-configuration \
--application-id 12345 \
--file ~/certPinningConfig.json
```
---
### `delete-certificate-pinning-configuration`
Deletes the certificate pinning configuration for an application.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-certificate-pinning-configuration --application-id 12345
```
#### 🔧 Options
- `--application-id` **(required)**
The ID of the application to update.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-certificate-pinning-configuration \
--application-id 12345
```
---
### `get-certificate-pinning-configuration`
Gets the certificate pinning configuration for an application.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-certificate-pinning-configuration --application-id 12345
```
#### 🔧 Options
- `--application-id` **(required)**
The ID of the application to retrieve.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-certificate-pinning-configuration \
--application-id 12345
```
---
### `set-build-certificate-pinning-configuration`
Sets the certificate pinning configuration for a build.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-build-certificate-pinning-configuration --build-id 12345 --file certPinningConfig.json
```
#### 🔧 Options
- `--build-id` **(required)**
The ID of the build to update.
- `--file` **(required)**
Path to the JSON file with the pinned certificate(s).
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-build-certificate-pinning-configuration \
--build-id 12345 \
--file ~/certPinningConfig.json
```
---
### `delete-build-certificate-pinning-configuration`
Deletes the certificate pinning configuration for a build.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-build-certificate-pinning-configuration --build-id 12345
```
#### 🔧 Options
- `--build-id` **(required)**
The ID of the build to update.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-build-certificate-pinning-configuration \
--build-id 12345
```
---
### `get-certificate-pinning-configuration`
Gets the certificate pinning configuration for a build.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-build-certificate-pinning-configuration --build-id 12345
```
#### 🔧 Options
- `--build-id` **(required)**
The ID of the build to retrieve.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-build-certificate-pinning-configuration \
--build-id 12345
```
---
### `list-builds`
Lists build artifacts associated with applications in your XTD account. You can filter builds by application ID, build ID, or subscription type.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json list-builds
```
#### 🔧 Options
- `--application-id` _(optional)_
Returns builds associated with the given application.
- `--build-id` _(optional)_
Returns a single build identified by this build ID.
- `--subscription-type` _(optional)_
Specifies the subscription type.
Choices: `["APPSHIELD_PLATFORM", "COUNTERSPY_PLATFORM", "XTD_PLATFORM"]`
#### ✅ Examples
List all builds:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json list-builds
```
List builds for a specific application:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json list-builds --application-id 12345
```
Get a specific build by ID:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json list-builds --build-id 98765
```
Filter builds by subscription:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json list-builds --subscription-type XTD_PLATFORM
```
---
### `add-build`
Uploads a new mobile app build (Android `.apk` or iOS `.xcarchive`) to a registered application in your XTD account.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json add-build --application-id 12345 --file path/to/app.apk
```
#### 🔧 Options
- `--application-id` **(required)**
The ID of the application the build belongs to.
- `--file` **(required)**
Path to the build file. Supported formats:
- Android: `.apk`
- iOS: zipped `.xcarchive` folder
- `--subscription-type` _(optional)_
Specifies the subscription type.
Choices: `["APPSHIELD_PLATFORM", "COUNTERSPY_PLATFORM", "XTD_PLATFORM"]`
#### ✅ Examples
Upload an Android build:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json add-build \
--application-id 12345 \
--file ~/apps/my-app.apk
```
Upload an iOS build with subscription tier:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json add-build \
--application-id 12345 \
--file ~/apps/my-ios-app.xcarchive.zip \
--subscription-type XTD_PLATFORM
```
---
### `delete-build`
Deletes a specific build from your XTD account. This action is irreversible and will remove the associated protected binary.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-build --build-id 98765
```
#### 🔧 Options
- `--build-id` **(required)**
The ID of the build you want to delete.
#### ⚠️ Warning
This command will permanently delete the specified build, including any associated protection artifacts.
#### ✅ Example
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-build --build-id 98765
```
---
### `set-build-protection-configuration`
Sets the protection configuration for a build.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-build-protection-configuration --build-id 12345 --file protConfig.json
```
#### 🔧 Options
- `--build-id` **(required)**
The ID of the build to update.
- `--file` **(required)**
Path to the build protection configuration JSON file.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json set-build-protection-configuration \
--build-id 12345 \
--file ~/protConfig.json
```
---
### `delete-build-protection-configuration`
Deletes the protection configuration for a build.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-build-protection-configuration --build-id 12345
```
#### 🔧 Options
- `--build-id` **(required)**
The ID of the build to update.
#### ✅ Examples
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json delete-build-protection-configuration \
--build-id 12345
```
---
### `protect-start`
Initiates the protection process for a build that was previously uploaded to your XTD account. This starts the backend protection job.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json protect-start --build-id 98765
```
#### 🔧 Options
- `--build-id` **(required)**
ID of the build to be protected.
#### ✅ Example
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json protect-start --build-id 98765
```
Use `protect-get-status` to monitor the progress of this protection job after initiation.
---
### `protect-get-status`
Retrieves the current status of a protection job for a specific build. This includes progress updates, completion, or failure states.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json protect-get-status --build-id 98765
```
#### 🔧 Options
- `--build-id` **(required)**
ID of the build whose protection status you want to check.
#### ✅ Example
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json protect-get-status --build-id 98765
```
Use this command after starting a protection job with `protect-start` to monitor its progress.
---
### `protect-cancel`
Cancels an ongoing protection job for a specific build in your XTD account. This can be used if the protection was started by mistake or is taking too long.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json protect-cancel --build-id 98765
```
#### 🔧 Options
- `--build-id` **(required)**
ID of the build whose protection job should be cancelled.
#### ✅ Example
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json protect-cancel --build-id 98765
```
Use this command if you need to abort a protection job started with `protect-start`.
---
### `protect-download`
Downloads a protected binary that was previously processed in your XTD account.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json protect-download --build-id 98765
```
#### 🔧 Options
- `--build-id` **(required)**
ID of the build whose protected output you want to download.
#### ✅ Example
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json protect-download --build-id 98765
```
Use this after confirming a successful protection job with `protect-get-status`.
---
### `get-account-info`
Retrieves information about the current user and their associated organization (customer) from XTD.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-account-info
```
#### 🔧 Options
This command does not accept any additional arguments beyond the global `--api-key-file`.
#### ✅ Example
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-account-info
```
Returns details such as:
- Organization (customer) name and ID
- List of subscriptions
- User name and role
---
### `display-application-package-id`
Extracts and displays the application package ID from an input file (APK or XCARCHIVE).
Useful when preparing to register an app using `add-application`.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json display-application-package-id --file path/to/app.apk
```
#### 🔧 Options
- `--file` **(required)**
Path to the input file:
- Android: `.apk`
- iOS: `.xcarchive` folder (typically zipped)
#### ✅ Examples
Display package ID from an APK:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json display-application-package-id --file my-app.apk
```
Display package ID from an iOS XCARCHIVE:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json display-application-package-id --file my-ios-app.xcarchive.zip
```
---
### `get-sail-config`
Retrieves the SAIL configuration for a specified platform and version.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-sail-config --os android
```
#### 🔧 Options
- `--os` **(required)**
Operating system to retrieve the SAIL config for.
Choices: `android`, `ios`
- `--version` _(optional)_
Specific SAIL version to retrieve configuration for. If omitted, retrieves the latest available version.
#### ✅ Examples
Get the latest SAIL config for Android:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-sail-config --os android
```
Get a specific version of SAIL config for iOS:
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-sail-config --os ios --version 1.2.3
```
---
### `get-version`
Retrieves the current version information for XTD services and components. This includes platform-specific defender versions, SAIL versions, and templates.
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-version
```
#### 🔧 Options
This command does not take any additional options beyond the global `--api-key-file`.
#### ✅ Example
```bash
$ vmx-aps --api-key-file ~/Downloads/api-key.json get-version
```
**Example Output:**
```json
{
"apkdefender": {
"preanalysis-defaults": "20250401.json",
"raven-apkdefender": "v4.8.3_20250401",
"raven-template": "20250401.json",
"sail": "1.46.4",
"version": "4.8.3"
},
"apsdefenders": "2025.12.3-prod",
"iosdefender": {
"sail": "1.46.4",
"version": "7.2.1"
},
"version": "2025.12.0-prod"
}
```
Use this command to verify deployed defender versions.
| text/markdown | Guardsquare nv. | noreply@protectmyapp.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | null | [] | [] | [] | [
"python-dateutil",
"requests",
"pyaxmlparser",
"backoff",
"coloredlogs"
] | [] | [] | [] | [] | twine/6.0.1 CPython/3.14.3 | 2026-02-18T14:40:51.640979 | vmx_aps-3.0.0.tar.gz | 28,687 | 00/a3/2e99823233d023b4a35ba28ebef68b3aefffbfa848ca9e48caef3d13c1b5/vmx_aps-3.0.0.tar.gz | source | sdist | null | false | 7636294a04cc541273c9aa95524929aa | e5c0599abbac9a76c8e19d7ebd169f139b1e80f495e73c4ffb267f315dfbec83 | 00a32e99823233d023b4a35ba28ebef68b3aefffbfa848ca9e48caef3d13c1b5 | null | [] | 297 |
2.4 | orxaq | 0.2.0 | Causal intelligence operating system for credit risk | # Orxaq
[](LICENSE)
[](https://python.org)
[]()
[]()
**Cognitive causal operating system for credit risk.**
## What is Orxaq?
Credit risk models have a causality problem. Correlation-based models (logistic regression, gradient boosting) learn *what co-occurs* but not *what causes what*. When regulators ask "why did this borrower default?" or "what happens to your portfolio under a 300bp rate shock?", correlation-based models can only extrapolate from training data. They cannot reason about interventions, confounders, or counterfactuals.
Orxaq replaces correlation with causation. It represents the credit economy as a directed acyclic graph (the World DAG) where every edge is a causal mechanism, not a statistical association. PD models are fitted along causal paths. ECL forecasts propagate shocks through the DAG. Fair lending analysis decomposes protected-attribute effects into direct and indirect (through legitimate mediators like employment history).
This design directly addresses regulatory requirements. SR 11-7 demands "conceptual soundness" --- a model grounded in economic theory, not just goodness-of-fit. CECL requires "reasonable and supportable" lifetime loss forecasts. CCAR/DFAST require stress testing under hypothetical scenarios that have never occurred in training data. A causal model handles all three because it models *mechanisms*, not *patterns*.
## Architecture
Orxaq uses a five-ring architecture. Each ring depends only on rings with lower numbers. Ring 1 (Kernel) has zero external dependencies --- stdlib only.
```
Ring 5: Experiences CLI, Dashboard, API Server, DAG Editor
Ring 4: Orchestration LLM Providers, Registry, Router
Ring 3: Intelligence PC, GES, Consensus, D-Sep, Simulator
Ring 2: Data Fabric Connectors, Schema, Quality Gates, Lineage
Ring 2: Credit Risk Ontology, PD Model, CECL, Scenarios, Fair Lending
Ring 1: Kernel WorldDAG, Types, Audit Log, Crypto, Plugins
```
- **Ring 1 --- Kernel**: The type system (`Entity`, `CausalEdge`, `WorldDAG`), hash-chained audit log, crypto primitives, and plugin scaffold. Zero external imports.
- **Ring 2 --- Credit Risk**: 19-variable credit ontology, causal PD model, CECL engine with dynamic horizons, stress testing (CCAR/DFAST), fair lending via causal path decomposition.
- **Ring 2 --- Data Fabric**: CSV/JSON connectors, universal schema validation, data profiling, quality gates, and DAG-based data lineage tracking.
- **Ring 3 --- Intelligence**: Causal discovery via PC and GES, multi-algorithm consensus, d-separation testing, refutation suites, and scenario simulation with Monte Carlo.
- **Ring 4 --- Orchestration**: Multi-provider LLM integration (vLLM, Anthropic, OpenAI) with health checks, model registry, and intelligent task-based routing.
- **Ring 5 --- Experiences**: `orxaq` CLI with `--json` mode, Gamma/Beta/Alpha Observatory skins, live DAG editor, local HTTP API server.
## Quick Start
```bash
pip install orxaq
# Show system status
orxaq status
# Run CECL expected credit loss computation
orxaq credit ecl
# Discover causal structure from data
orxaq discover --variables unemployment_rate,income,credit_score
# Run stress scenario simulation
orxaq simulate --scenario adverse --monte-carlo 1000
# Launch the Observatory dashboard
orxaq serve
# Open the live DAG editor
orxaq edit
# Check LLM provider health
orxaq providers check
```
## CLI Reference
| Command | Description |
|---------|-------------|
| `orxaq status` | Show DAG summary, audit entries, system health |
| `orxaq credit ontology --show` | Display the 19-variable credit ontology |
| `orxaq credit ecl` | Compute CECL expected credit loss |
| `orxaq credit pd` | Predict probability of default |
| `orxaq credit fairness --protected age` | Fair lending decomposition analysis |
| `orxaq discover` | Run causal discovery (PC algorithm) |
| `orxaq discover --consensus` | Run multi-algorithm consensus discovery |
| `orxaq validate` | D-separation validation of the World DAG |
| `orxaq validate --generate` | Generate synthetic validation datasets |
| `orxaq simulate --scenario adverse` | Run scenario shock propagation |
| `orxaq simulate --monte-carlo 1000` | Monte Carlo stress simulation |
| `orxaq fabric ingest data.csv` | Ingest data files |
| `orxaq fabric quality` | Run data quality gates |
| `orxaq fabric profile` | Profile data statistics |
| `orxaq audit --last 10` | Show recent audit log entries |
| `orxaq serve --port 8741` | Launch Observatory dashboard |
| `orxaq edit` | Open live DAG editor in browser |
| `orxaq providers check` | Health-check all LLM providers |
| `orxaq providers list` | List available models across providers |
Add `--json` to any command for machine-readable JSON output:
```bash
orxaq --json credit ecl | jq '.total_ecl'
orxaq --json simulate --scenario adverse | jq '.impact_score'
```
## Deployment
### Docker
```bash
docker build -t orxaq/orxaq:latest .
docker run -p 8741:8741 orxaq/orxaq:latest
```
### Kubernetes (Helm)
```bash
helm install orxaq deploy/helm/ --set image.tag=0.2.0
```
See `deploy/helm/values.yaml` for all configurable parameters.
## Examples
The `examples/` directory contains reference implementations:
| Script | Description |
|--------|-------------|
| `quickstart.py` | End-to-end credit risk workflow in 30 lines |
| `causal_discovery.py` | PC algorithm, consensus discovery, d-separation validation |
| `scenario_simulation.py` | Built-in CCAR scenarios, custom shocks, Monte Carlo |
| `fair_lending.py` | Causal path decomposition for protected attributes |
| `data_quality.py` | Data profiling, quality gates, audit trail |
```bash
python examples/quickstart.py
```
## Development
```bash
git clone https://github.com/Orxaq/orxaq.git
cd orxaq
pip install -e ".[dev]"
make check
```
See [CONTRIBUTING.md](CONTRIBUTING.md) for the full development workflow, and [SECURITY.md](SECURITY.md) for our security policy.
## Project Status
**Current: Alpha (v0.2.0)**
- 550 tests, 90% coverage
- 9 modules shipped: Kernel, Credit Risk, Data Fabric, Causal Discovery, Orchestration, Experiences, Enterprise, Validation, CLI
- 25 Architecture Decision Records ([ADR log](docs/adr/README.md))
- Zero external runtime dependencies
- Docker + Helm deployment ready
- Multi-provider LLM integration (vLLM, Anthropic, OpenAI)
See [CHANGELOG.md](CHANGELOG.md) for release history.
## License
Apache 2.0 --- see [LICENSE](LICENSE).
| text/markdown | Steven Devisch | null | null | null | null | CCAR, CECL, DFAST, causal-inference, credit-risk, dag | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Financial and Insurance Industry",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering... | [] | null | null | >=3.11 | [] | [] | [] | [
"mypy>=1.10; extra == \"dev\"",
"pre-commit>=3.7; extra == \"dev\"",
"pytest-benchmark>=4.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest-xdist>=3.5; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"mkdocs-material>=9.5; extra == \"docs\"",
"mkdo... | [] | [] | [] | [
"Homepage, https://github.com/Orxaq/orxaq",
"Documentation, https://orxaq.github.io/orxaq/",
"Repository, https://github.com/Orxaq/orxaq",
"Issues, https://github.com/Orxaq/orxaq/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T14:39:51.595421 | orxaq-0.2.0.tar.gz | 554,895 | a5/65/4714569e3685e9119eb4583025d59075e387d783b59ad8dabcf8fe0f7d22/orxaq-0.2.0.tar.gz | source | sdist | null | false | 02e386e66a8d40094aaf97cdfdf4bf7b | b4ac651f9ec7c1c0c3bf3de59182d363906d820ff8e88eb28b494da756c14c62 | a5654714569e3685e9119eb4583025d59075e387d783b59ad8dabcf8fe0f7d22 | Apache-2.0 | [
"LICENSE"
] | 246 |
2.4 | chuk-artifacts | 0.11.1 | Chuk Artifacts provides a modular artifact storage system that works seamlessly across multiple storage backends (memory, filesystem, AWS S3, IBM Cloud Object Storage) with Redis or memory-based metadata caching and strict session-based security. | # CHUK Artifacts
> **Unified VFS-backed artifact and workspace storage with scope-based isolation—built for AI apps and MCP servers**
[](https://pypi.org/project/chuk-artifacts/)
[](https://pypi.org/project/chuk-artifacts/)
[](#testing)
[](#testing)
[](https://opensource.org/licenses/Apache-2.0)
[](https://docs.python.org/3/library/asyncio.html)
CHUK Artifacts provides a **unified namespace architecture** where everything—blobs (artifacts) and workspaces (file collections)—is VFS-backed. Store ephemeral session files, persistent user projects, and shared resources with automatic access control, checkpoints, and a clean API that works the same for single files and entire directory trees.
## 🎯 Everything is VFS
The v0.9 architecture unifies blobs and workspaces under a single API:
- **Blobs** = Single-file VFS-backed namespaces (artifacts, documents, data)
- **Workspaces** = Multi-file VFS-backed namespaces (projects, collections, repos)
- **Same API** for both types (only the `type` parameter differs)
- **Same features** for both (checkpoints, scoping, VFS access, metadata)
### 60-Second Tour
```python
from chuk_artifacts import ArtifactStore, NamespaceType, StorageScope
async with ArtifactStore() as store:
# Create a blob (single file)
blob = await store.create_namespace(
type=NamespaceType.BLOB,
scope=StorageScope.SESSION
)
await store.write_namespace(blob.namespace_id, data=b"Hello, World!")
# Create a workspace (file tree)
workspace = await store.create_namespace(
type=NamespaceType.WORKSPACE,
name="my-project",
scope=StorageScope.USER,
user_id="alice"
)
# Write files to workspace
await store.write_namespace(workspace.namespace_id, path="/main.py", data=b"print('hello')")
await store.write_namespace(workspace.namespace_id, path="/config.json", data=b'{"version": "1.0"}')
# Get VFS for advanced operations (works for BOTH!)
vfs = store.get_namespace_vfs(workspace.namespace_id)
files = await vfs.ls("/") # ['.workspace', 'main.py', 'config.json']
# Create checkpoint (works for BOTH!)
checkpoint = await store.checkpoint_namespace(workspace.namespace_id, name="v1.0")
```
**One API. Two types. Zero complexity.**
---
## 📦 CHUK Stack Integration
CHUK Artifacts is the **unified storage substrate for the entire CHUK AI stack**:
```
chuk-ai-planner → uses artifacts as workspaces for multi-step plans
chuk-mcp-server → exposes artifacts as remote filesystems via MCP
chuk-virtual-fs → underlying filesystem engine for all namespaces
chuk-sessions → session-based scope isolation for namespaces
```
**Why this matters:**
- **Consistent storage** across all CHUK components
- **Unified access patterns** for AI tools, planners, and MCP servers
- **Automatic isolation** prevents cross-session data leakage
- **Reliable** from development to deployment
---
## Table of Contents
- [Why This Exists](#why-this-exists)
- [Architecture](#architecture)
- [Install](#install)
- [Quick Start](#quick-start)
- [Core Concepts](#core-concepts)
- [Namespaces](#namespaces)
- [Storage Scopes](#storage-scopes)
- [Grid Architecture](#grid-architecture)
- [API Reference](#api-reference)
- [VFS Operations](#vfs-operations)
- [Examples](#examples)
- [Advanced Features](#advanced-features)
- [Legacy Compatibility](#legacy-compatibility)
- [Configuration](#configuration)
- [Testing](#testing)
---
## Why This Exists
Most platforms offer object storage (S3, filesystem)—but not a **unified namespace architecture** with **automatic access control**.
**CHUK Artifacts provides:**
- ✅ **Unified API** - Same code for single files (blobs) and file trees (workspaces)
- ✅ **Three storage scopes** - SESSION (ephemeral), USER (persistent), SANDBOX (shared)
- ✅ **VFS-backed** - Full filesystem operations on all namespaces
- ✅ **Checkpoints** - Snapshot and restore for both blobs and workspaces
- ✅ **Grid architecture** - Predictable, auditable storage organization
- ✅ **Access control** - Automatic scope-based isolation
- ✅ **Provider-agnostic** - Memory, Filesystem, SQLite, S3—same API
- ✅ **Async-first** - Built for FastAPI, MCP servers, modern Python
**Use cases:**
- 📝 AI chat applications (session artifacts + user documents)
- 🔧 MCP servers (tool workspaces + shared templates)
- 🚀 CI/CD systems (build artifacts + project workspaces)
- 📊 Data platforms (user datasets + shared libraries)
### Why not S3 / Filesystem / SQLite directly?
**What you get with raw storage:**
- S3 → objects (not namespaces)
- Filesystem → files (not isolated storage units)
- SQLite → durability (not structured filesystem trees)
**What CHUK Artifacts adds:**
| Feature | S3 Alone | Filesystem Alone | CHUK Artifacts |
|---------|----------|------------------|----------------|
| Namespace abstraction | ❌ | ❌ | ✅ |
| Scope-based isolation | ❌ | ❌ | ✅ |
| Unified API across backends | ❌ | ❌ | ✅ |
| Checkpoints/snapshots | ❌ | ❌ | ✅ |
| Grid path organization | Manual | Manual | Automatic |
| VFS operations | ❌ | Partial | ✅ Full |
| Session lifecycle | Manual | Manual | Automatic |
**CHUK Artifacts provides:**
- **VFS** + **scopes** + **namespaces** + **checkpoints** + **unified API** + **grid paths**
This is fundamentally more powerful than raw storage.
---
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ Your Application │
└────────────────────────────┬────────────────────────────────────┘
│
│ create_namespace(type=BLOB|WORKSPACE)
│ write_namespace(), read_namespace()
│ checkpoint_namespace(), restore_namespace()
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ ArtifactStore │
│ (Unified Namespace Management) │
│ │
│ • Manages both BLOB and WORKSPACE namespaces │
│ • Enforces scope-based access control │
│ • Provides VFS access to all namespaces │
│ • Handles checkpoints and restoration │
└──────────┬────────────────────────────────────┬─────────────────┘
│ │
│ session management │ VFS operations
▼ ▼
┌────────────┐ ┌─────────────────────────┐
│ Sessions │ │ chuk-virtual-fs │
│ (Memory/ │ │ (Unified VFS Layer) │
│ Redis) │ │ │
└────────────┘ │ • ls(), mkdir(), rm() │
│ • cp(), mv(), find() │
│ • Metadata management │
│ • Batch operations │
└────────┬────────────────┘
│
│ provider calls
▼
┌────────────────────────────┐
│ Storage Providers │
│ │
│ Memory │ Filesystem │ S3 │ │
│ SQLite │
└─────────────┬───────────────┘
│
▼
grid/{sandbox}/{scope}/{namespace_id}/
```
### Key Architectural Principles
1. **Everything is VFS** - Both blobs and workspaces are VFS-backed
2. **Unified API** - One set of methods for all namespace types
3. **Scope-based isolation** - SESSION, USER, and SANDBOX scopes
4. **Grid organization** - Predictable, auditable storage paths
5. **Provider-agnostic** - Swap storage backends via configuration
---
## Install
```bash
pip install chuk-artifacts
```
**Dependencies:**
- `chuk-virtual-fs` - VFS layer (automatically installed)
- `chuk-sessions` - Session management (automatically installed)
**Optional:**
- `redis` - For Redis session provider
- `boto3` - For S3 storage backend
- `ibm-cos-sdk` - For IBM Cloud Object Storage
---
## Quick Start
### 1. Create and Use a Blob Namespace
```python
from chuk_artifacts import ArtifactStore, NamespaceType, StorageScope
store = ArtifactStore()
# Create a blob namespace (single file)
blob = await store.create_namespace(
type=NamespaceType.BLOB,
scope=StorageScope.SESSION
)
# Write data to the blob
await store.write_namespace(blob.namespace_id, data=b"My important data")
# Read data back
data = await store.read_namespace(blob.namespace_id)
print(data) # b"My important data"
```
### 2. Create and Use a Workspace Namespace
```python
# Create a workspace namespace (file tree)
workspace = await store.create_namespace(
type=NamespaceType.WORKSPACE,
name="my-project",
scope=StorageScope.USER,
user_id="alice"
)
# Write multiple files
await store.write_namespace(workspace.namespace_id, path="/README.md", data=b"# My Project")
await store.write_namespace(workspace.namespace_id, path="/src/main.py", data=b"print('hello')")
# Get VFS for advanced operations
vfs = store.get_namespace_vfs(workspace.namespace_id)
# List files
files = await vfs.ls("/") # ['.workspace', 'README.md', 'src']
src_files = await vfs.ls("/src") # ['main.py']
# Copy files
await vfs.cp("/src/main.py", "/src/backup.py")
# Search for files
python_files = await vfs.find(pattern="*.py", path="/", recursive=True)
```
### 3. Use Checkpoints (Works for Both!)
```python
# Create a checkpoint
checkpoint = await store.checkpoint_namespace(
workspace.namespace_id,
name="initial-version",
description="First working version"
)
# Make changes
await store.write_namespace(workspace.namespace_id, path="/README.md", data=b"# Updated")
# Restore from checkpoint
await store.restore_namespace(workspace.namespace_id, checkpoint.checkpoint_id)
```
---
## Core Concepts
### Namespaces
A **namespace** is a VFS-backed storage unit. There are two types:
| Type | Description | Use Cases |
|------|-------------|-----------|
| **BLOB** | Single file at `/_data` | Artifacts, documents, data files, caches |
| **WORKSPACE** | Full file tree | Projects, collections, code repos, datasets |
**Both types:**
- Use the same unified API
- Support checkpoints
- Have VFS access
- Support all three scopes
### Storage Scopes
Every namespace has a **scope** that determines its lifecycle and access:
| Scope | Lifecycle | Access | Grid Path | Use Cases |
|-------|-----------|--------|-----------|-----------|
| **SESSION** | Ephemeral (session lifetime) | Same session only | `grid/{sandbox}/sess-{session_id}/{ns_id}` | Temporary files, caches, current work |
| **USER** | Persistent | Same user only | `grid/{sandbox}/user-{user_id}/{ns_id}` | User projects, personal docs, settings |
| **SANDBOX** | Persistent | All users | `grid/{sandbox}/shared/{ns_id}` | Templates, shared libraries, documentation |
**Example:**
```python
# Session-scoped (ephemeral)
temp_blob = await store.create_namespace(
type=NamespaceType.BLOB,
scope=StorageScope.SESSION
)
# User-scoped (persistent)
user_project = await store.create_namespace(
type=NamespaceType.WORKSPACE,
name="my-docs",
scope=StorageScope.USER,
user_id="alice"
)
# Sandbox-scoped (shared)
shared_templates = await store.create_namespace(
type=NamespaceType.WORKSPACE,
name="templates",
scope=StorageScope.SANDBOX
)
```
### Grid Architecture
All namespaces are organized in a **grid** structure:
```
grid/
├── {sandbox_id}/
│ ├── sess-{session_id}/ # SESSION scope
│ │ ├── {namespace_id}/ # Blob or workspace
│ │ │ ├── _data # For blobs
│ │ │ ├── _meta.json # For blobs
│ │ │ ├── file1.txt # For workspaces
│ │ │ └── ...
│ ├── user-{user_id}/ # USER scope
│ │ └── {namespace_id}/
│ └── shared/ # SANDBOX scope
│ └── {namespace_id}/
```
**Benefits:**
- Predictable paths
- Easy auditing
- Clear isolation
- Efficient listing
### Features Matrix
Everything works for both namespace types across all scopes:
| Feature | BLOB | WORKSPACE | SESSION | USER | SANDBOX |
|---------|------|-----------|---------|------|---------|
| VFS access | ✅ | ✅ | ✅ | ✅ | ✅ |
| Checkpoints/restore | ✅ | ✅ | ✅ | ✅ | ✅ |
| Metadata (custom) | ✅ | ✅ | ✅ | ✅ | ✅ |
| Batch operations | ✅ | ✅ | ✅ | ✅ | ✅ |
| Search/find | ✅ | ✅ | ✅ | ✅ | ✅ |
| Grid placement | Auto | Auto | Auto | Auto | Auto |
| Access control | Auto | Auto | Auto | Auto | Auto |
| TTL expiration | ✅ | ✅ | ✅ | ❌ | ❌ |
**Key insight:** The unified architecture means you get **full feature parity** regardless of namespace type or scope.
---
## API Reference
### Core Namespace Operations
```python
# Create namespace
namespace = await store.create_namespace(
type: NamespaceType, # BLOB or WORKSPACE
scope: StorageScope, # SESSION, USER, or SANDBOX
name: str | None = None, # Optional name (workspaces only)
user_id: str | None = None, # Required for USER scope
ttl_hours: int | None = None, # Session TTL (SESSION scope only)
provider_type: str = "vfs-memory", # VFS provider
provider_config: dict | None = None # Provider configuration
) -> NamespaceInfo
# Write data
await store.write_namespace(
namespace_id: str,
data: bytes,
path: str | None = None # Required for workspaces, optional for blobs
)
# Read data
data: bytes = await store.read_namespace(
namespace_id: str,
path: str | None = None # Required for workspaces, optional for blobs
)
# Get VFS access
vfs: AsyncVirtualFileSystem = store.get_namespace_vfs(namespace_id: str)
# List namespaces
namespaces: list[NamespaceInfo] = store.list_namespaces(
session_id: str | None = None,
user_id: str | None = None,
type: NamespaceType | None = None
)
# Destroy namespace
await store.destroy_namespace(namespace_id: str)
```
### Checkpoint Operations
```python
# Create checkpoint
checkpoint: CheckpointInfo = await store.checkpoint_namespace(
namespace_id: str,
name: str,
description: str | None = None
)
# List checkpoints
checkpoints: list[CheckpointInfo] = await store.list_checkpoints(
namespace_id: str
)
# Restore from checkpoint
await store.restore_namespace(
namespace_id: str,
checkpoint_id: str
)
# Delete checkpoint
await store.delete_checkpoint(
namespace_id: str,
checkpoint_id: str
)
```
---
## VFS Operations
All namespaces provide full VFS access:
```python
vfs = store.get_namespace_vfs(namespace_id)
# File operations
await vfs.write_file(path, data)
data = await vfs.read_file(path)
await vfs.rm(path)
await vfs.cp(src, dst)
await vfs.mv(src, dst)
exists = await vfs.exists(path)
# Directory operations
await vfs.mkdir(path)
await vfs.rmdir(path)
await vfs.cd(path)
files = await vfs.ls(path)
is_dir = await vfs.is_dir(path)
is_file = await vfs.is_file(path)
# Metadata
await vfs.set_metadata(path, metadata)
metadata = await vfs.get_metadata(path)
node_info = await vfs.get_node_info(path)
# Search
results = await vfs.find(pattern="*.py", path="/", recursive=True)
# Batch operations
await vfs.batch_create_files(file_specs)
data_dict = await vfs.batch_read_files(paths)
await vfs.batch_write_files(file_data)
await vfs.batch_delete_paths(paths)
# Text/Binary
await vfs.write_text(path, text, encoding="utf-8")
text = await vfs.read_text(path, encoding="utf-8")
await vfs.write_binary(path, data)
data = await vfs.read_binary(path)
# Stats
stats = await vfs.get_storage_stats()
provider = await vfs.get_provider_name()
```
See [examples/05_advanced_vfs_features.py](examples/05_advanced_vfs_features.py) for comprehensive VFS examples.
---
## Examples
We provide **9 comprehensive examples** covering all features:
1. **[00_quick_start.py](examples/00_quick_start.py)** - Quick introduction to unified API
2. **[01_blob_namespace_basics.py](examples/01_blob_namespace_basics.py)** - Blob operations
3. **[02_workspace_namespace_basics.py](examples/02_workspace_namespace_basics.py)** - Workspace operations
4. **[03_unified_everything_is_vfs.py](examples/03_unified_everything_is_vfs.py)** - Unified architecture
5. **[04_legacy_api_compatibility.py](examples/04_legacy_api_compatibility.py)** - Legacy compatibility
6. **[05_advanced_vfs_features.py](examples/05_advanced_vfs_features.py)** - Advanced VFS features
7. **[06_session_isolation.py](examples/06_session_isolation.py)** - Session isolation and scoping
8. **[07_large_files_streaming.py](examples/07_large_files_streaming.py)** - Large file handling
9. **[08_batch_operations.py](examples/08_batch_operations.py)** - Batch operations
Run any example:
```bash
python examples/00_quick_start.py
python examples/02_workspace_namespace_basics.py
python examples/05_advanced_vfs_features.py
```
See [examples/README.md](examples/README.md) for complete documentation.
---
## Advanced Features
### Checkpoints
Create snapshots of any namespace (blob or workspace):
```python
# Create checkpoint
cp1 = await store.checkpoint_namespace(workspace.namespace_id, name="v1.0")
# Make changes...
await store.write_namespace(workspace.namespace_id, path="/new_file.txt", data=b"new")
# Restore to checkpoint
await store.restore_namespace(workspace.namespace_id, cp1.checkpoint_id)
```
### Batch Operations
Process multiple files efficiently:
```python
vfs = store.get_namespace_vfs(workspace.namespace_id)
# Batch create with metadata
file_specs = [
{"path": "/file1.txt", "content": b"data1", "metadata": {"tag": "important"}},
{"path": "/file2.txt", "content": b"data2", "metadata": {"tag": "draft"}},
]
await vfs.batch_create_files(file_specs)
# Batch read
data = await vfs.batch_read_files(["/file1.txt", "/file2.txt"])
# Batch delete
await vfs.batch_delete_paths(["/file1.txt", "/file2.txt"])
```
### Metadata Management
Attach rich metadata to files:
```python
await vfs.set_metadata("/document.pdf", {
"author": "Alice",
"tags": ["important", "reviewed"],
"custom": {"project_id": 123}
})
metadata = await vfs.get_metadata("/document.pdf")
```
### Search and Find
Find files by pattern:
```python
# Find all Python files
py_files = await vfs.find(pattern="*.py", path="/", recursive=True)
# Find specific file
results = await vfs.find(pattern="config.json", path="/")
```
---
## Legacy Compatibility
The legacy `store()` and `retrieve()` APIs still work:
```python
# Legacy API (still supported)
artifact_id = await store.store(
b"data",
mime="text/plain",
summary="My artifact"
)
data = await store.retrieve(artifact_id)
# But unified API is recommended for new code
blob = await store.create_namespace(type=NamespaceType.BLOB)
await store.write_namespace(blob.namespace_id, data=b"data")
data = await store.read_namespace(blob.namespace_id)
```
See [examples/04_legacy_api_compatibility.py](examples/04_legacy_api_compatibility.py) for details.
---
## Configuration
### 🏭 Deployment Patterns
Choose the right storage backend for your use case:
**Development / Testing:**
```python
# Memory provider - instant, ephemeral
store = ArtifactStore() # Uses vfs-memory by default
```
**Small Deployments / Edge:**
```python
# Filesystem provider with container volumes
export ARTIFACT_PROVIDER=vfs-filesystem
export VFS_ROOT_PATH=/data/artifacts
# Good for: Docker containers, edge devices, local-first apps
```
**Portable / Embedded:**
```python
# SQLite provider - single file, queryable
export ARTIFACT_PROVIDER=vfs-sqlite
export SQLITE_DB_PATH=/data/artifacts.db
# Good for: Desktop apps, portable storage, offline-first
```
**Cloud / Distributed:**
```python
# S3 provider with Redis standalone
export ARTIFACT_PROVIDER=vfs-s3
export SESSION_PROVIDER=redis
export AWS_S3_BUCKET=my-artifacts
export SESSION_REDIS_URL=redis://prod-redis:6379/0
# Good for: Multi-tenant SaaS, distributed systems, high scale
```
**High-Availability:**
```python
# S3 provider with Redis Cluster
export ARTIFACT_PROVIDER=vfs-s3
export SESSION_PROVIDER=redis
export AWS_S3_BUCKET=my-artifacts
export SESSION_REDIS_URL=redis://node1:7000,node2:7001,node3:7002
# Good for: Mission-critical systems, zero-downtime requirements, large scale
```
**Hybrid Deployments:**
```python
# Different scopes, different backends
# - SESSION: vfs-memory (ephemeral, fast)
# - USER: vfs-filesystem (persistent, local)
# - SANDBOX: vfs-s3 (persistent, shared, cloud)
# Configure per namespace:
await store.create_namespace(
type=NamespaceType.BLOB,
scope=StorageScope.SESSION,
provider_type="vfs-memory" # Fast ephemeral
)
await store.create_namespace(
type=NamespaceType.WORKSPACE,
scope=StorageScope.USER,
provider_type="vfs-s3" # Persistent cloud
)
```
### Storage Providers
Configure via environment variables:
```bash
# Memory (default, for development)
export ARTIFACT_PROVIDER=vfs-memory
# Filesystem (for local persistence)
export ARTIFACT_PROVIDER=vfs-filesystem
# SQLite (for portable database)
export ARTIFACT_PROVIDER=vfs-sqlite
# S3 (persistent cloud storage)
export ARTIFACT_PROVIDER=vfs-s3
export AWS_ACCESS_KEY_ID=your_key
export AWS_SECRET_ACCESS_KEY=your_secret
export AWS_DEFAULT_REGION=us-east-1
```
### Session Providers
```bash
# Memory (default)
export SESSION_PROVIDER=memory
# Redis Standalone (persistent)
export SESSION_PROVIDER=redis
export SESSION_REDIS_URL=redis://localhost:6379/0
# Redis Cluster (high-availability - auto-detected)
export SESSION_PROVIDER=redis
export SESSION_REDIS_URL=redis://node1:7000,node2:7001,node3:7002
# Redis with TLS
export SESSION_REDIS_URL=rediss://localhost:6380/0
export REDIS_TLS_INSECURE=1 # Set to 1 to skip cert verification (dev only)
```
**Redis Cluster Support** (chuk-sessions ≥0.5.0):
- Automatically detected from comma-separated URL format
- High availability with automatic failover
- Horizontal scaling across multiple nodes
- Robust with proper error handling
### Programmatic Configuration
```python
from chuk_artifacts.config import configure_memory, configure_s3, configure_redis_session
# Development
config = configure_memory()
store = ArtifactStore(**config)
# S3 with Redis standalone
config = configure_s3(
bucket="my-artifacts",
region="us-east-1",
session_provider="redis"
)
configure_redis_session("redis://localhost:6379/0")
store = ArtifactStore(**config)
# S3 with Redis Cluster
config = configure_s3(
bucket="my-artifacts",
region="us-east-1",
session_provider="redis"
)
configure_redis_session("redis://node1:7000,node2:7001,node3:7002")
store = ArtifactStore(**config)
```
---
## ⚡ Performance
CHUK Artifacts is designed for high performance:
**Memory Provider:**
- Nanosecond to microsecond operations
- Zero I/O overhead
- Perfect for testing and development
**Filesystem Provider:**
- Depends on OS filesystem (typically microseconds to milliseconds)
- Uses async I/O for non-blocking operations
- Good for local deployments
**S3 Provider:**
- Uses streaming + zero-copy writes
- Parallel uploads for large files
- Battle-tested at scale
**SQLite Provider:**
- Fast for small to medium workspaces
- Queryable storage with indexes
- Good for embedded/desktop apps
**Checkpoints:**
- Use copy-on-write semantics where supported
- Snapshot-based for minimal overhead
- Incremental when possible
**VFS Layer:**
- Batch operations reduce round trips
- Streaming for large files (no memory buffering)
- Provider-specific optimizations
**Benchmarks** (from examples):
- Batch operations: **1.7x faster** than individual operations
- Large file writes: **577 MB/s** (memory provider)
- Large file reads: **1103 MB/s** (memory provider)
- Batch dataset creation: **250+ files/sec**
See [examples/08_batch_operations.py](examples/08_batch_operations.py) and [examples/07_large_files_streaming.py](examples/07_large_files_streaming.py) for detailed benchmarks.
---
## Testing
CHUK Artifacts includes 778 passing tests with 92% coverage:
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=chuk_artifacts --cov-report=html
# Run specific test file
pytest tests/test_namespace.py -v
```
**Memory provider** makes testing instant:
```python
import pytest
from chuk_artifacts import ArtifactStore, NamespaceType, StorageScope
@pytest.mark.asyncio
async def test_my_feature():
store = ArtifactStore() # Uses memory provider by default
blob = await store.create_namespace(
type=NamespaceType.BLOB,
scope=StorageScope.SESSION
)
await store.write_namespace(blob.namespace_id, data=b"test")
data = await store.read_namespace(blob.namespace_id)
assert data == b"test"
```
---
## Documentation
- **[Examples](examples/README.md)** - 9 comprehensive examples
- **[VFS API Reference](examples/VFS_API_REFERENCE.md)** - Quick VFS API guide
---
## License
Apache 2.0 - see [LICENSE](LICENSE) for details.
---
## Contributing
Contributions welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass (`pytest`)
5. Run linters (`make check`)
6. Submit a pull request
---
## Support
- **Issues**: [GitHub Issues](https://github.com/chuk-ai/chuk-artifacts/issues)
- **Documentation**: [examples/](examples/)
- **Discussions**: [GitHub Discussions](https://github.com/chuk-ai/chuk-artifacts/discussions)
---
**Built with ❤️ for AI applications and MCP servers**
| text/markdown | null | null | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.10.6",
"pyyaml>=6.0.2",
"aioboto3>=14.3.0",
"redis>=6.2.0",
"chuk-sessions>=0.6.1",
"chuk-virtual-fs>=0.5.1",
"dotenv>=0.9.9",
"asyncio>=3.4.3",
"pytest>=8.3.5; extra == \"dev\"",
"ruff>=0.4.6; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.11 | 2026-02-18T14:39:45.971894 | chuk_artifacts-0.11.1.tar.gz | 154,734 | 93/22/639f641df8a872153b239d8d06c703d73f53e8c380255ccd23a582d6085d/chuk_artifacts-0.11.1.tar.gz | source | sdist | null | false | 353f8f9c6dd576591b72e83ac859b4cf | 8fbc1cd30c59b4e43861f858f1970f8d35d8c05407ccc5a61b212b04908a0912 | 9322639f641df8a872153b239d8d06c703d73f53e8c380255ccd23a582d6085d | null | [
"LICENSE"
] | 644 |
2.4 | inbq | 0.17.2 | A library for parsing BigQuery queries and extracting schema-aware, column-level lineage. | # inbq
A library for parsing BigQuery queries and extracting schema-aware, column-level lineage.
### Features
- Parse BigQuery queries into well-structured ASTs with [easy-to-navigate nodes](#ast-navigation).
- Extract schema-aware, [column-level lineage](#concepts).
- Trace data flow through nested structs and arrays.
- Capture [referenced columns](#referenced-columns) and the specific query components (e.g., select, where, join) they appear in.
- Process both single and multi-statement queries with procedural language constructs.
- Built for speed and efficiency, with lightweight Python bindings that add minimal overhead.
## Python
### Install
`pip install inbq`
### Example (Pipeline API)
```python
import inbq
catalog = {"schema_objects": []}
def add_table(name: str, columns: list[tuple[str, str]]) -> None:
catalog["schema_objects"].append({
"name": name,
"kind": {
"table": {
"columns": [{"name": name, "dtype": dtype} for name, dtype in columns]
}
}
})
add_table("project.dataset.out", [("id", "int64"), ("val", "float64")])
add_table("project.dataset.t1", [("id", "int64"), ("x", "float64")])
add_table("project.dataset.t2", [("id", "int64"), ("s", "struct<source string, x float64>")])
query = """
declare default_val float64 default (select min(val) from project.dataset.out);
insert into `project.dataset.out`
select
id,
if(x is null or s.x is null, default_val, x + s.x)
from `project.dataset.t1` inner join `project.dataset.t2` using (id)
where s.source = "baz";
"""
pipeline = (
inbq.Pipeline()
.config(
# If the `pipeline` is configured with `raise_exception_on_error=False`,
# any error that occurs during parsing or lineage extraction is
# captured and returned as a `inbq.PipelineError`
raise_exception_on_error=False,
# No effect with only one query (may provide a speedup with multiple queries)
parallel=True,
)
.parse()
.extract_lineage(catalog=catalog, include_raw=False)
)
sqls = [query]
pipeline_output = inbq.run_pipeline(sqls, pipeline=pipeline)
# This loop will iterate just once as we have only one query
for i, (ast, output_lineage) in enumerate(
zip(pipeline_output.asts, pipeline_output.lineages)
):
assert isinstance(ast, inbq.ast_nodes.Ast), (
f"Could not parse query `{sqls[i][:20]}...` due to: {ast.error}"
)
print(f"{ast=}")
assert isinstance(output_lineage, inbq.lineage.Lineage), (
f"Could not extract lineage from query `{sqls[i][:20]}...` due to: {output_lineage.error}"
)
print("\nLineage:")
for lin_obj in output_lineage.lineage.objects:
print("Inputs:")
for lin_node in lin_obj.nodes:
print(
f"{lin_obj.name}->{lin_node.name} <- {[f'{input_node.obj_name}->{input_node.node_name}' for input_node in lin_node.inputs]}"
)
print("\nSide inputs:")
for lin_node in lin_obj.nodes:
print(
f"""{lin_obj.name}->{lin_node.name} <- {[f"{input_node.obj_name}->{input_node.node_name} @ {','.join(input_node.sides)}" for input_node in lin_node.side_inputs]}"""
)
print("\nReferenced columns:")
for ref_obj in output_lineage.referenced_columns.objects:
for ref_node in ref_obj.nodes:
print(
f"{ref_obj.name}->{ref_node.name} referenced in {ref_node.referenced_in}"
)
# Prints:
# ast=Ast(...)
# Lineage:
# Inputs:
# project.dataset.out->id <- ['project.dataset.t2->id', 'project.dataset.t1->id']
# project.dataset.out->val <- ['project.dataset.t2->s.x', 'project.dataset.t1->x', 'project.dataset.out->val']
#
# Side inputs:
# project.dataset.out->id <- ['project.dataset.t2->s.source @ where', 'project.dataset.t2->id @ join', 'project.dataset.t1->id @ join']
# project.dataset.out->val <- ['project.dataset.t2->s.source @ where', 'project.dataset.t2->id @ join', 'project.dataset.t1->id @ join']
#
# Referenced columns:
# project.dataset.out->val referenced in ['default_var', 'select']
# project.dataset.t1->id referenced in ['join', 'select']
# project.dataset.t1->x referenced in ['select']
# project.dataset.t2->id referenced in ['join', 'select']
# project.dataset.t2->s.x referenced in ['select']
# project.dataset.t2->s.source referenced in ['where']
```
**Note:** What happens if you remove the insert and just keep the select in the query? `inbq` is designed to handle this gracefully. It will return the lineage for the last `SELECT` statement, but since the destination is no longer explicit, the output object (an anonymous query) will be assigned an anonymous identifier (e.g., `!anon_4`). Try it yourself and see how the output changes!
To learn more about the output elements (Lineage, Side Inputs, and Referenced Columns), please see the [Concepts](#concepts) section.
### Example (Individual Functions)
If you don't like the Pipeline API, you can use these functions instead:
#### `parse_sql` and `parse_sql_to_dict`
Parse a single SQL query:
```python
ast = inbq.parse_sql(query)
# You can also get a dictionary representation of the AST
ast_dict = inbq.parse_sql_to_dict(query)
```
#### `parse_sqls`
Parse multiple SQL queries in parallel:
```python
sqls = [query]
asts = inbq.parse_sqls(sqls, parallel=True)
```
#### `parse_sqls_and_extract_lineage`
Parse SQLs and extract lineage in one go:
```python
asts, lineages = inbq.parse_sqls_and_extract_lineage(
sqls=[query],
catalog=catalog,
parallel=True
)
```
### AST Navigation
```python
import inbq
import inbq.ast_nodes as ast_nodes
sql = """
UPDATE proj.dataset.t1
SET quantity = quantity - 10,
supply_constrained = DEFAULT
WHERE product like '%washer%';
UPDATE proj.dataset.t2
SET quantity = quantity - 10,
WHERE product like '%console%';
"""
ast = inbq.parse_sql(sql)
# Example: find updated tables and columns
for node in ast.find_all(
ast_nodes.UpdateStatement,
):
match node:
case ast_nodes.UpdateStatement(
table=table,
alias=_,
update_items=update_items,
from_=_,
where=_,
):
print(f"Found updated table: {table.name}. Updated columns:")
for update_item in update_items:
for node in update_item.column.find_all(
ast_nodes.Identifier,
ast_nodes.QuotedIdentifier
):
match node:
case ast_nodes.Identifier(name=name) | ast_nodes.QuotedIdentifier(name=name):
print(f"- {name}")
# Example: find `like` filters
for node in ast.find_all(
ast_nodes.BinaryExpr,
):
match node:
case ast_nodes.BinaryExpr(
left=left,
operator=ast_nodes.BinaryOperator_Like(),
right=right,
):
print(left, "like", right)
```
#### Variants and Variant Types in Python
The AST nodes in Python are auto-generated dataclasses from their Rust definitions.
For instance, a Rust enum `Expr` might be defined as:
```rust
pub enum Expr {
// ... more variants here ...
Binary(BinaryExpr),
Identifier(Identifier),
// ... more variants here ...
}
```
In Python, this translates to corresponding classes like `Expr_Binary(vty=BinaryExpr)`, `Expr_Identifier(vty=Identifier)`, etc.
The `vty` attribute stands for "variant type" (unit variants do not have a `vty` attribute).
You can search for any type of object using `.find_all()`, whether it's the variant (e.g., `Expr_Identifier`) or the concrete variant type (e.g., `Identifier`).
## Rust
### Install
`cargo add inbq`
### Example
```rust
use inbq::{
lineage::{
catalog::{Catalog, Column, SchemaObject, SchemaObjectKind},
extract_lineage,
},
parser::Parser,
scanner::Scanner,
};
fn column(name: &str, dtype: &str) -> Column {
Column {
name: name.to_owned(),
dtype: dtype.to_owned(),
}
}
fn main() -> anyhow::Result<()> {
env_logger::init();
let sql = r#"
declare default_val float64 default (select min(val) from project.dataset.out);
insert into `project.dataset.out`
select
id,
if(x is null or s.x is null, default_val, x + s.x)
from `project.dataset.t1` inner join `project.dataset.t2` using (id)
where s.source = "baz";
"#;
let mut scanner = Scanner::new(sql);
scanner.scan()?;
let mut parser = Parser::new(scanner.tokens());
let ast = parser.parse()?;
println!("Syntax Tree: {:?}", ast);
let data_catalog = Catalog {
schema_objects: vec![
SchemaObject {
name: "project.dataset.out".to_owned(),
kind: SchemaObjectKind::Table {
columns: vec![column("id", "int64"), column("val", "int64")],
},
},
SchemaObject {
name: "project.dataset.t1".to_owned(),
kind: SchemaObjectKind::Table {
columns: vec![column("id", "int64"), column("x", "float64")],
},
},
SchemaObject {
name: "project.dataset.t2".to_owned(),
kind: SchemaObjectKind::Table {
columns: vec![
column("id", "int64"),
column("s", "struct<source string, x float64>"),
],
},
},
],
};
let lineage = extract_lineage(&[&ast], &data_catalog, false, true)
.pop()
.unwrap()?;
println!("\nLineage: {:?}", lineage.lineage);
println!("\nReferenced columns: {:?}", lineage.referenced_columns);
Ok(())
}
```
## Command Line Interface
### Install binary
```bash
cargo install inbq
```
### Extract Lineage
1. Prepare your data catalog: create a JSON file (e.g., [catalog.json](./examples/lineage/catalog.json)) that defines the schema for all tables and views referenced in your SQL queries.
2. Run inbq: pass the catalog file and your [SQL file or directory of multiple SQL files](./examples/lineage/query.sql) to the inbq lineage command.
```bash
inbq extract-lineage \
--pretty \
--catalog ./examples/lineage/catalog.json \
./examples/lineage/query.sql
```
The output is written to stdout.
## Concepts
### Lineage
Column-level lineage tracks how data flows from a destination column back to its original source columns. A destination column's value is derived from its direct input columns, and this process is applied recursively to trace the lineage back to the foundational source columns. For example, in `with tmp as (select a+b as tmp_c from t) select tmp_c as c from t`, the lineage for column `c` traces back to `a` and `b` as its source columns (the source table is `t`).
### Lineage - Side Inputs
Side inputs are columns that indirectly contribute to the final set of output values. As the name implies, they aren't part of the direct `SELECT` list, but are found in the surrounding clauses that shape the result, such as `WHERE`, `JOIN`, `WINDOW`, etc. Side inputs influence is traced recursively. For example, in the query:
```sql
with cte as (select id, c1 from table1 where f1>10)
select c2 as z
from table2 inner join cte using (id)
```
`table1.f1` is a side input to `z` with sides `join` and `where` (`cte.id`, later used in the join condition, is filtered by `table1.f1`). The other two side inputs are `table1.id` with side `join` and `table2.id` with side `join`.
### Referenced Columns
Referenced columns provide a detailed map of where each input column is mentioned within a query. This is the entry point for a column into the query's logic. From this initial reference, the column can then influence other parts of the query indirectly through subsequent operations.
## Limitations
While this library can parse and extract lineage for most BigQuery syntax, there are some current limitations. For example, the pipe (`|`) syntax and the recently introduced `MATCH_RECOGNIZE` clause are not yet supported. Requests and contributions for unsupported features are welcome.
## Contributing
Here's a brief overview of the project's key modules:
- `crates/inbq/src/parser.rs`: contains the hand-written top-down parser.
- `crates/inbq/src/ast.rs`: defines the Abstract Syntax Tree (AST) nodes.
- **Note**: If you add or modify AST nodes here, you must regenerate the corresponding Python nodes. You can do this by running `cargo run --bin inbq_genpy`, which will update `crates/py_inbq/python/inbq/ast_nodes.py`.
- `crates/inbq/src/lineage.rs`: contains the core logic for extracting column-level lineage from the AST.
- `crates/py_inbq/`: this crate exposes the Rust backend as a Python module via PyO3.
- `crates/inbq/tests/`: this directory contains the tests. You can add new test cases for parsing and lineage extraction by editing the `.toml` files:
- `parsing_tests.toml`
- `lineage_tests.toml`
| text/markdown; charset=UTF-8; variant=GFM | null | Lorenzo Pratissoli <pratissolil@gmail.com> | null | null | null | bigquery, sql, parser, data-lineage | [
"Programming Language :: Rust",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | maturin/1.12.2 | 2026-02-18T14:39:30.547746 | inbq-0.17.2-cp314-cp314-win32.whl | 904,992 | b1/93/5d367265cb9f04b7dc934a4cbad12603c889110085ba2a7ec9cc572c8c89/inbq-0.17.2-cp314-cp314-win32.whl | cp314 | bdist_wheel | null | false | 175a2d9935afb10e4216246bdc1666d3 | 8ffc2f49cdfacf972d39a8526ec7e33d369e806bb561d8ead30be4c948f5918a | b1935d367265cb9f04b7dc934a4cbad12603c889110085ba2a7ec9cc572c8c89 | null | [] | 6,678 |
2.4 | english-compiler | 1.10.0 | Compile English pseudocode to executable code via Core IL | # English Compiler
A production-ready compiler that translates English pseudocode into executable code through a deterministic intermediate representation (Core IL).
## Installation
```sh
pip install english-compiler
```
With LLM provider support:
```sh
pip install english-compiler[claude] # Anthropic Claude
pip install english-compiler[openai] # OpenAI GPT
pip install english-compiler[gemini] # Google Gemini
pip install english-compiler[qwen] # Alibaba Qwen
pip install english-compiler[all] # All providers
```
## Project Status
**Core IL v1.10 is stable and production-ready.**
The compiler has successfully compiled and executed real-world algorithms including:
- Array operations (sum, reverse, max)
- Sorting algorithms (bubble sort)
- String processing (bigram frequency)
- Advanced algorithms (Byte Pair Encoding - 596 lines of Core IL)
All tests pass with 100% parity between interpreter, Python, JavaScript, C++, Rust, Go, and WebAssembly code generation.
**Documentation**:
- [STATUS.md](STATUS.md) - Detailed project status and capabilities
- [CHANGELOG.md](CHANGELOG.md) - Version history and changes
- [MIGRATION.md](MIGRATION.md) - Upgrade guide from earlier versions
- [VERSIONING.md](VERSIONING.md) - Version strategy and code hygiene
- [QUICK_REFERENCE.md](QUICK_REFERENCE.md) - Fast reference for Core IL syntax
- [tests/ALGORITHM_TESTS.md](tests/ALGORITHM_TESTS.md) - Algorithm corpus regression tests
## Quick Start
1) Create a source file:
```sh
echo "Print hello world" > hello.txt
```
2) Compile and run:
```sh
english-compiler compile --frontend mock hello.txt
```
Expected output:
```text
Regenerating Core IL for hello.txt using mock
hello world
```
## CLI Usage
### Version
```sh
english-compiler --version
```
### Compile
```sh
english-compiler compile [options] <source.txt>
```
**Options:**
- `--frontend <provider>`: LLM frontend (`mock`, `claude`, `openai`, `gemini`, `qwen`). Auto-detects based on available API keys if not specified.
- `--target <lang>`: Output target (`coreil`, `python`, `javascript`, `cpp`, `rust`, `go`, `wasm`). Default: `coreil`.
- `--optimize`: Run optimization pass (constant folding, dead code elimination) before codegen.
- `--lint`: Run static analysis after compilation.
- `--regen`: Force regeneration even if cache is valid.
- `--freeze`: Fail if regeneration would be required (useful for CI).
**Output structure:**
When compiling `examples/hello.txt`, artifacts are organized into subdirectories:
```
examples/
├── hello.txt # Source file (unchanged)
└── output/
├── coreil/
│ ├── hello.coreil.json # Core IL (always generated)
│ └── hello.lock.json # Cache metadata
├── py/
│ └── hello.py # With --target python
├── js/
│ └── hello.js # With --target javascript
├── cpp/
│ ├── hello.cpp # With --target cpp
│ ├── coreil_runtime.hpp # Runtime header
│ └── json.hpp # JSON library
├── rust/
│ ├── hello.rs # With --target rust
│ └── coreil_runtime.rs # Runtime library
├── go/
│ ├── hello.go # With --target go
│ └── coreil_runtime.go # Runtime library
└── wasm/
├── hello.as.ts # With --target wasm
├── hello.wasm # Compiled binary (if asc available)
└── coreil_runtime.ts # Runtime library
```
**Examples:**
```sh
# Compile with mock frontend (no API key needed)
english-compiler compile --frontend mock examples/hello.txt
# Compile with Claude and generate Python
english-compiler compile --frontend claude --target python examples/hello.txt
# Auto-detect frontend, generate JavaScript
english-compiler compile --target javascript examples/hello.txt
```
### Explain (Reverse Compile)
Generate a human-readable English explanation of a Core IL program:
```sh
english-compiler explain examples/output/coreil/hello.coreil.json
# More detailed output
english-compiler explain --verbose examples/output/coreil/hello.coreil.json
```
### Run an existing Core IL file
```sh
english-compiler run examples/output/coreil/hello.coreil.json
```
### Debug a Core IL file interactively
```sh
english-compiler debug examples/output/coreil/hello.coreil.json
```
Step through statements, inspect variables, and set breakpoints. Commands: `s`tep, `n`ext, `c`ontinue, `v`ars, `p <name>`, `b <index>`, `l`ist, `q`uit, `h`elp.
### Lint (Static Analysis)
```sh
# Lint a Core IL file
english-compiler lint myprogram.coreil.json
# Lint with strict mode (warnings become errors)
english-compiler lint --strict myprogram.coreil.json
# Lint after compilation
english-compiler compile --lint --frontend mock examples/hello.txt
```
**Lint rules:**
- `unused-variable` — Variable declared but never referenced
- `unreachable-code` — Statements after Return/Break/Continue/Throw
- `empty-body` — Control flow with empty body
- `variable-shadowing` — Variable re-declared (should be Assign)
### Configuration
Persistent settings can be stored in a config file so you don't need to specify flags on every command.
```sh
# Set default frontend
english-compiler config set frontend claude
# Set default compilation target
english-compiler config set target python
# Enable error explanations by default
english-compiler config set explain-errors true
# Always force regeneration
english-compiler config set regen true
# Always fail if regeneration required (CI mode)
english-compiler config set freeze true
# View all settings
english-compiler config list
# Show config file path
english-compiler config path
# Reset to defaults
english-compiler config reset
```
**Available settings:**
| Setting | Values | Description |
|---------|--------|-------------|
| `frontend` | `mock`, `claude`, `openai`, `gemini`, `qwen` | Default LLM frontend |
| `target` | `coreil`, `python`, `javascript`, `cpp`, `rust`, `go`, `wasm` | Default compilation target |
| `explain-errors` | `true`, `false` | Enable LLM-powered error explanations |
| `regen` | `true`, `false` | Always force regeneration |
| `freeze` | `true`, `false` | Always fail if regeneration required |
**Config file location:**
- Linux/macOS: `~/.config/english-compiler/config.toml`
- Windows: `~/english-compiler/config.toml`
**Priority:** CLI arguments override config file settings.
## Architecture
The compiler follows a three-stage pipeline:
```
┌─────────────┐
│ English │
│ Pseudocode │
└──────┬──────┘
│
▼
┌──────────────────────────────────────────┐
│ LLM Frontends │
│ Claude | OpenAI | Gemini | Qwen | Mock │
│ (Non-deterministic) │
└──────────────────┬───────────────────────┘
│
▼
┌─────────────┐
│ Core IL │
│ v1.10 │ (Deterministic JSON)
└──────┬──────┘
│
┌──────────┬──────────┼──────────┬──────────┬──────────┐
▼ ▼ ▼ ▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
│Interp. │ │ Python │ │ Java │ │ C++ │ │ Rust │ │ Go │
│ │ │Codegen │ │ Script │ │Codegen │ │Codegen │ │Codegen │
└───┬────┘ └───┬────┘ └───┬────┘ └───┬────┘ └───┬────┘ └───┬────┘
│ │ │ │ │ │
└──────────┴──────────┴──────────┴──────────┴──────────┘
│
▼
Identical Output
(Verified by tests)
```
1. **Frontend (LLM)**: Multiple providers translate English/pseudocode into Core IL JSON
- This is the only non-deterministic step
- Output is cached for reproducibility
2. **Core IL**: A closed, deterministic intermediate representation
- All semantics are explicitly defined
- No extension mechanism or helper functions
- Version 1.9 is the current stable version
3. **Backends**: Deterministic execution
- Interpreter: Direct execution of Core IL
- Python codegen: Transpiles to executable Python
- JavaScript codegen: Transpiles to executable JavaScript
- C++ codegen: Transpiles to C++17
- Rust codegen: Transpiles to Rust (single-file, no Cargo needed)
- Go codegen: Transpiles to Go (single-file with runtime)
- All backends produce identical output (verified by tests)
## Core IL v1.10
Core IL is a complete, closed intermediate representation with explicit primitives for all operations.
**Full specification**: [coreil_v1.md](coreil_v1.md)
**Key features by version**:
| Version | Features |
|---------|----------|
| v1.10 | Switch (pattern matching / case dispatch) |
| v1.9 | ToInt, ToFloat, ToString (type conversions), Go backend, optimizer, explain command |
| v1.8 | Throw, TryCatch (exception handling) |
| v1.7 | Break, Continue (loop control) |
| v1.6 | MethodCall, PropertyGet (Tier 2 OOP), WASM/AssemblyScript backend |
| v1.5 | Slice, Not (unary), negative indexing |
| v1.4 | ExternalCall (Tier 2), expanded string operations, JS/C++ backends |
| v1.3 | JsonParse, JsonStringify, Regex operations |
| v1.2 | Math, MathPow, MathConst |
| v1.1 | Record, Set, Deque, Heap, basic string operations |
| v1.0 | Short-circuit evaluation, Tuple, sealed primitives (frozen) |
**Core v1.0 features** (stable, frozen):
- Expressions: Literal, Var, Binary, Array, Tuple, Map, Index, Length, Get, GetDefault, Keys, Range, Call
- Statements: Let, Assign, SetIndex, Set, Push, Print, If, While, For, ForEach, FuncDef, Return
- Short-circuit evaluation for logical operators (`and`, `or`)
- Runtime type checking with clear error messages
- Recursion support with depth limits
- Dictionary insertion order preservation
## Artifacts
When compiling `foo.txt`, artifacts are organized into an `output/` subdirectory:
```
output/
├── coreil/
│ ├── foo.coreil.json # Core IL (always generated)
│ └── foo.lock.json # Cache metadata
├── py/
│ ├── foo.py # With --target python
│ └── foo.sourcemap.json # Source map (if source_map in Core IL)
├── js/foo.js # With --target javascript
├── cpp/ # With --target cpp
│ ├── foo.cpp
│ ├── coreil_runtime.hpp
│ └── json.hpp
├── rust/ # With --target rust
│ ├── foo.rs
│ └── coreil_runtime.rs
├── go/ # With --target go
│ ├── foo.go
│ └── coreil_runtime.go
└── wasm/ # With --target wasm
├── foo.as.ts
├── foo.wasm
└── coreil_runtime.ts
```
Cache reuse is based on the source hash and Core IL hash.
## Multi-Provider Setup
### Claude (Anthropic)
```sh
pip install english-compiler[claude]
export ANTHROPIC_API_KEY="your_api_key_here"
export ANTHROPIC_MODEL="claude-sonnet-4-5" # optional
export ANTHROPIC_MAX_TOKENS="4096" # optional
english-compiler compile --frontend claude examples/hello.txt
```
### OpenAI
```sh
pip install english-compiler[openai]
export OPENAI_API_KEY="your_api_key_here"
export OPENAI_MODEL="gpt-4o" # optional
english-compiler compile --frontend openai examples/hello.txt
```
### Gemini (Google)
```sh
pip install english-compiler[gemini]
export GOOGLE_API_KEY="your_api_key_here"
export GEMINI_MODEL="gemini-1.5-pro" # optional
english-compiler compile --frontend gemini examples/hello.txt
```
### Qwen (Alibaba)
```sh
pip install english-compiler[qwen]
export DASHSCOPE_API_KEY="your_api_key_here"
export QWEN_MODEL="qwen-max" # optional
english-compiler compile --frontend qwen examples/hello.txt
```
## Code Generation
### Python
```sh
english-compiler compile --target python examples/hello.txt
python examples/output/py/hello.py
```
The generated Python code:
- Uses standard Python 3.10+ syntax and semantics
- Matches interpreter output exactly (verified by parity tests)
- Handles all Core IL v1.5 features
### JavaScript
```sh
english-compiler compile --target javascript examples/hello.txt
node examples/output/js/hello.js
```
The generated JavaScript code:
- Uses modern ES6+ syntax
- Runs in Node.js or browsers
- Matches interpreter output exactly
### C++
```sh
english-compiler compile --target cpp examples/hello.txt
g++ -std=c++17 -I examples/output/cpp -o hello examples/output/cpp/hello.cpp && ./hello
```
The generated C++ code:
- Uses C++17 standard
- Includes runtime headers in the same directory
- Matches interpreter output exactly
### Rust
```sh
english-compiler compile --target rust examples/hello.txt
rustc --edition 2021 examples/output/rust/hello.rs -o hello && ./hello
```
The generated Rust code:
- Uses Rust 2021 edition
- Single-file compilation with `rustc` (no Cargo needed)
- Includes runtime library in the same directory
- Matches interpreter output exactly
### Go
```sh
english-compiler compile --target go examples/hello.txt
cd examples/output/go && go mod init prog && go build -o hello . && ./hello
```
The generated Go code:
- Uses Go 1.18+ (standard library only)
- Single-file with runtime library in the same directory
- Matches interpreter output exactly
### WebAssembly
```sh
english-compiler compile --target wasm examples/hello.txt
# Requires: npm install -g assemblyscript
```
The generated WebAssembly:
- Compiles via AssemblyScript (.as.ts)
- Produces .wasm binary if asc compiler is available
- Portable across platforms
## ExternalCall (Tier 2 operations)
Core IL v1.4+ supports ExternalCall for platform-specific operations like file I/O, HTTP requests, and system calls. These are **non-portable** and only work with the Python backend (not the interpreter).
**Example**: Get current timestamp and working directory
```json
{
"version": "coreil-1.5",
"body": [
{
"type": "Let",
"name": "timestamp",
"value": {
"type": "ExternalCall",
"module": "time",
"function": "time",
"args": []
}
},
{
"type": "Print",
"args": [{"type": "Literal", "value": "Timestamp:"}, {"type": "Var", "name": "timestamp"}]
}
]
}
```
**Running ExternalCall programs**:
```sh
# Compile to Python (required for ExternalCall)
english-compiler compile --target python examples/external_call_demo.txt
# Run the generated Python
python examples/output/py/external_call_demo.py
```
**Available modules**: `time`, `os`, `fs`, `http`, `crypto`
See [coreil_v1.md](coreil_v1.md) for full ExternalCall documentation.
## Testing
**Basic tests** (examples in `examples/` directory):
```sh
python -m tests.run
```
**Algorithm regression tests** (golden corpus with backend parity):
```sh
python -m tests.run_algorithms
```
This enforces:
- Core IL validation passes
- Interpreter executes successfully
- All backends execute successfully (Python, JavaScript, C++, Rust, Go)
- Backend parity (all backend outputs are identical)
- No invalid helper calls
See [tests/ALGORITHM_TESTS.md](tests/ALGORITHM_TESTS.md) for details on failure modes and test coverage.
## Exit codes
- `0`: success
- `1`: error (I/O, validation failure, or runtime error)
- `2`: ambiguities present (artifacts still written)
| text/markdown | null | Colm Rafferty <colmjr@outlook.com> | null | null | null | compiler, llm, code-generation, pseudocode, english, natural-language | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Code Generators",
"Topic :: Softwa... | [] | null | null | >=3.10 | [] | [] | [] | [
"anthropic>=0.18.0; extra == \"claude\"",
"openai>=1.0.0; extra == \"openai\"",
"google-generativeai>=0.3.0; extra == \"gemini\"",
"dashscope>=1.14.0; extra == \"qwen\"",
"platformdirs>=3.0.0; extra == \"platformdirs\"",
"watchfiles>=1.0.0; extra == \"watch\"",
"anthropic>=0.18.0; extra == \"all\"",
"... | [] | [] | [] | [
"Homepage, https://github.com/colmraffertyjr/english-compiler",
"Repository, https://github.com/colmraffertyjr/english-compiler",
"Documentation, https://github.com/colmraffertyjr/english-compiler#readme",
"Issues, https://github.com/colmraffertyjr/english-compiler/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:39:21.536518 | english_compiler-1.10.0.tar.gz | 295,146 | a4/b4/cab507a258650a161e192a2f6200c16826e58f76c539686f001ac5ca41ca/english_compiler-1.10.0.tar.gz | source | sdist | null | false | 173a00190682d1ea0a3a36fb74577fce | 9f3daa3484ee772e381a87434908b4b1011a3911eccee965ee250cf4e1d400b2 | a4b4cab507a258650a161e192a2f6200c16826e58f76c539686f001ac5ca41ca | MIT | [
"LICENSE"
] | 249 |
2.1 | openmetadata-managed-apis | 1.11.10.0 | Airflow REST APIs to create and manage DAGS | # OpenMetadata Airflow Managed DAGS Api
This is a plugin for Apache Airflow >= 1.10 and Airflow >=2.x that exposes REST APIs to deploy an
OpenMetadata workflow definition and manage DAGS and tasks.
## Development
The file [`development/airflow/airflow.cfg`](./development/airflow/airflow.cfg) contains configuration which runs based on
the airflow server deployed by the quick-start and development compose files.
You ca run the following command to start the development environment:
```bash
export AIRFLOW_HOME=$(pwd)/openmetadata-airflow-managed-api/development/airflow
airflow webserver
```
## Requirements
First, make sure that Airflow is properly installed with the latest version `2.3.3`. From
the [docs](https://airflow.apache.org/docs/apache-airflow/stable/installation/installing-from-pypi.html):
Then, install following packages in your scheduler and webserver python env.
```
pip install openmetadata-airflow-managed-apis
```
## Configuration
Add the following section to airflow.cfg
```
[openmetadata_airflow_apis]
dag_generated_configs = {AIRFLOW_HOME}/dag_generated_configs
```
substitute AIRFLOW_HOME with your airflow installation home
## Deploy
```
pip install "apache-airflow==2.3.3" --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.3/constraints-3.9.txt"
```
1. Install the package
2. `mkdir -p {AIRFLOW_HOME}/dag_generated_configs`
3. (re)start the airflow webserver and scheduler
```
airflow webserver
airflow scheduler
```
## Validate
You can check that the plugin is correctly loaded by going to `http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/restapi`,
or accessing the REST_API_PLUGIN view through the Admin dropdown.
## APIs
#### Enable JWT Auth tokens
Plugin enables JWT Token based authentication for Airflow versions 1.10.4 or higher when RBAC support is enabled.
##### Generating the JWT access token
```bash
curl -XPOST http://localhost:8080/api/v1/security/login -H "Content-Type: application/json" -d '{"username":"admin", "password":"admin", "refresh":true, "provider": "db"}'
```
##### Examples:
```bash
curl -X POST http://localhost:8080/api/v1/security/login -H "Content-Type: application/json" -d '{"username":"admin", "password":"admin", "refresh":true, "provider": "db"}'
```
##### Sample response which includes access_token and refresh_token.
```json
{
"access_token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2MDQyMTc4MzgsIm5iZiI6MTYwNDIxNzgzOCwianRpIjoiMTI4ZDE2OGQtMTZiOC00NzU0LWJiY2EtMTEyN2E2ZTNmZWRlIiwiZXhwIjoxNjA0MjE4NzM4LCJpZGVudGl0eSI6MSwiZnJlc2giOnRydWUsInR5cGUiOiJhY2Nlc3MifQ.xSWIE4lR-_0Qcu58OiSy-X0XBxuCd_59ic-9TB7cP9Y",
"refresh_token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2MDQyMTc4MzgsIm5iZiI6MTYwNDIxNzgzOCwianRpIjoiZjA5NTNkODEtNWY4Ni00YjY0LThkMzAtYzg5NTYzMmFkMTkyIiwiZXhwIjoxNjA2ODA5ODM4LCJpZGVudGl0eSI6MSwidHlwZSI6InJlZnJlc2gifQ.VsiRr8_ulCoQ-3eAbcFz4dQm-y6732QR6OmYXsy4HLk"
}
```
By default, JWT access token is valid for 15 mins and refresh token is valid for 30 days. You can renew the access token with the help of refresh token as shown below.
##### Renewing the Access Token
```bash
curl -X POST "http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/api/v1/security/refresh" -H 'Authorization: Bearer <refresh_token>'
```
##### Examples:
```bash
curl -X POST "http://localhost:8080/api/v1/security/refresh" -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2MDQyMTc4MzgsIm5iZiI6MTYwNDIxNzgzOCwianRpIjoiZjA5NTNkODEtNWY4Ni00YjY0LThkMzAtYzg5NTYzMmFkMTkyIiwiZXhwIjoxNjA2ODA5ODM4LCJpZGVudGl0eSI6MSwidHlwZSI6InJlZnJlc2gifQ.VsiRr8_ulCoQ-3eAbcFz4dQm-y6732QR6OmYXsy4HLk'
```
##### sample response returns the renewed access token as shown below.
```json
{
"access_token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2MDQyODQ2OTksIm5iZiI6MTYwNDI4NDY5OSwianRpIjoiZDhhN2IzMmYtMWE5Zi00Y2E5LWFhM2ItNDEwMmU3ZmMyMzliIiwiZXhwIjoxNjA0Mjg1NTk5LCJpZGVudGl0eSI6MSwiZnJlc2giOmZhbHNlLCJ0eXBlIjoiYWNjZXNzIn0.qY2e-bNSgOY-YboinOoGqLfKX9aQkdRjo025mZwBadA"
}
```
#### Enable API requests with JWT
##### If the Authorization header is not added in the api request,response error:
```json
{"msg":"Missing Authorization Header"}
```
##### Pass the additional Authorization:Bearer <access_token> header in the rest API request.
Examples:
```bash
curl -X GET -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2MDQyODQ2OTksIm5iZiI6MTYwNDI4NDY5OSwianRpIjoiZDhhN2IzMmYtMWE5Zi00Y2E5LWFhM2ItNDEwMmU3ZmMyMzliIiwiZXhwIjoxNjA0Mjg1NTk5LCJpZGVudGl0eSI6MSwiZnJlc2giOmZhbHNlLCJ0eXBlIjoiYWNjZXNzIn0.qY2e-bNSgOY-YboinOoGqLfKX9aQkdRjo025mZwBadA' http://localhost:8080/rest_api/api\?api\=dag_state\&dag_id\=dag_test\&run_id\=manual__2020-10-28T17%3A36%3A28.838356%2B00%3A00
```
## Using the API
Once you deploy the plugin and restart the webserver, you can start to use the REST API. Bellow you will see the endpoints that are supported.
**Note:** If enable RBAC, `http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/rest_api/`<br>
This web page will show the Endpoints supported and provide a form for you to test submitting to them.
- [deploy_dag](#deploy_dag)
- [delete_dag](#delete_dag)
### ***<span id="deploy_dag">deploy_dag</span>***
##### Description:
- Deploy a new dag, and refresh dag to session.
##### Endpoint:
```text
http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/rest_api/api?api=deploy_dag
```
##### Method:
- POST
##### POST request Arguments:
```json
{
"workflow": {
"name": "test_ingestion_x_35",
"force": "true",
"pause": "false",
"unpause": "true",
"dag_config": {
"test_ingestion_x_35": {
"default_args": {
"owner": "harsha",
"start_date": "2021-10-29T00:00:00.000Z",
"end_date": "2021-11-05T00:00:00.000Z",
"retries": 1,
"retry_delay_sec": 300
},
"schedule_interval": "0 3 * * *",
"concurrency": 1,
"max_active_runs": 1,
"dagrun_timeout_sec": 60,
"default_view": "tree",
"orientation": "LR",
"description": "this is an example dag!",
"tasks": {
"task_1": {
"operator": "airflow.operators.python_operator.PythonOperator",
"python_callable_name": "metadata_ingestion_workflow",
"python_callable_file": "metadata_ingestion.py",
"op_kwargs": {
"workflow_config": {
"metadata_server": {
"config": {
"api_endpoint": "http://localhost:8585/api",
"auth_provider_type": "no-auth"
},
"type": "metadata-server"
},
"sink": {
"config": {
"es_host": "localhost",
"es_port": 9200,
"index_dashboards": "true",
"index_tables": "true",
"index_topics": "true"
},
"type": "elasticsearch"
},
"source": {
"config": {
"include_dashboards": "true",
"include_tables": "true",
"include_topics": "true",
"limit_records": 10
},
"type": "metadata"
}
}
}
}
}
}
}
}
}
```
##### Examples:
```bash
curl -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2MzU2NTE1MDAsIm5iZiI6MTYzNTY1MTUwMCwianRpIjoiNWQyZTM3ZDYtNjdiYS00NGZmLThjOWYtMDM0ZTQyNGE3MTZiIiwiZXhwIjoxNjM1NjUyNDAwLCJpZGVudGl0eSI6MSwiZnJlc2giOnRydWUsInR5cGUiOiJhY2Nlc3MifQ.DRUYCAiMh5h2pk1MZZJ4asyVFC20pu35DuAANQ5GxGw' -H 'Content-Type: application/json' -d "@test_ingestion_config.json" -X POST http://localhost:8080/rest_api/api\?api\=deploy_dag```
##### response:
```json
{"message": "Workflow [test_ingestion_x_35] has been created", "status": "success"}
```
### ***<span id="delete_dag">delete_dag</span>***
##### Description:
- Delete dag based on dag_id.
##### Endpoint:
```text
http://{AIRFLOW_HOST}:{AIRFLOW_PORT}/rest_api/api?api=delete_dag&dag_id=value
```
##### Method:
- GET
##### GET request Arguments:
- dag_id - string - The id of dag.
##### Examples:
```bash
curl -X GET http://localhost:8080/rest_api/api?api=delete_dag&dag_id=dag_test
```
##### response:
```json
{
"message": "DAG [dag_test] deleted",
"status": "success"
}
```
| text/markdown | OpenMetadata Committers | null | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"pendulum~=3.0",
"apache-airflow>=2.2.2",
"apache-airflow-providers-fab>=1.0.0",
"Flask==2.2.5",
"Flask-Admin==1.6.0",
"packaging>=20.0",
"black==22.3.0; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pylint; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"isort; extra == \"dev\"",
"pycln... | [] | [] | [] | [
"Homepage, https://open-metadata.org/",
"Documentation, https://docs.open-metadata.org/",
"Source, https://github.com/open-metadata/OpenMetadata"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T14:39:03.124056 | openmetadata_managed_apis-1.11.10.0.tar.gz | 47,545 | 3c/b5/9e0bd6bce708be6f87a16bf774d445877aaf96e63984757d2f26d637bfef/openmetadata_managed_apis-1.11.10.0.tar.gz | source | sdist | null | false | 4424bc182f2df0e6ef4a1573afac9997 | 45f637b3b0f4c89affdaf16697dd53ba8607482862a4131719dc35e11ede660f | 3cb59e0bd6bce708be6f87a16bf774d445877aaf96e63984757d2f26d637bfef | null | [] | 376 |
2.4 | sf-toolkit | 0.5.9 | A Salesforce API Adapter for Python | # Salesforce Toolkit for Python
A modern, Pythonic interface to Salesforce APIs.
## Features
- Clean, intuitive API design
- Both synchronous and asynchronous client support
- Simple SObject modeling using Python classes
- Powerful query builder for SOQL queries
- Efficient batch operations
- Automatic session management and token refresh
## Installation
```bash
pip install sf-toolkit
```
## Quick Start
```python
from sf_toolkit import SalesforceClient, SObject, cli_login
from sf_toolkit.io import select, save
from sf_toolkit.data.fields import IdField, TextField
# Define a Salesforce object model
class Account(SObject):
Id = IdField()
Name = TextField()
Industry = TextField()
Description = TextField()
# Connect to Salesforce using the CLI authentication
with SalesforceClient(login=cli_login()) as sf:
# Create a new account
account = Account(Name="Acme Corp", Industry="Technology")
save(account)
# Query accounts
result = select(Account).execute()
for acc in result:
print(f"{acc.Name} ({acc.Industry}) {acc.Description}")
```
| text/markdown | David Culbreth | david.culbreth.256@gmail.com | null | null | MIT | salesforce, api, rest, adapter, toolkit, sfdc, sfdx, forcedotcom | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx<0.29.0,>=0.28.1",
"lxml<6.0.0,>=5.3.1",
"pyjwt<3.0.0,>=2.10.1",
"typing_extensions<5.0.0,>=4.0.0"
] | [] | [] | [] | [
"Changelog, https://github.com/AndroxxTraxxon/python-sf-toolkit/blob/main/CHANGELOG.md",
"Documentation, https://androxxtraxxon.github.io/python-sf-toolkit/",
"Homepage, https://github.com/AndroxxTraxxon/python-sf-toolkit",
"Issues, https://github.com/AndroxxTraxxon/python-sf-toolkit/issues",
"Repository, h... | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:38:32.963858 | sf_toolkit-0.5.9.tar.gz | 54,914 | b6/62/a05fe83a66cb59679c36e8cf830dacce32ae4070c6082c802909defd2335/sf_toolkit-0.5.9.tar.gz | source | sdist | null | false | 1848f5820d1d2d595965bb8cf7d2e698 | 271d7ffa26fcbf1de9861aa966a6d7fdbcd57d4a34729c959b0dc4bbd5f786c7 | b662a05fe83a66cb59679c36e8cf830dacce32ae4070c6082c802909defd2335 | null | [
"LICENSE.txt"
] | 242 |
2.4 | open-darts | 1.4.0 | Open Delft Advanced Research Terra Simulator |
[](https://doi.org/10.5281/zenodo.8046982) [](https://gitlab.com/open-darts/open-darts/-/releases) [](https://gitlab.com/open-darts/open-darts/-/commits/development) [](https://pypi.python.org/project/open-darts/)
[](https://research-software-directory.org/software/opendarts)
# openDARTS
[openDARTS](https://darts.citg.tudelft.nl/) is a scalable parallel modeling framework and aims to accelerate the simulation performance while capturing multi-physics processes in geo-engineering fields such as hydrocarbon, geothermal, CO2 sequestration and hydrogen storage.
## Installation
openDARTS with direct linear solvers can be installed from PyPI:
```bash
pip install open-darts
```
Models that rely on PHREEQC/Reaktoro chemistry backends need the optional third-party stack. Activate the Conda environment you use for development, install Reaktoro via `conda install -c conda-forge reaktoro` (see the [official guide](https://reaktoro.org/installation/installation-using-conda.html)), and invoke `./helper_scripts/build_darts_cmake.sh -p` (or the `.bat` variant on Windows) to build the accompanying iPHREEQC libraries.
openDARTS is available for Python 3.9 to 3.12 for x86_64 architecture both for Linux and Windows.
To build openDARTS please check the [instructions in our wiki](https://gitlab.com/open-darts/open-darts/-/wikis/Build-instructions).
## Tutorials
Check the [tutorial section in the documentation](https://open-darts.gitlab.io/open-darts/getting_started/tutorial.html) and Jupyter Notebooks with basic [Geothermal](https://gitlab.com/open-darts/darts-models/-/tree/main/teaching/EAGE?ref_type=heads) and [GCS](https://gitlab.com/open-darts/darts-models/-/tree/main/teaching/CCS_workshop?ref_type=heads) models.
Also to get started take a look at the different examples in [models](https://gitlab.com/open-darts/open-darts/-/tree/development/models?ref_type=heads).
More advanced examples of complex simulations with openDARTS can be found in [open-darts-models](https://gitlab.com/open-darts/open-darts-models).
## Documentation
For more information about how to get started visit the [documentation](https://open-darts.gitlab.io/open-darts/).
## License
Please refer to [LICENSE.md](LICENSE.md) for information about the licensing of openDARTS.
## Information
The [wiki](https://gitlab.com/open-darts/open-darts/-/wikis/home) contains information for developing cycle, in particular [Build instructions](https://gitlab.com/open-darts/open-darts/-/wikis/Build-instructions).
## How to cite
If you use open-DARTS in your research, we ask you to cite the following publication
[](https://doi.org/10.5281/zenodo.8046982)
## Contribution
Check [how to contribute](https://gitlab.com/open-darts/open-darts/-/wikis/Contributing)
| text/markdown | null | Denis Voskov <D.V.Voskov@tudelft.nl> | null | Ilshat Saifullin <i.s.saifullin@tudelft.nl> | null | energy transition, modeling of CO2 sequestration, geothermal energy production | [
"Programming Language :: Python :: 3",
"Programming Language :: C++"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numba",
"scipy",
"pandas",
"meshio",
"gmsh<=4.13",
"iapws",
"openpyxl",
"pyevtk",
"matplotlib",
"vtk",
"shapely",
"igraph",
"sympy",
"opmcpg",
"xarray",
"netCDF4==1.7.2",
"h5py==3.14.0",
"open-darts-flash==0.11.1",
"phreeqpy",
"myst-parser; extra == \"docs\"",
"sphinx_rtd_th... | [] | [] | [] | [
"homepage, https://darts.citg.tudelft.nl/",
"repository, https://gitlab.com/open-darts/open-darts",
"documentation, https://open-darts.readthedocs.io/en/docs"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T14:38:13.529233 | open_darts-1.4.0-cp313-cp313-win_amd64.whl | 5,687,423 | 04/56/80ee7266685dfc41cd8092f76448a42eae81e3425838b78bf7f50fd5fe95/open_darts-1.4.0-cp313-cp313-win_amd64.whl | cp313 | bdist_wheel | null | false | 2bcb076bdf3677442e28719252d57d01 | 9419f127e66bba0c8a892658a9e2b9f9deb537cf630b3cab4e7f930d9cfaeb7f | 045680ee7266685dfc41cd8092f76448a42eae81e3425838b78bf7f50fd5fe95 | GPL-3.0-or-later | [
"LICENSE",
"LICENSE.md"
] | 625 |
2.1 | openmetadata-ingestion | 1.11.10.0 | Ingestion Framework for OpenMetadata | ---
This guide will help you setup the Ingestion framework and connectors
---

OpenMetadata Ingestion is a simple framework to build connectors and ingest metadata of various systems through OpenMetadata APIs. It could be used in an orchestration framework(e.g. Apache Airflow) to ingest metadata.
**Prerequisites**
- Python >= 3.9.x
### Docs
Please refer to the documentation here https://docs.open-metadata.org/connectors
<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=c1a30c7c-6dc7-4928-95bf-6ee08ca6aa6a" />
### TopologyRunner
All the Ingestion Workflows run through the TopologyRunner.
The flow is depicted in the images below.
**TopologyRunner Standard Flow**

**TopologyRunner Multithread Flow**

| text/x-rst | OpenMetadata Committers | null | null | null | Collate Community License Agreement
Version 1.0
This Collate Community License Agreement Version 1.0 (the “Agreement”) sets
forth the terms on which Collate, Inc. (“Collate”) makes available certain
software made available by Collate under this Agreement (the “Software”). BY
INSTALLING, DOWNLOADING, ACCESSING, USING OR DISTRIBUTING ANY OF THE SOFTWARE,
YOU AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT. IF YOU DO NOT AGREE TO
SUCH TERMS AND CONDITIONS, YOU MUST NOT USE THE SOFTWARE. IF YOU ARE RECEIVING
THE SOFTWARE ON BEHALF OF A LEGAL ENTITY, YOU REPRESENT AND WARRANT THAT YOU
HAVE THE ACTUAL AUTHORITY TO AGREE TO THE TERMS AND CONDITIONS OF THIS
AGREEMENT ON BEHALF OF SUCH ENTITY. “Licensee” means you, an individual, or
the entity on whose behalf you are receiving the Software.
1. LICENSE GRANT AND CONDITIONS.
1.1 License. Subject to the terms and conditions of this Agreement,
Collate hereby grants to Licensee a non-exclusive, royalty-free,
worldwide, non-transferable, non-sublicenseable license during the term
of this Agreement to: (a) use the Software; (b) prepare modifications and
derivative works of the Software; (c) distribute the Software (including
without limitation in source code or object code form); and (d) reproduce
copies of the Software (the “License”). Licensee is not granted the
right to, and Licensee shall not, exercise the License for an Excluded
Purpose. For purposes of this Agreement, “Excluded Purpose” means making
available any software-as-a-service, platform-as-a-service,
infrastructure-as-a-service or other similar online service that competes
with Collate products or services that provide the Software.
1.2 Conditions. In consideration of the License, Licensee’s distribution
of the Software is subject to the following conditions:
(a) Licensee must cause any Software modified by Licensee to carry
prominent notices stating that Licensee modified the Software.
(b) On each Software copy, Licensee shall reproduce and not remove or
alter all Collate or third party copyright or other proprietary
notices contained in the Software, and Licensee must provide the
notice below with each copy.
“This software is made available by Collate, Inc., under the
terms of the Collate Community License Agreement, Version 1.0
located at http://www.getcollate.io/collate-community-license. BY
INSTALLING, DOWNLOADING, ACCESSING, USING OR DISTRIBUTING ANY OF
THE SOFTWARE, YOU AGREE TO THE TERMS OF SUCH LICENSE AGREEMENT.”
1.3 Licensee Modifications. Licensee may add its own copyright notices
to modifications made by Licensee and may provide additional or different
license terms and conditions for use, reproduction, or distribution of
Licensee’s modifications. While redistributing the Software or
modifications thereof, Licensee may choose to offer, for a fee or free of
charge, support, warranty, indemnity, or other obligations. Licensee, and
not Collate, will be responsible for any such obligations.
1.4 No Sublicensing. The License does not include the right to
sublicense the Software, however, each recipient to which Licensee
provides the Software may exercise the Licenses so long as such recipient
agrees to the terms and conditions of this Agreement.
2. TERM AND TERMINATION. This Agreement will continue unless and until
earlier terminated as set forth herein. If Licensee breaches any of its
conditions or obligations under this Agreement, this Agreement will
terminate automatically and the License will terminate automatically and
permanently.
3. INTELLECTUAL PROPERTY. As between the parties, Collate will retain all
right, title, and interest in the Software, and all intellectual property
rights therein. Collate hereby reserves all rights not expressly granted
to Licensee in this Agreement. Collate hereby reserves all rights in its
trademarks and service marks, and no licenses therein are granted in this
Agreement.
4. DISCLAIMER. COLLATE HEREBY DISCLAIMS ANY AND ALL WARRANTIES AND
CONDITIONS, EXPRESS, IMPLIED, STATUTORY, OR OTHERWISE, AND SPECIFICALLY
DISCLAIMS ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE, WITH RESPECT TO THE SOFTWARE.
5. LIMITATION OF LIABILITY. COLLATE WILL NOT BE LIABLE FOR ANY DAMAGES OF
ANY KIND, INCLUDING BUT NOT LIMITED TO, LOST PROFITS OR ANY CONSEQUENTIAL,
SPECIAL, INCIDENTAL, INDIRECT, OR DIRECT DAMAGES, HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, ARISING OUT OF THIS AGREEMENT. THE FOREGOING SHALL
APPLY TO THE EXTENT PERMITTED BY APPLICABLE LAW.
6.GENERAL.
6.1 Governing Law. This Agreement will be governed by and interpreted in
accordance with the laws of the state of California, without reference to
its conflict of laws principles. If Licensee is located within the
United States, all disputes arising out of this Agreement are subject to
the exclusive jurisdiction of courts located in Santa Clara County,
California. USA. If Licensee is located outside of the United States,
any dispute, controversy or claim arising out of or relating to this
Agreement will be referred to and finally determined by arbitration in
accordance with the JAMS International Arbitration Rules. The tribunal
will consist of one arbitrator. The place of arbitration will be Palo
Alto, California. The language to be used in the arbitral proceedings
will be English. Judgment upon the award rendered by the arbitrator may
be entered in any court having jurisdiction thereof.
6.2 Assignment. Licensee is not authorized to assign its rights under
this Agreement to any third party. Collate may freely assign its rights
under this Agreement to any third party.
6.3 Other. This Agreement is the entire agreement between the parties
regarding the subject matter hereof. No amendment or modification of
this Agreement will be valid or binding upon the parties unless made in
writing and signed by the duly authorized representatives of both
parties. In the event that any provision, including without limitation
any condition, of this Agreement is held to be unenforceable, this
Agreement and all licenses and rights granted hereunder will immediately
terminate. Waiver by Collate of a breach of any provision of this
Agreement or the failure by Collate to exercise any right hereunder
will not be construed as a waiver of any subsequent breach of that right
or as a waiver of any other right. | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"packaging",
"tabulate==0.9.0",
"Jinja2>=2.11.3",
"memory-profiler",
"cached-property==1.5.2",
"setuptools<81,>=78.1.1",
"antlr4-python3-runtime==4.9.2",
"chardet==4.0.0",
"azure-keyvault-secrets",
"kubernetes>=21.0.0",
"jsonpatch<2.0,>=1.24",
"email-validator>=2.0",
"pydantic<2.12,>=2.7.0,~... | [] | [] | [] | [
"Homepage, https://open-metadata.org/",
"Documentation, https://docs.open-metadata.org/",
"Source, https://github.com/open-metadata/OpenMetadata"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T14:37:41.600533 | openmetadata_ingestion-1.11.10.0.tar.gz | 20,363,439 | 0b/67/41f6553b12e3cb302a7ddf603a9f96a6800bc28c2f08908e66872573f147/openmetadata_ingestion-1.11.10.0.tar.gz | source | sdist | null | false | 2226ae357dd645830f7c33fc8eaf4216 | f95c5287f77c64ec8795c086d20e988dbe412c700f09714ec96765e96c126fd6 | 0b6741f6553b12e3cb302a7ddf603a9f96a6800bc28c2f08908e66872573f147 | null | [] | 960 |
2.4 | TracMarkdownMacro | 0.11.10 | Implements Markdown syntax WikiProcessor as a Trac macro. | = Markdown !WikiProcessor Macro Implementation =
== Description ==
The !MarkdownMacro package implements John Gruber's [http://daringfireball.net/projects/markdown/ Markdown]
lightweight plain text-to-HTML formatting syntax as a [WikiProcessors WikiProcessor] macro. The original
code is courtesy of Alex Mizrahi aka [#SeeAlso killer_storm]. I simply added a little robustness to the error
checking, documented the package, created setup.py and this README, and registered it with
[MarkdownMacro Trac Hacks].
== Bugs/Feature Requests ==
Existing bugs and feature requests for MarkdownMacro are
[query:status!=closed&component=MarkdownMacro&order=priority here].
If you have any issues, create a
[/newticket?component=MarkdownMacro&owner=dwclifton new ticket].
== Download ==
Download the zipped source from [download:markdownmacro here].
== Source ==
You can check out WikiSearchMacro from [http://trac-hacks.org/svn/markdownmacro here] using Subversion, or [source:markdownmacro browse the source] with Trac.
== Installation ==
First you need to install [http://freewisdom.org/projects/python-markdown/ Python Markdown].
Follow the instructions on the Web site.
Proceed to install the plugin as described in t:TracPlugins.
Enable the macro in `trac.ini`:
{{{#!ini
[components]
Markdown.* = enabled
}}}
You may have to restart your Web server.
== Example ==
{{{
{{{#!Markdown
# RGB
+ Red
+ Green
+ Blue
## Source Code
from trac.core import *
from trac.wiki.macros import WikiMacroBase
from trac.wiki.formatter import Formatter
An example [link](http://example.com/ "With a Title").
}}}
}}}
== See Also ==
* John Gruber's [http://daringfireball.net/projects/markdown/ Markdown]
* [http://www.freewisdom.org/projects/python-markdown/ Python Markdown]
* [http://daringfireball.net/projects/markdown/syntax Markdown syntax]
== Author/Contributors ==
* '''Author:''' [wiki:dwclifton] (Macro/Processor package, setup, documentation)
* '''Maintainer:''' [wiki:dwclifton]
* '''Contributors:'''
* [http://daringfireball.net/colophon/ John Gruber]
* [http://www.freewisdom.org/projects/python-markdown/Credits Yuri Takhteyev, et al.]
* Alex Mizrahi alias [http://trac-hacks.org/attachment/ticket/353/Markdown.py killer_storm]
* The Trac and Python development community
| text/markdown | Douglas Clifton | dwclifton@gmail.com | Cinc-th | null | BSD 3-Clause | 0.11 dwclifton processor macro wiki | [
"Development Status :: 4 - Beta",
"Environment :: Plugins",
"Environment :: Web Environment",
"Framework :: Trac",
"License :: OSI Approved :: BSD License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: ... | [] | https://trac-hacks.org/wiki/MarkdownMacro | null | null | [] | [] | [] | [
"Trac",
"Markdown"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T14:37:28.981521 | tracmarkdownmacro-0.11.10.tar.gz | 8,500 | ea/a6/fd0a648df12101d70718843c48f5a1d1bd13d839ac2c4a2597cd611ea9e3/tracmarkdownmacro-0.11.10.tar.gz | source | sdist | null | false | 150cc38cd52f5dfa05ebe1f675d1baa1 | c08a163615862521c94d0dfa8cc452bb24b5068016ec6a6f532d9a010b6c1744 | eaa6fd0a648df12101d70718843c48f5a1d1bd13d839ac2c4a2597cd611ea9e3 | null | [
"COPYING"
] | 0 |
2.4 | polymage | 0.0.7 | polymage : a multimodal agent python library | # Multimodal Agents Framework
A modular, platform-agnostic Python library to orchestrate multimodal AI agents across platforms
and models (Gemma, moondream, Flux, HiDream, etc.)
Can run both on Local platforms
- Ollama (https://ollama.com), available for macOS, Windows, and Linux
- LmStudio (https://lmstudio.ai), available for macOS, Windows, and Linux
- DrawThings (https://drawthings.ai), available only on macOS
And on Cloud platforms
- Groq ( https://groq.com/ )
- Cloudflare ( https://developers.cloudflare.com/workers-ai/ )
- Togetherai ( https://docs.together.ai/intro )
- Hugginface ( https://huggingface.co/docs/hub/en/api )
## ✨ Features
- Define agents with prompts + multimodal inputs (image, audio, video, text)
- Run same agent on multiple platforms/models for comparison
- Workflow orchestration : just python scripts or tools like Apache Airflow
## 🚀 Quick Start
### Clone the repo
```bash
# clone the repo
git clone
```
### Use uv to create a virtual environment (https://docs.astral.sh/uv/)
uv is the best tool to manage multiple python version and multiple virtual python environment on your machine
Installation guide at https://docs.astral.sh/uv/getting-started/installation/
```bash
uv venv
```
---
Apache 2.0 License — Happy building! 🚀
| text/markdown | null | Fabrice Gaillard <fgaillard@w3architect.com>, Philippe Gaillard <philippe@gaillard.xyz> | null | null | Apache 2.0 | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"groq>=0.37.1",
"huggingface-hub>=1.2.3",
"ollama>=0.6.1",
"openai>=2.11.0",
"pillow>=12.0.0",
"pydantic>=2.12.5",
"requests>=2.32.5",
"tenacity>=9.1.2"
] | [] | [] | [] | [] | uv/0.9.30 {"installer":{"name":"uv","version":"0.9.30","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T14:37:15.883957 | polymage-0.0.7-py3-none-any.whl | 35,893 | 45/87/66e13a4b3867a7637beeb9befc1d36014817953d29106de949ff20084143/polymage-0.0.7-py3-none-any.whl | py3 | bdist_wheel | null | false | 2dd57a4bd495c443913fcdd76e77f4ec | a2d448d4780627b09be5574031e953ad1277f143e40af902dab263f829c1b7a2 | 458766e13a4b3867a7637beeb9befc1d36014817953d29106de949ff20084143 | null | [
"LICENSE"
] | 244 |
2.4 | mozregression | 8.0.0.dev1 | Regression range finder for Mozilla nightly builds | # mozregression
mozregression is an interactive regression rangefinder for quickly tracking down the source of bugs in Mozilla nightly and integration builds.
You can start using mozregression today:
- [start with our installation guide](https://mozilla.github.io/mozregression/install.html), then
- take a look at [our Quick Start document](https://mozilla.github.io/mozregression/quickstart.html).
## Status
[](https://pypi.python.org/pypi/mozregression/)
[](https://pypi.python.org/pypi/mozregression/)
Build status:
- Linux:
[](https://coveralls.io/r/mozilla/mozregression)
For more information see:
https://mozilla.github.io/mozregression/
## Contact
You can chat with the mozregression developers on Mozilla's instance of [Matrix](https://chat.mozilla.org/#/room/#mozregression:mozilla.org): https://chat.mozilla.org/#/room/#mozregression:mozilla.org
## Issue Tracking
Found a problem with mozregression? Have a feature request? We track bugs [on bugzilla](https://bugzilla.mozilla.org/buglist.cgi?quicksearch=product%3ATesting%20component%3Amozregression&list_id=14890897).
You can file a new bug [here](https://bugzilla.mozilla.org/enter_bug.cgi?product=Testing&component=mozregression).
## Building And Developing mozregression
Want to hack on mozregression ? Cool!
### Installing dependencies
To make setup more deterministic, we have provided requirements files to use a known-working
set of python dependencies. From your mozregression checkout, you can install these inside
a virtual development environment.
After checking out the mozregression repository from GitHub, this is a two step process:
1. Be sure you are using Python 3.8 or above: earlier versions are not supported (if you
are not sure, run `python --version` or `python3 --version` on the command line).
2. From inside your mozregression checkout, create a virtual environment, activate it, and install the dependencies. The instructions are slightly different depending on whether you are using Windows or Linux/MacOS.
On Windows:
```bash
python3 -m venv venv
venv\Scripts\activate
pip install -r requirements\requirements-3.9-Windows.txt
pip install -e .
```
On Linux:
```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements/requirements-3.9-Linux.txt
pip install -e .
```
On macOS:
```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements/requirements-3.9-macOS.txt
pip install -e .
```
NOTE: You should replace the Python version with the one that matches with the virtual environment.
### Hacking on mozregression
After running the above commands, you should be able to run the command-line version of
mozregression as normal (e.g. `mozregression --help`) inside the virtual environment. If
you wish to try running the GUI, use the provided helper script:
```bash
python gui/build.py run
```
To run the unit tests for the console version:
```bash
pytest tests
```
For the GUI version:
```bash
python gui/build.py test
```
Before submitting a pull request, please lint your code for errors and formatting (we use [black](https://black.readthedocs.io/en/stable/), [flake8](https://flake8.pycqa.org/en/latest/) and [isort](https://isort.readthedocs.io/en/latest/))
```bash
./bin/lint-check.sh
```
If it turns up errors, try using the `lint-fix.sh` script to fix any errors which can be addressed automatically:
```bash
./bin/lint-fix.sh
```
### Making a release
Create a new GitHub release and give it a tag name identical to the version number you want (e.g. `4.0.20`). CI should automatically upload new versions of the GUI applications to the release and to TestPyPI and PyPI.
Follow the following conventions for pre-releases:
- For development releases, tags should be appended with .devN, starting with N=0. For example, 6.2.1.dev0.
- For alpha, beta, or release candidates, tags should be appended with aN, bN, or rcN, starting with N=0. For example, 6.2.1a0.dev4, 6.2.1rc2, etc...
For more info, see [PEP 440](https://peps.python.org/pep-0440/).
#### Signing and notarizing macOS releases
Uploading the signed artifacts is a manual process at this time. To sign and notarize a macOS release, follow these steps:
- Copy the signing manifest output from the build job.
- Create a pull request to update `signing-manifests/mozregression-macOS.yml` in the [adhoc-signing](https://github.com/mozilla-releng/adhoc-signing) repo with those changes.
- Wait for pull request to be merged, and the signing task to finish.
- After the signing task is finished, download `mozregression-gui-app-bundle.tar.gz` and extract it in `gui/dist`.
- Run `./bin/dmgbuild`.
- Upload new dmg artifact (gui/dist/mozregression-gui.dmg) to the corresponding release.
| text/markdown | null | Mozilla Automation and Tools Team <tools@lists.mozilla.org> | null | null | MPL 2.0 | null | [
"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Progr... | [
"Any"
] | null | null | >=3.9 | [] | [] | [] | [
"glean_sdk>=60.3.0",
"beautifulsoup4>=4.7.1",
"colorama>=0.4.1",
"configobj>=5.0.6",
"distro>=1.8.0",
"importlib_resources>=5.10",
"mozdevice<5,>=4.1.0",
"mozfile>=2.0.0",
"mozinfo>=1.1.0",
"mozinstall>=2.0.0",
"mozlog>=4.0",
"mozprocess>=1.3.1",
"mozprofile>=2.2.0",
"mozrunner>=8.0.2",
... | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T14:37:08.522205 | mozregression-8.0.0.dev1.tar.gz | 70,576 | 97/35/33a453fbfb15a5d90c2f9f6458bf622fdcc4b165cffe3710792614856ae4/mozregression-8.0.0.dev1.tar.gz | source | sdist | null | false | 931e2f7241fe15c6011806b8df754793 | 7ac490c5f92133754d907ae29bac197376ba6935068154a5be1a9978b9bcbcc8 | 973533a453fbfb15a5d90c2f9f6458bf622fdcc4b165cffe3710792614856ae4 | null | [
"LICENSE"
] | 195 |
2.4 | mkdocs-nodegraph | 0.5.0 | Node Graph plugin for Mkdocs Material | # mkdocs-nodegraph
mkdocs-nodegraph - A Plugin for Visualizing Network Graphs in MkDocs
## Summary
mkdocs-nodegraph is a documents network graph visualization plugin for the mkdocs-material.
It allows you to create interactive visualizations of your documentation structure, helping users navigate through topics more easily.
## Example
<p align="center">
<a>
<img alt="example_image_002.png" src="https://github.com/yonge123/mkdocs-nodegraph/blob/master/sources/example_image_002.png?raw=true" data-hpc="true" class="Box-sc-g0xbh4-0 fzFXnm">
</a>
<!--  -->
<br>
YouTube Link
- https://www.youtube.com/watch?v=KD1AsW304kc
<br>
## Install
Install with setup.py
```shell
python.exe setup.py install
```
<br>
Install with pip
```sh
pip install mkdocs-nodegraph
```
<br>
Uninstall with pip
```
pip uninstall mkdocs-nodegraph
```
<br>
## Setup Tags, Node Icon and Color on Markdown File
```md
---
tags:
- CG
- 3D software
mdfile_icon: "_sources/svgs/blender.svg"
mdfile_color: "#ea7600"
mdfile_site: "https://www.blender.org/"
---
```
mdfile_icon -> Node Icon
mdfile_color -> Node Color
mdfile_site -> A website URL that opens on click while holding the Alt key
<br>
## Click Node
`LMB` -> Open Node Page
`Ctrl + LMB` -> Open Node Page in a New Tab
`Alt + LMB` -> Open the mdfile_site Page from the metadata
<br>
## mkdocs.yml Configuration
```yml
theme:
# pip install mkdocs-material
name: material
# name: readthedocs
features:
# - navigation.tabs
- content.code.copy
palette:
# Palette toggle for dark mode
- media: "(prefers-color-scheme: dark)"
scheme: slate
primary: blue
accent: blue
toggle:
icon: material/brightness-7
name: Switch to light mode
# Palette toggle for light mode
- media: "(prefers-color-scheme: light)"
scheme: default
primary: blue
accent: blue
toggle:
icon: material/brightness-4
name: Switch to dark mode
plugins:
- search
- offline
- glightbox:
skip_classes:
- collapse-btn
- nodegraph:
graphfile: "nodegraph.html"
```
<br>
After setting up the nodegraph plugin with its graphfile path, you can build your site
```shell
mkdocs build
```
<br>
<!-- After building the site, you can click the button  to open the graph file. -->
After building the site, you can click the button <img src="https://github.com/yonge123/mkdocs-nodegraph/raw/master/sources/graph_icon.svg" alt="" style="max-width: 100%;"> to open the graph file.
<br>
## References
- https://github.com/barrettotte/md-graph?tab=readme-ov-file
- https://pyvis.readthedocs.io/en/latest/
<br>
| text/markdown | JeongYong Hwang | yonge123@gmail.com | null | null | MIT | mkdocs-nodegraph, mkdocs, mkdocs-material, markdown node, graph, nodenetwork, nodeview, graphview, networkgraph, plugin, nodegraph, markdown | [] | [] | https://yonge123.github.io/mkdocs-nodegraph/nodegraph.html | null | >=3.9 | [] | [] | [] | [
"mkdocs>=1.4.0",
"mkdocs-material>=9.5.31",
"pyembed-markdown>=1.1.0",
"mkdocs-glightbox>=0.4.0",
"pyvis>=0.3.0",
"PyYAML>=6.0.2"
] | [] | [] | [] | [
"Source, https://github.com/yonge123/mkdocs-nodegraph/tree/master",
"Bug Tracker, https://github.com/yonge123/mkdocs-nodegraph/issues",
"Documentation, https://github.com/yonge123/mkdocs-nodegraph/tree/master"
] | twine/6.2.0 CPython/3.13.6 | 2026-02-18T14:36:53.102304 | mkdocs_nodegraph-0.5.0.tar.gz | 22,612 | d7/92/8a854d2a2e7f4576fc6a9cae2ec7ae7fa52188cb5a8d4c0440bdae2265c0/mkdocs_nodegraph-0.5.0.tar.gz | source | sdist | null | false | a0b8411de011f928575496e232c896d7 | 9d234509d5a902192a34f6e386fb72a4b9b7e08a365112106a7bd7b1fe387cf3 | d7928a854d2a2e7f4576fc6a9cae2ec7ae7fa52188cb5a8d4c0440bdae2265c0 | null | [] | 256 |
2.4 | tdl-xoa-driver | 1.7.6 | TDL XOA Python API is a Python library providing user-friendly communication interfaces to Teledyne LeCroy Xena Ethernet traffic generation test equipment. It provides a rich collection of APIs that can be used to either write test scripts or develop applications. |  [](https://pypi.python.org/pypi/tdl-xoa-driver) [](https://docs.xenanetworks.com/projects/tdl-xoa-driver/en/latest/?badge=latest)
# TDL XOA Python API
TDL XOA Python API is a standalone Python library that provides a user-friendly and powerful interface for automating network testing tasks using Xena Networks test equipment. Xena test equipment is a high-performance network test device designed for testing and measuring the performance of network equipment and applications.
## Introduction
The TDL XOA Python API is designed to be easy to use and integrate with other automation tools and frameworks. It provides a comprehensive set of methods and classes for interacting with Xena test equipment, including the ability to create and run complex test scenarios, generate and analyze traffic at line rate, and perform detailed analysis of network performance and behavior.
The TDL XOA Python API simplifies the process of automating network testing tasks using Xena test equipment. It provides a simple, yet powerful, interface for interacting with Xena test equipment using the Python programming language. With the TDL XOA Python API, network engineers and testing professionals can easily create and execute test scenarios, generate and analyze traffic, and perform detailed analysis of network performance and behavior, all while leveraging the power and flexibility of the Python programming language.
Overall, the TDL XOA Python API is a valuable tool for anyone looking to automate their network testing tasks using Xena test equipment. With its simple, yet powerful, interface and support for the Python programming language, the TDL XOA Python API provides a flexible and extensible framework for automating network testing tasks and improving the quality of network infrastructure.
## Documentation
The user documentation is hosted:
[XOA Driver Documentation](https://docs.xenanetworks.com/projects/tdl-xoa-driver)
## Key Features
* Objected-oriented, high-level abstraction, to help users save time on parsing command responses.
* Supporting sending commands in batches to increase code execution efficiency.
* Automatically matching command requests and server response, providing clear information in case a command gets an error response.
* Supporting server-to-client push notification, and event subscription, to reduce user code complexity.
* Covering commands of Xena testers, including Xena Valkyrie, Vulcan, and Chimera.
* Supporting IDE auto-complete with built-in class/function/API use manual, to increase development efficiency.
| text/markdown | Leonard Yu, Zoltan Hanisch | Leonard.Yu@teledyne.com, Zoltan.Hanisch@teledyne.com | Teledyne LeCroy | xena-sales@teledyne.com | Apache 2.0 | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Languag... | [] | https://github.com/xenanetworks/tdl-xoa-driver | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T14:36:09.935246 | tdl_xoa_driver-1.7.6.tar.gz | 323,081 | ea/71/c9819ea436a336f3f3acc47cd99d563d7080499968f6e019a18a32a3deec/tdl_xoa_driver-1.7.6.tar.gz | source | sdist | null | false | ef1d52ce6d4f2a001731f14d5911f122 | 5d88762a7c04676dc229b94c0277016890bf6e718e83268263439bf376380ffd | ea71c9819ea436a336f3f3acc47cd99d563d7080499968f6e019a18a32a3deec | null | [
"LICENSE"
] | 326 |
2.4 | antonnia-conversations | 2.0.41 | Antonnia Conversations Python SDK | # Antonnia Conversations Python SDK
[](https://badge.fury.io/py/antonnia-conversations)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
A Python client library for the Antonnia Conversations API v2. This SDK provides a clean, async-first interface for managing conversation sessions, messages, and agents.
Part of the Antonnia namespace packages - install only what you need:
- `pip install antonnia-conversations` for conversations API
- `pip install antonnia-orchestrator` for orchestrator API
- `pip install antonnia-auth` for authentication API
- Or install multiple: `pip install antonnia-conversations antonnia-orchestrator`
## Features
- 🚀 **Async/await support** - Built with modern Python async patterns
- 🔒 **Type safety** - Full type hints and Pydantic models
- 🛡️ **Error handling** - Comprehensive exception handling with proper HTTP status codes
- 📝 **Rich content** - Support for text, images, audio, files, and function calls
- 🔄 **Session management** - Create, transfer, and manage conversation sessions
- 💬 **Message handling** - Send, receive, and search messages
- 🤖 **Agent support** - Work with both AI and human agents
- 🔧 **Namespace packages** - Modular installation, use only what you need
## Installation
```bash
pip install antonnia-conversations
```
## Quick Start
```python
import asyncio
from antonnia.conversations import Conversations
from antonnia.conversations.types import MessageContentText
async def main():
async with Conversations(
token="your_api_token",
base_url="https://api.antonnia.com"
) as client:
# Create a new conversation session
session = await client.sessions.create(
contact_id="user_12345",
contact_name="John Doe",
metadata={"priority": "high", "department": "support"}
)
# Send a message from the user
message = await client.sessions.messages.create(
session_id=session.id,
content=MessageContentText(type="text", text="Hello, I need help with my account"),
role="user"
)
# Trigger an AI agent response
updated_session = await client.sessions.reply(session_id=session.id)
# Search for messages in the session
messages = await client.sessions.messages.search(
session_id=session.id,
limit=10
)
print(f"Session {session.id} has {len(messages)} messages")
if __name__ == "__main__":
asyncio.run(main())
```
## Authentication
The SDK requires an API token for authentication. You can obtain this from your Antonnia dashboard.
```python
from antonnia.conversations import Conversations
# Initialize with your API token
client = Conversations(
token="your_api_token_here",
base_url="https://api.antonnia.com" # or your custom API endpoint
)
```
## Core Concepts
### Sessions
Sessions represent active conversations between contacts and agents. Each session can contain multiple messages and be transferred between agents.
```python
# Create a session
session = await client.sessions.create(
contact_id="contact_123",
contact_name="Jane Smith",
agent_id="agent_456", # Optional
status="open",
metadata={"source": "website", "priority": "normal"}
)
# Get session details
session = await client.sessions.get(session_id="sess_123")
# Update session fields (metadata, status, agent_id, etc.)
session = await client.sessions.update(
session_id="sess_123",
fields={
"metadata": {"priority": "urgent", "escalated": True},
"status": "open"
}
)
# Transfer to another agent
session = await client.sessions.transfer(
session_id="sess_123",
agent_id="agent_789"
)
# Finish the session
session = await client.sessions.finish(
session_id="sess_123",
ending_survey_id="survey_123" # Optional
)
```
### Messages
Messages are the individual communications within a session. They support various content types and roles.
```python
from antonnia.conversations.types import MessageContentText, MessageContentImage
# Send a text message
text_message = await client.sessions.messages.create(
session_id="sess_123",
content=MessageContentText(type="text", text="Hello there!"),
role="user"
)
# Send an image message
image_message = await client.sessions.messages.create(
session_id="sess_123",
content=MessageContentImage(type="image", url="https://example.com/image.jpg"),
role="user"
)
# Get a specific message
message = await client.sessions.messages.get(
session_id="sess_123",
message_id="msg_456"
)
# Search messages
messages = await client.sessions.messages.search(
session_id="sess_123",
offset=0,
limit=50
)
```
### Content Types
The SDK supports various message content types:
#### Text Messages
```python
from antonnia.conversations.types import MessageContentText
content = MessageContentText(
type="text",
text="Hello, how can I help you?"
)
```
#### Image Messages
```python
from antonnia.conversations.types import MessageContentImage
content = MessageContentImage(
type="image",
url="https://example.com/image.jpg"
)
```
#### Audio Messages
```python
from antonnia.conversations.types import MessageContentAudio
content = MessageContentAudio(
type="audio",
url="https://example.com/audio.mp3",
transcript="This is the audio transcript" # Optional
)
```
#### File Messages
```python
from antonnia.conversations.types import MessageContentFile
content = MessageContentFile(
type="file",
url="https://example.com/document.pdf",
mime_type="application/pdf",
name="document.pdf"
)
```
#### Function Calls (AI Agents)
```python
from antonnia.conversations.types import MessageContentFunctionCall, MessageContentFunctionResult
# Function call from AI
function_call = MessageContentFunctionCall(
type="function_call",
id="call_123",
name="get_weather",
input='{"location": "New York"}'
)
# Function result
function_result = MessageContentFunctionResult(
type="function_result",
id="call_123",
name="get_weather",
output='{"temperature": 72, "condition": "sunny"}'
)
```
## Error Handling
The SDK provides structured exception handling:
```python
from antonnia.conversations import Conversations
from antonnia.conversations.exceptions import (
AuthenticationError,
NotFoundError,
ValidationError,
RateLimitError,
APIError
)
try:
session = await client.sessions.get("invalid_session_id")
except AuthenticationError:
print("Invalid API token")
except NotFoundError:
print("Session not found")
except ValidationError as e:
print(f"Validation error: {e.message}")
except RateLimitError as e:
print(f"Rate limited. Retry after {e.retry_after} seconds")
except APIError as e:
print(f"API error {e.status_code}: {e.message}")
```
## Advanced Usage
### Custom HTTP Client
You can provide your own HTTP client for advanced configuration:
```python
import httpx
from antonnia.conversations import Conversations
# Custom HTTP client with proxy
http_client = httpx.AsyncClient(
proxies="http://proxy.example.com:8080",
timeout=30.0
)
async with Conversations(
token="your_token",
base_url="https://api.antonnia.com",
http_client=http_client
) as client:
# Use client as normal
session = await client.sessions.create(...)
```
### Session Search and Filtering
```python
# Search sessions by contact
sessions = await client.sessions.search(
contact_id="contact_123",
status="open",
limit=10
)
# Search sessions by metadata
sessions = await client.sessions.search(
metadata={
"priority": "high",
"department": "sales",
"internal.user_id": "user123" # nested paths supported
}
)
# Pagination
page_1 = await client.sessions.search(
contact_id="contact_123",
offset=0,
limit=20
)
page_2 = await client.sessions.search(
contact_id="contact_123",
offset=20,
limit=20
)
```
### Webhook Events
The Antonnia API supports webhook events for real-time updates. Configure your webhook endpoint to receive these events:
- `session.created` - New session created
- `session.transferred` - Session transferred between agents
- `session.finished` - Session completed
- `message.created` - New message in session
## API Reference
### Conversations Client
The main client class for accessing the Antonnia API.
#### `Conversations(token, base_url, timeout, http_client)`
**Parameters:**
- `token` (str): Your API authentication token
- `base_url` (str): API base URL (default: "https://api.antonnia.com")
- `timeout` (float): Request timeout in seconds (default: 60.0)
- `http_client` (httpx.AsyncClient, optional): Custom HTTP client
**Properties:**
- `sessions`: Sessions client for session management
### Sessions Client
Manage conversation sessions.
#### `sessions.create(contact_id, contact_name, agent_id=None, status="open", metadata=None)`
#### `sessions.get(session_id)`
#### `sessions.update(session_id, fields=None, metadata=None)`
#### `sessions.transfer(session_id, agent_id)`
#### `sessions.finish(session_id, ending_survey_id=None)`
#### `sessions.reply(session_id, debounce_time=0)`
#### `sessions.search(contact_id=None, status=None, metadata=None, offset=None, limit=None)`
### Messages Client
Manage messages within sessions. Accessed via `client.sessions.messages`.
#### `messages.create(session_id, content, role="user", provider_message_id=None, replied_provider_message_id=None)`
#### `messages.get(session_id, message_id)`
#### `messages.update(session_id, message_id, provider_message_id=None, replied_provider_message_id=None)`
#### `messages.search(session_id=None, provider_message_id=None, replied_provider_message_id=None, offset=None, limit=None)`
## Requirements
- Python 3.8+
- httpx >= 0.25.0
- pydantic >= 2.7.0
## Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Namespace Packages
This SDK is part of the **Antonnia namespace packages** ecosystem. Each service has its own installable package, but they all work together under the `antonnia` namespace.
### Available Packages
- **`antonnia-conversations`** - Conversations API (sessions, messages, agents)
- **`antonnia-orchestrator`** - Orchestrator API (threads, runs, assistants)
- **`antonnia-auth`** - Authentication API (users, tokens, permissions)
- **`antonnia-contacts`** - Contacts API (contact management)
- **`antonnia-events`** - Events API (webhooks, event streams)
- **`antonnia-functions`** - Functions API (serverless functions)
### Usage Examples
**Install only what you need:**
```bash
# Just conversations
pip install antonnia-conversations
# Just orchestrator
pip install antonnia-orchestrator
# Multiple services
pip install antonnia-conversations antonnia-orchestrator antonnia-auth
```
**Use together seamlessly:**
```python
# Each package provides its own client and types
from antonnia.conversations import Conversations
from antonnia.conversations.types import Session, MessageContentText
from antonnia.conversations.exceptions import AuthenticationError
from antonnia.orchestrator import Orchestrator
from antonnia.orchestrator.types import Thread, Run
from antonnia.orchestrator.exceptions import OrchestratorError
from antonnia.auth import Auth
from antonnia.auth.types import User, Token
from antonnia.auth.exceptions import TokenExpiredError
async def integrated_example():
# Initialize multiple services
conversations = Conversations(token="conv_token")
orchestrator = Orchestrator(token="orch_token")
auth = Auth(token="auth_token")
# Use them together
user = await auth.users.get("user_123")
session = await conversations.sessions.create(
contact_id=user.id,
contact_name=user.name
)
thread = await orchestrator.threads.create(
user_id=user.id,
metadata={"session_id": session.id}
)
```
### Creating Additional Services
To add a new service (e.g., `antonnia-analytics`):
1. **Create package structure:**
```
antonnia-analytics/
├── antonnia/
│ └── analytics/
│ ├── __init__.py # Export main Analytics client
│ ├── client.py # Analytics client class
│ ├── types/
│ │ ├── __init__.py # Export all types
│ │ └── reports.py # Analytics types
│ └── exceptions.py # Analytics exceptions
├── pyproject.toml # Package config
└── setup.py # Alternative setup
```
2. **Configure namespace package:**
```toml
# pyproject.toml
[project]
name = "antonnia-analytics"
[tool.setuptools.packages.find]
include = ["antonnia*"]
[tool.setuptools.package-data]
"antonnia.analytics" = ["py.typed"]
```
3. **Use consistent imports:**
```python
# User imports
from antonnia.analytics import Analytics
from antonnia.analytics.types import Report, ChartData
from antonnia.analytics.exceptions import AnalyticsError
```
This approach provides:
- **Modular installation** - Install only needed services
- **Consistent API** - All services follow the same patterns
- **Type safety** - Each service has its own typed interfaces
- **No conflicts** - Services can evolve independently
- **Easy integration** - Services work together seamlessly
## Support
- 📖 [Documentation](https://docs.antonnia.com)
- 💬 [Discord Community](https://discord.gg/antonnia)
- 📧 [Email Support](mailto:support@antonnia.com)
- 🐛 [Issue Tracker](https://github.com/antonnia/antonnia-python/issues) | text/markdown | null | Antonnia <support@antonnia.com> | null | null | MIT | antonnia, api, chat, conversations, messaging, sdk | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python ... | [] | null | null | >=3.8 | [] | [] | [] | [
"httpx<1.0.0,>=0.25.0",
"pydantic<3.0.0,>=2.7.0",
"pytz>=2023.3"
] | [] | [] | [] | [
"Homepage, https://antonnia.com",
"Documentation, https://docs.antonnia.com",
"Repository, https://github.com/antonnia-com-br/antonnia-conversations",
"Bug Tracker, https://github.com/antonnia-com-br/antonnia-conversations/issues"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-18T14:35:37.071498 | antonnia_conversations-2.0.41.tar.gz | 29,854 | 16/90/ba962be84fbeface4071432a21129e98250d09e5ce4de4d2fdd7013bb894/antonnia_conversations-2.0.41.tar.gz | source | sdist | null | false | 469e51af5598ef815ccadec8ca9566ac | b72399e72b59aab4b655f37100ad374c26c8dbd06968ea6052e98c46e39410ff | 1690ba962be84fbeface4071432a21129e98250d09e5ce4de4d2fdd7013bb894 | null | [
"LICENSE"
] | 252 |
2.4 | otsutil | 1.3.0.312 | A general-purpose utility package using Python 3.12+ features. | # otsutil
よく使う関数やクラスを纏めたライブラリです。
このライブラリは以下の環境で作成・最適化されています。
`Windows10/11`, `Python 3.12.0+`
## インストール
インストール
`pip install otsutil`
アップデート
`pip install -U otsutil`
アンインストール
`pip uninstall otsutil`
## モジュール
以下のモジュールが存在します。
| モジュール名 | 概要 |
| :---: | :--- |
| [classes](#classesモジュール) | スレッドセーフなコンテナやタイマーなどのクラス定義 |
| [funcs](#funcsモジュール) | ファイル操作や型判定などの便利な関数定義 |
| [types](#typesモジュール) | パッケージ全体で共通利用する型ヒント・ジェネリクス共通定義 |
---
### classesモジュール
classesモジュールでは以下のクラスが定義されています。
| クラス名 | 概要 |
| :---: | :--- |
| **LockableDict** | 要素の操作時に `threading.RLock` を使用するスレッドセーフな `dict` クラス。<br>`with obj:` 構文によるコンテキストマネージャに対応し、複数の操作をアトミックに実行可能です。 |
| **LockableList** | 要素の操作時に `threading.RLock` を使用するスレッドセーフな `list` クラス。<br>`with obj:` 構文によるコンテキストマネージャに対応し、複数の操作をアトミックに実行可能です。 |
| **ObjectStore** | オブジェクトを `pickle` + `base64` でシリアライズし、ファイルに永続化・管理するクラス。<br>カスタムクラスを保存する場合、そのクラスに `__reduce__` を実装することでリスト内の要素なども含め高度な変換・復元が可能です。 |
| **OtsuNone** | `None` を返す可能性のある辞書の `get` デフォルト値などに使用するセンチネルオブジェクト。<br>`bool()` 判定では `False` を返します。 |
| **Timer** | 指定時間の経過判定および待機を行うタイマー。<br>同期的なブロック待機 (`join`) に加え、`asyncio` による非同期待機 (`ajoin`) をサポート。<br>`for` / `async for` 文で残り時間を `yield` しながら処理を行うイテレータ機能を持ちます。 |
---
### funcsモジュール
funcsモジュールでは以下の関数が定義されています。
| 関数名 | 概要 |
| :---: | :--- |
| **deduplicate** | シーケンスから重複を取り除き、順序を保持したまま元の型(`list`/`tuple`)で返す。 |
| **ensure_relative** | パスを基準ディレクトリからの相対パスに確実に正規化する。環境依存の絶対パスを排除する際に有用。 |
| **get_sub_paths** | ディレクトリ内を探索し、ワイルドカードや拡張子による高度なフィルタリングを適用して子パスを一覧取得する。 |
| **get_value** | 階層構造を持つ辞書などから、キーのリストを指定して安全に値を取得する。 |
| **is_all_type** | 反復可能オブジェクトの全ての要素が、指定した型であるか判定する。 |
| **is_type** | オブジェクトが指定した型であるか判定する(`None` 許容判定などを含む)。 |
| **load_json** | `JSON` ファイルを読み込む。親ディレクトリがない場合は作成し、ファイルがない場合はデフォルト値を返します。 |
| **read_lines** | ファイルを1行ずつ読み出すジェネレータ。改行コードの自動除去やエンコーディング指定が可能です。 |
| **same_path** | 2つのパスが(相対/絶対に関わらず)物理的に同じ場所を指しているか判定する。 |
| **save_json** | オブジェクトを `JSON` 形式で保存する。 |
| **setup_path** | パスを `Path` オブジェクトとして整備し、必要に応じて親ディレクトリを生成して利用可能な状態にする。 |
| **str_to_path** | 文字列を `pathlib.Path` に変換する。 |
| **write_lines** | 反復可能な文字列データを1行ずつファイルに書き出す。 |
---
### typesモジュール
typesモジュールでは、Python 3.12 のジェネリクス構文に対応した以下の型定義がされています。
| 名称 | 形式 | 概要 |
| :---: | :---: | :--- |
| **FloatInt** | TypeAlias | `float` または `int` に限定した数値型。 |
| **K / V** | TypeVar | 辞書のキー(Key)や値(Value)を想定した型変数。 |
| **P** | ParamSpec | 関数の引数仕様(Parameter Specification)を表す型変数。 |
| **R** | TypeVar | 関数の戻り値(Return Value)を表す型変数。 |
| **T** | TypeVar | 制約のない汎用的な型変数。 |
| **HMSTuple** | Alias | `(時, 分, 秒)` のタプル。型は `(int, int, float)`。 |
| **StrPath** | Alias | `pathlib.Path` または `str`。 |
| text/markdown | null | Otsuhachi <agequodagis.tufuiegoeris@gmail.com> | null | null | MIT License | null | [
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.12 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.4 | 2026-02-18T14:35:25.405495 | otsutil-1.3.0.312.tar.gz | 12,786 | 30/9d/7235d9c13587640340784bebbf4e7413a3d446bbb2cd43c8fd2f541faf61/otsutil-1.3.0.312.tar.gz | source | sdist | null | false | e7dd7710731986175bca9ccac89442dd | 3105761c6a4ac51552818ee9e339a471a7a7e4c2133de6949af9c100b45b3fff | 309d7235d9c13587640340784bebbf4e7413a3d446bbb2cd43c8fd2f541faf61 | null | [
"LICENSE.txt"
] | 246 |
2.4 | pypnusershub | 3.2.0 | Python lib to authenticate using PN's UsersHub | # UsersHub-authentification-module [](https://github.com/PnX-SI/UsersHub-authentification-module/actions/workflows/pytest.yml)[](https://codecov.io/gh/PnX-SI/UsersHub-authentification-module)
UsersHub-authentification-module est une extension [Flask](https://flask.palletsprojects.com/) permettant d'intégrer un système d'authentification et de gestion de sessions utilisateurs à son application. L'extension fournit un mécanisme d'authentification basé sur une base de données locale et permet également de se connecter via des mécanismes d'authentification externe standard (OAuth, OIDC).
La gestion des session est gérée par l'extension [Flask-Login](https://flask-login.readthedocs.io/en/latest/).
## Get Started
### Installation
```sh
pip install pypnusershub
```
### Application Flask minimale
Pour disposer des routes de connexion/déconnexion avec le protocole de connexion par défaut, le code minimal d'une application Flask est le suivant:
```python
from flask import Flask
from pypnusershub.auth import auth_manager
app = Flask(__name__) # Instantiate a Flask application
app.config["URL_APPLICATION"] = "/" # Home page of your application
providers_config = # Declare identity providers used to log into your app
[
# Default identity provider (comes with UH-AM)
{
"module" : "pypnusershub.auth.providers.default.LocalProvider",
"id_provider":"local_provider"
},
# you can add other identity providers that works with OpenID protocol (and many others !)
]
auth_manager.init_app(app,providers_declaration=providers_config)
if __name__ == "__main__":
app.run(host="0.0.0.0",port=5000)
```
### Protèger une route
Pour protéger une route en utilisant les profils de permissions, utilisez le décorateur `check_auth(niveau_profil)`:
```python
from pypnusershub.decorators import check_auth
@adresses.route('/', methods=['POST', 'PUT'])
@check_auth(4) # Decorate the Flask route
def insertUpdate_bibtaxons(id_taxon=None):
pass
```
Si vous voulez limitez l'accès à une route uniquement aux utilisateurs connectés (qu'importe leur profils), utilisez le décorateur `login_required` de Flask-login
```python
from flask_login import login_required
@adresses.route('/', methods=['POST', 'PUT'])
@login_required
def insertUpdate_bibtaxons(id_taxon=None):
pass
```
## Installation
### Pré-requis
- Python 3.9 ou (ou version supérieure)
- PostgreSQL 11.x (ou version supérieure)
- Paquets systèmes suivants: `python3-dev`,`build-essential`, `postgresql-server-dev`
### Installer `UsersHub-authentification-module`
Cloner le dépôt (ou télécharger une archive), dirigez vous dans le dossier et lancez la commande :
```
pip install .
```
### Configuration de l'extension
#### Configuration Flask
Indiquer la route de la _homepage_ de votre application dans la variable `URL_APPLICATION`
Pour manipuler la base de données, nous utilisons l'extension `flask-sqlalchemy`. Si votre application déclare déjà un objet `flask_sqlalchemy.SQLAlchemy`, déclarer le chemin python vers celui-ci dans la variable de configuration `FLASK_SQLALCHEMY_DB`.
```python
os.environ["FLASK_SQLALCHEMY_DB"] = "unmodule.unsousmodule.nomvariable"
```
#### Configuration de Flask-login
Paramètres à rajouter dans la configuration ( attribut `config` de l'objet `Flask`) de votre application.
Les paramètre concernant la gestion du cookie sont gérés par flask-admin : https://flask-login.readthedocs.io/en/latest/#cookie-settings
`REDIRECT_ON_FORBIDDEN` : paramètre de redirection utilisé par le décorateur `check_auth` lorsque les droits d'accès à une ressource/page sont insuffisants (par défaut lève une erreur 403)
#### Lien avec UsersHub
Pour utiliser les routes de UsersHub, ajouter les paramètres suivants dans la configuration de l'application :
- `URL_USERSHUB` : Url de votre UsersHub
- `ADMIN_APPLICATION_LOGIN` , `ADMIN_APPLICATION_PASSWORD`, `ADMIN_APPLICATION_MAIL` : identifiant de l'administrateur de votre UsersHub
```python
app.config["URL_USERSHUB"]="http://usershub-url.ext"
# Administrateur de mon application
app.config["ADMIN_APPLICATION_LOGIN"]="admin-monapplication"
app.config["ADMIN_APPLICATION_PASSWORD"]="monpassword"
app.config["ADMIN_APPLICATION_MAIL"]="admin-monapplication@mail.ext"
```
> [!TIP]
> Si vous souhaitez une interface permettant de modifier les données utilisateurs décritent dans `UsersHub-authentification-module`, il est conseillé d'utiliser [UsersHub](https://github.com/PnX-SI/UsersHub).
#### Configuration de la base de données
**Création des tables et schémas nécessaires**
`UsersHub-authentification-module` s'appuit sur un schéma PostgreSQL nommé `utilisateurs`. Pour créer ce dernier et l'ensemble des tables nécessaires, on utilise `alembic`. Alembic est une librairie Python de versionnage de base de données. Chaque modification sur la base de données est décrite par une révision (e.g. `/src/pypnusershub/migrations/versions/fa35dfe5ff27_create_utilisateurs_schema.py`). Cette dernière indique quelles sont les actions sur la base de données à effectuer pour:
- passer à la révision suivante -> méthode `upgrade()`
- mais aussi pour revenir à la précédente -> méthode `downgrade()`.
Avant de lancer la création du schéma, indiquer la nouvelle url de connexion à votre BDD dans la variable `sqlalchemy.url` dans le fichier `alembic.ini`.
```ini
sqlalchemy.url = postgresql://parcnational:<mot_de_passe>@localhost:5432/db_name
```
Une fois modifié, lancer la commande suivante pour remplir la base de données:
```sh
alembic upgrade utilisateurs@head
```
**N.B.** Si vous utilisez `alembic` dans votre projet, il est possible d'accéder aux révisions de `UsersHub-authenfication-module` à l'aide de la variable [`alembic.script_location`](https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file).
**Remplissage des tables pour son application**
Pour utiliser votre application avec les `UsersHub-authentification-module`, il est nécessaire :
- D'avoir (au moins) un utilisateur dans la table `t_roles`
- Enregister votre application dans `t_applications`
- Avoir au moins un profil de permissions dans `t_profils`
- Associer les profils crées à votre application dans `cor_profil_for_app`
- Enfin, pour associer votre utilisateur au couple --profil, app--, remplisser la table `cor_role_app_profil` (Nécessaire pour utiliser le décorateur `pypnusershub.decorators.check_auth`)
Ci-dessous, un exemple minimal d'une migration _alembic_ pour utiliser `UsersHub-authentification-module` pour votre application
```python
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = "id_revision"
down_revision = "id_revision_precedente"
branch_labels = "branch_votre_application"
depends_on = None
def upgrade():
op.execute(
"""
INSERT INTO utilisateurs.t_roles (id_role, nom_role, desc_role)
VALUES (1, 'Grp_admin', 'Role qui fait des trucs');
INSERT INTO utilisateurs.t_applications (id_application,code_application, nom_application,desc_application, id_parent)
VALUES (250,'APP','votre_application','Application qui fait des trucs', NULL);
INSERT INTO utilisateurs.t_profils (id_profil, code_profil, nom_profil, desc_profil)
VALUES (15, 15, 'votre_profil', 'Profil qui fait des trucs');
INSERT INTO utilisateurs.cor_profil_for_app (id_profil, id_application)
VALUES (1, 250);
INSERT INTO utilisateurs.cor_role_app_profil (id_role, id_application, id_profil,is_default_group_for_app)
VALUES (1, 250, 15, true);
"""
)
def downgrade():
op.execute(
"""
DELETE FROM utilisateurs.cor_role_app_profil cor
USING
utilisateurs.t_roles r,
utilisateurs.t_applications a,
utilisateurs.t_profils p
WHERE
cor.id_role = r.id_role
AND cor.id_application = a.id_application
AND cor.id_profil = p.id_profil
AND r.nom_role = 'Grp_admin'
AND a.code_application = 'APP'
AND p.code_profil = 15
"""
)
op.execute(
"""
DELETE FROM utilisateurs.cor_profil_app
WHERE id_profil = 15
AND id_application = 250;
"""
)
op.execute(
"""
DELETE FROM utilisateurs.t_profils
WHERE code_profil = 15;
"""
)
op.execute(
"""
DELETE FROM utilisateurs.t_applications
WHERE code_application = 'APP';
"""
)
```
## Utilisation de l'API
### Ajout des routes
Les routes déclarées par le _blueprint_ de `UsersHub-authentification-module` sont automatiquement ajoutés dans le blueprint de votre application lors de l'appel de la méthode `init_app()` de l'object `AuthManager`.
### Routes définies par UsersHub-authentification module
Les routes suivantes sont implémentés dans `UsersHub-authentification-module`:
| Route URI | Action | Paramètres | Retourne |
| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- | -------------------------------- |
| `/providers` | Retourne l'ensemble des fournisseurs d'identités activés | NA | |
| `/get_current_user` | Retourne les informations de l'utilisateur connecté | NA | {user,expires,token} |
| `/login/<provider>` | Connecte un utilisateur avec le provider <provider> | Optionnel({user,password}) | {user,expires,token} ou redirect |
| `/public_login` | Connecte l'utilisateur permettant l'accès public à votre application | NA | {user,expires,token} |
| `/logout` | Déconnecte l'utilisateur courant | NA | redirect |
| `/authorize` | Connecte un utilisateur à l'aide des infos retournées par le fournisseurs d'identités (Si redirection vers un portail de connexion par /login) | {data} | redirect |
### Méthodes définies dans le module
- `connect_admin()` : décorateur pour la connexion d’un utilisateur type admin a une appli ici usershub. Paramètres à renseigner dans la configuration.
- `post_usershub()` : route générique pour appeler les route usershub en tant qu'administrateur de l'appli en cours.
- `insert_or_update_role` : méthode pour insérer ou mettre à jour un utilisateur.
### Changement du prefix d'accès aux routes de UsersHub-authentification-module
Par défaut, les routes sont accessibles depuis le préfixe `/auth/`. Si vous voulez changer cela, il suffit de modifier le paramètre `prefix` de l'appel de la méthode `AuthManager.init_app()` :
```python
auth_manager.init_app(app, prefix="/authentification", providers_declaration=providers_config)
```
## Connexion à l'aide de fournisseurs d'identités extérieurs
Depuis la version 3.0, il est possible d'ajouter la possibilité de se connecter à des fournisseurs d'identités externes utilisant d'autres protocoles de connexion : OpenID, OpenID Connect, CAS (INPN), etc. ...
### Utiliser les protocoles de connexions existant
Lors de l'appel de `AuthManager.init_app`, il faut indiquer les configurations des différents fournisseurs d'identités sur lesquels on souhaite se connecter dans le paramètre `providers_declaration`.
Pour chaque configuration, on doit déclarer :
- le chemin vers la classe Python `module` déclarant le protocole de connexion
- son identifiant unique `id_provider`
- et les variables de configurations propre au protocole de connexion
```python
from flask import Flask
from pypnusershub.auth import auth_manager
app = Flask(__name__) # Instantiate a Flask application
app.config["URL_APPLICATION"] = "/" # Home page of your application
providers_config = # Declare identity providers used to log into your app
[
# Default identity provider (comes with UH-AM)
{
"module" : "pypnusershub.auth.providers.default.LocalProvider",
"id_provider":"local_provider"
},
# Other identity provider
{
"module": "pypnusershub.auth.providers.openid_provider.OpenIDProvider",
"id_provider":"open_id_1",
"ISSUER":"http://<realmKeycloak>",
"CLIENT_ID":"secret",
"CLIENT_SECRET":"secret"
}
# you can add other identity providers that works with OpenID protocol (and many others !)
]
auth_manager.init_app(app,providers_declaration=providers_config)
if __name__ == "__main__":
app.run(host="0.0.0.0",port=5200)
```
Pour lancer la connexion sur un provider en particulier, il suffit d'appeler la route `/login/<id_provider>`.
### Paramètres de configurations des protocoles de connexions inclus
**OpenID et OpenIDConnect**.
- `group_claim_name` (string) : nom du champs retournée par le fournisseur d'identités dans lequel se trouve la liste de groupes auquel l'utilisateur appartient (par défaut : "groups").
- `ISSUER` (string) : URL du fournisseur d'identités
- `CLIENT_ID` (string) : Identifiant publique de l'application auprès du fournisseur d'identités.
- `CLIENT_SECRET` (string) : Clé secrete connue uniquement par l'application et le fournisseur d'identités.
**UsersHub-authentification-module**
- `login_url` : URL absolue vers la route `/auth/login` de l'application Flask.
- `logout_url` : URL absolue vers la route `/auth/login` de l'application Flask.
**CAS INPN**
- `URL_LOGIN` et `URL_LOGOUT` (string) : route de déconnexion de l'API de l'INPN (par défaut https://inpn.mnhn.fr/auth/login et https://inpn.mnhn.fr/auth/logout)
- `URL_VALIDATION` (string) : URL qui permet de valider le token renvoyer après l'authentification de l'utilisateur sur le portail de l'INPN (par défaut: https://inpn.mnhn.fr/auth/serviceValidate").
- `URL_INFO` (string) : A l'aide des infos retournées par `URL_VALIDATION`, permet de récupérer les informations d'un utilisateurs (par défaut: https://inpn.mnhn.fr/authentication/information).
- `WS_ID` et `WS_PASSWORD` (string): identifiant et mot de passe permettant d'accéder au service accessible sur `URL_INFO`.
- `USERS_CAN_SEE_ORGANISM_DATA` (boolean): indique si l'utilisateur connecté peut voir les données de son organisme (par défaut: false).
- `ID_USER_SOCLE_1` et `ID_USER_SOCLE_2` : `ID_USER_SOCLE_1` indique le groupe dans l'instance GeoNature qui permet à l'utilisateur de voir les données de son organisme. Dans le cas contraire, il est associé au groupe indiqué dans `ID_USER_SOCLE_2`.
### Ajouter son propre protocole de connexion
L'ensemble des protocoles de connexion dans `UsersHub-authentification-module` n'est pas exhaustif et peut dans certains cas ne pas convenir à votre contexte. Pas de panique ! Il est possible de déclarer son propre protocole à l'aide de la classe `Authentication` comme dans l'exemple ci-dessous :
```python
from marshmallow import Schema, fields
from typing import Any, Optional, Tuple, Union
from pypnusershub.auth import Authentication, ProviderConfigurationSchema
from pypnusershub.db import models, db
from flask import Response
class NEW_PROVIDER(Authentication):
is_external = True # go through an external connection portal
def authenticate(self, *args, **kwargs) -> Union[Response, models.User]:
pass # return a User or a Flask redirection to the login portal of the provider
def authorize(self):
pass # must return a User
def revoke(self):
pass # if specific action have to be made when logout
def configure(self, configuration: Union[dict, Any]):
pass # Indique la configuration d'un fournisseur d'identités
```
Un **protocole de connexion** est défini par 5 méthodes et plusieurs attributs.
Les attributs sont les suivants
- L'attribut `id_provider` indique l'identifiant de l'instance du provider.
- Les attributs `logo` et `label` sont destinés à l'interface utilisateur.
- L'attribut `is_external` spécifie si le provider permet de se connecter à une autre application Flask utilisant `UsersHub-authentification-module` ou à un fournisseur d'identité qui requiert une redirection vers une page de login.
- L'attribut `login_url` et `logout_url`, si le protocole de connexion nécessite une redirection
- L'attribut `group_mapping` contient le mapping entre les groupes du fournisseurs d'identités et celui de votre instance de GeoNature.
Les méthodes sont les suivantes :
- `authenticate`: Lancée sur la route `/auth/login`, elle récupère les informations du formulaire de login et retourne un objet `User`. Si le protocole de connexion doit rediriger l'utilisateur vers un portail, alors authenticate retourne une `flask.Response` qui redirige vers ce dernier.
- `authorize`: Cette méthode est lancée par la route `/auth/authorize` qui récupère les informations renvoyés par le fournisseur d'identités après la connexions sur le portail.
- `configure(self, configuration: Union[dict, Any])`: Permet de récupérer et d'utiliser les variables présentes dans le fichier de configuration. Il est possible aussi de valider les résultats à l'aide d'un schéma `marshmallow`
- `revoke()`: Permet de spécifier un fonctionnement spécifique lors de la déconnexion d'un utilisateur.
## Schéma de la base de données
> [!IMPORTANT]
>
> #### Concepts essentiels
>
> **Profil**: Concept associé à un nom et à un niveau de permission.
> **Provider**: Concept associé à celui de fournisseurs d'identités. Service (Google, INPN, ORCID,...) qui permet de s’identifier et qui utilise un protocole de connexion (e.g. _OAuth_)
> **Listes**: Groupe d'utilisateurs
### Structure de la base
Le diagramme ci-dessous décrit les différentes tables du schéma `utilisateurs`, leurs attributs et leur relations.
```mermaid
classDiagram
class t_roles {
-id_role: int
-uuid_role
-identifiant: str
-nom_role: str
-prenom_role: str
-pass: str
-pass_plus: str
-groupe: bool
-id_organisme: int
-remarques: str
-champs_addi: dict
-date_insert: datetime
-date_update: datetime
-active: bool
}
class temp_users{
-id_temp_user: int
-token_role: str
-identifiant: str
-nom_role: str
-prenom_role: str
-password: str
-pass_md5: str
-groupe: bool
-id_organisme: int
-remarques: str
-champs_addi: dict
-date_insert: datetime
-date_update: datetime
}
class bib_organismes {
-id_organisme: int
-uuid_organisme: UUID
-nom_organisme: str
-adresse_organisme: str
-cp_organisme: str
-ville_organisme: str
-tel_organisme: str
-fax_organisme: str
-email_organisme: str
-url_organisme: str
-url_logo: str
-id_parent: int
-additional_data: dict
}
class t_profils {
-id_profil: int
-code_profil: str
-nom_profil: str
-desc_profil: str
}
class t_listes{
-id_liste:int
-code_list:str(20)
-nom_liste:str(50)
-desc_liste:str
}
class t_applications {
-id_application: int
-code_application: str(20)
-nom_application: str(50)
-desc_application: str
-id_parent: int
}
class t_providers {
-id_provider:int
-name:str
-url:str
}
class cor_role_liste{
-id_role:int
-id_liste:int
}
class cor_roles{
-id_role_groupe:int
-id_role_utilisateur:int
}
class cor_profil_for_app{
-id_profil
-id_application
}
class cor_role_profil_app {
-id_role: int
-id_profil: int
-id_application: int
}
class cor_role_token{
-id_role:int
-token:str
}
class cor_role_provider{
-id_provider:int
-id_role:int
}
t_roles --* t_roles
t_roles --* bib_organismes
temp_users --* bib_organismes
bib_organismes *-- bib_organismes
cor_role_profil_app *-- t_applications
cor_role_profil_app *-- t_profils
cor_role_profil_app *-- t_roles
cor_roles *-- t_roles
cor_role_token *-- t_roles
cor_role_liste *-- t_roles
cor_role_liste *-- t_listes
cor_role_provider *-- t_roles
cor_role_provider *-- t_providers
cor_role_token *-- t_roles
cor_profil_for_app *-- t_profils
cor_profil_for_app *-- t_applications
```
**Fonctions des tables**
| Nom table | Description |
| ------------------- | -------------------------------------------------------------------------------------------- |
| bib_organismes | Contient les organismes |
| t_roles | Contient les utilisateurs |
| t_profils | Permet de définir les profils de permissions |
| t_providers | Contient les fournisseurs d'identités dans l'applications |
| t_applications | Liste les applications qui utilisent UsersHub-authentification-module |
| temp_users | Permet de créer des utilisateurs temporaires (en attente de validation par l'administrateur) |
| cor_profil_for_app | Permet d'attribuer et limiter les profils disponibles pour chacune des applications |
| cor_role_app_profil | Cette table permet d'associer des utilisateurs à des profils par application |
| cor_role_list | Cette table permet d'associer des utilisateurs à des listes d'utilisateurs |
| cor_role_provider | Cette table permet d'associer des utilisateurs à des fournisseurs d'identités |
| cor_role_token | Permet d'associer des utilisateurs à des tokens |
| cor_roles | Permet d'associer des utilisateurs entre eux (groupes et utilisateurs) |
## Commandes Flask
Il est possible d'exécuter certaines opérations depuis la ligne de commande `flask user` :
- `add [--group <nom_role du groupe>] <username> <password>` : ajout d'un utilisateur
- `remove <username>`: suppression d'un utilisateur
- `change_password <username>` : modification du mot de passe d'un utilisateur
| text/markdown | null | null | Parcs nationaux des Écrins et des Cévennes | geonature@ecrins-parcnational.fr | null | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.11",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Operatin... | [] | https://github.com/PnX-SI/UsersHub-authentification-module | null | null | [] | [] | [] | [
"authlib",
"bcrypt",
"flask-sqlalchemy",
"flask>=3",
"flask-login",
"psycopg2",
"requests",
"sqlalchemy<2,>=1.4",
"flask-marshmallow",
"marshmallow-sqlalchemy",
"alembic",
"xmltodict",
"utils-flask-sqlalchemy<1.0,>=0.4.5",
"pytest; extra == \"tests\"",
"pytest-flask; extra == \"tests\""
... | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T14:34:36.423470 | pypnusershub-3.2.0.tar.gz | 68,824 | b9/a1/2d1fe7dab388c72534799b6e0923ec5cd314166248b33a2ef597a705f770/pypnusershub-3.2.0.tar.gz | source | sdist | null | false | 61be2e69812b82657f9cc922865dfc88 | 0cbc4af2698692c7c6bd04b4b7992c6f20279821b72695d0e0294dfa05a2a43c | b9a12d1fe7dab388c72534799b6e0923ec5cd314166248b33a2ef597a705f770 | null | [
"LICENSE"
] | 271 |
2.4 | wshawk | 3.0.0 | Professional WebSocket security scanner with real vulnerability verification, session hijacking tests, and CVSS scoring | # WSHawk v2.0 - Professional WebSocket Security Scanner
# SECURITY WARNING: FAKE VERSIONS CIRCULATING
> **PLEASE READ CAREFULLY:**
> Fake versions of WSHawk are being distributed on third-party download sites and linked in social media posts (e.g., LinkedIn). These versions may contain **MALWARE**.
>
> **OFFICIAL SOURCES ONLY:**
> - **Official Website:** [`https://wshawk.rothackers.com`](https://wshawk.rothackers.com)
> - **GitHub:** [`https://github.com/noobforanonymous/wshawk`](https://github.com/noobforanonymous/wshawk)
> - **PyPI:** `pip install wshawk`
> - **Docker:** `docker pull rothackers/wshawk` or `ghcr.io/noobforanonymous/wshawk`
>
> **DO NOT DOWNLOAD** from any other website. If you see "WSHawk" on a "software download" site, it is likely fake/malicious.
[](https://www.python.org/downloads/)
[](https://badge.fury.io/py/wshawk)
[](https://opensource.org/licenses/MIT)
[](https://playwright.dev/)
[](https://github.com/noobforanonymous/wshawk)
**WSHawk v2.0** is a production-grade WebSocket security scanner with advanced features including real vulnerability verification, dynamic mutation, and comprehensive session security testing. It also includes a **Persistent Web GUI** for dashboarding and history.
## Why WSHawk?
WSHawk is the only open-source WebSocket scanner that provides:
- **Smart Payload Evolution** - Adaptive feedback-driven mutation engine
- **Hierarchical Configuration** - `wshawk.yaml` with env var secret resolution
- **Persistent Web GUI** - Dashboard with SQLite history and password auth
- **Enterprise Integrations** - Auto-push to **Jira**, **DefectDojo**, and **Webhooks**
- **Real browser XSS verification** (Playwright) - Not just pattern matching
- **Blind vulnerability detection** via OAST - Finds XXE, SSRF that others miss
- **Session hijacking analysis** - 6 advanced session security tests
- **WAF-aware payload mutation** - Dynamic evasion techniques
- **CVSS-based professional reporting** - Industry-standard risk assessment
## Features
- **22,000+ Attack Payloads** - Comprehensive vulnerability coverage
- **Real Vulnerability Verification** - Confirms exploitability, not just reflection
- **Playwright XSS Verification** - Actual browser-based script execution testing
- **OAST Integration** - Detects blind vulnerabilities (XXE, SSRF)
- **Session Hijacking Tests** - Token reuse, impersonation, privilege escalation
- **Advanced Mutation Engine** - WAF bypass with 8+ evasion strategies
- **CVSS v3.1 Scoring** - Automatic vulnerability risk assessment
- **Professional HTML Reports** - Screenshots, replay sequences, traffic logs
- **Adaptive Rate Limiting** - Server-friendly scanning
### Vulnerability Detection
SQL Injection • XSS • Command Injection • XXE • SSRF • NoSQL Injection • Path Traversal • LDAP Injection • SSTI • Open Redirect • Session Security Issues
## Installation
### Option 1: pip (Recommended)
```bash
pip install wshawk
# Optional: For browser-based XSS verification
playwright install chromium
```
### Option 2: Docker
```bash
# From Docker Hub
docker pull rothackers/wshawk:latest
# Or from GitHub Container Registry
docker pull ghcr.io/noobforanonymous/wshawk:latest
# Run WSHawk
docker run --rm rothackers/wshawk ws://target.com
# Defensive validation
docker run --rm rothackers/wshawk wshawk-defensive ws://target.com
```
See [Docker Guide](docs/DOCKER.md) for detailed usage.
## Quick Start
WSHawk provides **4 easy ways** to scan WebSocket applications:
### Method 1: Quick Scan (Fastest)
```bash
wshawk ws://target.com
```
### Method 2: Interactive Menu (User-Friendly)
```bash
wshawk-interactive
```
### Method 3: Advanced CLI (Full Control)
```bash
# Basic scan
wshawk-advanced ws://target.com
# With Smart Payloads and Playwright verification
wshawk-advanced ws://target.com --smart-payloads --playwright --full
```
### Method 4: Web Management Dashboard (GUI)
```bash
# Launch the persistent web dashboard
wshawk --web
```
Best for teams requiring scan history, visual progress tracking, and professional report management.
## 🖥️ Web Management Dashboard
WSHawk v2.0 introduces a persistent, secure web-based dashboard for managing all your WebSocket security assessments.
### Launching the GUI
```bash
wshawk --web --port 5000 --host 0.0.0.0
```
### Authentication
For production security, the Web GUI is protected by a password. Set it using an environment variable:
```bash
export WSHAWK_WEB_PASSWORD='your-strong-password'
wshawk --web
```
*Note: If no password is set, the dashboard will run in open mode (only recommended for local testing).*
### Features
| Feature | Description |
|---------|-------------|
| **Persistent History** | All scans are saved to a local SQLite database (`scans.db`). |
| **Visual Progress** | Real-time scan status and vulnerability counters. |
| **Interactive Reports** | View, delete, and manage comprehensive HTML reports in-browser. |
| **API Key Support** | Programmatic access via `--api-key` or `WSHAWK_API_KEY`. |
## ⚙️ Hierarchical Configuration (`wshawk.yaml`)
WSHawk now supports a professional configuration system. Generate a template to get started:
```bash
python3 -m wshawk.config --generate
```
Rename `wshawk.yaml.example` to `wshawk.yaml`. You can resolve secrets from environment variables or files:
```yaml
integrations:
jira:
api_token: "env:JIRA_TOKEN" # Fetched from environment
project: "SEC"
```
## Command Comparison
| Feature | `wshawk` | `wshawk-interactive` | `wshawk-advanced` | `wshawk --web` |
|---------|----------|----------------------|-------------------|----------------|
| Ease of Use | High | High | Medium | **Highest** |
| Persistence | No | No | No | **Yes (SQLite)** |
| Auth Support | No | No | No | **Yes (SHA-256)** |
| Best For | Automation | Learning | Power Users | **Teams / SOC** |
## What You Get
All methods include:
- Real vulnerability verification (not just pattern matching)
- 22,000+ attack payloads
- Advanced mutation engine with WAF bypass
- CVSS v3.1 scoring for all findings
- Session hijacking tests (6 security tests)
- Professional HTML reports
- Adaptive rate limiting
- OAST integration for blind vulnerabilities
- Optional Playwright for browser-based XSS verification
## Output
WSHawk generates comprehensive HTML reports with:
- CVSS v3.1 scores for all vulnerabilities
- Screenshots (for XSS browser verification)
- Message replay sequences
- Raw WebSocket traffic logs
- Server fingerprints
- Actionable remediation recommendations
Reports saved as: `wshawk_report_YYYYMMDD_HHMMSS.html`
## Advanced Options
```bash
wshawk-advanced --help
Options:
--playwright Enable browser-based XSS verification
--rate N Set max requests per second (default: 10)
--full Enable ALL features
--no-oast Disable OAST testing
```
## Defensive Validation (NEW in v2.0.4)
WSHawk now includes a **Defensive Validation Module** designed for blue teams to validate their security controls.
```bash
# Run defensive validation tests
wshawk-defensive ws://your-server.com
```
### What It Tests
**1. DNS Exfiltration Prevention**
- Validates if DNS-based data exfiltration is blocked
- Tests egress filtering effectiveness
- Detects potential APT-style attack vectors
**2. Bot Detection Effectiveness**
- Tests if anti-bot measures detect headless browsers
- Validates resistance to evasion techniques
- Identifies gaps in bot protection
**3. CSWSH (Cross-Site WebSocket Hijacking)**
- Tests Origin header validation (216+ malicious origins)
- Validates CSRF token requirements
- Critical for preventing session hijacking
**4. WSS Protocol Security Validation**
- TLS version validation (detects deprecated SSLv2/v3, TLS 1.0/1.1)
- Weak cipher suite detection (RC4, DES, 3DES)
- Certificate validation (expiration, self-signed, chain integrity)
- Forward secrecy verification (ECDHE, DHE)
- Prevents MITM and protocol downgrade attacks
### Use Cases
- Validate security controls before production deployment
- Regular security posture assessment
- Compliance and audit requirements
- Blue team defensive capability testing
See [Defensive Validation Documentation](docs/DEFENSIVE_VALIDATION.md) for detailed usage and remediation guidance.
## Documentation
- [Getting Started Guide](docs/getting_started.md)
- [Advanced Usage](docs/advanced_usage.md)
- [Vulnerability Details](docs/vulnerabilities.md)
- [Session Security Tests](docs/session_tests.md)
- [Mutation Engine](docs/mutation_engine.md)
- [Architecture](docs/architecture.md)
## Python API
For integration into custom scripts:
```python
import asyncio
from wshawk.scanner_v2 import WSHawkV2
scanner = WSHawkV2("ws://target.com")
scanner.use_headless_browser = True
scanner.use_oast = True
asyncio.run(scanner.run_heuristic_scan())
```
See [Advanced Usage](docs/advanced_usage.md) for more examples.
## Responsible Disclosure
WSHawk is designed for:
- Authorized penetration testing
- Bug bounty programs
- Security research
- Educational purposes
**Always obtain proper authorization before testing.**
## License
MIT License - see [LICENSE](LICENSE) file
## Author
**Regaan** (@noobforanonymous)
## Contributing
Contributions welcome! See [CONTRIBUTING.md](CONTRIBUTING.md)
## Legal Disclaimer
**WSHawk is designed for security professionals, researchers, and developers for authorized testing only.**
- **Usage:** You must have explicit permission from the system owner before scanning.
- **Liability:** The author (Regaan) is **NOT** responsible for any damage caused by the misuse of this tool.
- **Malware:** WSHawk is a security scanner, NOT malware. Any repackaged version found on third-party sites containing malicious code is **NOT** associated with this project.
By using WSHawk, you agree to these terms and use it at your own risk.
## Support
- **Issues:** [GitHub Issues](https://github.com/noobforanonymous/wshawk/issues)
- **Documentation:** [docs/](docs/)
- **Examples:** [examples/](examples/)
- **Email:** support@rothackers.com
---
**WSHawk v2.0** - Professional WebSocket Security Scanner
*Built for the security community*
| text/markdown | Regaan | null | null | null | null | websocket, security, scanner, penetration-testing, bug-bounty, vulnerability, xss, sqli, session-hijacking, cvss, playwright, oast, waf-bypass | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Information Technology",
"Intended Audience :: Developers",
"Topic :: Security",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Pytho... | [] | https://github.com/noobforanonymous/wshawk | null | >=3.8 | [] | [] | [] | [
"websockets>=12.0",
"playwright>=1.40.0",
"aiohttp>=3.9.0",
"PyYAML>=6.0",
"flask>=3.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/noobforanonymous/wshawk",
"Bug Reports, https://github.com/noobforanonymous/wshawk/issues",
"Source, https://github.com/noobforanonymous/wshawk"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:33:58.224587 | wshawk-3.0.0.tar.gz | 321,480 | e8/5f/30a5acf2eb51060d4ecdf517ed12ff28471d9423283ba098a8a31b44e5cf/wshawk-3.0.0.tar.gz | source | sdist | null | false | d5370a53f79dc77248b2daed7f5dd3ba | aea5a1911b18585fc87d5d616c014125cf0be60107b40ee297f27f4e9a177dc0 | e85f30a5acf2eb51060d4ecdf517ed12ff28471d9423283ba098a8a31b44e5cf | null | [
"LICENSE"
] | 266 |
2.4 | openedx-learning | 0.33.1 | Open edX Learning Core (and Tagging) |
**This package has been renamed to `openedx-core <https://pypi.org/project/openedx-core>`_!**
| null | David Ormsbee | dave@axim.org | null | null | AGPL 3.0 | Python edx | [
"Development Status :: 7 - Inactive",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.2",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Natural Language :: English",
"Programming Language :: Pyth... | [] | https://github.com/openedx/openedx-learning | null | >=3.11 | [] | [] | [] | [
"openedx-core==0.34.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:33:29.418744 | openedx_learning-0.33.1.tar.gz | 1,742 | bc/64/fd7f0f4ce487967454c7423d46ed200941b5f02f54ccdca15eefababe82c/openedx_learning-0.33.1.tar.gz | source | sdist | null | false | 1b03e3b448ae987a34037118113a16db | b033dec04de66248a851da51f8768515fc3440db4134c0280e326fa1f57aa4cf | bc64fd7f0f4ce487967454c7423d46ed200941b5f02f54ccdca15eefababe82c | null | [] | 313 |
2.4 | quvis | 0.28.0 | Quvis - Quantum Circuit Visualization System | [](https://pypi.org/project/quvis/)
[](https://pypi.org/project/quvis/)
[](https://github.com/psf/black)
[](https://mypy-lang.org/)
# Quvis - Quantum Circuit Visualization Platform
Quvis is a 3D quantum circuit visualization platform for logical and compiled circuits.
## 🚀 Quick Start
### Interactive Playground (Web App)
Run the interactive web playground locally:
```bash
git clone https://github.com/alejandrogonzalvo/quvis.git
cd quvis
pip install poetry
poetry install
npm install
./scripts/start-dev.sh
```
Open http://localhost:5173 in your browser.
## Installation
### Option 1: Install from PyPI (Recommended)
```bash
pip install quvis
```
### Option 2: Install from Source (Development)
```bash
git clone https://github.com/your-repo/quvis.git
cd quvis
poetry install
```
### Prerequisites
- Python 3.12+
- Node.js 16+ (for web interface)
- npm or yarn (for frontend dependencies)
### Running Examples
After installation, you can run the examples directly:
```bash
python examples/library_usage.py
```
## **Usage**
### Basic Usage
```python
from quvis import Visualizer
from qiskit import QuantumCircuit
# Create visualizer
quvis = Visualizer()
# Add any quantum circuit
circuit = QuantumCircuit(4)
circuit.h(0)
circuit.cx(0, 1)
circuit.cx(1, 2)
circuit.cx(2, 3)
# Add and visualize - opens your browser with interactive 3D view!
quvis.add_circuit(circuit, algorithm_name="Bell State Chain")
quvis.visualize()
```
### Multi-Circuit Comparison
```python
from quvis import Visualizer
from qiskit.circuit.library import QFT
from qiskit import transpile
quvis = Visualizer()
# Add logical circuit
logical_qft = QFT(4)
quvis.add_circuit(logical_qft, algorithm_name="QFT (Logical)")
# Add compiled circuit with hardware constraints
coupling_map = [[0, 1], [1, 2], [2, 3]]
compiled_qft = transpile(logical_qft, coupling_map=coupling_map, optimization_level=2)
quvis.add_circuit(
compiled_qft,
coupling_map={"coupling_map": coupling_map, "num_qubits": 4, "topology_type": "line"},
algorithm_name="QFT (Compiled)"
)
# Visualize both circuits with tabs - logical (green) vs compiled (orange)
quvis.visualize()
```
## 🤝 **Contributing**
See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## 📄 **License**
This project is licensed under the MIT License.
| text/markdown | Alejandro Gonzalvo Hidalgo | alejandro@gonzalvo.org | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"fastapi<0.116.0,>=0.115.0",
"ipykernel<7.0.0,>=6.29.5",
"numpy<2.0.0,>=1.24.0",
"pillow<12.0.0,>=11.3.0",
"pydantic<3.0.0,>=2.10.0",
"qiskit<3.0.0,>=2.1.0",
"uvicorn[standard]<0.33.0,>=0.32.0"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.12.12 Linux/6.14.0-1017-azure | 2026-02-18T14:33:27.628718 | quvis-0.28.0-py3-none-any.whl | 15,155 | 1e/ae/ab1621c64c2339c5514b6d59e50c98aa17459a0ec0f0de5f5bab0994215a/quvis-0.28.0-py3-none-any.whl | py3 | bdist_wheel | null | false | d0e1a01c37d57b5f154633200190bb54 | f665838f5977a5b9837c749c58f8965e65fabf5804ee9238fb73b9192c94a51b | 1eaeab1621c64c2339c5514b6d59e50c98aa17459a0ec0f0de5f5bab0994215a | null | [] | 226 |
2.4 | wiki2crm | 1.0.3 | Tools to convert Wikidata to CIDOC CRM (eCRM/LRMoo/INTRO) | # Wikidata to CIDOC CRM (wiki2crm)
💾 **Zenodo dump: [https://doi.org/10.5281/zenodo.17396140](https://doi.org/10.5281/zenodo.17396140)**
🧐 **Read more: Laura Untner: From Wikidata to CIDOC CRM: A Use Case Scenario for Digital Comparative Literary Studies. In: Journal of Open Humanities Data 11 (2025), pp. 1–15, DOI: [10.5334/johd.421](https://doi.org/10.5334/johd.421).**
---
This repository contains Python scripts that transform structured data from [Wikidata](https://www.wikidata.org/) into RDF using [CIDOC CRM](https://cidoc-crm.org/) (OWL version, [eCRM](https://erlangen-crm.org/docs/ecrm/current/)) and models based on CIDOC CRM: [LRMoo](https://repository.ifla.org/handle/20.500.14598/3677) and [INTRO](https://github.com/BOberreither/INTRO).
You can use the Python scripts in `src/wiki2crm` individually or with the provided Python package [wiki2crm](https://pypi.org/project/wiki2crm/).
The goal is to enable CIDOC CRM-based semantic enrichment from Wikidata and other linked data sources. The scripts also use [PROV-O](https://www.w3.org/TR/prov-o/) (`prov:wasDerivedFrom`) to link data back to Wikidata.
To improve inference capabilities, all ECRM classes and properties have been mapped to CIDOC CRM using `owl:equivalentClass` and `owl:equivalentProperty`. Also, all LRMoo classes and properties have been mapped to [FRBRoo](https://www.iflastandards.info/fr/frbr/frbroo) and [eFRBRoo](https://erlangen-crm.org/efrbroo).
Ontology versions used:
- Erlangen CRM 240307 (based on CIDOC CRM 7.1.3)
- LRMoo 1.1.1
- INTRO beta202506
---
The `authors`, `works` and `relations` modules model basic biographical, bibliographical, and intertextual information based on data from Wikidata and can be dynamically extended. The scripts generate—depending on the module—an ontology (`owl:Ontology`) for authors, works, or intertexts.
|  |
|:--:|
| Modules overview |
The `merge` module can be used to merge the outputted Turtle files.
The `map_and_align` module looks for more identifiers from [Schema.org](https://schema.org/), [DBpedia](https://www.dbpedia.org/), [GND](https://www.dnb.de/DE/Professionell/Standardisierung/GND/gnd_node.html), [VIAF](https://viaf.org/), [GeoNames](http://www.geonames.org/) and [Goodreads](https://www.goodreads.com/) and adds more ontology alignments mainly using [SKOS](http://www.w3.org/2004/02/skos/core#). The aligned ontologies are: [BIBO](http://purl.org/ontology/bibo/), [CiTO](http://purl.org/spar/cito/), [DC](http://purl.org/dc/terms/), [DoCo](http://purl.org/spar/doco/), [DraCor](http://dracor.org/ontology#), [FaBiO](http://purl.org/spar/fabio/), [FOAF](http://xmlns.com/foaf/0.1/), [FRBRoo](https://www.iflastandards.info/fr/frbr/frbroo), [GOLEM](https://ontology.golemlab.eu/), [Intertextuality Ontology](https://github.com/intertextor/intertextuality-ontology), [MiMoText](https://data.mimotext.uni-trier.de/wiki/Main_Page), OntoPoetry/POSTDATA ([core](https://raw.githubusercontent.com/linhd-postdata/core-ontology/refs/heads/master/postdata-core.owl) and [analysis](https://raw.githubusercontent.com/linhd-postdata/literaryAnalysis-ontology/refs/heads/master/postdata-literaryAnalysisElements.owl) modules), and the Ontologies of Under-Represented [Writers](https://purl.archive.org/urwriters) and [Books](https://purl.archive.org/urbooks).
The mappings and alignments are done separately so that the script can hopefully be more easily updated. It focuses specifically on those classes and properties that are important for the relations module.
|  |
|:--:|
| Workflow overview |
---
- 🪄 **Reality check**: These scripts are not magical. Data that is not available in Wikidata cannot appear in the triples.
- ⚠️ **Base URI:** All URIs currently use the `https://sappho-digital.com/` base. Please adapt this to your own environment as needed.
- 💡 **Reuse is encouraged**. The scripts are open for reuse. They are developed in the context of the project [Sappho Digital](https://sappho-digital.com/) by [Laura Untner](https://orcid.org/0000-0002-9649-0870). A reference to the project would be appreciated if you use or build on the scripts. You can also refer to »Laura Untner: From Wikidata to CIDOC CRM: A Use Case Scenario for Digital Comparative Literary Studies. In: Journal of Open Humanities Data 11 (2025), pp. 1–15, DOI: [10.5334/johd.421](https://doi.org/10.5334/johd.421)«, and use the DOI provided by Zenodo: [https://doi.org/10.5281/zenodo.17396141](https://doi.org/10.5281/zenodo.17396141).
---
## Requirements
You can install the package (and dependencies) directly:
```
pip install wiki2crm
# or from the repo root
# pip install .
# pip install -e .
```
Quick check:
```
wiki2crm --version
wiki2crm --help
wiki2crm --hello
```
To only install dependencies, run:
```
pip install rdflib requests tqdm pyshacl
```
---
## Usage
All commands are available as `wiki2crm <subcommand>` or equivalently `python3 -m wiki2crm <subcommand>`.
If no `--input` / `--output` flags are given, the tools use the files in the `examples` folder as fallback defaults.
### Commands
```
# explicit paths
wiki2crm authors \
--input path/to/authors.csv \
--output path/to/authors.ttl
# fallback to examples
wiki2crm authors
```
```
# explicit paths
wiki2crm works \
--input path/to/works.csv \
--output path/to/works.ttl
# fallback to examples
wiki2crm works
```
```
# explicit paths
wiki2crm relations \
--input path/to/works.csv \
--output path/to/relations.ttl
# fallback to examples
wiki2crm relations
```
```
# explicit paths
wiki2crm merge \
--authors examples/outputs/authors.ttl \
--works examples/outputs/works.ttl \
--relations examples/outputs/relations.ttl \
--output examples/outputs/all.ttl
# fallback to examples
wiki2crm merge
```
```
# explicit paths
wiki2crm map-align \
--input examples/outputs/all.ttl \
--output examples/outputs/all_mapped-and-aligned.ttl
# fallback to examples
wiki2crm map-align
```
You can always run the same via `python3 -m wiki2crm <subcommand>` if you prefer.
### Import
To use the package in your Python script, include:
```
from wiki2crm import authors, works, relations, merge, map_and_align
```
Then you can use, for example:
```
authors.main(["--input", "examples/inputs/authors.csv",
"--output", "examples/outputs/authors.ttl"])
```
---
## Validation
There are SHACL shapes for the authors, works, and relations modules.
Validation of the corresponding Turtle files is performed within the Python scripts.
If you want to manually validate the Turtle files, you can use pySHACL:
```
pyshacl -s path/to/shape.ttl -d path/to/data.ttl --advanced --metashacl -f table
```
This will produce a report indicating whether the Turtle file is valid according to the provided SHACL shapes.
Note: The SHACL shapes make use of the Sappho Digital namespace. Please adapt this to your own namespace if necessary.
---
## Acknowledgments
Special thanks to [Bernhard Oberreither](https://github.com/BOberreither) for feedback on the relations module, and [Lisa Poggel](https://github.com/lipogg) for feedback on the Python workflow.
---
<details>
<summary><h2>✍️ Authors Module (e/CRM)</h2></summary>
The [authors.py](https://github.com/laurauntner/wikidata-to-cidoc-crm/blob/main/src/wiki2crm/authors.py) script reads a list of Wikidata QIDs for authors from a CSV file and creates RDF triples using CIDOC CRM (eCRM, mapped to CRM). It models:
- `E21_Person` with:
- `E41_Appellation` (names, derived from labels)
- `E42_Identifier` (Wikidata QIDs, derived from given QIDs)
- `E67_Birth` and `E69_Death` events, linked to:
- `E53_Place` (birth places, derived from `wdt:P19`, and death places, derived from `wdt:P20`)
- `E52_Time-Span` (birth dates, derived from `wdt:P569`, and death dates, derived from `wdt:P570`)
- `E55_Type` (genders, derived from `wdt:P21`)
- `E36_Visual_Item` (visual representations) (image reference with Wikimedia `seeAlso`, derived from `wdt:P18`)

📎 A complete [visual documentation](https://github.com/laurauntner/wikidata-to-cidoc-crm/blob/main/docs/authors.png) of the authors data model is included in the `docs` folder.
<h3>Example Input</h3>
```csv
qids
Q469571
```
This is [Anna Louisa Karsch](https://www.wikidata.org/wiki/Q469571).
<h3>Example Output</h3>
Namespace declarations and mappings to CRM are applied but not shown in this exemplary output.
```turtle
<https://sappho-digital.com/person/Q469571> a ecrm:E21_Person ;
rdfs:label "Anna Louisa Karsch"@en ;
ecrm:P100i_died_in <https://sappho-digital.com/death/Q469571> ;
ecrm:P131_is_identified_by <https://sappho-digital.com/appellation/Q469571> ;
ecrm:P138i_has_representation <https://sappho-digital.com/visual_item/Q469571> ;
ecrm:P1_is_identified_by <https://sappho-digital.com/identifier/Q469571> ;
ecrm:P2_has_type <https://sappho-digital.com/gender/Q6581072> ;
ecrm:P98i_was_born <https://sappho-digital.com/birth/Q469571> ;
ecrm:P138i_has_representation <https://sappho-digital.com/visual_item/Q469571> ;
owl:sameAs <http://www.wikidata.org/entity/Q469571> .
<https://sappho-digital.com/appellation/Q469571> a ecrm:E41_Appellation ;
rdfs:label "Anna Louisa Karsch"@en ;
ecrm:P131i_identifies <https://sappho-digital.com/person/Q469571> ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q469571> .
<https://sappho-digital.com/identifier/Q469571> a ecrm:E42_Identifier ;
rdfs:label "Q469571" ;
ecrm:P1i_identifies <https://sappho-digital.com/person/Q469571> ;
ecrm:P2_has_type <https://sappho-digital.com/id_type/wikidata> .
<https://sappho-digital.com/id_type/wikidata> a ecrm:E55_Type ;
rdfs:label "Wikidata ID"@en ;
ecrm:P2i_is_type_of <https://sappho-digital.com/identifier/Q469571> .
<https://sappho-digital.com/birth/Q469571> a ecrm:E67_Birth ;
rdfs:label "Birth of Anna Louisa Karsch"@en ;
ecrm:P4_has_time-span <https://sappho-digital.com/timespan/17221201> ;
ecrm:P7_took_place_at <https://sappho-digital.com/place/Q659063> ;
ecrm:P98_brought_into_life <https://sappho-digital.com/person/Q469571> ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q469571> .
<https://sappho-digital.com/death/Q469571> a ecrm:E69_Death ;
rdfs:label "Death of Anna Louisa Karsch"@en ;
ecrm:P100_was_death_of <https://sappho-digital.com/person/Q469571> ;
ecrm:P4_has_time-span <https://sappho-digital.com/timespan/17911012> ;
ecrm:P7_took_place_at <https://sappho-digital.com/place/Q64> ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q469571> .
<https://sappho-digital.com/place/Q659063> a ecrm:E53_Place ;
rdfs:label "Gmina Skąpe"@en ;
ecrm:P7i_witnessed <https://sappho-digital.com/birth/Q469571> ;
owl:sameAs <http://www.wikidata.org/entity/Q659063> .
<https://sappho-digital.com/place/Q64> a ecrm:E53_Place ;
rdfs:label "Berlin"@en ;
ecrm:P7i_witnessed <https://sappho-digital.com/death/Q469571> ;
owl:sameAs <http://www.wikidata.org/entity/Q64> .
<https://sappho-digital.com/timespan/17221201> a ecrm:E52_Time-Span ;
rdfs:label "1722-12-01"^^xsd:date ;
ecrm:P4i_is_time-span_of <https://sappho-digital.com/birth/Q469571> .
<https://sappho-digital.com/timespan/17911012> a ecrm:E52_Time-Span ;
rdfs:label "1791-10-12"^^xsd:date ;
ecrm:P4i_is_time-span_of <https://sappho-digital.com/death/Q469571> .
<https://sappho-digital.com/gender/Q6581072> a ecrm:E55_Type ;
rdfs:label "female"@en ;
ecrm:P2_has_type <https://sappho-digital.com/gender_type/wikidata> ;
ecrm:P2i_is_type_of <https://sappho-digital.com/person/Q469571> ;
owl:sameAs <http://www.wikidata.org/entity/Q6581072> .
<https://sappho-digital.com/gender_type/wikidata> a ecrm:E55_Type ;
rdfs:label "Wikidata Gender"@en ;
ecrm:P2i_is_type_of <https://sappho-digital.com/gender/Q6581072> .
<https://sappho-digital.com/visual_item/Q469571> a ecrm:E36_Visual_Item ;
rdfs:label "Visual representation of Anna Louisa Karsch"@en ;
ecrm:P138_represents <https://sappho-digital.com/person/Q469571> ;
rdfs:seeAlso <http://commons.wikimedia.org/wiki/Special:FilePath/Karschin%20bild.JPG> ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q469571> .
```
</details>
---
<details>
<summary><h2>📚 Works Module (LRMoo/FRBRoo)</h2></summary>
The [works.py](https://github.com/laurauntner/wikidata-to-cidoc-crm/blob/main/src/wiki2crm/works.py) script reads a list of Wikidata QIDs for works from a CSV file and creates RDF triples using CIDOC CRM (eCRM, mapped to CRM) and LRMoo (mapped to FRBRoo). It models:
- `F1_Work` (abstract works) and `F27_Work_Creation` with:
- `E21_Person` (authors, derived from `wdt:P50`, see authors module)
- `F2_Expression` (realizations of abstract works) and `F28_Expression_Creation` with:
- `E52_Time-Span` (creation years, derived from `wdt:P571` or `wdt:P2754`)
- `E35_Title` (titles, derived from `wdt:P1476` or labels)
- `E42_Identifier` (Wikidata QIDs, derived from given QIDs)
- `E55_Type` (genres, derived from `wdt:P136`)
- `E73_Information_Object` (digital surrogates, derived from `wdt:P953`)
- `F3_Manifestation` (publications of expressions) and `F30_Manifestation_Creation` with:
- `E21_Person` (editors, derived from `wdt:P98`) with `E41_Appellation` (names, derived from labels)
- `E35_Title` (titles, only different if the text is part of another text (`wdt:P1433` or `wdt:P361`))
- `E52_Time-Span` (publication years, derived from `wdt:P577`)
- `E53_Place` (publication places, derived from `wdt:P291`)
- `E74_Group` (publishers, derived from `wdt:P123`)
- `F5_Item` (specific copies of manifestations) and `F32_Item_Production_Event`
Translators are not modeled per default, but the data model can, of course, be extended or adapted accordingly.

📎 A complete [visual documentation](https://github.com/laurauntner/wikidata-to-cidoc-crm/blob/main/docs/works.png) of the works data model is included in the `docs` folder.
<h3>Example Input</h3>
```csv
qids
Q1242002
```
(This is the tragedy [Sappho](https://www.wikidata.org/wiki/Q469571) written by Franz Grillparzer.)
<h3>Example Output</h3>
Namespace declarations and mappings to CRM, FRBRoo and eFRBRoo are applied but not shown in this exemplary output.
```turtle
<https://sappho-digital.com/work_creation/Q1242002> a lrmoo:F27_Work_Creation ;
rdfs:label "Work creation of Sappho"@en ;
ecrm:P14_carried_out_by <https://sappho-digital.com/person/Q154438> ;
lrmoo:R16_created <https://sappho-digital.com/work/Q1242002> ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q1242002> .
<https://sappho-digital.com/work/Q1242002> a lrmoo:F1_Work ;
rdfs:label "Work of Sappho"@en ;
ecrm:P14_carried_out_by <https://sappho-digital.com/person/Q154438> ;
lrmoo:R16i_was_created_by <https://sappho-digital.com/work_creation/Q1242002> ;
lrmoo:R19i_was_realised_through <https://sappho-digital.com/expression_creation/Q1242002> ;
lrmoo:R3_is_realised_in <https://sappho-digital.com/expression/Q1242002> .
<https://sappho-digital.com/person/Q154438> a ecrm:E21_Person ;
rdfs:label "Franz Grillparzer" ;
ecrm:P14i_performed <https://sappho-digital.com/manifestation_creation/Q1242002>,
<https://sappho-digital.com/work_creation/Q1242002> ;
owl:sameAs <http://www.wikidata.org/entity/Q154438> .
<https://sappho-digital.com/expression_creation/Q1242002> a lrmoo:F28_Expression_Creation ;
rdfs:label "Expression creation of Sappho"@en ;
ecrm:P14_carried_out_by <https://sappho-digital.com/person/Q154438> ;
ecrm:P4_has_time-span <https://sappho-digital.com/timespan/1817> ;
lrmoo:R17_created <https://sappho-digital.com/expression/Q1242002> ;
lrmoo:R19_created_a_realisation_of <https://sappho-digital.com/work/Q1242002> ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q1242002> .
<https://sappho-digital.com/timespan/1817> a ecrm:E52_Time-Span ;
rdfs:label "1817"^^xsd:gYear ;
ecrm:P4i_is_time-span_of <https://sappho-digital.com/expression_creation/Q1242002> .
<https://sappho-digital.com/expression/Q1242002> a lrmoo:F2_Expression ;
rdfs:label "Expression of Sappho"@en ;
ecrm:P102_has_title <https://sappho-digital.com/title/expression/Q1242002> ;
ecrm:P138i_has_representation <https://sappho-digital.com/digital/Q1242002> ;
ecrm:P1_is_identified_by <https://sappho-digital.com/identifier/Q1242002> ;
ecrm:P2_has_type <https://sappho-digital.com/genre/Q80930> ;
lrmoo:R17i_was_created_by <https://sappho-digital.com/expression_creation/Q1242002> ;
lrmoo:R3i_realises <https://sappho-digital.com/work/Q1242002> ;
lrmoo:R4i_is_embodied_in <https://sappho-digital.com/manifestation/Q1242002> ;
owl:sameAs <http://www.wikidata.org/entity/Q1242002> ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q1242002> .
<https://sappho-digital.com/title/expression/Q1242002> a ecrm:E35_Title ;
rdfs:label "Sappho"@de ;
ecrm:P102i_is_title_of <https://sappho-digital.com/expression/Q1242002> .
<https://sappho-digital.com/identifier/Q1242002> a ecrm:E42_Identifier ;
rdfs:label "Q1242002" ;
ecrm:P1i_identifies <https://sappho-digital.com/expression/Q1242002> ;
ecrm:P2_has_type <https://sappho-digital.com/id_type/wikidata> .
<https://sappho-digital.com/id_type/wikidata> a ecrm:E55_Type ;
rdfs:label "Wikidata ID"@en ;
ecrm:P2i_is_type_of <https://sappho-digital.com/identifier/Q1242002> ;
owl:sameAs <http://www.wikidata.org/wiki/Q43649390> .
<https://sappho-digital.com/genre/Q80930> a ecrm:E55_Type ;
rdfs:label "tragedy"@en ;
ecrm:P2_has_type <https://sappho-digital.com/genre_type/wikidata> ;
ecrm:P2i_is_type_of <https://sappho-digital.com/expression/Q1242002> ;
owl:sameAs <http://www.wikidata.org/entity/Q80930> .
<https://sappho-digital.com/genre_type/wikidata> a ecrm:E55_Type ;
rdfs:label "Wikidata Genre"@en ;
ecrm:P2i_is_type_of <https://sappho-digital.com/genre/Q80930> .
<https://sappho-digital.com/digital/Q1242002> a ecrm:E73_Information_Object ;
rdfs:label "Digital copy of Sappho"@en ;
ecrm:P138_represents <https://sappho-digital.com/expression/Q1242002> ;
rdfs:seeAlso <http://www.zeno.org/nid/20004898184> .
<https://sappho-digital.com/manifestation_creation/Q1242002> a lrmoo:F30_Manifestation_Creation ;
rdfs:label "Manifestation creation of Sappho"@en ;
ecrm:P14_carried_out_by <https://sappho-digital.com/person/Q154438>,
<https://sappho-digital.com/publisher/Q133849481> ;
ecrm:P4_has_time-span <https://sappho-digital.com/timespan/1819> ;
ecrm:P7_took_place_at <https://sappho-digital.com/place/Q1741> ;
lrmoo:R24_created <https://sappho-digital.com/manifestation/Q1242002> ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q1242002> .
<https://sappho-digital.com/publisher/Q133849481> a ecrm:E74_Group ;
rdfs:label "Wallishausser’sche Buchhandlung"@en ;
ecrm:P14i_performed <https://sappho-digital.com/manifestation_creation/Q1242002> ;
owl:sameAs <http://www.wikidata.org/entity/Q133849481> .
<https://sappho-digital.com/timespan/1819> a ecrm:E52_Time-Span ;
rdfs:label "1819"^^xsd:gYear ;
ecrm:P4i_is_time-span_of <https://sappho-digital.com/manifestation_creation/Q1242002> .
<https://sappho-digital.com/place/Q1741> a ecrm:E53_Place ;
rdfs:label "Vienna"@en ;
ecrm:P7i_witnessed <https://sappho-digital.com/manifestation_creation/Q1242002> ;
owl:sameAs <http://www.wikidata.org/entity/Q1741> .
<https://sappho-digital.com/manifestation/Q1242002> a lrmoo:F3_Manifestation ;
rdfs:label "Manifestation of Sappho"@en ;
ecrm:P102_has_title <https://sappho-digital.com/title/manifestation/Q1242002> ;
lrmoo:R24i_was_created_through <https://sappho-digital.com/manifestation_creation/Q1242002> ;
lrmoo:R27i_was_materialized_by <https://sappho-digital.com/item_production/Q1242002> ;
lrmoo:R4_embodies <https://sappho-digital.com/expression/Q1242002> ;
lrmoo:R7i_is_exemplified_by <https://sappho-digital.com/item/Q1242002> .
<https://sappho-digital.com/title/manifestation/Q1242002> a ecrm:E35_Title ;
rdfs:label "Sappho"@de ;
ecrm:P102i_is_title_of <https://sappho-digital.com/manifestation/Q1242002> .
<https://sappho-digital.com/item_production/Q1242002> a lrmoo:F32_Item_Production_Event ;
rdfs:label "Item production event of Sappho"@en ;
lrmoo:R27_materialized <https://sappho-digital.com/manifestation/Q1242002> ;
lrmoo:R28_produced <https://sappho-digital.com/item/Q1242002> .
<https://sappho-digital.com/item/Q1242002> a lrmoo:F5_Item ;
rdfs:label "Item of Sappho"@en ;
lrmoo:R28i_was_produced_by <https://sappho-digital.com/item_production/Q1242002> ;
lrmoo:R7_exemplifies <https://sappho-digital.com/manifestation/Q1242002> .
```
</details>
---
<details>
<summary><h2>🌐 Relations Module (INTRO)</h2></summary>
The [relations.py](https://github.com/laurauntner/wikidata-to-cidoc-crm/blob/main/src/wiki2crm/relations.py) script reads a list of Wikidata QIDs for works from a CSV file and creates RDF triples using INTRO, CIDOC CRM (eCRM, mapped to CRM) and LRMoo (mapped to FRBRoo). It models:
- Literary works (`F2_Expression`, see works module)
- linked to the Wikidata item via `owl:sameAs`
- Intertextual relations (`INT31_IntertextualRelation`) between expressions
- with `INT_Interpretation` instances linked to the Wikidata items of the expressions via `prov:wasDerivedFrom`
- derived from actualizations, citations, and optionally `wdt:P4969` (derivative work), `wdt:P144` (based on), `wdt:P5059` (modified version of) and `wdt:P941` (inspired by)
- References (`INT18_Reference`) for …
- persons: `E21_Person` with `E42_Identifier`, derived from `wdt:P180` (depicts), `wdt:P921` (main subject) and `wdt:P527` (has part(s)) for `wd:Q5` (human)
- places: `E53_Place` with `E42_Identifier`, derived from `wdt:P921` (main subject) for `wd:Q2221906` (geographical location)
- expressions: derived from `wdt:P921` (main subject) for given QIDs
- with actualizations (`INT2_ActualizationOfFeature`) of these references in specific expressions
- with `INT_Interpretation` linked to the Wikidata items of the expressions via `prov:wasDerivedFrom`
- Citations via `INT21_TextPassage` instances
- linked to the expressions
- derived from `wdt:P2860` (cites work) and `wdt:P6166` (quotes work) for given QIDs
- linked to the citing Wikidata item via `prov:wasDerivedFrom`
- Characters (`INT_Character`)
- linked to the Wikidata item via `owl:sameAs` and identified by `E42_Identifier`
- derived from `wdt:P674` (characters) or `wdt:P180` (depicts) and `wdt:P921` (main subject) if the item is `wd:Q3658341` (literary character) or `wd:Q15632617` (fictional human)
- optionally linked to a real Person (`E21_Person`)
- always with actualizations (`INT2_ActualizationOfFeature`) of these characters in specific expressions
- with `INT_Interpretation` linked to the Wikidata items of the expressions via `prov:wasDerivedFrom`
- Motifs, Plots and Topics
- all linked to Wikidata items via `owl:sameAs` and identified by `E42_Identifier`
- `INT_Motif`: derived from `wdt:P6962` (narrative motif)
- `INT_Plot`: derived from `wdt:P921` (main subject) for `wd:Q42109240` (stoff)
- `INT_Topic`: derived from `wdt:P921` (main subject) for `wd:Q26256810` (topic)
- with `INT2_ActualizationOfFeature` instances for specific expressions
- with interpretations (`INT_Interpretation`) linked to the Wikidata items of the expressions via `prov:wasDerivedFrom`
Please note that subclasses and subproperties are also queried.
The current data model focuses exclusively on textual works, but—based on INTRO—it could be extended to cover intermedial and interpictorial aspects as well. It also only models intertextual relationships among the texts listed in the CSV file, i.e. it assumes you’re seeking intertexts of known works rather than exploring every possible intertext.
Please also note that all searches are strictly one-way: Work → Phenomenon.

📎 A complete [visual documentation](https://github.com/laurauntner/wikidata-to-cidoc-crm/blob/main/docs/relations.png) of the relations data model is included in the `docs` folder.
<h3>Example Input</h3>
```turtle
qids
Q1242002 # Franz Grillparzer’s "Sappho"
Q119292643 # Therese Rak’s "Sappho"
Q19179765 # Amalie von Imhoff’s "Die Schwestern von Lesbos"
Q120199245 # Adolph von Schaden’s "Die moderne Sappho"
```
<h3>Example Output</h3>
Namespace declarations and mappings to CRM, FRBRoo and eFRBRoo are applied but not shown in this exemplary output.
Please also note that the output is currently sparse because the relevant data in Wikidata is simply too limited. The script also remains fairly slow and should be tested (and possibly optimized) on larger data sets.
Further, it’s highly recommended to manually refine the generated triples afterward: INTRO provides very detailed means for recording literary-scholarly analyses as Linked Data, whereas this module captures only the basics.
```turtle
# Expressions
<https://sappho-digital.com/expression/Q1242002> a lrmoo:F2_Expression ;
rdfs:label "Expression of Sappho"@en ;
ecrm:P67i_is_referred_to_by <https://sappho-digital.com/actualization/work_ref/Q1242002_Q119292643> ;
owl:sameAs <http://www.wikidata.org/entity/Q1242002> ;
intro:R18_showsActualization <https://sappho-digital.com/actualization/character/Q17892_Q1242002>,
<https://sappho-digital.com/actualization/motif/Q165_Q1242002>,
<https://sappho-digital.com/actualization/person_ref/Q17892_Q1242002>,
<https://sappho-digital.com/actualization/place_ref/Q128087_Q1242002>,
<https://sappho-digital.com/actualization/plot/Q134285870_Q1242002>,
<https://sappho-digital.com/actualization/topic/Q10737_Q1242002> ;
intro:R30_hasTextPassage <https://sappho-digital.com/textpassage/Q1242002_Q119292643> ;
intro:R24i_isRelatedEntity <https://sappho-digital.com/relation/Q119292643_Q1242002>,
<https://sappho-digital.com/relation/Q120199245_Q1242002>,
<https://sappho-digital.com/relation/Q1242002_Q19179765> .
<https://sappho-digital.com/expression/Q119292643> a lrmoo:F2_Expression ;
rdfs:label "Expression of Sappho. Eine Novelle"@en ;
owl:sameAs <http://www.wikidata.org/entity/Q119292643> ;
intro:R18_showsActualization <https://sappho-digital.com/actualization/motif/Q165_Q119292643>,
<https://sappho-digital.com/actualization/person_ref/Q17892_Q119292643>,
<https://sappho-digital.com/actualization/plot/Q134285870_Q119292643>,
<https://sappho-digital.com/actualization/topic/Q10737_Q119292643>,
<https://sappho-digital.com/actualization/work_ref/Q1242002_Q119292643> ;
intro:R30_hasTextPassage <https://sappho-digital.com/textpassage/Q119292643_Q1242002> ;
intro:R24i_isRelatedEntity <https://sappho-digital.com/relation/Q119292643_Q1242002> .
<https://sappho-digital.com/expression/Q19179765> a lrmoo:F2_Expression ;
rdfs:label "Expression of Die Schwestern von Lesbos"@en ;
owl:sameAs <http://www.wikidata.org/entity/Q19179765> ;
intro:R18_showsActualization <https://sappho-digital.com/actualization/place_ref/Q128087_Q19179765> ;
intro:R24i_isRelatedEntity <https://sappho-digital.com/relation/Q1242002_Q19179765> .
<https://sappho-digital.com/expression/Q120199245> a lrmoo:F2_Expression ;
rdfs:label "Expression of Die moderne Sappho"@en ;
owl:sameAs <http://www.wikidata.org/entity/Q120199245> ;
intro:R18_showsActualization <https://sappho-digital.com/actualization/character/Q17892_Q120199245> ;
intro:R24i_isRelatedEntity <https://sappho-digital.com/relation/Q120199245_Q1242002> .
# Intertextual Relations
<https://sappho-digital.com/relation/Q120199245_Q1242002> a intro:INT31_IntertextualRelation ;
rdfs:label "Intertextual relation between Die moderne Sappho and Sappho"@en ;
intro:R21i_isIdentifiedBy <https://sappho-digital.com/actualization/interpretation/Q120199245_Q1242002> ;
intro:R22i_relationIsBasedOnSimilarity <https://sappho-digital.com/feature/character/Q17892> ;
intro:R24_hasRelatedEntity <https://sappho-digital.com/expression/Q1242002>,
<https://sappho-digital.com/expression/Q120199245>,
<https://sappho-digital.com/actualization/character/Q17892_Q120199245>,
<https://sappho-digital.com/actualization/character/Q17892_Q1242002> .
<https://sappho-digital.com/feature/interpretation/Q120199245_Q1242002> a intro:INT_Interpretation ;
rdfs:label "Interpretation of intertextual relation between Die moderne Sappho and Sappho"@en ;
intro:R17i_featureIsActualizedIn <https://sappho-digital.com/actualization/interpretation/Q120199245_Q1242002> .
<https://sappho-digital.com/actualization/interpretation/Q120199245_Q1242002> a intro:INT2_ActualizationOfFeature ;
rdfs:label "Interpretation of intertextual relation between Die moderne Sappho and Sappho"@en ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q120199245>,
<http://www.wikidata.org/entity/Q1242002> ;
intro:R17_actualizesFeature <https://sappho-digital.com/feature/interpretation/Q120199245_Q1242002> ;
intro:R21_identifies <https://sappho-digital.com/relation/Q120199245_Q1242002> .
<https://sappho-digital.com/relation/Q1242002_Q19179765> a intro:INT31_IntertextualRelation ;
rdfs:label "Intertextual relation between Sappho and Die Schwestern von Lesbos"@en ;
intro:R21i_isIdentifiedBy <https://sappho-digital.com/actualization/interpretation/Q1242002_Q19179765> ;
intro:R22i_relationIsBasedOnSimilarity <https://sappho-digital.com/feature/place_ref/Q128087> ;
intro:R24_hasRelatedEntity <https://sappho-digital.com/expression/Q1242002>,
<https://sappho-digital.com/expression/Q19179765>,
<https://sappho-digital.com/actualization/place_ref/Q128087_Q1242002>,
<https://sappho-digital.com/actualization/place_ref/Q128087_Q19179765> .
<https://sappho-digital.com/feature/interpretation/Q1242002_Q19179765> a intro:INT_Interpretation ;
rdfs:label "Interpretation of intertextual relation between Sappho and Die Schwestern von Lesbos"@en ;
intro:R17i_featureIsActualizedIn <https://sappho-digital.com/actualization/interpretation/Q1242002_Q19179765> .
<https://sappho-digital.com/actualization/interpretation/Q1242002_Q19179765> a intro:INT2_ActualizationOfFeature ;
rdfs:label "Interpretation of intertextual relation between Sappho and Die Schwestern von Lesbos"@en ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q1242002>,
<http://www.wikidata.org/entity/Q19179765> ;
intro:R17_actualizesFeature <https://sappho-digital.com/feature/interpretation/Q1242002_Q19179765> ;
intro:R21_identifies <https://sappho-digital.com/relation/Q1242002_Q19179765> .
<https://sappho-digital.com/relation/Q119292643_Q1242002> a intro:INT31_IntertextualRelation ;
rdfs:label "Intertextual relation between Sappho and Sappho. Eine Novelle"@en ;
intro:R21i_isIdentifiedBy <https://sappho-digital.com/actualization/interpretation/Q119292643_Q1242002> ;
intro:R22i_relationIsBasedOnSimilarity <https://sappho-digital.com/feature/motif/Q165>,
<https://sappho-digital.com/feature/person_ref/Q17892>,
<https://sappho-digital.com/feature/plot/Q134285870>,
<https://sappho-digital.com/feature/topic/Q10737>,
<https://sappho-digital.com/feature/work_ref/Q1242002> ;
intro:R24_hasRelatedEntity <https://sappho-digital.com/expression/Q1242002>,
<https://sappho-digital.com/expression/Q119292643>,
<https://sappho-digital.com/actualization/motif/Q165_Q119292643>,
<https://sappho-digital.com/actualization/motif/Q165_Q1242002>,
<https://sappho-digital.com/actualization/person_ref/Q17892_Q119292643>,
<https://sappho-digital.com/actualization/person_ref/Q17892_Q1242002>,
<https://sappho-digital.com/actualization/plot/Q134285870_Q119292643>,
<https://sappho-digital.com/actualization/plot/Q134285870_Q1242002>,
<https://sappho-digital.com/actualization/topic/Q10737_Q119292643>,
<https://sappho-digital.com/actualization/topic/Q10737_Q1242002>,
<https://sappho-digital.com/actualization/work_ref/Q1242002_Q119292643>,
<https://sappho-digital.com/textpassage/Q119292643_Q1242002>,
<https://sappho-digital.com/textpassage/Q1242002_Q119292643> .
<https://sappho-digital.com/feature/interpretation/Q119292643_Q1242002> a intro:INT_Interpretation ;
rdfs:label "Interpretation of intertextual relation between Sappho and Sappho. Eine Novelle"@en ;
intro:R17i_featureIsActualizedIn <https://sappho-digital.com/actualization/interpretation/Q119292643_Q1242002> .
<https://sappho-digital.com/actualization/interpretation/Q119292643_Q1242002> a intro:INT2_ActualizationOfFeature ;
rdfs:label "Interpretation of intertextual relation between Sappho and Sappho. Eine Novelle"@en ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q119292643>,
<http://www.wikidata.org/entity/Q1242002> ;
intro:R17_actualizesFeature <https://sappho-digital.com/feature/interpretation/Q119292643_Q1242002> ;
intro:R21_identifies <https://sappho-digital.com/relation/Q119292643_Q1242002> .
# Features & Actualizations
# Person References
<https://sappho-digital.com/feature/person_ref/Q17892> a intro:INT18_Reference ;
rdfs:label "Reference to Sappho (person)"@en ;
intro:R17i_featureIsActualizedIn <https://sappho-digital.com/actualization/person_ref/Q17892_Q119292643>,
<https://sappho-digital.com/actualization/person_ref/Q17892_Q1242002> ;
intro:R22_providesSimilarityForRelation <https://sappho-digital.com/relation/Q119292643_Q1242002> .
<https://sappho-digital.com/person/Q17892> a ecrm:E21_Person ;
rdfs:label "Sappho"@en ;
ecrm:P1_is_identified_by <https://sappho-digital.com/identifier/Q17892> ;
ecrm:P67i_is_referred_to_by <https://sappho-digital.com/actualization/character/Q17892_Q120199245>,
<https://sappho-digital.com/actualization/character/Q17892_Q1242002>,
<https://sappho-digital.com/actualization/person_ref/Q17892_Q119292643>,
<https://sappho-digital.com/actualization/person_ref/Q17892_Q1242002> ;
owl:sameAs <http://www.wikidata.org/entity/Q17892> .
<https://sappho-digital.com/identifier/Q17892> a ecrm:E42_Identifier ;
rdfs:label "Q17892"@en ;
ecrm:P1i_identifies <https://sappho-digital.com/feature/character/Q17892>,
<https://sappho-digital.com/person/Q17892> ;
ecrm:P2_has_type <https://sappho-digital.com/id_type/wikidata> ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q17892> .
<https://sappho-digital.com/actualization/person_ref/Q17892_Q119292643> a intro:INT2_ActualizationOfFeature ;
rdfs:label "Reference to Sappho in Sappho. Eine Novelle"@en ;
ecrm:P67_refers_to <https://sappho-digital.com/person/Q17892> ;
intro:R17_actualizesFeature <https://sappho-digital.com/feature/person_ref/Q17892> ;
intro:R18i_actualizationFoundOn <https://sappho-digital.com/expression/Q119292643> ;
intro:R21i_isIdentifiedBy <https://sappho-digital.com/actualization/interpretation/Q17892_Q119292643> ;
intro:R24i_isRelatedEntity <https://sappho-digital.com/relation/Q119292643_Q1242002> .
<https://sappho-digital.com/feature/interpretation/Q17892_Q119292643> a intro:INT_Interpretation ;
rdfs:label "Interpretation of Sappho in Sappho. Eine Novelle"@en ;
intro:R17i_featureIsActualizedIn <https://sappho-digital.com/actualization/interpretation/Q17892_Q119292643> .
<https://sappho-digital.com/actualization/interpretation/Q17892_Q119292643> a intro:INT2_ActualizationOfFeature ;
rdfs:label "Interpretation of Sappho in Sappho. Eine Novelle"@en ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q119292643> ;
intro:R17_actualizesFeature <https://sappho-digital.com/feature/interpretation/Q17892_Q119292643> ;
intro:R21_identifies <https://sappho-digital.com/actualization/person_ref/Q17892_Q119292643> .
<https://sappho-digital.com/actualization/person_ref/Q17892_Q1242002> a intro:INT2_ActualizationOfFeature ;
rdfs:label "Reference to Sappho in Sappho"@en ;
ecrm:P67_refers_to <https://sappho-digital.com/person/Q17892> ;
intro:R17_actualizesFeature <https://sappho-digital.com/feature/person_ref/Q17892> ;
intro:R18i_actualizationFoundOn <https://sappho-digital.com/expression/Q1242002> ;
intro:R21i_isIdentifiedBy <https://sappho-digital.com/actualization/interpretation/Q17892_Q1242002> ;
intro:R24i_isRelatedEntity <https://sappho-digital.com/relation/Q119292643_Q1242002> .
<https://sappho-digital.com/feature/interpretation/Q17892_Q1242002> a intro:INT_Interpretation ;
rdfs:label "Interpretation of Sappho in Sappho"@en ;
intro:R17i_featureIsActualizedIn <https://sappho-digital.com/actualization/interpretation/Q17892_Q1242002> .
<https://sappho-digital.com/actualization/interpretation/Q17892_Q1242002> a intro:INT2_ActualizationOfFeature ;
rdfs:label "Interpretation of Sappho in Sappho"@en ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q1242002> ;
intro:R17_actualizesFeature <https://sappho-digital.com/feature/interpretation/Q17892_Q1242002> ;
intro:R21_identifies <https://sappho-digital.com/actualization/character/Q17892_Q1242002>,
<https://sappho-digital.com/actualization/person_ref/Q17892_Q1242002> .
# Place References
<https://sappho-digital.com/feature/place_ref/Q128087> a intro:INT18_Reference ;
rdfs:label "Reference to Lesbos (place)"@en ;
intro:R17i_featureIsActualizedIn <https://sappho-digital.com/actualization/place_ref/Q128087_Q1242002>,
<https://sappho-digital.com/actualization/place_ref/Q128087_Q19179765> ;
intro:R22_providesSimilarityForRelation <https://sappho-digital.com/relation/Q1242002_Q19179765> .
<https://sappho-digital.com/place/Q128087> a ecrm:E53_Place ;
rdfs:label "Lesbos"@en ;
ecrm:P1_is_identified_by <https://sappho-digital.com/identifier/Q128087> ;
ecrm:P67i_is_referred_to_by <https://sappho-digital.com/actualization/place_ref/Q128087_Q1242002>,
<https://sappho-digital.com/actualization/place_ref/Q128087_Q19179765> ;
owl:sameAs <http://www.wikidata.org/entity/Q128087> .
<https://sappho-digital.com/identifier/Q128087> a ecrm:E42_Identifier ;
rdfs:label "Q128087"@en ;
ecrm:P1i_identifies <https://sappho-digital.com/place/Q128087> ;
ecrm:P2_has_type <https://sappho-digital.com/id_type/wikidata> ;
prov:wasDerivedFrom <http://www.wikidata.org/entity/Q128087> .
<https://sappho-digital.com/actualization/place_ref/Q128087_Q1242002> a intro:INT2_ActualizationOfFeature ;
rdfs:label "Reference to Lesbos in Sappho"@en ;
ecrm:P67_refers_to <https://sappho-digital.com/place/Q128087> ;
intro:R17_actualizesFeature <https://sappho-digital.com/feature/place_ref/Q128087> ;
intro:R18i_actualizationFoundOn <https://sappho-digital.com/expression/Q1242002> ;
intro:R21i_isIdentifiedBy <https://sappho-digital.com/actualization/interpretation/Q128087_Q1242002> ;
intro:R24i_isRelatedEntity <https://sappho-digital.com/relation/Q1242002_Q19179765> .
<https://sappho-digital.com/feature/interpretation/Q128087_Q1242002> a intro:INT_Interpretation ;
rdf | text/markdown | Laura Untner | null | null | null | MIT License
Copyright (c) 2025 Laura Untner
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"rdflib",
"requests",
"tqdm",
"pyshacl"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.5 | 2026-02-18T14:33:27.113146 | wiki2crm-1.0.3.tar.gz | 686,007 | 35/31/35651a63dfd29d764931c724ce0d9b8e529a5d639d5f5e3834e3d7e7b0e3/wiki2crm-1.0.3.tar.gz | source | sdist | null | false | e4231e836eff94332ec629bdafac50ee | 5508ac0913a0d161191c5cdaf9936983f202d6760b620ea70bd15570ad805f3a | 353135651a63dfd29d764931c724ce0d9b8e529a5d639d5f5e3834e3d7e7b0e3 | null | [] | 238 |
2.4 | mcp-scan | 0.4.2 | MCP Scan tool | <p align="center">
<h1 align="center">
mcp-scan
</h1>
</p>
<p align="center">
Discover and scan agent components on your machine for prompt injections<br/>
and vulnerabilities (including agents, MCP servers, skills).
</p>
> **NEW** Read our [technical report on the emerging threats of the agent skill eco-system](.github/reports/skills-report.pdf) published together with mcp-scan 0.4, which adds support for scanning agent skills.
<p align="center">
<a href="https://pypi.python.org/pypi/mcp-scan"><img src="https://img.shields.io/pypi/v/mcp-scan.svg" alt="mcp-scan"/></a>
<a href="https://pypi.python.org/pypi/mcp-scan"><img src="https://img.shields.io/pypi/l/mcp-scan.svg" alt="mcp-scan license"/></a>
<a href="https://pypi.python.org/pypi/mcp-scan"><img src="https://img.shields.io/pypi/pyversions/mcp-scan.svg" alt="mcp-scan python version requirements"/></a>
</p>
<div align="center">
<img src=".github/mcp-scan-cmd-banner.png?raw=true" alt="MCP-Scan logo"/>
</div>
<br>
MCP-scan helps you keep an inventory of all your installed agent components (harnesses, MCP servers, skills) and scans them for common threats like prompt injections, sensitive data handling or malware payloads hidden natural language.
## Highlights
- Auto-discover MCP configurations, agent tools, skills
- Detects MCP Security Vulnerabilities:
- Prompt Injection Attacks
- Tool Poisoning Attacks
- Toxic Flows
- Scan local STDIO MCP servers and remote HTTP/SSE MCP servers
- Detects Agent Skill Vulnerabilities:
- Prompt Injection Attacks, Malware Payloads
- Exposure to untrusted third parties (e.g. moltbook)
- Sensitive Data Handling
- Hard-coded secrets
## Quick Start
To get started, make sure you have uv [installed](https://docs.astral.sh/uv/getting-started/installation/) on your system.
### Scanning
To run a full scan of your machine (auto-discovers agents, MCP servers, skills), run:
```bash
uvx mcp-scan@latest --skills
```
This will scan for security vulnerabilities in servers, skills, tools, prompts, and resources. It will automatically discover a variety of agent configurations, including Claude Code/Desktop, Cursor, Gemini CLI and Windsurf. Omit `--skills` to skip skill analysis.
You can also scan particular configuration files:
```bash
# scan mcp configurations
uvx mcp-scan@latest ~/.vscode/mcp.json
# scan a single agent skill
uvx mcp-scan@latest --skills ~/path/to/my/SKILL.md
# scan all claude skills
uvx mcp-scan@latest --skills ~/.claude/skills
```
#### Example Run
[](https://asciinema.org/a/716858)
## MCP Security Scanner Capabilities
MCP-Scan is a security scanning tool to both statically and dynamically scan and monitor your MCP connections. It checks them for common security vulnerabilities like [prompt injections](https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks), [tool poisoning](https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks) and [toxic flows](https://invariantlabs.ai/blog/mcp-github-vulnerability). Consult our detailed [Documentation](https://invariantlabs-ai.github.io/docs/mcp-scan) for more information.
MCp-Scan operates in two main modes which can be used jointly or separately:
1. `mcp-scan scan` statically scans all your installed servers for malicious tool descriptions and tools (e.g. [tool poisoning attacks](https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks), cross-origin escalation, rug pull attacks, toxic flows).
[Quickstart →](#server-scanning).
2. `mcp-scan proxy` continuously monitors your MCP connections in real-time, and can restrict what agent systems can do over MCP (tool call checking, data flow constraints, PII detection, indirect prompt injection etc.).
[Quickstart →](#server-proxying).
<br/>
<br/>
<div align="center">
<img src="https://invariantlabs-ai.github.io/docs/mcp-scan/assets/proxy.svg" width="420pt" align="center"/>
<br/>
<br/>
_mcp-scan in proxy mode._
</div>
## Features
- Scanning of Claude, Cursor, Windsurf, and other file-based MCP client configurations
- Scanning for prompt injection attacks in tools and [tool poisoning attacks](https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks) using [Guardrails](https://github.com/invariantlabs-ai/invariant?tab=readme-ov-file#analyzer)
- [Enforce guardrailing policies](https://invariantlabs-ai.github.io/docs/mcp-scan/guardrails-reference/) on MCP tool calls and responses, including PII detection, secrets detection, tool restrictions and entirely custom guardrailing policies.
- Audit and log MCP traffic in real-time via [`mcp-scan proxy`](#proxy)
- Detect cross-origin escalation attacks (e.g. [tool shadowing](https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks)), and detect and prevent [MCP rug pull attacks](https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks), i.e. mcp-scan detects changes to MCP tools via hashing
### Server Proxying
Using `mcp-scan proxy`, you can monitor, log, and safeguard all MCP traffic on your machine. This allows you to inspect the runtime behavior of agents and tools, and prevent attacks from e.g., untrusted sources (like websites or emails) that may try to exploit your agents. mcp-scan proxy is a dynamic security layer that runs in the background, and continuously monitors your MCP traffic.
#### Example Run
<img width="903" alt="image" src="https://github.com/user-attachments/assets/63ac9632-8663-40c3-a765-0bfdfbdf9a16" />
#### Enforcing Guardrails
You can also add guardrailing rules, to restrict and validate the sequence of tool uses passing through proxy.
For this, create a `~/.mcp-scan/guardrails_config.yml` with the following contents:
```yml
<client-name>: # your client's shorthand (e.g., cursor, claude, windsurf)
<server-name>: # your server's name according to the mcp config (e.g., whatsapp-mcp)
guardrails:
secrets: block # block calls/results with secrets
custom_guardrails:
- name: "Filter tool results with 'error'"
id: "error_filter_guardrail"
action: block # or just 'log'
content: |
raise "An error was found." if:
(msg: ToolOutput)
"error" in msg.content
```
From then on, all calls proxied via `mcp-scan proxy` will be checked against your configured guardrailing rules for the current client/server.
Custom guardrails are implemented using Invariant Guardrails. To learn more about these rules, see the [official documentation](https://invariantlabs-ai.github.io/docs/mcp-scan/guardrails-reference/).
## How It Works
### Scanning
MCP-Scan `scan` searches through your configuration files to find MCP server configurations. It connects to these servers and retrieves tool descriptions.
It then scans tool descriptions, both with local checks and by invoking Invariant Guardrailing via an API. For this, tool names and descriptions are shared with invariantlabs.ai. By using MCP-Scan, you agree to the invariantlabs.ai [terms of use](./TERMS.md) and [privacy policy](https://invariantlabs.ai/privacy-policy).
Invariant Labs is collecting data for security research purposes (only about tool descriptions and how they change over time, not your user data). Don't use MCP-scan if you don't want to share your tools. Additionally, a unique, persistent, and anonymous ID is assigned to your scans for analysis. You can opt out of sending this information using the `--opt-out` flag.
MCP-scan does not store or log any usage data, i.e. the contents and results of your MCP tool calls.
### Proxying
For runtime monitoring using `mcp-scan proxy`, MCP-Scan can be used as a proxy server. This allows you to monitor and guardrail system-wide MCP traffic in real-time. To do this, mcp-scan temporarily injects a local [Invariant Gateway](https://github.com/invariantlabs-ai/invariant-gateway) into MCP server configurations, which intercepts and analyzes traffic. After the `proxy` command exits, Gateway is removed from the configurations.
You can also configure guardrailing rules for the proxy to enforce security policies on the fly. This includes PII detection, secrets detection, tool restrictions, and custom guardrailing policies. Guardrails and proxying operate entirely locally using [Guardrails](https://github.com/invariantlabs-ai/invariant) and do not require any external API calls.
## CLI parameters
MCP-scan provides the following commands:
```
mcp-scan - Security scanner for Model Context Protocol servers and tools
```
### Common Options
These options are available for all commands:
```
--storage-file FILE Path to store scan results and whitelist information (default: ~/.mcp-scan)
--base-url URL Base URL for the verification server
--verbose Enable detailed logging output
--print-errors Show error details and tracebacks
--full-toxic-flows Show all tools that could take part in toxic flow. By default only the top 3 are shown.
--json Output results in JSON format instead of rich text
```
### Commands
#### scan (default)
Scan MCP configurations for security vulnerabilities in tools, prompts, and resources.
```
mcp-scan [CONFIG_FILE...]
```
Options:
```
--checks-per-server NUM Number of checks to perform on each server (default: 1)
--server-timeout SECONDS Seconds to wait before timing out server connections (default: 10)
--suppress-mcpserver-io BOOL Suppress stdout/stderr from MCP servers (default: True)
--skills Autodetects and analyzes skills
--skills PATH_TO_SKILL_MD_FILE Analyzes the specific skill
--skills PATHS_TO_DIRECTORY Recursively detects and analyzes all skills in the directory
```
#### proxy
Run a proxy server to monitor and guardrail system-wide MCP traffic in real-time. Temporarily injects [Gateway](https://github.com/invariantlabs-ai/invariant-gateway) into MCP server configurations, to intercept and analyze traffic. Removes Gateway again after the `proxy` command exits.
This command requires the `proxy` optional dependency (extra).
- Run via uvx:
```bash
uvx --with "mcp-scan[proxy]" mcp-scan@latest proxy
```
This installs the `proxy` extra into an uvx-managed virtual environment, not your current shell venv.
Options:
```
CONFIG_FILE... Path to MCP configuration files to setup for proxying.
--pretty oneline|compact|full Pretty print the output in different formats (default: compact)
```
#### inspect
Print descriptions of tools, prompts, and resources without verification.
```
mcp-scan inspect [CONFIG_FILE...]
```
Options:
```
--server-timeout SECONDS Seconds to wait before timing out server connections (default: 10)
--suppress-mcpserver-io BOOL Suppress stdout/stderr from MCP servers (default: True)
```
#### whitelist
Manage the whitelist of approved entities. When no arguments are provided, this command displays the current whitelist.
```
# View the whitelist
mcp-scan whitelist
# Add to whitelist
mcp-scan whitelist TYPE NAME HASH
# Reset the whitelist
mcp-scan whitelist --reset
```
Options:
```
--reset Reset the entire whitelist
--local-only Only update local whitelist, don't contribute to global whitelist
```
Arguments:
```
TYPE Type of entity to whitelist: "tool", "prompt", or "resource"
NAME Name of the entity to whitelist
HASH Hash of the entity to whitelist
```
#### help
Display detailed help information and examples.
```bash
mcp-scan help
```
### Examples
```bash
# Scan all known MCP configs
mcp-scan
# Scan a specific config file
mcp-scan ~/custom/config.json
# Just inspect tools without verification
mcp-scan inspect
# View whitelisted tools
mcp-scan whitelist
# Whitelist a tool
mcp-scan whitelist tool "add" "a1b2c3..."
```
## Demo
This repository includes a vulnerable MCP server that can demonstrate Model Context Protocol security issues that MCP-Scan finds.
How to demo MCP security issues?
1. Clone this repository
2. Create an `mcp.json` config file in the cloned git repository root directory with the following contents:
```jsonc
{
"mcpServers": {
"Demo MCP Server": {
"type": "stdio",
"command": "uv",
"args": ["run", "mcp", "run", "demoserver/server.py"],
},
},
}
```
3. Run MCP-Scan: `uvx --python 3.13 mcp-scan@latest scan --full-toxic-flows mcp.json`
Note: if you place the `mcp.json` configuration filepath elsewhere then adjust the `args` path inside the MCP server configuration to reflect the path to the MCP Server (`demoserver/server.py`) as well as the `uvx` command that runs MCP-Scan CLI with the correct filepath to `mcp.json`.
## MCP-Scan is closed to contributions
MCP-Scan can currently no longer accept external contributions. We are focused on stabilizing releases.
We welcome suggestions, bug reports, or feature requests as GitHub issues.
## Development Setup
To run this package from source, follow these steps:
```bash
uv run pip install -e .
uv run -m src.mcp_scan.cli
```
For proxy functionality (e.g., `mcp-scan proxy`, `mcp-scan server`), install with the proxy extra:
```bash
uv run pip install -e .[proxy]
```
## Including MCP-scan results in your own project / registry
If you want to include MCP-scan results in your own project or registry, please reach out to the team via `mcpscan@invariantlabs.ai`, and we can help you with that.
For automated scanning we recommend using the `--json` flag and parsing the output.
## Further Reading
- [Introducing MCP-Scan](https://invariantlabs.ai/blog/introducing-mcp-scan)
- [MCP Security Notification Tool Poisoning Attacks](https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks)
- [WhatsApp MCP Exploited](https://invariantlabs.ai/blog/whatsapp-mcp-exploited)
- [MCP Prompt Injection](https://simonwillison.net/2025/Apr/9/mcp-prompt-injection/)
- [Toxic Flow Analysis](https://invariantlabs.ai/blog/toxic-flow-analysis)
## Changelog
See [CHANGELOG.md](CHANGELOG.md).
| text/markdown | null | null | null | null | Apache-2.0 | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiofiles>=23.1.0",
"aiohttp>=3.11.16",
"fastapi>=0.115.12",
"filelock>=3.18.0",
"lark>=1.1.9",
"mcp[cli]==1.25.0",
"psutil>=5.9.0",
"pydantic-core>=2.41.4",
"pydantic>=2.11.2",
"pyjson5>=1.6.8",
"pyyaml>=6.0.2",
"rapidfuzz>=3.13.0",
"regex>=2024.11.6",
"rich==14.2.0",
"truststore>=0.10.... | [] | [] | [] | [] | uv/0.9.1 | 2026-02-18T14:33:20.190188 | mcp_scan-0.4.2.tar.gz | 1,373,479 | 9d/ff/1ab49f2b1e3b8bdf114d442f6b49a3f703b9e6d01b0083bd6f9389fc4f7d/mcp_scan-0.4.2.tar.gz | source | sdist | null | false | 95a87b9db45beeedd1e34b8d04630e9b | b7b71bcb5f8137e5c1176be9a5b176b5cd3ddf668718ab38e5261d3a325a0e17 | 9dff1ab49f2b1e3b8bdf114d442f6b49a3f703b9e6d01b0083bd6f9389fc4f7d | null | [
"LICENSE"
] | 1,567 |
2.4 | MedVol | 0.0.20 | A wrapper for loading medical 3D image volumes such as NIFTI or NRRD images. | # MedVol
[](https://github.com/Karol-G/medvol/raw/main/LICENSE)
[](https://pypi.org/project/medvol)
[](https://python.org)
A wrapper for loading medical 2D, 3D and 4D NIFTI or NRRD images.
Features:
- Supports loading and saving of 2D, 3D and 4D Nifti and NRRD images
- (Saving 4D images is currently not supported due to a SimpleITK bug)
- Simple access to image array
- Simple access to image metadata
- Affine
- Spacing
- Origin
- Direction
- Translation
- Rotation
- Scale (Same as spacing)
- Shear
- Header (The raw header)
- Copying/Modification of all or selected metadata across MedVol images
## Installation
You can install `medvol` via [pip](https://pypi.org/project/medvol/):
pip install medvol
## Example
```python
from medvol import MedVol
# Load NIFTI image
image = MedVol("path/to/image.nifti")
# Print some metadata
print("Spacing: ", image.spacing)
print("Affine: ", image.affine)
print("Rotation: ", image.rotation)
print("Header: ", image.header)
# Access and modify the image array
arr = image.array
arr[0, 0, 0] = 1
# Create a new image with the new array, a new spacing, but copy all remaining metadata
new_image = MedVol(arr, spacing=[2, 2, 2], copy=image)
# Save the new image as NRRD
new_image.save("path/to/new_image.nrrd")
```
## Contributing
Contributions are very welcome. Tests can be run with [tox], please ensure
the coverage at least stays the same before you submit a pull request.
## License
Distributed under the terms of the [Apache Software License 2.0] license,
"medvol" is free and open source software
## Issues
If you encounter any problems, please file an issue along with a detailed description.
[Cookiecutter]: https://github.com/audreyr/cookiecutter
[MIT]: http://opensource.org/licenses/MIT
[BSD-3]: http://opensource.org/licenses/BSD-3-Clause
[GNU GPL v3.0]: http://www.gnu.org/licenses/gpl-3.0.txt
[GNU LGPL v3.0]: http://www.gnu.org/licenses/lgpl-3.0.txt
[Apache Software License 2.0]: http://www.apache.org/licenses/LICENSE-2.0
[Mozilla Public License 2.0]: https://www.mozilla.org/media/MPL/2.0/index.txt
[tox]: https://tox.readthedocs.io/en/latest/
[pip]: https://pypi.org/project/pip/
[PyPI]: https://pypi.org/
| text/markdown | Karol Gotkowski | karol.gotkowski@dkfz.de | null | null | Apache-2.0 | null | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Lan... | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"SimpleITK",
"tox; extra == \"testing\"",
"pytest; extra == \"testing\"",
"pytest-cov; extra == \"testing\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T14:33:16.481527 | medvol-0.0.20.tar.gz | 10,206 | 91/3a/1db542fd4d9c7ec472acd56deab62f646ed471c0fb7ea996c9a2f8487913/medvol-0.0.20.tar.gz | source | sdist | null | false | 633bd2986e2964b60fa1cf5302524791 | 9a3d4f64a2a140856d17d42b3f8d0605313703f0ea4f2778383f095674b75cd3 | 913a1db542fd4d9c7ec472acd56deab62f646ed471c0fb7ea996c9a2f8487913 | null | [
"LICENSE"
] | 0 |
2.4 | easytranscriber | 0.1.0 | Speech recognition with accurate word-level timestamps. | <div align="center"><img width="1020" height="340" alt="image" src="https://github.com/user-attachments/assets/7f1bdf33-5161-40c1-b6a7-6f1f586e030b" /></div>
`easytranscriber` is an automatic speech recognition library built for efficient, large-scale transcription with accurate word-level timestamps. The library is backend-agnostic, featuring modular, parallelizable, pipeline components (VAD, transcription, feature/emission extraction, forced alignment), with support for both `ctranslate2` and `Hugging Face` inference backends. Notable features include:
* **GPU accelerated forced alignment**, using [Pytorch's forced alignment API](https://docs.pytorch.org/audio/main/tutorials/ctc_forced_alignment_api_tutorial.html). Forced alignment is based on a GPU implementation of the Viterbi algorithm ([Pratap et al., 2024](https://jmlr.org/papers/volume25/23-1318/23-1318.pdf#page=8)).
* **Parallel loading and pre-fetching of audio files** for efficient data loading and batch processing.
* **Flexible text normalization for improved alignment quality**. Users can supply custom regex-based text normalization functions to preprocess ASR outputs before alignment. A mapping from the original text to the normalized text is maintained internally. All of the applied normalizations and transformations are consequently **non-destructive and reversible after alignment**.
* **35% to 102% faster inference compared to [`WhisperX`](https://github.com/m-bain/whisperX)**. See the [benchmarks](#benchmarks) for more details.
* Batch inference support for both wav2vec2 and Whisper models.
### Benchmarks

| text/markdown | Faton Rekathati | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"transformers>=4.45.0",
"torch!=2.9.*,>=2.7.0",
"torchaudio!=2.9.*,>=2.7.0",
"tqdm>=4.66.1",
"soundfile>=0.12.1",
"nltk>=3.8.2",
"pyannote-audio>=3.3.1",
"silero-vad~=6.0",
"ctranslate2>=4.4.0",
"msgspec",
"easyaligner==0.*"
] | [] | [] | [] | [
"Repository, https://github.com/kb-labb/easytranscriber"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T14:32:59.916204 | easytranscriber-0.1.0-py3-none-any.whl | 17,081 | 1c/e1/a008312b704c10d9eca68ef39d62bd1f2c140e84c34b9c3d3f6098121ae3/easytranscriber-0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 3062ba919ee7dd0fa4d256af58c08c37 | aa9eca40d53709f55446c841b2c74d94fcdc402c38309fd031cb0e68e646b2bc | 1ce1a008312b704c10d9eca68ef39d62bd1f2c140e84c34b9c3d3f6098121ae3 | null | [] | 241 |
2.4 | processcube-etw-library | 2026.2.18.143203b0 | A library to create ETW apps with the ProcessCube platform. | # ProcessCube ETW Library
Build External Task Workers (ETW) for ProcessCube, featuring health checks, typed handlers, and environment-based configuration.
## Installation
```bash
uv add processcube-etw-library
```
## Configuration
### Environment Variables
The library uses environment variables for configuration. You can set these in your environment or in a `.env` file in your project root.
| Variable | Default | Description |
| ------------------------------------------------ | -------------------------------------- | ------------------------------------------------------- |
| `PROCESSCUBE_ENGINE_URL` | `http://localhost:56000` | URL of the ProcessCube Engine |
| `PROCESSCUBE_AUTHORITY_URL` | Auto-discovered from engine | URL of the ProcessCube Authority (OAuth server) |
| `PROCESSCUBE_ETW_CLIENT_ID` | `test_etw` | OAuth client ID for the External Task Worker |
| `PROCESSCUBE_ETW_CLIENT_SECRET` | `3ef62eb3-fe49-4c2c-ba6f-73e4d234321b` | OAuth client secret for the External Task Worker |
| `PROCESSCUBE_ETW_CLIENT_SCOPES` | `engine_etw` | OAuth scopes for the External Task Worker |
| `PROCESSCUBE_MAX_GET_OAUTH_ACCESS_TOKEN_RETRIES` | `10` | Maximum retries for obtaining OAuth access token |
| `PROCESSCUBE_ETW_LONG_POLLING_TIMEOUT_IN_MS` | `60000` | Long polling timeout in milliseconds for external tasks |
| `ENVIRONMENT` | `development` | Environment mode (`development` or `production`) |
#### Example `.env` File
```env
PROCESSCUBE_ENGINE_URL=http://localhost:56000
PROCESSCUBE_ETW_CLIENT_ID=my_etw_client
PROCESSCUBE_ETW_CLIENT_SECRET=my_secret_key
PROCESSCUBE_ETW_CLIENT_SCOPES=engine_etw
ENVIRONMENT=production
```
### Extending Settings
You can extend the base settings class to add your own environment variables:
```python
from pydantic import Field
from processcube_etw_library.settings import ETWSettings, load_settings
class MyAppSettings(ETWSettings):
database_url: str = Field(default="sqlite:///app.db")
api_key: str = Field(default="")
# Load settings with your custom class
settings = load_settings(MyAppSettings)
```
## Usage
### Start the ETW Application
```python
from processcube_etw_library import new_external_task_worker_app
# Create the application
app = new_external_task_worker_app()
# Run the application
app.run()
```
### Subscribe to External Task Topics
```python
from processcube_etw_library import new_external_task_worker_app
app = new_external_task_worker_app()
def my_handler(task):
# Process the task
result = {"processed": True}
return result
# Subscribe to a topic
app.subscribe_to_external_task_for_topic("my-topic", my_handler)
app.run()
```
### Typed Handlers
Use typed handlers for automatic payload validation with Pydantic models:
```python
from pydantic import BaseModel
from processcube_etw_library import new_external_task_worker_app
class MyInput(BaseModel):
name: str
value: int
class MyOutput(BaseModel):
result: str
app = new_external_task_worker_app()
def my_typed_handler(payload: MyInput) -> MyOutput:
return MyOutput(result=f"Processed {payload.name} with value {payload.value}")
app.subscribe_to_external_task_for_topic_typed("my-typed-topic", my_typed_handler)
app.run()
```
### Add a Custom Health Check
```python
from processcube_etw_library import new_external_task_worker_app
from processcube_etw_library.health import HealthCheck, create_url_health_check
app = new_external_task_worker_app()
# Add a URL-based health check
app.add_health_check(
HealthCheck(
create_url_health_check("http://my-service:8080/health"),
service_name="My Service",
tags=["backend", "api"],
comments=["Checks if My Service is reachable"],
)
)
# Add a custom health check function
def check_database():
# Return True if healthy, False otherwise
try:
# Your database check logic here
return True
except Exception:
return False
app.add_health_check(
HealthCheck(
check_database,
service_name="Database",
tags=["backend", "database"],
comments=["Checks database connectivity"],
)
)
app.run()
```
### Managing Health Checks
```python
# Get all registered health checks
checks = app.get_health_checks()
# Get a specific health check by service name
db_check = app.get_health_check("Database")
# Remove a health check
app.remove_health_check("Database")
```
### Disabling Built-in Health Checks
By default, the library registers health checks for the ProcessCube Engine and Authority. You can disable these:
```python
app = new_external_task_worker_app(built_in_health_checks=False)
```
## Health Endpoints
The application exposes health endpoints at `/healthyz` and `/readyz` that return the status of all registered health checks.
To check if the application is running without performing health checks, use `/livez`.
## Server Configuration
The server configuration is determined by the `ENVIRONMENT` variable:
| Setting | Development | Production |
| ---------- | ----------- | ---------- |
| Host | `0.0.0.0` | `0.0.0.0` |
| Port | `8000` | `8000` |
| Log Level | `debug` | `warning` |
| Access Log | `true` | `false` |
| text/markdown | null | Jeremy Hill <jeremy.hill@profection.de> | null | null | MIT | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp>=3.13.3",
"async-lru>=2.1.0",
"dataclasses-json>=0.6.7",
"fastapi[standard]>=0.128.0",
"oauth2-client>=1.4.2",
"tenacity>=9.1.2"
] | [] | [] | [] | [] | uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T14:32:09.877264 | processcube_etw_library-2026.2.18.143203b0-py3-none-any.whl | 51,934 | db/a8/ad9ab6cd11716518e5d9f71b65c997fe5fe01ce1ee5286d732b74e2377bf/processcube_etw_library-2026.2.18.143203b0-py3-none-any.whl | py3 | bdist_wheel | null | false | a3e4d328887fbd7fcd3e03af8cd023d8 | 7ec8553edf0bc68918ea94a7c15c64180b7604b0434f956cdb34cff4b9ae7fca | dba8ad9ab6cd11716518e5d9f71b65c997fe5fe01ce1ee5286d732b74e2377bf | null | [] | 196 |
2.4 | spec-kitty-orchestrator | 0.1.0 | External orchestrator for spec-kitty, integrating via the orchestrator-api contract | # spec-kitty-orchestrator
External orchestrator for the [spec-kitty](https://github.com/spec-kitty/spec-kitty) workflow system.
Coordinates multiple AI agents to autonomously implement and review work packages (WPs) in parallel. Integrates with spec-kitty **exclusively** via the versioned `orchestrator-api` CLI contract — no direct file access, no internal imports.
---
## How it works
```
spec-kitty-orchestrator
│
│ spec-kitty orchestrator-api <cmd> --json
▼
spec-kitty (host)
│
└── kitty-specs/<feature>/tasks/WP01..WPn.md
```
The orchestrator polls the host for ready work packages, spawns AI agents in worktrees, and transitions each WP through `planned → claimed → in_progress → for_review → done` by calling the host API at each step. All workflow state lives in spec-kitty; the orchestrator only tracks provider-local data (retry counts, log paths, agent choices).
---
## Requirements
- Python 3.10+
- [spec-kitty](https://github.com/spec-kitty/spec-kitty) ≥ 2.x installed and on PATH (provides the `orchestrator-api` contract)
- At least one supported AI agent CLI installed (see [Supported agents](#supported-agents))
---
## Installation
```bash
pip install spec-kitty-orchestrator
```
Or from source:
```bash
git clone https://github.com/spec-kitty/spec-kitty-orchestrator
cd spec-kitty-orchestrator
pip install -e ".[dev]"
```
---
## Quick start
```bash
# Verify contract compatibility with the installed spec-kitty
spec-kitty orchestrator-api contract-version --json
# Dry-run to validate configuration
spec-kitty-orchestrator orchestrate --feature 034-my-feature --dry-run
# Run the orchestration loop
spec-kitty-orchestrator orchestrate --feature 034-my-feature
```
The orchestrator will:
1. List all WPs with satisfied dependencies
2. Claim each ready WP via the host API
3. Spawn the implementation agent in the WP's worktree
4. Submit to review when implementation completes
5. Transition to `done` on review approval, or re-implement with feedback on rejection
6. Accept the feature when all WPs are done
---
## CLI reference
```
spec-kitty-orchestrator orchestrate --feature <slug>
[--impl-agent <id>]
[--review-agent <id>]
[--max-concurrent <n>]
[--actor <identity>]
[--repo-root <path>]
[--dry-run]
spec-kitty-orchestrator status [--repo-root <path>]
spec-kitty-orchestrator resume [--actor <identity>]
[--repo-root <path>]
spec-kitty-orchestrator abort [--cleanup-worktrees]
[--repo-root <path>]
```
### `orchestrate`
Starts a new orchestration run for the named feature. Runs until all WPs reach a terminal lane (`done`, `canceled`, or `blocked`) or a dependency deadlock is detected.
| Flag | Default | Description |
|------|---------|-------------|
| `--feature` | required | Feature slug (e.g. `034-auth-system`) |
| `--impl-agent` | `claude-code` | Override implementation agent |
| `--review-agent` | `claude-code` | Override review agent |
| `--max-concurrent` | `4` | Max WPs in flight simultaneously |
| `--actor` | `spec-kitty-orchestrator` | Actor identity recorded in events |
| `--dry-run` | off | Validate config only, don't execute |
### `status`
Shows the provider-local run state (retry counts, agent choices, errors) from the most recent run.
### `resume`
Resumes an interrupted run from saved state. The host already tracks lane state, so the loop simply re-polls for ready WPs.
### `abort`
Records the run as aborted. Use `--cleanup-worktrees` to delete the provider state file.
---
## Configuration
Optional YAML config at `.kittify/orchestrator.yaml`:
```yaml
max_concurrent_wps: 4
agents:
implementation:
- claude-code
- gemini
review:
- claude-code
max_retries: 2
timeout_seconds: 3600
single_agent_mode: false
```
---
## Supported agents
| Agent ID | CLI binary | stdin? | Notes |
|----------|-----------|--------|-------|
| `claude-code` | `claude` | yes | Default; JSON output via `--output-format json` |
| `codex` | `codex` | yes | `codex exec -` with `--full-auto` |
| `copilot` | `gh` | no | Requires `gh extension install github/gh-copilot` |
| `gemini` | `gemini` | yes | Specific exit codes for auth/rate-limit errors |
| `qwen` | `qwen` | yes | Fork of Gemini CLI |
| `opencode` | `opencode` | yes | Multi-provider; JSONL streaming output |
| `kilocode` | `kilocode` | no | Prompt as positional arg with `-a --yolo -j` |
| `augment` | `auggie` | no | `--acp` mode; no JSON output |
| `cursor` | `cursor` | no | Always wrapped with `timeout` to prevent hangs |
The orchestrator detects installed agents automatically at startup:
```bash
python3 -c "from spec_kitty_orchestrator.agents import detect_installed_agents; print(detect_installed_agents())"
```
---
## Policy metadata
Every host mutation call includes a `PolicyMetadata` block that declares the orchestrator's identity and capability scope. The host validates and records this alongside every WP event, creating a full audit trail.
```python
PolicyMetadata(
orchestrator_id="spec-kitty-orchestrator",
orchestrator_version="0.1.0",
agent_family="claude",
approval_mode="full_auto", # full_auto | interactive | supervised
sandbox_mode="workspace_write", # workspace_write | read_only | none
network_mode="none", # allowlist | none | open
dangerous_flags=[],
)
```
Policy fields are validated on both sides: the provider rejects secret-like values before sending; the host rejects missing or malformed policy on run-affecting commands.
---
## Security boundary
The orchestrator has **no direct access** to spec-kitty internals:
- No imports from `specify_cli` or `spec_kitty_events`
- No direct reads or writes to `kitty-specs/`
- No git operations — worktree creation is delegated to the host via `start-implementation`
- All state mutations go through `HostClient` subprocess calls
This is enforced at test time:
```bash
# Boundary check (must print OK)
grep -r "specify_cli\|spec_kitty_events" src/spec_kitty_orchestrator/ && echo "FAIL" || echo "OK"
# AST-level import check in conformance suite
python3.11 -m pytest tests/conformance/test_contract.py::TestBoundaryCheck
```
---
## Provider-local state
The orchestrator writes only to `.kittify/orchestrator-run-state.json` (a file it owns). This tracks:
- Retry counts per WP per role
- Which agents were tried (for fallback)
- Log file paths
- Review feedback from rejected cycles
Lane/status fields are never stored locally — those are always read from the host.
---
## Conformance tests
The `tests/conformance/fixtures/` directory contains 13 canonical JSON fixtures that define the exact shape of every host API response. Both the host and provider test suites use these as source of truth.
```bash
python3.11 -m pytest tests/conformance/ -v
```
---
## Development
```bash
pip install -e ".[dev]"
python3.11 -m pytest tests/
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0",
"rich>=13",
"typer>=0.9",
"pytest-asyncio>=0.24; extra == \"dev\"",
"pytest-mock>=3.12; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:31:40.897046 | spec_kitty_orchestrator-0.1.0.tar.gz | 29,967 | 1c/5b/fefa387e6eeff0442f64a0075c40189673f05047561821b6c45530b9c1ae/spec_kitty_orchestrator-0.1.0.tar.gz | source | sdist | null | false | 9002d4c13b80119e907b5cc18f702b0c | 428b49c6445301d512f28374f664ff1d41bb6b68ca058cb9f5820743803d8d31 | 1c5bfefa387e6eeff0442f64a0075c40189673f05047561821b6c45530b9c1ae | null | [] | 251 |
2.4 | sapphire-language | 1.7.1 | A lightweight programming language with a built-in Studio and pygame functions. | Sapphire is an easy programming language to learn, created by GitHub@CodeyWaffle, with the help of Google Gemini and ChatGPT, built on the basics of Python and C++ (Windows Only)
The Sapphire programming language is designed as a fast, simple, and portable tool with native UI capabilities [1]. It is licensed under MIT, allowing for building, selling, and remixing, but includes a "Be a good guy" policy emphasizing responsible use, prohibiting the creation of weapons or harmful software, requiring credit for major projects or forks, discouraging harassment automation, and promoting innovation over imitation [1]. Visit the Sapphire Programming Language GitHub repository for more details.
Executing the code
Sapphire 15 and beyond:
Install pygamece or pygame and pygame-supported Python versions
Download Sapphire(version) in a ZIP or directly in a folder, then extract it
Open the launchSapphire.bat in the folder
You can run and save the code directly on the SapphireStudio(launches when you clicked the .bat file)
before Sapphire 15, not including:
Download Sapphire.py.
Code a Sapphire code in any notepad application and save it as a .sp file.
Put the Sapphire.py interpreter in the same folder as your .sp file project.
Change destination in your computer terminal to the folder that saved the Sapphire files.
enter python Sapphire12.py YourProject.sp
You can see your project running.
New Commands: (Sapphire 15 and beyond)
// - description
Variables
var type(name){value} - Creates a variable (int, flt, str, bol, lst).
Constants
const var type(name){value} - Creates a read-only variable that cannot be changed.
Jumping
jump.n - Immediately stops the current block and starts executing the area named n.
Defining Areas
def area (n) { } - Creates a named logic block for jumps or organization.
Fixed Loops
loop(n) { } - Executes the code inside $n$ times during a single main cycle.
Condition Loops
setLoopUntil(cond) { } - Runs until the condition evaluates to true.
State Loops
setLoopWhen(cond) { } - Runs as long as the condition remains true.
Conversion
convert(type1 var n){type2} - Migrates a variable from one data type to another.
Output
print() and println() - Sends data to the Studio console.
Graphics
set_screen(w, h), draw_rect(COL, x, y, w, h), and update_display().
Core Rules & Logic
Variable Declaration: Only one variable should be declared per set of brackets.
Boolean Syntax: All boolean logic must use lowercase letters (e.g., true, false, &&, ||).
Conditionals: if statements only execute the code block if the condition is strictly true.
Graphics Sequence: You must call set_screen() before using any drawing commands like draw_rect.
Encoding: Your files support full UTF-8, allowing you to use emojis and special symbols in strings.
Workspace Management: Always use the .bat file to ensure the latest library version is copied into your project folder.
Old Basic code structure: (before Sapphire 15, not including)
.start.program{}
.setup{}
.main{}
.end.program
Old Commands: (before Sapphire 15, not including)
// - description
setLoop(n){until(a){b}} - same as for(n, a, b) in C++
jump.n - jump to an area named n
def area (n) - create an area n
loop(n) - loop n times in one .main cycle
var int (n){1} - create an integer variable n and n = 1
print() - print something
println() - print something and change line
const var int (n){1} - create a constant integer variable n and n = 1
convert(int var n){str} - convert an integer variable n into string
basic boolean logics
numerical calculations
Rules:
Only one variable in the same brackets
If satements only run if condition = true (not false)
Boolean logics in lowercase letters
| text/markdown | CodeyWaffle@GitHub | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pygame-ce"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.0 | 2026-02-18T14:31:34.874074 | sapphire_language-1.7.1.tar.gz | 12,750 | dd/1f/6c959d1ecb7278bc106ba42e71a52cb43e00d3f2f9ad09c9dbd24ca35149/sapphire_language-1.7.1.tar.gz | source | sdist | null | false | fe600b4ae05e3b9f0cadf315f9340e6d | 158913847feaa7871eb2f9cc36edd0a466b2d3618c3fbabcf268a1213a3ae201 | dd1f6c959d1ecb7278bc106ba42e71a52cb43e00d3f2f9ad09c9dbd24ca35149 | null | [
"LICENSE"
] | 221 |
2.4 | school-cover-parser | 0.1.1 | Convert SIMS Notice Board Summary to nicer outputs | # School Cover Parser
Takes cover info from SIMS and makes better outputs
Hopefully this goes onto old kindles
https://www.galacticstudios.org/kindle-weather-display/
## Command-line usage
This project now uses a [Typer](https://typer.tiangolo.com/) CLI.
- Run with the default input (Downloads or `test_data/Notice Board Summary.html`):
- `python main.py`
- Run on a specific HTML file:
- `python main.py --file path/to/Notice\ Board\ Summary.html`
- Disable sending the Outlook email:
- `python main.py --no-email`
- Test mode: run against all `.html` files in `test_data` (no email, no renames, no browser popups) and write separate outputs per file:
- `python main.py --test`
Outputs are written into an `outputs` folder under the directory you run the command from (for example `P:\Documents\outputs`) as `cover_sheet*.html` and `supply_sheet*.html`.
## Installation as a package
You can install this project as a local package and use the CLI directly:
- Install in editable (development) mode from the repo root:
- `pip install -e .`
- Run via the installed console script:
- `school-cover-parser --test`
- `school-cover-parser --file path/to/Notice\ Board\ Summary.html`
- Or run as a module:
- `python -m school_cover_parser --test`
- `python -m school_cover_parser --no-email`
The old entry point still works:
- `python main.py ...` (this forwards to the same Typer app under the hood).
| text/markdown | UTC Sheffield | null | null | null | Proprietary | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"typer[all]",
"pandas",
"bs4",
"pywin32",
"playwright",
"pillow",
"arrow"
] | [] | [] | [] | [
"Homepage, https://www.galacticstudios.org/kindle-weather-display/"
] | twine/6.2.0 CPython/3.12.1 | 2026-02-18T14:31:34.646046 | school_cover_parser-0.1.1.tar.gz | 14,426,923 | 4b/3e/d2655928d2ce07af70db2a505a4b443a8a722d286d73fc72c0204c3b574d/school_cover_parser-0.1.1.tar.gz | source | sdist | null | false | 52581bd746f918b3d6450f18976c2664 | b4aae5dbc7cd3b690020ce5bfd9d4754afb8ff56c058ce472cf20e58cb1d32b3 | 4b3ed2655928d2ce07af70db2a505a4b443a8a722d286d73fc72c0204c3b574d | null | [] | 242 |
2.4 | ownscribe | 0.7.0 | Fully local meeting transcription and summarization CLI | # ownscribe
[](https://pypi.org/project/ownscribe/)
[](https://github.com/paberr/ownscribe/actions/workflows/ci.yml)
[](LICENSE)
[](https://www.python.org/downloads/)
Local-first meeting transcription and summarization CLI.
Record, transcribe, and summarize meetings and system audio entirely on your machine – no cloud, no bots, no data leaving your device.
> System audio capture requires **macOS 14.2 or later**. Other platforms can use the sounddevice backend with an external audio source.
## Privacy
ownscribe **does not**:
- send audio to external servers
- upload transcripts
- require cloud APIs
- store data outside your machine
All audio, transcripts, and summaries remain local.
<!-- TODO: Add asciinema demo or terminal screenshot here -->
## Features
- **System audio capture** — records all system audio natively via Core Audio Taps (macOS 14.2+), no virtual audio drivers needed
- **Microphone capture** — optionally record system + mic audio simultaneously with `--mic`
- **WhisperX transcription** — fast, accurate speech-to-text with word-level timestamps
- **Speaker diarization** — optional speaker identification via pyannote (requires HuggingFace token)
- **Pipeline progress** — live checklist showing transcription, diarization sub-steps, and summarization progress
- **Local LLM summarization** — structured meeting notes via Ollama, LM Studio, or any OpenAI-compatible server
- **Summarization templates** — built-in presets for meetings, lectures, and quick briefs; define your own in config
- **Ask your meetings** — ask natural-language questions across all your meeting notes; uses a two-stage LLM pipeline with keyword fallback
- **One command** — just run `ownscribe`, press Ctrl+C when done, get transcript + summary
## Requirements
- macOS 14.2+ (for system audio capture)
- Python 3.12+
- [uv](https://docs.astral.sh/uv/)
- [ffmpeg](https://ffmpeg.org/) — `brew install ffmpeg`
- Xcode Command Line Tools (`xcode-select --install`)
- One of:
- [Ollama](https://ollama.ai) — `brew install ollama`
- [LM Studio](https://lmstudio.ai)
- Any OpenAI-compatible local server
Works with any app that outputs audio through Core Audio (Zoom, Teams, Meet, etc.).
> **Tip:** Your terminal app (Terminal, iTerm2, VS Code, etc.) needs **Screen Recording** permission to capture system audio.
> Open the settings panel directly with:
> ```bash
> open "x-apple.systempreferences:com.apple.preference.security?Privacy_ScreenCapture"
> ```
> Enable your terminal app, then restart it.
## Installation
### Quick start with uvx
```bash
uvx ownscribe
```
On macOS, the Swift audio capture helper is downloaded automatically on first run.
### From source
```bash
# Clone the repo
git clone https://github.com/paberr/ownscribe.git
cd ownscribe
# Build the Swift audio capture helper (optional — auto-downloads if skipped)
bash swift/build.sh
# Install with transcription support
uv sync --extra transcription
# Pull a model for summarization (if using Ollama)
ollama pull mistral
```
## Usage
### Record, transcribe, and summarize a meeting
```bash
ownscribe # records system audio, Ctrl+C to stop
```
This will:
1. Capture system audio until you press Ctrl+C
2. Transcribe with WhisperX
3. Summarize with your local LLM
4. Save everything to `~/ownscribe/YYYY-MM-DD_HHMMSS/`
### Options
```bash
ownscribe --mic # capture system audio + default mic (press 'm' to mute/unmute)
ownscribe --mic-device "MacBook Pro Microphone" # capture system audio + specific mic
ownscribe --device "MacBook Pro Microphone" # use mic instead of system audio
ownscribe --no-summarize # skip LLM summarization
ownscribe --diarize # enable speaker identification
ownscribe --language en # set transcription language (default: auto-detect)
ownscribe --model large-v3 # use a larger Whisper model
ownscribe --format json # output as JSON instead of markdown
ownscribe --no-keep-recording # auto-delete WAV files after transcription
ownscribe --template lecture # use the lecture summarization template
```
### Subcommands
```bash
ownscribe devices # list audio devices (uses native CoreAudio when available)
ownscribe apps # list running apps with PIDs for use with --pid
ownscribe transcribe recording.wav # transcribe an existing audio file
ownscribe summarize transcript.md # summarize an existing transcript
ownscribe ask "question" # search your meetings with a natural-language question
ownscribe config # open config file in $EDITOR
ownscribe cleanup # remove ownscribe data from disk
```
### Searching Meeting Notes
Use `ask` to search across all your meeting notes with natural-language questions:
```bash
ownscribe ask "What did Anna say about the deadline?"
ownscribe ask "budget decisions" --since 2026-01-01
ownscribe ask "action items from last week" --limit 5
```
This runs a two-stage pipeline:
1. **Find** — sends meeting summaries to the LLM to identify which meetings are relevant
2. **Answer** — sends the full transcripts of relevant meetings to the LLM to produce an answer with quotes
If the LLM finds no relevant meetings, a keyword fallback searches summaries and transcripts directly.
## Configuration
Config is stored at `~/.config/ownscribe/config.toml`. Run `ownscribe config` to create and edit it.
```toml
[audio]
backend = "coreaudio" # "coreaudio" or "sounddevice"
device = "" # empty = system audio
mic = false # also capture microphone input
mic_device = "" # specific mic device name (empty = default)
[transcription]
model = "base" # tiny, base, small, medium, large-v3
language = "" # empty = auto-detect
[diarization]
enabled = false
hf_token = "" # HuggingFace token for pyannote
telemetry = false # allow HuggingFace Hub + pyannote metrics telemetry
device = "auto" # "auto" (mps if available), "mps", or "cpu"
[summarization]
enabled = true
backend = "ollama" # "ollama" or "openai"
model = "mistral"
host = "http://localhost:11434"
# template = "meeting" # "meeting", "lecture", "brief", or a custom name
# context_size = 0 # 0 = auto-detect from model; set manually for OpenAI-compatible backends
# Custom templates (optional):
# [templates.my-standup]
# system_prompt = "You summarize daily standups."
# prompt = "List each person's update:\n{transcript}"
[output]
dir = "~/ownscribe"
format = "markdown" # "markdown" or "json"
keep_recording = true # false = auto-delete WAV after transcription
```
**Precedence:** CLI flags > environment variables (`HF_TOKEN`, `OLLAMA_HOST`) > config file > defaults.
## Summarization Templates
Built-in templates control how transcripts are summarized:
| Template | Best for | Output style |
|----------|----------|-------------|
| `meeting` | Meetings, standups, 1:1s | Summary, Key Points, Action Items, Decisions |
| `lecture` | Lectures, seminars, talks | Summary, Key Concepts, Key Takeaways |
| `brief` | Quick overviews | 3-5 bullet points |
Use `--template` on the CLI or set `template` in `[summarization]` config. Default is `meeting`.
Define custom templates in config:
```toml
[templates.my-standup]
system_prompt = "You summarize daily standups."
prompt = "List each person's update:\n{transcript}"
```
Then use with `--template my-standup` or `template = "my-standup"` in config.
## Speaker Diarization
Speaker identification requires a HuggingFace token with access to the pyannote models:
1. Accept the terms for both models on HuggingFace:
- [pyannote/speaker-diarization-3.1](https://huggingface.co/pyannote/speaker-diarization-3.1)
- [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0)
2. Create a token at https://huggingface.co/settings/tokens
3. Set `HF_TOKEN` env var or add `hf_token` to config
4. Run with `--diarize`
On Apple Silicon Macs, diarization automatically uses the Metal Performance Shaders (MPS) GPU backend for ~10x faster processing. Set `device = "cpu"` in the `[diarization]` config section to disable this.
## Acknowledgments
ownscribe builds on some excellent open-source projects:
- [WhisperX](https://github.com/m-bain/whisperX) — fast speech recognition with word-level timestamps and speaker diarization
- [faster-whisper](https://github.com/SYSTRAN/faster-whisper) — CTranslate2-based Whisper inference
- [pyannote.audio](https://github.com/pyannote/pyannote-audio) — speaker diarization
- [Ollama](https://ollama.ai) — local LLM serving
- [Click](https://click.palletsprojects.com) — CLI framework
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, tests, and open contribution areas.
## License
MIT
| text/markdown | Pascal Berrang | Pascal Berrang <git@p4l.dev> | null | null | null | meeting, transcription, summarization, whisper, local, privacy | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Multimedia :: Sound/Audio :: Speec... | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.1",
"sounddevice>=0.5",
"soundfile>=0.13",
"ollama>=0.4",
"openai>=1.0",
"whisperx>=3.7"
] | [] | [] | [] | [
"Homepage, https://github.com/paberr/ownscribe",
"Repository, https://github.com/paberr/ownscribe",
"Issues, https://github.com/paberr/ownscribe/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:31:30.314517 | ownscribe-0.7.0.tar.gz | 27,410 | 05/8b/2eeb4e548476aeb564665e3dc145f8674d96fdbc9c6fa4119e4b05230ab8/ownscribe-0.7.0.tar.gz | source | sdist | null | false | ab364903ec43ff98e6bb969a7287a445 | 9b595eb2468f481535359b86c04278247054247e07e589a54cbfa805aa671534 | 058b2eeb4e548476aeb564665e3dc145f8674d96fdbc9c6fa4119e4b05230ab8 | MIT | [] | 218 |
2.4 | datarobot-moderations | 11.2.16 | DataRobot Monitoring and Moderation framework | # DataRobot Moderations library
This library enforces the intervention in the prompt and response texts as per the
guard configuration set by the user.
The library accepts the guard configuration in the yaml format and the input prompts
and outputs the dataframe with the details like:
- should the prompt be blocked
- should the completion be blocked
- metric values obtained from the model guards
- is the prompt or response modified as per the modifier guard configuration
## Architecture
The library is architected in a way that it wraps around the typical LLM prediction method.
The library will first run the pre-score guards - the guards that will evaluate prompts and
enforce moderation if necessary. All the prompts that were not moderated by the library are
forwarded to the actual LLM to get their respective completions. The library then evaluates
these completions using post-score guards and enforces intervention on them.

## How to build it?
The repository uses `poetry` to manage the build process and a wheel can be built using:
```
make clean
make
```
## How to use it?
A wheel file generated or downloaded can be installed with pip and will pull its
dependencies as well.
```
pip3 install datarobot-moderations
```
### With [DRUM](https://github.com/datarobot/datarobot-user-models)
As described above the library nicely wraps DRUM's `score` method for pre and post score
guards. Hence, in case of DRUM, the user simply runs her custom model using `drum score`
and can avail the moderation library features.
```
pip3 install datarobot-drum
drum score --verbose --logging-level info --code-dir ./ --input ./input.csv --target-type textgeneration --runtime-params-file values.yaml
```
Please refer to the DRUM documentation on [how to define custom inference model](https://github.com/datarobot/datarobot-user-models?tab=readme-ov-file#custom-inference-models-reference-)
which will walk you through how to assemble custom inference model to how to [test it locally](https://github.com/datarobot/datarobot-user-models/blob/master/DEFINE-INFERENCE-MODEL.md#test_inference_model_drum) using `drum score` method
### Standalone use
However, moderation library is not tightly coupled with DRUM and we are actively working
towards using this library in non-DRUM use case. [run.py](./run.py) is an example on how
to use this library in a stand alone way. This example uses Azure OpenAI service to get
LLM completions.
```
export AZURE_OPENAI_API_KEY=<your-azure-openai-api-key>
python run.py --config ./moderation_config.yaml --input ./input.csv --azure-openai-api-base <azure-openai-base-url> --score
```
This will output the response dataframe with bunch of information indicating which prompts
and responses were blocked / reported, why they are blocked / reported etc
[run.py](./run.py) also has an example on how to use this library to moderate the chat
interface. It also uses Azure OpenAI service to get chat completions:
```
export AZURE_OPENAI_API_KEY=<your-azure-openai-api-key>
python run.py --config ./moderation_config.yaml --input ./input_chat.csv --azure-openai-api-base <azure-openai-base-url> --chat
```
It will output the conversation with LLM line by line.
| text/markdown | DataRobot | support@datarobot.com | null | null | DataRobot Tool and Utility Agreement | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"aiohttp>=3.9.5",
"backoff>=2.2.1",
"datarobot>=3.6.0",
"datarobot-predict>=1.9.6",
"deepeval>=3.3.5",
"langchain>=0.1.12",
"langchain-nvidia-ai-endpoints>=0.3.9",
"langchain-openai>=0.1.7",
"llama-index>=0.13.0",
"llama-index-embeddings-azure-openai>=0.1.6",
"llama-index-llms-bedrock-converse>=... | [] | [] | [] | [] | poetry/2.3.2 CPython/3.11.13 Linux/6.1.159-181.297.amzn2023.x86_64 | 2026-02-18T14:30:34.320616 | datarobot_moderations-11.2.16-py3-none-any.whl | 78,792 | 31/be/d79c5271542193387681d8617af45140550527fbd0beac5a2c3794d11d8e/datarobot_moderations-11.2.16-py3-none-any.whl | py3 | bdist_wheel | null | false | c4603d24bb1d61a6ca74d1d92ac9167f | b5223581afc2e86d71b817b7c2afc186b172e2568224a5e14e8c2da2b7a0b01b | 31bed79c5271542193387681d8617af45140550527fbd0beac5a2c3794d11d8e | null | [] | 873 |
2.4 | trellis-datamodel | 0.9.0 | Visual data model editor for dbt projects | # Trellis Data

A lightweight, local-first tool to bridge Conceptual Data Modeling, Logical Data Modeling and the Physical Implementation (currently with dbt-core).
## Motivation
**Current workflow pains:**
- ERD diagrams live in separate tools (Lucidchart, draw.io) and quickly become stale or unreadable for large projects
- Data transformations are done isolated from the conceptual data model.
- No single view connecting business concepts to logical schema
- Stakeholders can't easily understand model structure without technical context
- Holistic Data Warehouse Automation Tools exists but do not integrate well with dbt and the Modern Data Stack
**How Trellis helps:**
- Visual data model that stays in sync — reads directly from `manifest.json` / `catalog.json`
- Sketch entities and with their fields and auto-generate schema.yml's for dbt
- Draw relationships on canvas → auto-generates dbt `relationships` tests
- Two views: **Conceptual** (entity names, descriptions) and **Logical** (columns, types, materializations) to jump between high-level architect and execution-view.
- Organize entities based on subdirectories and tags from your pyhsical implementation.
- Write description or tags back to your dbt-project
**Two Ways of getting started**
- Greenfield: draft entities and fields before writing SQL, then sync to dbt YAML
- Brownfield: document your existing data model by loading existing dbt models and utilize relationship tests to infer links
## Dimensional Modeling Support
Trellis includes native support for Kimball dimensional modeling, making it easier to design, visualize, and document star and snowflake schemas.
## Business Events and Processes
Trellis supports capturing granular business events with 7W annotations and grouping related events into named processes. Processes let you consolidate multiple events into a single fact table (discrete records) or model an accumulating snapshot for evolving workflows.
### Business Events File Structure
Business events are stored in `business_events.yml`. Processes group events without deleting the originals and maintain a superset of annotations across member events.
```yaml
events:
- id: evt_20260127_001
text: customer places order
type: discrete
domain: sales
process_id: proc_20260127_001
created_at: "2026-01-27T09:15:00Z"
updated_at: "2026-01-27T09:15:00Z"
annotations:
who:
- id: entry_01
text: customer
what:
- id: entry_02
text: order
how_many:
- id: entry_03
text: order_amount
derived_entities: []
processes:
- id: proc_20260127_001
name: order to cash
type: evolving
event_ids: [evt_20260127_001, evt_20260127_002]
created_at: "2026-01-27T09:20:00Z"
updated_at: "2026-01-27T09:25:00Z"
annotations_superset:
who:
- id: entry_01
text: customer
what:
- id: entry_02
text: order
how_many:
- id: entry_03
text: order_amount
```
Notes:
- `process_id` links events to a process but does not remove the event.
- `annotations_superset` is the union of member event 7Ws.
- Resolving a process detaches events while keeping them intact.
### Features
**Entity Classification**
- Classify entities as **fact** (transaction tables), **dimension** (descriptive tables), or **unclassified**
- Manual classification during entity creation or via context menu
- Automatic inference from dbt model naming patterns (e.g., `dim_customer` → dimension, `fct_orders` → fact)
- Configurable inference patterns in `trellis.yml`
**Smart Default Positioning**
- Facts are automatically placed in the center area of the canvas
- Dimensions are placed in an outer ring around facts
- Reduces manual layout effort for star/snowflake schemas
- Can be re-applied anytime with "Auto-Layout" button
**Kimball Bus Matrix View**
- Visual matrix showing dimensions (rows) and facts (columns)
- Checkmarks (✓) indicate dimension-fact connections
- Filter by dimension name, fact name, or tags
- Click cells to highlight relationships on the canvas
- Dedicated view mode accessible from navigation bar
### Configuration
Enable dimensional modeling features in `trellis.yml`:
```yaml
modeling_style: dimensional_model # Options: dimensional_model or entity_model (default)
dimensional_modeling:
inference_patterns:
dimension_prefix: ["dim_", "d_"] # Prefixes for dimension tables
fact_prefix: ["fct_", "fact_"] # Prefixes for fact tables
```
- `modeling_style: dimensional_model` enables all dimensional modeling features
- `modeling_style: entity_model` (default) preserves current generic behavior
- Inference patterns customize how entities are auto-classified from dbt model names
### Entity Classification Workflow
**Creating New Entities:**
1. Click "Create Entity" button
2. Fill in entity name and description
3. Select entity type: Fact, Dimension, or Unclassified
4. Entity is placed on canvas according to type (facts center, dimensions outer ring)
**Loading Existing dbt Models:**
1. System automatically infers entity types from naming patterns
2. Entity type icons appear on nodes (database for fact, box for dimension)
3. Override incorrect classifications via context menu: right-click → "Set as Fact/Dimension"
**Bus Matrix Workflow:**
1. Click "Bus Matrix" icon in navigation bar
2. View dimensions (rows) and facts (columns)
3. Checkmarks show connections between entities
4. Filter to focus on specific dimensions, facts, or tags
5. Click checkmark to highlight relationship on canvas
### Use Cases
**When to Use Dimensional Modeling:**
- Designing data warehouses with star/snowflake schemas
- Following Kimball methodology
- Working with fact and dimension tables
- Documenting data models for BI stakeholders
**When to Use Entity Model:**
- Generic data modeling (not strictly dimensional)
- Mixed schema patterns
- Legacy projects with inconsistent naming
- Exploratory modeling
### Entity Model Prefix Support
Trellis includes native support for configurable entity prefixes when using entity modeling style, allowing teams with established table naming conventions to maintain consistency while keeping entity labels clean.
#### Features
**Prefix Application**
- Automatically applies configured prefix when saving unbound entities to dbt schema.yml files
- Supports single prefix or multiple prefixes (e.g., `tbl_`, `entity_`, `t_`)
- Uses first configured prefix for application when multiple are provided
- Case-insensitive prefix detection prevents duplication (e.g., `TBL_CUSTOMER` won't become `tbl_TBL_CUSTOMER`)
- Respects existing bound dbt_model values (bound entities don't get re-prefixed)
**Prefix Stripping from Labels**
- Configured prefixes are automatically stripped from entity labels displayed on the ERD canvas
- Labels remain human-readable and meaningful without technical prefixes
- Works for all entity labels: newly created entities, entities loaded from dbt models, and entities bound to existing dbt models
- Preserves original casing of remaining label text after stripping
#### Configuration
Enable entity modeling prefix support in `trellis.yml`:
```yaml
modeling_style: entity_model # Options: dimensional_model or entity_model (default)
entity_modeling:
inference_patterns:
prefix: "tbl_" # Single prefix
# OR
prefix: ["tbl_", "entity_", "t_"] # Multiple prefixes
```
- `modeling_style: entity_model` (default) enables entity modeling features
- `entity_modeling.inference_patterns.prefix` defines one or more prefixes to apply when saving entities
- Empty prefix list (default) results in no behavior change for backward compatibility
- When multiple prefixes are configured, the first in the list is used for application, but all are recognized for stripping
#### Examples
**Single Prefix Configuration:**
```yaml
entity_modeling:
inference_patterns:
prefix: "tbl_"
```
- Entity "Customer" on canvas saves to dbt as `tbl_customer`
- Loading `tbl_customer` from dbt displays as "Customer" on canvas
**Multiple Prefix Configuration:**
```yaml
entity_modeling:
inference_patterns:
prefix: ["tbl_", "entity_", "t_"]
```
- Entity "Product" on canvas saves to dbt as `tbl_product` (uses first prefix)
- Loading `entity_product` from dbt displays as "Product" on canvas (strips any matching prefix)
- Loading `t_order` from dbt displays as "Order" on canvas (strips any matching prefix)
**Backward Compatibility:**
- Existing `entity_model` projects continue to work without modification when prefix is empty (default)
- No breaking changes to existing APIs or data structures
- Simply add prefix configuration to enable the feature for new or existing projects
## Tutorial & Guide
Check out our [Full Tutorial](https://app.capacities.io/home/667ad256-ca68-4dfd-8231-e77d83127dcf) with video clips showing the core features in action. Also [General Information](https://app.capacities.io/home/8b7546f6-9028-4209-a383-c4a9ba9be42a) is available.
### Configuration UI
trellis provides a web-based configuration interface for editing `trellis.yml` settings.
#### Accessing Configuration
Navigate to `/config` in your browser (or click "Config" in the navigation bar) to access the configuration interface.
#### Features
- **Real-time Validation**: Backend validates all changes before saving, ensuring invalid values are rejected
- **Atomic Writes**: All configuration changes create timestamped backups before overwriting the config file
- **Conflict Detection**: If the config file is modified externally (e.g., by another editor), you'll be warned before overwriting
- **Danger Zone**: Experimental features (lineage, exposures) require explicit acknowledgment before enabling
- **Recovery UI**: Clear error messages and retry options if the config file is missing or unreadable
#### Backup Behavior
When you apply configuration changes:
1. A backup is created with timestamp format: `trellis.yml.bak.YYYYMMDD-HHMMSS`
2. The backup is saved in the same directory as `trellis.yml`
3. The new configuration is written atomically (via temporary file + move operation)
4. Multiple backups are preserved for safety
#### Configuration Fields
The config UI supports editing all user-facing fields:
- Framework (dbt-core only, currently)
- Modeling style (dimensional_model or entity_model)
- Paths (dbt_project_path, dbt_manifest_path, dbt_catalog_path, data_model_file)
- Entity creation guidance (wizard, warnings, description settings)
- Dimensional modeling (dimension/fact prefixes)
- Entity modeling (entity prefix)
- Lineage (beta - layers configuration)
- Exposures (beta - enabled status and layout)
#### Validation Rules
- Path fields validate that files exist (or provide clear warnings for optional paths like catalog)
- Enum fields restrict values to valid options
- Type checking ensures integers, booleans, and lists have correct formats
- Backend validation is authoritative; frontend provides UX feedback but cannot bypass validation
#### Normalization
- Configuration is saved as normalized YAML for consistency
- Comments in the original `trellis.yml` are not preserved (this is expected)
- Formatting follows a standard pattern that the backend understands
## Vision
trellis is currently designed and tested specifically for **dbt-core**, but the vision is to be tool-agnostic. As the saying goes: *"tools evolve, concepts don't"* — data modeling concepts persist regardless of the transformation framework you use.
If this project gains traction, we might explore support for:
- **dbt-fusion** through adapter support
- **Pydantic models** as a simple output format
- Other frameworks like [SQLMesh](https://github.com/TobikoData/sqlmesh) or [Bruin](https://github.com/bruin-data/bruin) through adapter patterns, where compatibility allows
This remains a vision for now — the current focus is on making Trellis work well with dbt-core.
## Prerequisites
- **Node.js 22+ (or 20.19+) & npm**
- Recommended: Use [nvm](https://github.com/nvm-sh/nvm) to install a compatible version (e.g., `nvm install 22`).
- Note: System packages (`apt-get`) may be too old for the frontend dependencies.
- A `.nvmrc` file is included; run `nvm use` to switch to the correct version automatically.
- **Python 3.11+ & [uv](https://github.com/astral-sh/uv)**
- Install uv via `curl -LsSf https://astral.sh/uv/install.sh | sh` and ensure it's on your `$PATH`.
- **Make** (optional) for convenience targets defined in the `Makefile`.
## Installation
### Install from PyPI
```bash
pip install trellis-datamodel
# or with uv
uv pip install trellis-datamodel
```
### Install from Source (Development)
```bash
# Clone the repository
git clone https://github.com/timhiebenthal/trellis-datamodel.git
cd trellis-datamodel
# Install in editable mode
pip install -e .
# or with uv
uv pip install -e .
```
## Quick Start
1. **Navigate to your dbt project directory**
```bash
cd /path/to/your/dbt-project
```
2. **Initialize configuration**
```bash
trellis init
```
This creates a `trellis.yml` file. Edit it to point to your dbt manifest and catalog locations.
3. **Start the server**
```bash
trellis run
```
The server will start on **http://localhost:8089** and automatically open your browser.
## Development Setup
For local development with hot reload:
### Install Dependencies
Run these once per machine (or when dependencies change).
1. **Backend**
```bash
uv sync
```
2. **Frontend**
```bash
cd frontend
npm install
```
**Terminal 1 – Backend**
```bash
make backend
# or
uv run trellis run
```
Backend serves the API at http://localhost:8089.
**Terminal 2 – Frontend**
```bash
make frontend
# or
cd frontend && npm run dev
```
Frontend runs at http://localhost:5173 (for development with hot reload).
## Building for Distribution
To build the package with bundled frontend:
```bash
make build-package
```
This will:
1. Build the frontend (`npm run build`)
2. Copy static files to `trellis_datamodel/static/`
3. Build the Python wheel (`uv build`)
The wheel will be in `dist/` and can be installed with `pip install dist/trellis_datamodel-*.whl`.
## CLI Options
```bash
trellis run [OPTIONS]
Options:
--port, -p INTEGER Port to run the server on [default: 8089]
--config, -c TEXT Path to config file (trellis.yml or config.yml)
--no-browser Don't open browser automatically
--help Show help message
```
## dbt Metadata
- Generate `manifest.json` and `catalog.json` by running `dbt docs generate` in your dbt project.
- The UI reads these artifacts to power the ERD modeller.
- Without these artifacts, the UI loads but shows no dbt models.
## Configuration
Run `trellis init` to create a starter `trellis.yml` file in your project.
The generated file mirrors the annotated defaults in `trellis.yml.example`, so review that example when you need to customize optional sections (lineage, guidance, helpers).
Options:
- `framework`: Transformation framework to use. Currently supported: `dbt-core`. Future: `dbt-fusion`, `sqlmesh`, `bruin`, `pydantic`. Defaults to `dbt-core`.
- `dbt_project_path`: Path to your dbt project directory (relative to `config.yml` or absolute). **Required**.
- `dbt_manifest_path`: Path to `manifest.json` (relative to `dbt_project_path` or absolute). Defaults to `target/manifest.json`.
- `dbt_catalog_path`: Path to `catalog.json` (relative to `dbt_project_path` or absolute). Defaults to `target/catalog.json`.
- `data_model_file`: Path where the data model YAML will be saved (relative to `dbt_project_path` or absolute). Defaults to `data_model.yml`.
- `dbt_model_paths`: List of path patterns to filter which dbt models are shown (e.g., `["3_core"]`). If empty, all models are included.
- `dbt_company_dummy_path`: Helper dbt project used by `trellis generate-company-data`. Run the command to create `./dbt_company_dummy` or update this path to an existing project.
- `modeling_style`: Modeling style to use. Options: `entity_model` (default) or `dimensional_model`. Controls whether dimensional modeling features or entity modeling prefix features are enabled.
- `entity_modeling.inference_patterns.prefix`: Prefix(es) to apply when saving entities and strip from labels in entity modeling mode. Can be a single string or list of strings. Defaults to empty list (no prefix). See "Entity Model Prefix Support" section for examples and details.
- `lineage.enabled`: Feature flag for lineage UI + API. Defaults to `false` (opt-in).
- `lineage.layers`: Ordered list of folder names to organize lineage bands. Prefer this nested structure; legacy `lineage_layers` is deprecated.
- `exposures.enabled`: Feature flag for Exposures view mode. Defaults to `false` (opt-in). Set to `true` to enable the exposures view and API.
- `exposures.default_layout`: Default table layout for exposures view. Options: `dashboards-as-rows` (default, dashboards as rows, entities as columns) or `entities-as-rows` (exposures as columns, entities as rows). Users can manually toggle between layouts.
- `entity_creation_guidance`: Encounter-friendly guidance for the entity wizard (current defaults are shown in `trellis.yml.example`).
**Example `trellis.yml`:**
```yaml
framework: dbt-core
dbt_project_path: "./dbt_built"
dbt_manifest_path: "target/manifest.json"
dbt_catalog_path: "target/catalog.json"
data_model_file: "data_model.yml"
dbt_model_paths: [] # Empty list includes all models
dbt_company_dummy_path: "./dbt_company_dummy"
#lineage:
# enabled: false # Set to true to enable lineage UI/endpoints
# layers: []
#exposures:
# enabled: false # Set to true to enable Exposures view (opt-in)
# default_layout: dashboards-as-rows # Options: dashboards-as-rows (default) or entities-as-rows
#entity_creation_guidance:
# enabled: true # Set false to disable the step-by-step wizard
# push_warning_enabled: true
# min_description_length: 10
# disabled_guidance: []
```
Lineage and entity creation guidance sections are documented fully in `trellis.yml.example`; the CLI leaves them commented out by default.
```
## Testing
### Frontend
**Testing Libraries:**
The following testing libraries are defined in `package.json` under `devDependencies` and are automatically installed when you run `npm install`:
- [Vitest](https://vitest.dev/) (Unit testing)
- [Playwright](https://playwright.dev/) (End-to-End testing)
- [Testing Library](https://testing-library.com/) (DOM & Svelte testing utilities)
- [jsdom](https://github.com/jsdom/jsdom) (DOM environment)
> **Playwright system dependencies (Ubuntu/WSL2)**
>
> The browsers downloaded by Playwright need a handful of native libraries. Install them before running `npm run test:e2e`:
>
> ```bash
> sudo apt-get update && sudo apt-get install -y \
> libxcursor1 libxdamage1 libgtk-3-0 libpangocairo-1.0-0 libpango-1.0-0 \
> libatk1.0-0 libcairo-gobject2 libcairo2 libgdk-pixbuf-2.0-0 libasound2 \
> libnspr4 libnss3 libgbm1 libgles2-mesa libgtk-4-1 libgraphene-1.0-0 \
> libxslt1.1 libwoff2dec0 libvpx7 libevent-2.1-7 libopus0 \
> libgstallocators-1.0-0 libgstapp-1.0-0 libgstpbutils-1.0-0 libgstaudio-1.0-0 \
> libgsttag-1.0-0 libgstvideo-1.0-0 libgstgl-1.0-0 libgstcodecparsers-1.0-0 \
> libgstfft-1.0-0 libflite1 libflite1-plugins libwebpdemux2 libavif13 \
> libharfbuzz-icu0 libwebpmux3 libenchant-2-2 libsecret-1-0 libhyphen0 \
> libwayland-server0 libmanette-0.2-0 libx264-163
> ```
**Running Tests:**
The test suite has multiple levels to catch different types of issues:
```bash
cd frontend
# Quick smoke test (catches 500 errors, runtime crashes, ESM issues)
# Fastest way to verify the app loads without errors
npm run test:smoke
# TypeScript/compilation check
npm run check
# Unit tests
npm run test:unit
# E2E tests (includes smoke test + full test suite)
# Note: Requires backend running with test data (see Test Data Isolation below)
npm run test:e2e
# Run all tests (check + smoke + unit + e2e)
npm run test
```
**Test Levels:**
1. **`npm run check`** - TypeScript compilation errors
2. **`npm run test:smoke`** - Runtime errors (500s, console errors, ESM issues) - **catches app crashes**
3. **`npm run test:unit`** - Unit tests with Vitest
4. **`npm run test:e2e`** - Full E2E tests with Playwright
**Using Makefile:**
```bash
# From project root
make test-smoke # Quick smoke test
make test-check # TypeScript check
make test-unit # Unit tests
make test-e2e # E2E tests (auto-starts backend with test data)
make test-all # All tests
```
**Test Data Isolation:**
E2E tests use a separate test data file (`frontend/tests/test_data_model.yml`) to avoid polluting your production data model. **Playwright automatically starts the backend** with the correct environment variable, so you don't need to manage it manually.
```bash
# Just run E2E tests - backend starts automatically with test data
make test-e2e
# OR:
# cd frontend && npm run test:e2e
```
The test data file is automatically cleaned before and after test runs via Playwright's `globalSetup` and `globalTeardown`. Your production `data_model.yml` remains untouched.
### Backend
**Testing Libraries:**
The following testing libraries are defined in `pyproject.toml` under `[project.optional-dependencies]` in the `dev` group:
- [pytest](https://docs.pytest.org/) (Testing framework)
- [httpx](https://www.python-httpx.org/) (Async HTTP client for API testing)
**Installation:**
Unlike `npm`, `uv sync` does not install optional dependencies by default. To include the testing libraries, run:
```bash
uv sync --extra dev
```
**Running Tests:**
```bash
uv run pytest
```
## Collaboration
If you want to collaborate, reach out!
## Contributing and CLA
- Contributions are welcome! Please read [`CONTRIBUTING.md`](CONTRIBUTING.md) for workflow, testing, and PR guidelines.
- All contributors must sign the CLA once per GitHub account. The CLA bot on pull requests will guide you; see [`CLA.md`](CLA.md) for details.
## Acknowledgments
- Thanks to [dbt-colibri](https://github.com/dbt-labs/dbt-colibri) for providing lineage extraction capabilities that enhance trellis's data model visualization features.
## License
- Trellis Datamodel is licensed under the [GNU Affero General Public License v3.0](LICENSE).
- See [`NOTICE`](NOTICE) for a summary of copyright and licensing information.
| text/markdown | Tim Hiebenthal | null | null | null | null | dbt, data-modeling, erd, data-engineering, analytics-engineering, visualization, schema | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Database",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"dbt-core<2.0,>=1.10.5",
"dbt-colibri>=0.1.0",
"dbt-duckdb>=1.10.0",
"fastapi>=0.121.3",
"pydantic>=2.0.0",
"python-dotenv>=1.2.1",
"pyyaml>=6.0.3",
"ruamel.yaml>=0.18.0",
"typer>=0.9.0",
"uvicorn>=0.38.0",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"httpx... | [] | [] | [] | [
"Homepage, https://app.capacities.io/home/8b7546f6-9028-4209-a383-c4a9ba9be42a",
"Repository, https://github.com/timhiebenthal/trellis-datamodel",
"Issues, https://github.com/timhiebenthal/trellis-datamodel/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:29:11.163997 | trellis_datamodel-0.9.0.tar.gz | 939,356 | 1e/b7/531cc6ce68ab51520b8db28e5a6ee4726ec3ea8ba058c802b1c72bf33c0f/trellis_datamodel-0.9.0.tar.gz | source | sdist | null | false | 382101231081e6c22e836e2400a314e3 | 5102c9985ec8cb893f9552739e274d40d630d58ca353284131b1d057924bd98b | 1eb7531cc6ce68ab51520b8db28e5a6ee4726ec3ea8ba058c802b1c72bf33c0f | null | [
"LICENSE",
"NOTICE"
] | 233 |
2.4 | neural-ssm | 0.33 | Neural robust state space models in PyTorch | # neural-ssm
PyTorch implementations of state-space models (SSMs) with a built-in robustness certificate in the form of a tunable L2-bound.
This is obtained by using:
- free parametrizations of L2-bounded linear dynamical systems
- Lipschitz-bounded static nonlinearities
The mathematical details are in:
- **Free Parametrization of L2-bounded State Space Models**
https://arxiv.org/abs/2503.23818
## Installation
Install from pip:
```bash
pip install neural-ssm
```
Install the latest GitHub version:
```bash
pip install git+https://github.com/LeoMassai/neural-ssm.git
```
## Architecture and robustness recipe
Let's see what an SSM is more in detail

Reading the figure from left to right:
1. Input is projected by a linear encoder.
2. A stack of SSL blocks is applied.
3. Each block combines:
- a dynamic recurrent core with different state-space parametrizations (`lru`, `l2n`, or `tv`)
- a static nonlinearity (`LGLU`, `LMLP`, `GLU`, ...)
- a residual connection.
4. Output is projected by a linear decoder.
Main message: `l2n` and `tv`, when used with a Lipschitz-bounded nonlinearity such as `LGLU`, enable robust deep SSMs with prescribed L2 bound.
## Main parametrizations
- `lru`: inspired by "Resurrecting Linear Recurrences"; parametrizes all and only stable LTI systems.
- `l2n`: free parametrization of all and only LTI systems with a prescribed L2 bound.
- `tv`: free parametrization of a time-varying selective recurrent unit with prescribed L2 bound (paper in preparation).
All these parametrizations support both forward execution modes:
- parallel scan via `mode="scan"` (tipically very fast for long sequences)
- standard recurrence loop via `mode="loop"`
You select the mode at call time, e.g. `model(u, mode="scan")` or `model(u, mode="loop")`.
## Main SSM parameters
The SSM model is implemented by the class DeepSSM, which takes a number of input parameters, here are the most important ones:
- `d_input`: input feature dimension.
- `d_output`: output feature dimension.
- `d_model`: latent model dimension used inside each SSL block.
- `d_state`: internal recurrent state dimension.
- `n_layers`: number of stacked SSL blocks.
- `param`: parametrization of the recurrent unit (`lru`, `l2n`, `tv`, ...).
- `ff`: static nonlinearity type, same for each SSL block (`GLU`, `MLP`, `LMLP`, `LGLU`, `TLIP`).
- `gamma`: desired L_2 bound of the overall SSM. If `gamma=None`, it is trainable.
## Where each component is in the code
- End-to-end wrapper (encoder, stack, decoder):
`DeepSSM` in `src/neural_ssm/ssm/lru.py`
- Repeated SSM block (dynamic core + nonlinearity + residual):
`SSL` in `src/neural_ssm/ssm/lru.py`
- Dynamic cores:
- `lru` -> `LRU` in `src/neural_ssm/ssm/lru.py`
- `l2n` -> `Block2x2DenseL2SSM` in `src/neural_ssm/ssm/lru.py`
- `tv` -> `RobustMambaDiagSSM` in `src/neural_ssm/ssm/mamba.py`
- Static nonlinearities:
- `GLU`, `MLP` in `src/neural_ssm/static_layers/generic_layers.py`
- `LGLU`, `LMLP`, `TLIP` in `src/neural_ssm/static_layers/lipschitz_mlps.py`
- Parallel scan utilities:
`src/neural_ssm/ssm/scan_utils.py`
## Quick tutorial
For a complete, runnable training example on a nonlinear benchmark dataset, see:
`Test_files/Tutorial_DeepSSM.py`
For a minimal Long Range Arena (LRA) benchmark script (ListOps) using `DeepSSM`, see:
`scripts/run_lra_listops.py` (requires `pip install datasets`). The script defaults to
the Hugging Face dataset `lra-benchmarks` with config `listops`.
### Tensor shapes and forward outputs
- Input tensor shape is `u: (B, L, d_input)` where:
- `B` = batch size
- `L` = sequence length
- `d_input` = input dimension
- Output tensor shape is `y: (B, L, d_output)`.
- `DeepSSM` returns two objects:
- `y`: the model output sequence
- `state`: a list of recurrent states (one tensor per SSL block), useful for stateful calls.
State handling in `forward` (DeepSSM and recurrent cores):
- `state`: optional explicit initial state (or state list for `DeepSSM`).
- `reset_state`: defaults to `True`.
- `True`: start from zero state for this call.
- `False`: reuse internal state if `state` is not provided.
- `detach_state`: defaults to `True`.
- `True`: internal state is detached at the end of the call (no cross-call BPTT).
- `False`: internal state keeps gradient history (for true BPTT across calls).
- `reset()`: manually resets internal state to zero.
### How to create and call a Deep SSM
Building and using the SSM is pretty easy:
```python
import torch
from neural_ssm import DeepSSM
model = DeepSSM(
d_input=1,
d_output=1,
d_model=16,
d_state=16,
n_layers=4,
param="tv",
ff="LGLU",
gamma=2.0,
)
u = torch.randn(8, 200, 1) # (B, L, d_input)
y, state = model(u, mode="scan") # reset_state=True by default
# Stateful call: pass one state per SSL block
u_next = torch.randn(8, 200, 1)
y_next, state = model(u_next, state=state, mode="scan", reset_state=False)
# Stateful call without passing explicit state (reuse internal state)
y_stream, state = model(u_next, mode="scan", reset_state=False)
# True BPTT across calls: keep state in graph
y_bptt, state = model(u_next, mode="scan", reset_state=False, detach_state=False)
```
## Top-level API
- `DeepSSM`, `SSMConfig`
- `LRU`, `L2RU`, `lruz`
- static layers re-exported in `neural_ssm.layers`
## Examples
Example and experiment scripts are available in `Test_files/`, including:
- `Test_files/Tutorial_DeepSSM.py`: minimal end-to-end DeepSSM training tutorial.
## Citation
If you use this repository in research, please cite:
**Free Parametrization of L2-bounded State Space Models**
https://arxiv.org/abs/2503.23818
| text/markdown | Leonardo Massai | null | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operat... | [] | null | null | >=3.9 | [] | [] | [] | [
"torch>=2.2",
"numpy>=1.23",
"tqdm>=4.66",
"matplotlib>=3.7",
"einops>=0.6",
"typing-extensions>=4.5",
"jax>=0.4.26",
"jaxlib>=0.4.26",
"deel-torchlip",
"black>=24.3.0; extra == \"dev\"",
"ruff>=0.3.0; extra == \"dev\"",
"mypy>=1.6.0; extra == \"dev\"",
"pytest>=7.4; extra == \"dev\"",
"py... | [] | [] | [] | [
"Homepage, https://github.com/DecodEPFL/SSM",
"Repository, https://github.com/DecodEPFL/SSM",
"Issues, https://github.com/DecodEPFL/SSM/issues"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-18T14:28:39.074999 | neural_ssm-0.33.tar.gz | 45,950 | 95/a9/de3032a0f8bc09eced6180e07af99e514bbd8e6addd41cbe5ffbc921a914/neural_ssm-0.33.tar.gz | source | sdist | null | false | 06526791eb7f9d3bd4a5caacc656aa89 | 52750d9d60758307ee6439cd3d717e6107416b511eab824139ec9022873d3ec5 | 95a9de3032a0f8bc09eced6180e07af99e514bbd8e6addd41cbe5ffbc921a914 | null | [] | 237 |
2.4 | pyworkflow-engine | 0.1.35 | A Python implementation of durable, event-sourced workflows inspired by Vercel Workflow | # PyWorkflow
**Distributed, durable workflow orchestration for Python**
Build long-running, fault-tolerant workflows with automatic retry, sleep/delay capabilities, and complete observability. PyWorkflow uses event sourcing and Celery for production-grade distributed execution.
---
## What is PyWorkflow?
PyWorkflow is a workflow orchestration framework that enables you to build complex, long-running business processes as simple Python code. It handles the hard parts of distributed systems: fault tolerance, automatic retries, state management, and horizontal scaling.
### Key Features
- **Distributed by Default**: All workflows execute across Celery workers for horizontal scaling
- **Durable Execution**: Event sourcing ensures workflows can recover from any failure
- **Auto Recovery**: Automatic workflow resumption after worker crashes with event replay
- **Time Travel**: Sleep for minutes, hours, or days with automatic resumption
- **Fault Tolerant**: Automatic retries with configurable backoff strategies
- **Zero-Resource Suspension**: Workflows suspend without holding resources during sleep
- **Production Ready**: Built on battle-tested Celery and Redis
- **Fully Typed**: Complete type hints and Pydantic validation
- **Observable**: Structured logging with workflow context
---
## Quick Start
### Installation
**Basic installation** (File and Memory storage backends):
```bash
pip install pyworkflow-engine
```
**With optional storage backends:**
```bash
# Redis backend (includes Redis as Celery broker)
pip install pyworkflow-engine[redis]
# SQLite backend
pip install pyworkflow-engine[sqlite]
# PostgreSQL backend
pip install pyworkflow-engine[postgres]
# All storage backends
pip install pyworkflow-engine[all]
# Development (includes all backends + dev tools)
pip install pyworkflow-engine[dev]
```
### Prerequisites
**For distributed execution** (recommended for production):
PyWorkflow uses Celery for distributed task execution. You need a message broker:
**Option 1: Redis (recommended)**
```bash
# Install Redis support
pip install pyworkflow-engine[redis]
# Start Redis
docker run -d -p 6379:6379 redis:7-alpine
# Start Celery worker(s)
celery -A pyworkflow.celery.app worker --loglevel=info
# Start Celery Beat (for automatic sleep resumption)
celery -A pyworkflow.celery.app beat --loglevel=info
```
Or use the CLI to set up Docker infrastructure:
```bash
pyworkflow setup
```
**Option 2: Other brokers** (RabbitMQ, etc.)
```bash
# Celery supports multiple brokers
# Configure via environment: CELERY_BROKER_URL=amqp://localhost
```
**For local development/testing:**
```bash
# No broker needed - use in-process execution
pyworkflow configure --runtime local
```
See [DISTRIBUTED.md](DISTRIBUTED.md) for complete deployment guide.
### Your First Workflow
```python
from pyworkflow import workflow, step, start, sleep
@step()
async def send_welcome_email(user_id: str):
# This runs on any available Celery worker
print(f"Sending welcome email to user {user_id}")
return f"Email sent to {user_id}"
@step()
async def send_tips_email(user_id: str):
print(f"Sending tips email to user {user_id}")
return f"Tips sent to {user_id}"
@workflow()
async def onboarding_workflow(user_id: str):
# Send welcome email immediately
await send_welcome_email(user_id)
# Sleep for 1 day - workflow suspends, zero resources used
await sleep("1d")
# Automatically resumes after 1 day!
await send_tips_email(user_id)
return "Onboarding complete"
# Start workflow - executes across Celery workers
run_id = start(onboarding_workflow, user_id="user_123")
print(f"Workflow started: {run_id}")
```
**What happens:**
1. Workflow starts on a Celery worker
2. Welcome email is sent
3. Workflow suspends after calling `sleep("1d")`
4. Worker is freed to handle other tasks
5. After 1 day, Celery Beat automatically schedules resumption
6. Workflow resumes on any available worker
7. Tips email is sent
---
## Core Concepts
### Workflows
Workflows are the top-level orchestration functions. They coordinate steps, handle business logic, and can sleep for extended periods.
```python
from pyworkflow import workflow, start
@workflow(name="process_order", max_duration="1h")
async def process_order(order_id: str):
"""
Process a customer order.
This workflow:
- Validates the order
- Processes payment
- Creates shipment
- Sends confirmation
"""
order = await validate_order(order_id)
payment = await process_payment(order)
shipment = await create_shipment(order)
await send_confirmation(order)
return {"order_id": order_id, "status": "completed"}
# Start the workflow
run_id = start(process_order, order_id="ORD-123")
```
### Steps
Steps are the building blocks of workflows. Each step is an isolated, retryable unit of work that runs on Celery workers.
```python
from pyworkflow import step, RetryableError, FatalError
@step(max_retries=5, retry_delay="exponential")
async def call_external_api(url: str):
"""
Call external API with automatic retry.
Retries up to 5 times with exponential backoff if it fails.
"""
try:
response = await httpx.get(url)
if response.status_code == 404:
# Don't retry - resource doesn't exist
raise FatalError("Resource not found")
if response.status_code >= 500:
# Retry - server error
raise RetryableError("Server error", retry_after="30s")
return response.json()
except httpx.NetworkError:
# Retry with exponential backoff
raise RetryableError("Network error")
```
### Force Local Steps
By default, steps in a Celery runtime are dispatched to worker processes via the message broker. For lightweight steps where the broker round-trip overhead is undesirable, use `force_local=True` to execute the step inline in the orchestrator process:
```python
from pyworkflow import step
@step(force_local=True)
async def quick_transform(data: dict):
"""Runs inline even when runtime is Celery."""
return {k: v.upper() for k, v in data.items()}
@step()
async def heavy_computation(data: dict):
"""Dispatched to a Celery worker as usual."""
# ... expensive work ...
return result
```
Force-local steps still benefit from full durability: events (`STEP_STARTED`, `STEP_COMPLETED`) are recorded, results are cached for replay, and retry/timeout behavior is preserved. The only difference is that execution happens in the orchestrator process instead of a remote worker.
**When to use `force_local`:**
- Lightweight data transformations that finish in milliseconds
- Steps that merely combine results from previous steps
- Steps where broker serialization overhead exceeds the actual computation time
**When NOT to use `force_local`:**
- CPU-intensive or I/O-heavy steps (these benefit from worker distribution)
- Steps that should scale independently from the orchestrator
### Sleep and Delays
Workflows can sleep for any duration. During sleep, the workflow suspends and consumes zero resources.
```python
from pyworkflow import workflow, sleep
@workflow()
async def scheduled_reminder(user_id: str):
# Send immediate reminder
await send_reminder(user_id, "immediate")
# Sleep for 1 hour
await sleep("1h")
await send_reminder(user_id, "1 hour later")
# Sleep for 1 day
await sleep("1d")
await send_reminder(user_id, "1 day later")
# Sleep for 1 week
await sleep("7d")
await send_reminder(user_id, "1 week later")
return "All reminders sent"
```
**Supported formats:**
- Duration strings: `"5s"`, `"10m"`, `"2h"`, `"3d"`
- Timedelta: `timedelta(hours=2, minutes=30)`
- Datetime: `datetime(2025, 12, 25, 9, 0, 0)`
---
## Architecture
### Event-Sourced Execution
PyWorkflow uses event sourcing to achieve durable, fault-tolerant execution:
1. **All state changes are recorded as events** in an append-only log
2. **Deterministic replay** enables workflow resumption from any point
3. **Complete audit trail** of everything that happened in the workflow
**Event Types** (16 total):
- Workflow: `started`, `completed`, `failed`, `suspended`, `resumed`
- Step: `started`, `completed`, `failed`, `retrying`
- Sleep: `created`, `completed`
- Logging: `info`, `warning`, `error`, `debug`
### Distributed Execution
```
┌─────────────────────────────────────────────────────┐
│ Your Application │
│ │
│ start(my_workflow, args) │
│ │ │
└─────────┼───────────────────────────────────────────┘
│
▼
┌─────────┐
│ Redis │ ◄──── Message Broker
└─────────┘
│
├──────┬──────┬──────┐
▼ ▼ ▼ ▼
┌──────┐ ┌──────┐ ┌──────┐
│Worker│ │Worker│ │Worker│ ◄──── Horizontal Scaling
└──────┘ └──────┘ └──────┘
│ │ │
└──────┴──────┘
│
▼
┌──────────┐
│ Storage │ ◄──── Event Log (File/Redis/PostgreSQL)
└──────────┘
```
### Storage Backends
PyWorkflow supports pluggable storage backends:
| Backend | Status | Installation | Use Case |
|---------|--------|--------------|----------|
| **File** | ✅ Complete | Included | Development, single-machine |
| **Memory** | ✅ Complete | Included | Testing, ephemeral workflows |
| **SQLite** | ✅ Complete | `pip install pyworkflow-engine[sqlite]` | Embedded, local persistence |
| **PostgreSQL** | ✅ Complete | `pip install pyworkflow-engine[postgres]` | Production, enterprise |
| **Redis** | 📋 Planned | `pip install pyworkflow-engine[redis]` | High-performance, distributed |
---
## Advanced Features
### Parallel Execution
Use Python's native `asyncio.gather()` for parallel step execution:
```python
import asyncio
from pyworkflow import workflow, step
@step()
async def fetch_user(user_id: str):
# Fetch user data
return {"id": user_id, "name": "Alice"}
@step()
async def fetch_orders(user_id: str):
# Fetch user orders
return [{"id": "ORD-1"}, {"id": "ORD-2"}]
@step()
async def fetch_recommendations(user_id: str):
# Fetch recommendations
return ["Product A", "Product B"]
@workflow()
async def dashboard_data(user_id: str):
# Fetch all data in parallel
user, orders, recommendations = await asyncio.gather(
fetch_user(user_id),
fetch_orders(user_id),
fetch_recommendations(user_id)
)
return {
"user": user,
"orders": orders,
"recommendations": recommendations
}
```
### Error Handling
PyWorkflow distinguishes between retriable and fatal errors:
```python
from pyworkflow import FatalError, RetryableError, step
@step(max_retries=3, retry_delay="exponential")
async def process_payment(amount: float):
try:
# Attempt payment
result = await payment_gateway.charge(amount)
return result
except InsufficientFundsError:
# Don't retry - user doesn't have enough money
raise FatalError("Insufficient funds")
except PaymentGatewayTimeoutError:
# Retry - temporary issue
raise RetryableError("Gateway timeout", retry_after="10s")
except Exception as e:
# Unknown error - retry with backoff
raise RetryableError(f"Unknown error: {e}")
```
**Retry strategies:**
- `retry_delay="fixed"` - Fixed delay between retries (default: 60s)
- `retry_delay="exponential"` - Exponential backoff (1s, 2s, 4s, 8s, ...)
- `retry_delay="5s"` - Custom fixed delay
### Auto Recovery
Workflows automatically recover from worker crashes:
```python
from pyworkflow import workflow, step, sleep
@workflow(
recover_on_worker_loss=True, # Enable recovery (default for durable)
max_recovery_attempts=5, # Max recovery attempts
)
async def resilient_workflow(data_id: str):
data = await fetch_data(data_id) # Completed steps are skipped on recovery
await sleep("10m") # Sleep state is preserved
return await process_data(data) # Continues from here after crash
```
**What happens on worker crash:**
1. Celery detects worker loss, requeues task
2. New worker picks up the task
3. Events are replayed to restore state
4. Workflow resumes from last checkpoint
Configure globally:
```python
import pyworkflow
pyworkflow.configure(
default_recover_on_worker_loss=True,
default_max_recovery_attempts=3,
)
```
Or via config file:
```yaml
# pyworkflow.config.yaml
recovery:
recover_on_worker_loss: true
max_recovery_attempts: 3
```
### Idempotency
Prevent duplicate workflow executions with idempotency keys:
```python
from pyworkflow import start
# Same idempotency key = same workflow
run_id_1 = start(
process_order,
order_id="ORD-123",
idempotency_key="order-ORD-123"
)
# This will return the same run_id, not start a new workflow
run_id_2 = start(
process_order,
order_id="ORD-123",
idempotency_key="order-ORD-123"
)
assert run_id_1 == run_id_2 # True!
```
### Observability
PyWorkflow includes structured logging with automatic context:
```python
from pyworkflow import configure_logging
# Configure logging
configure_logging(
level="INFO",
log_file="workflow.log",
json_logs=True, # JSON format for production
show_context=True # Include run_id, step_id, etc.
)
# Logs automatically include:
# - run_id: Workflow execution ID
# - workflow_name: Name of the workflow
# - step_id: Current step ID
# - step_name: Name of the step
```
---
## Testing
PyWorkflow uses a unified API for testing with local execution:
```python
import pytest
from pyworkflow import workflow, step, start, configure, reset_config
from pyworkflow.storage.memory import InMemoryStorageBackend
@step()
async def my_step(x: int):
return x * 2
@workflow()
async def my_workflow(x: int):
result = await my_step(x)
return result + 1
@pytest.fixture(autouse=True)
def setup_storage():
reset_config()
storage = InMemoryStorageBackend()
configure(storage=storage, default_durable=True)
yield storage
reset_config()
@pytest.mark.asyncio
async def test_my_workflow(setup_storage):
storage = setup_storage
run_id = await start(my_workflow, 5)
# Get workflow result
run = await storage.get_run(run_id)
assert run.status.value == "completed"
```
---
## Production Deployment
### Docker Compose
```yaml
version: '3.8'
services:
redis:
image: redis:7-alpine
ports:
- "6379:6379"
worker:
build: .
command: celery -A pyworkflow.celery.app worker --loglevel=info
depends_on:
- redis
deploy:
replicas: 3 # Run 3 workers
beat:
build: .
command: celery -A pyworkflow.celery.app beat --loglevel=info
depends_on:
- redis
flower:
build: .
command: celery -A pyworkflow.celery.app flower --port=5555
ports:
- "5555:5555"
```
Start everything using the CLI:
```bash
pyworkflow setup
```
See [DISTRIBUTED.md](DISTRIBUTED.md) for complete deployment guide with Kubernetes.
---
## Examples
Check out the [examples/](examples/) directory for complete working examples:
- **[basic_workflow.py](examples/functional/basic_workflow.py)** - Complete example with retries, errors, and sleep
- **[distributed_example.py](examples/functional/distributed_example.py)** - Multi-worker distributed execution example
---
## Project Status
✅ **Status**: Production Ready (v1.0)
**Completed Features**:
- ✅ Core workflow and step execution
- ✅ Event sourcing with 16 event types
- ✅ Distributed execution via Celery
- ✅ Sleep primitive with automatic resumption
- ✅ Error handling and retry strategies
- ✅ File storage backend
- ✅ Structured logging
- ✅ Comprehensive test coverage (68 tests)
- ✅ Docker Compose deployment
- ✅ Idempotency support
**Next Milestones**:
- 📋 Redis storage backend
- 📋 PostgreSQL storage backend
- 📋 Webhook integration
- 📋 Web UI for monitoring
- 📋 CLI management tools
---
## Contributing
Contributions are welcome!
### Development Setup
```bash
# Clone repository
git clone https://github.com/QualityUnit/pyworkflow
cd pyworkflow
# Install with Poetry
poetry install
# Run tests
poetry run pytest
# Format code
poetry run black pyworkflow tests
poetry run ruff check pyworkflow tests
# Type checking
poetry run mypy pyworkflow
```
---
## Documentation
- **[Distributed Deployment Guide](DISTRIBUTED.md)** - Production deployment with Docker Compose and Kubernetes
- [Examples](examples/) - Working examples and patterns
- [API Reference](docs/api-reference.md) (Coming soon)
- [Architecture Guide](docs/architecture.md) (Coming soon)
---
## License
Apache License 2.0 - See [LICENSE](LICENSE) file for details.
---
## Links
- **Documentation**: https://docs.pyworkflow.dev
- **GitHub**: https://github.com/QualityUnit/pyworkflow
- **Issues**: https://github.com/QualityUnit/pyworkflow/issues
| text/markdown | PyWorkflow Contributors | null | null | null | MIT | workflow, durable, event-sourcing, celery, async, orchestration | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Develo... | [] | null | null | >=3.11 | [] | [] | [] | [
"celery<6.0.0,>=5.3.0",
"cloudpickle>=3.0.0",
"pydantic<3.0.0,>=2.0.0",
"loguru>=0.7.0",
"click>=8.0.0",
"inquirerpy>=0.3.4; python_version < \"4.0\"",
"httpx>=0.25.0",
"python-dateutil>=2.8.0",
"filelock>=3.12.0",
"pyyaml>=6.0.0",
"croniter>=2.0.0",
"redis>=5.0.0; extra == \"redis\"",
"aios... | [] | [] | [] | [
"Homepage, https://docs.pyworkflow.dev",
"Documentation, https://docs.pyworkflow.dev",
"Repository, https://github.com/QualityUnit/pyworkflow",
"Issues, https://github.com/QualityUnit/pyworkflow/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:27:26.111689 | pyworkflow_engine-0.1.35.tar.gz | 453,013 | 8a/b5/0759e0bf2fbacbaeaa4573604ba9b4147cab87ac284e4bfd6c9c5d46dd07/pyworkflow_engine-0.1.35.tar.gz | source | sdist | null | false | 965c8149ff659020707033054d541336 | 67ced36ccba79e678f7bb828ab9ea2e8077d3674558e2567edc7fbf52c822bd4 | 8ab50759e0bf2fbacbaeaa4573604ba9b4147cab87ac284e4bfd6c9c5d46dd07 | null | [
"LICENSE"
] | 356 |
2.4 | js-secret-santa | 1.1.4 | null | # Secret Santa
## Steps to set up
1. Create a new repository in GitHub using this repository as a template
2. Generate Docker Hub PAT (Personal Access Token)
3. Create an [Environment in GitHub](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-deployments/managing-environments-for-deployment#creating-an-environment) with the following secrets
- DOCKER_USERNAME (Docker Hub username)
- DOCKER_PASSWORD (Docker Hub PAT)
4. Create a Docker Hub repository with the same name as the GitHub repository
5. Update `assignees` in `renovate.json` with your GitHub username
6. Set up code-cove and make sure it has access to this repository
- https://docs.codecov.com/docs/quick-start
7. Setup branch protection rules
- Set `Enrforcment Status` to `Enabled`
- Make sure `Target branches` set to `main` or default branch
- Ensure these `Branch rules` are selected
- `Restrict deletions`
- `Require status checks to pass` with these checks
- `Lint`
- `Test`
- `Block force pushes`
8. Create a PyPi `Trusted Publisher`
- https://pypi.org/manage/account/publishing/
9. Ensure the name in `pyproject.toml` matches the name of the package on PyPi
10. Make sure the following linters are installed externally of the project
- yamllint
- shellcheck
- shfmt
- node (npx/dclint)
## TODO
- [X] Handle GitHub pre-release
- [X] Update PYTHONPATH with src folder
- [X] Add custom user to Dockerfile
- [ ] Fix Dockerfile
- [ ] Fix health check
- [ ] Fix version number
| text/markdown | null | jnstockley <jnstockley@users.noreply.github.com> | null | null | null | starter, template, python | [
"Programming Language :: Python :: 3"
] | [] | null | null | <4.0,>=3.13 | [] | [] | [] | [
"pandas==3.0.1",
"python-dotenv==1.2.1"
] | [] | [] | [] | [
"Homepage, https://github.com/jnstockley/secret-santa",
"Repository, https://github.com/jnstockley/secret-santa.git",
"Issues, https://github.com/jnstockley/secret-santa/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:27:24.523397 | js_secret_santa-1.1.4.tar.gz | 17,006 | c2/a5/ec3709b8245f241a7aeba919284c3bab36c7a7310235173410ad77de0ae6/js_secret_santa-1.1.4.tar.gz | source | sdist | null | false | a7e803d663a3f37e32c3c30cb82f5bd1 | 260740804f3bce56649681bd4900b28629fd3521a92ff2da0f46ac2b3135f883 | c2a5ec3709b8245f241a7aeba919284c3bab36c7a7310235173410ad77de0ae6 | null | [
"LICENSE"
] | 228 |
2.4 | chap-core | 1.2.0 | Climate Health Analysis Platform (CHAP) | # Welcome to the Chap modelling platform!
[](https://github.com/dhis2-chap/chap-core/actions/workflows/ci-test-python-install.yml)
[](https://pypi.org/project/chap-core/)
[](https://www.python.org/downloads/)
[](https://www.gnu.org/licenses/agpl-3.0)
[](https://chap.dhis2.org/chap-modeling-platform/)
This is the main repository for the Chap modelling platform.
[Read more about the Chap project here](https://chap.dhis2.org/about/)
## Code documentation
The main documentation for the modelling platform is located at [https://chap.dhis2.org/chap-documentation/](https://chap.dhis2.org/chap-documentation/).
## Development / contribution
Information about how to contributre to the the Chap Modelling Platform: [https://github.com/orgs/dhis2-chap/projects/4](https://chap.dhis2.org/chap-documentation/contributor/).
## Issues/Bugs
If you find any bugs or issues when using this code base, we appreciate it if you file a bug report here: https://github.com/dhis2-chap/chap-core/issues/new
## Launch development instance using Docker
```shell
cp .env.example .env
docker compose up
```
| text/markdown | null | Chap Team <chap@dhis2.org> | null | null | AGPLv3 license | chap_core | [] | [] | null | null | <3.14,>=3.13 | [] | [] | [] | [
"alembic>=1.14.0",
"altair>=5.5.0",
"bionumpy>=1.0.14",
"celery[pytest]>=5.5.3",
"chapkit>=0.16",
"cryptography>=46.0.3",
"cyclopts>=3.24.0",
"diskcache>=5.6.3",
"docker>=7.1.0",
"fastapi>=0.118.0",
"geopandas>=1.1.1",
"geopy>=2.4.1",
"gitpython>=3.1.45",
"gluonts>=0.16.2",
"gunicorn>=23... | [] | [] | [] | [
"Homepage, https://github.com/dhis2/chap-core"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:27:20.034550 | chap_core-1.2.0.tar.gz | 25,798,140 | b2/94/e9bfc810d7c46286968edc83e78fdb81c7ce282b88abd25e0914472e2226/chap_core-1.2.0.tar.gz | source | sdist | null | false | 3ee5b2567edf272485e94c9d771e4fb2 | 2de06788fd07fb68882eb3561e2f9f54e3de83eb3f9657279a3e639497870bc1 | b294e9bfc810d7c46286968edc83e78fdb81c7ce282b88abd25e0914472e2226 | null | [
"LICENSE"
] | 236 |
2.4 | yirgacheffe | 1.12.5 | Abstraction of gdal datasets for doing basic math operations | # Yirgacheffe: a declarative geospatial library for Python to make data-science with maps easier
[](https://github.com/quantifyearth/yirgacheffe/actions)
[](https://yirgacheffe.org)
[](https://pypi.org/project/yirgacheffe/)
## Overview
Yirgacheffe is a declarative geospatial library, allowing you to operate on both raster and polygon geospatial datasets without having to do all the tedious book keeping around layer alignment or dealing with hardware concerns around memory or parallelism. you can load into memory safely.
Example common use-cases:
* Do the datasets overlap? Yirgacheffe will let you define either the intersection or the union of a set of different datasets, scaling up or down the area as required.
* Rasterisation of vector layers: if you have a vector dataset then you can add that to your computation and yirgaceffe will rasterize it on demand, so you never need to store more data in memory than necessary.
* Do the raster layers get big and take up large amounts of memory? Yirgacheffe will let you do simple numerical operations with layers directly and then worry about the memory management behind the scenes for you.
* Parallelisation of operations over many CPU cores.
* Built in support for optionally using GPUs via [MLX](https://ml-explore.github.io/mlx/build/html/index.html) support.
## Installation
Yirgacheffe is available via pypi, so can be installed with pip for example:
```SystemShell
$ pip install yirgacheffe
```
## Documentation
The documentation can be found on [yirgacheffe.org](https://yirgacheffe.org/)
## Simple examples:
Here is how to do cloud removal from [Sentinel-2 data](https://browser.dataspace.copernicus.eu/?zoom=14&lat=6.15468&lng=38.20581&themeId=DEFAULT-THEME&visualizationUrl=U2FsdGVkX1944lrmeTJcaSsnoxNMp4oucN1AjklGUANHd2cRZWyXnepHvzpaOWzMhH8SrWQo%2BqrOvOnu6f9FeCMrS%2FDZmvjzID%2FoE1tbOCEHK8ohPXjFqYojeR9%2B82ri&datasetId=S2_L2A_CDAS&fromTime=2025-09-09T00%3A00%3A00.000Z&toTime=2025-09-09T23%3A59%3A59.999Z&layerId=1_TRUE_COLOR&demSource3D=%22MAPZEN%22&cloudCoverage=30&dateMode=SINGLE), using the [Scene Classification Layer](https://custom-scripts.sentinel-hub.com/custom-scripts/sentinel-2/scene-classification/) data:
```python
import yirgaceffe as yg
with (
yg.read_raster("T37NCG_20250909T073609_B06_20m.jp2") as vre2,
yg.read_raster("T37NCG_20250909T073609_SCL_20m.jp2") as scl,
):
is_cloud = (scl == 8) | (scl == 9) | (scl == 10) # various cloud types
is_shadow = (scl == 3)
is_bad = is_cloud | is_shadow
masked_vre2 = yg.where(is_bad, float("nan"), vre2)
masked_vre2.to_geotiff("vre2_cleaned.tif")
```
or a species' [Area of Habitat](https://www.sciencedirect.com/science/article/pii/S0169534719301892) calculation:
```python
import yirgaceffe as yg
with (
yg.read_raster("habitats.tif") as habitat_map,
yg.read_raster('elevation.tif') as elevation_map,
yg.read_shape('species123.geojson') as range_map,
):
refined_habitat = habitat_map.isin([...species habitat codes...])
refined_elevation = (elevation_map >= species_min) & (elevation_map <= species_max)
aoh = refined_habitat * refined_elevation * range_polygon * area_per_pixel_map
print(f'Area of habitat: {aoh.sum()}')
```
## Citation
If you use Yirgacheffe in your research, please cite our paper:
> Michael Winston Dales, Alison Eyres, Patrick Ferris, Francesca A. Ridley, Simon Tarr, and Anil Madhavapeddy. 2025. Yirgacheffe: A Declarative Approach to Geospatial Data. In *Proceedings of the 2nd ACM SIGPLAN International Workshop on Programming for the Planet* (PROPL '25). Association for Computing Machinery, New York, NY, USA, 47–54. https://doi.org/10.1145/3759536.3763806
<details>
<summary>BibTeX</summary>
```bibtex
@inproceedings{10.1145/3759536.3763806,
author = {Dales, Michael Winston and Eyres, Alison and Ferris, Patrick and Ridley, Francesca A. and Tarr, Simon and Madhavapeddy, Anil},
title = {Yirgacheffe: A Declarative Approach to Geospatial Data},
year = {2025},
isbn = {9798400721618},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3759536.3763806},
doi = {10.1145/3759536.3763806},
abstract = {We present Yirgacheffe, a declarative geospatial library that allows spatial algorithms to be implemented concisely, supports parallel execution, and avoids common errors by automatically handling data (large geospatial rasters) and resources (cores, memory, GPUs). Our primary user domain comprises ecologists, where a typical problem involves cleaning messy occurrence data, overlaying it over tiled rasters, combining layers, and deriving actionable insights from the results. We describe the successes of this approach towards driving key pipelines related to global biodiversity and describe the capability gaps that remain, hoping to motivate more research into geospatial domain-specific languages.},
booktitle = {Proceedings of the 2nd ACM SIGPLAN International Workshop on Programming for the Planet},
pages = {47–54},
numpages = {8},
keywords = {Biodiversity, Declarative, Geospatial, Python},
location = {Singapore, Singapore},
series = {PROPL '25}
}
```
</details>
## Thanks
Thanks to discussion and feedback from my colleagues, particularly Alison Eyres, Patrick Ferris, Amelia Holcomb, and Anil Madhavapeddy.
Inspired by the work of Daniele Baisero in his AoH library.
| text/markdown | null | Michael Dales <mwd24@cam.ac.uk> | null | null | null | gdal, gis, geospatial, declarative | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: GIS"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy<3.0,>=1.24",
"gdal[numpy]<4.0,>=3.8",
"scikit-image<1.0,>=0.20",
"torch",
"dill",
"deprecation",
"tomli",
"h3",
"pyproj",
"lazy_loader",
"mlx; extra == \"mlx\"",
"matplotlib; extra == \"matplotlib\"",
"mypy; extra == \"dev\"",
"pylint; extra == \"dev\"",
"pytest; extra == \"dev\""... | [] | [] | [] | [
"Homepage, https://yirgacheffe.org/",
"Repository, https://github.com/quantifyearth/yirgacheffe.git",
"Issues, https://github.com/quantifyearth/yirgacheffe/issues",
"Changelog, https://yirgacheffe.org/latest/changelog/"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T14:27:19.587268 | yirgacheffe-1.12.5.tar.gz | 57,794 | 3d/fa/61f0f326296e4389bf76148c1c61fa865d035d930e738cf2c07c59971c7f/yirgacheffe-1.12.5.tar.gz | source | sdist | null | false | b988a2f1153cc3f70278745a2d4dded4 | f212e5e84aa0e9e088ae57d692f1648a0d05754c3458bac730dc40e660f1abf6 | 3dfa61f0f326296e4389bf76148c1c61fa865d035d930e738cf2c07c59971c7f | ISC | [
"LICENSE"
] | 292 |
2.4 | teamplify | 0.14.0 | Teamplify on-premise runner | # Teamplify runner
[Teamplify](https://teamplify.com) is a personal assistant for your development
team. It helps you track work progress and notify your team about situations
that may require their attention. It is available
[in the cloud](https://teamplify.com) or as an on-premise installation.
This package is the installer and runner for the on-premise version.
* [System requirements](#system-requirements)
* [On Ubuntu, be sure to install pip3](#on-ubuntu-be-sure-to-install-pip3)
* [Hardware](#hardware)
* [Installing](#installing)
* [Installing on Linux](#installing-on-linux)
* [Installing on Mac OS X](#installing-on-mac-os-x)
* [Configuration](#configuration)
* [Where are configuration files located?](#where-are-configuration-files-located)
* [A reference of all configuration options](#a-reference-of-all-configuration-options)
* [Starting and stopping the service](#starting-and-stopping-the-service)
* [What to do after the first run?](#what-to-do-after-the-first-run)
* [Creating an admin account](#creating-an-admin-account)
* [Updating Teamplify](#updating-teamplify)
* [Backup and restore](#backup-and-restore)
* [A Sample maintenance script](#a-sample-maintenance-script)
* [Uninstall](#uninstall)
* [Troubleshooting](#troubleshooting)
* [Teamplify won't start](#teamplify-wont-start)
* [Email delivery issues](#email-delivery-issues)
* [The connection is refused or not trusted in SSL-enabled mode](#the-connection-is-refused-or-not-trusted-in-ssl-enabled-mode)
* [Other](#other)
* [License](#license)
## System requirements
Teamplify is designed to run on Linux. For demonstration purposes, it should
also deploy on Mac OS X. Windows is not supported.
Before you install, make sure that your system has the following components:
* [Docker version 1.13 and above](https://docs.docker.com/install/);
* [Docker Compose V2](https://docs.docker.com/compose/install/);
* [Python 3.9 and above](https://www.python.org/downloads/);
* [pip for Python 3](https://packaging.python.org/tutorials/installing-packages/#ensure-you-can-run-pip-from-the-command-line)
To check that the required versions are installed, run these commands (shown
with example output):
``` shell
$ docker -v
Docker version 27.5.1, build 9f9e405
$ python3 --version
Python 3.12.3
$ pip3 --version
pip 24.0 from /usr/lib/python3/dist-packages/pip (python 3.12)
```
### On Ubuntu, be sure to install `pip3`
On most systems, `Python 3` comes with `pip3` pre-installed. However,
in Ubuntu `Python 3` and `pip3` are installed separately. To install
`pip3` run:
```shell
$ sudo apt install python3-pip
```
**Important**: after installing, exit the terminal and re-open it.
This forces the terminal to update its path configuration, so that you
can find the command line tools installed with `pip3` in your `$PATH`.
### Hardware
As a default server configuration, we recommend 8GB of RAM, 4 CPU cores, and
30 GB of disk space (SSD is strongly recommended). For most small-to-medium
organizations (up to a few dozen people), this should be enough. Larger
workloads, however, may need more resources. The recommended strategy is to
start with the default server configuration and scale up or down depending on
the workload.
## Installing
### Installing on Linux
After installing Docker, check Docker's
[post-installation steps for Linux](https://docs.docker.com/install/linux/linux-postinstall/).
You probably want to make sure that you can run Docker commands without
`sudo`, and that Docker is configured to start on boot.
Install the latest version of Teamplify runner with pip:
``` shell
$ pip3 install -U teamplify
```
### Installing on Mac OS X
On Mac OS X, we recommend installing Teamplify in a Python virtual
environment located in your home directory. This is because Teamplify needs to
mount its configuration files into Docker containers. By default on Mac OS X,
only the `/Users` folder is shared with Docker.
1. Create a new Python virtual environment for Teamplify in your home directory:
``` shell
$ python3 -m venv ~/.venv/teamplify
```
2. Activate it:
``` shell
$ source ~/.venv/teamplify/bin/activate
```
3. At this point, a `pip` command is linked to the virtual environment that
you just created. Install Teamplify runner with `pip`:
``` shell
$ pip install teamplify
```
## Configuration
Teamplify requires a configuration file to run.
1. Run the following command to create the initial file:
``` shell
$ teamplify configure
```
_This creates a configuration file with default settings in your home
directory: `~/.teamplify.ini`. You can specify the location of
your file with the `--config` option._
2. Use your text editor to adjust the contents of this file.
You need to specify the following parameters:
* `product_key` in the `[main]` section
* `host` and `port` in the `[web]` section
Other parameters are optional and can keep their default values.
You can review them at [the reference of all configuration options](#A-reference-of-all-configuration-options)
### Where are configuration files located?
When you run `teamplify configure`, it creates a configuration file.
Typically, this file is at `~/.teamplify.ini`.
However, that is not the only
possible location. Teamplify searches the following locations (listed from
highest priority to lowest priority):
1. First, it checks the location specified in the `--config` parameter in the
command line. Example:
``` shell
$ teamplify --config /path/to/configuration/file start
```
2. An environment variable named `TEAMPLIFY_CONF`. Example:
``` shell
$ TEAMPLIFY_CONF=/path/to/configuration/file teamplify start
```
3. In the home directory of the current user: `~/.teamplify.ini`;
4. At `/etc/teamplify/teamplify.ini`.
### A reference of all configuration options
`[main]`
- `product_key` - the product key of your installation. This is required.
To get the product key, please email us at
[support@teamplify.com](mailto:support@teamplify.com);
- `send_crash_reports` - possible values are `yes` and `no`, defaults to `yes`.
When set to `yes`, the system automatically sends application crash
reports to our team. We recommend keeping this option enabled as it helps us
to detect bugs faster and ship fixes for them more quickly;
`[web]`
> ⚠️ Please note that the built-in SSL option with certificates from
> Let's Encrypt requires standard ports for HTTP and HTTPS: `port = 80` and
> `ssl_port = 443`. This requirement comes from the
> [Let's Encrypt HTTP-01 challenge](https://letsencrypt.org/docs/challenge-types/#http-01-challenge),
> which does not support custom ports. However, you can still use custom HTTP and
> HTTPS ports if you provide your own SSL certificates or use an external proxy
> with `use_ssl = external`.
- `host` - domain name on which the Teamplify web interface will be running. It
must be created in advance, and pointed to the server where you have
installed Teamplify;
- `port` - the port on which Teamplify web interface will be running, the default
is `80`. If `use_ssl` is set to `builtin` and no SSL certificates path is specified
then `80` is the only allowed option;
- `use_ssl` - SSL mode. Possible values are `no`, `builtin`, and `external`, defaults to
`no`. When set to `builtin`, Teamplify will serve HTTPS requests on the port specified
in the `ssl_port` option below. All HTTP traffic will be redirected to this port.
If you're hosting Teamplify behind a proxy or load balancer that is already
configured for SSL support, please set this parameter to `external`,
and also make sure that your proxy correctly sets `X-Forwarded-Proto` HTTP header.
In this case, the `port` setting specifies the local port that Teamplify binds to
(for the load balancer to forward traffic to), and the startup check will verify
that your service is accessible at `https://<host>` (standard port 443).
- `ssl_port` - the port on which Teamplify web interface will be running when SSL
is enabled, the default is `443`. If `use_ssl` is set to `builtin` and
no SSL certificates path is specified, then `443` is the only allowed option;
- `ssl_certs` (optional) - a path to a directory which contains SSL
certificates. The directory must contain certificate (.crt) and key (.key) files
for the domain specified in the `host` option above. The filenames must be equal
the domain name. For example, if the domain is `example.com`, the filenames
must be `example.com.crt` and `example.com.key`. If no path is specified,
Teamplify runner will use [Let's Encrypt](https://letsencrypt.org)
to generate and renew SSL certificates for the domain that you specified
in the `host` parameter above.
- `count` (optional) - the number of web instances to run. The default is `2`.
`[runner]`
`[db]`
- `host` - defaults to `builtin_db`, that is, using the DB instance that is
shipped with Teamplify. You can also switch to an external MySQL 5.7
compatible database by providing its hostname instead of `builtin_db` and
specifying other DB connection parameters below;
- `name` - the database name to use. Must be `teamplify` if `builtin_db` is
used;
- `port` - the database port. Must be `3306` for `builtin_db`;
- `user` - DB user. Must be `root` for `builtin_db`;
- `password` - DB password. Must be `teamplify` for `builtin_db`;
- `backup_mount` - a path to a directory on the server which will be mounted
into the built-in DB instance container. It is used as a temporary directory
in the process of making and restoring backups;
`[email]`
- `address_from` - email address used by Teamplify in the FROM field of its
email messages. It could be either a plain email address or an email address
with a display name, like this:
`Teamplify <teamplify@your-company-domain.com>`;
- `smtp_host` - hostname of an SMTP server used to send emails. Defaults to
`builtin_smtp` which means using the SMTP server that is shipped with
Teamplify. Built-in SMTP for Teamplify is based on Postfix, and it is
production-ready. However, if you plan to use it, we strongly recommend that
you add the address of Teamplify's server to the
[SPF record](http://www.openspf.org/SPF_Record_Syntax) of the domain used
in the `address_from` setting to prevent Teamplify emails from being marked
as spam. Or, you can configure Teamplify to use an external SMTP server by
providing its hostname instead of `builtin_smtp` and configuring other
SMTP connection settings below;
- `smtp_protocol` - SMTP protocol to use. Can be `plain`, `ssl`, or `tls`.
Must be `plain` if you use `builtin_smtp`;
- `smtp_port` - SMTP port to use. Must be `25` for `builtin_smtp`;
- `smtp_user` - username for the SMTP server. Must be blank for `builtin_smtp`;
- `smtp_password` - password for the SMTP server. Must be blank for
`builtin_smtp`;
`[crypto]`
- `signing_key` - the random secret string used by Teamplify for signing
cookies and generating CSRF protection tokens. It is automatically generated
when you run `teamplify configure`, and typically you don't need to change
it unless you think that it may be compromised. In such cases, replace it with
another 50-character random string made of Latin characters and numbers
(please note that this will force all existing users to log in to the system
again).
`[worker]`
- `slim_count` - number of workers doing background tasks, such as sending
notifications and emails. The default setting is `1`, which is sufficient
for most organizations.
- `fat_count` - the number of workers synchronizing resources, such as
repositories, chats, issues, etc. If you have a lot of resources or it takes
a long time to sync them, you can increase this parameter. Essentially, it
controls how many resources are synchronized in parallel. The default setting
is `3`.
## Starting and stopping the service
After you have created the configuration file, start Teamplify with:
``` shell
$ teamplify start
```
On the first run, the application has to download and configure many Docker
images. For this reason, the first run might take a while to start.
After the command completes, open Teamplify in your browser using the `host` and
the `port` that you provided in the `[web]` section of the configuration. After
starting the service, it might take a minute or two before it finally comes
online.
If you have problems starting Teamplify, please see the
[Troubleshooting](#troubleshooting) section below.
If you need to stop Teamplify, run:
``` shell
$ teamplify stop
```
After running Teamplify for the first time,
[follow the instructions to create an admin account](#create-an-admin-account)
There's also a convenient command to stop the service and start it again. This
is useful when applying changes made to the configuration:
``` shell
$ teamplify restart
```
## What to do after the first run?
After the first run, you need to create an admin account.
### Creating an admin account
After the application first starts, run the following command:
``` shell
$ teamplify createadmin --email <admin@email> --full-name <Full Name>
```
Please check the output to make sure that no errors occurred.
With the `createadmin` command, you can create as many
admin account as you want. However, after creating the first one,
it might be more convenient to create the others through the Teamplify UI.
## Updating Teamplify
Teamplify installation consists of the Teamplify runner and the Teamplify
product itself, which ships in the form of Docker images. We recommend
that you use the most recent version to keep up with the latest features and
bugfixes. The update process consists of two steps:
1. Update Teamplify runner:
``` shell
$ pip3 install -U teamplify
```
2. Update Teamplify itself:
``` shell
$ teamplify update
```
The `update` command automatically detects if a new version has been
downloaded and, if necessary, restarts the service. Because a service restart
causes a short downtime, you should ideally update in periods of low user
activity. The `update` command restarts the service only when necessary. If no
update has been downloaded, there is no restart and therefore no service
interruption.
## Backup and restore
Teamplify stores your data in a MySQL database. As with any other database, it
might be a good idea to make backups from time to time.
To back up the built-in Teamplify database, run:
``` shell
$ teamplify backup [optional-backup-file-or-directory]
```
If launched without parameters, it makes a gzipped backup of the DB and
stores it in the current working directory, under a name in the format:
* `teamplify_<current-date>.sql.gz`,
* for example, `teamplify_2019-01-31_06-58-57.sql.gz`.
You may specify the directory or file path where you'd like to save your backup.
To restore the built-in Teamplify database from a gzipped backup, run:
``` shell
$ teamplify restore <path-to-a-backup-file>
```
Please note that the commands above work with only a built-in database.
If you're running Teamplify with an external database, you need to use tools
for backups or restores that connect to that database directly.
## A Sample maintenance script
Backing up the data and keeping the software up-to-date are routine operations.
We recommend automating these processes. Below is a sample script you can use to
do so.
1. Create a file named `teamplify-maintenance.sh` with the following contents:
``` shell
#!/usr/bin/env bash
# Backups directory:
BACKUP_LOCATION=/backups/teamplify/
# How many days should we store the backups for:
BACKUP_STORE_DAYS=14
# Back up Teamplify DB and update Teamplify:
mkdir -p $BACKUP_LOCATION && \
pip3 install -U teamplify && \
teamplify backup $BACKUP_LOCATION && \
teamplify update
# If the update was successful, clean up old backups:
if [ $? -eq 0 ]; then
find $BACKUP_LOCATION -type f -mmin +$((60 * 24 * $BACKUP_STORE_DAYS)) \
-name 'teamplify_*.sql.gz' -execdir rm -- '{}' \;
fi
# The final step is optional but recommended. Add your code so that would
# sync contents of $BACKUP_LOCATION to a physically remote location.
#
# ... add your backups sync code below:
```
In the code above, adjust the path for `$BACKUP_LOCATION` and the value
for `$BACKUP_STORE_DAYS` as necessary. At the end of the script, you can add
your own code that would sync your backups to a remote location. This is an
optional but highly recommended precaution that would help you recover your
backup in the case of a disaster. For example, you can use
[aws s3 sync](https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html) to
upload the backups to AWS S3.
2. When the maintenance script is ready, make it executable with
`chmod +x teamplify-maintenance.sh`.
3. Set the script to run as a daily cron job. Open the crontab schedule:
``` shell
$ crontab -e
```
Append the following entry (remember to replace the path to the script):
``` shell
0 3 * * * /path/to/the/script/teamplify-maintenance.sh
```
In the example above, the script is scheduled to run daily at 3 AM. See
[cron syntax](https://en.wikipedia.org/wiki/Cron) for a detailed explanation.
When ready, save and close the file.
## Uninstall
If you'd like to uninstall Teamplify, use the following steps.
**IMPORTANT**: The uninstall procedure erases all data stored in Teamplify.
Before doing this, [consider making a backup](#backup-and-restore).
1. Remove all Teamplify data, Docker images, volumes, and networks:
``` shell
$ teamplify erase
```
2. Uninstall Teamplify runner:
``` shell
$ pip3 uninstall teamplify
```
## Troubleshooting
So what could possibly go wrong?
### Teamplify won't start
Please check the following:
- The service won't start if the configuration file is missing or contains
errors. In such cases, the `teamplify start` command will report a problem.
Please inspect its output;
- There could be a problem with the domain name configuration. If the
`teamplify start` command has completed successfully, you should see
Teamplify's interface in the browser when you open an address specified in the
`host` and `port` parameters in the `[web]` section of the
[Configuration](#configuration). If that doesn't happen, i.e. if browser says
that it can't find the server or the server is not responding, then most
likely this is a problem with either the domain name or firewall
configuration. Please make sure that the domain exists and points to the
Teamplify server, and that the port is open in the firewall;
- If you see the "Teamplify is starting" message, you should give it a minute
or two to come online. If nothing happens after a few minutes, there could be
a problem during application start. Application logs may contain additional
information:
``` shell
$ docker logs teamplify_app
```
Please let us know about the problem and attach the output from the command
above. You can either
[open an issue on Github](https://github.com/teamplify/teamplify-runner/issues),
or contact us at [support@teamplify.com](mailto:support@teamplify.com), or
use the live chat on [teamplify.com](https://teamplify.com).
### Email delivery issues
Emails can go to spam or sometimes aren't delivered at all. If you're
running a demo version of Teamplify on your desktop at home, this is
very likely to happen, since IPs of home internet providers have a large
chance of being blacklisted in spam databases. We recommend that you
try the following:
- If you're going to use the built-in SMTP server, consider running Teamplify
on a server hosted in a data center or at your office, but not at home. Next,
please make sure that you've added the IP of your Teamplify server to the
[SPF record](http://www.openspf.org/SPF_Record_Syntax) of the domain used
in `address_from` setting in the configuration file;
- Some email providers, for example, Google Mail, explicitly reject emails
sent from blacklisted IPs. It might be helpful to examine SMTP server
logs to see if that's what's happening:
``` shell
$ docker logs teamplify_smtp
```
- Alternatively, if you have another SMTP server that is already configured
and can reliably send emails, you can configure Teamplify to use this server
instead of the built-in SMTP. See `[email]` section in
[Configuration](#configuration) for details;
### The connection is refused or not trusted in SSL-enabled mode
During the first start, Teamplify runner generates a temporary self-issued SSL
certificate (not trusted) and then tries to create a valid certificate for your
domain via [Let's Encrypt](https://letsencrypt.org) that would replace the
temporary one. Besides that, it also creates a new set of 2048-bit DH parameters
to give your SSL configuration an A+ rating. This process is rather slow and may
take a few minutes to complete. If you open Teamplify in your browser and see
that the SSL connection can't be established or is not trusted, the problem may
be caused by DH params or the SSL certificate generations that are still in
progress. After DH params and the SSL certificate have been successfully
generated, they are saved for future use and subsequent restarts of the server
should be much faster.
If you have just started the server for the very first time, please give it a
few minutes to complete the initialization and then refresh the page in your
browser. If after a few minutes the browser reports that the connection is not
trusted, it probably means that the certificate generation has failed. Please
check the following:
1. That the domain that you specified in the `host` parameter can be resolved
from the public Internet and is pointing to the server on which you have
installed Teamplify;
2. That ports `80` and `443` are not blocked in the firewall.
It also might be helpful to check the logs:
``` shell
$ docker logs teamplify_letsencrypt
```
### Other
For any issue with Teamplify, we recommend that you try to
[check for updates](#updates) first. We release updates frequently. It's quite
possible that the problem that you encountered is already addressed in a newer
version.
If these suggested solutions don't work, don't hesitate to
[open an issue on Github](https://github.com/teamplify/teamplify-runner/issues)
or contact us at [support@teamplify.com](mailto:support@teamplify.com). You can
also use the live chat on [teamplify.com](https://teamplify.com). We're ready
to help!
## License
Teamplify runner is available under the MIT license. Please note that the MIT
license applies to Teamplify runner only, but not to the main Teamplify product.
Some Docker images downloaded by Teamplify runner will contain a proprietary
code that is not open source and is distributed under its own
[terms and conditions](https://teamplify.com/terms/).
| text/markdown | null | Teamplify <support@teamplify.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"certifi>=2022.12.7",
"click>=5.0",
"requests>=2.28.2",
"sarge>=0.1.4"
] | [] | [] | [] | [
"Homepage, https://github.com/teamplify/teamplify-runner/",
"Repository, https://github.com/teamplify/teamplify-runner/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:26:50.652085 | teamplify-0.14.0.tar.gz | 20,762 | 3b/29/a70cbe4cb9d93f3650207ad883d3145b6e58f0976174df5f58443eddc5e3/teamplify-0.14.0.tar.gz | source | sdist | null | false | a1f5fab624134b5f60e9bea44c729f96 | 9e652ea3eb8461826cd685af0f053524d8ce14053b359120be72df7f7d04829d | 3b29a70cbe4cb9d93f3650207ad883d3145b6e58f0976174df5f58443eddc5e3 | MIT | [
"LICENSE"
] | 242 |
2.4 | csv-diff-py | 0.5.0 | CLI tool for comparing CSV files in git diff format | # CSV Diff
[][pypi-package]
[](https://github.com/fityannugroho/csv-diff/blob/main/LICENSE)
[](https://github.com/fityannugroho/csv-diff/actions/workflows/test.yml)
CSV Diff is a CLI tool for comparing two CSV files and displaying the results in `git diff` style.
For example, there are two CSV files, [`districts-2022.csv`](https://github.com/fityannugroho/csv-diff/blob/main/docs/examples/districts-2022.csv) and [`districts-2025.csv`](https://github.com/fityannugroho/csv-diff/blob/main/docs/examples/districts-2025.csv). With this tool, you can easily see the data differences between these two CSV files. The output will be saved as a `.diff` file, like this:
```diff
--- districts-2022.csv
+++ districts-2025.csv
@@ -7,9 +7,9 @@
11.01.07,11.01,Sawang
11.01.08,11.01,Tapaktuan
11.01.09,11.01,Trumon
-11.01.10,11.01,Pasi Raja
-11.01.11,11.01,Labuhan Haji Timur
-11.01.12,11.01,Labuhan Haji Barat
+11.01.10,11.01,Pasie Raja
+11.01.11,11.01,Labuhanhaji Timur
+11.01.12,11.01,Labuhanhaji Barat
11.01.13,11.01,Kluet Tengah
11.01.14,11.01,Kluet Timur
11.01.15,11.01,Bakongan Timur
@@ -141,7 +141,7 @@
11.08.11,11.08,Syamtalira Bayu
11.08.12,11.08,Tanah Luas
11.08.13,11.08,Tanah Pasir
-11.08.14,11.08,T. Jambo Aye
+11.08.14,11.08,Tanah Jambo Aye
11.08.15,11.08,Sawang
11.08.16,11.08,Nisam
11.08.17,11.08,Cot Girek
... (truncated)
```
> To see the full differences, please check the [`result.diff`](https://github.com/fityannugroho/csv-diff/blob/main/docs/examples/result.diff) file.
## Usage
```bash
csvdiff path/to/file1.csv path/to/file2.csv
```
> Use `--help` to see the available options.
## Installation
This package is available on [PyPI][pypi-package].
You can install it as a standalone CLI application using [`pipx`](https://pypa.github.io/pipx/) or [`uv`](https://docs.astral.sh/uv/guides/tools).
### Using `pipx`
```bash
pipx install csv-diff-py
csvdiff
```
### Using `uv`
```bash
uv tool install csv-diff-py
csvdiff
```
or without installing globally, you can use `uvx` to run it directly:
```bash
uvx --from csv-diff-py csvdiff
```
## Development Setup
### Prerequisites
- [`uv`](https://docs.astral.sh/uv/getting-started/installation) package manager
- Python 3.9 or higher
> Tip: You can use `uv` to install Python. See the [Python installation guide](https://docs.astral.sh/uv/guides/install-python) for more details.
### Steps
1. Clone this repository
```bash
git clone https://github.com/fityannugroho/csv-diff.git
cd csv-diff
```
1. Install dependencies
```bash
uv sync --all-extras
```
1. Run the tool locally
Via `uv`:
```bash
uv run csvdiff
```
Via virtual environment:
```bash
source .venv/bin/activate
csvdiff
```
1. Run tests
```bash
uv run pytest
```
1. Run linter and formatter
```bash
uv run ruff check
uv run ruff format
```
## Limitations
- Only supports CSV files with a header row.
[pypi-package]: https://pypi.org/project/csv-diff-py
| text/markdown | null | fityannugroho <fityannugroho@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"duckdb>=1.0.0",
"typer>=0.16.0"
] | [] | [] | [] | [
"Homepage, https://github.com/fityannugroho/csv-diff",
"Repository, https://github.com/fityannugroho/csv-diff",
"Changelog, https://github.com/fityannugroho/csv-diff/releases",
"Issues, https://github.com/fityannugroho/csv-diff/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:26:36.555788 | csv_diff_py-0.5.0.tar.gz | 9,930 | 4d/a2/717f6d1c293fe115f3c5068910360e4fd25696e58cf959b726b187b82779/csv_diff_py-0.5.0.tar.gz | source | sdist | null | false | 5b4251ae0e33e4895554813f4c84bb13 | f6ee364905c40a9abe24d2b7e46c6346990712b442c0d20e19598c0f23f0486e | 4da2717f6d1c293fe115f3c5068910360e4fd25696e58cf959b726b187b82779 | null | [
"LICENSE"
] | 244 |
2.4 | chuk-virtual-fs | 0.5.1 | A secure, modular virtual filesystem designed for AI agent sandboxes | # chuk-virtual-fs: Modular Virtual Filesystem Library
A powerful, flexible virtual filesystem library for Python with advanced features, multiple storage providers, and robust security.
> **🎯 Perfect for MCP Servers**: Expose virtual filesystems to Claude Desktop and other MCP clients via FUSE mounting. Generate code, mount it, and let Claude run real tools (TypeScript, linters, compilers) on it with full POSIX semantics.
## 🤖 For MCP & AI Tooling
**Make your virtual filesystem the "OS for tools"** - Mount per-session workspaces via **FUSE** and let Claude / MCP clients run real tools on AI-generated content.
- ✅ **Real tools, virtual filesystem**: TypeScript, ESLint, Prettier, `tsc`, pytest, etc. work seamlessly
- ✅ **Full POSIX semantics**: Any command-line tool that expects a real filesystem works
- ✅ **Pluggable backends**: Memory, S3, SQLite, E2B, or custom providers
- ✅ **Perfect for MCP servers**: Expose workspaces to Claude Desktop and other MCP clients
- ✅ **Zero-copy streaming**: Handle large files efficiently with progress tracking
**Example workflow:**
1. Your MCP server creates a `VirtualFileSystem` with AI-generated code
2. Mount it via FUSE at `/tmp/workspace`
3. Claude runs `tsc /tmp/workspace/main.ts` or any other tool
4. Read results back and iterate
See [MCP Use Cases](#for-mcp-servers-model-context-protocol) for detailed examples and [Architecture](#-architecture) for how it all fits together.
## 🏗️ Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Your MCP Server / AI App │
└────────────────────┬────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ chuk-virtual-fs (This Library) │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ VirtualFileSystem (Core API) │ │
│ │ • mkdir, write_file, read_file, ls, cp, mv, etc. │ │
│ │ • Streaming operations (large files) │ │
│ │ • Virtual mounts (combine providers) │ │
│ └────────────┬─────────────────────────────────────────┘ │
│ │ │
│ ┌────────┴────────┐ │
│ ▼ ▼ │
│ ┌────────┐ ┌──────────┐ │
│ │ WebDAV │ │ FUSE │ ◄── Mounting Adapters │
│ │Adapter │ │ Adapter │ │
│ └────┬───┘ └────┬─────┘ │
└───────┼───────────────┼──────────────────────────────────────┘
│ │
│ │
▼ ▼
┌────────────┐ ┌─────────────┐
│ WebDAV │ │ /tmp/mount │ ◄── Real OS Mounts
│ Server │ │ (FUSE) │
│ :8080 │ │ │
└──────┬─────┘ └──────┬──────┘
│ │
│ │
▼ ▼
┌───────────────────────────────────┐
│ Real Tools & Applications │
│ • Finder/Explorer (WebDAV) │
│ • TypeScript (tsc) │
│ • Linters (ESLint, Ruff) │
│ • Any POSIX tool (ls, cat, etc.) │
└───────────────────────────────────┘
│
│
▼
┌───────────────────────────────────┐
│ Storage Backends (Providers) │
│ • Memory • SQLite • S3 │
│ • E2B • Filesystem │
└───────────────────────────────────┘
```
**Key Points:**
- **Single API**: Use VirtualFileSystem regardless of backend
- **Multiple Backends**: Memory, SQLite, S3, E2B, or custom providers
- **Two Mount Options**: WebDAV (quick) or FUSE (full POSIX)
- **Real Tools Work**: Once mounted, any tool can access your virtual filesystem
## 🌟 Key Features
### 🔧 Modular Design
- Pluggable storage providers
- Flexible filesystem abstraction
- Supports multiple backend implementations
### 💾 Storage Providers
- **Memory Provider**: In-memory filesystem for quick testing and lightweight use
- **SQLite Provider**: Persistent storage with SQLite database backend
- **Pyodide Provider**: Web browser filesystem integration
- **S3 Provider**: Cloud storage with AWS S3 or S3-compatible services
- **E2B Sandbox Provider**: Remote sandbox environment filesystem
- **Google Drive Provider**: Store files in user's Google Drive (user owns data!)
- Easy to extend with custom providers
### 🔒 Advanced Security
- Multiple predefined security profiles
- Customizable access controls
- Path and file type restrictions
- Quota management
- Security violation tracking
### 🚀 Advanced Capabilities
- **Streaming Operations**: Memory-efficient streaming for large files with:
- Real-time progress tracking callbacks
- Atomic write safety (temp file + atomic move)
- Automatic error recovery and cleanup
- Support for both sync and async callbacks
- **Virtual Mounts**: Unix-like mounting system to combine multiple providers
- **WebDAV Mounting**: Expose virtual filesystems via WebDAV (no kernel extensions!)
- Mount in macOS Finder, Windows Explorer, or Linux file managers
- Perfect for AI coding assistants and development workflows
- Background server support
- Read-only mode option
- **FUSE Mounting**: Native filesystem mounting with full POSIX semantics
- Mount virtual filesystems as real directories
- Works with any tool that expects a filesystem
- Docker support for testing without system modifications
- Snapshot and versioning support
- Template-based filesystem setup
- Flexible path resolution
- Comprehensive file and directory operations
- CLI tools for bucket management
## 📦 Installation
### From PyPI
```bash
pip install chuk-virtual-fs
```
### With Optional Dependencies
```bash
# Install with S3 support
pip install "chuk-virtual-fs[s3]"
# Install with Google Drive support
pip install "chuk-virtual-fs[google_drive]"
# Install with Git support
pip install "chuk-virtual-fs[git]"
# Install with WebDAV mounting support (recommended!)
pip install "chuk-virtual-fs[webdav]"
# Install with FUSE mounting support
pip install "chuk-virtual-fs[mount]"
# Install everything
pip install "chuk-virtual-fs[all]"
# Using uv
uv pip install "chuk-virtual-fs[s3]"
uv pip install "chuk-virtual-fs[google_drive]"
uv pip install "chuk-virtual-fs[git]"
uv pip install "chuk-virtual-fs[webdav]"
uv pip install "chuk-virtual-fs[mount]"
uv pip install "chuk-virtual-fs[all]"
```
### For Development
```bash
# Clone the repository
git clone https://github.com/chuk-ai/chuk-virtual-fs.git
cd chuk-virtual-fs
# Install in development mode with all dependencies
pip install -e ".[dev,s3,e2b]"
# Using uv
uv pip install -e ".[dev,s3,e2b]"
```
## 📚 Examples
**Try the interactive example runner:**
```bash
cd examples
./run_example.sh # Interactive menu with 11 examples
```
**Or run specific examples:**
- WebDAV: `./run_example.sh 1` (Basic server)
- FUSE: `./run_example.sh 5` (Docker mount test)
- Providers: `./run_example.sh 7` (Memory provider)
**See**: [examples/](examples/) for comprehensive documentation
## 🚀 Quick Start
### Basic Usage (Async)
The library uses async/await for all operations:
```python
from chuk_virtual_fs import AsyncVirtualFileSystem
import asyncio
async def main():
# Use async context manager
async with AsyncVirtualFileSystem(provider="memory") as fs:
# Create directories
await fs.mkdir("/home/user/documents")
# Write to a file
await fs.write_file("/home/user/documents/hello.txt", "Hello, Virtual World!")
# Read from a file
content = await fs.read_text("/home/user/documents/hello.txt")
print(content) # Outputs: Hello, Virtual World!
# List directory contents
files = await fs.ls("/home/user/documents")
print(files) # Outputs: ['hello.txt']
# Change directory
await fs.cd("/home/user/documents")
print(fs.pwd()) # Outputs: /home/user/documents
# Copy and move operations
await fs.cp("hello.txt", "hello_copy.txt")
await fs.mv("hello_copy.txt", "/home/user/hello_moved.txt")
# Find files matching pattern
results = await fs.find("*.txt", path="/home", recursive=True)
print(results) # Finds all .txt files under /home
# Run the async function
asyncio.run(main())
```
> **Note**: The library also provides a synchronous `VirtualFileSystem` alias for backward compatibility, but the async API (`AsyncVirtualFileSystem`) is recommended for new code and required for streaming and mount operations.
## 💾 Storage Providers
### Provider Families
**If it looks like storage, we can probably wrap it as a provider.**
Our providers are organized into logical families:
- **🧠 In-Memory & Local**: Memory, SQLite, Filesystem - Fast, local-first storage
- **☁️ Cloud Object Stores**: S3 (AWS, MinIO, Tigris, etc.) - Scalable blob storage
- **👤 Cloud Sync (User-Owned)**: Google Drive - User owns data, OAuth-based
- **🌐 Browser & Web**: Pyodide - WebAssembly / browser environments
- **🔒 Remote Sandboxes**: E2B - Isolated execution environments
- **🔌 Network Access**: WebDAV, FUSE mounts - Make any provider accessible as a real filesystem
### Provider Comparison Matrix
| Provider | Read | Write | Streaming | Mount | OAuth | Multi-Tenant | Best For |
|----------|------|-------|-----------|-------|-------|--------------|----------|
| **Memory** | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | Testing, caching, temporary workspaces |
| **SQLite** | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | Persistent local storage, small datasets |
| **Filesystem** | ✅ | ✅ | ✅ | ✅ | ❌ | ⚠️ | Local dev, direct file access |
| **S3** | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | Cloud storage, large files, CDN integration |
| **Google Drive** | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | User-owned data, cross-device sync, sharing |
| **Git** | ✅ | ⚠️ | ❌ | ✅ | ❌ | ✅ | Code review, MCP devboxes, version control |
| **Pyodide** | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | Browser apps, WASM environments |
| **E2B** | ✅ | ✅ | ✅ | ✅ | 🔑 | ✅ | Sandboxed code execution, AI agents |
**Legend:**
- ✅ Fully supported
- ⚠️ Possible with caveats (e.g., Git Write only in worktree mode)
- ❌ Not supported
- 🔑 API key required
### Available Providers
The virtual filesystem supports multiple storage providers:
- **Memory**: In-memory storage (default)
- **SQLite**: SQLite database storage
- **Filesystem**: Direct filesystem access
- **S3**: AWS S3 or S3-compatible storage
- **Google Drive**: User's Google Drive (user owns data!)
- **Git**: Git repositories (snapshot or worktree modes)
- **Pyodide**: Native integration with Pyodide environment
- **E2B**: E2B Sandbox environments
### Potential Future Providers
We're exploring additional providers based on demand. Candidates include:
**Cloud Sync Family:**
- **OneDrive/SharePoint**: Enterprise cloud storage, OAuth-based
- **Dropbox**: Personal/creator cloud storage
- **Box**: Enterprise content management
**Archive Formats:**
- **ZIP/TAR providers**: Mount archives as virtual directories
- **OLE/OpenXML**: Access Office documents as filesystems
**Advanced Patterns:**
- **Encrypted provider**: Transparent encryption wrapper for any backend
- **Caching provider**: Multi-tier caching (memory → SQLite → S3)
- **Multi-provider**: Automatic sharding across backends
> **Want a specific provider?** [Open an issue](https://github.com/chuk-ai/chuk-virtual-fs/issues) with your use case!
### Using the S3 Provider
The S3 provider allows you to use AWS S3 or S3-compatible storage (like Tigris Storage) as the backend for your virtual filesystem.
#### Installation
```bash
# Install with S3 support
pip install "chuk-virtual-fs[s3]"
# Or with uv
uv pip install "chuk-virtual-fs[s3]"
```
#### Configuration
Create a `.env` file with your S3 credentials:
```ini
# AWS credentials for S3 provider
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=us-east-1
# For S3-compatible storage (e.g., Tigris Storage)
AWS_ENDPOINT_URL_S3=https://your-endpoint.example.com
S3_BUCKET_NAME=your-bucket-name
```
#### Example Usage
```python
from dotenv import load_dotenv
from chuk_virtual_fs import VirtualFileSystem
# Load environment variables
load_dotenv()
# Create filesystem with S3 provider
fs = VirtualFileSystem("s3",
bucket_name="your-bucket-name",
prefix="your-prefix", # Optional namespace in bucket
endpoint_url="https://your-endpoint.example.com") # For S3-compatible storage
# Use the filesystem as normal
fs.mkdir("/projects")
fs.write_file("/projects/notes.txt", "Virtual filesystem backed by S3")
# List directory contents
print(fs.ls("/projects"))
```
### E2B Sandbox Provider Example
```python
import os
from dotenv import load_dotenv
# Load E2B API credentials from .env file
load_dotenv()
# Ensure E2B API key is set
if not os.getenv("E2B_API_KEY"):
raise ValueError("E2B_API_KEY must be set in .env file")
from chuk_virtual_fs import VirtualFileSystem
# Create a filesystem in an E2B sandbox
# API key will be automatically used from environment variables
fs = VirtualFileSystem("e2b", root_dir="/home/user/sandbox")
# Create project structure
fs.mkdir("/projects")
fs.mkdir("/projects/python")
# Write a Python script
fs.write_file("/projects/python/hello.py", 'print("Hello from E2B sandbox!")')
# List directory contents
print(fs.ls("/projects/python"))
# Execute code in the sandbox (if supported)
if hasattr(fs.provider, 'sandbox') and hasattr(fs.provider.sandbox, 'run_code'):
result = fs.provider.sandbox.run_code(
fs.read_file("/projects/python/hello.py")
)
print(result.logs)
```
#### E2B Authentication
To use the E2B Sandbox Provider, you need to:
1. Install the E2B SDK:
```bash
pip install e2b-code-interpreter
```
2. Create a `.env` file in your project root:
```
E2B_API_KEY=your_e2b_api_key_here
```
3. Make sure to add `.env` to your `.gitignore` to keep credentials private.
Note: You can obtain an E2B API key from the [E2B platform](https://e2b.dev).
### Google Drive Provider
The Google Drive provider lets you store files in the user's own Google Drive. This approach offers unique advantages:
- ✅ **User Owns Data**: Files are stored in the user's Google Drive, not your infrastructure
- ✅ **Natural Discoverability**: Users can view/edit files directly in Google Drive UI
- ✅ **Built-in Sharing**: Use Drive's native sharing and collaboration features
- ✅ **Cross-Device Sync**: Files automatically sync across all user devices
- ✅ **No Infrastructure Cost**: No need to manage storage servers or buckets
#### Installation
```bash
# Install with Google Drive support
pip install "chuk-virtual-fs[google_drive]"
# Or with uv
uv pip install "chuk-virtual-fs[google_drive]"
```
#### OAuth Setup
Before using the Google Drive provider, you need to set up OAuth2 credentials:
**Step 1: Create Google Cloud Project**
1. Go to [Google Cloud Console](https://console.cloud.google.com/)
2. Create a new project (or select existing)
3. Enable the Google Drive API
4. Go to "Credentials" → Create OAuth 2.0 Client ID
5. Choose "Desktop app" as application type
6. Download the JSON file and save as `client_secret.json`
**Step 2: Run OAuth Setup**
```bash
# Run the OAuth setup helper
python examples/providers/google_drive_oauth_setup.py
# Or with custom client secrets file
python examples/providers/google_drive_oauth_setup.py --client-secrets /path/to/client_secret.json
```
This will:
- Open a browser for Google authorization
- Save credentials to `google_drive_credentials.json`
- Show you the configuration for Claude Desktop / MCP servers
#### Example Usage
```python
import json
from pathlib import Path
from chuk_virtual_fs import AsyncVirtualFileSystem
# Load credentials from OAuth setup
with open("google_drive_credentials.json") as f:
credentials = json.load(f)
# Create filesystem with Google Drive provider
async with AsyncVirtualFileSystem(
provider="google_drive",
credentials=credentials,
root_folder="CHUK", # Creates /CHUK/ folder in Drive
cache_ttl=60 # Cache file IDs for 60 seconds
) as fs:
# Create project structure
await fs.mkdir("/projects/demo")
# Write files - they appear in Google Drive!
await fs.write_file(
"/projects/demo/README.md",
"# My Project\n\nFiles stored in Google Drive!"
)
# Read files back
content = await fs.read_file("/projects/demo/README.md")
# List directory
files = await fs.ls("/projects/demo")
# Get file metadata
info = await fs.get_node_info("/projects/demo/README.md")
print(f"Size: {info.size} bytes")
print(f"Modified: {info.modified_at}")
# Files are now in Google Drive under /CHUK/projects/demo/
```
#### Configuration for Claude Desktop
After running OAuth setup, add to your `claude_desktop_config.json`:
```json
{
"mcpServers": {
"vfs": {
"command": "uvx",
"args": ["chuk-virtual-fs"],
"env": {
"VFS_PROVIDER": "google_drive",
"GOOGLE_DRIVE_CREDENTIALS": "{\"token\": \"...\", \"refresh_token\": \"...\", ...}"
}
}
}
}
```
(The OAuth setup helper generates the complete configuration)
#### Features
- **Two-Level Caching**: Path→file_id and file_id→metadata caches for performance
- **Metadata Storage**: Session IDs, custom metadata, and tags stored in Drive's `appProperties`
- **Async Operations**: Full async/await support using `asyncio.to_thread`
- **Standard Operations**: All VirtualFileSystem methods work (mkdir, write_file, read_file, ls, etc.)
- **Statistics**: Track API calls, cache hits/misses with `get_storage_stats()`
#### Provider-Specific Parameters
```python
from chuk_virtual_fs.providers import GoogleDriveProvider
provider = GoogleDriveProvider(
credentials=credentials_dict, # OAuth2 credentials
root_folder="CHUK", # Root folder name in Drive
cache_ttl=60, # Cache TTL in seconds (default: 60)
session_id="optional_session_id", # Optional session tracking
sandbox_id="default" # Optional sandbox tracking
)
```
#### Examples
See the `examples/providers/` directory for complete examples:
- **`google_drive_oauth_setup.py`**: Interactive OAuth2 setup helper
- **`google_drive_example.py`**: Comprehensive end-to-end example
Run the full example:
```bash
# First, set up OAuth credentials
python examples/providers/google_drive_oauth_setup.py
# Then run the example
python examples/providers/google_drive_example.py
```
#### How It Works
1. **OAuth2 Authentication**: Uses Google's OAuth2 flow for secure authorization
2. **Root Folder**: Creates a folder (default: `CHUK`) in the user's Drive as the filesystem root
3. **Path Mapping**: Virtual paths like `/projects/demo/file.txt` → `CHUK/projects/demo/file.txt` in Drive
4. **Metadata**: Custom metadata (session_id, tags, etc.) stored in Drive's `appProperties`
5. **Caching**: Two-level cache reduces API calls for better performance
#### Use Cases
Perfect for:
- **User-Owned Workspaces**: Give users their own persistent workspace in their Drive
- **Collaborative AI Projects**: Users can share their Drive folders with collaborators
- **Long-Term Storage**: User controls retention and can access files outside your app
- **Cross-Device Access**: Users access their files from any device with Drive
- **Zero Infrastructure**: No need to run storage servers or manage buckets
### Git Provider
The Git provider lets you mount Git repositories as virtual filesystems with two modes:
- ✅ **snapshot**: Read-only view of a repository at a specific commit/branch/tag
- ✅ **worktree**: Writable working directory with full Git operations (commit, push, pull)
Perfect for:
- **MCP Servers**: "Mount this repo for Claude to review" - instant read-only access to any commit
- **Code Review Tools**: Browse repository state at specific commits
- **AI Coding Workflows**: Clone → modify → commit → push workflows
- **Documentation**: Browse repos without cloning to disk
- **Version Control Integration**: Full Git operations from your virtual filesystem
#### Installation
```bash
# Install with Git support
pip install "chuk-virtual-fs[git]"
# Or with uv
uv pip install "chuk-virtual-fs[git]"
```
#### Snapshot Mode (Read-Only)
Perfect for code review, documentation browsing, or MCP servers:
```python
from chuk_virtual_fs import AsyncVirtualFileSystem
# Mount a repository snapshot at a specific commit/branch
async with AsyncVirtualFileSystem(
provider="git",
repo_url="https://github.com/user/repo", # Or local path
mode="snapshot",
ref="main", # Branch, tag, or commit SHA
depth=1 # Optional: shallow clone for faster performance
) as fs:
# Read-only access to repository files
readme = await fs.read_text("/README.md")
files = await fs.ls("/src")
code = await fs.read_text("/src/main.py")
# Get repository metadata
metadata = await fs.get_metadata("/")
print(f"Commit: {metadata['commit_sha']}")
print(f"Author: {metadata['commit_author']}")
print(f"Message: {metadata['commit_message']}")
```
**MCP Server Use Case:**
```python
# MCP tool for code review
@mcp.tool()
async def review_code_at_commit(repo_url: str, commit_sha: str):
"""Claude reviews code at a specific commit."""
async with AsyncVirtualFileSystem(
provider="git",
repo_url=repo_url,
mode="snapshot",
ref=commit_sha
) as fs:
# Claude can now read any file in the repo
files = await fs.find("*.py", recursive=True)
# Analyze, review, suggest improvements...
return {"files_reviewed": len(files)}
```
#### Worktree Mode (Writable)
Full Git operations for AI coding workflows:
```python
from chuk_virtual_fs import AsyncVirtualFileSystem
# Writable working directory
async with AsyncVirtualFileSystem(
provider="git",
repo_url="/path/to/repo", # Local repo or clone URL
mode="worktree",
branch="feature-branch" # Branch to work on
) as fs:
# Create/modify files
await fs.mkdir("/src/new_feature")
await fs.write_file(
"/src/new_feature/module.py",
"def new_feature():\\n pass\\n"
)
# Commit changes
provider = fs.provider
await provider.commit(
"Add new feature module",
author="AI Agent <ai@example.com>"
)
# Push to remote
await provider.push("origin", "feature-branch")
# Check Git status
status = await provider.get_status()
print(f"Clean: {not status['is_dirty']}")
```
#### Features
- **Two Modes**: snapshot (read-only) or worktree (full Git operations)
- **Remote & Local**: Clone from GitHub/GitLab or use local repositories
- **Shallow Clones**: Use `depth=1` for faster clones
- **Sparse Checkout**: Clone only specific paths (coming soon)
- **Full Git Operations** (worktree mode):
- `commit()`: Commit changes with custom author
- `push()`: Push to remote
- `pull()`: Pull from remote
- `get_status()`: Check working directory status
- **Metadata Access**: Get commit SHA, author, message, date
- **Temporary Clones**: Auto-cleanup of temporary clone directories
#### Provider-Specific Parameters
```python
from chuk_virtual_fs.providers import GitProvider
provider = GitProvider(
repo_url="https://github.com/user/repo", # Remote URL or local path
mode="snapshot", # "snapshot" or "worktree"
ref="main", # For snapshot: branch/tag/SHA
branch="main", # For worktree: branch to check out
clone_dir="/path/to/clone", # Optional: where to clone (default: temp)
depth=1, # Optional: shallow clone depth
)
```
#### Examples
See `examples/providers/git_provider_example.py` for comprehensive examples including:
- Snapshot mode for read-only access
- Worktree mode with commit/push
- MCP server code review use case
```bash
# Run the example
uv run python examples/providers/git_provider_example.py
```
#### Use Cases
**For MCP Servers:**
- Mount any GitHub repo for Claude to review
- Instant read-only access to specific commits
- No disk space used (temporary clones auto-cleanup)
**For AI Coding:**
- Clone → modify → commit → push workflows
- Full version control integration
- Author attribution for AI-generated commits
**For Code Analysis:**
- Browse repository history
- Compare files across commits
- Extract code examples from any version
## 🛡️ Security Features
The virtual filesystem provides robust security features to protect against common vulnerabilities and limit resource usage.
### Security Profiles
```python
from chuk_virtual_fs import VirtualFileSystem
# Create a filesystem with strict security
fs = VirtualFileSystem(
security_profile="strict",
security_max_file_size=1024 * 1024, # 1MB max file size
security_allowed_paths=["/home", "/tmp"]
)
# Attempt to write to a restricted path
fs.write_file("/etc/sensitive", "This will fail")
# Get security violations
violations = fs.get_security_violations()
```
### Available Security Profiles
- **default**: Standard security with moderate restrictions
- **strict**: High security with tight constraints
- **readonly**: Completely read-only, no modifications allowed
- **untrusted**: Highly restrictive environment for untrusted code
- **testing**: Relaxed security for development and testing
### Security Features
- File size and total storage quotas
- Path traversal protection
- Deny/allow path and pattern rules
- Security violation logging
- Read-only mode
## 🛠️ CLI Tools
### S3 Bucket Management CLI
The package includes a CLI tool for managing S3 buckets:
```bash
# List all buckets
python s3_bucket_cli.py list
# Create a new bucket
python s3_bucket_cli.py create my-bucket
# Show bucket information
python s3_bucket_cli.py info my-bucket --show-top 5
# List objects in a bucket
python s3_bucket_cli.py ls my-bucket --prefix data/
# Clear all objects in a bucket or prefix
python s3_bucket_cli.py clear my-bucket --prefix tmp/
# Delete a bucket (must be empty)
python s3_bucket_cli.py delete my-bucket
# Copy objects between buckets or prefixes
python s3_bucket_cli.py copy source-bucket dest-bucket --source-prefix data/ --dest-prefix backup/
```
## 📋 Advanced Features
### Snapshots
Create and restore filesystem snapshots:
```python
from chuk_virtual_fs import VirtualFileSystem
from chuk_virtual_fs.snapshot_manager import SnapshotManager
fs = VirtualFileSystem()
snapshot_mgr = SnapshotManager(fs)
# Create initial content
fs.mkdir("/home/user")
fs.write_file("/home/user/file.txt", "Original content")
# Create a snapshot
snapshot_id = snapshot_mgr.create_snapshot("initial_state", "Initial filesystem setup")
# Modify content
fs.write_file("/home/user/file.txt", "Modified content")
fs.write_file("/home/user/new_file.txt", "New file")
# List available snapshots
snapshots = snapshot_mgr.list_snapshots()
for snap in snapshots:
print(f"{snap['name']}: {snap['description']}")
# Restore to initial state
snapshot_mgr.restore_snapshot("initial_state")
# Verify restore
print(fs.read_file("/home/user/file.txt")) # Outputs: Original content
print(fs.get_node_info("/home/user/new_file.txt")) # Outputs: None
# Export a snapshot
snapshot_mgr.export_snapshot("initial_state", "/tmp/snapshot.json")
```
### Templates
Load filesystem structures from templates:
```python
from chuk_virtual_fs import VirtualFileSystem
from chuk_virtual_fs.template_loader import TemplateLoader
fs = VirtualFileSystem()
template_loader = TemplateLoader(fs)
# Define a template
project_template = {
"directories": [
"/projects/app",
"/projects/app/src",
"/projects/app/docs"
],
"files": [
{
"path": "/projects/app/README.md",
"content": "# ${project_name}\n\n${project_description}"
},
{
"path": "/projects/app/src/main.py",
"content": "def main():\n print('Hello from ${project_name}!')"
}
]
}
# Apply the template with variables
template_loader.apply_template(project_template, variables={
"project_name": "My App",
"project_description": "A sample project created with the virtual filesystem"
})
```
### Streaming Operations
Handle large files efficiently with streaming support, progress tracking, and atomic write safety:
```python
from chuk_virtual_fs import AsyncVirtualFileSystem
async def main():
async with AsyncVirtualFileSystem(provider="memory") as fs:
# Stream write with progress tracking
async def data_generator():
for i in range(1000):
yield f"Line {i}: {'x' * 1000}\n".encode()
# Track upload progress
def progress_callback(bytes_written, total_bytes):
if bytes_written % (100 * 1024) < 1024: # Every 100KB
print(f"Uploaded {bytes_written / 1024:.1f} KB...")
# Write large file with progress reporting and atomic safety
await fs.stream_write(
"/large_file.txt",
data_generator(),
progress_callback=progress_callback
)
# Stream read - process chunks as they arrive
total_bytes = 0
async for chunk in fs.stream_read("/large_file.txt", chunk_size=8192):
total_bytes += len(chunk)
# Process chunk without loading entire file
print(f"Processed {total_bytes} bytes")
# Run with asyncio
import asyncio
asyncio.run(main())
```
#### Progress Reporting
Track upload/download progress with callbacks:
```python
async def upload_with_progress():
async with AsyncVirtualFileSystem(provider="s3", bucket_name="my-bucket") as fs:
# Progress tracking with sync callback
def track_progress(bytes_written, total_bytes):
percent = (bytes_written / total_bytes * 100) if total_bytes > 0 else 0
print(f"Progress: {percent:.1f}% ({bytes_written:,} bytes)")
# Or use async callback
async def async_track_progress(bytes_written, total_bytes):
# Can perform async operations here
await update_progress_db(bytes_written, total_bytes)
# Stream large file with progress tracking
async def generate_data():
for i in range(10000):
yield f"Record {i}\n".encode()
await fs.stream_write(
"/exports/large_dataset.csv",
generate_data(),
progress_callback=track_progress # or async_track_progress
)
```
#### Atomic Write Safety
All streaming writes use atomic operations to prevent file corruption:
```python
async def safe_streaming():
async with AsyncVirtualFileSystem(provider="filesystem", root_path="/data") as fs:
# Streaming write is automatically atomic:
# 1. Writes to temporary file (.tmp_*)
# 2. Atomically moves to final location on success
# 3. Auto-cleanup of temp files on failure
try:
await fs.stream_write("/critical_data.json", data_stream())
# File appears atomically - never partially written
except Exception as e:
# On failure, no partial file exists
# Temp files are automatically cleaned up
print(f"Upload failed safely: {e}")
```
#### Provider-Specific Features
Different providers implement atomic writes differently:
| Provider | Atomic Write Method | Progress Support |
|----------|-------------------|------------------|
| **Memory** | Temp buffer → swap | ✅ Yes |
| **Filesystem** | Temp file → `os.replace()` (OS-level atomic) | ✅ Yes |
| **SQLite** | Temp file → atomic move | ✅ Yes |
| **S3** | Multipart upload (inherently atomic) | ✅ Yes |
| **E2B Sandbox** | Temp file → `mv` command (atomic) | ✅ Yes |
**Key Features:**
- Memory-efficient processing of large files
- Real-time progress tracking with callbacks
- Atomic write safety prevents corruption
- Automatic temp file cleanup on errors
- Customizable chunk sizes
- Works with all storage providers
- Perfect for streaming uploads/downloads
- Both sync and async callback support
### Virtual Mounts
Combine multiple storage providers in a single filesystem:
```python
from chuk_virtual_fs import AsyncVirtualFileSystem
async def main():
async with AsyncVirtualFileSystem(
provider="memory",
enable_mounts=True
) as fs:
# Mount S3 bucket at /cloud
await fs.mount(
"/cloud",
provider="s3",
bucket_name="my-bucket",
endpoint_url="https://my-endpoint.com"
)
# Mount local filesystem at /local
await fs.mount(
"/local",
provider="filesystem",
root_path="/tmp/storage"
)
# Now use paths transparently across providers
await fs.write_file("/cloud/data.txt", "Stored in S3")
await fs.write_file("/local/cache.txt", "Stored locally")
await fs.write_file("/memory.txt", "Stored in memory")
# List all active mounts
mounts = fs.list_mounts()
for mount in mounts:
print(f"{mount['mount_point']}: {mount['provider']}")
# Copy between providers seamlessly
await fs.cp("/cloud/data.txt", "/local/backup.txt")
# Unmount when done
await fs.unmount("/cloud")
import asyncio
asyncio.run(main())
```
**Key Features:**
- Unix-like mount system
- Transparent path routing to correct provider
- Combine cloud, local, and in-memory storage
- Read-only mount support
- Seamless cross-provider operations (copy, move)
### WebDAV Mounting
**Recommended for most users** - Mount virtual filesystems without kernel extensions!
```python
from chuk_virtual_fs import SyncVirtualFileSystem
from chuk_virtual_fs.adapters import WebDAVAdapter
# Create a virtual filesystem
vfs = SyncVirtualFileSystem()
vfs.write_file("/documents/hello.txt", "Hello World!")
vfs.write_file("/documents/notes.md", "# My Notes")
# Start WebDAV server
adapter = WebDAVAdapter(vfs, port=8080)
adapter.start() # Server runs at http://localhost:8080
# Or run in background
adapter.start_background()
# Continue working...
vfs.write_file("/documents/updated.txt", "New content!")
adapter.stop()
```
**Mounting in Your OS:**
- **macOS**: Finder → Cmd+K → `http://localhost:8080`
- **Windows**: Map Network Drive → `http://localhost:8080`
- **Linux**: `davfs2` or file manager
**Why WebDAV?**
- ✅ No kernel extensions required
- ✅ Works immediately on macOS/Windows/Linux
- ✅ Perfect for AI coding assistants
- ✅ Easy to deploy and test
- ✅ Background operation support
- ✅ Read-only mode available
**Installation:**
```bash
pip install "chuk-virtual-fs[webdav]"
```
**See**: [WebDAV Examples](examples/webdav/) for detailed usage
### FUSE Mounting
Native filesystem mounting with full POSIX semantics.
```python
from chuk_virtual_fs import AsyncVirtualFileSystem
from chuk_virtual_fs.mount import mount, MountOptions
async def main():
# Create virtual filesystem
vfs = AsyncVirtualFileSystem()
await vfs.write_file("/hello.txt", "Mounted!")
# Mount at /tmp/mymount
async with mount(vfs, "/tmp/mymount", MountOptions()) as adapter:
# Filesystem is now accessible at /tmp/mymount
# Any tool can access it: ls, cat, vim, TypeScript, etc.
await asyncio.Event().wait()
import asyncio
asyncio.run(main())
```
**FUSE Options:**
```python
from chuk_virtual_fs.mount import MountOptions
options = MountOptions(
readonly=False, # Read-only mount
allow_other=False, # Allow other users to access
debug=False, # Enable FUSE debug output
cache_timeout=1.0 # Stat cache timeout in seconds
)
```
**Installation & Requirements:**
```bash
# Install package with FUSE support
pip install "chuk-virtual-fs[mount]"
# macOS: Install macFUSE
brew install macfuse
# Linux: Install FUSE3
sudo apt-get install fuse3 libfuse3-dev
# Docker: No system modifications needed!
# See examples/mounting/README.md for Docker testing
```
**Docker Testing (No System Changes):**
```bash
cd examples
./run_example.sh 5 # Basic FUSE mount test
./run_example.sh 6 # TypeScript checker demo
```
**Why FUSE?**
- ✅ Full POSIX semantics
- ✅ Works with any tool expecting a filesystem
- ✅ **Perfect for MCP servers** - Expose virtual filesystems to Claude Desktop and other MCP clients
- ✅ Ideal for AI + tools integration (TypeScript, linters, compilers, etc.)
- ✅ True filesystem operations (stat, chmod, etc.)
**MCP Server Use Case:**
```python
# MCP server exposes a virtual filesystem via FUSE
# Claude Desktop can then access it like a real filesystem
async def mcp_filesystem_tool():
vfs = AsyncVirtualFileSystem()
# Populate with AI-generated code, data, etc.
await vfs.write_file("/project/main.ts", generated_code)
# Mount so tools can access it
async with mount(vfs, "/tmp/mcp-workspace", MountOptions()):
# Claude can now run: tsc /tmp/mcp-workspace/project/main.ts
# Or any other tool that expects a real filesystem
await process_with_real_tools()
```
**See**: [FUSE Examples](examples/mounting/) for detailed usage including Docker testing
### Choosing Between WebDAV and FUSE
| Feature | WebDAV | FUSE |
|---------|--------|------|
| **Setup** | No system changes | Requires kernel extension |
| **Installation** | `pip install` only | System FUSE + pip |
| **Compatibility** | All platforms | macOS/Linux (Windows WSL2) |
| **POSIX Semantics** | Basic | Full |
| **Speed** | Fast | Faster |
| **MCP Servers** | ⚠️ Limited tool support | ✅ **Perfect** - full tool compatibility |
| **Use Case** | Remote access, quick dev | MCP servers, local tools, full integration |
| **Best For** | Most users, simple sharing | **MCP servers**, power users, full POSIX needs |
**Which Should You Use?**
- **Building an MCP server?** → **Use FUSE** - Claude and MCP clients need full POSIX semantics to run real tools
- **Quick prototyping or sharing?** → **Use WebDAV** - Works imme | text/markdown | null | null | null | null | Apache-2.0 | filesystem, virtual, sandbox, ai, security | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries",
"Topic :: ... | [] | null | null | >=3.11 | [] | [] | [] | [
"python-dotenv>=1.1.0",
"pyyaml>=6.0.1",
"asyncio>=3.4.3",
"aioboto3>=15.1.0",
"pytest-cov>=6.0.0",
"wsgidav>=4.3.3",
"cheroot>=11.1.2",
"pydantic>=2.0.0",
"boto3>=1.28.0; extra == \"s3\"",
"aioboto3>=12.0.0; extra == \"s3\"",
"e2b-code-interpreter>=1.2.0; extra == \"e2b\"",
"google-auth>=2.23... | [] | [] | [] | [
"Homepage, https://github.com/chuk-ai/chuk-virtual-fs",
"Bug Tracker, https://github.com/chuk-ai/chuk-virtual-fs/issues",
"Documentation, https://github.com/chuk-ai/chuk-virtual-fs#readme"
] | twine/6.1.0 CPython/3.11.11 | 2026-02-18T14:25:50.276574 | chuk_virtual_fs-0.5.1.tar.gz | 242,928 | 0b/21/d52170d2cf7ae2651444b5aa64e4ef4cebfbcb43286bcf2451654502e9df/chuk_virtual_fs-0.5.1.tar.gz | source | sdist | null | false | a804a14193515458734b745cb798264f | 0f8256e0b7bc69f4602ad111c82065525671524f7733e63c09fe2c865fa52762 | 0b21d52170d2cf7ae2651444b5aa64e4ef4cebfbcb43286bcf2451654502e9df | null | [
"LICENSE"
] | 690 |
2.4 | js-web-scraper | 1.2.4 | Customizable Web Scrapper to get alerts when criteria is met on web sites. | # WebScraper
* This program can scrap data from websites using different scrapers, and send an email when
matches/ changes deadening on the scraper used
* There are 2 types of scrapers:
- Generic: Can scrap any website, but might not be as exact
- Specific: Can scrap only specific websites, but will be more exact
## Generic Scrapers
- Text
- Diff
## Specific Scrapers
- Cars.com
## How to use
### Text
1. Set these specific env variables
2. ```dotenv
SCRAPER=text # Scraper to use
URL=<URL> # URL to scrape
TEXT=<TEXT> # Text to look for
```
3. Ensure all other required env variables are set
### Diff
1. Set these specific env variables
2. ```dotenv
SCRAPER=diff # Scraper to use
URL=<URL> # URL to scrape
PERCENTAGE=<PERCENTAGE_DIFF> # Percentage difference to look for
```
3. Ensure all other required env variables are set
### Cars.com
1. Set these specific env variables
2. ```dotenv
SCRAPER=cars_com # Scraper to use
URL=https://www.cars.com/shopping/results/ # URL to scrape, must be on the results page, for a specific search
```
3. Ensure all other required env variables are set
### Required env variables
```dotenv
SLEEP_TIME_SEC= # Time to sleep between each scrape
SENDER_EMAIL= # Email to send from
FROM_EMAIL= # Name to send from i.e. '"Web Scraper" <no-reply@jstockley.com>'
RECEIVER_EMAIL= # Email to send to
PASSWORD= # Password for the sender's email
SMTP_SERVER= # SMTP server to use
SMTP_PORT= # SMTP port to use
TLS= # True/False to use TLS
```
### Running multiple of the same scraper
To run 2+ scrapers of the same type, i.e. 2 `diff` scrapers, make sure the host folder mapping is different
Ex:
```yaml
diff-scraper-1:
image: jnstockley/web-scraper:latest
volumes:
- ./diff-scraper-1-data/:/app/data/
environment:
- TZ=America/Chicago
- SCRAPER=diff
- URL=https://google.com
- PERCENTAGE=5
- SLEEP_TIME_SEC=21600
diff-scraper-2:
image: jnstockley/web-scraper:latest
volumes:
- ./diff-scraper-2-data/:/app/data/
environment:
- TZ=America/Chicago
- SCRAPER=diff
- URL=https://yahoo.com
- PERCENTAGE=5
- SLEEP_TIME_SEC=21600
```
| text/markdown | null | jnstockley <jnstockley@users.noreply.github.com> | null | null | null | starter, template, python | [
"Programming Language :: Python :: 3"
] | [] | null | null | <4.0,>=3.13 | [] | [] | [] | [
"beautifulsoup4==4.14.3",
"pandas==3.0.1",
"python-dotenv==1.2.1",
"tls-client==1.0.1"
] | [] | [] | [] | [
"Homepage, https://github.com/jnstockley/web-scraper",
"Repository, https://github.com/jnstockley/web-scraper.git",
"Issues, https://github.com/jnstockley/web-scraper/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:25:40.640204 | js_web_scraper-1.2.4.tar.gz | 19,057 | 6d/a9/227d20c3fc322ca410e1b5451fafcf7e1f4d615fdc05e2a7ed1ea7096db6/js_web_scraper-1.2.4.tar.gz | source | sdist | null | false | 85aabcf7137541393e17aef11b9e424b | 56171bb0802f8ae8d7dd5f74b917fed68f651ab71c9268ce085aeb67cb099a0e | 6da9227d20c3fc322ca410e1b5451fafcf7e1f4d615fdc05e2a7ed1ea7096db6 | null | [
"LICENSE"
] | 233 |
2.4 | ufal.pybox2d | 2.3.10.5 | Custom-build wheels of pybox2d library from http://github.com/pybox2d/pybox2d | Custom-build wheels of pybox2d library.
pybox2d homepage: https://github.com/pybox2d/pybox2d
Box2D homepage: http://www.box2d.org
| null | Milan Straka | straka@ufal.mff.cuni.cz | null | null | zlib | null | [] | [] | http://github.com/foxik/pybox2d | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.5 | 2026-02-18T14:25:15.873658 | ufal_pybox2d-2.3.10.5-cp39-cp39-win_arm64.whl | 1,293,102 | 5f/6a/08c23af26b03122f8efd86b7b526c47504890ab509de058d70c294cb1fc6/ufal_pybox2d-2.3.10.5-cp39-cp39-win_arm64.whl | cp39 | bdist_wheel | null | false | 961477f03bdb4f2f52a2bc2a7b059427 | ac086b9186c8b177da3ae63b62131530815173d83f4b51532b67e2bbc8d9c736 | 5f6a08c23af26b03122f8efd86b7b526c47504890ab509de058d70c294cb1fc6 | null | [
"LICENSE"
] | 0 |
2.4 | django-absoluteuri | 2.0.0 | Absolute URI functions and template tags for Django | django-absoluteuri
==================
.. image:: https://github.com/fusionbox/django-absoluteuri/actions/workflows/tests.yml/badge.svg
:target: https://github.com/fusionbox/django-absoluteuri/actions/workflows/tests.yml
:alt: Tests
Absolute URI functions and template tags for Django.
Why
---
There are times when you need to output an absolute URL (for example, inside an
email), but you don't always have access to the request. These utilities use
the Sites Framework if available in order to create absolute URIs.
Installation
------------
Install django-absoluteuri::
pip install django-absoluteuri
Then add it to your ``INSTALLED_APPS``::
INSTALLED_APPS = (
# ...
'django.contrib.sites',
'absoluteuri',
)
django-absoluteuri requires the `Sites Framework
<https://docs.djangoproject.com/en/dev/ref/contrib/sites/>`_ to be in
``INSTALLED_APPS`` well and configured as well.
Settings
--------
The protocol of the uris returned by this library defaults to ``http``. You
can specify the protocol with the ``ABSOLUTEURI_PROTOCOL`` setting.
.. code:: python
# settings.py
ABSOLUTEURI_PROTOCOL = 'https'
# Elsewhere
>>> absoluteuri.build_absolute_uri('/some/path/')
'https://example.com/some/path/'
Template Tags
-------------
There are two template tags, ``absoluteuri`` and ``absolutize``.
``absoluteuri`` works just like the ``url`` tag, but that it outputs absolute
URLs.
.. code:: html+django
{% load absoluteuri %}
<a href="{% absoluteuri 'my_view' kwarg1='foo' kwarg2='bar' %}">click here</a>
``absolutize`` will take a relative URL and return an absolute URL.
.. code:: html+django
{% load absoluteuri %}
<a href="{% absolutize url_from_context %}">click here</a>
Filter
------
Sometimes instead of template tags, it's easier to use filters. You can do that
as well.
.. code:: html+django
{% load absoluteuri %}
<a href="{{ my_object.get_absolute_url|absolutize }}">click here</a>
But there are situations where tag can not be used but filter can.
.. code:: html+django
{% load absoluteuri %}
{% include "some-other-template.html" with url=my_object.get_absolute_url|absolutize %}
Functions
---------
There are also two functions that django-absoluteuri provides,
``build_absolute_uri`` and ``reverse``, which are equivalents of
``request.build_absolute_url`` and ``urlresolvers.reverse``.
.. code:: python
>>> import absoluteuri
>>> my_relative_url = '/path/to/somewhere/'
>>> absoluteuri.build_absolute_uri(my_relative_url)
'http://example.com/path/to/somewhere/'
>>> absoluteuri.reverse('viewname', kwargs={'foo': 'bar'})
'http://example.com/path/to/bar/'
.. :changelog:
Changelog
=========
2.0.0 (2026-02-18)
------------------
- Drop support for Python < 3.8
- Drop support for Django < 4.2.
- Add support for Django 4.2, 5.0, 5.1, 5.2, and 6.0.
- Add support for Python 3.10, 3.11, 3.12, 3.13, and 3.14.
1.3.0 (2018-09-04)
------------------
- Add support for Django 2.1. Remove support for Django < 1.11.
1.2.0 (2016-02-29)
------------------
- Add absolutize filter. This deprecates the absolutize tag. [#4]
1.1.0 (2015-03-23)
------------------
- Added ABSOLUTEURI_PROTOCOL settings. [#1]
- Documented sites framework requirement.
1.0.0 (2015-03-17)
------------------
- First release on PyPI.
| null | Fusionbox, Inc. | programmers@fusionbox.com | null | null | Apache 2.0 | django-absoluteuri | [
"Development Status :: 5 - Production/Stable",
"Framework :: Django",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python ... | [] | https://github.com/fusionbox/django-absoluteuri | null | null | [] | [] | [] | [
"Django>=4.2"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.11 | 2026-02-18T14:24:48.773711 | django_absoluteuri-2.0.0.tar.gz | 8,168 | ce/93/2cbc2c6d4fb93faf6749927c950bff3771fabbdf2c846e47691677b50cc3/django_absoluteuri-2.0.0.tar.gz | source | sdist | null | false | 65b02f83cc1dffff24d1112ffef8ab37 | c7fbfeac5be2a07c884803881e72d29c21d118817c83f5272ca42c2fec7f7ae3 | ce932cbc2c6d4fb93faf6749927c950bff3771fabbdf2c846e47691677b50cc3 | null | [
"LICENSE"
] | 265 |
2.4 | seastartool | 0.11 | The Sea-faring System for Tagging, Attribution and Redistribution (SeaSTAR) is a GUI and CLI application for processing biodiversity data collected at sea. | 
SeaSTAR (Sea-faring System for Tagging, Attribution and Redistribution) is a CLI and GUI application for processing biodiversity data collected at sea.
## Use cases
Currently SeaSTAR focuses on processing data collected on the IFCB instrument. It has functionality for interactions with EcoTaxa and CRAB.
## How to install and use
```
$ pip install seastartool
Collecting seastartool
Downloading seastartool-0.1-py3-none-any.whl.metadata (1.1 kB)
[...]
Successfully installed seastartool-0.1
$ seastar ifcb_v4_features -i testdata/*.hdr -o testout/
```
| text/markdown | null | Alex Baldwin <contact@alexbaldwin.dev> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"numpy>=1.13.3",
"pyarrow>=12.0.1",
"planktofeatures>=0.1",
"libifcb>=0.8",
"crabdeposit>=0.9"
] | [] | [] | [] | [
"Homepage, https://github.com/NOC-OI/seastar",
"Issues, https://github.com/NOC-OI/seastar/issues"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-18T14:24:48.235341 | seastartool-0.11.tar.gz | 72,346 | 3d/67/e0888d15eccb75e6d5efe48378325ba097d3fbf7ab23a265deef88bfd64f/seastartool-0.11.tar.gz | source | sdist | null | false | ef2e8bf0f530a5ea6e5246c90c42ee3e | 4f925b69634aa66270cccf62b1b352784beacaed611cb983e5aa9bfa1aecfa22 | 3d67e0888d15eccb75e6d5efe48378325ba097d3fbf7ab23a265deef88bfd64f | GPL-3.0 | [
"LICENSE"
] | 240 |
2.4 | llm-context-engineering | 0.1.0 | A middleware layer for managing, budgeting, and optimizing LLM context windows. | This is a comprehensive design document for your Python package, `llm-context-manager`. You can use this directly as your project's `README.md` or as the architectural blueprint for development.
---
# Project: `llm-context-manager`
### **1. Background & Problem Statement**
Large Language Models (LLMs) are moving from simple "Prompt Engineering" (crafting a single query) to "Context Engineering" (managing a massive ecosystem of retrieved documents, tools, and history).
The current problem is **Context Pollution**:
1. **Overloading:** RAG (Retrieval Augmented Generation) pipelines often dump too much data, exceeding token limits.
2. **Noise:** Duplicate or irrelevant information confuses the model and increases hallucination rates.
3. **Formatting Chaos:** Different models (Claude vs. Llama vs. GPT) require different formatting (XML vs. Markdown vs. Plain Text), leading to messy, hard-to-maintain string concatenation code.
4. **Black Box:** Developers rarely see exactly what "context" was sent to the LLM until after a failure occurs.
**The Solution:** `llm-context-manager` acts as a **middleware layer** for the LLM pipeline. It creates a structured, optimized, and budget-aware "context payload" before it reaches the model.
---
### **2. Architecture: Where It Fits**
The package sits strictly between the **Retrieval/Agent Layer** (e.g., LangChain, LlamaIndex) and the **Execution Layer** (the LLM API).
#### **Diagram: The "Before" (Standard Pipeline)**
*Without `llm-context-manager`, retrieval is messy and often truncated arbitrarily.*
```mermaid
graph LR
A[User Query] --> B[LangChain Retriever]
B --> C{Result: 15 Docs}
C -->|Raw Dump| D[LLM Context Window]
D -->|Token Limit Exceeded!| E[Truncated/Error]
```
#### **Diagram: The "After" (With `llm-context-manager`)**
*With your package, the context is curated, prioritized, and formatted.*
```mermaid
graph LR
A[User Query] --> B[LangChain Retriever]
B --> C[Raw Data: 15 Docs + History]
C --> D[**llm-context-manager**]
subgraph "Your Middleware"
D --> E["1. Token Budgeting"]
E --> F["2. Semantic Pruning"]
F --> G["3. Formatting (XML/JSON)"]
end
G --> H[Optimized Prompt]
H --> I[LLM API]
```
---
### **3. Key Features & Tools**
Here is the breakdown of the 4 core modules, the features they provide, and the libraries powering them.
#### **Module A: The Budget Controller (`budget`)**
* **Goal:** Ensure the context never exceeds the model's limit (e.g., 8192 tokens) while keeping the most important information.
* **Feature:** `PriorityQueue`. Users assign a priority (Critical, High, Medium, Low) to every piece of context. If the budget is full, "Low" items are dropped first.
* **Supported Providers & Tools:**
* **OpenAI** (`gpt-4`, `o1`, `o3`, etc.): **`tiktoken`** — fast, local token counting.
* **HuggingFace** (`meta-llama/...`, `mistralai/...`): **`tokenizers`** — for open-source models.
* **Google** (`gemini-2.0-flash`, `gemma-...`): **`google-genai`** — API-based `count_tokens`.
* **Anthropic** (`claude-sonnet-4-20250514`, etc.): **`anthropic`** — API-based `count_tokens`.
* **Installation (pick what you need):**
```bash
pip install llm-context-manager[openai] # tiktoken
pip install llm-context-manager[huggingface] # tokenizers
pip install llm-context-manager[google] # google-genai
pip install llm-context-manager[anthropic] # anthropic
pip install llm-context-manager[all] # everything
```
#### **Module B: The Semantic Pruner (`prune`)**
* **Goal:** Remove redundancy. If three retrieved documents say "Python is great," keep only the best one.
* **Features:**
* **`Deduplicator` (block-level):** Calculates cosine similarity between context blocks and removes duplicate blocks. Among duplicates, the highest-priority block is kept.
* **`Deduplicator.deduplicate_chunks()` (chunk-level):** Splits a single block's content by separator (e.g. `\n\n`), deduplicates the chunks internally, and reassembles the cleaned content. Ideal for RAG results where multiple retrieved chunks within one block are semantically redundant.
* **Tools:**
* **`FastEmbed`**: Lightweight embedding generation (CPU-friendly, no heavy PyTorch needed).
* **`Numpy`**: For efficient vector math (dot products).
* **Installation:**
```bash
pip install llm-context-manager[prune]
```
#### **Module C: Context Distillation (`distill`)**
* **Goal:** Compress individual blocks by removing non-essential tokens (e.g., reduces a 5000-token document to 2500 tokens) using a small ML model.
* **Feature:** `Compressor`. Uses **LLMLingua-2** (small BERT-based token classifier) to keep only the most important words.
* **Tools:**
* **`llmlingua`**: Microsoft's library for prompt compression.
* **`onnxruntime`** / **`transformers`**: For running the small BERT model.
* **Installation:**
```bash
pip install llm-context-manager[distill]
```
#### **Module D: The Formatter (`format`)**
* **Goal:** Adapt the text structure to the specific LLM being used without changing the data.
* **Feature:** `ModelAdapter`.
* *Claude Mode:* Wraps data in XML tags (`<doc id="1">...</doc>`).
* *Llama Mode:* Uses specific Markdown headers or `[INST]` tags.
* **Tools:**
* **`Jinja2`**: For powerful, logic-based string templates.
* **`Pydantic`**: To enforce strict schema validation on the input data.
#### **Module E: Observability (`inspect`)**
* **Goal:** Let the developer see exactly what is happening.
* **Feature:** `ContextVisualizer` and `Snapshot`. Prints a colored bar chart of token usage to the terminal and saves the final prompt to a JSON file for debugging.
* **Tools:**
* **`Rich`**: For beautiful terminal output and progress bars.
---
### **4. Installation & Usage Guide**
#### **Installation**
```bash
pip install llm-context-manager
```
#### **Example Usage Code**
This is how a developer would use your package in their Python script.
```python
from context_manager import ContextEngine, ContextBlock
from context_manager.strategies import PriorityPruning
# 1. Initialize the Engine
engine = ContextEngine(
model="gpt-4",
token_limit=4000,
pruning_strategy=PriorityPruning()
)
# 2. Add Data (Raw from LangChain/User)
# System Prompt (Critical Priority)
engine.add(ContextBlock(
content="You are a helpful AI assistant for financial analysis.",
role="system",
priority="critical"
))
# User History (High Priority)
engine.add(ContextBlock(
content="User: What is the PE ratio of Apple?",
role="history",
priority="high"
))
# Retrieved Documents (Medium Priority - acting as 'Low' if space is tight)
docs = ["Apple PE is 28...", "Microsoft PE is 35...", "Banana prices are up..."]
for doc in docs:
engine.add(ContextBlock(
content=doc,
role="rag_context",
priority="medium"
))
# 3. BUILD THE CONTEXT
# This triggers the budgeting, pruning, and formatting
final_prompt = engine.compile()
# 4. INSPECT (Optional)
engine.visualize()
# Output: [██ system (10%) ][████ history (20%) ][██████ rag (40%) ][ unused (30%) ]
# 5. SEND TO LLM
openai.ChatCompletion.create(messages=final_prompt)
```
### **5. Roadmap for Development**
1. **v0.1 (MVP):** Implement `tiktoken` counting and `PriorityPruning`. Just get the math right.
2. **v0.2 (Structure):** Add `Jinja2` templates for "Claude Style" (XML) vs "OpenAI Style" (JSON).
3. **v0.3 (Smarts):** Integrate `FastEmbed` to automatically detect and remove duplicate documents.
4. **v0.4 (Vis):** Add the `Rich` terminal visualization.
This design gives you a clear path to building a high-value tool that solves a specific, painful problem for AI engineers. | text/markdown | Adipta Martulandi | null | null | null | null | ai, budget, context, llm, token | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engi... | [] | null | null | >=3.10 | [] | [] | [] | [
"anthropic>=0.40; extra == \"all\"",
"fastembed>=0.4; extra == \"all\"",
"google-genai>=1.0; extra == \"all\"",
"llmlingua>=0.2.2; extra == \"all\"",
"numpy>=1.24; extra == \"all\"",
"tiktoken>=0.7; extra == \"all\"",
"tokenizers>=0.19; extra == \"all\"",
"anthropic>=0.40; extra == \"anthropic\"",
"... | [] | [] | [] | [
"Homepage, https://github.com/adiptamartulandi/llm-context-manager",
"Repository, https://github.com/adiptamartulandi/llm-context-manager"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T14:24:30.391438 | llm_context_engineering-0.1.0.tar.gz | 24,468 | 71/90/0abe51d828cd8a08da2e97cf56918405b459bcf2c3d88852fb6528713d74/llm_context_engineering-0.1.0.tar.gz | source | sdist | null | false | afa168301db21979dba8d05322fde307 | 090ebaa6ed7b01f1d243ee4e1f2b47ea0abce098c7a2690a2c4eb2c5cf03aae1 | 71900abe51d828cd8a08da2e97cf56918405b459bcf2c3d88852fb6528713d74 | MIT | [] | 257 |
2.4 | gpiod-sysfs-proxy | 0.1.4 | User-space compatibility layer for linux GPIO sysfs interface | <!-- SPDX-License-Identifier: MIT -->
<!-- SPDX-FileCopyrightText: 2024 Bartosz Golaszewski <bartosz.golaszewski@linaro.org> -->
# gpiod-sysfs-proxy
[libgpiod](https://git.kernel.org/pub/scm/libs/libgpiod/libgpiod.git/)-based
compatibility layer for the linux GPIO sysfs interface.
It uses [FUSE](https://www.kernel.org/doc/html/v6.3/filesystems/fuse.html)
(Filesystem in User Space) in order to expose a filesystem that can be mounted
over `/sys/class/gpio` to simulate the kernel interface.
## Running
Running the script with a mountpoint parameter will mount the simulated gpio
class directory and then exit. The script can also be run with `-f` or `-d`
switches for foreground or debug operation respectively.
The recommended command-line mount options to use are:
```
gpiod-sysfs-proxy <mountpoint> -o nonempty -o allow_other -o default_permissions -o entry_timeout=0
```
This allows to mount the compatibility layer on non-empty `/sys/class/gpio`,
allows non-root users to access it, enables permission checks by the kernel
and disables caching of entry stats (for when we remove directories from the
script while the kernel doesn't know it).
For a complete list of available command-line options, please run:
```
gpiod-sysfs-proxy --help
```
## Caveats
Due to how FUSE works, there are certain limitations to the level of
compatibility we can assure as well as some other issues the user may need
to have to work around.
### Non-existent `/sys/class/gpio`
If the GPIO sysfs interface is disabled in Kconfig, the `/sys/class/gpio`
directory will not exist and the user-space can't create directories inside
of sysfs. There are two solutions: either the user can use a different
mountpount or - for full backward compatibility - they can use overlayfs on
top of `/sys/class` providing the missing `gpio` directory.
Example:
```
mkdir -p /run/gpio/sys /run/gpio/class/gpio /run/gpio/work
mount -t sysfs sysfs /run/gpio/sys
mount -t overlay overlay -o lowerdir=/run/gpio/sys/class,upperdir=/run/gpio/class,workdir=/run/gpio/work,ro
gpiod-sysfs-proxy /sys/class/gpio <options>
```
### Links in `/sys/class/gpio`
The kernel sysfs interface at `/sys/class/gpio` contains links to directories
living elsewhere (specifically: under the relevant device entries) in sysfs.
For obvious reasons we cannot replicate that so, instead we expose actual
directories representing GPIO chips and exported GPIO lines.
### Polling of the `value` attribute
We currently don't support multiple users polling the `value` attribute at
once. Also: unlike the kernel interface, reading from `value` will not block
after the value has been read once.
### Static GPIO base number
Some legacy GPIO drivers hard-code the base GPIO number. We don't yet support
it but it's planned as a future extension in the form of an argument that will
allow to associate a hard-coded base with a GPIO chip by its label.
## Similar projects
* [sysfs-gpio-shim](https://github.com/info-beamer/sysfs-gpio-shim), written in
C. Officially only supports Raspberry Pi.
| text/markdown | null | Bartosz Golaszewski <brgl@bgdev.pl> | null | null | null | null | [] | [] | null | null | >=3.6.0 | [] | [] | [] | [
"fuse-python>=1.0.9",
"gpiod>=2.1.0",
"pyudev>=0.24.0"
] | [] | [] | [] | [
"Homepage, https://github.com/brgl/gpiod-sysfs-proxy",
"Issues, https://github.com/brgl/gpiod-sysfs-proxy/issues"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-18T14:24:29.418220 | gpiod_sysfs_proxy-0.1.4.tar.gz | 7,984 | 4e/d0/e912c00555ece60988c2719d60de6493cd5a6accd521b50479d6b3947153/gpiod_sysfs_proxy-0.1.4.tar.gz | source | sdist | null | false | 3a6ce1847318df0564269735aecdcec0 | bb38e31e4046a7aa0101c53c9e7e2cf319c0cd9620b9ba1641e962fce44a1f3a | 4ed0e912c00555ece60988c2719d60de6493cd5a6accd521b50479d6b3947153 | MIT | [
"COPYING"
] | 232 |
2.4 | tobiko | 0.8.25 | OpenStack Testing Upgrades Library | ======
Tobiko
======
Test Big Cloud Operations
-------------------------
Tobiko is an OpenStack testing framework focusing on areas mostly
complementary to `Tempest <https://docs.openstack.org/tempest/latest/>`__.
While Tempest main focus has been testing OpenStack rest APIs, the main Tobiko
focus is to test OpenStack system operations while "simulating"
the use of the cloud as the final user would.
Tobiko's test cases populate the cloud with workloads such as Nova instances;
they execute disruption operations such as services/nodes restart; finally they
run test cases to validate that the cloud workloads are still functional.
Tobiko's test cases can also be used, for example, for testing that previously
created workloads are working right after OpenStack services update/upgrade
operation.
Project Requirements
--------------------
Tobiko Python framework is being automatically tested with below Python
versions:
- Python 3.8
- Python 3.9
- Python 3.10 (new)
and below Linux distributions:
- CentOS 9 / RHEL 8 (with Python 3.9)
- Ubuntu Focal (with Python 3.8)
- Ubuntu Jammy (with Python 3.10)
Tobiko has also been tested for development purposes with below OSes:
- OSX (with Python 3.6 to 3.10)
The Tobiko Python framework is being used to implement test cases. As Tobiko
can be executed on nodes that are not part of the cloud to test against, this
doesn't mean Tobiko requires cloud nodes have to run with one of above Python
versions or Linux distributions.
There is also a Docker file that can be used to create a container for running
test cases from any node that do support containers execution.
Main Project Goals
~~~~~~~~~~~~~~~~~~
- To test OpenStack and Red Hat OpenStack Platform projects before they are
released.
- To provide a Python framework to write system scenario test cases (create
and test workloads).
- To verify previously created workloads are working fine after executing
OpenStack nodes update/upgrade.
- To write white boxing test cases (to log to cloud nodes
for internal inspection purpose).
- To write disruptive test cases (to simulate
service disruptions like for example rebooting/interrupting a service to
verify cloud reliability).
- To provide Ansible roles implementing a workflow designed to run an ordered
sequence of test suites. For example a workflow could do below steps:
- creates workloads;
- run disruptive test cases (IE reboot OpenStack nodes or services);
- verify workloads are still working.
The main use of these roles is writing continuous integration jobs for Zuul
or other services like Jenkins (IE by using the Tobiko InfraRed plug-in).
- To provide tools to monitor and recollect the healthy status of the cloud as
seen from user perspective (black-box testing) or from an inside point of
view (white-box testing built around SSH client).
References
----------
* Free software: Apache License, Version 2.0
* Documentation: https://tobiko.readthedocs.io/
* Release notes: https://docs.openstack.org/releasenotes/tobiko/
* Source code: https://opendev.org/x/tobiko
* Bugs: https://storyboard.openstack.org/#!/project/x/tobiko
* Code review: https://review.opendev.org/q/project:x/tobiko
Related projects
~~~~~~~~~~~~~~~~
* OpenStack: https://www.openstack.org/
* Red Hat OpenStack Platform: https://www.redhat.com/en/resources/openstack-platform-datasheet
* Python: https://www.python.org/
* Testtools: https://github.com/testing-cabal/testtools
* Ansible: https://www.ansible.com/
* InfraRed: https://infrared.readthedocs.io/en/latest/
* DevStack: https://docs.openstack.org/devstack/latest/
* Zuul: https://docs.openstack.org/infra/system-config/zuul.html
* Jenkins: https://www.jenkins.io/
| null | OpenStack | openstack-discuss@lists.openstack.org | null | null | null | setup, distutils | [
"Environment :: OpenStack",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programm... | [] | https://tobiko.readthedocs.io/ | null | >=3.9 | [] | [] | [] | [
"fixtures>=3.0.0",
"keystoneauth1>=4.3.0",
"metalsmith>=1.6.2",
"netaddr>=0.8.0",
"neutron-lib>=2.7.0",
"openstacksdk>=0.31.2",
"oslo.concurrency>=3.26.0",
"oslo.config>=8.4.0",
"oslo.log>=4.4.0",
"oslo.utils>=4.12.3",
"packaging>=20.4",
"paramiko>=2.9.2",
"pbr>=5.5.1",
"psutil>=5.8.0",
... | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T14:24:22.219433 | tobiko-0.8.25.tar.gz | 551,607 | 64/e0/fa7cb16f27a0464e7636610759ae2c47c66b5012f8ee02bc20871273356f/tobiko-0.8.25.tar.gz | source | sdist | null | false | 6d98f655721786f10f6459df4b04639f | 7f6d691a99407c274a6afeec0eb12e0c6ef5ad08cc46c6e38274500b75a3ca70 | 64e0fa7cb16f27a0464e7636610759ae2c47c66b5012f8ee02bc20871273356f | null | [
"LICENSE"
] | 256 |
2.4 | rakam-systems-agent | 0.1.1rc13 | AI Agents framework with Pydantic AI support and LLM Gateway integration | # Rakam System Agent
The agent package of Rakam Systems providing AI agent implementations powered by Pydantic AI.
## Overview
`rakam-systems-agent` provides flexible AI agents with tool integration, chat history, and LLM gateway abstractions. This package depends on `rakam-systems-core`.
## Features
- **Configuration-First Design**: Change agents without code changes - just update YAML files
- **Async/Sync Support**: Full support for both synchronous and asynchronous agent operations
- **Tool Integration**: Easy tool definition and integration using the `Tool.from_schema` pattern
- **Model Settings**: Control model behavior including parallel tool calls, temperature, and max tokens
- **Pydantic AI Powered**: Built on top of Pydantic AI library
- **Streaming Support**: Both sync and async streaming interfaces
- **Chat History**: Multiple backends (JSON, SQLite, PostgreSQL)
- **LLM Gateway**: Unified interface for OpenAI and Mistral AI
### 🎯 Configuration Convenience
The agent package supports comprehensive YAML configuration, allowing you to:
- **Switch LLM models** without changing code (GPT-4 → GPT-4o-mini → Claude)
- **Tune parameters** instantly (temperature, max tokens, parallel tools)
- **Modify prompts** without redeployment
- **Add/remove tools** by editing config files
- **Enable/disable tracking** via configuration
- **Use different configs** for different environments
**Example**: Switch from GPT-4o to GPT-4o-mini by changing one line in your YAML - no code changes, no redeployment needed!
## Installation
```bash
# Requires core package
pip install -e ./rakam-systems-core
# Install agent package
pip install -e ./rakam-systems-agent
```
## Quick Start
### Using BaseAgent (Pydantic AI-powered)
```python
import asyncio
from rakam_systems_agent import BaseAgent
from rakam_systems_core.interfaces import ModelSettings
from rakam_systems_core.interfaces.tool import ToolComponent as Tool
# Define a tool function
async def get_weather(city: str) -> dict:
"""Get weather information for a city"""
# Your implementation here
return {"city": city, "temperature": 72, "condition": "sunny"}
# Create an agent with tools
agent = BaseAgent(
name="weather_agent",
model="openai:gpt-4o",
system_prompt="You are a helpful weather assistant.",
tools=[
Tool.from_schema(
function=get_weather,
name='get_weather',
description='Get weather information for a city',
json_schema={
'type': 'object',
'properties': {
'city': {'type': 'string', 'description': 'The city name'},
},
'required': ['city'],
'additionalProperties': False,
},
takes_ctx=False,
),
],
)
# Run the agent
async def main():
result = await agent.arun(
"What's the weather in San Francisco?",
model_settings=ModelSettings(parallel_tool_calls=True),
)
print(result.output_text)
asyncio.run(main())
```
## Core Components
### AgentComponent
The base abstract class for all agents. Provides:
- `run()` / `arun()`: Execute the agent synchronously or asynchronously
- `stream()` / `astream()`: Stream responses
- Support for tools, model settings, and dependencies
### BaseAgent
The core agent implementation powered by Pydantic AI. This is the primary agent class in our system. Features:
- Direct integration with Pydantic AI's Agent
- Full support for parallel tool calls
- Automatic conversion between our interfaces and Pydantic AI's
- Support for both traditional tool lists and ToolRegistry/ToolInvoker system
### Tool
Wrapper for tool functions compatible with Pydantic AI's `Tool.from_schema` pattern:
```python
Tool.from_schema(
function=my_function,
name='my_function',
description='What this function does',
json_schema={
'type': 'object',
'properties': {...},
'required': [...],
},
takes_ctx=False,
)
```
### ModelSettings
Configure model behavior:
```python
ModelSettings(
parallel_tool_calls=True, # Enable parallel tool execution
temperature=0.7, # Control randomness
max_tokens=1000, # Limit response length
)
```
## Advanced Usage
### Parallel vs Sequential Tool Calls
Control whether tools are called in parallel or sequentially:
```python
# Parallel (faster for independent tools)
result = await agent.arun(
"Get weather for NYC and LA",
model_settings=ModelSettings(parallel_tool_calls=True),
)
# Sequential (for dependent operations)
result = await agent.arun(
"Get weather for NYC and LA",
model_settings=ModelSettings(parallel_tool_calls=False),
)
```
### Using Dependencies
Pass context/dependencies to your agent:
```python
class Deps:
def __init__(self, user_id: str):
self.user_id = user_id
agent = BaseAgent(
deps_type=Deps,
# ...
)
result = await agent.arun(
"Process this",
deps=Deps(user_id="123"),
)
```
### Streaming Responses
```python
async for chunk in agent.astream("Tell me a story"):
print(chunk, end='', flush=True)
```
## API Reference
### AgentComponent
```python
class AgentComponent(BaseComponent):
def __init__(
self,
name: str,
config: Optional[Dict[str, Any]] = None,
model: Optional[str] = None,
deps_type: Optional[Type[Any]] = None,
system_prompt: Optional[str] = None,
tools: Optional[List[Any]] = None,
)
def run(
self,
input_data: Union[str, AgentInput],
deps: Optional[Any] = None,
model_settings: Optional[ModelSettings] = None
) -> AgentOutput
async def arun(
self,
input_data: Union[str, AgentInput],
deps: Optional[Any] = None,
model_settings: Optional[ModelSettings] = None
) -> AgentOutput
```
### Tool
```python
class Tool:
@classmethod
def from_schema(
cls,
function: Callable[..., Any],
name: str,
description: str,
json_schema: Dict[str, Any],
takes_ctx: bool = False,
) -> "Tool"
```
### ModelSettings
```python
class ModelSettings:
def __init__(
self,
parallel_tool_calls: bool = True,
temperature: Optional[float] = None,
max_tokens: Optional[int] = None,
**kwargs: Any
)
```
## Examples
See the `examples/ai_agents_examples/` directory in the main repository for complete examples demonstrating:
- Multiple tool definitions
- Parallel vs sequential tool calls
- Performance comparisons
- Complex multi-tool workflows
- Chat history integration
- RAG systems
## Package Structure
```
rakam-systems-agent/
├── src/rakam_systems_agent/
│ ├── components/
│ │ ├── base_agent.py # BaseAgent (Pydantic AI-powered)
│ │ ├── llm_gateway/ # LLM provider gateways
│ │ ├── chat_history/ # Chat history backends
│ │ ├── tools/ # Built-in tools
│ │ └── __init__.py # Exports
│ └── server/ # MCP server
└── pyproject.toml
```
## Best Practices
1. **Use async when possible**: Async operations are more efficient, especially with tools
2. **Enable parallel tool calls**: For independent operations, parallel execution is much faster
3. **Provide clear tool descriptions**: Better descriptions help the LLM use tools correctly
4. **Use type hints**: JSON schemas should match your function signatures
5. **Handle errors gracefully**: Tools should catch and return meaningful errors
## Migration Guide
### Using BaseAgent
`BaseAgent` is the core Pydantic AI-powered agent implementation in this framework.
```python
from rakam_systems_agent import BaseAgent
agent = BaseAgent(
name="agent",
model="openai:gpt-4o",
system_prompt="You are a helpful assistant."
)
```
## Troubleshooting
### ImportError: pydantic_ai not installed
Install Pydantic AI:
```bash
pip install pydantic_ai
```
### Tool not being called
Check:
1. Tool description is clear and relevant
2. JSON schema matches function signature
3. System prompt doesn't contradict tool usage
### Performance issues
- Enable `parallel_tool_calls=True` for independent operations
- Use async functions for I/O-bound operations
- Consider caching tool results when appropriate
## Contributing
When adding new agent types:
1. Inherit from `BaseAgent`
2. Implement `ainfer()` for async or `infer()` for sync
3. Add tests in `tests/`
4. Update this README with examples
## License
See main project LICENSE file.
| text/markdown | null | Mohamed Hilel <mohammedjassemhlel@gmail.com>, Peng Zheng <pengzheng990630@outlook.com> | null | null | null | null | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"psycopg2-binary",
"pydantic-ai<2.0.0,>=1.11.0",
"pydantic>=2.11.5",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"rakam-systems-core==0.1.1rc10",
"mistralai<2.0.0,>=1.9.0; extra == \"all\"",
"openai<3.0.0,>=1.37.0; extra == \"all\"",
"tiktoken; extra == \"all\"",
"black>=23.0.0; extra == \"dev\"",
"p... | [] | [] | [] | [
"Homepage, https://github.com/Rakam-AI/rakam_systems-inhouse",
"Documentation, https://github.com/Rakam-AI/rakam_systems-inhouse",
"Repository, https://github.com/Rakam-AI/rakam_systems-inhouse"
] | uv/0.7.6 | 2026-02-18T14:21:55.381942 | rakam_systems_agent-0.1.1rc13.tar.gz | 255,422 | a7/d9/dcca33b8504e5e7781bbe5aaf4e245b88e45a78556746365354de20504b1/rakam_systems_agent-0.1.1rc13.tar.gz | source | sdist | null | false | b1a97f993081a976983a2ac62d95c940 | 091c2e2a052a64bee4bacfce0b6375ef49515127161a32a53f90ad08bf29058c | a7d9dcca33b8504e5e7781bbe5aaf4e245b88e45a78556746365354de20504b1 | null | [] | 243 |
2.1 | roboflow | 1.2.14 | Official Python package for working with the Roboflow API | <div align="center">
<p>
<a align="center" href="" target="_blank">
<img
width="100%"
src="https://github.com/roboflow/roboflow-python/assets/37276661/528ed065-d5ac-4f9a-942e-0d211b8d97de"
>
</a>
</p>
<br>
[notebooks](https://github.com/roboflow/notebooks) | [inference](https://github.com/roboflow/inference) | [autodistill](https://github.com/autodistill/autodistill) | [collect](https://github.com/roboflow/roboflow-collect) | [supervision](https://github.com/roboflow/supervision)
<br>
[](https://badge.fury.io/py/roboflow)
[](https://pypistats.org/packages/roboflow)
[](https://github.com/roboflow/roboflow-python/blob/main/LICENSE.md)
[](https://badge.fury.io/py/roboflow)
</div>
# Roboflow Python Package
[Roboflow](https://roboflow.com) provides everything you need to build and deploy computer vision models. `roboflow-python` is the official Roboflow Python package. `roboflow-python` enables you to interact with models, datasets, and projects hosted on Roboflow.
With this Python package, you can:
1. Create and manage projects;
2. Upload images, annotations, and datasets to manage in Roboflow;
3. Start training vision models on Roboflow;
4. Run inference on models hosted on Roboflow, or Roboflow models self-hosted via [Roboflow Inference](https://github.com/roboflow/inference), and more.
The Python package is documented on the [official Roboflow documentation site](https://docs.roboflow.com/api-reference/introduction). If you are developing a feature for this Python package, or need a full Python library reference, refer to the [package developer documentation](https://roboflow.github.io/roboflow-python/).
## 💻 Installation
You will need to have `Python 3.8` or higher set up to use the Roboflow Python package.
Run the following command to install the Roboflow Python package:
```bash
pip install roboflow
```
For desktop features, use:
```bash
pip install "roboflow[desktop]"
```
<details>
<summary>Install from source</summary>
You can also install the Roboflow Python package from source using the following commands:
```bash
git clone https://github.com/roboflow-ai/roboflow-python.git
cd roboflow-python
python3 -m venv env
source env/bin/activate
pip install .
```
</details>
<details>
<summary>Command line tool</summary>
By installing roboflow python package you can use some of its functionality in the command line (without having to write python code).
See [CLI-COMMANDS.md](CLI-COMMANDS.md)
</details>
## 🚀 Getting Started
To use the Roboflow Python package, you first need to authenticate with your Roboflow account. You can do this by running the following command:
```python
import roboflow
roboflow.login()
```
<details>
<summary>Authenticate with an API key</summary>
You can also authenticate with an API key by using the following code:
```python
import roboflow
rf = roboflow.Roboflow(api_key="")
```
[Learn how to retrieve your Roboflow API key](https://docs.roboflow.com/api-reference/authentication#retrieve-an-api-key).
</details>
## Quickstart
Below are some common methods used with the Roboflow Python package, presented concisely for reference. For a full library reference, refer to the [Roboflow API reference documentation](https://docs.roboflow.com/api-reference).
```python
import roboflow
# Pass API key or use roboflow.login()
rf = roboflow.Roboflow(api_key="MY_API_KEY")
workspace = rf.workspace()
# creating object detection model that will detect flowers
project = workspace.create_project(
project_name="Flower detector",
project_type="object-detection", # Or "classification", "instance-segmentation", "semantic-segmentation"
project_license="MIT", # "private" for private projects, only available for paid customers
annotation="flowers" # If you plan to annotate lillys, sunflowers, etc.
)
# upload a dataset
workspace.upload_dataset(
dataset_path="./dataset/",
num_workers=10,
dataset_format="yolov8", # supports yolov8, yolov5, and Pascal VOC
project_license="MIT",
project_type="object-detection"
)
version = project.version("VERSION_NUMBER")
# upload model weights - yolov10
version.deploy(model_type="yolov10", model_path=f”{HOME}/runs/detect/train/”, filename="weights.pt")
# run inference
model = version.model
img_url = "https://media.roboflow.com/quickstart/aerial_drone.jpeg"
predictions = model.predict(img_url, hosted=True).json()
print(predictions)
```
## Library Structure
The Roboflow Python library is structured using the same Workspace, Project, and Version ontology that you will see in the Roboflow application.
```python
import roboflow
roboflow.login()
rf = roboflow.Roboflow()
workspace = rf.workspace("WORKSPACE_URL")
project = workspace.project("PROJECT_URL")
version = project.version("VERSION_NUMBER")
```
The workspace, project, and version parameters are the same as those you will find in the URL addresses at app.roboflow.com and universe.roboflow.com.
Within the workspace object you can perform actions like making a new project, listing your projects, or performing active learning where you are using predictions from one project's model to upload images to a new project.
Within the project object, you can retrieve metadata about the project, list versions, generate a new dataset version with preprocessing and augmentation settings, train a model in your project, and upload images and annotations to your project.
Within the version object, you can download the dataset version in any model format, train the version on Roboflow, and deploy your own external model to Roboflow.
## 🏆 Contributing
We would love your input on how we can improve the Roboflow Python package! Please see our [contributing guide](https://github.com/roboflow/roboflow-python/blob/main/CONTRIBUTING.md) to get started. Thank you 🙏 to all our contributors!
<br>
<div align="center">
<a href="https://youtube.com/roboflow">
<img
src="https://media.roboflow.com/notebooks/template/icons/purple/youtube.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949634652"
width="3%"
/>
</a>
<img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
<a href="https://roboflow.com">
<img
src="https://media.roboflow.com/notebooks/template/icons/purple/roboflow-app.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949746649"
width="3%"
/>
</a>
<img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
<a href="https://www.linkedin.com/company/roboflow-ai/">
<img
src="https://media.roboflow.com/notebooks/template/icons/purple/linkedin.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633691"
width="3%"
/>
</a>
<img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
<a href="https://docs.roboflow.com">
<img
src="https://media.roboflow.com/notebooks/template/icons/purple/knowledge.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949634511"
width="3%"
/>
</a>
<img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
<a href="https://disuss.roboflow.com">
<img
src="https://media.roboflow.com/notebooks/template/icons/purple/forum.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633584"
width="3%"
/>
<img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
<a href="https://blog.roboflow.com">
<img
src="https://media.roboflow.com/notebooks/template/icons/purple/blog.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633605"
width="3%"
/>
</a>
</a>
</div>
| text/markdown | Roboflow | support@roboflow.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | https://github.com/roboflow-ai/roboflow-python | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.8.18 | 2026-02-18T14:21:44.625902 | roboflow-1.2.14.tar.gz | 87,835 | 69/2b/458568ef86cf2adc3db195a2acfa7eee5c1aa75a742c0f28774241e2eaa6/roboflow-1.2.14.tar.gz | source | sdist | null | false | a08c926529aaef1a7a81089f99b8113f | 647ad2d961c1bdd400050a29c9cbd4b04b5c98198f883930a57fb60a8add2bd8 | 692b458568ef86cf2adc3db195a2acfa7eee5c1aa75a742c0f28774241e2eaa6 | null | [] | 40,763 |
2.4 | rakam-systems-core | 0.1.1rc10 | A Package containing core logic and schema for rakam-systems | # Rakam System Core
The core package of Rakam Systems providing foundational interfaces, base components, and utilities.
## Overview
`rakam-systems-core` is the foundation of the Rakam Systems framework. It provides:
- **Base Component**: Abstract base class with lifecycle management
- **Interfaces**: Standard interfaces for agents, tools, vector stores, embeddings, and loaders
- **Configuration System**: YAML/JSON configuration loading and validation
- **Tracking System**: Input/output tracking for debugging and evaluation
- **Logging Utilities**: Structured logging with color support
This package is required by both `rakam-systems-agent` and `rakam-systems-vectorstore`.
## Installation
```bash
pip install -e ./rakam-systems-core
```
## Key Components
### BaseComponent
All components extend `BaseComponent` which provides:
- Lifecycle management with `setup()` and `shutdown()` methods
- Auto-initialization via `__call__`
- Context manager support
- Built-in evaluation harness
```python
from rakam_systems_core.base import BaseComponent
class MyComponent(BaseComponent):
def setup(self):
super().setup()
# Initialize resources
def shutdown(self):
# Clean up resources
super().shutdown()
def run(self, *args, **kwargs):
# Main logic
pass
```
### Interfaces
Standard interfaces for building AI systems:
- **AgentComponent**: AI agents with sync/async support
- **ToolComponent**: Callable tools for agents
- **LLMGateway**: LLM provider abstraction
- **VectorStore**: Vector storage interface
- **EmbeddingModel**: Text embedding interface
- **Loader**: Document loading interface
- **Chunker**: Text chunking interface
```python
from rakam_systems_core.interfaces.agent import AgentComponent
from rakam_systems_core.interfaces.tool import ToolComponent
from rakam_systems_core.interfaces.vectorstore import VectorStore
```
### Configuration System
Load and validate configurations from YAML files:
```python
from rakam_systems_core.config_loader import ConfigurationLoader
loader = ConfigurationLoader()
config = loader.load_from_yaml("agent_config.yaml")
agent = loader.create_agent("my_agent", config)
```
### Tracking System
Track inputs and outputs for debugging:
```python
from rakam_systems_core.tracking import TrackingMixin
class MyAgent(TrackingMixin, BaseAgent):
pass
agent.enable_tracking(output_dir="./tracking")
# Use agent...
agent.export_tracking_data(format='csv')
```
## Package Structure
```
rakam-systems-core/
├── src/rakam_systems_core/
│ ├── ai_core/
│ │ ├── base.py # BaseComponent
│ │ ├── interfaces/ # Standard interfaces
│ │ ├── config_loader.py # Configuration system
│ │ ├── tracking.py # I/O tracking
│ │ └── mcp/ # MCP server support
│ └── ai_utils/
│ └── logging.py # Logging utilities
└── pyproject.toml
```
## Usage in Other Packages
### Agent Package
```python
# rakam-systems-agent uses core interfaces
from rakam_systems_core.interfaces.agent import AgentComponent
from rakam_systems_agent import BaseAgent
agent = BaseAgent(name="my_agent", model="openai:gpt-4o")
```
### Vectorstore Package
```python
# rakam-systems-vectorstore uses core interfaces
from rakam_systems_core.interfaces.vectorstore import VectorStore
from rakam_systems_vectorstore import ConfigurablePgVectorStore
store = ConfigurablePgVectorStore(config=config)
```
## Development
This package contains only interfaces and utilities. To contribute:
1. Install in editable mode: `pip install -e ./rakam-systems-core`
2. Make changes to interfaces or utilities
3. Ensure backward compatibility with agent and vectorstore packages
4. Update version in `pyproject.toml`
## License
Apache 2.0
## Links
- [Main Repository](https://github.com/Rakam-AI/rakam-systems)
- [Documentation](../docs/)
- [Agent Package](../rakam-systems-agent/)
- [Vectorstore Package](../rakam-systems-vectorstore/)
| text/markdown | null | Mohamed Hilel <mohammedjassemhlel@gmail.com>, Peng Zheng <pengzheng990630@outlook.com>, somebodyawesome-dev <luckynoob2011830@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"pydantic",
"pyyaml"
] | [] | [] | [] | [] | uv/0.7.6 | 2026-02-18T14:21:15.660090 | rakam_systems_core-0.1.1rc10.tar.gz | 138,033 | 76/29/2ea7d1875a03e24e70eee16c8ae7d1db31671ce4bb157e94938d9e15801d/rakam_systems_core-0.1.1rc10.tar.gz | source | sdist | null | false | 70c58c9b85b6723076f10cc754f2a086 | 706c9186b2c47129e88fd0a0e2278175479fc9acf42bcd97baa9d6d1fcda34dd | 76292ea7d1875a03e24e70eee16c8ae7d1db31671ce4bb157e94938d9e15801d | null | [] | 322 |
2.4 | langgraphics | 0.1.0b1 | Visualize live LangGraph execution and see how your agent thinks as it runs. | <p align="center">
<picture class="github-only">
<source media="(prefers-color-scheme: light)" srcset="https://github.com/user-attachments/assets/4e168df3-d45f-43fa-bd93-e71f8ca33d24">
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/user-attachments/assets/5bceb55b-0588-4f35-91a9-2287c6db0310">
<img alt="LangGraphics" src="https://github.com/user-attachments/assets/4e168df3-d45f-43fa-bd93-e71f8ca33d24" width="25%">
</picture>
</p>
**LangGraphics** is a live visualization tool for [LangGraph](https://github.com/langchain-ai/langgraph) agents. It's
especially useful when working with large networks: graphs with many nodes, branching conditions, and cycles are hard to
reason about from the logs alone.
<p align="center">
<img alt="Demo" src="https://github.com/user-attachments/assets/1db519fb-0dd9-4fee-8bc8-f6b12cbf1342" width="80%">
</p>
## Why it helps
Seeing the execution path visually makes it immediately obvious which branches were taken, where loops occurred, and
where the agent got stuck or failed. It also helps when onboarding to an unfamiliar graph - a single run tells you more
about the workflow than reading the graph definition ever could.
## How to use
One line is all it takes - wrap the compiled graph of your agent workflow with LangGraphics' `watch` function before
invoking it, and the visualization opens in your browser automatically, tracking the agent in real time.
```python
from langgraph.graph import StateGraph, MessagesState
from langgraphics import watch
workflow = StateGraph(MessagesState)
workflow.add_node(...)
workflow.add_edge(...)
graph = watch(workflow.compile())
await graph.ainvoke({"messages": [...]})
```
Works with any LangGraph agent, no matter how simple or complex the graph is. Add it during a debugging session, or keep
it in while you're actively building - it has no effect on how the agent behaves or what it returns.
## Features
| Feature | [LangGraphics](https://github.com/proactive-agent/langgraphics) | [LangFuse](https://github.com/langfuse/langfuse) | [LangSmith Studio](https://smith.langchain.com) |
|------------------------------------|-----------------------------------------------------------------|--------------------------------------------------|-------------------------------------------------|
| Open-source | ✅ | ✅ | ❌ |
| Unlimited free usage | ✅ | ✅ | ❌ |
| Self-hosting supported | ✅ | ✅ | ❌ |
| No vendor lock-in | ✅ | ✅ | ❌ |
| Works without external services | ✅ | ❌ | ❌ |
| Simple setup | ✅ | ❌ | ❌ |
| One-line integration | ✅ | ❌ | ❌ |
| No API key required | ✅ | ❌ | ❌ |
| No instrumentation required | ✅ | ❌ | ❌ |
| Runs fully locally | ✅ | ❌ | ❌ |
| Native LangGraph visualization | ✅ | ❌ | ❌ |
| Real-time execution graph | ✅ | ❌ | ❌ |
| Data stays local by default | ✅ | ❌ | ❌ |
| Low learning curve | ✅ | ❌ | ❌ |
| Built-in prompt evaluation | ❌ | ❌ | ✅ |
| Built-in observability dashboards | ❌ | ✅ | ✅ |
| Built-in cost and latency tracking | ❌ | ✅ | ✅ |
| Production monitoring capabilities | ❌ | ✅ | ✅ |
## Contribute
Any contribution is welcome. Feel free to open an issue or a discussion if you have any questions not covered here. If
you have any ideas or suggestions, please open a pull request.
## License
Copyright (C) 2026 Artyom Vancyan. [MIT](https://github.com/proactive-agent/langgraphics/blob/main/LICENSE)
| text/markdown | Artyom Vancyan | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: P... | [] | null | null | >=3.10 | [] | [] | [] | [
"langchain-core>=1.0.0",
"langgraph>=1.0.0",
"websockets>=14.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:21:13.626640 | langgraphics-0.1.0b1.tar.gz | 224,665 | 81/06/883ffac87d3ce392dbb3e2496b6baec41e4826fd1d07a975d310950f1a7c/langgraphics-0.1.0b1.tar.gz | source | sdist | null | false | 27d0405a02c7f52f551e4e4bf80c9fd8 | 81b6cc81dc7f9233b01e56162134a9543241cd9796ae93b4e299fbb4622e7ab2 | 8106883ffac87d3ce392dbb3e2496b6baec41e4826fd1d07a975d310950f1a7c | MIT | [
"LICENSE"
] | 211 |
2.4 | ff-ban | 0.1.0 | Free Fire ban tool – BLACKApis BAN V1 | # FF Ban
A command-line tool to ban Free Fire accounts (educational purposes only).
## Installation
```bash
pip install ff-ban
| text/markdown | null | Your Name <you@example.com> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"requests>=2.25.0",
"cfonts>=1.5.2"
] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/ff-ban"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T14:20:59.640913 | ff_ban-0.1.0.tar.gz | 5,170 | 09/9b/3b25aabbd9abe3d24207a0f0ad6dfc892ae6dbf11f97efa167eeecfbce80/ff_ban-0.1.0.tar.gz | source | sdist | null | false | 725a98a98c60e4a1579291b2a4ea9f97 | 7999c396e189325c7baf4cf0b26b6ffbc3dd996495490c984af8e661d47fbf40 | 099b3b25aabbd9abe3d24207a0f0ad6dfc892ae6dbf11f97efa167eeecfbce80 | null | [] | 381 |
2.4 | email-to-calendar | 1.0.7 | Takes emails from an IMAP server, parses the body, and creates event(s) in a CalDAV calendar | # E-Mail to calendar Converter
The point of this application is to search an IMAP account, look for emails based on certain criteria(s), and parse
the content, using regex, and automatically create calendar events in an iCal account.
## TO-DO
- [X] Get e-mails, and save ID to sqlite db to avoid duplicates
- [X] Save calendar events to sqlite db to avoid duplicates
- [X] Add config to backfill (check all emails from an optional certain date), or use most recent email
- [X] If using most recent, when new email arrives, remove events not present, and add new ones
- [ ] If new email comes in with updated events, update event in calendar instead of creating a new one
- [ ] Using email summary check for words like `Cancelled`, etc. to delete events
- [ ] If event already exists, check if details have changed, and update if necessary
- [ ] Investigate IMAP IDLE (push instead of poll)
- [X] Make sure all day events are handled correctly
- [ ] Add Docker Model Runner support
- [ ] Add 'validate' function for events, and if it fails, have AI re-process that event
## Environment Variables
| Name | Description | Type | Default Value | Allowed Values |
|------|-------------|------|---------------|----------------|
| | | | | |
| | | | | |
| | | | | |
| text/markdown | null | jnstockley <jnstockley@users.noreply.github.com> | null | null | null | starter, template, python | [
"Programming Language :: Python :: 3"
] | [] | null | null | <4.0,>=3.13 | [] | [] | [] | [
"apprise==1.9.7",
"caldav==2.2.6",
"imapclient==3.1.0",
"markdownify==1.2.2",
"pydantic-ai-slim[openai]==1.61.0",
"pydantic-settings==2.13.0",
"pydantic[email]==2.12.5",
"sqlmodel==0.0.34",
"tzlocal==5.3.1",
"vobject==0.9.9",
"python-dotenv==1.2.1"
] | [] | [] | [] | [
"Homepage, https://github.com/jnstockley/email-to-calendar",
"Repository, https://github.com/jnstockley/email-to-calendar.git",
"Issues, https://github.com/jnstockley/email-to-calendar/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:20:03.774706 | email_to_calendar-1.0.7.tar.gz | 29,418 | 78/a8/18121a00e852fed6f81fc5ffc8ad579a9e1e6849c0411da45c818605400e/email_to_calendar-1.0.7.tar.gz | source | sdist | null | false | 8f0273e03afe4ea0288a0ef2ab230a63 | c18a6e2426d3d56ec5202a7a6d56a10154c4c4d5348090e041aa8aa3826b12d2 | 78a818121a00e852fed6f81fc5ffc8ad579a9e1e6849c0411da45c818605400e | null | [
"LICENSE"
] | 234 |
2.4 | delphai-rpc | 5.3.6 | Queue-based RPC client, server and workflow | # delphai-rpc
Queue-based RPC client and server
| text/markdown | Anton Ryzhov | anton@delphai.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"aio-pika<10.0,>=9.4",
"pydantic<3.0.0,>=2.6.0",
"msgpack<2,>=1",
"prometheus-client<0.21,>=0.4",
"tenacity<9,>=8",
"zstandard<0.24.0,>=0.23.0"
] | [] | [] | [] | [] | poetry/2.2.0 CPython/3.11.14 Linux/6.14.0-1017-azure | 2026-02-18T14:19:35.704379 | delphai_rpc-5.3.6.tar.gz | 15,200 | 4b/12/3d5fb2adf7d1069ef9e0b2bc9a649cfdf76335278d5160b558d3ef3d9a81/delphai_rpc-5.3.6.tar.gz | source | sdist | null | false | 281d2fbf6effb7ba725197b6f8458536 | 66ab414436d11f81c68de83dfb7dca38d3c4b01c641bdf0dc9f45a3ca6da49aa | 4b123d5fb2adf7d1069ef9e0b2bc9a649cfdf76335278d5160b558d3ef3d9a81 | null | [] | 244 |
2.4 | basalt_sdk | 1.1.7 | Basalt SDK for python | # Basalt SDK
Basalt is a powerful tool for managing AI prompts, monitoring AI applications, and their release workflows. This SDK is the official Python package for interacting with your Basalt prompts and monitoring your AI applications.
## Installation
Install the Basalt SDK via pip:
```bash
pip install basalt-sdk
```
### Optional Instrumentation Dependencies
The SDK includes optional OpenTelemetry instrumentation packages for various LLM providers, vector databases, and frameworks. You can install only the instrumentations you need:
#### LLM Provider Instrumentations
```bash
# Individual providers (10 available)
pip install basalt-sdk[openai]
pip install basalt-sdk[anthropic]
pip install basalt-sdk[google-generativeai] # Google Gemini
pip install basalt-sdk[bedrock]
pip install basalt-sdk[vertex-ai]
pip install basalt-sdk[ollama]
...
# Multiple providers
pip install basalt-sdk[openai,anthropic]
# All LLM providers (10 providers)
pip install basalt-sdk[llm-all]
```
**Note:** The NEW Google GenAI SDK instrumentation is not yet available on PyPI. Use `google-generativeai` for Gemini for now.
#### Vector Database Instrumentations
```bash
# Individual vector databases
pip install basalt-sdk[chromadb]
pip install basalt-sdk[pinecone]
pip install basalt-sdk[qdrant]
# All vector databases
pip install basalt-sdk[vector-all]
```
#### Framework Instrumentations
```bash
# Individual frameworks
pip install basalt-sdk[langchain]
pip install basalt-sdk[llamaindex]
# All frameworks
pip install basalt-sdk[framework-all]
```
#### Install Everything
```bash
# Install all available instrumentations
pip install basalt-sdk[all]
```
**Note:** These instrumentation packages are automatically activated when you enable telemetry in the Basalt SDK. They provide automatic tracing for your LLM provider calls, vector database operations, and framework usage.
## Usage
### Importing and Initializing the SDK
To get started, import the `Basalt` class and initialize it with your API key. Telemetry is enabled by default via OpenTelemetry, but can be configured or disabled:
```python
from basalt import Basalt, TelemetryConfig
# Basic initialization with API key
basalt = Basalt(api_key="my-dev-api-key")
# Disable all telemetry
basalt = Basalt(api_key="my-dev-api-key", enable_telemetry=False)
# Advanced telemetry configuration
telemetry = TelemetryConfig(
service_name="my-app",
environment="staging",
enabled_providers=["openai", "anthropic"], # Optional: selective instrumentation
)
basalt = Basalt(api_key="my-dev-api-key", telemetry_config=telemetry)
# Or use client-level parameters (simpler)
basalt = Basalt(
api_key="my-dev-api-key",
enabled_instruments=["openai", "anthropic"]
)
# Configure global metadata when constructing the client
basalt = Basalt(
api_key="my-dev-api-key",
observability_metadata={"env": "staging"},
)
# Don't forget to shutdown the client when done
# This flushes any pending telemetry data
basalt.shutdown()
```
See `examples/telemetry_example.py` for a more complete walkthrough covering decorators, context managers, and custom exporters.
### Telemetry & Observability
The SDK includes comprehensive OpenTelemetry integration for observability:
- `TelemetryConfig` centralizes all observability options including:
- Service name/version and deployment environment
- Custom exporter configuration
- Lightweight tracing wrappers for Basalt API calls (bring your own HTTP instrumentation if you need transport-level spans)
- LLM provider instrumentation with fine-grained control over which providers to instrument
- Quick disable via `enable_telemetry=False` bypasses all instrumentation without touching application code.
- Built-in decorators and context managers simplify manual span creation:
- **Root Spans**: `@start_observe` - Creates trace entry point with identity and experiment tracking
- **Nested Spans**: `@observe` with `kind` parameter - For generation, retrieval, tool, event, function spans
```python
from basalt.observability import observe, start_observe
# Root span with identity tracking
@start_observe(
feature_slug="dataset-processing",
name="process_workflow",
identity={
"organization": {"id": "123", "name": "ACME"},
"user": {"id": "456", "name": "John Doe"}
},
metadata={"environment": "production"}
)
def process_dataset(slug: str, user_id: str) -> str:
# Identity automatically propagates to child spans
observe.set_input({"slug": slug})
result = f"processed:{slug}"
observe.set_output({"result": result})
return result
# Nested LLM span
@observe(kind="generation", name="llm.generate")
def generate_summary(model: str, prompt: str) -> dict:
# Your LLM call here
return {"choices": [{"message": {"content": "Summary"}}]}
```
**Supported environment variables:**
| Variable | Description |
| --- | --- |
| `BASALT_API_KEY` | API key for authentication (can also be passed to `Basalt()` constructor). |
| `BASALT_TELEMETRY_ENABLED` | Master switch to enable/disable telemetry (default: `true`). |
| `BASALT_SERVICE_NAME` | Overrides the OTEL `service.name`. |
| `BASALT_ENVIRONMENT` | Sets `deployment.environment`. |
| `BASALT_OTEL_EXPORTER_OTLP_ENDPOINT` | Custom OTLP HTTP endpoint for traces. Overrides the default Basalt OTEL collector endpoint. |
| `BASALT_BUILD` | SDK build mode - set to `development` for local OTEL collector testing (default: `production`). |
| `TRACELOOP_TRACE_CONTENT` | Controls whether prompts/completions are logged. **Note:** Set automatically by `TelemetryConfig.trace_content` - you typically don't need to set this manually. |
| `BASALT_ENABLED_INSTRUMENTS` | Comma-separated list of instruments to enable (e.g., `openai,anthropic`). |
| `BASALT_DISABLED_INSTRUMENTS` | Comma-separated list of instruments to disable (e.g., `langchain,llamaindex`). |
**Default OTLP Exporter:**
By default, the SDK automatically sends traces to Basalt's OTEL collector:
- **Production**: `https://otel.getbasalt.ai/v1/traces`
- **Development**: `http://localhost:4318/v1/traces` (when `BASALT_BUILD=development`)
You can override this by:
1. Providing a custom `exporter` in `TelemetryConfig`
2. Setting the `BASALT_OTEL_EXPORTER_OTLP_ENDPOINT` environment variable
3. Disabling telemetry with `enable_telemetry=False`
## Prompt SDK
The Prompt SDK allows you to interact with your Basalt prompts using an exception-based API for clear error handling.
For a complete working example, check out:
- [Prompt API Example](./examples/prompt_api_example.py) - Detailed examples with error handling
- [Prompt SDK Demo Notebook](./examples/prompt_sdk_demo.ipynb) - Interactive notebook
### Available Methods
#### Prompts
Your Basalt instance exposes a `prompts` property for interacting with your Basalt prompts:
- **List Prompts**
Retrieve all available prompts.
**Example Usage:**
```python
from basalt import Basalt
from basalt.types.exceptions import BasaltAPIError, UnauthorizedError
basalt = Basalt(api_key="your-api-key")
try:
prompts = basalt.prompts.list_sync()
for prompt in prompts:
print(f"{prompt.slug} - {prompt.name}")
except UnauthorizedError:
print("Invalid API key")
except BasaltAPIError as e:
print(f"API error: {e}")
```
- **Get a Prompt**
Retrieve a specific prompt using a slug, and optional filters `tag` and `version`. Without tag or version, the production version of your prompt is selected by default.
**Example Usage:**
```python
from basalt import Basalt
from basalt.types.exceptions import NotFoundError, BasaltAPIError
basalt = Basalt(api_key="your-api-key")
try:
# Get the production version
prompt = basalt.prompts.get_sync('prompt-slug')
print(prompt.text)
# With optional tag or version parameters
prompt = basalt.prompts.get_sync(slug='prompt-slug', tag='latest')
prompt = basalt.prompts.get_sync(slug='prompt-slug', version='1.0.0')
# If your prompt has variables, pass them when fetching
prompt = basalt.prompts.get_sync(
slug='prompt-slug',
variables={'name': 'John Doe', 'role': 'engineer'}
)
# Use the prompt with your AI provider of choice
# Example: OpenAI
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
model='gpt-4',
messages=[{'role': 'user', 'content': prompt.text}]
)
print(response.choices[0].message.content)
except NotFoundError:
print('Prompt not found')
except BasaltAPIError as e:
print(f'API error: {e}')
finally:
basalt.shutdown()
```
- **Context Managers for Observability** (Recommended)
Use prompts as context managers to automatically nest LLM calls under a prompt span for better trace organization and observability:
**Sync Example:**
```python
from basalt import Basalt
import openai
basalt = Basalt(api_key="your-api-key")
client = openai.OpenAI()
# Use context manager for automatic span nesting
with basalt.prompts.get_sync('summary-prompt', tag='production') as prompt:
response = client.chat.completions.create(
model=prompt.model.model,
messages=[{'role': 'user', 'content': prompt.text}]
)
print(response.choices[0].message.content)
basalt.shutdown()
```
**Async Example:**
```python
import asyncio
from basalt import Basalt
import openai
async def generate():
basalt = Basalt(api_key="your-api-key")
client = openai.AsyncOpenAI()
async with await basalt.prompts.get('summary-prompt', tag='production') as prompt:
response = await client.chat.completions.create(
model=prompt.model.model,
messages=[{'role': 'user', 'content': prompt.text}]
)
print(response.choices[0].message.content)
basalt.shutdown()
asyncio.run(generate())
```
See the [Prompts guide](./docs/03-prompts.md#observability-with-context-managers) for complete details.
- **Describe a Prompt**
Get metadata about a prompt including available versions and tags.
**Example Usage:**
```python
try:
description = basalt.prompts.describe_sync('prompt-slug')
print(f"Available versions: {description.available_versions}")
print(f"Available tags: {description.available_tags}")
except NotFoundError:
print('Prompt not found')
```
- **Async Operations**
All methods have async variants using `_async` suffix:
```python
import asyncio
async def fetch_prompts():
basalt = Basalt(api_key="your-api-key")
try:
# List prompts asynchronously
prompts = await basalt.prompts.list_async()
# Get a specific prompt asynchronously
prompt = await basalt.prompts.get_async('prompt-slug')
# Describe a prompt asynchronously
description = await basalt.prompts.describe_async('prompt-slug')
finally:
basalt.shutdown()
asyncio.run(fetch_prompts())
```
## Dataset SDK
The Dataset SDK allows you to interact with your Basalt datasets using an exception-based API for clear error handling.
For a complete working example, check out:
- [Dataset API Example](./examples/dataset_api_example.py) - Detailed examples with error handling
- [Dataset SDK Demo Notebook](./examples/dataset_sdk_demo.ipynb) - Interactive notebook
### Available Methods
#### Datasets
Your Basalt instance exposes a `datasets` property for interacting with your Basalt datasets:
- **List Datasets**
Retrieve all available datasets.
**Example Usage:**
```python
from basalt import Basalt
from basalt.types.exceptions import BasaltAPIError
basalt = Basalt(api_key="your-api-key")
try:
datasets = basalt.datasets.list_sync()
for dataset in datasets:
print(f"{dataset.slug} - {dataset.name}")
print(f"Columns: {dataset.columns}")
except BasaltAPIError as e:
print(f"API error: {e}")
```
- **Get a Dataset**
Retrieve a specific dataset by slug.
**Example Usage:**
```python
from basalt.types.exceptions import NotFoundError
try:
dataset = basalt.datasets.get_sync('dataset-slug')
print(f"Dataset: {dataset.name}")
print(f"Rows: {len(dataset.rows)}")
# Access dataset rows
for row in dataset.rows:
print(row)
except NotFoundError:
print('Dataset not found')
```
- **Async Operations**
All methods have async variants using `_async` suffix:
```python
import asyncio
async def fetch_datasets():
basalt = Basalt(api_key="your-api-key")
try:
# List datasets asynchronously
datasets = await basalt.datasets.list_async()
# Get a specific dataset asynchronously
dataset = await basalt.datasets.get_async('dataset-slug')
finally:
basalt.shutdown()
asyncio.run(fetch_datasets())
```
## Error Handling
The SDK uses exception-based error handling for clarity and pythonic patterns:
```python
from basalt import Basalt
from basalt.types.exceptions import (
BasaltAPIError, # Base exception for all API errors
NotFoundError, # Resource not found (404)
UnauthorizedError, # Authentication failed (401)
NetworkError, # Network/connection errors
)
basalt = Basalt(api_key="your-api-key")
try:
prompt = basalt.prompts.get_sync('my-prompt')
# Use the prompt
except NotFoundError:
print("Prompt doesn't exist")
except UnauthorizedError:
print("Check your API key")
except NetworkError:
print("Network connection failed")
except BasaltAPIError as e:
print(f"Other API error: {e}")
finally:
basalt.shutdown()
```
## License
MIT
| text/markdown | null | Basalt <support@getbasalt.ai> | null | null | null | ai, basalt, python, sdk | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"black>=26.1.0",
"httpx>=0.28.1",
"jinja2>=3.1.6",
"opentelemetry-api~=1.39.1",
"opentelemetry-exporter-otlp~=1.39.1",
"opentelemetry-instrumentation-httpx~=0.59b0",
"opentelemetry-instrumentation~=0.59b0",
"opentelemetry-sdk~=1.39.1",
"opentelemetry-semantic-conventions~=0.59b0",
"typing-extensio... | [] | [] | [] | [
"Homepage, https://github.com/basalt-ai/basalt-python"
] | Hatch/1.16.3 cpython/3.14.3 HTTPX/0.28.1 | 2026-02-18T14:19:27.113695 | basalt_sdk-1.1.7.tar.gz | 117,396 | 3f/12/82b1980d751a5f38bcdfa3bfa77ee4f71b3bfc613671cd3d9480ae83c191/basalt_sdk-1.1.7.tar.gz | source | sdist | null | false | a7c07602677d9e7d48976c3dd58350ec | 11fd00ed310d719d3c35ac08a21260c961a6f33be7905c32d3266386d33a7a98 | 3f1282b1980d751a5f38bcdfa3bfa77ee4f71b3bfc613671cd3d9480ae83c191 | MIT | [] | 0 |
2.4 | ds-protocol-http-py-lib | 0.1.0b2 | A Python package from the ds-protocol library collection | # ds-protocol-http-py-lib
A Python package from the ds-common library collection.
## Installation
Install the package using pip:
```bash
pip install ds-protocol-http-py-lib
```
Or using uv (recommended):
```bash
uv pip install ds-protocol-http-py-lib
```
## Quick Start
```python
from ds_protocol_http_py_lib import __version__
print(f"ds-protocol-http-py-lib version: {__version__}")
```
## Usage
<!-- Add usage examples here -->
```python
# Example usage
import ds_protocol_http_py_lib
# Your code examples here
```
## Requirements
- Python 3.11 or higher
## Documentation
Full documentation is available at:
- [GitHub Repository](https://github.com/grasp-labs/ds-protocol-http-py-lib)
- [Documentation Site](https://grasp-labs.github.io/ds-protocol-http-py-lib/)
## Development
To contribute or set up a development environment:
```bash
# Clone the repository
git clone https://github.com/grasp-labs/ds-protocol-http-py-lib.git
cd ds-protocol-http-py-lib
# Install development dependencies
uv sync --all-extras --dev
# Run tests
make test
```
See the [README](https://github.com/grasp-labs/ds-protocol-http-py-lib#readme)
for more information.
## License
This package is licensed under the Apache License 2.0.
See the [LICENSE-APACHE](https://github.com/grasp-labs/ds-protocol-http-py-lib/blob/main/LICENSE-APACHE)
file for details.
## Support
- **Issues**: [GitHub Issues](https://github.com/grasp-labs/ds-protocol-http-py-lib/issues)
- **Releases**: [GitHub Releases](https://github.com/grasp-labs/ds-protocol-http-py-lib/releases)
| text/markdown | null | Kristoffer Varslott <hello@grasplabs.com> | null | Kristoffer Varslott <hello@grasplabs.com> | null | ds, protocol, python | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent",
"Intended Audience :: D... | [] | null | null | >=3.11 | [] | [] | [] | [
"ds-resource-plugin-py-lib<1.0.0,>=0.1.0b1",
"ds-common-logger-py-lib<1.0.0,>=0.1.0a5",
"pandas<3.0.0,>=2.2.0",
"requests<3.0.0,>=2.32.0",
"bandit>=1.9.3; extra == \"dev\"",
"ruff>=0.1.8; extra == \"dev\"",
"mypy>=1.7.0; extra == \"dev\"",
"pandas-stubs>=2.1.0; extra == \"dev\"",
"pytest>=7.4.0; ext... | [] | [] | [] | [
"Documentation, https://grasp-labs.github.io/ds-protocol-http-py-lib/",
"Repository, https://github.com/grasp-labs/ds-protocol-http-py-lib/",
"Issues, https://github.com/grasp-labs/ds-protocol-http-py-lib/issues/",
"Changelog, https://github.com/grasp-labs/ds-protocol-http-py-lib/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:18:31.644783 | ds_protocol_http_py_lib-0.1.0b2.tar.gz | 21,218 | 51/fa/5d6fbc48a18a9f22b31079159ebcf3abd197ad73599a09c4461fb7ee3809/ds_protocol_http_py_lib-0.1.0b2.tar.gz | source | sdist | null | false | dcc25f84b8b7fe07e88b316dc65bbc49 | e5085f04b2cf29718766cc871289a2cbb74c957b569e5b13c734921bb4df70ef | 51fa5d6fbc48a18a9f22b31079159ebcf3abd197ad73599a09c4461fb7ee3809 | Apache-2.0 | [
"LICENSE-APACHE"
] | 275 |
2.1 | huntflow-webhook-models | 0.1.23 | Huntflow webhooks requests data models | # huntflow-webhook-models-py
Huntflow webhooks requests data models
| text/markdown | null | Developers huntflow <developer@huntflow.ru> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"pydantic>=2.3.0",
"openapi-schema-pydantic>=1.2.4"
] | [] | [] | [] | [] | pdm/2.20.1 CPython/3.8.16 Linux/6.14.0-1017-azure | 2026-02-18T14:18:11.584682 | huntflow_webhook_models-0.1.23.tar.gz | 10,366 | 01/13/1c44a64a93fbf786e2a6138e941bcf56c43afa6ba63aa8ba524b261778d0/huntflow_webhook_models-0.1.23.tar.gz | source | sdist | null | false | 254f9955d68b8657d9708c388b2c567e | 9337a0b8c221f49aeb246f686b8a9ee30921ce50726468c9380b246ccd8f3391 | 01131c44a64a93fbf786e2a6138e941bcf56c43afa6ba63aa8ba524b261778d0 | null | [] | 250 |
2.4 | dramatiq-worker-starter | 0.1.1a1 | 开箱即用的 Dramatiq Worker 启动包 | # Dramatiq Worker Starter
开箱即用的 Dramatiq Worker 启动包,简化 Dramatiq Worker 的配置和启动流程。
## 特性
- **开箱即用** - 最少配置即可启动 Worker
- **可扩展** - 支持自定义中间件、队列、Actor
- **类型安全** - 完整的类型提示
- **配置灵活** - 通过参数传入配置
- **内置中间件**:
- `TimingMiddleware` - 记录任务执行耗时
- `WorkerInfoMiddleware` - Worker 心跳上报
- `ReadableRedisBackend` - 可读的结果键名
## 安装
```bash
pip install dramatiq-worker-starter
```
或从源码安装:
```bash
git clone <repository-url>
cd dramatiq-worker-starter
pip install -e .
```
## 快速开始
### 定义 Actor
```python
from dramatiq_worker_starter import ActorBase
@ActorBase.actor(queue_name="default")
def hello_task(name: str) -> str:
return f"Hello, {name}!"
```
### 启动 Worker
```python
from dramatiq_worker_starter import ActorBase, init_broker, run_worker
from dramatiq_worker_starter.utils import setup_logging
# 导入 Actor(必须在使用前导入)
from my_actors import hello_task
# 配置日志
setup_logging()
# 初始化 Broker
broker = init_broker(
redis_host="localhost",
redis_port=6379,
redis_db=0,
redis_db_result=1,
)
# 启动 Worker
if __name__ == "__main__":
run_worker(broker)
```
### 发送任务
```python
from dramatiq_worker_starter import ActorBase, init_broker
# 初始化 Broker(与 Worker 使用相同的配置)
broker = init_broker(
redis_host="localhost",
redis_port=6379,
redis_db=0,
redis_db_result=1,
)
# 发送任务
message = ActorBase.send("hello_task", "Alice")
print(f"Task sent with message_id: {message.message_id}")
# 延迟发送任务(5秒后执行)
delayed_message = ActorBase.send_with_options("hello_task", "Bob", delay=5000)
```
## 配置
### init_broker 参数
```python
broker = init_broker(
redis_host: str = "localhost",
redis_port: int = 6379,
redis_db: int = 0,
redis_password: str = "",
redis_db_result: int = 1,
namespace: str = "dramatiq-result",
heartbeat_interval: int = 30,
worker_ttl: int = 120,
custom_middleware: list | None = None,
)
```
## 模块说明
### ActorBase
Actor 基类,提供便捷的 Actor 定义方式:
```python
from dramatiq_worker_starter import ActorBase
@ActorBase.actor(
queue_name="custom",
max_retries=3,
min_backoff=1000,
max_backoff=30000,
)
def my_task(data: str) -> str:
return data.upper()
```
### Worker
Worker 启动器,支持自定义配置:
```python
from dramatiq_worker_starter import Worker
worker = Worker(
broker_instance=broker,
queues=["default", "custom"],
worker_threads=4,
worker_timeout=3600000,
)
worker.start()
```
### Middleware
#### TimingMiddleware
记录任务执行耗时:
```python
from dramatiq_worker_starter import init_broker, TimingMiddleware
broker = init_broker(custom_middleware=[TimingMiddleware()])
```
#### WorkerInfoMiddleware
Worker 心跳上报:
```python
from dramatiq_worker_starter import init_broker, WorkerInfoMiddleware
broker = init_broker(
custom_middleware=[WorkerInfoMiddleware(heartbeat_interval=15)]
)
```
#### 自定义中间件
```python
import dramatiq
from dramatiq.middleware import Middleware
class CustomMiddleware(Middleware):
def before_process_message(self, broker, message):
print(f"Task started: {message.actor_name}")
def after_process_message(self, broker, message, *, result=None, exception=None):
if exception:
print(f"Task failed: {exception}")
else:
print(f"Task completed: {message.actor_name}")
broker = init_broker(custom_middleware=[CustomMiddleware()])
```
## 项目结构
```
dramatiq-worker-starter/
├── src/
│ └── dramatiq_worker_starter/
│ ├── __init__.py # 包导出
│ ├── broker.py # Broker 初始化
│ ├── middleware.py # 内置中间件
│ ├── worker.py # Worker 启动器
│ ├── actors/ # Actor 模块
│ │ ├── __init__.py
│ │ └── base.py # Actor 基类
│ └── utils/ # 工具模块
│ ├── __init__.py
│ └── logger.py # 日志工具
├── examples/ # 使用示例
│ ├── simple/
│ │ ├── actors.py
│ │ ├── main.py
│ │ └── tasks.py
│ └── advanced/
│ ├── custom_middleware.py
│ ├── main.py
│ └── tasks.py
└── tests/ # 测试用例
```
## 示例
### Simple Example
启动 Worker:
```bash
python -m examples.simple.main
```
发送任务:
```bash
python -m examples.simple.tasks
```
### Advanced Example
启动 Worker(自定义配置):
```bash
python -m examples.advanced.main
```
发送任务并查询结果:
```bash
python -m examples.advanced.tasks
```
## Redis 数据结构
### Worker 信息
- `dramatiq:workers:active` (ZSET): 活跃 Worker 集合
- `dramatiq:worker:{worker_id}:actors` (SET): Worker 支持的 Actor 列表
- `dramatiq:worker:{worker_id}:info` (HASH): Worker 详细信息
### 结果存储
- `dramatiq-result:{queue_name}:{actor_name}:{message_id}` (LIST): 任务结果
结果格式:
```json
{
"result": { /* 实际结果 */ },
"_timing": {
"start_datetime": "2026-02-12T12:00:00.000000",
"end_datetime": "2026-02-12T12:00:05.000000",
"duration_ms": 5000,
"queue_name": "default",
"actor_name": "hello_task",
"worker_id": "abc12345",
"exception": null
}
}
```
## 依赖
- `dramatiq[redis,watch]>=2.0.1`
- `pydantic>=2.12.5`
## 许可证
MIT License
## 贡献
欢迎提交 Issue 和 Pull Request!
| text/markdown | null | seehar <seehar@qq.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"dramatiq[redis,watch]>=2.0.1",
"pydantic>=2.0.0"
] | [] | [] | [] | [] | uv/0.9.10 {"installer":{"name":"uv","version":"0.9.10"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T14:18:06.878081 | dramatiq_worker_starter-0.1.1a1.tar.gz | 13,756 | 7c/c5/76f718d44307088d96417724829408d882ea3a76b4085a16457715c7d678/dramatiq_worker_starter-0.1.1a1.tar.gz | source | sdist | null | false | 89730ebe72178a14565a27930050f6aa | 2099761e0e18cac026894224a953a1986767c589ddb7395318a0d439e25899e9 | 7cc576f718d44307088d96417724829408d882ea3a76b4085a16457715c7d678 | null | [] | 226 |
2.4 | eraUQ | 0.0.1.dev1 | This package provides the base software used by the Engineering Risk Analysis group of TUM |
# DEV Instructions
## Installation
First, ensure the repository is cloned and you are on the right branch
The following commands assume that you are **inside the root directory of the package**
(i.e. the directory containing `pyproject.toml`).
```bash
git checkout <branch-name>
pip install -e .
```
Alternatively, a command with the absolute path can be used:
```bash
py -m pip install -e <absolute/path/to/repo/root/directory>
```
or
```bash
pip install -e <absolute/path/to/repo/root/directory>
```
### Example Notebooks
### [**<img src="https://colab.research.google.com/img/colab_favicon_256px.png" alt="Colab Logo" width="28" style="vertical-align: middle;"/> Usage Overview Notebook**](https://colab.research.google.com/github/theresa-hefele/eradist/blob/dev_library_theresa/notebooks/demonstrate_usage_overview.ipynb)
| text/markdown | Iason Papaioannou, Daniel Straub, Sebastian Geyer, Felipe Uribe, Luca Sardi, Fong-Lin Wu, Alexander von Ramm, Matthias Willer, Peter Kaplan, Antonios Kamariotis, Michael Engel, Nicola Bronzetti | null | null | null | MIT | Risk, Reliability | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"scipy",
"matplotlib",
"networkx"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.7 | 2026-02-18T14:18:05.637366 | erauq-0.0.1.dev1.tar.gz | 22,354 | a6/20/aee811776d206c4942c84e364f52484e3be9789f4436580d8c19eb0f69a1/erauq-0.0.1.dev1.tar.gz | source | sdist | null | false | fdefc9676a33f48c53080810db7f347f | 5347f558ae0de6382eb7d8b41b67b1a672111e523f76144af3ce5729bd78ccba | a620aee811776d206c4942c84e364f52484e3be9789f4436580d8c19eb0f69a1 | null | [
"LICENSE"
] | 0 |
2.4 | baldertest | 0.2.0 | balder: reusable scenario based test framework |
<div align="center">
<img style="margin: 20px;max-width: 68%" src="https://docs.balder.dev/en/latest/_static/balder_w_boarder.png" alt="Balder logo">
</div>
Balder is a flexible Python test system that allows you to reuse test code written once for different but similar
platforms, devices, or applications. It enables you to install ready-to-use test cases and provides various test
development features that help you test your software or devices much faster.
You can use shared test code by installing an [existing BalderHub project](https://hub.balder.dev), or you can create
your own. This makes test development for your project much faster, since it is oftentimes enough to install a BalderHub
project and only provide the user-specific code.
Be part of the progress and share your tests with others, your company, or the whole world.
# Installation
You can install the latest release with pip:
```
python -m pip install baldertest
```
# Run Balder
After you've installed it, you can run Balder inside a Balder environment with the following command:
```
balder
```
You can also provide a specific path to the balder environment directory by using this console argument:
```
balder --working-dir /path/to/working/dir
```
# How does it work?
Balder allows you to reuse previously written test code by dividing it into the components that **are needed** for a
test (`Scenario`) and the components that **you have** (`Setup`).
You can define a test within a method of a `Scenario` class. This is often an abstract layer, where you only describe
the general business logic without providing any specific implementation details.
These specific implementation details are provided in the `Setup` classes. They describe exactly **what you have**. In
these classes, you provide an implementation for the abstract elements that were defined earlier in the `Scenario`.
Balder then automatically searches for matching mappings and runs your tests using them.
## Define the `Scenario` class
Inside `Scenario` or `Setup` classes, you can describe the environment using inner `Device` classes. For example, let's
write a test that validates the functionality of a lamp. For that, keep in mind that we want to make this test as
flexible as possible. It should be able to run with all kind of things that have a lamp:
```python
import balder
from lib.scenario_features import BaseLightFeature
class ScenarioLight(balder.Scenario):
# The device with its features that are required for this test
class LightSpendingDevice(balder.Device):
light = BaseLightFeature()
def test_check_light(self):
self.LightSpendingDevice.light.switch_on()
assert self.LightSpendingDevice.light.light_is_on()
self.LightSpendingDevice.light.switch_off()
assert not self.LightSpendingDevice.light.light_is_on()
```
Here, we have defined that a `LightSpendingDevice` **needs to have** a feature called `BaseLightFeature` so that this
scenario can be executed.
We have also added a test case (named with a `test_*()` prefix) called `test_check_light`, which executes the validation
of a lamp, by switching it on and off and checking its state.
**Note:** The `BaseLightFeature` is an abstract Feature class that defines the abstract methods `switch_on()`,
`switch_off()`, and `light_is_on()`.
## Define the `Setup` class
The next step is defining a `Setup` class, which describes what we have. For a `Scenario` to match a `Setup`, the
features of all scenario devices must be implemented by the mapped setup devices.
For example, if we want to test a car that includes a lamp, we could have a setup like the one shown below:
```python
import balder
from lib.setup_features import CarEngineFeature, CarLightFeature
class SetupGarage(balder.Setup):
class Car(balder.Device):
car_engine = CarEngineFeature()
car_light = CarLightFeature() # subclass of `lib.scenario_feature.BaseLightFeature`
...
```
When you run Balder in this environment, it will collect the `ScenarioLight` and the `SetupMyCar` classes and try to
find mappings between them. Based on the `ScenarioLight`, Balder looks for a device that provides an implementation of
the single `BaseLightFeature`. To do this, it scans all available setups. Since the `SetupMyCar.Car` device provides an
implementation through the `CarLightFeature`, this device will match.
```shell
+----------------------------------------------------------------------------------------------------------------------+
| BALDER Testsystem |
| python version 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] | balder version 0.1.0b14 |
+----------------------------------------------------------------------------------------------------------------------+
Collect 1 Setups and 1 Scenarios
resolve them to 1 valid variations
================================================== START TESTSESSION ===================================================
SETUP SetupGarage
SCENARIO ScenarioLight
VARIATION ScenarioLight.LightSpendingDevice:SetupGarage.Car
TEST ScenarioLight.test_check_light [.]
================================================== FINISH TESTSESSION ==================================================
TOTAL NOT_RUN: 0 | TOTAL FAILURE: 0 | TOTAL ERROR: 0 | TOTAL SUCCESS: 1 | TOTAL SKIP: 0 | TOTAL COVERED_BY: 0
```
## Add another Device to the `Setup` class
Now the big advantage of Balder comes into play. We can run our test with all devices that can implement the
`BaseLightFeature`, independent of how this will be implemented in detail. **You do not need to rewrite the test**.
So, We have more devices in our garage. So let's add them:
```python
import balder
from lib.setup_features import CarEngineFeature, CarLightFeature, PedalFeature, BicycleLightFeature, GateOpenerFeature
class SetupGarage(balder.Setup):
class Car(balder.Device):
car_engine = CarEngineFeature()
car_light = CarLightFeature() # subclass of `lib.scenario_feature.BaseLightFeature`
...
class Bicycle(balder.Device):
pedals = PedalFeature()
light = BicycleLightFeature() # another subclass of `lib.scenario_feature.BaseLightFeature`
class GarageGate(balder.Device):
opener = GateOpenerFeature()
```
If we run Balder now, it will find more mappings because the `Bicycle` device also provides an implementation for the
`BaseLightFeature` we are looking for.
```shell
+----------------------------------------------------------------------------------------------------------------------+
| BALDER Testsystem |
| python version 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] | balder version 0.1.0b14 |
+----------------------------------------------------------------------------------------------------------------------+
Collect 1 Setups and 1 Scenarios
resolve them to 2 valid variations
================================================== START TESTSESSION ===================================================
SETUP SetupGarage
SCENARIO ScenarioLight
VARIATION ScenarioLight.LightSpendingDevice:SetupGarage.Bicycle
TEST ScenarioLight.test_check_light [.]
VARIATION ScenarioLight.LightSpendingDevice:SetupGarage.Car
TEST ScenarioLight.test_check_light [.]
================================================== FINISH TESTSESSION ==================================================
TOTAL NOT_RUN: 0 | TOTAL FAILURE: 0 | TOTAL ERROR: 0 | TOTAL SUCCESS: 2 | TOTAL SKIP: 0 | TOTAL COVERED_BY: 0
```
Balder handles all of this for you. You only need to describe your environment by defining `Scenario` and `Setup`
classes, then provide the specific implementations by creating the features. Balder will automatically search for and
apply the mappings between them.
**NOTE:** Balder offers many more elements to design complete device structures, including connections between multiple
devices.
You can learn more about that in the
[Tutorial Section of the Documentation](https://docs.balder.dev/en/latest/tutorial_guide/index.html).
# Example: Use an installable BalderHub package
With Balder, you can create custom test environments or install open-source-available test packages, known as
[BalderHub packages](https://hub.balder.dev). For example, if you want to test the login functionality of a website, simply use the
ready-to-use scenario `ScenarioSimpleLogin` from the [`balderhub-auth` package](https://hub.balder.dev/projects/auth/en/latest/examples.html),
We want to use [Selenium](https://www.selenium.dev/) to control the browser and of course use html elements, so let's install
`balderhub-selenium` and `balderhub-html` right away.
```
$ pip install balderhub-auth balderhub-selenium balderhub-html
```
So as mentioned, you don't need to define a scenario and a test yourself; you can simply import it:
```python
# file `scenario_balderhub.py`
from balderhub.auth.scenarios import ScenarioSimpleLogin
```
According to the [documentation of this BalderHub project](https://hub.balder.dev/projects/auth/en/latest/examples.html),
we only need to define the login page by overwriting the ``LoginPage`` feature:
```python
# file `lib/pages.py`
import balderhub.auth.contrib.html.pages
from balderhub.html.lib.utils import Selector
from balderhub.url.lib.utils import Url
import balderhub.html.lib.utils.components as html
class LoginPage(balderhub.auth.contrib.html.pages.LoginPage):
url = Url('https://example.com')
# Overwrite abstract property
@property
def input_username(self):
return html.inputs.HtmlTextInput.by_selector(self.driver, Selector.by_name('user'))
# Overwrite abstract property
@property
def input_password(self):
return html.inputs.HtmlPasswordInput.by_selector(self.driver, Selector.by_name('user'))
# Overwrite abstract property
@property
def btn_login(self):
return html.HtmlButtonElement.by_selector(self.driver, Selector.by_id('submit-button'))
```
And use it in our setup:
```python
# file `setups/setup_office.py`
import balder
import balderhub.auth.lib.scenario_features.role
from balderhub.selenium.lib.setup_features import SeleniumChromeWebdriverFeature
from lib.pages import LoginPage
class UserConfig(balderhub.auth.lib.scenario_features.role.UserRoleFeature):
# provide the credentials for the log in
username = 'admin'
password = 'secret'
class SetupOffice(balder.Setup):
class Server(balder.Device):
user = UserConfig()
class Browser(balder.Device):
selenium = SeleniumChromeWebdriverFeature()
page_login = LoginPage()
# fixture to prepare selenium - will be executed before the test session runs
@balder.fixture('session')
def selenium(self):
self.Browser.selenium.create()
yield
self.Browser.selenium.quit()
```
When you run Balder now, it will execute a complete login test that you didn't write yourself -
**it was created by the open-source community**.
```shell
+----------------------------------------------------------------------------------------------------------------------+
| BALDER Testsystem |
| python version 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] | balder version 0.1.0b14 |
+----------------------------------------------------------------------------------------------------------------------+
Collect 1 Setups and 1 Scenarios
resolve them to 1 valid variations
================================================== START TESTSESSION ===================================================
SETUP SetupOffice
SCENARIO ScenarioSimpleLogin
VARIATION ScenarioSimpleLogin.Client:SetupOffice.Browser | ScenarioSimpleLogin.System:SetupOffice.Server
TEST ScenarioSimpleLogin.test_login [.]
================================================== FINISH TESTSESSION ==================================================
TOTAL NOT_RUN: 0 | TOTAL FAILURE: 0 | TOTAL ERROR: 0 | TOTAL SUCCESS: 1 | TOTAL SKIP: 0 | TOTAL COVERED_BY: 0
```
If you'd like to learn more about it, feel free to dive [into the documentation](https://balder.dev).
# Contribution guidelines
Any help is appreciated. If you want to contribute to balder, take a look into the
[contribution guidelines](https://github.com/balder-dev/balder/blob/main/CONTRIBUTING.md).
Are you an expert in your field? Do you enjoy the concept of balder? How about creating your own
BalderHub project? You can contribute to an existing project or create your own. If you are not sure, a project for
your idea already exists or if you want to discuss your ideas with others, feel free to
[create an issue in the BalderHub main entry project](https://github.com/balder-dev/hub.balder.dev/issues) or
[start a new discussion](https://github.com/balder-dev/hub.balder.dev/discussions).
# License
Balder is free and Open-Source
Copyright (c) 2022-2025 Max Stahlschmidt and others
Distributed under the terms of the MIT license
| text/markdown | Max Stahlschmidt and others | null | null | null | MIT | test, systemtest, reusable, scenario | [
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: P... | [
"unix"
] | https://docs.balder.dev | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Source, https://github.com/balder-dev/balder",
"Tracker, https://github.com/balder-dev/balder/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T14:18:00.265172 | baldertest-0.2.0.tar.gz | 3,760,208 | 81/1c/c2ba11b2d3639b015a31e6b41bb80dea8c3770f9c80c8814a42f382ba95c/baldertest-0.2.0.tar.gz | source | sdist | null | false | 361d65c4d246f500be34eac46b59e450 | 254151cb97297bbac780a5de9b71cc5fb2f7211fd2462c6e157689aa838c1718 | 811cc2ba11b2d3639b015a31e6b41bb80dea8c3770f9c80c8814a42f382ba95c | null | [
"LICENSE"
] | 451 |
2.4 | workerlib | 0.4.4 | Async RabbitMQ worker utilities | # WorkerLib - асинхронная работа с RabbitMQ
## Быстрый старт
```python
import asyncio
from workerlib import WorkerPool
async def task_handler(data: dict) -> bool:
print(f"Обработка: {data}")
return True
async def main():
async with WorkerPool() as pool:
pool.add_worker("tasks", task_handler)
await pool.send("tasks", {"id": 1, "cmd": "start"})
await asyncio.sleep(2)
asyncio.run(main())
```
## Формат сообщений
**JSON сообщение**
Библиотека автоматически сериализует dict в JSON при отправке:
```python
# Отправка простого сообщения
await pool.send("queue", {
"event": "user_created",
"user_id": 123,
"email": "user@example.com",
"timestamp": "2024-01-15T10:30:00Z"
})
# Отправка вложенных структур
await pool.send("queue", {
"type": "order",
"data": {
"order_id": "ORD-12345",
"items": [
{"id": 1, "quantity": 2},
{"id": 2, "quantity": 1}
],
"total": 299.99
},
"metadata": {
"source": "api",
"version": "1.0"
}
})
```
## Основные примеры
1. Пул с несколькими воркерами
```python
from workerlib import WorkerPool, ErrorHandlingStrategy
async def main():
async with WorkerPool() as pool:
# Email воркер с DLQ
pool.add_worker(
"emails",
email_handler,
error_strategy=ErrorHandlingStrategy.DLQ,
prefetch_count=5
)
# Обработчик платежей
pool.add_worker(
"payments",
payment_handler,
error_strategy=ErrorHandlingStrategy.REQUEUE_END
)
# Отправка задач
await pool.send("emails", {"to": "user@test.com"})
await pool.send("payments", {"amount": 100})
```
2. Кастомное подключение и retry
```python
from workerlib import ConnectionParams, RetryConfig
params = ConnectionParams(
host="rabbit.local",
username="admin",
password="secret"
)
retry_config = RetryConfig(
max_attempts=3,
initial_delay=1.0,
backoff_factor=2.0
)
async with WorkerPool(connection_params=params) as pool:
pool.add_worker(
"critical",
critical_handler,
retry_config=retry_config
)
```
3. Обработка ошибок
```python
from workerlib import ErrorHandlingStrategy
# Варианты:
# IGNORE - проигнорировать ошибку
# REQUEUE_END - в конец очереди с задержкой
# REQUEUE_FRONT - в начало очереди
# DLQ - в Dead Letter Queue
pool.add_worker(
"tasks",
my_handler,
error_strategy=ErrorHandlingStrategy.DLQ,
dlq_enabled=True,
requeue_delay=5.0 # задержка повторной обработки
)
```
4. Метрики
```python
async with WorkerPool() as pool:
pool.add_worker("monitored", handler)
# Отправляем задачи
for i in range(10):
await pool.send("monitored", {"task": i})
# Получаем метрики
metrics = pool.get_metrics("monitored")
print(f"Обработано: {metrics['consumer']['processed']}")
print(f"Ошибок: {metrics['consumer']['failed']}")
```
5. FastAPI интеграция
```python
from fastapi import FastAPI
from workerlib import WorkerPool
app = FastAPI()
worker_pool = WorkerPool(auto_start=False)
@app.on_event("startup")
async def startup():
await worker_pool.start()
worker_pool.add_worker("api_tasks", task_handler)
@app.on_event("shutdown")
async def shutdown():
await worker_pool.stop()
@app.post("/task")
async def create_task(data: dict):
await worker_pool.send("api_tasks", data)
return {"status": "queued"}
```
## Конфигурация
ConnectionParams
```python
ConnectionParams(
host="127.0.0.1",
port=5672,
username="guest",
password="guest",
heartbeat=60,
timeout=10
)
```
QueueConfig
```python
QueueConfig(
name="queue_name",
durable=True,
prefetch_count=1
)
```
RetryConfig
```python
RetryConfig(
max_attempts=3,
initial_delay=1.0,
backoff_factor=2.0,
max_delay=60.0
)
```
## Установка
```bash
pip install workerlib
```
Требования: Python 3.10+, aio_pika
| text/markdown | ametist-dev | null | ametist-dev | null | null | async, rabbitmq, aio-pika, workers, queue, messaging, background-jobs | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audie... | [] | null | null | >=3.10 | [] | [] | [] | [
"aio-pika<10,>=9.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/ametist-dev/workerlib",
"Repository, https://github.com/ametist-dev/workerlib",
"Issues, https://github.com/ametist-dev/workerlib/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:17:38.736445 | workerlib-0.4.4.tar.gz | 15,434 | ea/24/052a3d5bbcfa83ba0c1461d4689e2dd3c09deed20c99b9c55b283a7d2abc/workerlib-0.4.4.tar.gz | source | sdist | null | false | ed4e073b555df002667162a23d0c8848 | 86210b9b393d75a87441757ae508b1417afc1a59a10b2782b0cef210bb600ac6 | ea24052a3d5bbcfa83ba0c1461d4689e2dd3c09deed20c99b9c55b283a7d2abc | null | [] | 249 |
2.4 | pvtlib | 1.14.1 | A library containing various tools in the categories of thermodynamics, fluid mechanics, metering etc. | <img src="https://raw.githubusercontent.com/equinor/pvtlib/main/images/pvtlib_klab.png" alt="pvtlib logo" width="600"/>
`pvtlib` is a Python library that provides various tools in the categories of thermodynamics, fluid mechanics, metering and various process equipment. The library includes functions for calculating flow rates, gas properties, and other related calculations.
## Installation
You can install the library using `pip`:
```sh
pip install pvtlib
```
## Usage
Here is an example of how to use the library:
```py
from pvtlib.metering import differential_pressure_flowmeters
# Example usage of the calculate_flow_venturi function
result = differential_pressure_flowmeters.calculate_flow_venturi(D=0.1, d=0.05, dP=200, rho1=1000)
print(result)
```
More examples are provided in the examples folder: https://github.com/equinor/pvtlib/tree/main/examples
## Features
- **Thermodynamics**: Thermodynamic functions
- **Fluid Mechanics**: Fluid mechanic functions
- **Metering**: Metering functions
- **aga8**: Equations for calculating gas properties (GERG-2008 and DETAIL) using the Rust port (https://crates.io/crates/aga8) of NIST's AGA8 code (https://github.com/usnistgov/AGA8)
- **Unit converters**: Functions to convert between different units of measure
### Handling of invalid input
This library is used for analyzing large amounts of data, as well as in live applications. In these applications it is desired that the functions return "nan" (using numpy nan) when invalid input are provided, or in case of certain errors (such as "divide by zero" errors).
## License
This project is licensed under the MIT License - see the [LICENSE](https://github.com/equinor/pvtlib/blob/main/LICENSE) file for details.
## Contact
For any questions or suggestions, feel free to open an issue or contact the author at chaagen2013@gmail.com.
| text/markdown | Christian Hågenvik | chaagen2013@gmail.com | null | null | MIT | thermodynamics fluid-mechanics metering aga8 | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/equinor/pvtlib | null | >=3.9 | [] | [] | [] | [
"numpy>=1.26.2",
"scipy>=1.11.4",
"pyaga8>=0.1.15"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T14:17:35.056634 | pvtlib-1.14.1.tar.gz | 64,900 | e4/fe/3cb1c4c31020413c3e273e875712d843b6a8424b6d0de76db0c94f8c0d54/pvtlib-1.14.1.tar.gz | source | sdist | null | false | d173fe90297bb62e1c5bd326f8c3b98b | 581890b7530fc9c7e4218c23ab66f175e3e578f2e17e12fd7707d2e64f8f230d | e4fe3cb1c4c31020413c3e273e875712d843b6a8424b6d0de76db0c94f8c0d54 | null | [
"LICENSE"
] | 263 |
2.4 | paras | 2.0.1 | Predictive algorithm for resolving A-domain specificity featurising enzyme and compound in tandem. | # PARASECT
Welcome to PARASECT: Predictive Algorithm for Resolving A-domain Specificity featurising Enzyme and Compound in Tandem. Detect NRPS AMP-binding domains from an amino acid sequence and predict their substrate specificity profile.
## Web application
You can find a live version of the web application [here](https://paras.bioinformatics.nl/).
## Database
Browse the data that PARAS and PARASECT were trained on [here](https://paras.bioinformatics.nl/query_database).
## Data submission
Do you have new datapoints that you think PARAS/PARASECT could benefit from in future versions? Submit your data [here](https://paras.bioinformatics.nl/data_annotation).
## Trained models
The trained models for PARAS and PARASECT can be found on Zenodo [here](https://zenodo.org/records/17224548).
## Command line installation
To install PARAS/PARASECT on the command line, run:
```angular2html
conda create -n paras python=3.9
conda activate paras
pip install paras
conda install -c bioconda hmmer
conda install -c bioconda hmmer2
conda install -c bioconda muscle==3.8.1551
```
For usage instructions, see our [wiki](https://github.com/BTheDragonMaster/parasect/wiki).
Note that the command line tool will download the models from zenodo upon the first run.
| text/markdown | Barbara Terlouw | barbara.terlouw@wur.nl | David Meijer | david.meijer@wur.nl | null | paras, parasect, non-ribosomal peptide, substrate specificity, prediction | [
"Development Status :: 1 - Planning",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Framework :: Pytest",
"Framework :: tox",
"Framework :: Sphinx",
"Programming Language :: Python",
"Programming Langua... | [] | https://github.com/BTheDragonMaster/parasect | null | >=3.9 | [] | [] | [] | [
"scipy",
"biopython",
"joblib",
"pikachu-chem",
"scikit-learn",
"ncbi-acc-download>=0.2.5",
"sqlalchemy",
"iterative-stratification",
"imblearn",
"pandas",
"seaborn",
"coverage; extra == \"tests\"",
"pytest; extra == \"tests\"",
"tox; extra == \"tests\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-18T14:17:01.014823 | paras-2.0.1.tar.gz | 8,464,266 | 00/d3/67acbaf906a38040c6b20ff66c98618030aad8b5ca9fc99e4461c3e0fe9b/paras-2.0.1.tar.gz | source | sdist | null | false | f62ead3ffa973df9d33535d5c8ae540c | 4e54ffd0952f1853f053219c6a13e174a8e14275db963ee0279342326a429c4b | 00d367acbaf906a38040c6b20ff66c98618030aad8b5ca9fc99e4461c3e0fe9b | null | [
"LICENSE.txt"
] | 239 |
2.4 | django-unfold-patrick | 0.80.1 | Modern Django admin theme for seamless interface development with Customized functions | [](https://unfoldadmin.com)
<<<<<<< HEAD
## Unfold Django Admin Theme Customized Version
=======
## Unfold - Modern Django Admin
>>>>>>> ccebc517014c6243e26e79ebcd45827e583a943c
[](https://pypi.org/project/django-unfold/)
[](https://discord.gg/9sQj9MEbNz)
[](https://github.com/unfoldadmin/django-unfold/actions?query=workflow%3Arelease)

Enhance Django Admin with a modern interface and powerful tools to build internal applications.
- **Documentation:** The full documentation is available at [unfoldadmin.com](https://unfoldadmin.com?utm_medium=github&utm_source=unfold).
- **Live demo:** The demo site is available at [unfoldadmin.com](https://unfoldadmin.com?utm_medium=github&utm_source=unfold).
- **Formula:** A repository with a demo implementation is available at [github.com/unfoldadmin/formula](https://github.com/unfoldadmin/formula?utm_medium=github&utm_source=unfold).
- **Turbo:** A Django & Next.js boilerplate implementing Unfold is available at [github.com/unfoldadmin/turbo](https://github.com/unfoldadmin/turbo?utm_medium=github&utm_source=unfold).
- **Discord:** Join our Unfold community on [Discord](https://discord.gg/9sQj9MEbNz).
## Quickstart
**Install the package**
```sh
pip install django-unfold
```
**Change INSTALLED_APPS in settings.py**
```python
INSTALLED_APPS = [
"unfold",
# Rest of the apps
]
```
**Use Unfold ModelAdmin**
```python
from unfold.admin import ModelAdmin
@admin.register(MyModel)
class MyModelAdmin(ModelAdmin):
pass
```
*Unfold works alongside the default Django admin and requires no migration of existing models or workflows. Unfold is actively developed and continuously evolving as new use cases and edge cases are discovered.*
## Why Unfold?
- Built on `django.contrib.admin`: Enhances the existing admin without replacing it.
- Provides a modern interface and improved workflows.
- Designed for real internal tools and backoffice apps.
- Incremental adoption for existing projects.
## Features
- **Visual interface**: Provides a modern user interface based on the Tailwind CSS framework.
- **Sidebar navigation**: Simplifies the creation of sidebar menus with icons, collapsible sections, and more.
- **Dark mode support**: Includes both light and dark mode themes.
- **Flexible actions**: Provides multiple ways to define actions throughout the admin interface.
- **Advanced filters**: Features custom dropdowns, autocomplete, numeric, datetime, and text field filters.
- **Dashboard tools**: Includes helpers for building custom dashboard pages.
- **UI components**: Offers reusable interface components such as cards, buttons, and charts.
- **Crispy forms**: Custom template pack for django-crispy-forms to style forms with Unfold's design system.
- **WYSIWYG editor**: Built-in support for WYSIWYG editing through Trix.
- **Array widget:** Support for `django.contrib.postgres.fields.ArrayField`.
- **Inline tabs:** Group inlines into tab navigation in the change form.
- **Conditional fields:** Show or hide fields dynamically based on the values of other fields in the form.
- **Model tabs:** Allow defining custom tab navigation for models.
- **Fieldset tabs:** Merge multiple fieldsets into tabs in the change form.
- **Sortable inlines:** Allow sorting inlines by dragging and dropping.
- **Command palette**: Quickly search across models and custom data.
- **Datasets**: Custom changelists `ModelAdmin` displayed on change form detail pages.
- **Environment label:** Distinguish between environments by displaying a label.
- **Nonrelated inlines:** Display nonrelated models as inlines in the change form.
- **Paginated inlines:** Break down large record sets into pages within inlines for better admin performance.
- **Favicons:** Built-in support for configuring various site favicons.
- **Theming:** Customize color schemes, backgrounds, border radius, and more.
- **Font colors:** Adjust font colors for better readability.
- **Changeform modes:** Display fields in compressed mode in the change form.
- **Language switcher:** Allow changing language directly from the admin area.
- **Infinite paginator:** Efficiently handle large datasets with seamless pagination that reduces server load.
- **Parallel admin:** Supports [running the default admin](https://unfoldadmin.com/blog/migrating-django-admin-unfold/?utm_medium=github&utm_source=unfold) alongside Unfold.
- **Third-party packages:** Provides default support for multiple popular applications.
- **Configuration:** Allows basic options to be changed in `settings.py`.
- **Dependencies:** Built entirely on `django.contrib.admin`.
## Third-party package support
- [django-guardian](https://github.com/django-guardian/django-guardian) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-guardian/)
- [django-import-export](https://github.com/django-import-export/django-import-export) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-import-export/)
- [django-simple-history](https://github.com/jazzband/django-simple-history) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-simple-history/)
- [django-constance](https://github.com/jazzband/django-constance) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-constance/)
- [django-celery-beat](https://github.com/celery/django-celery-beat) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-celery-beat/)
- [django-modeltranslation](https://github.com/deschler/django-modeltranslation) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-modeltranslation/)
- [django-money](https://github.com/django-money/django-money) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-money/)
- [django-location-field](https://github.com/caioariede/django-location-field) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-location-field/)
- [djangoql](https://github.com/ivelum/djangoql) - [Integration guide](https://unfoldadmin.com/docs/integrations/djangoql/)
- [django-json-widget](https://github.com/jmrivas86/django-json-widget) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-json-widget/)
## Professional services
Need help integrating, customizing, or scaling Django Admin with Unfold?
- **Consulting**: Expert guidance on Django architecture, performance, feature development, and Unfold integration. [Learn more](https://unfoldadmin.com/consulting/?utm_medium=github&utm_source=unfold)
- **Support**: Assistance with integrating or customizing Unfold, including live 1:1 calls and implementation review. Fixed price, no ongoing commitment. [Learn more](https://unfoldadmin.com/support/?utm_medium=github&utm_source=unfold)
- **Studio**: Extend Unfold with advanced dashboards, visual customization, and additional admin tooling. [Learn more](https://unfoldadmin.com/studio?utm_medium=github&utm_source=unfold)
[](https://unfoldadmin.com/studio?utm_medium=github&utm_source=unfold)
## Credits
- **Tailwind**: [Tailwind CSS](https://github.com/tailwindlabs/tailwindcss) - Licensed under the [MIT License](https://opensource.org/licenses/MIT).
- **Icons**: [Material Symbols](https://github.com/google/material-design-icons) - Licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
- **Font**: [Inter](https://github.com/rsms/inter) - Licensed under the [SIL Open Font License 1.1](https://scripts.sil.org/OFL).
- **Charts**: [Chart.js](https://github.com/chartjs/Chart.js) - Licensed under the [MIT License](https://opensource.org/licenses/MIT).
- **JavaScript Framework**: [Alpine.js](https://github.com/alpinejs/alpine) - Licensed under the [MIT License](https://opensource.org/licenses/MIT).
- **AJAX calls**: [HTMX](https://htmx.org/) - Licensed under the [BSD 2-Clause License](https://opensource.org/licenses/BSD-2-Clause).
- **Custom Scrollbars**: [SimpleBar](https://github.com/Grsmto/simplebar) - Licensed under the [MIT License](https://opensource.org/licenses/MIT).
- **Range Slider**: [noUiSlider](https://github.com/leongersen/noUiSlider) - Licensed under the [MIT License](https://opensource.org/licenses/MIT).
- **Number Formatting**: [wNumb](https://github.com/leongersen/wnumb) - Licensed under the [MIT License](https://opensource.org/licenses/MIT).
| text/markdown | null | null | null | null | MIT | django, admin, tailwind, theme | [
"Environment :: Web Environment",
"Framework :: Django",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming L... | [] | null | null | >=3.9 | [] | [] | [] | [
"django>=4.2"
] | [] | [] | [] | [
"Homepage, https://unfoldadmin.com",
"Repository, https://github.com/kyong-dev/django-unfold"
] | twine/6.0.1 CPython/3.11.13 | 2026-02-18T14:15:57.575837 | django_unfold_patrick-0.80.1.tar.gz | 1,119,754 | b6/2d/233120cd3b58ec1a5329a9b1b4c22b27df3b6ee9b07597176dfcbd8897fb/django_unfold_patrick-0.80.1.tar.gz | source | sdist | null | false | 2109adb8f41ed7380acca379a4dd407f | 6ecc48bbac09d98f20f4890a6d3df79fe09f834820455b42035d5c97481ecb79 | b62d233120cd3b58ec1a5329a9b1b4c22b27df3b6ee9b07597176dfcbd8897fb | null | [] | 253 |
2.4 | sdialog | 0.4.5 | Synthetic Dialogue Generation and Analysis | <a href="https://sdialog.github.io/"><img src="https://raw.githubusercontent.com/idiap/sdialog/master/docs/_static/logo-banner.png" alt="SDialog Logo" title="SDialog" height="150" /></a>
[](https://sdialog.readthedocs.io)
[](https://github.com/idiap/sdialog/actions/workflows/ci.yml)
[](https://app.codecov.io/gh/idiap/sdialog?displayType=list)
[](https://www.youtube.com/watch?v=oG_jJuU255I)
[](https://badge.fury.io/py/sdialog)
[](https://pepy.tech/project/sdialog)
[](https://colab.research.google.com/github/idiap/sdialog/)
Quick links: [Website](https://sdialog.github.io/) • [GitHub](https://github.com/idiap/sdialog) • [Docs](https://sdialog.readthedocs.io) • [API](https://sdialog.readthedocs.io/en/latest/api/sdialog.html) • [ArXiv paper](https://arxiv.org/abs/2506.10622) • [Demo (video)](demo.md) • [Tutorials](https://github.com/idiap/sdialog/tree/main/tutorials) • [Datasets (HF)](https://huggingface.co/datasets/sdialog) • [Issues](https://github.com/idiap/sdialog/issues)
---
SDialog is an MIT-licensed open-source toolkit for building, simulating, and evaluating LLM-based conversational agents end-to-end. It aims to bridge **agent construction → user simulation → dialog generation → evaluation** in a single reproducible workflow, so you can generate reliable, controllable dialog systems or data at scale.
It standardizes a Dialog schema and offers persona‑driven multi‑agent simulation with LLMs, composable orchestration, built‑in metrics, and mechanistic interpretability.
<p align="center">
<img src="https://raw.githubusercontent.com/idiap/sdialog/master/docs/_static/sdialog-modules.png" alt="SDialog Logo" title="SDialog" height="650" />
</p>
## ✨ Key features
- Standard dialog schema with JSON import/export _(aiming to standardize dialog dataset formats [with your help 🙏](#project-vision--community-call))_
- Persona‑driven multi‑agent simulation with contexts, tools, and thoughts
- Composable orchestration for precise control over behavior and flow
- Built‑in evaluation (metrics + LLM‑as‑judge) for comparison and iteration
- Native mechanistic interpretability (inspect and steer activations)
- Easy creation of user-defined components by inheriting from base classes (personas, metrics, orchestrators, etc.)
- Interoperability across OpenAI, Hugging Face, Ollama, AWS Bedrock, Google GenAI, Anthropic, and more.
If you are building conversational systems, benchmarking dialog models, producing synthetic training corpora, simulating diverse users to test or probe conversational systems, or analyzing internal model behavior, SDialog provides an end‑to‑end workflow.
## ⚡ Installation
```bash
pip install sdialog
```
Alternatively, a ready-to-use Apptainer image (.sif) with SDialog and all dependencies is available on Hugging Face and can be downloaded [here](https://huggingface.co/datasets/sdialog/apptainer/resolve/main/sdialog.sif).
```bash
apptainer exec --nv sdialog.sif python3 -c "import sdialog; print(sdialog.__version__)"
```
> [!NOTE]
> This Apptainer image also has the Ollama server preinstalled.
## 🏁 Quickstart tour
Here's a short, hands‑on example: a support agent helps a customer disputing a double charge. We add a small refund rule and two simple tools, generate three dialogs for evaluation, then serve the agent on port 1333 for Open WebUI or any OpenAI‑compatible client.
```python
import sdialog
from sdialog import Context
from sdialog.agents import Agent
from sdialog.personas import SupportAgent, Customer
from sdialog.orchestrators import SimpleReflexOrchestrator
# First, let's set our preferred default backend:model and parameters
sdialog.config.llm("openai:gpt-4.1", temperature=1, api_key="YOUR_KEY") # or export OPENAI_API_KEY=YOUR_KEY
# sdialog.config.llm("ollama:qwen3:14b") # etc.
# Let's define our personas (use built-ins like in this example, or create your own!)
support_persona = SupportAgent(name="Ava", politeness="high", communication_style="friendly")
customer_persona = Customer(name="Riley", issue="double charge", desired_outcome="refund")
# (Optional) Let's define two mock tools (just plain Python functions) for our support agent
def verify_account(user_id):
"""Verify user account by user id."""
return {"user_id": user_id, "verified": True}
def refund(amount):
"""Process a refund for the given amount."""
return {"status": "refunded", "amount": amount}
# (Optional) Let's also include a small rule-based orchestrator for our support agent
react_refund = SimpleReflexOrchestrator(
condition=lambda utt: "refund" in utt.lower(),
instruction="Follow refund policy; verify account, apologize, refund.",
)
# Now, let's create the agents!
support_agent = Agent(
persona=support_persona,
think=True, # Let's also enable thinking mode
tools=[verify_account, refund],
name="Support"
)
simulated_customer = Agent(
persona=customer_persona,
first_utterance="Hi!",
name="Customer"
)
# Since we have one orchestrator, let's attach it to our target agent
support_agent = support_agent | react_refund
# Let's generate 3 dialogs between them! (we can evaluate them later)
# (Optional) Let's also define a concrete conversational context for the agents in these dialogs
web_chat = Context(location="chat", environment="web", circumstances="billing")
for ix in range(3):
dialog = simulated_customer.dialog_with(support_agent, context=web_chat) # Generate the dialog
dialog.to_file(f"dialog_{ix}.json") # Save it
dialog.print(all=True) # And pretty print it with all its events (thoughts, orchestration, etc.)
# Finally, let's serve our support agent to interact with real users (OpenAI-compatible API)
# Point Open WebUI or any OpenAI-compatible client to: http://localhost:1333
support_agent.serve(port=1333)
```
> [!TIP]
> - Choose your [LLMs and backends freely](https://sdialog.readthedocs.io/en/latest/sdialog/index.html#configuration-layer).
> - Personas and context can be [automatically generated](https://sdialog.readthedocs.io/en/latest/sdialog/index.html#attribute-generators) (e.g. generate different customer profiles!).
> [!NOTE]
> - See ["agents with tools and thoughts" tutorial](https://github.com/idiap/sdialog/blob/main/tutorials/00_overview/7.agents_with_tools_and_thoughts.ipynb) for a more complete example.
> - See [Serving Agents via REST API](https://sdialog.readthedocs.io/en/latest/sdialog/index.html#serving-agents) for more details on server options.
### 🧪 Testing remote systems with simulated users
<details>
<summary>Probe OpenAI‑compatible deployed systems with controllable simulated users and capture dialogs for evaluation.</summary>
You can also use SDialog as a controllable test harness for any OpenAI‑compatible system such as **vLLM**-based ones by role‑playing realistic or adversarial users against your deployed system:
* Black‑box functional checks (Does the system follow instructions? Handle edge cases?)
* Persona / use‑case coverage (Different goals, emotions, domains)
* Regression testing (Run the same persona batch each release; diff dialogs)
* Safety / robustness probing (Angry, confused, or noisy users)
* Automated evaluation (Pipe generated dialogs directly into evaluators - See Evaluation section below)
Core idea: wrap your system as an `Agent` using `openai:` as the prefix of your model name string, talk to it with simulated user `Agent`s, and capture `Dialog`s you can save, diff, and score.
Below is a minimal example where our simulated customer interacts once with your hypothetical remote endpoint:
```python
# Our remote system (your conversational backend exposing an OpenAI-compatible API)
system = Agent(
model="openai:your/model", # Model name exposed by your server
openai_api_base="http://your-endpoint.com:8000/v1", # Base URL of the service
openai_api_key="EMPTY", # Or a real key if required
name="System"
)
# Let's make our simulated customer talk with the system
dialog = simulated_customer.dialog_with(system)
dialog.to_file("dialog_0.json")
```
</details>
### 💾 Loading and saving dialogs
<details>
<summary>Import, export, and transform dialogs from JSON, text, CSV, or Hugging Face datasets.</summary>
Dialogs are rich objects with helper methods (filter, slice, transform, etc.) that can be easily exported and loaded using different methods:
```python
from sdialog import Dialog
# Load from JSON (generated by SDialog using `to_file()`)
dialog = Dialog.from_file("dialog_0.json")
# Load from HuggingFace Hub datasets
dialogs = Dialog.from_huggingface("sdialog/Primock-57")
# Create from plain text files or strings - perfect for converting existing datasets!
dialog_from_txt = Dialog.from_str("""
Alice: Hello there! How are you today?
Bob: I'm doing great, thanks for asking.
Alice: That's wonderful to hear!
""")
# Or, equivalently if the content is in a txt file
dialog_from_txt = Dialog.from_file("conversation.txt")
# Load from CSV files with custom column names
dialog_from_csv = Dialog.from_file("conversation.csv",
csv_speaker_col="speaker",
csv_text_col="value",)
# All Dialog objects have rich manipulation methods
dialog.filter("Alice").rename_speaker("Alice", "Customer").upper().to_file("processed.json")
avg_words_turn = sum(len(turn) for turn in dialog) / len(dialog)
```
See [Dialog section](https://sdialog.readthedocs.io/en/latest/sdialog/index.html#dialog) in the documentation for more information.
</details>
## 📊 Evaluate and compare
<details>
<summary>Score dialogs with built‑in metrics and LLM judges, and compare datasets with aggregators and plots.</summary>
Dialogs can be evaluated using the different components available inside the `sdialog.evaluation` module.
Use [built‑in metrics](https://sdialog.readthedocs.io/en/latest/api/sdialog.html#module-sdialog.evaluation)—conversational features, readability, embedding-based, LLM-as-judge, flow-based, functional correctness (30+ metrics across six categories)—or easily create new ones, then aggregate and compare datasets (sets of dialogs) via `Comparator`.
```python
from sdialog import Dialog
from sdialog.evaluation import LLMJudgeYesNo, ToolSequenceValidator
from sdialog.evaluation import FrequencyEvaluator, Comparator
# Two quick checks: did the agent ask for verification, and did it call tools in order?
judge_verify = LLMJudgeYesNo(
"Did the support agent try to verify the customer?",
reason=True,
)
tool_seq = ToolSequenceValidator(["verify_account", "refund"])
comparator = Comparator([
FrequencyEvaluator(judge_verify, name="Asked for verification"),
FrequencyEvaluator(tool_seq, name="Correct tool order"),
])
results = comparator({
"model-A": Dialog.from_folder("output/model-A"),
"model-B": Dialog.from_folder("output/model-B"),
})
comparator.plot()
```
</details>
> [!TIP]
> See [evaluation tutorial](https://github.com/idiap/sdialog/blob/main/tutorials/00_overview/5.evaluation.ipynb).
## 🧠 Mechanistic interpretability
<details>
<summary>Capture per‑token activations and steer models via Inspectors for analysis and interventions.</summary>
Attach Inspectors to capture per‑token activations and optionally steer (add/ablate directions) to analyze or intervene in model behavior.
```python
import sdialog
from sdialog.interpretability import Inspector
from sdialog.agents import Agent
sdialog.config.llm("huggingface:meta-llama/Llama-3.2-3B-Instruct")
agent = Agent(name="Bob")
inspector = Inspector(target="model.layers.15")
agent = agent | inspector
agent("How are you?")
agent("Cool!")
# Let's get the last response's first token activation vector!
act = inspector[-1][0].act # [response index][token index]
```
Steering intervention (subtracting a direction):
```python
import torch
anger_direction = torch.load("anger_direction.pt") # A direction vector (e.g., PCA / difference-in-mean vector)
agent_steered = agent | inspector - anger_direction # Ablate the anger direction from the target activations
agent_steered("You are an extremely upset assistant") # Agent "can't get angry anymore" :)
```
</details>
> [!TIP]
> See [the tutorial](https://github.com/idiap/sdialog/blob/main/tutorials/00_overview/6.agent%2Binspector_refusal.ipynb) on using SDialog to remove the refusal capability from LLaMA 3.2.
## 🔊 Audio generation
<details>
<summary>Convert text dialogs to audio conversations with speech synthesis, voice assignment, and acoustic simulation.</summary>
SDialog can transform text dialogs into audio conversations with a simple one-line command. The audio module supports:
* **Text-to-Speech (TTS)**: Kokoro and HuggingFace models (with planned support for better TTS like IndexTTS and API-based TTS like OpenAI)
* **Voice databases**: Automatic or manual voice assignment based on persona attributes (age, gender, language)
* **Acoustic simulation**: Room acoustics simulation for realistic spatial audio
* **Microphone simulation**: Professional microphones simulation from brands like Shure, Sennheiser, and Sony
* **Multiple formats**: Export to WAV, MP3, or FLAC with custom sampling rates
* **Multi-stage pipeline**: Step 1 (tts and concatenate utterances) and Step 2/3 (position based timeline generation and room acoustics)
Generate audio from any dialog easily with just a few lines of code:
Install dependencies (see [the documentation](https://sdialog.readthedocs.io/en/latest/sdialog/index.html#setup-and-installation) for complete setup instructions):
```bash
apt-get install sox ffmpeg espeak-ng
pip install sdialog[audio]
```
Then, simply:
```python
from sdialog import Dialog
dialog = Dialog.from_file("my_dialog.json")
# Convert to audio with default settings (HuggingFace TTS - single speaker)
audio_dialog = dialog.to_audio(perform_room_acoustics=True)
print(audio_dialog.display())
# Or customize the audio generation
audio_dialog = dialog.to_audio(
perform_room_acoustics=True,
audio_file_format="mp3",
re_sampling_rate=16000,
)
print(audio_dialog.display())
```
</details>
> [!TIP]
> See the [Audio Generation documentation](https://sdialog.readthedocs.io/en/latest/sdialog/index.html#audio-generation) for more details. For usage examples including acoustic simulation, room generation, and voice databases, check out the [audio tutorials](https://github.com/idiap/sdialog/tree/main/tutorials/01_audio).
## 📖 Documentation and tutorials
- [ArXiv paper](https://arxiv.org/abs/2506.10622)
- [Demo (video)](demo.md)
- [Tutorials](https://github.com/idiap/sdialog/tree/main/tutorials)
- [API reference](https://sdialog.readthedocs.io/en/latest/api/sdialog.html)
- [Documentation](https://sdialog.readthedocs.io)
- Documentation for **AI coding assistants** like Copilot is also available at `https://sdialog.readthedocs.io/en/latest/llm.txt` following the [llm.txt specification](https://llmstxt.org/). In your Copilot chat, simply use:
```
#fetch https://sdialog.readthedocs.io/en/latest/llm.txt
Your prompt goes here...(e.g. Write a python script using sdialog to have an agent for
criminal investigation, define its persona, tools, orchestration...)
```
## 🌍 Project Vision & Community Call
To accelerate open, rigorous, and reproducible conversational AI research, SDialog invites the community to collaborate and help shape the future of open dialog generation.
### 🤝 How You Can Help
- **🗂️ Dataset Standardization**: Help convert existing dialog datasets to SDialog format. Currently, each dataset stores dialogs in different formats, making cross-dataset analysis and model evaluation challenging. **Converted datasets are made available as Hugging Face datasets** in the [SDialog organization](https://huggingface.co/datasets/sdialog/) for easy access and integration.
- **🔧 Component Development**: Create new personas, orchestrators, evaluators, generators, or backend integrations
- **📊 Evaluation & Benchmarks**: Design new metrics, evaluation frameworks, or comparative studies
- **🧠 Interpretability Research**: Develop new analysis tools, steering methods, or mechanistic insights
- **📖 Documentation & Tutorials**: Improve guides, add examples, or create educational content
- **🐛 Issues & Discussions**: Report bugs, request features, or share research ideas and use cases
> [!NOTE]
> **Example**: Check out [Primock-57](https://huggingface.co/datasets/sdialog/Primock-57), a sample dataset already available in SDialog format on Hugging Face.
>
> If you have a dialog dataset you'd like to convert to SDialog format, need help with the conversion process, or want to contribute in any other way, please [open an issue](https://github.com/idiap/sdialog/issues) or reach out to us. We're happy to help and collaborate!
## 💪 Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md). We welcome issues, feature requests, and pull requests. If you want to **contribute to the project**, please open an [issue](https://github.com/idiap/sdialog/issues) or submit a PR, and help us make SDialog better 👍.
If you find SDialog useful, please consider starring ⭐ the GitHub repository to support the project and increase its visibility 😄.
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. All-contributors list:
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
<tbody>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://sergioburdisso.github.io/"><img src="https://avatars.githubusercontent.com/u/12646542?v=4?s=100" width="100px;" alt="Sergio Burdisso"/><br /><sub><b>Sergio Burdisso</b></sub></a><br /><a href="https://github.com/idiap/sdialog/commits?author=sergioburdisso" title="Code">💻</a> <a href="#ideas-sergioburdisso" title="Ideas, Planning, & Feedback">🤔</a> <a href="https://github.com/idiap/sdialog/commits?author=sergioburdisso" title="Documentation">📖</a> <a href="#tutorial-sergioburdisso" title="Tutorials">✅</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://linkedin.com/in/yanis-labrak-8a7412145/"><img src="https://avatars.githubusercontent.com/u/19389475?v=4?s=100" width="100px;" alt="Labrak Yanis"/><br /><sub><b>Labrak Yanis</b></sub></a><br /><a href="https://github.com/idiap/sdialog/commits?author=qanastek" title="Code">💻</a> <a href="#ideas-qanastek" title="Ideas, Planning, & Feedback">🤔</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/SevKod"><img src="https://avatars.githubusercontent.com/u/123748182?v=4?s=100" width="100px;" alt="Séverin"/><br /><sub><b>Séverin</b></sub></a><br /><a href="https://github.com/idiap/sdialog/commits?author=SevKod" title="Code">💻</a> <a href="#ideas-SevKod" title="Ideas, Planning, & Feedback">🤔</a> <a href="#tutorial-SevKod" title="Tutorials">✅</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://www.ricardmarxer.com"><img src="https://avatars.githubusercontent.com/u/15324?v=4?s=100" width="100px;" alt="Ricard Marxer"/><br /><sub><b>Ricard Marxer</b></sub></a><br /><a href="https://github.com/idiap/sdialog/commits?author=rikrd" title="Code">💻</a> <a href="#ideas-rikrd" title="Ideas, Planning, & Feedback">🤔</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/thschaaf"><img src="https://avatars.githubusercontent.com/u/42753790?v=4?s=100" width="100px;" alt="Thomas Schaaf"/><br /><sub><b>Thomas Schaaf</b></sub></a><br /><a href="#ideas-thschaaf" title="Ideas, Planning, & Feedback">🤔</a> <a href="https://github.com/idiap/sdialog/commits?author=thschaaf" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/enderzhangpro"><img src="https://avatars.githubusercontent.com/u/41446535?v=4?s=100" width="100px;" alt="David Liu"/><br /><sub><b>David Liu</b></sub></a><br /><a href="https://github.com/idiap/sdialog/commits?author=enderzhangpro" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/ahassoo1"><img src="https://avatars.githubusercontent.com/u/46629954?v=4?s=100" width="100px;" alt="ahassoo1"/><br /><sub><b>ahassoo1</b></sub></a><br /><a href="#ideas-ahassoo1" title="Ideas, Planning, & Feedback">🤔</a> <a href="https://github.com/idiap/sdialog/commits?author=ahassoo1" title="Code">💻</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="http://www.cyrta.com"><img src="https://avatars.githubusercontent.com/u/83173?v=4?s=100" width="100px;" alt="Pawel Cyrta"/><br /><sub><b>Pawel Cyrta</b></sub></a><br /><a href="https://github.com/idiap/sdialog/commits?author=cyrta" title="Code">💻</a> <a href="#ideas-cyrta" title="Ideas, Planning, & Feedback">🤔</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/Amyyyyeah"><img src="https://avatars.githubusercontent.com/u/122391422?v=4?s=100" width="100px;" alt="ABCDEFGHIJKL"/><br /><sub><b>ABCDEFGHIJKL</b></sub></a><br /><a href="https://github.com/idiap/sdialog/commits?author=Amyyyyeah" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://blog.leonesfrancos.com/"><img src="https://avatars.githubusercontent.com/u/91928331?v=4?s=100" width="100px;" alt="Fernando Leon Franco"/><br /><sub><b>Fernando Leon Franco</b></sub></a><br /><a href="https://github.com/idiap/sdialog/commits?author=Seikened" title="Code">💻</a> <a href="#ideas-Seikened" title="Ideas, Planning, & Feedback">🤔</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://www.idiap.ch/~evillatoro/"><img src="https://avatars.githubusercontent.com/u/49253959?v=4?s=100" width="100px;" alt="Esaú Villatoro-Tello, Ph. D."/><br /><sub><b>Esaú Villatoro-Tello, Ph. D.</b></sub></a><br /><a href="#ideas-villatoroe" title="Ideas, Planning, & Feedback">🤔</a> <a href="https://github.com/idiap/sdialog/commits?author=villatoroe" title="Documentation">📖</a></td>
</tr>
</tbody>
</table>
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- ALL-CONTRIBUTORS-LIST:END -->
## 📚 Citation
If you use SDialog in academic work, please consider citing [our paper](https://arxiv.org/abs/2506.10622):
```bibtex
@misc{burdisso2025sdialogpythontoolkitendtoend,
title = {SDialog: A Python Toolkit for End-to-End Agent Building, User Simulation, Dialog Generation, and Evaluation},
author = {Sergio Burdisso and Séverin Baroudi and Yanis Labrak and David Grunert and Pawel Cyrta and Yiyang Chen and Srikanth Madikeri and Thomas Schaaf and Esaú Villatoro-Tello and Ahmed Hassoon and Ricard Marxer and Petr Motlicek},
year = {2025},
eprint = {2506.10622},
archivePrefix = {arXiv},
primaryClass = {cs.AI},
url = {https://arxiv.org/abs/2506.10622},
}
```
_(A system demonstration version of the paper has been submitted to EACL 2026 and is under review; we will update this BibTeX if accepted)_
## 🙏 Acknowledgments
This work was mainly supported by the European Union Horizon 2020 project [ELOQUENCE](https://eloquenceai.eu/about/) and received a significant development boost during the **Johns Hopkins University** [JSALT 2025 workshop](https://jsalt2025.fit.vut.cz/), as part of the ["Play your Part" research group](https://jsalt2025.fit.vut.cz/play-your-part). We thank all contributors and the open-source community for their valuable feedback and contributions.
## 📝 License
[MIT License](LICENSE)
Copyright (c) 2025 Idiap Research Institute
| text/markdown | null | Sergio Burdisso <sergio.burdisso@gmail.com> | null | Sergio Burdisso <sergio.burdisso@gmail.com>, Severin Baroudi <sevbargal@outlook.fr>, Yanis Labrak <yanis.labrak@univ-avignon.fr> | null | null | [
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries",
"Topic :: Softwa... | [] | null | null | >=3.9 | [] | [] | [] | [
"codecov",
"flake8",
"graphviz",
"Jinja2",
"matplotlib",
"networkx",
"numpy",
"pandas",
"tabulate",
"pre-commit",
"print-color",
"PyYAML",
"tqdm",
"tenacity",
"syllables",
"fastapi",
"uvicorn",
"langchain-huggingface",
"langchain-openai",
"langchain-google-genai",
"langchain-... | [] | [] | [] | [
"Homepage, https://sdialog.readthedocs.io",
"Issues, https://github.com/idiap/sdialog/issues",
"Source, https://github.com/idiap/sdialog",
"Documentation, https://sdialog.readthedocs.io"
] | twine/6.1.0 CPython/3.9.20 | 2026-02-18T14:15:36.746797 | sdialog-0.4.5.tar.gz | 1,325,453 | 9d/98/37a0d148263e6a5fabd859a4fef0becdec4695ba3a0cd8f5b8c9e0e3442a/sdialog-0.4.5.tar.gz | source | sdist | null | false | 925b5a76a8d0fafbab0e6303bc4b5dcb | 9cdca2d342a121bdce4b87c97527bd8f3206c954b67cd74fda7e5caab828bf20 | 9d9837a0d148263e6a5fabd859a4fef0becdec4695ba3a0cd8f5b8c9e0e3442a | MIT | [
"LICENSE"
] | 249 |
2.3 | uncertainty-engine-types | 0.18.0 | Common type definitions for the Uncertainty Engine | 
# Types
Common types definitions for the Uncertainty Engine.
This library should be used by other packages to ensure consistency in the types used across the Uncertainty Engine.
## Overview
### Execution & Error Handling
- **ExecutionError**
Exception raised to indicate execution errors.
### Graph & Node Types
- **Graph**
Represents a collection of nodes and their connections.
- **NodeElement**
Defines a node with a type and associated inputs.
- **NodeId**
A unique identifier for nodes.
- **SourceHandle** & **TargetHandle**
Strings used to reference node connections.
### Node Handles
- **Handle**
Represents a node handle in the format `node.handle` and validates this structure.
### Language Learning Models (LLMs)
- **LLMProvider**
Enum listing supported LLM providers.
- **LLMConfig**
Manages connections to LLMs based on the chosen provider and configuration.
### Messaging
- **Message**
Represents a message with a role and content, used for interactions with LLMs.
### TwinLab Models
- **MachineLearningModel**
Represents a model configuration including metadata.
### Node Metadata
- **NodeInputInfo**
Describes the properties of a node's input.
- **NodeOutputInfo**
Describes the properties of a node's output.
- **NodeInfo**
Aggregates metadata for a node, including inputs and outputs.
### Sensor Design
- **SensorDesigner**
Defines sensor configuration and provides functionality to load sensor data.
- **save_sensor_designer**
Function to persist a sensor designer configuration.
### SQL Database Types
- **SQLKind**
Enum listing supported SQL database types.
- **SQLConfig**
Configures connections and operations for SQL databases.
### Tabular Data
- **TabularData**
Represents CSV-based data and includes functionality to load it into a pandas DataFrame.
### Token Types
- **Token**
Enum representing token types, such as TRAINING and STANDARD.
### Vector Stores
- **VectorStoreProvider**
Enum for supported vector store providers.
- **VectorStoreConfig**
Configures connections to vector stores.
| text/markdown | Freddy Wordingham | freddy@digilab.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"pydantic<3.0.0,>=2.10.5"
] | [] | [] | [] | [] | poetry/2.1.4 CPython/3.13.5 Darwin/25.2.0 | 2026-02-18T14:14:21.276211 | uncertainty_engine_types-0.18.0.tar.gz | 7,266 | f1/e4/15e74536297064f0b1662f5b64d5518816342cdb1adc2ab277edd0d976af/uncertainty_engine_types-0.18.0.tar.gz | source | sdist | null | false | e449355d6d299400a6c2a2d299828380 | 3e3ccd3600dd69da3599104920e0ae86328ecd30aed013c5f083bd829b66fd09 | f1e415e74536297064f0b1662f5b64d5518816342cdb1adc2ab277edd0d976af | null | [] | 253 |
2.4 | quantumapi | 0.1.0b0 | Official Python SDK for QuantumAPI - Quantum-safe encryption and identity platform | # QuantumAPI Python SDK
[](https://badge.fury.io/py/quantumapi)
[](https://pypi.org/project/quantumapi/)
[](https://github.com/quantumapi/quantumapi-python/blob/main/LICENSE)
[](https://docs.quantumapi.eu/sdk/python)
Official Python SDK for [QuantumAPI](https://quantumapi.eu) - The European quantum-safe encryption and identity platform.
## Features
- 🔐 **Quantum-Safe Encryption** - ML-KEM-768, ML-DSA-65, and hybrid encryption
- 🔑 **Key Management** - Generate, rotate, and manage cryptographic keys
- 🗄️ **Secret Vault** - Securely store and manage secrets with versioning
- 👤 **User Management** - Create and manage end-users with roles
- 🔍 **Audit Logs** - Query comprehensive audit trails
- 💳 **Billing Integration** - Access subscription and usage metrics
- 🚀 **Async/Await Support** - Full async support with httpx
- ⚡ **Automatic Retries** - Exponential backoff for transient failures
- 📊 **Rate Limiting** - Client-side rate limit tracking
- 🎯 **Type Safety** - Complete type hints with Pydantic models
## Installation
```bash
pip install quantumapi
```
For development:
```bash
pip install quantumapi[dev]
```
## Quick Start
```python
from quantumapi import QuantumAPIClient
# Initialize client with API key
client = QuantumAPIClient(api_key="qapi_xxx")
# Encrypt data
ciphertext = await client.encryption.encrypt(
plaintext="secret data",
key_id="550e8400-e29b-41d4-a716-446655440000"
)
# Store a secret
secret = await client.secrets.create(
name="database_password",
value="super_secret_password",
labels=["production", "database"]
)
# Generate a new key pair
key = await client.keys.generate(
name="encryption-key-2024",
algorithm="ML-KEM-768",
key_type="encryption"
)
# Retrieve secret
secret_value = await client.secrets.get(secret.id)
print(f"Secret: {secret_value.value}")
```
## Synchronous API
If you prefer synchronous code, use the `QuantumAPISyncClient`:
```python
from quantumapi import QuantumAPISyncClient
client = QuantumAPISyncClient(api_key="qapi_xxx")
# All methods work the same without await
ciphertext = client.encryption.encrypt(plaintext="secret data")
```
## Configuration
The SDK can be configured via environment variables or constructor arguments:
```python
client = QuantumAPIClient(
api_key="qapi_xxx", # Or set QUANTUMAPI_API_KEY
base_url="https://api.quantumapi.eu", # Optional, defaults to production
timeout=30.0, # Request timeout in seconds
max_retries=3, # Max retry attempts for failed requests
enable_logging=True, # Enable detailed logging
)
```
### Environment Variables
- `QUANTUMAPI_API_KEY` - Your API key
- `QUANTUMAPI_BASE_URL` - API base URL (default: https://api.quantumapi.eu)
- `QUANTUMAPI_TIMEOUT` - Request timeout in seconds (default: 30)
- `QUANTUMAPI_MAX_RETRIES` - Maximum retry attempts (default: 3)
## Core Modules
### Encryption Client
```python
# Encrypt data
result = await client.encryption.encrypt(
plaintext="sensitive data",
key_id="key-id", # Optional, uses default key if not specified
encoding="utf8" # or "base64" for binary data
)
# Decrypt data
plaintext = await client.encryption.decrypt(
encrypted_payload=result.encrypted_payload
)
# Sign data
signature = await client.encryption.sign(
data="message to sign",
key_id="signing-key-id"
)
# Verify signature
is_valid = await client.encryption.verify(
data="message to sign",
signature=signature.signature,
key_id="signing-key-id"
)
```
### Keys Client
```python
# Generate key pair
key = await client.keys.generate(
name="my-encryption-key",
key_type="encryption",
algorithm="ML-KEM-768",
set_as_default=True
)
# List keys
keys = await client.keys.list(
key_type="encryption",
page=1,
page_size=50
)
# Get key details
key = await client.keys.get(key_id="key-id")
# Rotate key
new_key = await client.keys.rotate(key_id="key-id")
# Delete key
await client.keys.delete(key_id="key-id")
# Set default key
await client.keys.set_default(key_id="key-id", key_type="encryption")
```
### Secrets Client
```python
# Create secret
secret = await client.secrets.create(
name="api-key",
value="secret_value",
content_type="api-key",
labels=["prod", "stripe"],
expires_at="2025-12-31T23:59:59Z"
)
# List secrets
secrets = await client.secrets.list(
content_type="password",
label="production",
page=1
)
# Get secret
secret = await client.secrets.get(secret_id="secret-id")
# Update secret value
await client.secrets.update_value(
secret_id="secret-id",
value="new_secret_value"
)
# Update secret metadata
await client.secrets.update(
secret_id="secret-id",
name="new-name",
labels=["staging"]
)
# Delete secret
await client.secrets.delete(secret_id="secret-id")
```
### Users Client
```python
# Create user
user = await client.users.create(
email="user@example.com",
name="John Doe",
role="developer"
)
# List users
users = await client.users.list(active_only=True)
# Update user
await client.users.update(
user_id="user-id",
name="Jane Doe",
role="admin"
)
# Deactivate user
await client.users.deactivate(user_id="user-id")
```
### Applications Client
```python
# Create OIDC application
app = await client.applications.create(
name="My App",
redirect_uris=["https://myapp.com/callback"],
scopes=["openid", "profile", "email"]
)
# List applications
apps = await client.applications.list()
# Update application
await client.applications.update(
app_id="app-id",
redirect_uris=["https://myapp.com/callback", "https://myapp.com/callback2"]
)
```
### Audit Client
```python
# Query audit logs
logs = await client.audit.list(
action="secret_accessed",
resource_type="Secret",
start_date="2024-01-01",
end_date="2024-12-31",
page=1
)
# Export audit logs
csv_data = await client.audit.export(
format="csv",
start_date="2024-01-01",
end_date="2024-12-31"
)
```
### Billing Client
```python
# Get subscription details
subscription = await client.billing.get_subscription()
# Get usage metrics
usage = await client.billing.get_usage(
start_date="2024-01-01",
end_date="2024-01-31"
)
# List invoices
invoices = await client.billing.list_invoices()
```
### Health Client
```python
# Check API health
health = await client.health.check()
print(f"API Status: {health.status}")
# Get API version
version = await client.health.version()
print(f"API Version: {version.version}")
# Get rate limit status
rate_limit = await client.health.rate_limit()
print(f"Remaining: {rate_limit.remaining}/{rate_limit.limit}")
```
## Error Handling
The SDK provides typed exceptions for all error scenarios:
```python
from quantumapi import (
QuantumAPIClient,
AuthenticationError,
PermissionError,
NotFoundError,
RateLimitError,
ValidationError,
ServerError,
NetworkError,
)
try:
secret = await client.secrets.get("invalid-id")
except NotFoundError as e:
print(f"Secret not found: {e.message}")
except AuthenticationError as e:
print(f"Authentication failed: {e.message}")
except RateLimitError as e:
print(f"Rate limit exceeded. Retry after {e.retry_after} seconds")
except ServerError as e:
print(f"Server error: {e.message}")
except NetworkError as e:
print(f"Network error: {e.message}")
```
## Pagination
List methods support automatic pagination:
```python
# Manual pagination
secrets = await client.secrets.list(page=1, page_size=100)
# Iterate through all pages
all_secrets = []
page = 1
while True:
result = await client.secrets.list(page=page, page_size=100)
all_secrets.extend(result.secrets)
if not result.has_more:
break
page += 1
```
## Webhook Verification
Verify webhook signatures from QuantumAPI:
```python
from quantumapi.webhooks import verify_signature
# Verify webhook payload
is_valid = verify_signature(
payload=request.body,
signature=request.headers["X-QuantumAPI-Signature"],
secret="your_webhook_secret"
)
if is_valid:
# Process webhook event
pass
```
## Logging
Enable detailed logging for debugging:
```python
import logging
logging.basicConfig(level=logging.DEBUG)
client = QuantumAPIClient(
api_key="qapi_xxx",
enable_logging=True
)
```
## Examples
Check out the [examples/](./examples/) directory for more comprehensive examples:
- [Basic encryption and decryption](./examples/01_encryption_basics.py)
- [Secret management](./examples/02_secret_management.py)
- [Key generation and rotation](./examples/03_key_management.py)
- [User management](./examples/04_user_management.py)
- [Audit log queries](./examples/05_audit_logs.py)
- [Django integration](./examples/frameworks/django_integration.py)
- [Flask integration](./examples/frameworks/flask_integration.py)
## Framework Integrations
### Django
```bash
pip install quantumapi[django]
```
```python
# settings.py
QUANTUMAPI = {
'API_KEY': 'qapi_xxx',
'BASE_URL': 'https://api.quantumapi.eu',
}
# Use in views
from quantumapi.integrations.django import get_client
async def my_view(request):
client = get_client()
secret = await client.secrets.get('secret-id')
return JsonResponse({'value': secret.value})
```
### Flask
```bash
pip install quantumapi[flask]
```
```python
from flask import Flask
from quantumapi.integrations.flask import QuantumAPIFlask
app = Flask(__name__)
app.config['QUANTUMAPI_API_KEY'] = 'qapi_xxx'
qapi = QuantumAPIFlask(app)
@app.route('/secret/<secret_id>')
async def get_secret(secret_id):
secret = await qapi.secrets.get(secret_id)
return {'value': secret.value}
```
## Testing
Run tests:
```bash
pytest
```
Run tests with coverage:
```bash
pytest --cov=quantumapi --cov-report=html
```
## Development
```bash
# Clone repository
git clone https://github.com/quantumapi/quantumapi-python
cd quantumapi-python
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Format code
black quantumapi tests
isort quantumapi tests
# Type checking
mypy quantumapi
# Linting
flake8 quantumapi tests
```
## Requirements
- Python 3.9 or higher
- httpx >= 0.27.0
- pydantic >= 2.0.0
- python-dotenv >= 1.0.0
- cryptography >= 42.0.0
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Support
- 📖 [Documentation](https://docs.quantumapi.eu/sdk/python)
- 💬 [Discord Community](https://discord.gg/quantumapi)
- 🐛 [Issue Tracker](https://github.com/quantumapi/quantumapi-python/issues)
- 📧 [Email Support](mailto:support@quantumapi.eu)
## Contributing
Contributions are welcome! Please read our [Contributing Guide](CONTRIBUTING.md) for details on our code of conduct and the process for submitting pull requests.
## Changelog
See [CHANGELOG.md](CHANGELOG.md) for a history of changes to this SDK.
## Security
For security concerns, please email security@quantumapi.eu. Do not create public issues for security vulnerabilities.
## Links
- [QuantumAPI Website](https://quantumapi.eu)
- [API Documentation](https://docs.quantumapi.eu)
- [Python SDK Documentation](https://docs.quantumapi.eu/sdk/python)
- [GitHub Repository](https://github.com/quantumapi/quantumapi-python)
- [PyPI Package](https://pypi.org/project/quantumapi/)
| text/markdown | null | QuantumAPI Team <developers@quantumapi.eu> | null | null | null | quantum, encryption, post-quantum, cryptography, security, api, sdk, ml-kem, ml-dsa, pqc | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Py... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx<1.0.0,>=0.27.0",
"pydantic<3.0.0,>=2.0.0",
"python-dotenv<2.0.0,>=1.0.0",
"cryptography<43.0.0,>=42.0.0",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-httpx>=0.30.0; extra == \"dev\"",
"black>=24.0.0; extra == \"... | [] | [] | [] | [
"Homepage, https://quantumapi.eu",
"Documentation, https://docs.quantumapi.eu/sdk/python",
"Repository, https://github.com/victorZKov/mislata",
"Bug Tracker, https://github.com/victorZKov/mislata/issues",
"Changelog, https://github.com/victorZKov/mislata/blob/main/src/sdk/python-quantumapi/CHANGELOG.md"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T14:13:52.411449 | quantumapi-0.1.0b0.tar.gz | 24,483 | de/ae/d8799c9aaae50ed891e31f3cbd8cb12eb5ba2c867865e4d48a570c737d9c/quantumapi-0.1.0b0.tar.gz | source | sdist | null | false | a001878f7962b8b525eba504c615663b | 46a006b2d0f2da1558f4522224f1b8f5f8718be468fd6d45ddaade1b8593fa51 | deaed8799c9aaae50ed891e31f3cbd8cb12eb5ba2c867865e4d48a570c737d9c | MIT | [
"LICENSE"
] | 232 |
2.4 | uipath-mcp | 0.1.0 | UiPath MCP SDK | # UiPath MCP Python SDK
[](https://pypi.org/project/uipath-mcp/)
[](https://img.shields.io/pypi/v/uipath-mcp)
[](https://pypi.org/project/uipath-mcp/)
A Python SDK that enables hosting local MCP servers on UiPath Platform.
Check out our [samples directory](https://github.com/UiPath/uipath-mcp-python/tree/main/samples) to explore various MCP server implementations. You can also learn how to [pack and host binary servers](https://github.com/UiPath/uipath-mcp-python/blob/main/docs/how_to_pack_binary.md) written in languages like Go within UiPath.
## Installation
```bash
pip install uipath-mcp
```
using `uv`:
```bash
uv add uipath-mcp
```
## Configuration
### Servers Definition
Create the `mcp.json` file:
```json
{
"servers": {
"my-python-server": {
"type": "stdio",
"command": "python",
"args": ["server.py"]
},
}
}
```
## Command Line Interface (CLI)
The SDK also provides a command-line interface for creating, packaging, and deploying Python-based MCP servers:
### Authentication
```bash
uipath auth
```
This command opens a browser for authentication and creates/updates your `.env` file with the proper credentials.
### Initialize a Project
```bash
uipath init [SERVER]
```
Creates a `uipath.json` configuration file for your project. If [SERVER] is not provided, it will create an entrypoint for each MCP server defined in the `mcp.json` file.
### Debug a Project
```bash
uipath run [SERVER]
```
Starts the local MCP Server
### Package a Project
```bash
uipath pack
```
Packages your MCP Server into a `.nupkg` file that can be deployed to UiPath.
**Note:** Your `pyproject.toml` must include:
- A description field (avoid characters: &, <, >, ", ', ;)
- Author information
Example:
```toml
description = "Your package description"
authors = [{name = "Your Name", email = "your.email@example.com"}]
```
### Publish a Package
```bash
uipath publish
```
Publishes the most recently created package to your UiPath Orchestrator.
## Project Structure
To properly use the CLI for packaging and publishing, your project should include:
- A `pyproject.toml` file with project metadata
- A `mcp.json` file with servers metadata
- A `uipath.json` file (generated by `uipath init`)
- Any Python files needed for your automation
## Development
### Setting Up a Development Environment
Please read our [contribution guidelines](CONTRIBUTING.md) before submitting a pull request.
| text/markdown | null | null | null | Marius Cosareanu <marius.cosareanu@uipath.com>, Cristian Pufu <cristian.pufu@uipath.com> | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Build Tools"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"mcp==1.26.0",
"pysignalr==1.3.0",
"uipath-runtime<0.9.0,>=0.8.0",
"uipath<2.9.0,>=2.8.23"
] | [] | [] | [] | [
"Homepage, https://uipath.com",
"Repository, https://github.com/UiPath/uipath-mcp-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:13:28.519243 | uipath_mcp-0.1.0.tar.gz | 1,726,135 | 8a/85/8284a07213c29123427790d247c8690f975c522f144142fb83921045a892/uipath_mcp-0.1.0.tar.gz | source | sdist | null | false | d19d9452fe03ee0fa261f8d451cb766e | 5d142f0f289ca1b16d49013151007ca86c3404094a7aef084d6ccadf2b216a39 | 8a858284a07213c29123427790d247c8690f975c522f144142fb83921045a892 | null | [
"LICENSE"
] | 9,283 |
2.4 | mdx-better-lists | 1.1.0 | Python-Markdown extension for better list handling | # mdx_better_lists
[](https://pypi.org/project/mdx-better-lists/)
[](https://pypi.org/project/mdx-better-lists/)
[](https://github.com/JimmyOei/mdx_better_lists/blob/main/LICENSE)
[](https://github.com/JimmyOei/mdx_better_lists/actions/workflows/ci.yml)
[](https://codecov.io/gh/JimmyOei/mdx_better_lists)
[](https://github.com/psf/black)
A Python-Markdown extension for better list handling, providing more intuitive list behavior and formatting with fine-grained control over list rendering. Created with Test-Driven Development (TDD) principles to ensure reliability and maintainability.
## Features
- **Configurable nested indentation** - Control how many spaces are required for nested lists (default: 2)
- **Marker-based list separation** - Automatically separate lists when marker types change (-, *, +)
- **Blank line list separation** - Separate unordered lists when blank lines appear between items
- **Loose list control** - Control paragraph wrapping in ordered lists with blank lines
- **Number preservation** - Optionally preserve exact list numbers from markdown source
- **Always start at one** - Force ordered lists to always start at 1
- **Paragraph-list splitting** - Optionally split paragraphs and lists without requiring blank lines between them
## Installation
```bash
pip install mdx_better_lists
```
## Usage
### Basic Usage
```python
from markdown import markdown
text = """
- Item 1
- Item 2
- Item 3
"""
html = markdown(text, extensions=['mdx_better_lists'])
```
### With Configuration
```python
from markdown import markdown
text = """
1. First
2. Second
2. Another second
"""
html = markdown(text, extensions=['mdx_better_lists'],
extension_configs={'mdx_better_lists': {
'preserve_numbers': True
}})
```
## Configuration Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `nested_indent` | int | 2 | Number of spaces required for nested list indentation |
| `marker_separation` | bool | True | Separate lists when marker types (-, *, +) differ |
| `unordered_list_separation` | bool | True | Separate unordered lists when blank lines appear between items |
| `ordered_list_loose` | bool | True | Wrap ordered list items in `<p>` tags when blank lines separate them |
| `preserve_numbers` | bool | False | Preserve exact list numbers from markdown (use `value` attribute) |
| `always_start_at_one` | bool | False | Force all ordered lists to start at 1 |
| `split_paragraph_lists` | bool | False | Split paragraphs and lists when they appear without blank lines between them |
### Configuration Details
#### `nested_indent` (default: 2)
Controls how many spaces are required for a list item to be considered nested.
```python
# With nested_indent=2 (default)
- Parent
- Nested # 2 spaces = nested
# With nested_indent=4
- Parent
- Nested # 4 spaces = nested
- Not nested # 2 spaces = not nested
```
#### `marker_separation` (default: True)
When enabled, lists with different markers (-, *, +) are separated into different `<ul>` elements.
```python
# With marker_separation=True (default)
- Item with dash
+ Item with plus # Creates a new <ul>
# Output: Two separate <ul> elements
# With marker_separation=False
- Item with dash
+ Item with plus # Same <ul>
# Output: Single <ul> element
```
#### `unordered_list_separation` (default: True)
When enabled, unordered lists are separated into different `<ul>` elements when blank lines appear.
```python
# With unordered_list_separation=True (default)
- First
- Second # Creates a new <ul>
# Output: Two separate <ul> elements
# With unordered_list_separation=False
- First
- Second # Same <ul>
# Output: Single <ul> element
```
#### `ordered_list_loose` (default: True)
When enabled, ordered list items are wrapped in `<p>` tags when blank lines separate them.
```python
# With ordered_list_loose=True (default)
1. First
2. Second
# Output:
<ol>
<li><p>First</p></li>
<li><p>Second</p></li>
</ol>
# With ordered_list_loose=False
1. First
2. Second
# Output:
<ol>
<li>First</li>
<li>Second</li>
</ol>
```
#### `preserve_numbers` (default: False)
When enabled, preserves exact list numbers from the markdown source using the `value` attribute.
```python
# With preserve_numbers=True
1. First
2. Second
2. Another second
3. Third
# Output:
<ol>
<li value="1">First</li>
<li value="2">Second</li>
<li value="2">Another second</li>
<li value="3">Third</li>
</ol>
```
#### `always_start_at_one` (default: False)
When enabled, forces all ordered lists to start at 1, ignoring the starting number in markdown.
```python
# With always_start_at_one=True
5. Fifth
6. Sixth
# Output:
<ol>
<li>Fifth</li>
<li>Sixth</li>
</ol>
# (renders as 1, 2 instead of 5, 6)
# With always_start_at_one=False (default)
5. Fifth
6. Sixth
# Output:
<ol start="5">
<li>Fifth</li>
<li>Sixth</li>
</ol>
```
#### `split_paragraph_lists` (default: False)
When enabled, automatically splits paragraphs and lists that appear without blank lines between them into separate blocks. This allows lists to be recognized immediately after paragraphs without requiring a blank line separator.
```python
# With split_paragraph_lists=False (default)
This is a paragraph before the list.
- First item
- Second item
# Output:
<p>This is a paragraph before the list.
- First item
- Second item</p>
# (list markers are treated as plain text)
# With split_paragraph_lists=True
This is a paragraph before the list.
- First item
- Second item
# Output:
<p>This is a paragraph before the list.</p>
<ul>
<li>First item</li>
<li>Second item</li>
</ul>
# (paragraph and list are separated)
```
This also works with ordered lists:
```python
# With split_paragraph_lists=True
Introduction paragraph.
1. First point
2. Second point
# Output:
<p>Introduction paragraph.</p>
<ol>
<li>First point</li>
<li>Second point</li>
</ol>
```
**Note:** This feature only operates at the top level. List markers inside list items that are not properly indented will remain as text (standard Markdown behavior).
## Examples
### Example 1: Marker Separation
```python
from markdown import markdown
text = """
- Item with dash
- Another dash
+ Item with plus
+ Another plus
"""
html = markdown(text, extensions=['mdx_better_lists'])
# Output: Three separate <ul> elements (blank line + marker change)
```
### Example 2: Nested Lists with Custom Indentation
```python
from markdown import markdown
text = """
- Parent
- Nested (4 spaces)
- Deeply nested (8 spaces)
"""
html = markdown(text, extensions=['mdx_better_lists'],
extension_configs={'mdx_better_lists': {
'nested_indent': 4
}})
```
### Example 3: Preserving List Numbers
```python
from markdown import markdown
text = """
1. Introduction
1. Background
1. Methods
"""
html = markdown(text, extensions=['mdx_better_lists'],
extension_configs={'mdx_better_lists': {
'preserve_numbers': True
}})
# Each item gets value="1"
```
### Migration from mdx_truly_sane_lists
```python
# mdx_truly_sane_lists
markdown(text, extensions=['mdx_truly_sane_lists'],
extension_configs={'mdx_truly_sane_lists': {
'truly_sane': True # Default
'nested_indent': 2 # Default
}})
# Equivalent in mdx_better_lists (this is the default)
markdown(text, extensions=['mdx_better_lists'],
extension_configs={'mdx_better_lists': {
'marker_separation': True, # Default
'unordered_list_separation': True, # Default
'ordered_list_loose': True # Default
'nested_indent': 2 # Default
}})
```
**Note:** `mdx_better_lists` does not support loose list behavior (paragraph wrapping) for unordered lists. Unordered lists always remain tight, even when both `marker_separation` and `unordered_list_separation` are set to `False`.
## Development
This project follows Test-Driven Development (TDD) principles.
### Running Tests
```bash
pytest tests/
```
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
- Inspired by [mdx_truly_sane_lists](https://github.com/radude/mdx_truly_sane_lists)
- Built on [Python-Markdown](https://python-markdown.github.io/)
| text/markdown | null | Jimmy Oei <Jamesmontyn@gmail.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | null | null | >=3.8 | [] | [] | [] | [
"markdown>=3.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/JimmyOei/mdx_better_lists",
"Repository, https://github.com/JimmyOei/mdx_better_lists",
"Issues, https://github.com/JimmyOei/mdx_better_lists/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:13:18.439771 | mdx_better_lists-1.1.0.tar.gz | 23,207 | 0a/f5/0b8880572f39e5b2e4fc2a7c1a7adcf70e81fbaabdb35943ed488bc09f71/mdx_better_lists-1.1.0.tar.gz | source | sdist | null | false | 2a1d4588c01808d3b14eef07138bf417 | 02ca2999a5f558bbd56c392b86381cf993497870cd7f11320c990907d1cc3730 | 0af50b8880572f39e5b2e4fc2a7c1a7adcf70e81fbaabdb35943ed488bc09f71 | null | [
"LICENSE"
] | 246 |
2.3 | rasa-sdk | 3.16.0a3 | Open source machine learning framework to automate text- and voice-based conversations: NLU, dialogue management, connect to Slack, Facebook, and more - Create chatbots and voice assistants | # Rasa Python-SDK
[](https://forum.rasa.com/?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[](https://github.com/RasaHQ/rasa-sdk/actions/runs/)
[](https://coveralls.io/github/RasaHQ/rasa-sdk?branch=main)
[](https://pypi.python.org/pypi/rasa-sdk)
Python SDK for the development of custom actions for Rasa.
<hr />
💡 **We're migrating issues to Jira** 💡
Starting January 2023, issues for Rasa Open Source are located in
[this Jira board](https://rasa-open-source.atlassian.net/browse/OSS). You can browse issues without being logged in;
if you want to create issues, you'll need to create a Jira account.
<hr />
## Installation
To install the SDK run
```bash
pip install rasa-sdk
```
## Compatibility
`rasa-sdk` package:
| SDK version | compatible Rasa version |
|----------------|-----------------------------------|
| `1.0.x` | `>=1.0.x` |
old `rasa_core_sdk` package:
| SDK version | compatible Rasa Core version |
|----------------|----------------------------------------|
| `0.12.x` | `>=0.12.x` |
| `0.11.x` | `0.11.x` |
| not compatible | `<=0.10.x` |
## Usage
Detailed instructions can be found in the Rasa Documentation about
[Custom Actions](https://rasa.com/docs/pro/build/custom-actions).
## Docker
### Usage
In order to start an action server using implemented custom actions,
you can use the available Docker image `rasa/rasa-sdk`.
Before starting the action server ensure that the folder containing
your actions is handled as Python module and therefore has to contain
a file called `__init__.py`
Then start the action server using:
```bash
docker run -p 5055:5055 --mount type=bind,source=<ABSOLUTE_PATH_TO_YOUR_ACTIONS>,target=/app/actions \
rasa/rasa-sdk:<version>
```
The action server is then available at `http://localhost:5055/webhook`.
### Custom Dependencies
To add custom dependencies you enhance the given Docker image, e.g.:
```
# Extend the official Rasa SDK image
FROM rasa/rasa-sdk:<version>
# Change back to root user to install dependencies
USER root
# To install system dependencies
RUN apt-get update -qq && \
apt-get install -y <NAME_OF_REQUIRED_PACKAGE> && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# To install packages from PyPI
RUN pip install --no-cache-dir <A_REQUIRED_PACKAGE_ON_PYPI>
# Switch back to non-root to run code
USER 1001
```
## Building from source
Rasa SDK uses Poetry for packaging and dependency management. If you want to build it from source,
you have to install Poetry first. This is how it can be done:
```
curl -sSL https://install.python-poetry.org | python3 -
```
There are several other ways to install Poetry. Please, follow
[the official guide](https://python-poetry.org/docs/#installation) to see all possible options.
To install dependencies and `rasa-sdk` itself in editable mode execute
```
make install
```
## Code Style
To ensure a standardized code style we use the formatter [ruff](https://github.com/astral-sh/ruff).
If your code is not formatted properly, GitHub CI will fail to build.
If you want to automatically format your code on every commit, you can use [pre-commit](https://pre-commit.com/).
Just install it via `pip install pre-commit` and execute `pre-commit install`.
To check and reformat files execute
```
make lint
```
## Steps to release a new version
Releasing a new version is quite simple, as the packages are build and distributed
by GitHub Actions.
*Release steps*:
1. Switch to the branch you want to cut the release from (`main` in case of a
major / minor, the current release branch for patch releases).
2. If this is a minor / major release: Make sure all fixes from currently supported minor versions have been merged from their respective release branches (e.g. 3.3.x) back into main.
3. Run `make release`
4. Create a PR against main or the release branch (e.g. `1.2.x`)
5. **If this is a minor release**, a new release branch should be created
pointing to the same commit as the tag to allow for future patch releases,
e.g.
```bash
git checkout -b 1.2.x
git push origin 1.2.x
```
## License
Licensed under the Apache License, Version 2.0. Copyright 2021 Rasa
Technologies GmbH. [Copy of the license](LICENSE.txt).
A list of the Licenses of the dependencies of the project can be found at
the bottom of the
[Libraries Summary](https://libraries.io/github/RasaHQ/rasa-sdk).
| text/markdown | Rasa Technologies GmbH | hi@rasa.com | Tom Bocklisch | tom@rasa.com | Apache-2.0 | nlp, machine-learning, machine-learning-library, bot, bots, botkit, rasa conversational-agents, conversational-ai, chatbot, chatbot-framework, bot-framework | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"... | [] | null | null | <3.14,>3.9 | [] | [] | [] | [
"Sanic-Cors<3.0.0,>=2.0.0",
"coloredlogs<16,>=10",
"grpcio<1.67.0,>=1.66.2",
"grpcio-health-checking<1.60.0,>=1.59.3",
"grpcio-tools<1.67.0,>=1.66.2",
"opentelemetry-api<1.34.0,>=1.33.0",
"opentelemetry-exporter-otlp<1.34.0,>=1.33.0",
"opentelemetry-sdk<1.34.0,>=1.33.0",
"pluggy<2.0.0,>=1.0.0",
"p... | [] | [] | [] | [
"Documentation, https://rasa.com/docs",
"Homepage, https://rasa.com",
"Repository, https://github.com/rasahq/rasa-sdk"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T14:13:04.267506 | rasa_sdk-3.16.0a3.tar.gz | 49,393 | 69/08/f59836ff99cf545501cdaf295802a10c69d9aeb6597155e4e31b73d49be6/rasa_sdk-3.16.0a3.tar.gz | source | sdist | null | false | a1a87ee4e2e48d0cbfaf46d13e719e3d | 115dbeceb276eafcac7aafa1de2ac04daa0db733f679c0b2e7378ef153588b31 | 6908f59836ff99cf545501cdaf295802a10c69d9aeb6597155e4e31b73d49be6 | null | [] | 1,122 |
2.4 | woodelf-explainer | 0.2.10 | Fast explainability algorithms for tree ensembles | <p align="center">
<img src="https://raw.githubusercontent.com/ron-wettenstein/woodelf/main/docs/WOODELF_commercial.png" width="1000" />
</p>
### Understand trees. Decision trees.
WOODELF is a unified and efficient algorithm for computing Shapley values on decision trees. It supports:
- **CPU and GPU**
- **Path-Dependent and Background (Interventional) explanations**
- **Shapley and Banzhaf values**
- **First-order values and interaction values**
All within a single algorithmic framework. The implementation is written in Python, with all performance-critical operations vectorized and optimized using NumPy and SciPy. GPU acceleration is supported via CuPy.
Our approach is significantly faster than the one used by the `shap` package.
In particular, the complexity of Interventional SHAP is dramatically reduced:
when explaining $n$ samples with a background dataset of size $m$,
the `shap` package requires $O(nm)$ time, while WOODELF requires only $O(n + m)$ time.
We also substantially accelerate Path-Dependent SHAP, especially on large datasets and when computing interaction values.
To demonstrate the speed-up, we computed Shapley values and interaction values for 3,000,000 samples with 127 features
using a background dataset of 5,000,000 rows. We explained the predictions of an XGBoost model with 100 trees of
depth 6, trained on this background data. The results are reported in the table below.
| Task | shap package CPU | WOODELF CPU | WOODELF GPU |
|----------------------------------|------------------|-------------|-------------|
| Path Dependent SHAP | 51 min | 96 seconds | 3.3 seconds |
| Background SHAP | 8 year* | 162 seconds | 16 seconds |
| Path Dependent SHAP interactions | 8 days* | 193 seconds | 6 seconds |
| Background SHAP interactions | Not implemented | 262 seconds | 19 seconds |
Values marked with * are runtime estimates.
WOODELF also outperforms other state-of-the-art approaches. In this task, it is an order of magnitude faster than PLTreeSHAP and approximately 4× faster than FastTreeSHAP.
## Installations
A simple pip install:
<pre>
pip install woodelf_explainer
</pre>
The required dependencies are `pandas`, `numpy`, `scipy`, and `shap`.
The `shap` package is used for parsing decision trees and for minor auxiliary operations,
while the Shapley value computation is handled entirely by WOODELF.
An optional dependency is `cupy`, which enables GPU-accelerated execution.
## Usage
Use the `woodelf.explainer.WoodelfExplainer` object!
Its API is identical to the API of `shap.TreeExplainer`. The functions inputs are the same, the output is the same, just the algorithm is different.
Background (interventional) SHAP and Shapley interaction values computation:
```python
from woodelf import WoodelfExplainer
# Get X_train, y_train, X_test, y_test and train an XGBoost model
explainer = WoodelfExplainer(xgb_model, X_train)
background_values = explainer.shap_values(X_test)
background_iv = explainer.shap_interaction_values(X_test)
```
**Good to know:**
- The shap python package does not support Background SHAP interactions - WOODELF supports them!
- We extensively validate WOODELF against the `shap` package and confirm that both return identical Shapley values.
Replacing `shap.TreeExplainer` with `WoodelfExplainer` will not change the output: both the format and the numerical values
remain the same (up to minor floating-point differences - we test with a tolerance of 0.00001).
For visualization, pass the returned output to any of the shap plots:
```python
import shap
shap.summary_plot(background_values, X_test)
```
## Additional Usage Examples
#### Path Dependent SHAP
Same API as in the shap package:
```python
explainer = WoodelfExplainer(xgb_model)
pd_values = explainer.shap_values(X_test)
pd_iv = explainer.shap_interaction_values(X_test)
```
#### Using GPU
Simply pass `GPU=True` in the WoodelfExplainer initialization:
```python
explainer = WoodelfExplainer(xgb_model, X_train, GPU=True)
background_values_GPU = explainer.shap_values(X_test)
```
#### Banzhaf Values
Simply use `explainer.banzhaf_values` for banzhaf values and `explainer.banzhaf_interaction_values` for banzhaf interaction values.
```python
explainer = WoodelfExplainer(xgb_model, X_train)
banzhaf_values = explainer.banzhaf_values(X_test)
banzhaf_iv = explainer.banzhaf_interaction_values(X_test)
```
#### Better Output Format
A memory-efficient output format that returns results as a DataFrame with feature pairs as columns. Feature pairs with zero interactions are automatically excluded to save RAM.
```python
explainer = WoodelfExplainer(xgb_model, X_train)
background_iv_df = explainer.shap_interaction_values(X_test, as_df=True, exclude_zero_contribution_features=False)
```
#### Built-in Cache
By default, caching is enabled for sufficiently small decision-tree ensembles (e.g., models with low tree depth).
The cache stores the preprocessed background data, eliminating the need to recompute it across repeated uses of the same explainer instance.
You can control this behavior manually:
- Enable caching with `cache_option='yes'`
- Disable caching with `cache_option='no'`
**Note:** Caching may be memory-intensive for deep trees (e.g., `max_depth ≥ 8`).
```python
explainer = WoodelfExplainer(xgb_model, X_train, cache_option='yes')
shap_sample_1 = explainer.shap_values(X_test.sample(100))
# No need to preprocess the background data from here on, the cache will be used instead.
shap_sample_2 = explainer.shap_values(X_test.sample(100))
shap_sample_3 = explainer.shap_values(X_test.sample(100))
...
```
## Citations
To cite our package and algorithm, please refer to our AAAI 2026 paper. The paper was accepted to AAAI 2026 and will be published soon.
For now, refer to its [arXiv version](https://arxiv.org/abs/2511.09376).
```bibtex
@misc{nadel2025decisiontreesbooleanlogic,
title={From Decision Trees to Boolean Logic: A Fast and Unified SHAP Algorithm},
author={Alexander Nadel and Ron Wettenstein},
year={2025},
eprint={2511.09376},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2511.09376},
}
```
## Contact & Collaboration
If you have questions, are considering using WOODELF in your research, or would like to contribute, feel free to reach out:
**Ron Wettenstein**
Reichman University, Herzliya, Israel
📧 ron.wettenstein@post.runi.ac.il
## Summary and Future Research
A strong explainability approach reveals what your models have learned,
which features matter most for predicting your target,
and which factors have the greatest influence on individual predictions.
Accurate Shapley and Banzhaf values based on large background datasets are a
meaningful step toward answering these questions.
We will continue to explore new explainability methods and expand this framework with increasingly powerful tools for interpreting
and understanding the behavior of decision tree ensembles—so you can not only observe their predictions, but understand what drives them.
> "Trees much like this one date back 290 million years, around a thousand times longer than we've been here.
> To me, to sit beneath a Ginkgo tree and look up is to be reminded that we're a blip in the story of life.
> But also, what a blip. We are, after all, the only species that ever tried to name the Ginkgo.
> We are not observers of life on Earth, as much as it may sometimes feel that way. We are participants in that life. And so trees don't just remind me how astonishing they are, they also remind me how astonishing we are."
>
> John Green, *At Least There Are Trees*
| text/markdown | null | Ron Wettenstein <ron-wettenstein@gmail.com> | null | null | The MIT License (MIT)
Copyright (c) 2025 Ron Wettenstein
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"pandas",
"scipy",
"shap",
"pytest; extra == \"test\"",
"xgboost>=2.0; extra == \"test\"",
"scikit-learn; extra == \"test\"",
"shap; extra == \"test\"",
"cupy; extra == \"gpu\""
] | [] | [] | [] | [
"Homepage, https://github.com/ron-wettenstein/woodelf"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-18T14:13:01.557867 | woodelf_explainer-0.2.10.tar.gz | 43,397 | 3f/4b/e0d977d1121a4669fd7687117b812640d8306a8d815f4a1d1d7e35f4e243/woodelf_explainer-0.2.10.tar.gz | source | sdist | null | false | 0478b3dd5ae5d05ffd03ceb7b10cb1a4 | e984c201b4cb1c44e334f4fa822dfbe77d7c05d6b892095e7421110ed47fa380 | 3f4be0d977d1121a4669fd7687117b812640d8306a8d815f4a1d1d7e35f4e243 | null | [
"LICENSE"
] | 284 |
2.1 | pyap2 | 0.2.8 | Pyap2 is a maintained fork of pyap, a regex-based library for parsing US, CA, and UK addresses. The fork adds typing support, handles more address formats and edge cases. | Pyap2: Python address parser
============================
Pyap2 is a maintained fork of Pyap, a regex-based python library for
detecting and parsing addresses. Currently it supports US 🇺🇸, Canadian 🇨🇦 and British 🇬🇧 addresses.
.. code-block:: python
>>> import pyap
>>> test_address = """
Lorem ipsum
225 E. John Carpenter Freeway,
Suite 1500 Irving, Texas 75062
Dorem sit amet
"""
>>> addresses = pyap.parse(test_address, country='US')
>>> for address in addresses:
# shows found address
print(address)
# shows address parts
print(address.as_dict())
...
Installation
------------
To install Pyap2, simply:
.. code-block:: bash
$ pip install pyap2
About
-----
We started improving the original `pyap` by adopting poetry and adding typing support.
It was extensively tested in web-scraping operations on thousands of US addresses.
Gradually, we added support for many rarer address formats and edge cases, as well
as the ability to parse a partial address where only street info is available.
Typical workflow
----------------
Pyap should be used as a first thing when you need to detect an address
inside a text when you don't know for sure whether the text contains
addresses or not.
Limitations
-----------
Because Pyap2 (and Pyap) is based on regular expressions it provides fast results.
This is also a limitation because regexps intentionally do not use too
much context to detect an address.
In other words in order to detect US address, the library doesn't
use any list of US cities or a list of typical street names. It
looks for a pattern which is most likely to be an address.
For example the string below would be detected as a valid address:
"1 SPIRITUAL HEALER DR SHARIF NSAMBU SPECIALISING IN"
It happens because this string has all the components of a valid
address: street number "1", street name "SPIRITUAL HEALER" followed
by a street identifier "DR" (Drive), city "SHARIF NSAMBU SPECIALISING"
and a state name abbreviation "IN" (Indiana).
The good news is that the above mentioned errors are **quite rare**.
| text/x-rst | Argyle Developers | developers@argyle.io | null | null | MIT | address, parser, regex | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.10",
"Programming Lan... | [] | https://github.com/argyle-engineering/pyap | null | <4.0,>=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/argyle-engineering/pyap",
"Repository, https://github.com/argyle-engineering/pyap"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T14:13:00.741816 | pyap2-0.2.8.tar.gz | 19,608 | da/23/c758eef81c064fa7f0bc0dd4553ac04ff8517fb56e6ebbd318f2f1b3a334/pyap2-0.2.8.tar.gz | source | sdist | null | false | b8619d29789773bb9283630ae5ffc8f8 | 693f58b983fae645d0ec12ddaa2c4a252f72c05a687949376fab039c15511d45 | da23c758eef81c064fa7f0bc0dd4553ac04ff8517fb56e6ebbd318f2f1b3a334 | null | [] | 522 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.