metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | cattrs | 26.1.0 | Composable complex class support for attrs and dataclasses. | # *cattrs*: Flexible Object Serialization and Validation
*Because validation belongs to the edges.*
[](https://catt.rs/)
[](https://github.com/python-attrs/cattrs/blob/main/LICENSE)
[](https://pypi.python.org/pypi/cattrs)
[](https://github.com/python-attrs/cattrs)
[](https://pepy.tech/project/cattrs)
[](https://github.com/python-attrs/cattrs/actions/workflows/main.yml)
---
<!-- begin-teaser -->
**cattrs** is a Swiss Army knife for (un)structuring and validating data in Python.
In practice, that means it converts **unstructured dictionaries** into **proper classes** and back, while **validating** their contents.
<!-- end-teaser -->
## Example
<!-- begin-example -->
_cattrs_ works best with [_attrs_](https://www.attrs.org/) classes and [dataclasses](https://docs.python.org/3/library/dataclasses.html), where simple (un-)structuring works out of the box, even for nested data, without polluting your data model with serialization details:
```python
>>> from attrs import define
>>> from cattrs import structure, unstructure
>>> @define
... class C:
... a: int
... b: list[str]
>>> instance = structure({'a': 1, 'b': ['x', 'y']}, C)
>>> instance
C(a=1, b=['x', 'y'])
>>> unstructure(instance)
{'a': 1, 'b': ['x', 'y']}
```
<!-- end-example -->
Have a look at [*Why cattrs?*](https://catt.rs/en/latest/why.html) for more examples!
<!-- begin-why -->
## Features
### Recursive Unstructuring
- _attrs_ classes and dataclasses are converted into dictionaries in a way similar to `attrs.asdict()`, or into tuples in a way similar to `attrs.astuple()`.
- Enumeration instances are converted to their values.
- Other types are let through without conversion. This includes types such as integers, dictionaries, lists and instances of non-_attrs_ classes.
- Custom converters for any type can be registered using `register_unstructure_hook`.
### Recursive Structuring
Converts unstructured data into structured data, recursively, according to your specification given as a type.
The following types are supported:
- `typing.Optional[T]` and its 3.10+ form, `T | None`.
- `list[T]`, `typing.List[T]`, `typing.MutableSequence[T]`, `typing.Sequence[T]` convert to lists.
- `tuple` and `typing.Tuple` (both variants, `tuple[T, ...]` and `tuple[X, Y, Z]`).
- `set[T]`, `typing.MutableSet[T]`, and `typing.Set[T]` convert to sets.
- `frozenset[T]`, and `typing.FrozenSet[T]` convert to frozensets.
- `dict[K, V]`, `typing.Dict[K, V]`, `typing.MutableMapping[K, V]`, and `typing.Mapping[K, V]` convert to dictionaries.
- [`typing.TypedDict`](https://docs.python.org/3/library/typing.html#typing.TypedDict), both ordinary and generic.
- [`typing.NewType`](https://docs.python.org/3/library/typing.html#newtype).
- [PEP 695 type aliases](https://docs.python.org/3/library/typing.html#type-aliases) on 3.12+.
- _attrs_ classes with simple attributes and the usual `__init__`[^simple].
- All _attrs_ classes and dataclasses with the usual `__init__`, if their complex attributes have type metadata.
- Unions of supported _attrs_ classes, given that all of the classes have a unique field.
- Unions of anything, if you provide a disambiguation function for it.
- Custom converters for any type can be registered using `register_structure_hook`.
[^simple]: Simple attributes are attributes that can be assigned unstructured data, like numbers, strings, and collections of unstructured data.
### Batteries Included
_cattrs_ comes with pre-configured converters for a number of serialization libraries, including JSON (standard library, [_orjson_](https://pypi.org/project/orjson/), [UltraJSON](https://pypi.org/project/ujson/)), [_msgpack_](https://pypi.org/project/msgpack/), [_cbor2_](https://pypi.org/project/cbor2/), [_bson_](https://pypi.org/project/bson/), [PyYAML](https://pypi.org/project/PyYAML/), [_tomlkit_](https://pypi.org/project/tomlkit/) and [_msgspec_](https://pypi.org/project/msgspec/) (supports only JSON at this time).
For details, see the [cattrs.preconf package](https://catt.rs/en/stable/preconf.html).
## Design Decisions
_cattrs_ is based on a few fundamental design decisions:
- Un/structuring rules are separate from the models.
This allows models to have a one-to-many relationship with un/structuring rules, and to create un/structuring rules for models which you do not own and cannot change.
(_cattrs_ can be configured to use un/structuring rules from models using the [`use_class_methods` strategy](https://catt.rs/en/latest/strategies.html#using-class-specific-structure-and-unstructure-methods).)
- Invent as little as possible; reuse existing ordinary Python instead.
For example, _cattrs_ did not add a custom exception type for grouping exceptions, but instead waited for Python's sanctioned [`ExceptionGroup`](https://docs.python.org/3/library/exceptions.html#ExceptionGroup).
A side-effect of this design decision is that, in a lot of cases, when you're solving _cattrs_ problems you're actually learning Python instead of learning _cattrs_.
- Resist the temptation to guess.
If there are two ways of solving a problem, _cattrs_ should refuse to guess and let the user configure it themselves.
A foolish consistency is the hobgoblin of little minds, so these decisions can be, and sometimes are, broken, but they have proven to be a good foundation.
<!-- end-why -->
## Credits
Major credits to Hynek Schlawack for creating [attrs](https://attrs.org) and its predecessor, [characteristic](https://github.com/hynek/characteristic).
_cattrs_ is tested with [Hypothesis](http://hypothesis.readthedocs.io/en/latest/), by David R. MacIver.
_cattrs_ is benchmarked using [perf](https://github.com/haypo/perf) and [pytest-benchmark](https://pytest-benchmark.readthedocs.io/en/latest/index.html).
This package was created with [Cookiecutter](https://github.com/audreyr/cookiecutter) and the [`audreyr/cookiecutter-pypackage`](https://github.com/audreyr/cookiecutter-pypackage) project template.
| text/markdown | null | Tin Tvrtkovic <tinchester@gmail.com> | null | null | MIT | attrs, dataclasses, serialization | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programmi... | [] | null | null | >=3.10 | [] | [] | [] | [
"attrs>=25.4.0",
"exceptiongroup>=1.1.1; python_version < \"3.11\"",
"typing-extensions>=4.14.0",
"pymongo>=4.4.0; extra == \"bson\"",
"cbor2>=5.4.6; extra == \"cbor2\"",
"msgpack>=1.0.5; extra == \"msgpack\"",
"msgspec>=0.19.0; implementation_name == \"cpython\" and extra == \"msgspec\"",
"orjson>=3.... | [] | [] | [] | [
"Homepage, https://catt.rs",
"Changelog, https://catt.rs/en/latest/history.html",
"Bug Tracker, https://github.com/python-attrs/cattrs/issues",
"Repository, https://github.com/python-attrs/cattrs",
"Documentation, https://catt.rs/en/stable/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:15:19.406296 | cattrs-26.1.0.tar.gz | 495,672 | a0/ec/ba18945e7d6e55a58364d9fb2e46049c1c2998b3d805f19b703f14e81057/cattrs-26.1.0.tar.gz | source | sdist | null | false | 0dfeb8a55487c3aa7ae27489b72f0c68 | fa239e0f0ec0715ba34852ce813986dfed1e12117e209b816ab87401271cdd40 | a0ecba18945e7d6e55a58364d9fb2e46049c1c2998b3d805f19b703f14e81057 | null | [
"LICENSE"
] | 2,639,041 |
2.4 | jobkit | 0.1.1 | Open source job hunting toolkit - search jobs, generate tailored resumes and cover letters | # JobKit
**Open source AI-powered job hunting toolkit**
Search jobs, build your profile from multiple sources, and generate tailored resumes and cover letters with AI.
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
## Features
- **Job Search** - Search and save jobs from LinkedIn
- **Multi-Source Profile** - Import from resume (PDF/DOCX), LinkedIn, and GitHub
- **AI-Powered Generation** - Create tailored resumes and cover letters
- **PDF Export** - Download professional PDFs ready to submit
- **Multiple LLM Support** - Ollama (free/local), Anthropic Claude, or OpenAI GPT
- **100% Local** - Your data stays on your machine
## Quick Start
### Install
```bash
pip install jobkit
playwright install chromium
```
### Run
```bash
jobkit web --port 8080
```
Open http://localhost:8080 in your browser.
## Usage
### 1. Set Up Your Profile
Import your background from multiple sources:
- **Upload Resume** - PDF, DOCX, or TXT
- **LinkedIn** - Import experience and education
- **GitHub** - Import projects and languages
All sources are merged intelligently.
### 2. Search for Jobs
- Enter keywords and location
- Browser opens for LinkedIn login (cookies saved for future sessions)
- Save interesting jobs with one click
### 3. Generate Applications
Click "Generate Application" on any saved job to create:
- Tailored resume matching the job requirements
- Compelling cover letter
- Download as professional PDFs
## LLM Setup
### Ollama (Free, Local) - Recommended
```bash
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3
ollama serve
```
### Cloud Providers
Set your API key in Settings:
- **Anthropic**: claude-sonnet-4-20250514
- **OpenAI**: gpt-4
## CLI Commands
```bash
jobkit search "software engineer" --location "Remote"
jobkit list
jobkit generate JOB_ID
jobkit config
```
## Development
```bash
git clone https://github.com/rockly-dao/jobkit.git
cd jobkit
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"
playwright install chromium
```
## Tech Stack
- **Backend**: Python, Flask
- **Scraping**: Playwright
- **AI**: Ollama, Anthropic, OpenAI
- **PDF**: fpdf2
- **Frontend**: Tailwind CSS
## Contributing
Contributions welcome! Areas of interest:
- New job board scrapers (Indeed, Glassdoor)
- Profile importers (Twitter, personal websites)
- UI improvements
## License
MIT License - free for personal and commercial use.
---
**Built with AI, for job seekers**
| text/markdown | JobKit Contributors | null | null | null | MIT | job-search, resume, cover-letter, linkedin, automation, llm, ai | [
"Development Status :: 3 - Alpha",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Office/B... | [] | null | null | >=3.10 | [] | [] | [] | [
"playwright>=1.40.0",
"flask>=3.0.0",
"anthropic>=0.18.0",
"openai>=1.12.0",
"requests>=2.31.0",
"python-dotenv>=1.0.0",
"pypdf>=4.0.0",
"python-docx>=1.0.0",
"markdown>=3.5.0",
"fpdf2>=2.7.0",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"black>=24.0.0; extra ==... | [] | [] | [] | [
"Homepage, https://rockly-dao.github.io/jobkit",
"Documentation, https://rockly-dao.github.io/jobkit",
"Repository, https://github.com/rockly-dao/jobkit",
"Issues, https://github.com/rockly-dao/jobkit/issues"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-18T22:15:10.323691 | jobkit-0.1.1.tar.gz | 40,611 | be/56/dd3c39e7c7936becfebd8a271381da911fd068b4de183d9bab11b44dbdd5/jobkit-0.1.1.tar.gz | source | sdist | null | false | 3badf47dbd2cd8f93d7ceb8a4d3d4387 | 62508942d37d1d79a43eafce6ab2c5decf5dc4c2cf396c765bdac777dc191c51 | be56dd3c39e7c7936becfebd8a271381da911fd068b4de183d9bab11b44dbdd5 | null | [
"LICENSE"
] | 244 |
2.4 | ultra-lean-mcp-proxy | 0.3.2 | Ultra Lean MCP Proxy - lightweight optimization proxy for MCP | <picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/lean-agent-protocol/ultra-lean-mcp-proxy/main/.github/logo-dark.png">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/lean-agent-protocol/ultra-lean-mcp-proxy/main/.github/logo-light.png">
<img alt="LAP Logo" src="https://raw.githubusercontent.com/lean-agent-protocol/ultra-lean-mcp-proxy/main/.github/logo-dark.png" width="400">
</picture>
# ultra-lean-mcp-proxy
[](https://pypi.org/project/ultra-lean-mcp-proxy/)
[](https://www.npmjs.com/package/ultra-lean-mcp-proxy)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
Transparent MCP stdio proxy that reduces token and byte overhead on `tools/list` and `tools/call` paths using LAP (Lean Agent Protocol) compression.
## One-Line Install
### Python (pip)
```bash
pip install ultra-lean-mcp-proxy
ultra-lean-mcp-proxy install
```
### Node.js (npx - zero Python dependency)
```bash
npx ultra-lean-mcp-proxy install
```
Both commands auto-discover local MCP client configs (Claude Desktop, Cursor, Windsurf, Claude Code), wrap stdio and URL (`http`/`sse`) entries by default, and back up originals.
To uninstall:
```bash
ultra-lean-mcp-proxy uninstall
```
To check current status:
```bash
ultra-lean-mcp-proxy status
```
## Add Servers That Get Wrapped
`ultra-lean-mcp-proxy install` wraps local stdio servers (`command` + `args`) and local URL-based transports (`http` / `sse`) by default.
Use `--no-wrap-url` if you only want stdio wrapping.
For Claude Code, add servers in stdio form and use `--scope user` so they are written to `~/.claude.json` (auto-detected):
```bash
# Wrappable (stdio)
claude mcp add --scope user filesystem -- npx -y @modelcontextprotocol/server-filesystem /tmp
```
```bash
# Wrappable by default (wrapped via local bridge chain)
claude mcp add --scope user --transport http linear https://mcp.linear.app/mcp
```
Then run:
```bash
ultra-lean-mcp-proxy status
ultra-lean-mcp-proxy install
```
> **Note**: `claude mcp add --scope project ...` writes to `.mcp.json` in the current project. This file is not globally auto-discovered by `install` yet.
> **Note**: URL wrapping applies to local config files (for example `~/.claude.json`, `~/.cursor/mcp.json`).
> For cloud-managed Claude connectors, use npm CLI `wrap-cloud` to mirror and wrap them locally:
> `npx ultra-lean-mcp-proxy wrap-cloud`
## Features
- **Transparent Proxying**: Wrap any MCP stdio server without code changes
- **Massive Token Savings**: 51-83% token reduction across real MCP servers
- **Performance Boost**: 22-87% faster response times
- **Zero Client Changes**: Compatible with existing MCP clients
- **Tools Hash Sync**: Efficient tool list caching with conditional requests
- **Delta Responses**: Send only changes between responses
- **Lazy Loading**: On-demand tool discovery for large tool sets
- **Result Compression**: Compress tool call results using LAP format
## Performance Benchmarks
Benchmark figures below are for the Python runtime with the full v2 optimization pipeline enabled.
The npm package in Phase C1 currently provides definition compression only.
Real-world benchmark across 5 production MCP servers (147 measured turns):
| Metric | Direct | With Proxy | Savings |
|--------|--------|------------|---------|
| **Total Tokens** | 82,631 | 23,826 | **71.2%** |
| **Response Time** | 1,047ms | 540ms | **48.4%** |
### Per-Server Results
| Server | Token Savings | Time Savings | Tools |
|--------|---------------|--------------|-------|
| **filesystem** | 72.4% | 87.3% | list_directory, search_files |
| **memory** | 82.7% | 31.8% | read_graph, search_nodes |
| **everything** | 65.2% | 22.1% | get-resource-links, research |
| **sequential-thinking** | 61.5% | 3.8% | sequentialthinking |
| **puppeteer** | 51.2% | -9.7% | puppeteer_navigate, evaluate |
*Note: Puppeteer showed time overhead due to heavy I/O operations, but still achieved 51% token savings.*
## Installation
### Basic Installation
```bash
pip install ultra-lean-mcp-proxy
```
### With Proxy Support (Recommended)
```bash
pip install 'ultra-lean-mcp-proxy[proxy]'
```
### Development Installation
```bash
pip install 'ultra-lean-mcp-proxy[dev]'
```
## Quick Start
### Wrap Any MCP Server
```bash
# Wrap the filesystem server
ultra-lean-mcp-proxy proxy -- npx -y @modelcontextprotocol/server-filesystem /tmp
# Wrap a Python MCP server
ultra-lean-mcp-proxy proxy -- python -m my_mcp_server
# Wrap with runtime stats
ultra-lean-mcp-proxy proxy --stats -- npx -y @modelcontextprotocol/server-memory
# Enable verbose logging
ultra-lean-mcp-proxy proxy -v -- npx -y @modelcontextprotocol/server-everything
```
### Claude Desktop Integration
Update your `claude_desktop_config.json`:
```json
{
"mcpServers": {
"filesystem-optimized": {
"command": "ultra-lean-mcp-proxy",
"args": [
"proxy",
"--stats",
"--",
"npx",
"-y",
"@modelcontextprotocol/server-filesystem",
"/Users/yourname/Documents"
]
}
}
}
```
Now when Claude uses the filesystem server, all communication is automatically optimized.
## Configuration
### Command-Line Flags
```bash
# All optimization vectors are ON by default.
# Use --disable-* flags to opt out.
ultra-lean-mcp-proxy proxy \
--disable-lazy-loading \
-- <upstream-command>
# Fine-tune optimization parameters
ultra-lean-mcp-proxy proxy \
--result-compression-mode aggressive \
--lazy-mode search_only \
--cache-ttl 3600 \
--delta-min-savings 0.15 \
-- <upstream-command>
# Dump effective configuration
ultra-lean-mcp-proxy proxy --dump-effective-config -- <upstream-command>
```
### Configuration File
Create `ultra-lean-mcp-proxy.config.json` or `.yaml`:
```json
{
"result_compression_enabled": true,
"result_compression_mode": "aggressive",
"delta_responses_enabled": true,
"lazy_loading_enabled": true,
"lazy_mode": "search_only",
"tools_hash_sync_enabled": true,
"caching_enabled": true,
"cache_ttl_seconds": 3600
}
```
Load with:
```bash
ultra-lean-mcp-proxy proxy --config ultra-lean-mcp-proxy.config.json -- <upstream-command>
```
### Environment Variables
Prefix any config option with `ULTRA_LEAN_MCP_PROXY_`:
```bash
export ULTRA_LEAN_MCP_PROXY_RESULT_COMPRESSION_ENABLED=true
export ULTRA_LEAN_MCP_PROXY_CACHE_TTL_SECONDS=3600
ultra-lean-mcp-proxy proxy -- <upstream-command>
```
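A minimal sketch of how such prefixed environment variables could be collected into a config dict (illustrative only; the proxy's real parser may coerce values differently):

```python
import os

PREFIX = "ULTRA_LEAN_MCP_PROXY_"

def config_from_env(environ=os.environ) -> dict:
    """Collect PREFIX-ed env vars into lowercase config keys (sketch)."""
    cfg = {}
    for key, raw in environ.items():
        if not key.startswith(PREFIX):
            continue
        name = key[len(PREFIX):].lower()
        if raw.lower() in ("true", "false"):   # booleans
            cfg[name] = raw.lower() == "true"
        elif raw.isdigit():                    # integers
            cfg[name] = int(raw)
        else:                                  # everything else as string
            cfg[name] = raw
    return cfg
```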
## Optimization Features
### 1. Tool Definition Compression
Compresses `tools/list` responses using LAP format:
**Before (JSON Schema):**
```json
{
"name": "search_files",
"description": "Search for files matching a pattern",
"inputSchema": {
"type": "object",
"properties": {
"pattern": {"type": "string", "description": "Glob pattern"},
"max_results": {"type": "number", "default": 100}
},
"required": ["pattern"]
}
}
```
**After (LAP):**
```
@tool search_files
@desc Search for files matching a pattern
@in pattern:string Glob pattern
@opt max_results:number=100
```
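As an illustration of the transform (not the actual LAP encoder), a minimal converter from the JSON Schema form to LAP-style lines might look like:

```python
def to_lap(tool: dict) -> str:
    """Render an MCP tool definition as compact LAP-style lines (sketch)."""
    schema = tool.get("inputSchema", {})
    props = schema.get("properties", {})
    required = set(schema.get("required", []))
    lines = [f"@tool {tool['name']}", f"@desc {tool['description']}"]
    for name, spec in props.items():
        typ = spec.get("type", "any")
        if name in required:
            desc = spec.get("description", "")
            lines.append(f"@in {name}:{typ} {desc}".rstrip())
        else:
            default = spec.get("default")
            suffix = f"={default}" if default is not None else ""
            lines.append(f"@opt {name}:{typ}{suffix}")
    return "\n".join(lines)
```

Applied to the `search_files` definition above, this sketch reproduces the four LAP lines shown.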
### 2. Tools Hash Sync
Efficient caching using conditional requests:
- Client: "Give me tools if hash != abc123"
- Server (unchanged): `304 Not Modified`
- Server (changed): `200 OK` with new tools
Hit ratio in benchmarks: **84.1%** (37 hits, 7 misses)
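The conditional-request idea can be sketched in a few lines (illustrative; the hash format and wire protocol here are assumptions, not the proxy's actual ones):

```python
import hashlib
import json

def tools_hash(tools: list[dict]) -> str:
    """Stable digest of a tool list, for conditional tools/list requests."""
    canonical = json.dumps(tools, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

def conditional_tools_list(tools: list[dict], client_hash: str):
    """Return (changed, payload): skip the payload when hashes match."""
    h = tools_hash(tools)
    if client_hash == h:
        return False, None                 # analogous to 304 Not Modified
    return True, {"hash": h, "tools": tools}
```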
### 3. Delta Responses
Send only changes between tool calls:
**First call:**
```json
{"status": "running", "progress": 0, "message": "Starting..."}
```
**Second call (delta):**
```json
{"progress": 50, "message": "Processing..."}
```
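A minimal sketch of the delta idea (illustrative, not the proxy's actual delta engine):

```python
def delta(prev: dict, curr: dict) -> dict:
    """Keys whose values changed (or are new) since the previous response."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

def apply_delta(prev: dict, d: dict) -> dict:
    """Reconstruct the full response from the previous one plus a delta."""
    merged = dict(prev)
    merged.update(d)
    return merged
```

On the two calls above, only `progress` and `message` change, so only those two keys travel over the wire.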
### 4. Lazy Loading
Load tools on-demand instead of all at once:
- **Off**: All tools sent upfront
- **Minimal**: Send 5 most-used tools initially
- **Search Only**: Only send search/discovery tools, load others when called
Best for servers with 20+ tools.
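The three modes could be sketched as a simple filter (illustrative; the real lazy loader tracks usage and loads tools on demand):

```python
def lazy_tools_list(tools: list[dict], mode: str = "search_only",
                    initial: int = 5) -> list[dict]:
    """Choose which tool definitions to send upfront (sketch of the modes)."""
    if mode == "off":
        return tools                       # everything upfront
    if mode == "minimal":
        return tools[:initial]             # assume pre-sorted by usage
    # search_only: only discovery tools; the rest load on first call
    return [t for t in tools if "search" in t["name"] or "list" in t["name"]]
```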
### 5. Result Compression
Compress tool call results:
- **Balanced**: Compress descriptions, preserve structure
- **Aggressive**: Maximum compression, lean LAP format
## CLI Reference
### Install / Uninstall
```bash
# Install: wrap all MCP servers with proxy
ultra-lean-mcp-proxy install [--dry-run] [--client NAME] [--skip NAME] [--offline] [--no-wrap-url] [--no-cloud] [--suffix NAME] [-v]
# `--skip` matches MCP server names inside config files
# Uninstall: restore original configs
ultra-lean-mcp-proxy uninstall [--dry-run] [--client NAME] [--runtime pip|npm] [--all] [-v]
# Check status
ultra-lean-mcp-proxy status
# Mirror cloud-scoped Claude URL connectors into local wrapped entries (npm CLI)
npx ultra-lean-mcp-proxy wrap-cloud [--dry-run] [--runtime npm|pip] [--suffix -ulmp] [-v]
```
### Watch Mode (Auto-Update)
```bash
# Watch config files, auto-wrap new servers
ultra-lean-mcp-proxy watch
# Watch but keep URL/SSE/HTTP entries unwrapped
ultra-lean-mcp-proxy watch --no-wrap-url
# Run as background daemon
ultra-lean-mcp-proxy watch --daemon
# Stop daemon
ultra-lean-mcp-proxy watch --stop
# Set cloud connector discovery interval (default: 60s)
ultra-lean-mcp-proxy watch --cloud-interval 30
# Customize suffix for cloud-mirrored entries
ultra-lean-mcp-proxy watch --suffix -proxy
```
Watch mode auto-discovers cloud-scoped Claude MCP connectors when the `claude` CLI is available on PATH, polling every `--cloud-interval` seconds.
### Proxy (Direct Usage)
```bash
ultra-lean-mcp-proxy proxy [--enable-<feature>|--disable-<feature>] [--cache-ttl SEC] [--lazy-mode MODE] -- <upstream-command> [args...]
```
For troubleshooting, you can enable per-server RPC tracing:
```bash
ultra-lean-mcp-proxy proxy --trace-rpc -- <upstream-command>
```
## Architecture
```
┌──────────┐ ┌────────────────────┐ ┌──────────┐
│ │ stdio │ ultra-lean-mcp │ stdio │ Upstream │
│ Client │◄─────────►│ proxy │◄─────────►│ MCP │
│ (Claude) │ │ │ │ Server │
│ │ LAP │ ┌──────────────┐ │ JSON │ │
└──────────┘ │ │ Compression │ │ └──────────┘
│ │ Delta Engine │ │
│ │ Cache Layer │ │
│ │ Lazy Loader │ │
│ └──────────────┘ │
└────────────────────┘
```
The proxy:
1. Sits between client and server as transparent stdio relay
2. Intercepts `tools/list` and `tools/call` JSON-RPC messages
3. Compresses outgoing responses using LAP format
4. Decompresses incoming requests back to JSON Schema
5. Maintains delta state, cache, and tool registry
## Use Cases
### Production MCP Servers
Wrap existing MCP servers to reduce LLM token costs and improve response times.
### High-Volume Tool Servers
Servers with 50+ tools benefit from lazy loading and tools hash sync.
### Low-Bandwidth Environments
Reduce network payload sizes by 50-70% with compression.
### Development & Testing
Run with `--stats` to understand token usage patterns and optimization effectiveness.
## Monitoring & Stats
Enable stats logging:
```bash
ultra-lean-mcp-proxy proxy --stats -- <upstream-command>
```
Output to stderr:
```
[2026-02-15 10:28:55] Token savings: 71.2% (82631 → 23826)
[2026-02-15 10:28:55] Time savings: 48.4% (1047ms → 540ms)
[2026-02-15 10:28:55] tools_hash hit ratio: 37:7 (84.1% hits)
[2026-02-15 10:28:55] Upstream traffic: 2858 req tokens, 22528 resp tokens
```
## Related Projects
- **[ultra-lean-mcp-core](https://github.com/lean-agent-protocol/ultra-lean-mcp-core)** - Zero-dependency core library for LAP compilation/decompilation
- **[ultra-lean-mcp](https://github.com/lean-agent-protocol/ultra-lean-mcp)** - MCP server + CLI for LAP workflows
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup and guidelines.
## License
MIT License - see [LICENSE](LICENSE) for details.
---
Part of the [Lean Agent Protocol](https://github.com/lean-agent-protocol) ecosystem.
| text/markdown | Lean Agent Protocol | null | null | null | null | compression, mcp, optimization, proxy | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest; extra == \"dev\"",
"tiktoken; extra == \"dev\"",
"mcp; extra == \"proxy\""
] | [] | [] | [] | [
"Homepage, https://github.com/lean-agent-protocol/ultra-lean-mcp-proxy",
"Repository, https://github.com/lean-agent-protocol/ultra-lean-mcp-proxy",
"Issues, https://github.com/lean-agent-protocol/ultra-lean-mcp-proxy/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:14:55.638804 | ultra_lean_mcp_proxy-0.3.2.tar.gz | 228,128 | e4/e2/ac6b3226794e4e406288c438def2b959a591c236b80f04423c790ff9779a/ultra_lean_mcp_proxy-0.3.2.tar.gz | source | sdist | null | false | 74a46e0d99bddc8d964abdaad0e79ab5 | bca9d15099d9e9d29c9638ccd1b4ea0ae97289d6af9582eed12f786711c84e09 | e4e2ac6b3226794e4e406288c438def2b959a591c236b80f04423c790ff9779a | MIT | [
"LICENSE",
"NOTICE"
] | 227 |
2.4 | codewords-client | 0.4.1 | Python client for CodeWords with auto-configured FastAPI integration. | # Codewords Client
This is a client for the Codewords API.
## Installation
```bash
pip install codewords-client
```
| text/markdown | null | Osman Ramadan <osman@agemo.ai> | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.25.0",
"fastapi>=0.100.0",
"structlog>=23.0.0",
"pydantic>=2.0.0",
"uvicorn[standard]>=0.20.0",
"redis>=5.0.0",
"openai>=1.0.0; extra == \"openai\"",
"anthropic>=0.25.0; extra == \"anthropic\"",
"firecrawl>=1.0.0; extra == \"firecrawl\"",
"perplexityai>=1.0.0; extra == \"perplexityai\"",... | [] | [] | [] | [] | uv/0.8.17 | 2026-02-18T22:14:11.928054 | codewords_client-0.4.1.tar.gz | 12,900 | 3b/99/e3f9aabd712284a0d142f4c63a56f3c4255189c496690e36100ff218d768/codewords_client-0.4.1.tar.gz | source | sdist | null | false | 54e401f4b1dbf6692d884238ae748e33 | f33b801e72ba71da76b7aeba774640daf6d413e29bf4d70d47035d78df50274b | 3b99e3f9aabd712284a0d142f4c63a56f3c4255189c496690e36100ff218d768 | MIT | [] | 258 |
2.4 | riotskillissue | 0.2.1 | Production-ready, auto-updating Riot API wrapper. | # RiotSkillIssue
<div align="center">
[](https://badge.fury.io/py/riotskillissue)
[](https://pypi.org/project/riotskillissue/)
[](LICENSE)
[](https://github.com/Demoen/riotskillissue/actions/workflows/test.yml)
**Production-ready, auto-updating, and fully typed Python wrapper for the Riot Games API.**
[Documentation](https://demoen.github.io/riotskillissue/) · [Examples](https://demoen.github.io/riotskillissue/examples/) · [API Reference](https://demoen.github.io/riotskillissue/api-reference/)
</div>
---
## Features
| Feature | Description |
|---------|-------------|
| **Type-Safe** | 100% Pydantic models for all requests and responses |
| **Auto-Updated** | Generated daily from the Official OpenAPI Spec |
| **Sync & Async** | First-class async client and a synchronous `SyncRiotClient` for scripts & notebooks |
| **Resilient** | Automatic `Retry-After` handling, exponential backoff, and a rich error hierarchy |
| **Distributed** | Pluggable Redis support for shared rate limiting and caching |
| **Multi-Game** | Full support for LoL, TFT, LoR, and VALORANT APIs |
## Installation
Requires Python 3.8+.
```bash
pip install riotskillissue
```
## Quick Start (Async)
```python
import asyncio
from riotskillissue import RiotClient, Platform, Region
async def main():
async with RiotClient() as client:
account = await client.account.get_by_riot_id(
region=Platform.EUROPE,
gameName="Agurin",
tagLine="EUW"
)
print(f"Found: {account.gameName}#{account.tagLine}")
summoner = await client.summoner.get_by_puuid(
region=Region.EUW1,
encryptedPUUID=account.puuid
)
print(f"Level: {summoner.summonerLevel}")
if __name__ == "__main__":
asyncio.run(main())
```
## Quick Start (Sync)
```python
from riotskillissue import SyncRiotClient, Platform
with SyncRiotClient() as client:
account = client.account.get_by_riot_id(
region=Platform.EUROPE,
gameName="Agurin",
tagLine="EUW"
)
print(f"Found: {account.gameName}#{account.tagLine}")
```
Set your API key via environment variable:
```bash
export RIOT_API_KEY="RGAPI-your-key-here"
```
Or pass it directly:
```python
async with RiotClient(api_key="RGAPI-...") as client:
...
```
## Configuration
```python
from riotskillissue import RiotClient, RiotClientConfig
from riotskillissue.core.cache import RedisCache, MemoryCache
config = RiotClientConfig(
api_key="RGAPI-...",
max_retries=5,
cache_ttl=120,
redis_url="redis://localhost:6379/0", # Distributed rate limiting
proxy="http://127.0.0.1:8080", # Optional HTTP proxy
log_level="DEBUG", # DEBUG, INFO, WARNING
)
cache = MemoryCache(max_size=2048) # LRU in-memory cache
# or: cache = RedisCache("redis://localhost:6379/1")
async with RiotClient(config=config, cache=cache) as client:
...
```
## Error Handling
```python
from riotskillissue import NotFoundError, RateLimitError, RiotAPIError
try:
account = await client.account.get_by_riot_id(...)
except NotFoundError:
print("Player not found")
except RiotAPIError as e:
print(f"[{e.status}] {e.message}")
```
## Documentation
Full documentation is available at [demoen.github.io/riotskillissue](https://demoen.github.io/riotskillissue/).
- [Getting Started](https://demoen.github.io/riotskillissue/getting-started/)
- [Configuration](https://demoen.github.io/riotskillissue/configuration/)
- [Examples](https://demoen.github.io/riotskillissue/examples/)
- [API Reference](https://demoen.github.io/riotskillissue/api-reference/)
- [CLI](https://demoen.github.io/riotskillissue/cli/)
## Legal
RiotSkillIssue is not endorsed by Riot Games and does not reflect the views or opinions of Riot Games or anyone officially involved in producing or managing Riot Games properties. Riot Games and all associated properties are trademarks or registered trademarks of Riot Games, Inc.
## License
MIT. See the [LICENSE](LICENSE) file for details.
| text/markdown | Demoen | null | null | null | null | api, league-of-legends, riot, tft, valorant, wrapper | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python ... | [] | null | null | >=3.8 | [] | [] | [] | [
"frozendict>=2.4.0",
"httpx>=0.27.0",
"jinja2>=3.1.0",
"msgspec>=0.18.0",
"pydantic>=2.7.0",
"redis>=5.0.0",
"rich>=13.7.0",
"tenacity>=8.2.0",
"textual>=0.85.0",
"typer>=0.12.0",
"deepdiff>=6.0.0; extra == \"dev\"",
"mypy>=1.10.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"... | [] | [] | [] | [
"Homepage, https://github.com/Demoen/riotskillissue",
"Repository, https://github.com/Demoen/riotskillissue",
"Issues, https://github.com/Demoen/riotskillissue/issues",
"Changelog, https://github.com/Demoen/riotskillissue/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:14:11.193780 | riotskillissue-0.2.1.tar.gz | 1,491,306 | 88/54/9fedabfb30ab773fa0f873e17f58e0143cb5379318e6200663dbc1009651/riotskillissue-0.2.1.tar.gz | source | sdist | null | false | 0d3c636b3e1ea88341f2a8d4edc9202b | 629f8d21924504c519fb54571036a0da306f26ee363153060f082f1844b49f01 | 88549fedabfb30ab773fa0f873e17f58e0143cb5379318e6200663dbc1009651 | MIT | [
"LICENSE"
] | 258 |
2.4 | mskt | 0.1.20 | vtk helper tools/functions for musculoskeletal analyses | # pyMSKT (Musculoskeletal Toolkit)
[Documentation](https://anthonygattiphd.com/pymskt/)

pyMSKT is an open-source library for performing quantitative analyses of the musculoskeletal system. It enables creation of surface meshes of musculoskeletal anatomy and then processes these meshes to get quantitative outcomes and visualizations, such as cartilage thickness.
<p align="center">
<img src="./images/whole_knee_1.png" width="300">
</p>
# Installation
### Pip install from PyPI
```bash
# create environment
conda create -n mskt
conda activate mskt
pip install mskt
```
### Conda / pip install from source
```bash
# clone repository
git clone https://github.com/gattia/pymskt.git
# move into directory
cd pymskt
# CREATE ENVIRONMENT:
conda create -n mskt
conda activate mskt
# INSTALLING DEPENDENCIES
# Recommend pip because cycpd and pyfocusr are available on PyPI (but not conda)
pip install -r requirements.txt
# INSTALL PYMSKT
pip install .
```
### Conda only install (not-recommended)
1. Clone this repository & install dependencies: <br>
```bash
# clone repository
git clone https://github.com/gattia/pymskt.git
# move into directory
cd pymskt
# CREATE ENVIRONMENT:
conda create -n mskt
conda activate mskt
# Install all available requirements
conda install --file requirements-conda.txt # pip (below) can alternatively be used to install dependencies in conda env
# Return to root dir
cd ..
```
2. Clone cycpd & install: (ONLY NEEDED FOR CONDA INSTALL)<br>
```bash
git clone https://github.com/gattia/cycpd.git
cd cycpd
pip install .
cd ..
```
3. Clone pyfocusr & install: (ONLY NEEDED FOR CONDA INSTALL)<br>
```bash
git clone https://github.com/gattia/pyfocusr.git
cd pyfocusr
pip install .
cd ..
```
4. Install pymskt: (ONLY NEEDED FOR CONDA INSTALL)<br>
```bash
cd pymskt
pip install .
```
### Optional Dependencies
#### itkwidgets (for visualization)
If you are using JupyterLab instead of Jupyter Notebook, you also need to install an extension:
```
jupyter labextension install @jupyter-widgets/jupyterlab-manager jupyter-matplotlib jupyterlab-datawidgets itkwidgets
```
#### vtkbool (for boolean operations)
For fast and robust mesh boolean operations (union, intersection, difference), install vtkbool:
```bash
conda install -c conda-forge vtkbool
```
**Note:** vtkbool is only available via conda, not pip. It provides fast boolean operations (~30s for complex meshes) and avoids the memory errors that occur with VTK's native boolean operations.
Usage example:
```python
from pymskt.mesh import Mesh
femur = Mesh('femur.stl')
patella = Mesh('patella.stl')
# Subtract patella from femur
result = femur.boolean_difference(patella)
# Or use standalone function
from pymskt.mesh.meshTools import boolean_difference
result = boolean_difference(femur, patella)
```
# Examples
There are jupyter notebook examples in the directory `/examples`
pyMSKT allows you to easily create bone meshes and attribute cartilage to the bone for calculating quantitative outcomes.
```python
import os
from pymskt.mesh import BoneMesh

femur = BoneMesh(path_seg_image=location_seg, # path to the segmentation image being used.
label_idx=5, # what is the label of this bone.
list_cartilage_labels=[1]) # labels for cartilage associated with this bone.
# Create the bone mesh
femur.create_mesh()
# Calculate cartilage thickness for the cartilage meshes associated with the bone
femur.calc_cartilage_thickness()
femur.save_mesh(os.path.expanduser('~/Downloads/femur.vtk'))
```
The saved file can be viewed in many mesh viewers, such as [3D Slicer](https://www.slicer.org/) or [Paraview](https://www.paraview.org/). Or, better yet, it can be viewed directly in your Jupyter notebook using [itkwidgets](https://pypi.org/project/itkwidgets/):
```python
from itkwidgets import view
view(geometries=[femur])
```

After creating the above mesh, creating cartilage subregions & an anatomical coordinate
system is as simple as:
```python
import SimpleITK as sitk
import pymskt as mskt

# Load in full seg image
seg_image = sitk.ReadImage(location_seg)
# break into sub regions. (weightbearing / trochlea / posterior condyles)
seg_image = mskt.image.cartilage_processing.get_knee_segmentation_with_femur_subregions(seg_image)
# assign femoral condyle cartilage sub regions to femur
femur.seg_image = seg_image
femur.list_cartilage_labels=[11, 12, 13, 14, 15]
femur.assign_cartilage_regions()
# use cartilage regions to fit cylinder to condyles and create anatomic coordinate system
femur_acs = FemurACS(femur, cart_label=(11, 12, 13, 14, 15))
femur_acs.fit()
```
The resulting anatomical coordinate system can be used to create arrows & visualize the result:
```python
AP_arrow = get_arrow(femur_acs.ap_axis, origin=femur_acs.origin)
IS_arrow = get_arrow(femur_acs.is_axis, origin=femur_acs.origin)
ML_arrow = get_arrow(femur_acs.ml_axis, origin=femur_acs.origin)
view(geometries=[femur, AP_arrow, IS_arrow, ML_arrow])
```
|*Anatomical Coordinate System - Cartilage Thickness* | *Anatomical Coordinate System - Cartilage Subregions* |
|:---: |:---: |
| |  |
An example of how the cartilage thickness values are computed:

### Meniscal Analysis
Compute meniscal extrusion and coverage metrics:
```python
import pymskt as mskt
# Create tibia with cartilage labels
tibia = mskt.mesh.BoneMesh(
path_seg_image='tibia.nrrd',
label_idx=6,
dict_cartilage_labels={'medial': 2, 'lateral': 3}
)
tibia.create_mesh()
tibia.calc_cartilage_thickness()
tibia.assign_cartilage_regions()
# Create meniscus meshes
med_meniscus = mskt.mesh.Mesh(path_seg_image='tibia.nrrd', label_idx=10)
med_meniscus.create_mesh()
lat_meniscus = mskt.mesh.Mesh(path_seg_image='tibia.nrrd', label_idx=9)
lat_meniscus.create_mesh()
# Set menisci (labels auto-inferred from dict_cartilage_labels)
tibia.set_menisci(medial_meniscus=med_meniscus, lateral_meniscus=lat_meniscus)
# Access metrics (auto-computes on first access)
print(f"Medial extrusion: {tibia.med_men_extrusion:.2f} mm")
print(f"Medial coverage: {tibia.med_men_coverage:.1f}%")
print(f"Lateral extrusion: {tibia.lat_men_extrusion:.2f} mm")
print(f"Lateral coverage: {tibia.lat_men_coverage:.1f}%")
```
# Development / Contributing
General information for contributing can be found [here](https://github.com/gattia/pymskt/blob/main/CONTRIBUTING.md)
## Tests
- Running tests requires pytest (`conda install pytest` or `pip install pytest`)
- Run tests using `pytest` or `make test` in the home directory.
## Coverage
- Coverage results/info requires `coverage` (`conda install coverage` or `pip install coverage`)
- Can get coverage statistics by running:
- `coverage run -m pytest`
or if using make:
- `make coverage`
## Notes for development
- When updating Cython code, it is not rebuilt when re-installing with the basic `python setup.py install`. Force a rebuild with:
- `python setup.py build_ext -i --force`
### Tests
If you add a new function, or functionality to `pymskt` please add appropriate tests as well.
The tests are located in `/testing` and are organized as:
`/testing/[pymskt_submodule]/[python_filename_being_tested]/[name_of_function_being_tested]_test.py`
The tests use `pytest`. If you are not familiar with `pytest` a brief example is provided [here](https://docs.pytest.org/en/6.2.x/getting-started.html).
Currently, 37 tests are being skipped for one of a few reasons: (1) they aren't implemented yet and are placeholders, (2) they rely on a function that has small machine-to-machine differences, so they don't pass, or (3) a breaking change occurred since the result meshes were last saved. If you want to help but don't know where to start, filling in / fixing these tests would be a great place to begin, and would be greatly appreciated!
## Code of Conduct
We have adopted the code of conduct defined by the [Contributor Covenant](https://www.contributor-covenant.org) to clarify expected behavior in our community. For more information see the [Code of Conduct](https://github.com/gattia/pymskt/blob/main/CODE_OF_CONDUCT.md).
| text/markdown | null | Anthony Gatti <anthony.a.gatti@gmail.com> | null | null | MIT | python | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: MIT License",
"Topic :: Scientific/Engineering",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"point-cloud-utils",
"numpy",
"Cython>=0.29",
"pyacvd",
"pyvista",
"scipy",
"SimpleITK",
"vtk",
"param",
"jupyter",
"notebook",
"traittypes",
"pymeshfix",
"pyfocusr",
"itkwidgets",
"cycpd"
] | [] | [] | [] | [
"Homepage, https://github.com/gattia/pymskt/",
"Documentation, https://anthonygattiphd.com/pymskt/"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T22:13:56.474723 | mskt-0.1.20.tar.gz | 92,468,737 | 2f/fe/f5e756617af6c4f2c8381638772d19c1fa6d037070dee5153c05f045d31f/mskt-0.1.20.tar.gz | source | sdist | null | false | fd6575e7126e9489398e901f7289a5f5 | 159c0f877ca9aea94c49887c09d7166889ba4a871ffdb71fce6d73313459794e | 2ffef5e756617af6c4f2c8381638772d19c1fa6d037070dee5153c05f045d31f | null | [] | 180 |
2.3 | dycw-pre-commit-hooks | 0.15.22 | Pre-commit hooks | # `pre-commit-hooks`
Pre-commit hooks
## Usage
In `.pre-commit-config.yaml`, add:
```yaml
repos:
- repo: https://github.com/dycw/pre-commit-hooks
rev: master
hooks:
- id: add-hooks
```
and then run `prek auto-update`.
| text/markdown | Derek Wan | Derek Wan <d.wan@icloud.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.3.1",
"dycw-utilities>=0.191.10",
"libcst>=1.8.6",
"orjson>=3.11.7",
"packaging>=26.0",
"pydantic>=2.12.5",
"pyyaml>=6.0.3",
"tomlkit>=0.14.0",
"xdg-base-dirs>=6.0.2",
"click==8.3.1; extra == \"cli\"",
"dycw-utilities==0.191.10; extra == \"cli\"",
"libcst==1.8.6; extra == \"cli\"",
... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T22:11:04.471993 | dycw_pre_commit_hooks-0.15.22-py3-none-any.whl | 48,105 | a4/56/09114387107e550e50bb533df33c7eb8376c33a1911dfb93694f562d7de1/dycw_pre_commit_hooks-0.15.22-py3-none-any.whl | py3 | bdist_wheel | null | false | 0471344a28c2646dc7dfa105d46e6683 | 031cd2f8baa1770aa7558a50b5ff5fa44d1597f9b0ecc9c70df578aad32cdb98 | a45609114387107e550e50bb533df33c7eb8376c33a1911dfb93694f562d7de1 | null | [] | 220 |
2.4 | slack-sdk | 3.40.1 | The Slack API Platform SDK for Python | <h1 align="center">Python Slack SDK</h1>
<p align="center">
<a href="https://github.com/slackapi/python-slack-sdk/actions/workflows/ci-build.yml">
<img alt="Tests" src="https://img.shields.io/github/actions/workflow/status/slackapi/python-slack-sdk/ci-build.yml"></a>
<a href="https://codecov.io/gh/slackapi/python-slack-sdk">
<img alt="Codecov" src="https://img.shields.io/codecov/c/gh/slackapi/python-slack-sdk"></a>
<a href="https://pepy.tech/project/slack-sdk">
<img alt="Pepy Total Downloads" src="https://img.shields.io/pepy/dt/slack-sdk"></a>
<br>
<a href="https://pypi.org/project/slack-sdk/">
<img alt="PyPI - Version" src="https://img.shields.io/pypi/v/slack-sdk"></a>
<a href="https://pypi.org/project/slack-sdk/">
<img alt="Python Versions" src="https://img.shields.io/pypi/pyversions/slack-sdk.svg"></a>
<a href="https://docs.slack.dev/tools/python-slack-sdk/">
<img alt="Documentation" src="https://img.shields.io/badge/dev-docs-yellow"></a>
</p>
The Slack platform offers several APIs to build apps. Each Slack API delivers part of the capabilities from the platform, so that you can pick just those that fit for your needs. This SDK offers a corresponding package for each of Slack’s APIs. They are small and powerful when used independently, and work seamlessly when used together, too.
**Comprehensive documentation on using the Python Slack SDK can be found at [https://docs.slack.dev/tools/python-slack-sdk/](https://docs.slack.dev/tools/python-slack-sdk/)**
---
Whether you're building a custom app for your team, or integrating a third party service into your Slack workflows, Slack Developer Kit for Python allows you to leverage the flexibility of Python to get your project up and running as quickly as possible.
The **Python Slack SDK** allows interaction with:
- `slack_sdk.web`: for calling the [Web API methods][api-methods]
- `slack_sdk.webhook`: for utilizing the [Incoming Webhooks](https://docs.slack.dev/messaging/sending-messages-using-incoming-webhooks/) and [`response_url`s in payloads](https://docs.slack.dev/interactivity/handling-user-interaction/#message_responses)
- `slack_sdk.signature`: for [verifying incoming requests from the Slack API server](https://docs.slack.dev/authentication/verifying-requests-from-slack/)
- `slack_sdk.socket_mode`: for receiving and sending messages over [Socket Mode](https://docs.slack.dev/apis/events-api/using-socket-mode/) connections
- `slack_sdk.audit_logs`: for utilizing [Audit Logs APIs](https://docs.slack.dev/admins/audit-logs-api/)
- `slack_sdk.scim`: for utilizing [SCIM APIs](https://docs.slack.dev/admins/scim-api/)
- `slack_sdk.oauth`: for implementing the [Slack OAuth flow](https://docs.slack.dev/authentication/installing-with-oauth/)
- `slack_sdk.models`: for constructing [Block Kit](https://docs.slack.dev/block-kit/) UI components using easy-to-use builders
- `slack_sdk.rtm`: for utilizing the [RTM API][rtm-docs]
If you want to use our [Events API][events-docs] and Interactivity features, please check the [Bolt for Python][bolt-python] library. Details on the Tokens and Authentication can be found in our [Auth Guide](https://docs.slack.dev/tools/python-slack-sdk/installation/).
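As an example of what these modules cover, request verification in `slack_sdk.signature` boils down to Slack's v0 HMAC signing scheme, which can be sketched with the standard library alone (a simplified illustration; the secret and values below are made up):

```python
import hashlib
import hmac

def is_valid_slack_signature(signing_secret: str, timestamp: str, body: str, signature: str) -> bool:
    # Slack signs the string "v0:{timestamp}:{raw request body}" with HMAC-SHA256
    basestring = f"v0:{timestamp}:{body}"
    expected = "v0=" + hmac.new(
        signing_secret.encode("utf-8"),
        basestring.encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()
    # constant-time comparison guards against timing attacks
    return hmac.compare_digest(expected, signature)
```

In production, prefer `SignatureVerifier` from `slack_sdk.signature`, which wraps this check and also rejects stale timestamps.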
## slackclient is in maintenance mode
Are you looking for [slackclient](https://pypi.org/project/slackclient/)? The slackclient project is in maintenance mode now and this [`slack_sdk`](https://pypi.org/project/slack-sdk/) is the successor. If you have time to make a migration to slack_sdk v3, please follow [our migration guide](https://docs.slack.dev/tools/python-slack-sdk/v3-migration/) to ensure your app continues working after updating.
## Table of contents
* [Requirements](#requirements)
* [Installation](#installation)
* [Getting started tutorial](#getting-started-tutorial)
* [Basic Usage of the Web Client](#basic-usage-of-the-web-client)
* [Sending a message to Slack](#sending-a-message-to-slack)
* [Uploading files to Slack](#uploading-files-to-slack)
* [Async usage](#async-usage)
* [AsyncWebClient in a script](#asyncwebclient-in-a-script)
* [AsyncWebClient in a framework](#asyncwebclient-in-a-framework)
* [Advanced Options](#advanced-options)
* [SSL](#ssl)
* [Proxy](#proxy)
* [DNS performance](#dns-performance)
* [Example](#example)
* [Migrating from v1](#migrating-from-v1)
* [Support](#support)
* [Development](#development)
### Requirements
---
This library requires Python 3.7 and above. If you're unsure how to check what version of Python you're on, you can check it using the following:
> **Note:** You may need to use `python3` before your commands to ensure you use the correct Python path. e.g. `python3 --version`
```bash
python --version
-- or --
python3 --version
```
### Installation
We recommend using [PyPI][pypi] to install the Slack Developer Kit for Python.
```bash
$ pip install slack_sdk
```
### Getting started tutorial
---
We've created this [tutorial](https://github.com/slackapi/python-slack-sdk/tree/main/tutorial) to help you build a basic Slack app in less than 10 minutes. It requires some general programming knowledge and Python basics, and focuses on interacting with the Slack Web API and RTM API. Use it to get an idea of how to use this SDK.
**[Read the tutorial to get started!](https://github.com/slackapi/python-slack-sdk/tree/main/tutorial)**
### Basic Usage of the Web Client
---
Slack provides a Web API that gives you the ability to build applications that interact with Slack in a variety of ways. This Development Kit is a module-based wrapper that makes interaction with that API easier. We have a basic example here with some of the more common uses, but a full list of the available methods is available [here][api-methods]. More detailed examples can be found in [our guide](https://docs.slack.dev/tools/python-slack-sdk/web/).
#### Sending a message to Slack
One of the most common use-cases is sending a message to Slack. If you want to send a message as your app, or as a user, this method can do both. In our examples, we specify the channel name, however it is recommended to use the `channel_id` where possible. Also, if your app's bot user is not in a channel yet, invite the bot user before running the code snippet (or add `chat:write.public` to Bot Token Scopes for posting in any public channels).
```python
import os
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError
client = WebClient(token=os.environ['SLACK_BOT_TOKEN'])
try:
response = client.chat_postMessage(channel='#random', text="Hello world!")
assert response["message"]["text"] == "Hello world!"
except SlackApiError as e:
# You will get a SlackApiError if "ok" is False
assert e.response["ok"] is False
assert e.response["error"] # str like 'invalid_auth', 'channel_not_found'
print(f"Got an error: {e.response['error']}")
# Also receive a corresponding status_code
assert isinstance(e.response.status_code, int)
print(f"Received a response status_code: {e.response.status_code}")
```
Here we also ensure that the response back from Slack is a successful one and that the message is the one we sent by using the `assert` statement.
#### Uploading files to Slack
We've made the process for uploading files to Slack much easier and more straightforward. You can now just include a path to the file directly in the API call and upload it that way.
```python
import os
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError
client = WebClient(token=os.environ['SLACK_BOT_TOKEN'])
try:
filepath="./tmp.txt"
response = client.files_upload_v2(channel='C0123456789', file=filepath)
assert response["file"] # the uploaded file
except SlackApiError as e:
# You will get a SlackApiError if "ok" is False
assert e.response["ok"] is False
assert e.response["error"] # str like 'invalid_auth', 'channel_not_found'
print(f"Got an error: {e.response['error']}")
```
More details on the `files_upload_v2` method can be found [here][files_upload_v2].
### Async usage
`AsyncWebClient` in this SDK requires [AIOHttp][aiohttp] under the hood for asynchronous requests.
#### AsyncWebClient in a script
```python
import asyncio
import os
from slack_sdk.web.async_client import AsyncWebClient
from slack_sdk.errors import SlackApiError
client = AsyncWebClient(token=os.environ['SLACK_BOT_TOKEN'])
async def post_message():
try:
response = await client.chat_postMessage(channel='#random', text="Hello world!")
assert response["message"]["text"] == "Hello world!"
except SlackApiError as e:
assert e.response["ok"] is False
assert e.response["error"] # str like 'invalid_auth', 'channel_not_found'
print(f"Got an error: {e.response['error']}")
asyncio.run(post_message())
```
#### AsyncWebClient in a framework
If you are using a framework that invokes the asyncio event loop, such as Sanic or a Jupyter notebook:
```python
import os
from slack_sdk.web.async_client import AsyncWebClient
from slack_sdk.errors import SlackApiError
client = AsyncWebClient(token=os.environ['SLACK_BOT_TOKEN'])
# Define this as an async function
async def send_to_slack(channel, text):
try:
# Don't forget to have await as the client returns asyncio.Future
response = await client.chat_postMessage(channel=channel, text=text)
assert response["message"]["text"] == text
except SlackApiError as e:
assert e.response["ok"] is False
assert e.response["error"] # str like 'invalid_auth', 'channel_not_found'
raise e
from aiohttp import web
async def handle_requests(request: web.Request) -> web.Response:
text = 'Hello World!'
if 'text' in request.query:
text = "\t".join(request.query.getall("text"))
try:
await send_to_slack(channel="#random", text=text)
return web.json_response(data={'message': 'Done!'})
except SlackApiError as e:
return web.json_response(data={'message': f"Failed due to {e.response['error']}"})
if __name__ == "__main__":
app = web.Application()
app.add_routes([web.get("/", handle_requests)])
# e.g., http://localhost:3000/?text=foo&text=bar
web.run_app(app, host="0.0.0.0", port=3000)
```
### Advanced Options
#### SSL
You can provide a custom SSL context or disable verification by passing the `ssl` option, supported by both the RTM and the Web client.
For async requests, see the [AIOHttp SSL documentation](https://docs.aiohttp.org/en/stable/client_advanced.html#ssl-control-for-tcp-sockets).
For sync requests, see the [urllib SSL documentation](https://docs.python.org/3/library/urllib.request.html#urllib.request.urlopen).
#### Proxy
To use a proxy, pass the `proxy` option, supported by both the RTM and the Web client.
For async requests, see [AIOHttp Proxy documentation](https://docs.aiohttp.org/en/stable/client_advanced.html#proxy-support).
For sync requests, setting either `HTTPS_PROXY` env variable or the `proxy` option works.
#### DNS performance
Using the async client and looking for a performance boost? Installing the optional dependencies (aiodns) may help speed up DNS resolving by the client. We've included it as an extra called "optional":
```bash
$ pip install slack_sdk[optional]
```
#### Example
```python
import os
from slack_sdk import WebClient
from ssl import SSLContext
sslcert = SSLContext()
# pip3 install proxy.py
# proxy --port 9000 --log-level d
proxyinfo = "http://localhost:9000"
client = WebClient(
token=os.environ['SLACK_BOT_TOKEN'],
ssl=sslcert,
proxy=proxyinfo
)
response = client.chat_postMessage(channel="#random", text="Hello World!")
print(response)
```
### Migrating from v2
If you're migrating from slackclient v2.x to slack_sdk v3.x, please follow our migration guide to ensure your app continues working after updating.
**[Check out the Migration Guide here!](https://docs.slack.dev/tools/python-slack-sdk/v3-migration/)**
### Migrating from v1
If you're migrating from v1.x of slackclient to v2.x, please follow our migration guide to ensure your app continues working after updating.
**[Check out the Migration Guide here!](https://github.com/slackapi/python-slack-sdk/wiki/Migrating-to-2.x)**
### Support
---
If you get stuck, we’re here to help. The following are the best ways to get assistance working through your issue:
Use our [GitHub Issue Tracker][gh-issues] for reporting bugs or requesting features.
Visit the [Slack Community][slack-community] for getting help using Slack Developer Kit for Python or just generally bond with your fellow Slack developers.
### Contributing
We welcome contributions from everyone! Please check out our
[Contributor's Guide](.github/contributing.md) for how to contribute in a
helpful and collaborative way.
<!-- Markdown links -->
[slackclientv1]: https://github.com/slackapi/python-slackclient/tree/v1
[api-methods]: https://docs.slack.dev/reference/methods
[rtm-docs]: https://docs.slack.dev/legacy/legacy-rtm-api/
[events-docs]: https://docs.slack.dev/apis/events-api/
[bolt-python]: https://github.com/slackapi/bolt-python
[pypi]: https://pypi.org/
[gh-issues]: https://github.com/slackapi/python-slack-sdk/issues
[slack-community]: https://slackcommunity.com/
[files_upload_v2]: https://github.com/slackapi/python-slack-sdk/releases/tag/v3.19.0
[aiohttp]: https://aiohttp.readthedocs.io/
| text/markdown | Slack Technologies, LLC | opensource@slack.com | null | null | MIT | slack, slack-api, web-api, slack-rtm, websocket, chat, chatbot, chatops | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Communications :: Chat",
"Topic :: System :: Networking",
"Topic :: Office/Business",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
... | [] | https://github.com/slackapi/python-slack-sdk | null | >=3.7 | [] | [] | [] | [
"aiodns>1.0; extra == \"optional\"",
"aiohttp<4,>=3.7.3; extra == \"optional\"",
"boto3<=2; extra == \"optional\"",
"SQLAlchemy<3,>=1.4; extra == \"optional\"",
"websockets<16,>=9.1; extra == \"optional\"",
"websocket-client<2,>=1; extra == \"optional\""
] | [] | [] | [] | [
"Documentation, https://docs.slack.dev/tools/python-slack-sdk/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:11:01.819608 | slack_sdk-3.40.1.tar.gz | 250,379 | 3a/18/784859b33a3f9c8cdaa1eda4115eb9fe72a0a37304718887d12991eeb2fd/slack_sdk-3.40.1.tar.gz | source | sdist | null | false | d4294d5823310e87656a142cdc609463 | a215333bc251bc90abf5f5110899497bf61a3b5184b6d9ee35d73ebf09ec3fd0 | 3a18784859b33a3f9c8cdaa1eda4115eb9fe72a0a37304718887d12991eeb2fd | null | [
"LICENSE"
] | 2,197,444 |
2.4 | cohort-ai | 0.0.1 | Multi-agent orchestration framework for structured AI deliberation | # cohort-ai
Multi-agent orchestration framework. See the main `cohort` package.
| text/markdown | Ryan Wheeler | null | null | null | null | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3"
] | [] | https://github.com/rywheeler/cohort | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.2 | 2026-02-18T22:09:54.211875 | cohort_ai-0.0.1.tar.gz | 1,220 | 4c/c6/b4e62fe27d10fa7a2c2138745a8660638082c958d44ac05234aa070e134c/cohort_ai-0.0.1.tar.gz | source | sdist | null | false | f0f88faf93626c87b5461dc179d130a4 | f34c1118b46caaefc1fbb0c35923dd3adb885b82600f4d04fb88e48ff4d1ba4d | 4cc6b4e62fe27d10fa7a2c2138745a8660638082c958d44ac05234aa070e134c | null | [] | 248 |
2.4 | cohort | 0.0.1 | Multi-agent orchestration framework for structured AI deliberation | # cohort
Multi-agent orchestration framework. Full release coming soon.
| text/markdown | Ryan Wheeler | null | null | null | null | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Dev... | [] | https://github.com/rywheeler/cohort | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.2 | 2026-02-18T22:09:45.957709 | cohort-0.0.1.tar.gz | 1,276 | 39/c0/1049333c47d96b7a73b12181b617c843176854fec2ee9b9d2d8221323669/cohort-0.0.1.tar.gz | source | sdist | null | false | 6a521d45dd4092b3b6e1197cf2ec98f2 | 1ae87025d84f661205d1c2a5c8638a719d6696667ceacdccf7b51e8b7d7bdab5 | 39c01049333c47d96b7a73b12181b617c843176854fec2ee9b9d2d8221323669 | null | [] | 251 |
2.4 | codelogician | 2.0.2 | CodeLogician applies neurosymbolic AI to translate source code into precise mathematical logic, striving to create a formal model of the program's behavior that's functionally equivalent to the original source code. | # CodeLogician
*CodeLogician* is the neurosymbolic agentic governance framework for AI-powered coding.
*It helps your coding agent think logically about the code it's producing and test cases it's generating.*
The fundamental flaw of all LLM-powered assistants is that their reasoning is statistical,
while you need rigorous logic-based automated reasoning.
- Generated code is based on explainable logic, not pure statistics
- Generated test cases come with quantitative coverage metrics
- Generated code is consistent with the best security practices leveraging formal verification
To run *CodeLogician*, please obtain an *Imandra Universe* API key (there's a free starting plan)
at *[Imandra Universe](https://universe.imandra.ai)* and make sure it's available in your environment as `IMANDRA_UNI_KEY`.
*Three typical workflows*:
1. **DIY mode** - this is where your agent (e.g. Grok) uses the CLI to:
- Learn how to use IML/ImandraX via the **`doc`** command (e.g. **`codelogician doc --help`**)
- Synthesize IML code and use the **`eval`** command to evaluate it
- If there are errors, use the **`codelogician doc view errors`** command to study how to correct them and re-evaluate the code
2. **Agent/multi-agent mode** - CodeLogician IML Agent is a Langgraph-based agent for automatically formalizing source code into Imandra Modeling Language (IML).
- With `agent` command you can formalize a single source code file (e.g. `codelogician agent PATH_TO_FILE`)
- With the `multiagent` command you can formalize a whole directory (e.g. `codelogician multiagent PATH_TO_DIR`)
3. **Server**
- This is a "live" and interactive version of the `multiagent` command
- It monitors the filesystem and fires off formalization tasks on source code updates as necessary
- You can start the server and connect to it with the TUI (we recommend separate terminal screens)
Learn more at *[CodeLogician](https://www.codelogician.dev)!*
To get started,
```shell
codelogician --help
``` | text/markdown | null | hongyu <hongyu@imandra.ai>, samer <samer@imandra.ai>, denis <denis@imandra.ai> | null | null | null | null | [] | [] | null | null | <3.14,>=3.12 | [] | [] | [] | [
"chromadb>=1.0.12",
"fastapi-mcp>=0.4.0",
"fastapi>=0.116.1",
"fuzzysearch>=0.8.1",
"imandra[universe]>=2.4.0",
"imandrax-api-models>=18",
"imandrax-api[async]>=0.18.0.1",
"imandrax-codegen>=18.2.0",
"iml-query>=0.5.2",
"joblib>=1.5.1",
"networkx-mermaid>=0.1.7",
"networkx>=3.5",
"pydantic-y... | [] | [] | [] | [] | uv/0.7.3 | 2026-02-18T22:08:41.059015 | codelogician-2.0.2.tar.gz | 1,993,429 | 92/e3/c87070ad23c5d5c06f2b7d012b4581704b8927c8ef83cd388d5b93e28fdf/codelogician-2.0.2.tar.gz | source | sdist | null | false | 5896c317a4b215f082ca162dada8ab21 | 10c3ea2b36aa175c6fb569d74c6e4c6fe4e47ee4f17ef466020494c7cc44d698 | 92e3c87070ad23c5d5c06f2b7d012b4581704b8927c8ef83cd388d5b93e28fdf | null | [
"LICENSE"
] | 242 |
2.4 | unienv | 0.0.1b10 | Unified robot environment framework supporting multiple tensor and simulation backends | # UniEnv
Framework unifying robot environments and data APIs. UniEnv provides a universal interface for robot actors, sensors, environments, and data.
## Cross-Backend Tensor Library Support
UniEnv supports multiple tensor backends with zero-copy translation layers through the DLPack protocol, and allows you to use the same abstract compute backend interface to write custom data transformation layers, environment wrappers and other utilities. This is powered by the [XBArray](https://github.com/UniEnvOrg/XBArray) package.
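What the DLPack protocol buys you can be illustrated with plain NumPy (an illustration of the protocol itself, not the UniEnv/XBArray API):

```python
import numpy as np

# Any array implementing __dlpack__ can be consumed by another backend's
# from_dlpack; here NumPy plays both producer and consumer.
src = np.arange(6, dtype=np.float32)
view = np.from_dlpack(src)  # zero-copy: both arrays share one buffer

src[0] = 42.0
assert view[0] == 42.0            # the write is visible through the view
assert np.shares_memory(src, view)
```

The same `from_dlpack` entry point exists in PyTorch and JAX, which is what makes cross-backend zero-copy exchange possible.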
## Universal Robot Environment Interface
UniEnv supports diverse simulation environments and real robots, built on top of the abstract environment / world interface. This allows you to reuse code across different sim and real robots.
## Universal Robot Data Interface
UniEnv provides a universal data interface for accessing robot data through the abstract `BatchBase` interface. We also provide a utility `ReplayBuffer` for saving data from various environments with diverse data format support, including `hdf5`, memory-mapped torch tensors, and others.
## Installation
Install the package with pip
```bash
pip install unienv
```
You can install optional dependencies such as `gymnasium` (for Gymnasium-compatible environments), `dev`, or `video` by running
```bash
pip install unienv[gymnasium,video]
```
## Local Developments
### Development Environment Setup
To perform development on your local machine, you need to clone the repository and install the package in editable mode.
```bash
git clone https://github.com/UniEnvOrg/UniEnv
cd UniEnv
pip install numpy
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu # You can choose to either install cpu version or cuda version, up to you
pip install jax # same for jax
python -m pip install pytest
python -m pip install tensordict h5py opencv-python
pip install -e .[dev,gymnasium,video]
```
### Before committing
Make sure all unit tests pass and your added code compiles before committing or making a PR. You can run the tests with
```bash
pytest
```
## Cite
If you use UniEnv in your research, please cite it as follows:
```bibtex
@software{cao_unienv,
author = {Cao, Yunhao AND Fang, Kuan},
title = {{UniEnv: Unifying Robot Environments and Data APIs}},
year = {2025},
month = oct,
url = {https://github.com/UniEnvOrg/UniEnv},
license = {MIT}
}
```
## Acknowledgements
The idea of this project is inspired by [Gymnasium](https://github.com/Farama-Foundation/Gymnasium) and its predecessor [OpenAI Gym](https://github.com/openai/gym).
This library would not be possible without the work of the Consortium for Python Data API Standards on the [Array API Standard](https://data-apis.org/array-api/latest/). The zero-copy translation layers are powered by the [DLPack](https://github.com/dmlc/dlpack) project.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"xbarray>=0.0.1a16",
"pillow",
"cloudpickle",
"pyvers",
"pytest; extra == \"dev\"",
"gymnasium>=0.29.0; extra == \"gymnasium\"",
"moviepy>=2.1; extra == \"video\""
] | [] | [] | [] | [
"Homepage, https://github.com/UniEnvOrg/UniEnv",
"Documentation, https://github.com/UniEnvOrg/UniEnv",
"Repository, https://github.com/UniEnvOrg/UniEnv",
"Issues, https://github.com/UniEnvOrg/UniEnv/issues",
"Changelog, https://github.com/UniEnvOrg/UniEnv/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:08:33.813633 | unienv-0.0.1b10.tar.gz | 136,392 | 45/50/d2b4541dd895eee7afedce68914613acb37edf5b945b027fe749fc000fb3/unienv-0.0.1b10.tar.gz | source | sdist | null | false | b0e8ff8564ce4ab5ff56a2d408705445 | ae564fd2bfeed41b40751132104c2874c7ab47d2d5894d03fead92e3c22e7801 | 4550d2b4541dd895eee7afedce68914613acb37edf5b945b027fe749fc000fb3 | MIT | [
"LICENSE"
] | 253 |
2.4 | mithril-client | 0.1.0rc2 | Mithril CLI and SDK | Mithril CLI and SDK
## Installation
```bash
# For CLI usage
uv tool install mithril-client
# For SDK usage (as a dependency)
uv add mithril-client
```
## CLI Usage
```bash
# Launch a task
ml launch task.yaml -c my-cluster
# Launch with GPU specification
ml launch 'python train.py' --gpus A100:4
# Check status (falls through to sky)
ml status
# Tear down
ml down my-cluster
```
## SDK Usage
```python
from mithril import sky
# All skypilot functionality via fallthrough
task = sky.Task(run="echo hello")
sky.launch(task)
```
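The fallthrough works like attribute delegation: anything `mithril.sky` does not define itself is looked up on upstream SkyPilot. A self-contained sketch of that pattern (illustrative names and overrides, not mithril's actual code):

```python
import types

# Stand-in for the upstream skypilot module.
upstream = types.SimpleNamespace(
    Task=lambda run: {"run": run},
    status=lambda: "UP",
)

class Fallthrough:
    """Proxy that prefers local overrides, then falls through to upstream."""
    def __init__(self, upstream, overrides):
        self._upstream = upstream
        self._overrides = overrides

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails on the proxy itself.
        if name in self._overrides:
            return self._overrides[name]
        return getattr(self._upstream, name)

sky = Fallthrough(upstream, overrides={"launch": lambda task: ("launched", task)})

task = sky.Task(run="echo hello")  # falls through to upstream
result = sky.launch(task)          # intercepted by the override
```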
## Development
```bash
uv sync --dev
uv run pytest
```
We recommend using [prek](https://prek.j178.dev/) (drop-in replacement for `pre-commit`)
to run the repo’s git hook checks:
```bash
brew install prek
prek install
```
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"skypilot-mithril==0.1.0a7",
"rich>=13.0.0",
"cyclopts>=4.5.0",
"httpx<1.0,>=0.24.0",
"azure-keyvault-administration>=4.4.0b2",
"azure-batch>=15.0.0b1"
] | [] | [] | [] | [] | maturin/1.12.2 | 2026-02-18T22:08:07.069527 | mithril_client-0.1.0rc2.tar.gz | 162,750 | b9/05/2fbed70ebfb0e70570da1c970bfb5d9e4d5872902c2218d66d50ca7fcfaa/mithril_client-0.1.0rc2.tar.gz | source | sdist | null | false | 6cac49fe453f2358512266ebe7b86e38 | b01ce3ed3dd3d8040b134c295e6af28f9a521541cbcc0fcce7ca43198b6e8d49 | b9052fbed70ebfb0e70570da1c970bfb5d9e4d5872902c2218d66d50ca7fcfaa | null | [] | 384 |
2.4 | crowdstrike-aidr | 0.5.0 | Python SDK for CrowdStrike AIDR. | # CrowdStrike AIDR Python SDK
Python SDK for CrowdStrike AIDR.
## Installation
```bash
pip install crowdstrike-aidr
```
## Requirements
Python v3.12 or greater.
## Usage
```python
from crowdstrike_aidr import AIGuard
client = AIGuard(
base_url_template="https://api.crowdstrike.com/aidr/{SERVICE_NAME}",
token="my API token"
)
response = client.guard_chat_completions(
guard_input={
"messages": [
{"role": "user", "content": "Hello, world!"}
]
}
)
```
## Timeouts
The SDK uses `httpx.Timeout` for timeout configuration. By default, requests
have a timeout of 60 seconds with a 5 second connection timeout.
You can configure timeouts in two ways:
### Client-level timeout
Set a default timeout for all requests made by the client:
```python
import httpx
from crowdstrike_aidr import AIGuard
# Using a float (total timeout in seconds).
client = AIGuard(
base_url_template="https://api.crowdstrike.com/aidr/{SERVICE_NAME}",
token="my API token",
timeout=30.0,
)
# Using httpx.Timeout for more granular control.
client = AIGuard(
base_url_template="https://api.crowdstrike.com/aidr/{SERVICE_NAME}",
token="my API token",
timeout=httpx.Timeout(timeout=60.0, connect=10.0),
)
```
### Request-level timeout
Override the timeout for a specific request:
```python
# Using a float (total timeout in seconds).
response = client.guard_chat_completions(
guard_input={"messages": [...]},
timeout=120.0
)
# Using httpx.Timeout for more granular control.
response = client.guard_chat_completions(
guard_input={"messages": [...]},
timeout=httpx.Timeout(timeout=120.0, connect=15.0)
)
```
## Retries
The SDK automatically retries failed requests with exponential backoff. By
default, the client will retry up to 2 times. Set `max_retries` during client
creation to change this.
```python
from crowdstrike_aidr import AIGuard
client = AIGuard(
base_url_template="https://api.crowdstrike.com/aidr/{SERVICE_NAME}",
max_retries=5 # Retry up to 5 times.
)
```
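Concretely, exponential backoff doubles the wait before each retry. A sketch of such a schedule (the base delay, cap, and jitter here are illustrative, not the SDK's documented values):

```python
import random

def backoff_delays(max_retries, base=0.5, cap=8.0, jitter=0.25):
    """Yield the sleep duration before each retry attempt."""
    for attempt in range(max_retries):
        delay = min(cap, base * 2 ** attempt)            # 0.5s, 1s, 2s, ... capped
        yield delay + random.uniform(0, jitter * delay)  # jitter spreads out retry storms

# With jitter disabled the schedule is deterministic:
print(list(backoff_delays(5, base=1.0, jitter=0.0)))  # [1.0, 2.0, 4.0, 8.0, 8.0]
```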
| text/markdown | CrowdStrike | CrowdStrike <support@crowdstrike.com> | null | null | null | null | [
"Typing :: Typed",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: MacOS",
"Operating System :: POS... | [] | null | null | >=3.12 | [] | [] | [] | [
"httpx~=0.28.1",
"pydantic~=2.12.5",
"typing-extensions~=4.15.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:07:15.784301 | crowdstrike_aidr-0.5.0.tar.gz | 20,628 | 8d/3f/21aca12d6ed245b254dc49cafff26c35edb93a99bb00d99557f967bc9951/crowdstrike_aidr-0.5.0.tar.gz | source | sdist | null | false | 20851e65b0f1c35cb212904178f93d07 | 3539126e911bea38670ff9897e46cd0c3ba951c6cd86aa5773e1632f5ca7afc7 | 8d3f21aca12d6ed245b254dc49cafff26c35edb93a99bb00d99557f967bc9951 | MIT | [] | 267 |
2.3 | opencosmo | 1.1.3 | OpenCosmo Python Toolkit | <h1 align="center">
<picture>
<source srcset="https://raw.githubusercontent.com/ArgonneCPAC/opencosmo/main/branding/opencosmo_dark.png" media="(prefers-color-scheme: dark)">
<source srcset="https://raw.githubusercontent.com/ArgonneCPAC/opencosmo/main/branding/opencosmo_light.png" media="(prefers-color-scheme: light)">
<img src="https://raw.githubusercontent.com/ArgonneCPAC/opencosmo/main/branding/opencosmo_light.png" alt="OpenCosmo">
</picture>
</h1><br>
[](https://github.com/ArgonneCPAC/OpenCosmo/actions/workflows/merge.yaml)
[](https://pypi.org/project/opencosmo/)
[](https://anaconda.org/conda-forge/opencosmo)
[](https://github.com/ArgonneCPAC/OpenCosmo/blob/main/LICENSE.md)
The OpenCosmo Python Toolkit provides utilities for reading, writing and manipulating data from cosmological simulations produced by the Cosmological Physics and Advanced Computing (CPAC) group at Argonne National Laboratory. It can be used to work with smaller quantities of data retrieved with the CosmoExplorer, as well as the much larger datasets these queries draw from. The OpenCosmo toolkit integrates with standard tools such as AstroPy, and allows you to manipulate data in a fully-consistent cosmological context.
### Installation
The OpenCosmo library is available for Python 3.11 and up on Linux and MacOS (and Windows via [WSL](https://learn.microsoft.com/en-us/windows/wsl/setup/environment)). It can be installed easily with `pip`:
```bash
pip install opencosmo
```
There's a good chance the default version of Python on your system is less than 3.11. Whether or not this is the case, we recommend installing `opencosmo` into a virtual environment. If you're using [Conda](https://docs.conda.io/projects/conda/en/stable/user-guide/getting-started.html), you can create a new environment and install `opencosmo` into it automatically:
```bash
conda create -n opencosmo_env conda-forge::opencosmo
conda activate opencosmo_env
```
or if you already have a virtual environment to use:
```bash
conda install conda-forge::opencosmo
```
If you plan to use `opencosmo` in a Jupyter notebook, you can install the `ipykernel` package to make the environment available as a kernel:
```bash
pip install ipykernel # can also be installed with conda
python -m ipykernel install --user --name=opencosmo
```
Be sure you have run the "activate" command shown above before running the `ipykernel` command.
## Getting Started
To get started, download the "haloproperties.hdf5" file from the [OpenCosmo Google Drive](https://drive.google.com/drive/folders/1CYmZ4sE-RdhRdLhGuYR3rFfgyA3M1mU-?usp=sharing). This file contains properties of dark-matter halos from a small hydrodynamical simulation run with HACC. You can easily open the data with the `open` command:
```python
import opencosmo as oc
dataset = oc.open("haloproperties.hdf5")
print(dataset)
```
```text
OpenCosmo Dataset (length=237441)
Cosmology: FlatLambdaCDM(name=None, H0=<Quantity 67.66 km / (Mpc s)>, Om0=0.3096446816186967, Tcmb0=<Quantity 0. K>, Neff=3.04, m_nu=None, Ob0=0.04897468161869667)
First 10 rows:
block fof_halo_1D_vel_disp fof_halo_center_x ... sod_halo_sfr unique_tag
km / s Mpc ... solMass / yr
int32 float32 float32 ... float32 int64
----- -------------------- ----------------- ... ------------ ----------
0 32.088795 1.4680439 ... -101.0 21674
0 41.14525 0.19616994 ... -101.0 44144
0 73.82962 1.5071135 ... 3.1447952 48226
0 31.17231 0.7526525 ... -101.0 58472
0 23.038841 5.3246417 ... -101.0 60550
0 37.071426 0.5153746 ... -101.0 537760
0 26.203058 2.1734374 ... -101.0 542858
0 78.7636 2.1477687 ... 0.0 548994
0 37.12636 6.9660196 ... -101.0 571540
0 58.09235 6.072006 ... 1.5439711 576648
```
The `open` function returns a `Dataset` object, which can retrieve the relevant data from disk with a simple method call. It also holds metadata about the simulation, such as the cosmology. You can easily access the data and cosmology as Astropy objects:
```python
dataset.get_data()
dataset.cosmology
```
The first will return an astropy table of the data, with all associated units already applied. The second will return the astropy cosmology object that represents the cosmology the simulation was run with.
### Basic Querying
Although you can access data directly, `opencosmo` provides tools for querying and transforming the data in a fully cosmology-aware context. For example, suppose we wanted to plot the concentration-mass relationship for the halos in our simulation above a certain mass. One way to perform this would be as follows:
```python
dataset = (
    dataset
    .filter(oc.col("fof_halo_mass") > 1e13)
    .take(1000, at="random")
    .select(("fof_halo_mass", "sod_halo_cdelta"))
)
print(dataset)
```
```text
OpenCosmo Dataset (length=1000)
Cosmology: FlatLambdaCDM(name=None, H0=<Quantity 67.66 km / (Mpc s)>, Om0=0.3096446816186967, Tcmb0=<Quantity 0. K>, Neff=3.04, m_nu=None, Ob0=0.04897468161869667)
First 10 rows:
fof_halo_mass sod_halo_cdelta
solMass
float32 float32
---------------- ---------------
11220446000000.0 4.5797048
17266723000000.0 7.4097505
51242150000000.0 1.8738283
70097712000000.0 4.2764015
51028305000000.0 2.678151
11960567000000.0 3.9594727
15276915000000.0 5.793542
16002001000000.0 2.4318497
47030307000000.0 3.7146702
15839942000000.0 3.245569
```
We could then plot the data, or perform further transformations. This is cool on its own, but the real power of `opencosmo` comes from its ability to work with different data types. Go ahead and download the "haloparticles" file from the [OpenCosmo Google Drive](https://drive.google.com/drive/folders/1CYmZ4sE-RdhRdLhGuYR3rFfgyA3M1mU-?usp=sharing) and try the following:
```python
import opencosmo as oc
data = oc.open("haloproperties.hdf5", "haloparticles.hdf5")
```
This will return a data *collection* that will allow you to query and transform the data as before, but will associate the halos with their particles.
```python
data = (
    data
    .filter(oc.col("fof_halo_mass") > 1e13)
    .take(1000, at="random")
)
for halo in data.halos():
halo_properties = halo["halo_properties"]
dm_particles = halo["dm_particles"]
star_particles = halo["star_particles"]
```
In each iteration, `halo_properties` will be a dictionary containing the properties of the halo (such as its total mass), while `dm_particles` and `star_particles` will be OpenCosmo datasets containing the dark matter and star particles associated with the halo, respectively. Because these are just like the dataset object we saw earlier, we can further query and transform the particles as needed for our analysis. For more details on how to use the library, check out the [full documentation](https://opencosmo.readthedocs.io/en/latest/).
### Testing
To run tests, first download the test data [from Google Drive](https://drive.google.com/drive/folders/1CYmZ4sE-RdhRdLhGuYR3rFfgyA3M1mU-?usp=sharing). Set environment variable `OPENCOSMO_DATA_PATH` to the path where the data is stored. Then run the tests with `pytest`:
```bash
export OPENCOSMO_DATA_PATH=/path/to/data
# From the repository root
pytest --ignore test/parallel
```
Although opencosmo does support multi-core processing via MPI, the default installation does not include the necessary dependencies to work in an MPI environment. If you need these capabilities, check out the guide in our documentation.
### Contributing
We welcome bug reports and feature requests from the community. If you would like to contribute to the project, please check out the [contributing guide](CONTRIBUTING.md) for more information.
| text/markdown | Patrick Wells, Will Hicks, Patricia Larsen, Michael Buehlmann | Patrick Wells <pwells@anl.gov>, Will Hicks <whicks@anl.gov>, Patricia Larsen <prlarsen@anl.gov>, Michael Buehlmann <mbuehlmann@anl.gov> | null | null | null | null | [] | [] | null | null | <3.15,>=3.11 | [] | [] | [] | [
"h5py<4.0.0,>=3.12.1",
"astropy<8.0.0,>=7.2.0",
"pydantic<3.0.0,>=2.10.6",
"hdf5plugin<6.0.0,>=5.0.0",
"healpy<2.0.0,>=1.18.1",
"healsparse==1.11.1",
"deprecated<2.0.0,>=1.2.18",
"numpy<2.4,>=2.0",
"click<9.0.0,>=8.2.1",
"numba>=0.62.1",
"rustworkx>=0.17.1",
"pyarrow>=21.0.0; extra == \"io\""
... | [] | [] | [] | [] | uv/0.8.2 | 2026-02-18T22:06:55.331914 | opencosmo-1.1.3.tar.gz | 131,070 | c1/b4/64333bfa24924f856c94bf2ca5ea39342e6f89a9e5f22f3ea080f0e14b9d/opencosmo-1.1.3.tar.gz | source | sdist | null | false | 8db2c08dfb7c852df8c1b70f2bcfd6c2 | 3c90fb349f10439cb6ee41a526de52c287ff48c7afca600db51bccc6b496a0bd | c1b464333bfa24924f856c94bf2ca5ea39342e6f89a9e5f22f3ea080f0e14b9d | null | [] | 240 |
2.4 | sessionViewForClassApiSqlModel | 0.1.1 | Add your description here | # SQLModel support for classApi
This project provides a simple way to use a SQLModel `Session` inside your classApi views.
First, create your SQLModel engine. Here is a basic example:
```py
# engine.py
from sqlalchemy import create_engine
from sqlmodel import SQLModel
from .model import User # In this example, User has 3 fields: id, name, and email
sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
engine = create_engine(sqlite_url, echo=True)
def create_db_and_tables():
SQLModel.metadata.create_all(engine)
```
Then create a new base view using `make_session_view`:
```py
# engine.py
from sqlalchemy import create_engine
from sqlmodel import SQLModel
from sqlmodelclassapi import make_session_view
from .model import User
sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
engine = create_engine(sqlite_url, echo=True)
def create_db_and_tables():
SQLModel.metadata.create_all(engine)
SessionView = make_session_view(engine=engine) # New base view with session support
```
Now, to create a view, inherit from `SessionView` instead of `BaseView`:
```py
# views.py
class ExampleSessionView(SessionView):
methods = ["GET", "POST"]
def get(self, *args):
statement = select(User)
results = self.session.exec(statement).all()
return [
{"id": user.id, "name": user.name, "email": user.email}
for user in results
]
def post(self, name: str):
new_user = User(name=name, email=f"{name.lower()}@example.com")
self.session.add(new_user)
self.session.commit()
self.session.refresh(new_user)
return {"message": f"User '{name}' created successfully!", "user": new_user}
```
With this setup, all operations in your request run inside the same session, including `pre_{method}` hooks.
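Conceptually — this is a pure-Python sketch of the pattern with a stand-in session class, not the package's real implementation — `make_session_view` acts as a base-class factory that binds a fresh session around each dispatched method:

```python
class StubSession:
    """Stand-in for sqlmodel.Session so the sketch is self-contained."""
    def __init__(self, engine):
        self.engine = engine
        self.closed = False

    def close(self):
        self.closed = True

def make_session_view(engine, session_factory=StubSession):
    class SessionView:
        def dispatch(self, method, *args, **kwargs):
            session = session_factory(engine)
            try:
                self.session = session  # visible to the handler and pre_* hooks
                return getattr(self, method)(*args, **kwargs)
            finally:
                session.close()         # one session per request, always closed
    return SessionView

Base = make_session_view("sqlite:///database.db")

class UserView(Base):
    def get(self):
        return self.session.engine
```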
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"classapi>=0.1.0.1",
"sqlmodel>=0.0.34"
] | [] | [] | [] | [] | uv/0.9.7 | 2026-02-18T22:05:47.963928 | sessionviewforclassapisqlmodel-0.1.1.tar.gz | 4,788 | c6/68/2458334789e4d47e47042cd8ecbf6d7dd651efab8c7ad89bf279dd1c4262/sessionviewforclassapisqlmodel-0.1.1.tar.gz | source | sdist | null | false | a38d7e49d7154c8b4c1fec2f06329c99 | 661d50c879a539b7cdfcce2ead46dc47858b2bc5a5bbfc31596c88be30ca3931 | c6682458334789e4d47e47042cd8ecbf6d7dd651efab8c7ad89bf279dd1c4262 | null | [
"LICENSE"
] | 0 |
2.4 | pathview | 0.7.5 | Visual node editor for building and simulating dynamic systems with PathSim | <p align="center">
<img src="https://raw.githubusercontent.com/pathsim/pathview/main/static/pathview_logo.png" width="300" alt="PathView Logo" />
</p>
------------
# PathView - System Modeling in the Browser
A web-based visual node editor for building and simulating dynamic systems with [PathSim](https://github.com/pathsim/pathsim) as the backend. Runs entirely in the browser via Pyodide by default — no server required. Optionally, a Flask backend enables server-side Python execution with any packages (including those with native dependencies that Pyodide can't run). The UI is hosted at [view.pathsim.org](https://view.pathsim.org), free to use for everyone.
## Tech Stack
- [SvelteKit 5](https://kit.svelte.dev/) with Svelte 5 runes
- [SvelteFlow](https://svelteflow.dev/) for the node editor
- [Pyodide](https://pyodide.org/) for in-browser Python/NumPy/SciPy
- [Plotly.js](https://plotly.com/javascript/) for interactive plots
- [CodeMirror 6](https://codemirror.net/) for code editing
## Installation
### pip install (recommended for users)
```bash
pip install pathview
pathview serve
```
This starts the PathView server with a local Python backend and opens your browser. No Node.js required.
**Options:**
- `--port PORT` — server port (default: 5000)
- `--host HOST` — bind address (default: 127.0.0.1)
- `--no-browser` — don't auto-open the browser
- `--debug` — debug mode with auto-reload
### Convert `.pvm` to Python
Convert PathView model files to standalone PathSim scripts:
```bash
pathview convert model.pvm # outputs model.py
pathview convert model.pvm -o output.py # custom output path
pathview convert model.pvm --stdout # print to stdout
```
Or use the Python API directly:
```python
from pathview import convert
python_code = convert("model.pvm")
```
### Development setup
```bash
npm install
npm run dev
```
To use the Flask backend during development:
```bash
pip install flask flask-cors
npm run server # Start Flask backend on port 5000
npm run dev # Start Vite dev server (separate terminal)
# Open http://localhost:5173/?backend=flask
```
## Project Structure
```
src/
├── lib/
│ ├── actions/ # Svelte actions (paramInput)
│ ├── animation/ # Graph loading animations
│ ├── components/ # UI components
│ │ ├── canvas/ # Flow editor utilities (connection, transforms)
│ │ ├── dialogs/ # Modal dialogs
│ │ │ └── shared/ # Shared dialog components (ColorPicker, etc.)
│ │ ├── edges/ # SvelteFlow edge components (ArrowEdge)
│ │ ├── icons/ # Icon component (Icon.svelte)
│ │ ├── nodes/ # Node components (BaseNode, EventNode, AnnotationNode, PlotPreview)
│ │ └── panels/ # Side panels (Simulation, NodeLibrary, CodeEditor, Plot, Console, Events)
│ ├── constants/ # Centralized constants (nodeTypes, layout, handles)
│ ├── events/ # Event system
│ │ └── generated/ # Auto-generated from PathSim
│ ├── export/ # Export utilities
│ │ └── svg/ # SVG graph export (renderer, types)
│ ├── nodes/ # Node type system
│ │ ├── generated/ # Auto-generated from PathSim
│ │ └── shapes/ # Node shape definitions
│ ├── plotting/ # Plot system
│ │ ├── core/ # Constants, types, utilities
│ │ ├── processing/ # Data processing, render queue
│ │ └── renderers/ # Plotly and SVG renderers
│ ├── routing/ # Orthogonal wire routing (A* pathfinding)
│ ├── pyodide/ # Python runtime (backend, bridge)
│ │ └── backend/ # Modular backend system (registry, state, types)
│ │ ├── pyodide/ # Pyodide Web Worker implementation
│ │ └── flask/ # Flask HTTP/SSE backend implementation
│ ├── schema/ # File I/O (save/load, component export)
│ ├── simulation/ # Simulation metadata
│ │ └── generated/ # Auto-generated defaults
│ ├── stores/ # Svelte stores (state management)
│ │ └── graph/ # Graph state with subsystem navigation
│ ├── types/ # TypeScript type definitions
│ └── utils/ # Utilities (colors, download, csvExport, codemirror)
├── routes/ # SvelteKit pages
└── app.css # Global styles with CSS variables
pathview/ # Python package (pip install pathview)
├── app.py # Flask server (subprocess management, HTTP routes)
├── worker.py # REPL worker subprocess (Python execution)
├── cli.py # CLI entry point (pathview serve)
├── converter.py # PVM to Python converter (public API)
├── data/ # Bundled data files
│ └── registry.json # Block/event registry for converter
└── static/ # Bundled frontend (generated at build time)
scripts/
├── config/ # Configuration files for extraction
│ ├── schemas/ # JSON schemas for validation
│ ├── pathsim/ # Core PathSim blocks, events, simulation config
│ ├── pathsim-chem/ # Chemical toolbox blocks
│ ├── pyodide.json # Pyodide version and preload packages
│ ├── requirements-pyodide.txt # Runtime Python packages
│ └── requirements-build.txt # Build-time Python packages
├── generated/ # Generated files (from extract.py)
│ └── registry.json # Block/event registry with import paths
├── extract.py # Unified extraction script
└── pvm2py.py # Standalone .pvm to Python converter
```
---
## Architecture Overview
### Data Flow
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Graph Store │────>│ pathsimRunner │────>│ Python Code │
│ (nodes, edges) │ │ (code gen) │ │ (string) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│
v
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Plot/Console │<────│ bridge.ts │<────│ Backend │
│ (results) │ │ (queue + rAF) │ │ (Pyodide/Flask) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```
### Streaming Architecture
Simulations run in streaming mode for real-time visualization. The worker runs autonomously and pushes results without waiting for the UI:
```
Worker (10 Hz) Main Thread UI (10 Hz)
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Python loop │ ────────> │ Result Queue │ ────────> │ Plotly │
│ (autonomous) │ stream- │ (accumulate) │ rAF │ extendTraces │
│ │ data │ │ batched │ │
└──────────────┘ └──────────────┘ └──────────────┘
```
- **Decoupled rates**: Python generates data at 10 Hz, UI renders at 10 Hz max
- **Queue-based**: Results accumulate in queue, merged on each UI frame
- **Non-blocking**: Simulation never waits for plot rendering
- **extendTraces**: Scope plots append data incrementally instead of full re-render
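The per-frame merge step can be sketched in a few lines (illustrative Python version of the queue-based merge; names are assumptions, not the actual implementation):

```python
from queue import Empty, Queue

def drain_and_merge(results, merged):
    """Fold every queued partial result into `merged`; called once per UI frame."""
    while True:
        try:
            chunk = results.get_nowait()
        except Empty:
            return merged  # queue drained; render `merged` this frame
        for trace, points in chunk.items():
            merged.setdefault(trace, []).extend(points)

q = Queue()
q.put({"scope.y0": [0.0, 0.1]})  # worker pushes partial results at 10 Hz
q.put({"scope.y0": [0.2]})
merged = drain_and_merge(q, {})  # UI merges whatever has accumulated
```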
### Wire Routing
PathView uses Simulink-style orthogonal wire routing with A* pathfinding:
- **Automatic routing**: Wires route around nodes with 90° bends only
- **User waypoints**: Press `\` on selected edge to add manual waypoints
- **Draggable waypoints**: Drag waypoint markers to reposition, double-click to delete
- **Segment dragging**: Drag segment midpoints to create new waypoints
- **Incremental updates**: Spatial indexing (O(1) node updates) for smooth dragging
- **Hybrid routing**: Routes through user waypoints: Source → A* → W1 → A* → Target
Key files: `src/lib/routing/` (pathfinder, grid builder, route calculator)
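A toy version of grid-based A* with orthogonal moves and a Manhattan-distance heuristic (illustrative only — the production router also handles waypoints and spatial indexing):

```python
import heapq

def route(grid, start, goal):
    """Find a shortest orthogonal path on a grid; grid[r][c] == 1 is blocked."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path)
    done = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in done:
            continue  # stale heap entry
        done.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc] and (nr, nc) not in done:
                nxt = (nr, nc)
                heapq.heappush(open_set, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = route(grid, (0, 0), (2, 0))  # must detour around the blocked row
```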
### Key Abstractions
| Layer | Purpose | Key Files |
|-------|---------|-----------|
| **Main App** | Orchestrates panels, shortcuts, file ops | `routes/+page.svelte` |
| **Flow Canvas** | SvelteFlow wrapper, node/edge sync | `components/FlowCanvas.svelte` |
| **Flow Updater** | View control, animation triggers | `components/FlowUpdater.svelte` |
| **Context Menus** | Right-click menus for nodes/canvas/plots | `components/ContextMenu.svelte`, `contextMenuBuilders.ts` |
| **Graph Store** | Node/edge state, subsystem navigation | `stores/graph/` |
| **View Actions** | Fit view, zoom, pan controls | `stores/viewActions.ts`, `stores/viewTriggers.ts` |
| **Clipboard** | Copy/paste/duplicate operations | `stores/clipboard.ts` |
| **Plot Settings** | Per-trace and per-block plot options | `stores/plotSettings.ts` |
| **Node Registry** | Block type definitions, parameters | `nodes/registry.ts` |
| **Code Generation** | Graph → Python code | `pyodide/pathsimRunner.ts` |
| **Backend** | Modular Python execution interface | `pyodide/backend/` |
| **Backend Registry** | Factory for swappable backends | `pyodide/backend/registry.ts` |
| **PyodideBackend** | Web Worker Pyodide implementation | `pyodide/backend/pyodide/` |
| **FlaskBackend** | HTTP/SSE Flask server implementation | `pyodide/backend/flask/` |
| **Simulation Bridge** | High-level simulation API | `pyodide/bridge.ts` |
| **Schema** | File/component save/load operations | `schema/fileOps.ts`, `schema/componentOps.ts` |
| **Export Utils** | SVG/CSV/Python file downloads | `utils/download.ts`, `export/svg/`, `utils/csvExport.ts` |
### Centralized Constants
Use these imports instead of magic strings:
```typescript
import { NODE_TYPES } from '$lib/constants/nodeTypes';
// NODE_TYPES.SUBSYSTEM, NODE_TYPES.INTERFACE
import { PORT_COLORS, DIALOG_COLOR_PALETTE } from '$lib/utils/colors';
// PORT_COLORS.default, etc.
```
---
## Adding New Blocks
Blocks are extracted automatically from PathSim using the `Block.info()` classmethod. The extraction is config-driven for easy maintenance.
### 1. Ensure the block exists in PathSim
The block must be importable from `pathsim.blocks` (or toolbox module):
```python
from pathsim.blocks import YourNewBlock
```
### 2. Add to block configuration
Edit `scripts/config/pathsim/blocks.json` and add the block class name to the appropriate category:
```json
{
"categories": {
"Algebraic": [
"Adder",
"Multiplier",
"YourNewBlock"
]
}
}
```
Port configurations are automatically extracted from `Block.info()`:
- `None` → Variable/unlimited ports (UI allows add/remove)
- `{}` → No ports of this type
- `{"name": index}` → Fixed labeled ports (locked count)
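These three cases can be decoded mechanically; a small sketch (illustrative helper, not part of the extraction script):

```python
def classify_ports(entry):
    """Map a Block.info() port entry to how the UI should treat it."""
    if entry is None:
        return "variable"  # unlimited ports; UI allows add/remove
    if entry == {}:
        return "none"      # block has no ports of this type
    return "fixed"         # labeled ports with locked count, e.g. {"in": 0}
```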
### 3. Run extraction
```bash
npm run extract
```
This generates TypeScript files in `src/lib/*/generated/` with:
- Block metadata (parameters, descriptions, docstrings)
- Port configurations from `Block.info()`
- Pyodide runtime config
### 4. Verify
Start the dev server and check that your block appears in the Block Library panel.
### Port Synchronization
Some blocks process inputs as parallel paths where each input has a corresponding output (e.g., Integrator, Amplifier, Sin). For these blocks, the UI only shows input port controls and outputs auto-sync.
Configure in `src/lib/nodes/uiConfig.ts`:
```typescript
export const syncPortBlocks = new Set([
'Integrator',
'Differentiator',
'Delay',
'PID',
'PID_Antiwindup',
'Amplifier',
'Sin', 'Cos', 'Tan', 'Tanh',
'Abs', 'Sqrt', 'Exp', 'Log', 'Log10',
'Mod', 'Clip', 'Pow',
'SampleHold'
]);
```
### Port Labels from Parameters
Some blocks derive port names from a parameter (e.g., Scope and Spectrum use `labels` to name input traces). When the parameter changes, port names update automatically.
Configure in `src/lib/nodes/uiConfig.ts`:
```typescript
export const portLabelParams: Record<string, PortLabelConfig | PortLabelConfig[]> = {
Scope: { param: 'labels', direction: 'input' },
Spectrum: { param: 'labels', direction: 'input' },
// Multiple directions supported:
// SomeBlock: [
// { param: 'input_labels', direction: 'input' },
// { param: 'output_labels', direction: 'output' }
// ]
};
```
---
## Adding New Toolboxes
To add a new PathSim toolbox (like `pathsim-chem`):
### 1. Add to requirements
Edit `scripts/config/requirements-pyodide.txt`:
```txt
--pre
pathsim
pathsim-chem>=0.2rc2 # optional
pathsim-controls # optional - your new toolbox
```
The `# optional` comment means Pyodide will continue loading if this package fails to install.
### 2. Create toolbox config
Create `scripts/config/pathsim-controls/blocks.json`:
```json
{
"$schema": "../schemas/blocks.schema.json",
"toolbox": "pathsim-controls",
"importPath": "pathsim_controls.blocks",
"categories": {
"Controls": [
"PIDController",
"StateEstimator"
]
}
}
```
### 3. (Optional) Add events
Create `scripts/config/pathsim-controls/events.json` if the toolbox has custom events.
### 4. Run extraction and build
```bash
npm run extract
npm run build
```
No code changes needed - the extraction script automatically discovers toolbox directories.
For the full toolbox integration reference (Python package contract, config schemas, extraction pipeline, generated output), see [**docs/toolbox-spec.md**](docs/toolbox-spec.md).
---
## Python Backend System
The Python runtime uses a modular backend architecture, allowing different execution environments (Pyodide, local Python, remote server) to be swapped without changing application code.
### Architecture
```
┌─────────────────────────────────────────────────────────────────────┐
│ Backend Interface │
│ init(), exec(), evaluate(), startStreaming(), stopStreaming()... │
└─────────────────────────────────────────────────────────────────────┘
│
┌──────────────┼──────────────┐
▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│ Pyodide │ │ Flask │ │ Remote │
│ Backend │ │ Backend │ │ Backend │
│ (default) │ │ (HTTP) │ │ (future) │
└───────────┘ └───────────┘ └───────────┘
│ │
▼ ▼
┌───────────┐ ┌───────────┐
│ Web Worker│ │ Flask │──> Python subprocess
│ (Pyodide) │ │ Server │ (one per session)
└───────────┘ └───────────┘
```
### Backend Registry
```typescript
import { getBackend, switchBackend, setFlaskHost } from '$lib/pyodide/backend';
// Get current backend (defaults to Pyodide)
const backend = getBackend();
// Switch to Flask backend
setFlaskHost('http://localhost:5000');
switchBackend('flask');
```
Backend selection can also be controlled via URL parameters:
```
http://localhost:5173/?backend=flask # Flask on default port
http://localhost:5173/?backend=flask&host=http://myserver:5000 # Custom host
```
### REPL Protocol
**Requests** (Main → Worker):
```typescript
type REPLRequest =
| { type: 'init' }
| { type: 'exec'; id: string; code: string } // Execute code (no return)
| { type: 'eval'; id: string; expr: string } // Evaluate expression (returns JSON)
| { type: 'stream-start'; id: string; expr: string } // Start streaming loop
| { type: 'stream-stop' } // Stop streaming loop
| { type: 'stream-exec'; code: string } // Execute code during streaming
```
**Responses** (Worker → Main):
```typescript
type REPLResponse =
| { type: 'ready' }
| { type: 'ok'; id: string } // exec succeeded
| { type: 'value'; id: string; value: string } // eval result (JSON)
| { type: 'error'; id: string; error: string; traceback?: string }
| { type: 'stdout'; value: string }
| { type: 'stderr'; value: string }
| { type: 'progress'; value: string }
| { type: 'stream-data'; id: string; value: string } // Streaming result
| { type: 'stream-done'; id: string } // Streaming completed
```
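The worker side of this protocol boils down to a dispatch over `type`. A minimal Python sketch of one request/response cycle (illustrative, not the actual worker code):

```python
import json

def handle_request(msg, namespace):
    """Process one REPL request against a persistent namespace."""
    kind, rid = msg["type"], msg.get("id", "")
    try:
        if kind == "exec":
            exec(msg["code"], namespace)  # side effects only, no return value
            return {"type": "ok", "id": rid}
        if kind == "eval":
            value = eval(msg["expr"], namespace)
            return {"type": "value", "id": rid, "value": json.dumps(value)}
        return {"type": "error", "id": rid, "error": f"unknown request: {kind}"}
    except Exception as exc:
        return {"type": "error", "id": rid, "error": str(exc)}

ns = {}  # variables persist across calls, like a session namespace
handle_request({"type": "exec", "id": "1", "code": "x = [1, 2, 3]"}, ns)
reply = handle_request({"type": "eval", "id": "2", "expr": "sum(x)"}, ns)
# reply == {"type": "value", "id": "2", "value": "6"}
```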
### Usage Example
```typescript
import { init, exec, evaluate } from '$lib/pyodide/backend';
// Initialize backend (Pyodide by default)
await init();
// Execute Python code
await exec(`
import numpy as np
x = np.linspace(0, 10, 100)
`);
// Evaluate and get result
const result = await evaluate<number[]>('x.tolist()');
```
### High-Level API (bridge.ts)
For simulation, use the higher-level API in `bridge.ts`:
```typescript
import {
runStreamingSimulation,
continueStreamingSimulation,
stopSimulation,
execDuringStreaming
} from '$lib/pyodide/bridge';
// Run streaming simulation
const result = await runStreamingSimulation(pythonCode, duration, (partialResult) => {
console.log('Progress:', partialResult.scopeData);
});
// result.scopeData, result.spectrumData, result.nodeNames
// Continue simulation from where it stopped
const moreResult = await continueStreamingSimulation('5.0');
// Stop simulation gracefully
await stopSimulation();
// Execute code during active simulation (queued between steps)
execDuringStreaming('source.amplitude = 2.0');
```
### Flask Backend
The Flask backend enables server-side Python execution for packages that Pyodide can't run (e.g., FESTIM or other libraries with native C/Fortran dependencies). It mirrors the Web Worker architecture: one subprocess per session, speaking the same REPL protocol.
```
Browser Tab Flask Server Worker Subprocess
┌──────────────┐ ┌──────────────────┐ ┌──────────────────┐
│ FlaskBackend │ HTTP/SSE │ app.py │ stdin │ worker.py │
│ exec() │──POST────────→│ route → session │──JSON───→│ exec(code, ns) │
│ eval() │──POST────────→│ subprocess mgr │──JSON───→│ eval(expr, ns) │
│ stream() │──POST (SSE)──→│ pipe SSE relay │←─JSON────│ streaming loop │
│ inject() │──POST────────→│ → code queue │──JSON───→│ queue drain │
│ stop() │──POST────────→│ → stop flag │──JSON───→│ stop check │
└──────────────┘ └──────────────────┘ └──────────────────┘
```
**Standalone (pip package):**
```bash
pip install pathview
pathview serve
```
**Development (separate servers):**
```bash
pip install flask flask-cors
npm run server # Starts Flask API on port 5000
npm run dev # Starts Vite dev server (separate terminal)
# Open http://localhost:5173/?backend=flask
```
**Key properties:**
- **Process isolation** — each session gets its own Python subprocess
- **Host environment** — workers run with the same Python used to install pathview, so all packages in the user's environment are available in the code editor
- **Namespace persistence** — variables persist across exec/eval calls within a session
- **Dynamic packages** — packages from `PYTHON_PACKAGES` (the same config used by Pyodide) are pip-installed on first init if not already present
- **Session TTL** — stale sessions cleaned up after 1 hour of inactivity
- **Streaming** — simulations stream via SSE, with the same code injection support as Pyodide
For the full protocol reference (message types, HTTP routes, SSE format, streaming semantics, how to implement a new backend), see [**docs/backend-protocol-spec.md**](docs/backend-protocol-spec.md).
**API routes:**
| Route | Method | Action |
|-------|--------|--------|
| `/api/health` | GET | Health check |
| `/api/init` | POST | Initialize worker with packages |
| `/api/exec` | POST | Execute Python code |
| `/api/eval` | POST | Evaluate expression, return JSON |
| `/api/stream` | POST | Start streaming simulation (SSE) |
| `/api/stream/exec` | POST | Inject code during streaming |
| `/api/stream/stop` | POST | Stop streaming |
| `/api/session` | DELETE | Kill session subprocess |
---
## State Management
### SvelteFlow vs Graph Store
SvelteFlow manages its own UI state (selection, viewport, node positions). The graph store manages application data:
| State Type | Managed By | Examples |
|------------|------------|----------|
| **UI State** | SvelteFlow | Selection, viewport, dragging |
| **App Data** | Graph Store | Node parameters, connections, subsystems |
Do not duplicate SvelteFlow state in custom stores. Use SvelteFlow's APIs (`useSvelteFlow`, event handlers) to interact with canvas state.
### Store Pattern
Stores use Svelte's writable with custom wrapper objects:
```typescript
const internal = writable<T>(initialValue);
export const myStore = {
subscribe: internal.subscribe,
// Custom methods
doSomething() {
internal.update(state => ({ ...state, ... }));
}
};
```
**Important**: Do NOT wrap `.subscribe()` in `$effect()` - this causes infinite loops.
```svelte
<script>
// Correct
myStore.subscribe(value => { localState = value; });
// Wrong - causes infinite loop
$effect(() => {
myStore.subscribe(value => { localState = value; });
});
</script>
```
### Subsystem Navigation
Subsystems are nested graphs with path-based navigation:
```typescript
graphStore.drillDown(subsystemId); // Drill into subsystem
graphStore.drillUp(); // Go up one level
graphStore.navigateTo(level); // Navigate to breadcrumb level
graphStore.currentPath // Current navigation path
```
The Interface node inside a subsystem mirrors its parent Subsystem's ports (with inverted direction).
---
## Keyboard Shortcuts
Press `?` to see all shortcuts in the app. Key shortcuts:
| Category | Shortcut | Action |
|----------|----------|--------|
| **File** | `Ctrl+O` | Open |
| | `Ctrl+S` | Save |
| | `Ctrl+E` | Export Python |
| **Edit** | `Ctrl+Z/Y` | Undo/Redo |
| | `Ctrl+D` | Duplicate |
| | `Ctrl+F` | Find |
| | `Del` | Delete |
| **Transform** | `R` | Rotate 90° |
| | `X` / `Y` | Flip H/V |
| | `Arrows` | Nudge selection |
| **Wires** | `\` | Add waypoint to selected edge |
| **Labels** | `L` | Toggle port labels |
| **View** | `F` | Fit view |
| | `H` | Go to root |
| | `T` | Toggle theme |
| **Panels** | `B` | Blocks |
| | `N` | Events |
| | `S` | Simulation |
| | `V` | Results |
| | `C` | Console |
| **Run** | `Ctrl+Enter` | Simulate |
| | `Shift+Enter` | Continue |
---
## File Formats
PathView uses JSON-based file formats for saving and sharing:
| Extension | Type | Description |
|-----------|------|-------------|
| `.pvm` | Model | Complete simulation model (graph, events, settings, code) |
| `.blk` | Block | Single block with parameters (for sharing/reuse) |
| `.sub` | Subsystem | Subsystem with internal graph (for sharing/reuse) |
The `.pvm` format is fully documented in [**docs/pvm-spec.md**](docs/pvm-spec.md). Use this spec if you are building tools that read or write PathView models (e.g., code generators, importers). A reference Python code generator is available at `scripts/pvm2py.py`.
### Specification Documents
| Document | Audience |
|----------|----------|
| [**docs/pvm-spec.md**](docs/pvm-spec.md) | Building tools that read/write `.pvm` model files |
| [**docs/backend-protocol-spec.md**](docs/backend-protocol-spec.md) | Implementing a new execution backend (remote server, cloud worker, etc.) |
| [**docs/toolbox-spec.md**](docs/toolbox-spec.md) | Creating a third-party toolbox package for PathView |
### Export Options
- **File > Save** - Save complete model as `.pvm`
- **File > Export Python** - Generate standalone Python script
- **Right-click node > Export** - Save individual block/subsystem
- **Right-click canvas > Export SVG** - Export graph as vector image
- **Right-click plot > Download PNG/SVG** - Export plot as image
- **Right-click plot > Export CSV** - Export simulation data as CSV
- **Scope/Spectrum node context menu** - Export simulation data as CSV
---
## Sharing Models via URL
Models can be loaded directly from a URL using query parameters:
```
https://view.pathsim.org/?model=<url>
https://view.pathsim.org/?modelgh=<github-shorthand>
```
### Parameters
| Parameter | Description | Example |
|-----------|-------------|---------|
| `model` | Direct URL to a `.pvm` or `.json` file | `?model=https://example.com/mymodel.pvm` |
| `modelgh` | GitHub shorthand (expands to raw.githubusercontent.com) | `?modelgh=user/repo/path/to/model.pvm` |
### GitHub Shorthand
The `modelgh` parameter expands to a raw GitHub URL:
```
modelgh=user/repo/examples/demo.pvm
→ https://raw.githubusercontent.com/user/repo/main/examples/demo.pvm
```
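The expansion is plain string manipulation. A sketch, following the example above (the function name is illustrative, and it assumes the first two path segments are user/repo with `main` as the branch):

```typescript
// Illustrative sketch of the modelgh expansion shown above.
// Assumes the first two segments are user/repo and the branch is `main`.
function expandGithubShorthand(shorthand: string): string {
  const [user, repo, ...path] = shorthand.split('/');
  return `https://raw.githubusercontent.com/${user}/${repo}/main/${path.join('/')}`;
}
```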
### Examples
```
# Load from any URL
https://view.pathsim.org/?model=https://mysite.com/models/feedback.pvm
# Load from GitHub repository
https://view.pathsim.org/?modelgh=pathsim/pathview/static/examples/feedback-system.json
```
---
## Scripts
| Script | Purpose |
|--------|---------|
| `npm run dev` | Start Vite development server |
| `npm run server` | Start Flask backend server (port 5000) |
| `npm run build` | Production build (GitHub Pages) |
| `npm run build:package` | Build pip package (frontend + wheel) |
| `npm run preview` | Preview production build |
| `npm run check` | TypeScript/Svelte type checking |
| `npm run lint` | Run ESLint |
| `npm run format` | Format code with Prettier |
| `npm run extract` | Regenerate all definitions from PathSim |
| `npm run extract:blocks` | Blocks only |
| `npm run extract:events` | Events only |
| `npm run extract:simulation` | Simulation params only |
| `npm run extract:deps` | Dependencies only |
| `npm run extract:validate` | Validate config files |
| `npm run pvm2py -- <file>` | Convert `.pvm` file to standalone Python script |
---
## Node Styling
Nodes are styled based on their category, with CSS-driven shapes and colors.
### Shapes by Category
| Category | Shape | Border Radius |
|----------|-------|---------------|
| Sources | Pill | 20px |
| Dynamic | Rectangle | 4px |
| Algebraic | Rectangle | 4px |
| Mixed | Asymmetric | 12px 4px 12px 4px |
| Recording | Pill | 20px |
| Subsystem | Rectangle | 4px |
Shapes are defined in `src/lib/nodes/shapes/registry.ts` and applied via CSS classes (`.shape-pill`, `.shape-rect`, etc.).
### Colors
- **Default node color**: CSS variable `--accent` (#0070C0 - PathSim blue)
- **Custom colors**: Right-click node → Properties → Color picker (12 colors available)
- **Port colors**: `PORT_COLORS.default` (#969696 gray), customizable per-port
Colors are CSS-driven - see `src/app.css` for variables and `src/lib/utils/colors.ts` for palettes.
### Port Labels
Port labels show the name of each input/output port alongside the node. Toggle globally with `L` key, or per-node via right-click menu.
- **Global toggle**: Press `L` to show/hide port labels for all nodes
- **Per-node override**: Right-click node → "Show Input Labels" / "Show Output Labels"
- **Truncation**: Labels are truncated to 5 characters for compact display
- **SVG export**: Port labels are included when exporting the graph as SVG
### Adding Custom Shapes
1. Register the shape in `src/lib/nodes/shapes/registry.ts`:
```typescript
registerShape({
id: 'hexagon',
name: 'Hexagon',
cssClass: 'shape-hexagon',
borderRadius: '0px'
});
```
2. Add CSS in `src/app.css` or component styles:
```css
.shape-hexagon {
clip-path: polygon(25% 0%, 75% 0%, 100% 50%, 75% 100%, 25% 100%, 0% 50%);
}
```
3. Optionally map categories to the new shape:
```typescript
setCategoryShape('MyCategory', 'hexagon');
```
---
## Design Principles
1. **Python is first-class** - All node parameters are Python expressions stored as strings and passed verbatim to PathSim. PathSim handles all type checking and validation at runtime.
2. **Subsystems are nested graphs** - The Interface node inside a subsystem mirrors its parent's ports (inverted direction).
3. **No server required by default** - Everything runs client-side via Pyodide. The optional Flask backend enables server-side execution for packages with native dependencies.
4. **Registry pattern** - Nodes and events are registered centrally for extensibility.
5. **Minimal state** - Derive where possible, avoid duplicating truth. SvelteFlow manages its own UI state.
6. **CSS for styling** - Use CSS variables from `app.css` and component `<style>` blocks, not JavaScript theme APIs.
7. **Svelte 5 runes** - Use `$state`, `$derived`, `$effect` exclusively.
---
## Performance Optimizations
### Streaming Simulation
- **Autonomous worker**: Python runs in a Web Worker loop, pushing results without waiting for UI acknowledgment
- **Queue-based updates**: Results accumulate in a queue, merged in batches via `requestAnimationFrame`
- **Decoupled rates**: Simulation @ 10 Hz, UI updates @ 10 Hz max - expensive plots don't slow simulation
### Plotly Rendering
- **extendTraces**: During streaming, scope plots append new data instead of full re-render
- **SVG mode**: Uses `scatter` (SVG) instead of `scattergl` (WebGL) for stability during streaming
- **Visibility API**: Pauses plot updates when browser tab is hidden
### Node Previews
- **Separate render queue**: Plot previews in nodes use SVG paths (not Plotly)
- **Min-max decimation**: Large datasets downsampled while preserving peaks/valleys
- **Deferred rendering**: Shared queue prevents preview updates from blocking main plots
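Min-max decimation works by keeping both extremes of each bucket of samples, so peaks and valleys survive the downsampling. A generic sketch of the technique — not PathView's actual implementation:

```typescript
// Illustrative min-max decimation: for each bucket, keep the min and max sample
// (in temporal order) so peaks and valleys survive downsampling.
function minMaxDecimate(data: number[], buckets: number): number[] {
  if (data.length <= buckets * 2) return data.slice(); // already small enough
  const out: number[] = [];
  const size = data.length / buckets;
  for (let b = 0; b < buckets; b++) {
    const start = Math.floor(b * size);
    const end = Math.min(data.length, Math.floor((b + 1) * size));
    let lo = data[start], hi = data[start], loIdx = start, hiIdx = start;
    for (let i = start + 1; i < end; i++) {
      if (data[i] < lo) { lo = data[i]; loIdx = i; }
      if (data[i] > hi) { hi = data[i]; hiIdx = i; }
    }
    // preserve temporal order within the bucket
    out.push(...(loIdx < hiIdx ? [lo, hi] : [hi, lo]));
  }
  return out;
}
```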
---
## Deployment
PathView has two deployment targets:
### GitHub Pages (web)
| Trigger | What happens | Deployed to |
|---------|--------------|-------------|
| Push to `main` | Build with base path `/dev` | [view.pathsim.org/dev/](https://view.pathsim.org/dev/) |
| Release published | Bump `package.json`, build, deploy | [view.pathsim.org/](https://view.pathsim.org/) |
| Manual dispatch | Choose `dev` or `release` | Respective path |
### PyPI (pip package)
| Trigger | What happens | Published to |
|---------|--------------|--------------|
| Release published | Build frontend + wheel, publish | [pypi.org/project/pathview](https://pypi.org/project/pathview/) |
| Manual dispatch | Choose `testpypi` or `pypi` | Respective index |
### How it works
1. Both versions deploy to the `deployment` branch using GitHub Actions
2. Dev builds update only the `/dev` folder, preserving the release at root
3. Release builds update root, preserving `/dev`
4. Version in `package.json` is automatically bumped from the release tag (e.g., `v0.4.0` → `0.4.0`)
### Creating a release
1. Create a GitHub release with a version tag (e.g., `v0.4.0`)
2. The workflow automatically:
- Updates `package.json` to match the tag
- Commits the version bump to `main`
- Builds and deploys to production
---
## License
MIT
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"flask>=3.0",
"flask-cors>=4.0",
"numpy",
"waitress>=3.0",
"pathsim==0.17.0",
"pathsim-chem==0.2rc3",
"pytest>=8.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://view.pathsim.org",
"Repository, https://github.com/pathsim/pathview"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:05:09.983231 | pathview-0.7.5.tar.gz | 3,561,253 | 52/fe/06a3ea0bef1c32d9de79af3a372430bf0fe5c31409820fe70f064ab8e936/pathview-0.7.5.tar.gz | source | sdist | null | false | d7c59be49bd32c3d7d6239f4d8bbfd88 | d356741ebc39c7bbe3bda26403b842cf80806db44fddbbe31480a8a49e943848 | 52fe06a3ea0bef1c32d9de79af3a372430bf0fe5c31409820fe70f064ab8e936 | null | [
"LICENSE"
] | 236 |
2.4 | latch-eval-tools | 0.1.22 | Shared eval tools for single-cell bench, spatial bench, and future biology benchmarks. | # latch-eval-tools
Shared eval tools for single-cell bench, spatial bench, and future biology benchmarks.
## Installation
```bash
pip install latch-eval-tools
```
## What is included
- `Eval` / `EvalResult` types
- Built-in graders + `get_grader()`
- `EvalRunner` harness to run an agent against one eval JSON
- `eval-lint` CLI and Python linter APIs
## Quickstart
```python
from latch_eval_tools import EvalRunner, run_minisweagent_task
runner = EvalRunner("evals/count_cells.json")
result = runner.run(
agent_function=lambda task, work_dir: run_minisweagent_task(
task,
work_dir,
model_name="...your model name...",
)
)
print(result["passed"])
print(result["grader_result"].reasoning if result["grader_result"] else "No grader result")
```
`EvalRunner.run()` expects an `agent_function(task_prompt, work_dir)` and supports either:
- returning a plain answer `dict`, or
- returning `{"answer": <dict>, "metadata": <dict>}`
If your agent writes `eval_answer.json` in `work_dir`, the runner will load it automatically.
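A minimal custom `agent_function` satisfying this contract might look like the following sketch — the hard-coded answer is a placeholder, and a real agent would run a model against `task_prompt` inside `work_dir`:

```python
# Illustrative sketch of a custom agent_function for EvalRunner.run().
# The hard-coded answer is a placeholder; a real agent would do actual work.
def my_agent(task_prompt: str, work_dir: str) -> dict:
    answer = {"n_cells": 1523}  # placeholder result
    return {
        "answer": answer,
        "metadata": {"model": "example-model", "prompt_chars": len(task_prompt)},
    }
```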
## Graders
Available grader types:
`numeric_tolerance`, `jaccard_label_set`, `distribution_comparison`, `marker_gene_precision_recall`, `marker_gene_separation`, `spatial_adjacency`, `multiple_choice`
```python
from latch_eval_tools.graders import get_grader
grader = get_grader("numeric_tolerance")
result = grader.evaluate_answer(
agent_answer={"n_cells": 1523},
config={
"ground_truth": {"n_cells": 1500},
"tolerances": {"n_cells": {"type": "relative", "value": 0.05}},
},
)
print(result.passed, result.reasoning)
```
Built-in harness helpers:
- `run_minisweagent_task`
- `run_claudecode_task` (requires `ANTHROPIC_API_KEY` and `claude` CLI)
- `run_openaicodex_task` (requires `OPENAI_API_KEY` or `CODEX_API_KEY` and `codex` CLI)
- `run_plotsagent_task` (experimental latch-plots harness)
### Linter
Validate eval JSON files:
```bash
eval-lint evals/my_dataset/
eval-lint evals/ --format json
```
```python
from latch_eval_tools.linter import lint_eval, lint_directory
result = lint_eval("evals/test.json")
print(result.passed, result.issues)
```
## Eval JSON shape
```json
{
"id": "unique_test_id",
"task": "Task description. Include an <EVAL_ANSWER> JSON template in this text.",
"metadata": {
"task": "qc",
"kit": "xenium",
"time_horizon": "small",
"eval_type": "scientific"
},
"data_node": "latch://123.node/path/to/data.h5ad",
"grader": {
"type": "numeric_tolerance",
"config": {
"ground_truth": {"field": 42},
"tolerances": {"field": {"type": "absolute", "value": 1}}
}
}
}
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.0.0",
"anthropic>=0.72.0",
"latch>=2.0.0",
"matplotlib>=3.0.0",
"mini-swe-agent==1.17.5",
"numpy>=1.24.0",
"openai>=1.0.0",
"orjson>=3.0.0",
"pydantic>=2.0.0",
"scikit-learn>=1.3.0",
"scipy>=1.10.0",
"statsmodels>=0.14.0",
"websockets>=12.0"
] | [] | [] | [] | [] | uv/0.5.9 | 2026-02-18T22:05:04.500423 | latch_eval_tools-0.1.22.tar.gz | 43,603 | 84/b9/c5276ef0929c0ed59a47db687e2a61828532924385c92827e4f0db205341/latch_eval_tools-0.1.22.tar.gz | source | sdist | null | false | 3f6752f67feab2e1d064023948011c91 | 4bf2a5124d43511e1e59f53080f4cf79bb0b48cb89145a559b8f6d690b4d102e | 84b9c5276ef0929c0ed59a47db687e2a61828532924385c92827e4f0db205341 | null | [
"LICENSE"
] | 359 |
2.4 | uipath-runtime | 0.9.0 | Runtime abstractions and interfaces for building agents and automation scripts in the UiPath ecosystem | # UiPath Runtime
[](https://pypi.org/project/uipath-runtime/)
[](https://pypi.org/project/uipath-runtime/)
[](https://pypi.org/project/uipath-runtime/)
Runtime abstractions and contracts for the UiPath Python SDK.
## Overview
`uipath-runtime` provides the foundational interfaces and base contracts for building agent runtimes in the UiPath ecosystem. It defines the protocols that all runtime implementations must follow and provides utilities for execution context, event streaming, tracing, structured error handling, durable execution, and human-in-the-loop interactions.
This package is typically used as a dependency by higher-level SDKs such as:
- [`uipath`](https://github.com/uipath/uipath-python) [](https://pypi.org/project/uipath/) [](https://pypi.org/project/uipath/) [](https://github.com/uipath/uipath-python)
- [`uipath-langchain`](https://github.com/uipath/uipath-langchain-python) [](https://pypi.org/project/uipath-langchain/) [](https://pypi.org/project/uipath-langchain/) [](https://github.com/uipath/uipath-langchain-python)
- [`uipath-llamaindex`](https://github.com/uipath/uipath-integrations-python/tree/main/packages/uipath-llamaindex) [](https://pypi.org/project/uipath-llamaindex/) [](https://pypi.org/project/uipath-llamaindex/) [](https://github.com/uipath/uipath-integrations-python)
- [`uipath-mcp`](https://github.com/uipath/uipath-mcp-python) [](https://pypi.org/project/uipath-mcp/) [](https://pypi.org/project/uipath-mcp/) [](https://github.com/uipath/uipath-mcp-python)
You would use this directly only if you're building custom runtime implementations.
## Installation
```bash
uv add uipath-runtime
```
## Developer Tools
Check out [`uipath-dev`](https://github.com/uipath/uipath-dev-python) - an interactive terminal application for building, testing, and debugging UiPath Python runtimes, agents, and automation scripts.
## Runtime Protocols
All runtimes implement the `UiPathRuntimeProtocol` (or one of its sub-protocols):
- `get_schema()` — defines input and output JSON schemas.
- `execute(input, options)` — executes the runtime logic and returns a `UiPathRuntimeResult`.
- `stream(input, options)` — optionally streams runtime events for real-time monitoring.
- `dispose()` — releases resources when the runtime is no longer needed.
Any class that structurally implements these methods satisfies the protocol.
```python
from typing import Any, AsyncGenerator, Optional
from uipath.runtime import (
UiPathRuntimeResult,
UiPathRuntimeStatus,
UiPathRuntimeSchema,
UiPathRuntimeEvent,
UiPathExecuteOptions,
UiPathStreamOptions,
)
from uipath.runtime.events import UiPathRuntimeStateEvent
class MyRuntime:
"""Example runtime implementing the UiPath runtime protocols."""
async def get_schema(self) -> UiPathRuntimeSchema:
return UiPathRuntimeSchema(
input={
"type": "object",
"properties": {"message": {"type": "string"}},
"required": ["message"],
},
output={
"type": "object",
"properties": {"result": {"type": "string"}},
"required": ["result"],
},
)
async def execute(
self,
input: Optional[dict[str, Any]] = None,
options: Optional[UiPathExecuteOptions] = None,
) -> UiPathRuntimeResult:
return UiPathRuntimeResult(
output={"message": "Hello from MyRuntime"},
status=UiPathRuntimeStatus.SUCCESSFUL,
)
async def stream(
self,
input: Optional[dict[str, Any]] = None,
options: Optional[UiPathStreamOptions] = None,
) -> AsyncGenerator[UiPathRuntimeEvent, None]:
yield UiPathRuntimeStateEvent(payload={"status": "starting"})
yield UiPathRuntimeResult(
output={"completed": True},
status=UiPathRuntimeStatus.SUCCESSFUL,
)
async def dispose(self) -> None:
pass
```
## Event Streaming
Runtimes can optionally emit real-time events during execution:
```python
from uipath.runtime.events import (
UiPathRuntimeStateEvent,
UiPathRuntimeMessageEvent,
)
from uipath.runtime.result import UiPathRuntimeResult
async for event in runtime.stream({"query": "hello"}):
if isinstance(event, UiPathRuntimeStateEvent):
print(f"State update: {event.payload}")
elif isinstance(event, UiPathRuntimeMessageEvent):
print(f"Message received: {event.payload}")
elif isinstance(event, UiPathRuntimeResult):
print(f"Completed: {event.output}")
```
If a runtime doesn’t support streaming, it raises a `UiPathStreamNotSupportedError`.
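A caller that wants to degrade gracefully can catch this error and fall back to a single `execute()` call. A self-contained sketch — the exception class is stubbed locally here so the example runs on its own; in real code it is provided by the package, like the other error types:

```python
# Illustrative fallback: prefer streaming, degrade to execute() when the
# runtime raises UiPathStreamNotSupportedError. Stubbed locally for the sketch;
# in real code, import the exception from the package instead.
class UiPathStreamNotSupportedError(Exception):
    pass

async def run_with_fallback(runtime, input=None):
    try:
        events = []
        async for event in runtime.stream(input):
            events.append(event)
        return events
    except UiPathStreamNotSupportedError:
        return [await runtime.execute(input)]
```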
## Structured Error Handling
Runtime errors use a consistent, structured model:
```python
from uipath.runtime.errors import UiPathRuntimeError, UiPathErrorCode, UiPathErrorCategory
raise UiPathRuntimeError(
UiPathErrorCode.EXECUTION_ERROR,
"Agent failed",
"Failed to call external service",
UiPathErrorCategory.USER,
)
```
Resulting JSON contract:
```json
{
"code": "Python.EXECUTION_ERROR",
"title": "Agent failed",
"detail": "Failed to call external service",
"category": "User"
}
```
## Runtime Factory
`UiPathRuntimeFactoryProtocol` provides a consistent contract for discovering and creating runtime instances.
Factories decouple runtime construction (configuration, dependencies) from runtime execution, allowing orchestration, discovery, reuse, and tracing across multiple types of runtimes.
```python
from typing import Any, AsyncGenerator, Optional
from uipath.runtime import (
UiPathRuntimeResult,
UiPathRuntimeStatus,
UiPathRuntimeSchema,
UiPathExecuteOptions,
UiPathStreamOptions,
UiPathRuntimeProtocol,
UiPathRuntimeFactoryProtocol
)
class MyRuntimeFactory:
async def new_runtime(self, entrypoint: str, runtime_id: str) -> UiPathRuntimeProtocol:
return MyRuntime()
def discover_entrypoints(self) -> list[str]:
return []
factory = MyRuntimeFactory()
runtime = await factory.new_runtime("example", "id")
result = await runtime.execute()
print(result.output) # {'message': 'Hello from MyRuntime'}
```
## Execution Context
`UiPathRuntimeContext` manages configuration, file I/O, and logs across runtime execution.
It can read JSON input files, capture all stdout/stderr logs, and automatically write output and result files when execution completes.
```python
from uipath.runtime import UiPathRuntimeContext, UiPathRuntimeResult, UiPathRuntimeStatus
with UiPathRuntimeContext(input_file="input.json", result_file="result.json", logs_file="execution.log") as ctx:
ctx.result = await runtime.execute(ctx.input)
# On exit: the result and logs are written automatically to the configured files
```
When execution fails, the context:
- Writes a structured error contract to the result file.
- Re-raises the original exception.
## Execution Runtime
`UiPathExecutionRuntime` wraps any runtime with tracing, telemetry, and log collection capabilities. When running multiple runtimes in the same process, this wrapper ensures each execution's spans and logs are properly isolated and captured.
```mermaid
graph TB
TM[TraceManager<br/>Shared across all runtimes]
FACTORY[Factory]
RT[Runtime]
EXE[ExecutionRuntime<br/>exec-id: exec-id]
%% Factory creates runtimes
FACTORY -->|new_runtime| RT
%% Runtimes wrapped by ExecutionRuntime
RT -->|wrapped by| EXE
%% TraceManager shared with all
TM -.->|shared| EXE
%% Execution captures spans to TraceManager
EXE -->|captures spans| TM
%% Styling
style TM fill:#e1f5ff,stroke:#0277bd,stroke-width:3px
style FACTORY fill:#f3e5f5
style RT fill:#fff3e0
style EXE fill:#e8f5e9
```
```python
from uipath.core import UiPathTraceManager
from uipath.runtime import UiPathExecutionRuntime
trace_manager = UiPathTraceManager()
runtime = MyRuntime()
executor = UiPathExecutionRuntime(
runtime,
trace_manager,
root_span="my-runtime",
execution_id="exec-123",
)
result = await executor.execute({"message": "hello"})
spans = trace_manager.get_execution_spans("exec-123") # captured spans
logs = executor.log_handler.buffer # captured logs
print(result.output) # {'message': 'Hello from MyRuntime'}
```
## Example: Runtime Orchestration
This example demonstrates an **orchestrator runtime** that receives a `UiPathRuntimeFactoryProtocol`, creates child runtimes through it, and executes each one via `UiPathExecutionRuntime`, all within a single shared `UiPathTraceManager`.
<details>
<summary>Orchestrator Runtime</summary>
```python
from typing import Any, Optional, AsyncGenerator
from uipath.core import UiPathTraceManager
from uipath.runtime import (
UiPathExecutionRuntime,
UiPathRuntimeResult,
UiPathRuntimeStatus,
UiPathExecuteOptions,
UiPathStreamOptions,
UiPathRuntimeProtocol,
UiPathRuntimeFactoryProtocol,
UiPathRuntimeContext,
)
class ChildRuntime:
"""A simple child runtime that echoes its name and input."""
def __init__(self, name: str):
self.name = name
async def get_schema(self):
return None
async def execute(
self,
input: Optional[dict[str, Any]] = None,
options: Optional[UiPathExecuteOptions] = None,
) -> UiPathRuntimeResult:
payload = input or {}
return UiPathRuntimeResult(
output={
"runtime": self.name,
"input": payload,
},
status=UiPathRuntimeStatus.SUCCESSFUL,
)
async def stream(
self,
input: Optional[dict[str, Any]] = None,
options: Optional[UiPathStreamOptions] = None,
) -> AsyncGenerator[UiPathRuntimeResult, None]:
yield await self.execute(input, options)
async def dispose(self) -> None:
pass
class ChildRuntimeFactory:
"""Factory that creates ChildRuntime instances."""
async def new_runtime(self, entrypoint: str, runtime_id: str) -> UiPathRuntimeProtocol:
return ChildRuntime(name=entrypoint)
def discover_entrypoints(self) -> list[str]:
return []
class OrchestratorRuntime:
"""A runtime that orchestrates multiple child runtimes via a factory."""
def __init__(
self,
factory: UiPathRuntimeFactoryProtocol,
trace_manager: UiPathTraceManager,
):
self.factory = factory
self.trace_manager = trace_manager
async def get_schema(self):
return None
async def execute(
self,
input: Optional[dict[str, Any]] = None,
options: Optional[UiPathExecuteOptions] = None,
) -> UiPathRuntimeResult:
payload = input or {}
child_inputs: list[dict[str, Any]] = payload.get("children", [])
child_results: list[dict[str, Any]] = []
for i, child_input in enumerate(child_inputs):
# Use the factory to create a new child runtime
child_runtime = await self.factory.new_runtime(entrypoint=f"child-{i}", runtime_id=f"child-{i}")
# Wrap child runtime with tracing + logs
execution_id = f"child-{i}"
executor = UiPathExecutionRuntime(
delegate=child_runtime,
trace_manager=self.trace_manager,
root_span=f"child-span-{i}",
execution_id=execution_id,
)
# Execute child runtime
result = await executor.execute(child_input, options=options)
child_results.append(result.output or {})
child_spans = self.trace_manager.get_execution_spans(execution_id) # Captured spans
# Dispose the child runtime when finished
await child_runtime.dispose()
return UiPathRuntimeResult(
output={
"main": True,
"children": child_results,
},
status=UiPathRuntimeStatus.SUCCESSFUL,
)
async def stream(
self,
input: Optional[dict[str, Any]] = None,
options: Optional[UiPathStreamOptions] = None,
) -> AsyncGenerator[UiPathRuntimeResult, None]:
yield await self.execute(input, options)
async def dispose(self) -> None:
pass
# Example usage
async def main() -> None:
trace_manager = UiPathTraceManager()
factory = ChildRuntimeFactory()
options = UiPathExecuteOptions()
with UiPathRuntimeContext(job_id="main-job-001") as ctx:
runtime = OrchestratorRuntime(factory=factory, trace_manager=trace_manager)
input_data = {
"children": [
{"message": "hello from child 1"},
{"message": "hello from child 2"},
]
}
ctx.result = await runtime.execute(input=input_data, options=options)
print(ctx.result.output)
# Output:
# {
# "main": True,
# "children": [
# {"runtime": "child-0", "input": {"message": "hello from child 1"}},
# {"runtime": "child-1", "input": {"message": "hello from child 2"}}
# ]
# }
```
</details>
| text/markdown | null | null | null | Cristian Pufu <cristian.pufu@uipath.com> | null | null | [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Build Tools"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"uipath-core<0.6.0,>=0.5.0"
] | [] | [] | [] | [
"Homepage, https://uipath.com",
"Repository, https://github.com/UiPath/uipath-runtime-python",
"Documentation, https://uipath.github.io/uipath-python/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:04:43.650938 | uipath_runtime-0.9.0.tar.gz | 109,587 | 45/ce/82c4edfe6fa11807ea99babf7a53afbced4d3a17b3e020dfa6474ee4d73f/uipath_runtime-0.9.0.tar.gz | source | sdist | null | false | 03d8323422b7d01c9f91ebf424707d6f | bc24f6f96fe0ad1d8549b16df51607d4e85558afe04abee294c9dff9790ccc96 | 45ce82c4edfe6fa11807ea99babf7a53afbced4d3a17b3e020dfa6474ee4d73f | null | [
"LICENSE"
] | 50,025 |
2.4 | llm-ci-runner | 1.5.5 | A simple CI/CD utility for running LLM tasks with Semantic Kernel | # AI-First Toolkit: LLM-Powered Automation
[](https://badge.fury.io/py/llm-ci-runner) [](https://github.com/Nantero1/ai-first-devops-toolkit/actions/workflows/ci.yml) [](https://github.com/Nantero1/ai-first-devops-toolkit/actions/workflows/unit-tests.yml) [](https://htmlpreview.github.io/?https://github.com/Nantero1/ai-first-devops-toolkit/blob/python-coverage-comment-action-data/htmlcov/index.html) [](https://github.com/Nantero1/ai-first-devops-toolkit/actions/workflows/github-code-scanning/codeql) [](https://opensource.org/licenses/MIT) [](http://mypy-lang.org/) [](https://github.com/astral-sh/ruff) [](https://www.bestpractices.dev/projects/10922) [](https://www.pepy.tech/projects/llm-ci-runner)
> **🚀 The Future of DevOps is AI-First**
> This toolkit represents a step
> toward [AI-First DevOps](https://technologyworkroom.blogspot.com/2025/06/building-ai-first-devops.html) - where
> intelligent automation handles the entire development lifecycle. Built for teams ready to embrace the exponential
> productivity gains of AI-powered development. Please
> read [the blog post](https://technologyworkroom.blogspot.com/2025/06/building-ai-first-devops.html) for more details on
> the motivation.
## TLDR: What This Tool Does
**Purpose**: Transform any unstructured business knowledge into reliable, structured data that powers intelligent
automation across your entire organization.
**Perfect For**:
- 🏦 **Financial Operations**: Convert loan applications, audits, and regulatory docs into structured compliance data
- 🏥 **Healthcare Systems**: Transform patient records, clinical notes, and research data into medical formats
- ⚖️ **Legal & Compliance**: Process contracts, court docs, and regulatory texts into actionable risk assessments
- 🏭 **Supply Chain**: Turn logistics reports, supplier communications, and forecasts into optimization insights
- 👥 **Human Resources**: Convert resumes, performance reviews, and feedback into structured talent analytics
- 🛡️ **Security Operations**: Transform threat reports, incident logs, and assessments into standard frameworks
- 🚀 **DevOps & Engineering**: Use commit logs, deployment reports, and system logs for automated AI actions
- 🔗 **Enterprise Integration**: Connect any business process to downstream systems with guaranteed consistency
---
### Simple structured output example
```bash
# Install and use immediately
pip install llm-ci-runner
llm-ci-runner --input-file examples/02-devops/pr-description/input.json --schema-file examples/02-devops/pr-description/schema.json
```

### How Templates Work
```mermaid
graph LR
A[📦 template.yaml<br/>• Template<br/>• Schema<br/>• Model Settings] --> C[✨ Rendered Prompt]
B[⚙️ template-vars.yaml] --> C
C --> D[🤖 LLM Processing]
D --> E[📋 Structured Output]
style A stroke:#01579b,stroke-width:2px
style E stroke:#4a148c,stroke-width:2px
```
*With Semantic Kernel templates, everything is self-contained - prompt template, JSON schema, and model configuration in a single YAML file.*
## The AI-First Development Revolution
This toolkit embodies the principles outlined
in [Building AI-First DevOps](https://technologyworkroom.blogspot.com/2025/06/building-ai-first-devops.html):
| Traditional DevOps | AI-First DevOps (This Tool) |
|-----------------------------|-----------------------------------------------------|
| Manual code reviews | 🤖 AI-powered reviews with structured findings |
| Human-written documentation | 📝 AI-generated docs with guaranteed consistency |
| Reactive security scanning | 🔍 Proactive AI security analysis |
| Manual quality gates | 🎯 AI-driven validation with schema enforcement |
| Linear productivity | 📈 Exponential gains through intelligent automation |
## Features
- 🎯 **100% Schema Enforcement**: Your pipeline never gets invalid data. Token-level schema enforcement with guaranteed
compliance
- 🔄 **Resilient execution**: Retries with exponential back-off and jitter plus a clear exception hierarchy keep
transient cloud faults from breaking your CI.
- 🚀 **Zero-Friction CLI**: Single script, minimal configuration for pipeline integration and automation
- 🔐 **Enterprise Security**: Azure RBAC via DefaultAzureCredential with fallback to API Key
- 📦 **CI-friendly CLI**: Stateless command that reads JSON/YAML, writes JSON/YAML, and exits with proper codes
- 🎨 **Beautiful Logging**: Rich console output with timestamps and colors
- 📁 **File-based I/O**: CI/CD friendly with JSON/YAML input/output
- 📋 **Template-Driven Workflows**: Handlebars, Jinja2, and Microsoft Semantic Kernel YAML templates with YAML variables for dynamic prompt generation
- 📄 **YAML Support**: Use YAML for schemas, input files, and output files - more readable than JSON
- 🔧 **Simple & Extensible**: Easy to understand and modify for your specific needs
- 🤖 **Semantic Kernel foundation**: async, service-oriented design ready for skills, memories, orchestration, and future
model upgrades
- 📚 **Documentation**: Comprehensive documentation for all features and usage examples. Use your semantic kernel skills
to extend the functionality.
- 🧑⚖️ **Acceptance Tests**: pytest framework with the LLM-as-Judge pattern for quality gates. Test your scripts before
you run them in production.
- 💰 **Coming soon**: token usage and cost estimation appended to each result for budgeting and optimization
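The resilient-execution behavior above (retries with exponential back-off and jitter, provided under the hood by tenacity) can be sketched in plain Python. This is an illustration of the pattern, not the package's actual retry configuration; `with_retries` and its numeric defaults are hypothetical:

```python
import random
import time

def with_retries(call, attempts=4, base=0.5, cap=8.0):
    """Retry `call` on exception with exponential back-off plus full jitter."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            delay = min(cap, base * 2 ** attempt)
            time.sleep(random.uniform(0, delay))  # full jitter spreads retry storms

# Example: a flaky call that succeeds on the third try
state = {"n": 0}
def flaky():
    state["n"] += 1
    if state["n"] < 3:
        raise ConnectionError("transient fault")
    return "ok"

print(with_retries(flaky))  # ok
```

Jittered back-off keeps many CI jobs from retrying in lockstep against the same rate-limited endpoint.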
## 🚀 The Only Enterprise AI DevOps Tool That Delivers RBAC Security, Robustness and Simplicity
**LLM-CI-Runner stands alone in the market** as the only tool combining **100% schema enforcement**, **enterprise RBAC
authentication**, and robust **Semantic Kernel integration with templates** in a single CLI solution. **No other tool
delivers all three critical enterprise requirements together**.
## Installation
```bash
pip install llm-ci-runner
```
That's it! No complex setup, no dependency management - just install and use. Perfect for CI/CD pipelines and local
development.
## Quick Start
### 1. Install from PyPI
```bash
pip install llm-ci-runner
```
### 2. Set Environment Variables
**Azure OpenAI (Priority 1):**
```bash
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_MODEL="gpt-4.1-nano" # or any other GPT deployment name
export AZURE_OPENAI_API_VERSION="2025-01-01-preview" # Optional, supports model-router and all models
```
**OpenAI (Fallback):**
```bash
export OPENAI_API_KEY="your-very-secret-api-key"
export OPENAI_CHAT_MODEL_ID="gpt-4.1-nano" # or any OpenAI model
export OPENAI_ORG_ID="org-your-org-id" # Optional
```
**Authentication Options:**
- **Azure RBAC (Recommended)**: Uses `DefaultAzureCredential` for Azure RBAC authentication - no API key needed!
See [Microsoft Docs](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python)
for setup.
- **Azure API Key**: Set `AZURE_OPENAI_API_KEY` environment variable if not using RBAC.
- **OpenAI API Key**: Required for OpenAI fallback when Azure is not configured.
**Priority**: Azure OpenAI takes priority when both Azure and OpenAI environment variables are present.
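The Azure-first priority can be pictured with a small sketch. The real selection logic lives inside the package; `pick_provider` is an illustrative name, not part of the public API:

```python
def pick_provider(env: dict) -> str:
    """Illustrative Azure-first selection: Azure OpenAI wins when its
    endpoint is configured, otherwise fall back to OpenAI."""
    if env.get("AZURE_OPENAI_ENDPOINT"):
        return "azure_openai"  # RBAC via DefaultAzureCredential, or AZURE_OPENAI_API_KEY
    if env.get("OPENAI_API_KEY"):
        return "openai"
    raise RuntimeError("No LLM provider configured")

# Azure takes priority even when both sets of variables are present
print(pick_provider({
    "AZURE_OPENAI_ENDPOINT": "https://your-resource.openai.azure.com/",
    "OPENAI_API_KEY": "sk-...",
}))  # azure_openai
```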
### 3a. Basic Usage
```bash
# Simple chat example
llm-ci-runner --input-file examples/01-basic/simple-chat/input.json
# With structured output schema
llm-ci-runner \
  --input-file examples/01-basic/sentiment-analysis/input.json \
  --schema-file examples/01-basic/sentiment-analysis/schema.json

# Custom output file
llm-ci-runner \
  --input-file examples/02-devops/pr-description/input.json \
  --schema-file examples/02-devops/pr-description/schema.json \
  --output-file pr-analysis.json

# YAML input files (alternative to JSON)
llm-ci-runner \
  --input-file config.yaml \
  --schema-file schema.yaml \
  --output-file result.yaml
```
### 3b. Template-Based Workflows
**Dynamic prompt generation with YAML, Handlebars, Jinja2, or Microsoft Semantic Kernel templates:**
```bash
# Handlebars template example
llm-ci-runner \
  --template-file examples/05-templates/handlebars-template/template.hbs \
  --template-vars examples/05-templates/handlebars-template/template-vars.yaml \
  --schema-file examples/05-templates/handlebars-template/schema.yaml \
  --output-file handlebars-result.yaml

# Or using Jinja2 templates
llm-ci-runner \
  --template-file examples/05-templates/jinja2-template/template.j2 \
  --template-vars examples/05-templates/jinja2-template/template-vars.yaml \
  --schema-file examples/05-templates/jinja2-template/schema.yaml \
  --output-file jinja2-result.yaml

# Or using Microsoft Semantic Kernel YAML templates with embedded schemas
llm-ci-runner \
  --template-file examples/05-templates/sem-ker-structured-analysis/template.yaml \
  --template-vars examples/05-templates/sem-ker-structured-analysis/template-vars.yaml \
  --output-file sk-analysis-result.json
```
For more examples see the [examples directory](https://github.com/Nantero1/ai-first-devops-toolkit/tree/main/examples).
**Benefits of Template Approach:**
- 🎯 **Reusable Templates**: Create once, use across multiple scenarios
- 📝 **YAML Configuration**: More readable than JSON for complex setups
- 🔄 **Dynamic Content**: Variables and conditional rendering
- 🚀 **CI/CD Ready**: Perfect for parameterized pipeline workflows
- 🤖 **Semantic Kernel Integration**: Microsoft Semantic Kernel YAML templates with embedded schemas and model settings
### 4. Python Library Usage
**You can use LLM CI Runner directly from Python with both file-based and string-based templates, and with either dict
or file-based variables and schemas. The main entrypoint is:**
```python
from llm_ci_runner.core import run_llm_task # Adjust import as needed for your package layout
```
#### Basic Usage: File-Based Input
```python
import asyncio
from llm_ci_runner.core import run_llm_task
async def main():
    # Run with a traditional JSON input file (messages, etc.)
    response = await run_llm_task(_input_file="examples/01-basic/simple-chat/input.json")
    print(response)

asyncio.run(main())
```
#### File-Based Template Usage
```python
import asyncio
from llm_ci_runner.core import run_llm_task
async def main():
    # Handlebars, Jinja2, or Semantic Kernel YAML template via file
    response = await run_llm_task(
        template_file="examples/05-templates/pr-review-template/template.hbs",
        template_vars_file="examples/05-templates/pr-review-template/template-vars.yaml",
        schema_file="examples/05-templates/pr-review-template/schema.yaml",
        output_file="analysis.json",
    )
    print(response)

asyncio.run(main())
```
#### String-Based Template Usage
```python
import asyncio
from llm_ci_runner.core import run_llm_task
async def main():
    # String template (Handlebars example)
    response = await run_llm_task(
        template_content="Hello {{name}}!",
        template_format="handlebars",
        template_vars={"name": "World"},
    )
    print(response)

asyncio.run(main())
```
#### Semantic Kernel YAML Template with Embedded Schema
Microsoft Semantic Kernel YAML templates provide embedded JSON schemas and model settings directly in the template. See [Microsoft Semantic Kernel YAML Template Documentation](https://learn.microsoft.com/en-us/semantic-kernel/concepts/prompts/yaml-schema) for more details.
Please refer to the full example in [examples/05-templates/sem-ker-structured-analysis/README.md](https://github.com/Nantero1/ai-first-devops-toolkit/blob/main/examples/05-templates/sem-ker-structured-analysis/README.md).
```python
import asyncio
from llm_ci_runner.core import run_llm_task
async def main():
    template_content = """
template: "Analyze: {{input_text}}"
input_variables:
  - name: input_text
execution_settings:
  azure_openai:
    temperature: 0.1
    response_format:
      type: json_schema
      json_schema:
        schema:
          type: object
          properties:
            sentiment: {type: string, enum: [positive, negative, neutral]}
            confidence: {type: number, minimum: 0, maximum: 1}
          required: [sentiment, confidence]
"""
    response = await run_llm_task(
        template_content=template_content,
        template_format="semantic-kernel",
        template_vars={"input_text": "Sample data"},
    )
    print(response)

asyncio.run(main())
```
#### Advanced: Dict-based Schema and Variables
```python
import asyncio
from llm_ci_runner.core import run_llm_task
async def main():
    schema = {
        "type": "object",
        "properties": {
            "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
            "confidence": {"type": "number", "minimum": 0, "maximum": 1},
        },
        "required": ["sentiment", "confidence"],
    }
    template = "Analyze this review: {{review}}"
    variables = {"review": "I love the new update!"}
    response = await run_llm_task(
        template_content=template,
        template_format="handlebars",
        template_vars=variables,
        schema=schema,
    )
    print(response)

asyncio.run(main())
```
#### Notes & Tips
- **Only one of** `_input_file`, `template_file`, or `template_content` **may be specified** at a time.
- **Template variables**: Use `template_vars` (Python dict or YAML file path), or `template_vars_file` (YAML file path).
- **Schema**: Use `schema` (dict or JSON/YAML file path), or `schema_file` (file path).
- **template_format** is required with `template_content`. Allowed: `"handlebars"`, `"jinja2"`, `"semantic-kernel"`.
- **output_file**: If specified, writes response to file.
**Returns:** String (for text output) or dict (for structured JSON output).
**Errors:** Raises `InputValidationError` or `LLMRunnerError` on invalid input or execution failure.
### 5. Development Setup (Optional)
For contributors or advanced users who want to modify the source:
```bash
# Install UV if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone and install for development
git clone https://github.com/Nantero1/ai-first-devops-toolkit.git
cd ai-first-devops-toolkit
uv sync
# Run from source
uv run llm-ci-runner --input-file examples/01-basic/simple-chat/input.json
```
## The AI-First Transformation: Why Unstructured → Structured Matters
LLMs excel at extracting meaning from messy text, logs, documents, and mixed-format data, then emitting **schema-compliant JSON/YAML** that downstream systems can trust. This unlocks:
- **🔄 Straight-Through Processing**: Structured payloads feed BI dashboards, RPA robots, and CI/CD gates without human
parsing
- **🎯 Context-Aware Decisions**: LLMs fuse domain knowledge with live telemetry to prioritize incidents, forecast
demand, and spot security drift
- **📋 Auditable Compliance**: Formal outputs make it easy to track decisions for regulators and ISO/NIST audits
- **⚡ Rapid Workflow Automation**: Enable automation across customer service, supply-chain planning, HR case handling,
and security triage
- **🔗 Safe Pipeline Composition**: Structured contracts let AI-first pipelines remain observable and composable while
capitalizing on unstructured enterprise data
## Input Formats
### Traditional JSON Input
```json
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Your task description here"
    }
  ],
  "context": {
    "session_id": "optional-session-id",
    "metadata": {
      "any": "additional context"
    }
  }
}
```
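A cheap stdlib sanity check on this input shape can catch malformed files before a pipeline run spends tokens. The checks below mirror the documented format but are an illustration, not the runner's actual validation logic:

```python
import json

def check_input(payload: dict) -> list:
    """Return a list of problems found in a messages-style input payload."""
    problems = []
    messages = payload.get("messages")
    if not isinstance(messages, list) or not messages:
        problems.append("'messages' must be a non-empty list")
        return problems
    for i, msg in enumerate(messages):
        if msg.get("role") not in {"system", "user", "assistant"}:
            problems.append(f"message {i}: unexpected role {msg.get('role')!r}")
        if not isinstance(msg.get("content"), str):
            problems.append(f"message {i}: 'content' must be a string")
    return problems

payload = json.loads("""
{"messages": [{"role": "system", "content": "You are a helpful assistant."},
              {"role": "user", "content": "Your task description here"}]}
""")
print(check_input(payload))  # []
```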
### Microsoft Semantic Kernel YAML Templates
The toolkit supports Microsoft Semantic Kernel YAML templates with embedded schemas and execution settings. See [examples/05-templates/](examples/05-templates/) for comprehensive examples.
**Simple Question Template** (`template.yaml`):
```yaml
name: SimpleQuestion
description: Simple semantic kernel template for asking questions
template_format: semantic-kernel
template: |
  You are a helpful {{$role}} assistant.
  Please answer this question: {{$question}}
  Provide a clear and concise response.
input_variables:
  - name: role
    description: The role of the assistant (e.g., technical, customer service)
    default: "technical"
    is_required: false
  - name: question
    description: The question to be answered
    is_required: true
execution_settings:
  azure_openai:
    temperature: 0.7
    max_tokens: 500
    top_p: 1.0
**Template Variables** (`template-vars.yaml`):
```yaml
role: expert DevOps engineer
question: What is the difference between continuous integration and continuous deployment?
```
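To see how the `{{$variable}}` placeholders and declared defaults combine at render time, here is a stdlib-only sketch. Semantic Kernel's real template engine is far more capable; `render_sk` is purely illustrative:

```python
import re

def render_sk(template: str, input_variables: list, variables: dict) -> str:
    """Substitute {{$name}} placeholders, applying declared defaults
    and enforcing is_required."""
    values = {v["name"]: v.get("default") for v in input_variables}
    values.update(variables)
    for v in input_variables:
        if v.get("is_required") and values.get(v["name"]) is None:
            raise ValueError(f"missing required variable: {v['name']}")
    return re.sub(r"\{\{\$(\w+)\}\}", lambda m: str(values[m.group(1)]), template)

template = "You are a helpful {{$role}} assistant. Question: {{$question}}"
input_vars = [
    {"name": "role", "default": "technical", "is_required": False},
    {"name": "question", "is_required": True},
]
print(render_sk(template, input_vars, {"question": "What is CI vs CD?"}))
# You are a helpful technical assistant. Question: What is CI vs CD?
```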
**Structured Analysis Template** with embedded JSON schema:
```yaml
name: StructuredAnalysis
description: SK template with embedded JSON schema for structured output
template_format: semantic-kernel
template: |
  Analyze the following text and provide a structured response: {{$text_to_analyze}}
input_variables:
  - name: text_to_analyze
    description: The text content to analyze
    is_required: true
execution_settings:
  azure_openai:
    model_id: gpt-4.1-stable
    temperature: 0.3
    max_tokens: 800
    response_format:
      type: json_schema
      json_schema:
        name: analysis_result
        schema:
          type: object
          properties:
            sentiment:
              type: string
              enum: ["positive", "negative", "neutral"]
              description: Overall sentiment of the text
            confidence:
              type: number
              minimum: 0.0
              maximum: 1.0
              description: Confidence score for the sentiment analysis
            key_themes:
              type: array
              items:
                type: string
              description: Main themes identified in the text
            summary:
              type: string
              description: Brief summary of the text
            word_count:
              type: integer
              description: Approximate word count
          required: ["sentiment", "confidence", "summary"]
          additionalProperties: false
```
### Template-Based Input (Handlebars & Jinja2)
**Handlebars Template** (`template.hbs`):
```handlebars
<message role="system">
You are an expert {{expertise.domain}} engineer.
Focus on {{expertise.focus_areas}}.
</message>
<message role="user">
Analyze this {{task.type}}:
{{#each task.items}}
- {{this}}
{{/each}}
Requirements: {{task.requirements}}
</message>
```
**Jinja2 Template** (`template.j2`):
```jinja2
<message role="system">
You are an expert {{expertise.domain}} engineer.
Focus on {{expertise.focus_areas}}.
</message>
<message role="user">
Analyze this {{task.type}}:
{% for item in task.items %}
- {{item}}
{% endfor %}
Requirements: {{task.requirements}}
</message>
```
**Template Variables** (`vars.yaml`):
```yaml
expertise:
  domain: "DevOps"
  focus_areas: "security, performance, maintainability"
task:
  type: "pull request"
  items:
    - "Changed authentication logic"
    - "Updated database queries"
    - "Added input validation"
  requirements: "Focus on security vulnerabilities"
```
## Structured Outputs with 100% Schema Enforcement
When you provide a `--schema-file`, the runner guarantees perfect schema compliance:
```bash
llm-ci-runner \
  --input-file examples/01-basic/sentiment-analysis/input.json \
  --schema-file examples/01-basic/sentiment-analysis/schema.json
```
**Note**: Output defaults to `result.json`. Use `--output-file custom-name.json` for custom output files.
**Supported Schema Features**:
✅ String constraints (enum, minLength, maxLength, pattern)
✅ Numeric constraints (minimum, maximum, multipleOf)
✅ Array constraints (minItems, maxItems, items type)
✅ Required fields enforced at generation time
✅ Type validation (string, number, integer, boolean, array)
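Even with generation-time enforcement, a defensive post-hoc check in the pipeline is cheap. This stdlib-only sketch covers a tiny subset of JSON Schema (type, enum, required, numeric bounds) for flat objects; it is illustrative only, and a full validator such as the `jsonschema` package is the right tool in practice:

```python
def check(instance: dict, schema: dict) -> list:
    """Validate a flat object against a tiny JSON Schema subset."""
    errors = []
    for field in schema.get("required", []):
        if field not in instance:
            errors.append(f"missing required field: {field}")
    for field, rules in schema.get("properties", {}).items():
        if field not in instance:
            continue
        value = instance[field]
        if rules.get("type") == "string" and not isinstance(value, str):
            errors.append(f"{field}: expected string")
        if rules.get("type") == "number" and not isinstance(value, (int, float)):
            errors.append(f"{field}: expected number")
        if "enum" in rules and value not in rules["enum"]:
            errors.append(f"{field}: {value!r} not in {rules['enum']}")
        if "minimum" in rules and isinstance(value, (int, float)) and value < rules["minimum"]:
            errors.append(f"{field}: below minimum")
        if "maximum" in rules and isinstance(value, (int, float)) and value > rules["maximum"]:
            errors.append(f"{field}: above maximum")
    return errors

schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["sentiment", "confidence"],
}
print(check({"sentiment": "positive", "confidence": 0.93}, schema))  # []
```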
## CI/CD Integration
### GitHub Actions Example
```yaml
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: '3.12'

- name: Install LLM CI Runner
  run: pip install llm-ci-runner

- name: Generate PR Review with Templates
  run: |
    llm-ci-runner \
      --template-file .github/templates/pr-review.j2 \
      --template-vars pr-context.yaml \
      --schema-file .github/schemas/pr-review.yaml \
      --output-file pr-analysis.yaml
  env:
    AZURE_OPENAI_ENDPOINT: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
    AZURE_OPENAI_MODEL: ${{ secrets.AZURE_OPENAI_MODEL }}
```
For complete CI/CD examples, see **[examples/uv-usage-example.md](https://github.com/Nantero1/ai-first-devops-toolkit/blob/main/examples/uv-usage-example.md)**. This repo also uses itself for release note generation; check it out [here](https://github.com/Nantero1/ai-first-devops-toolkit/blob/c4066d347ae14d37cb674e36007a678f38b36439/.github/workflows/release.yml#L145-L149).
## Authentication
**Azure OpenAI**: Uses Azure's `DefaultAzureCredential` supporting:
- Environment variables (local development)
- Managed Identity (recommended for Azure CI/CD)
- Azure CLI (local development)
- Service Principal (non-Azure CI/CD)
**OpenAI**: Uses API key authentication with optional organization ID.
## Testing
We maintain comprehensive test coverage with **100% success rate**:
```bash
# For package users - install test dependencies
pip install llm-ci-runner[dev]
# For development - install from source with test dependencies
uv sync --group dev
# Run specific test categories
pytest tests/unit/ -v # 70 unit tests
pytest tests/integration/ -v # End-to-end examples
pytest acceptance/ -v # LLM-as-judge evaluation
# Or with uv for development
uv run pytest tests/unit/ -v
uv run pytest tests/integration/ -v
uv run pytest acceptance/ -v
```
## Architecture
Built on **Microsoft Semantic Kernel** for:
- Enterprise-ready Azure OpenAI and OpenAI integration
- Future-proof model compatibility
- **100% Schema Enforcement**: KernelBaseModel integration with token-level constraints
- **Dynamic Model Creation**: Runtime JSON schema → Pydantic model conversion
- **Azure RBAC**: Azure RBAC via DefaultAzureCredential
- **Automatic Fallback**: Azure-first priority with OpenAI fallback
## The AI-First Development Journey
This toolkit is your first step
toward [AI-First DevOps](https://technologyworkroom.blogspot.com/2025/06/building-ai-first-devops.html). As you
integrate AI into your development workflows, you'll experience:
1. **🚀 Exponential Productivity**: AI handles routine tasks while you focus on architecture
2. **🎯 Guaranteed Quality**: Schema enforcement eliminates validation errors
3. **🤖 Autonomous Operations**: AI agents make decisions in your pipelines
4. **📈 Continuous Improvement**: Every interaction improves your AI system
**The future belongs to teams that master AI-first principles.** This toolkit gives you the foundation to start that
journey today.
## Real-World Examples
You can explore the **[examples directory](https://github.com/Nantero1/ai-first-devops-toolkit/tree/main/examples)** for
a complete collection of self-contained examples organized by category.
For comprehensive real-world CI/CD scenarios, see **[examples/uv-usage-example.md](https://github.com/Nantero1/ai-first-devops-toolkit/blob/main/examples/uv-usage-example.md)**.
### 100 AI Automation Use Cases for AI-First Automation
**DevOps & Engineering** 🔧
1. 🤖 AI-generated PR review – automated pull request analysis with structured review findings
2. 📝 Release note composer – map commits to semantic-version bump rules and structured changelogs
3. 🔍 Vulnerability scanner – map code vulnerabilities to OWASP standards with actionable remediation
4. ☸️ Kubernetes manifest optimizer – produce risk-scored diffs and security hardening recommendations
5. 📊 Log anomaly triager – convert system logs into OTEL-formatted events for SIEM ingestion
6. 💰 Cloud cost explainer – output tagged spend by team in FinOps schema for budget optimization
7. 🔄 API diff analyzer – produce backward-compatibility scorecards from specification changes
8. 🛡️ IaC drift detector – turn Terraform plans into CVE-linked security findings
9. 📋 Dependency license auditor – emit SPDX-compatible reports for compliance tracking
10. 🎯 SLA breach summarizer – file structured JIRA tickets with SMART action items
**Governance, Risk & Compliance** 🏛️
11. 📊 Regulatory delta analyzer – emit change-impact matrices from new compliance requirements
12. 🌱 ESG report synthesizer – map CSR prose to GRI indicators and sustainability metrics
13. 📋 SOX-404 narrative converter – transform controls descriptions into testable audit checklists
14. 🏦 Basel III stress-test interpreter – output capital risk buckets from regulatory scenarios
15. 🕵️ AML SAR formatter – convert investigator notes into Suspicious Activity Report structures
16. 🔒 Privacy policy parser – generate GDPR data-processing-activity logs from legal text
17. 🔍 Internal audit evidence linker – export control traceability graphs for compliance tracking
18. 📊 Carbon emission disclosure normalizer – structure sustainability data into XBRL taxonomy
19. ⚖️ Regulatory update tracker – generate structured compliance action items from guideline changes
20. 🛡️ Safety inspection checker – transform narratives into OSHA citation checklists
**Financial Services** 🏦
21. 🏦 Loan application analyzer – transform free-text applications into Basel-III risk-model inputs
22. 📊 Earnings call sentiment quantifier – output KPI deltas and investor sentiment scores
23. 💹 Budget variance explainer – produce drill-down pivot JSON for financial analysis
24. 📈 Portfolio risk dashboard builder – feed VaR models with structured investment analysis
25. 💳 Fraud alert generator – map investigation notes to CVSS-scored security metrics
26. 💰 Treasury cash-flow predictor – ingest email forecasts into structured planning models
27. 📊 Financial forecaster – summarize reports into structured cash-flow and projection objects
28. 🧾 Invoice processor – convert receipts into double-entry ledger posts with GAAP tags
29. 📋 Stress test scenario packager – structure regulatory submission data for banking compliance
30. 🏦 Insurance claim assessor – return structured claim-decision objects with risk scores
**Healthcare & Life Sciences** 🏥
31. 🏥 Patient intake processor – build HL7/FHIR-compliant patient records from free-form intake forms
32. 🧠 Mental health triage assistant – structure referral notes with priority classifications and care pathways
33. 📊 Radiology report coder – output SNOMED-coded JSON from diagnostic imaging narratives
34. 💊 Clinical trial note packager – create FDA eCTD modules from research documentation
35. 📋 Prescription parser – turn text prescriptions into structured e-Rx objects with dosage validation
36. ⚡ Vital sign anomaly summarizer – generate alert reports with clinical priority rankings
37. 🧪 Lab result organizer – output LOINC-coded tables from diagnostic test narratives
38. 🏥 Medical device log summarizer – generate UDI incident files for regulatory reporting
39. 📈 Patient feedback sentiment analyzer – feed quality-of-care KPIs from satisfaction surveys
40. 👩⚕️ Clinical observation compiler – convert research notes into structured data for trials
**Legal & Compliance** ⚖️
41. 🏛️ Legal contract parser – extract clauses and compute risk scores from contract documents
42. 📝 Court opinion digest – summarize judicial opinions into structured precedent and citation graphs
43. 🏛️ Legal discovery summarizer – extract key issues and risks from large document sets
44. 💼 Contract review summarizer – extract risk factors and key dates from legal contracts
45. 🏛️ Policy impact assessor – convert policy proposals into stakeholder impact matrices
46. 📜 Patent novelty comparator – produce claim-overlap matrices from prior art analysis
47. 🏛️ Legal bill auditor – transform billing details into itemized expense and compliance reports
48. 📋 Case strategy brainstormer – summarize likely arguments from litigation documentation
49. 💼 Legal email analyzer – extract key issues and deadlines from email threads for review
50. ⚖️ Expert witness report normalizer – create citation-linked outlines from testimony records
**Customer Experience & Sales** 🛒
51. 🎧 Tier-1 support chatbot – convert customer queries into tickets with reproducible troubleshooting steps
52. ⭐ Review sentiment miner – produce product-feature tallies from customer feedback analysis
53. 📉 Churn risk email summarizer – export CRM risk scores from customer communication patterns
54. 🗺️ Omnichannel conversation unifier – generate customer journey maps from multi-platform interactions
55. ❓ Dynamic FAQ builder – structure knowledge base content from community forum discussions
56. 📋 Proposal auto-grader – output RFP compliance matrices with scoring rubrics
57. 📈 Upsell opportunity extractor – create lead-scoring JSON from customer interaction analysis
58. 📱 Social media crisis detector – feed escalation playbooks with brand sentiment monitoring
59. 🌐 Multilingual intent router – tag customer chats to appropriate support queues by language/topic
60. 🎯 Marketing copy generator – create brand-compliant content with tone and messaging constraints
**HR & People Operations** 👥
61. 📄 CV-to-JD matcher – rank candidates with explainable competency scores and fit analysis
62. 🎤 Interview transcript summarizer – export structured competency rubrics with evaluation criteria
63. ✅ Onboarding policy compliance checker – produce new-hire checklist completion tracking
64. 📊 Performance review sentiment analyzer – create growth-plan JSON with development recommendations
65. 💰 Payroll inquiry classifier – map employee emails to structured case codes for HR processing
66. 🏥 Benefits Q&A automation – generate eligibility responses from policy documentation
67. 🚪 Exit interview insight extractor – feed retention dashboards with structured departure analytics
68. 📚 Training content gap mapper – align job roles to skill taxonomies for learning programs
69. 🛡️ Workplace incident processor – convert safety reports into OSHA 301 compliance records
70. 📊 Diversity metric synthesizer – summarize inclusion survey data into actionable insights
**Supply Chain & Manufacturing** 🏭
71. 📊 Demand forecast summarizer – output SKU-level predictions from market analysis and sales data
72. 📋 Purchase order processor – convert supplier communications into structured ERP line-items
73. 🌱 Supplier risk scanner – generate ESG compliance scores from vendor assessment reports
74. 🔧 Predictive maintenance log analyst – produce work orders from equipment telemetry narratives
75. 🚛 Logistics delay explainer – return route-change suggestions from transportation disruption reports
76. ♻️ Circular economy return classifier – create refurbishment tags from product return descriptions
77. 🌍 Carbon footprint calculator – map transport legs to CO₂e emissions for sustainability reporting
78. 📦 Safety stock alert generator – output inventory triggers with lead-time assumptions
79. 📜 Regulatory import/export harmonizer – produce HS-code sheets from trade documentation
80. 🏭 Production yield analyzer – generate efficiency reports from manufacturing floor logs
**Security & Risk Management** 🔒
81. 🛡️ MITRE ATT&CK mapper – translate IDS alerts into tactic-technique JSON for threat intelligence
82. 🎣 Phishing email extractor – produce IOC STIX bundles from security incident reports
83. 🔐 Zero-trust policy generator – convert narrative access requests into structured policy rules
84. 🚨 SOC alert deduplicator – cluster security tickets by kill-chain stage for efficient triage
85. 🏴☠️ Red team debrief summarizer – export OWASP Top-10 gaps from penetration test reports
86. 📋 Data breach notifier – craft GDPR-compliant disclosure packets with timeline and impact data
87. 🧠 Threat intel feed normalizer – convert mixed security PDFs into MISP threat objects
88. 🔍 Secret leak scanner – output GitHub code-owner mentions from repository security scans
89. 📊 Vendor risk questionnaire scorer – generate SIG Lite security assessment answers
90. 🏗️ Security audit tracker – link ISO-27001 controls to evidence artifacts for compliance
**Knowledge & Productivity** 📚
91. 🎙️ Meeting transcript processor – extract action items with owners and deadlines into project tracking JSON
92. 📚 Research paper summarizer – export citation graphs and key findings for literature review databases
93. 📋 SOP generator – convert process narratives into step-by-step validation checklists
94. 🔄 Code diff summarizer – generate reviewer hints and impact analysis from version control changes
95. 📊 API changelog analyzer – produce backward-compatibility scorecards for development teams
96. 🧠 Mind map creator – structure brainstorming sessions into hierarchical knowledge trees
97. 📖 Knowledge base gap detector – suggest article stubs from frequently asked questions analysis
98. 🎯 Personal OKR journal parser – output progress dashboards with milestone tracking
99. 💼 White paper composer – transform technical discussions into structured thought leadership content
100. 🧩 Universal transformer – convert any unstructured domain knowledge into your custom schema-validated JSON
## License
MIT License - See [LICENSE](https://github.com/Nantero1/ai-first-devops-toolkit/blob/main/LICENSE) file for details.
Copyright (c) 2025, Benjamin Linnik.
## Support
**🐛 Found a bug? 💡 Have a question? 📚 Need help?**
**GitHub is your primary destination for all support:**
- **📋 Issues & Bug Reports**: [Create an issue](https://github.com/Nantero1/ai-first-devops-toolkit/issues)
- **📖 Documentation**: [Browse examples](https://github.com/Nantero1/ai-first-devops-toolkit/tree/main/examples)
- **🔧 Source Code**: [View source](https://github.com/Nantero1/ai-first-devops-toolkit)
**Before opening an issue, please:**
1. ✅ Check the [examples directory](https://github.com/Nantero1/ai-first-devops-toolkit/tree/main/examples) for
solutions
2. ✅ Review the error logs (beautiful output with Rich!)
3. ✅ Validate your Azure authentication and permissions
4. ✅ Ensure your input JSON follows the required format
5. ✅ Search existing [issues](https://github.com/Nantero1/ai-first-devops-toolkit/issues) for similar problems
**Quick Links:**
- 🚀 [Getting Started Guide](https://github.com/Nantero1/ai-first-devops-toolkit#quick-start)
- 📚 [Complete Examples](https://github.com/Nantero1/ai-first-devops-toolkit/tree/main/examples)
- 🔧 [CI/CD Integration](https://github.com/Nantero1/ai-first-devops-toolkit#cicd-integration)
- 🎯 [Use Cases](https://github.com/Nantero1/ai-first-devops-toolkit#use-cases)
---
*Ready to embrace the AI-First future? Start with this toolkit and build your path to exponential productivity. Learn
more about the AI-First DevOps revolution
in [Building AI-First DevOps](https://technologyworkroom.blogspot.com/2025/06/building-ai-first-devops.html).*
| text/markdown | null | Benjamin Linnik <Benjamin@Linnik.IT> | null | null | MIT | ai, automation, azure-openai, ci-cd, devops, llm, semantic-kernel | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Lang... | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.13.0",
"azure-core>=1.35.0",
"azure-identity>=1.24.0",
"json-schema-to-pydantic>=0.4.0",
"openai>=1.93.0",
"pydantic>=2.11.0",
"rich>=14.0.0",
"ruamel-yaml>=0.18.0",
"semantic-kernel>=1.39.3",
"tenacity>=9.1.0",
"werkzeug>=3.1.5"
] | [] | [] | [] | [
"Homepage, https://technologyworkroom.blogspot.com/2025/06/building-ai-first-devops.html",
"Repository, https://github.com/Nantero1/ai-first-devops-toolkit",
"Documentation, https://github.com/Nantero1/ai-first-devops-toolkit",
"Bug Tracker, https://github.com/Nantero1/ai-first-devops-toolkit/issues",
"Sour... | twine/6.2.0 CPython/3.12.12 | 2026-02-18T22:03:54.338883 | llm_ci_runner-1.5.5.tar.gz | 881,702 | ba/45/da2c80319052098ee88f167d1e44262474f6075a94922facf4fcd6866bc4/llm_ci_runner-1.5.5.tar.gz | source | sdist | null | false | 22c18329d174f2fe8fb6ede21f712fe4 | 689ee6c9ed1af1bcdd39455be58053f8e93d09ae06ff5790d498036416e6340f | ba45da2c80319052098ee88f167d1e44262474f6075a94922facf4fcd6866bc4 | null | [
"LICENSE",
"NOTICE"
] | 244 |
2.4 | nginx-ldap-auth-service | 2.6.1 | A FastAPI app that authenticates users via LDAP and sets a cookie for nginx | # nginx-ldap-auth-service
`nginx-ldap-auth-service` is a high-performance authentication daemon built with [FastAPI](https://fastapi.tiangolo.com/). It provides an authentication bridge between [nginx](https://nginx.org/) and LDAP or Active Directory servers, including support for Duo MFA.
It works in conjunction with nginx's [ngx_http_auth_request_module](http://nginx.org/en/docs/http/ngx_http_auth_request_module.html) to provide a seamless login experience for your web applications.
## Features
- **LDAP/Active Directory Integration**: Authenticate users against any LDAP-compliant server or Microsoft Active Directory.
- **FastAPI Powered**: High performance, asynchronous connection management, and modern implementation.
- **Login Form & Session Management**: Built-in login form and session handling.
- **Duo MFA Support**: Optional Duo Multi-Factor Authentication workflow.
- **Flexible Session Backends**: Support for in-memory or Redis-based sessions for high availability.
- **Authorization Filters**: Restrict access based on LDAP search filters (e.g., group membership).
- **Docker Ready**: Easily deployable as a sidecar container.
- **Monitoring Endpoints**: Built-in `/status` and `/status/ldap` health checks.
## Installation
### via pip
```bash
pip install nginx-ldap-auth-service
```
### via uv
```bash
uv tool install nginx-ldap-auth-service
```
### via pipx
```bash
pipx install nginx-ldap-auth-service
```
### via Docker
```bash
docker pull caltechads/nginx-ldap-auth-service:latest
```
## Quick Start (Docker Compose)
Create a `docker-compose.yml` file:
```yaml
services:
nginx-ldap-auth-service:
image: caltechads/nginx-ldap-auth-service:latest
environment:
- LDAP_URI=ldap://ldap.example.com
- LDAP_BASEDN=dc=example,dc=com
- LDAP_BINDDN=cn=admin,dc=example,dc=com
- LDAP_PASSWORD=secret
- SECRET_KEY=your-session-secret
- CSRF_SECRET_KEY=your-csrf-secret
ports:
- "8888:8888"
```
Run with:
```bash
docker-compose up -d
```
## Configuration
The service can be configured via environment variables, command-line arguments, or per-request headers set in your nginx configuration.
### Required Environment Variables
| Variable | Description |
| --- | --- |
| `LDAP_URI` | URL of the LDAP server (e.g., `ldap://localhost`) |
| `LDAP_BINDDN` | DN of a privileged user for searches |
| `LDAP_PASSWORD` | Password for the `LDAP_BINDDN` user |
| `LDAP_BASEDN` | Base DN for user searches |
| `SECRET_KEY` | Secret key for session encryption |
| `CSRF_SECRET_KEY` | Secret key for CSRF protection |
### Important Optional Variables
- `DUO_ENABLED`: Set to `True` to enable Duo MFA (requires defining all the `DUO_*` settings as well).
- `SESSION_BACKEND`: `memory` (default) or `redis`.
- `LDAP_AUTHORIZATION_FILTER`: LDAP filter to restrict access.
- `COOKIE_NAME`: Name of the session cookie (default: `nginxauth`).
For a full list of configuration options, see the [Configuration Documentation](https://nginx-ldap-auth-service.readthedocs.io/en/latest/configuration.html).
## Nginx Integration
To use the service with Nginx, configure your `location` blocks to use `auth_request`:
```nginx
location / {
auth_request /check-auth;
error_page 401 =200 /auth/login?service=$request_uri;
# ... your application config ...
}
location /auth {
proxy_pass http://nginx-ldap-auth-service:8888/auth;
proxy_set_header X-Cookie-Name "nginxauth";
proxy_set_header X-Cookie-Domain "localhost";
proxy_set_header X-Proto-Scheme $scheme;
proxy_set_header Host $host;
proxy_set_header Cookie $http_cookie;
proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /check-auth {
internal;
proxy_pass http://nginx-ldap-auth-service:8888/check;
proxy_pass_request_headers off;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_ignore_headers "Set-Cookie";
proxy_hide_header "Set-Cookie";
proxy_cache auth_cache;
proxy_cache_valid 200 10m;
proxy_set_header X-Cookie-Name "nginxauth";
proxy_set_header Cookie nginxauth=$cookie_nginxauth;
proxy_set_header X-Cookie-Domain "localhost";
proxy_cache_key "$http_authorization$cookie_nginxauth";
}
```
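Note that the `proxy_cache auth_cache;` directive in the `/check-auth` block assumes a cache zone named `auth_cache` has been declared at the `http` level. A minimal declaration might look like the following (the path and sizes are illustrative placeholders, not values from this project's documentation):

```nginx
http {
    # Declares the auth_cache zone referenced by `proxy_cache auth_cache;` above.
    # Adjust the filesystem path, zone size, and max_size for your deployment.
    proxy_cache_path /var/cache/nginx/auth_cache keys_zone=auth_cache:10m max_size=64m;

    # ... server/location blocks from the example above ...
}
```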
For detailed Nginx configuration examples, including caching and Duo MFA headers, see the [Nginx Configuration Guide](https://nginx-ldap-auth-service.readthedocs.io/en/latest/nginx.html).
## Documentation
The full documentation is available at [https://nginx-ldap-auth-service.readthedocs.io](https://nginx-ldap-auth-service.readthedocs.io).
## License
This project is licensed under the terms of the [LICENSE.txt](LICENSE.txt) file.
| text/markdown | null | Caltech IMSS ADS <imss-ads-staff@caltech.edu> | null | Christopher Malek <cmalek@caltech.edu> | null | nginx, ldap, auth, fastapi, devops | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Development Status :: 5 - Production/Stable",
"Framew... | [] | null | null | >=3.11 | [] | [] | [] | [
"aiodogstatsd==0.16.0.post0",
"bonsai==1.5.3",
"click>=8.0.1",
"fastapi>=0.115.7",
"fastapi-csrf-protect>=1.0.0",
"jinja2>=3.0.3",
"pydantic-settings>=2.0.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.1",
"python-multipart>=0.0.6",
"sentry-sdk>=2.20.0",
"starsessions[redis]>=2.2.1",
"structlog>... | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.10 | 2026-02-18T22:03:15.277588 | nginx_ldap_auth_service-2.6.1.tar.gz | 194,055 | 4a/4c/047224ef8e2b0161865c3daba561e8acb1496515e3ff5ce1827c1cb0bb0c/nginx_ldap_auth_service-2.6.1.tar.gz | source | sdist | null | false | 49824277456f87d411808fef97c08ade | 368469aad347d8a60fd2243d5e002e6afeee89964742e81406eadbb94edeb6fc | 4a4c047224ef8e2b0161865c3daba561e8acb1496515e3ff5ce1827c1cb0bb0c | null | [
"LICENSE.txt"
] | 149 |
2.4 | swapi-client | 0.2.6 | API client for Serwis Planner | # SW-API-Client
An asynchronous Python client for the Serwis Planner API.
## Features
- Asynchronous design using `httpx` and `asyncio`.
- Token-based authentication.
- Helper methods for all major API endpoints.
- Powerful `SWQueryBuilder` for creating complex queries with filtering, sorting, and field selection.
## Installation
First, install the package from PyPI:
```bash
pip install swapi-client
```
The client uses `python-dotenv` to manage environment variables for the example. Install it if you want to run the example code directly.
```bash
pip install python-dotenv
```
## Usage
### Configuration
Create a `.env` file in your project root to store your API credentials:
```env
SW_API_URL=https://your-api-url.com
SW_CLIENT_ID=your_client_id
SW_AUTH_TOKEN=your_auth_token
SW_LOGIN=your_login_email
SW_PASSWORD=your_password
```
### Example
Here is a complete example demonstrating how to log in, fetch data, and use the query builder.
```python
import asyncio
import os
import pprint
from dotenv import load_dotenv
from swapi_client import SWApiClient, SWQueryBuilder
# Load environment variables from .env file
load_dotenv()
async def main():
"""
Main function to demonstrate the usage of the SWApiClient.
"""
api_url = os.getenv("SW_API_URL")
client_id = os.getenv("SW_CLIENT_ID")
auth_token = os.getenv("SW_AUTH_TOKEN")
login_user = os.getenv("SW_LOGIN")
password = os.getenv("SW_PASSWORD")
if not all([api_url, client_id, auth_token, login_user, password]):
print("Please set all required environment variables in a .env file.")
return
# The client is used within an async context manager
async with SWApiClient(api_url) as client:
try:
# 1. Login to get an authentication token
print("Attempting to log in...")
token = await client.login(
clientId=client_id,
authToken=auth_token,
login=login_user,
password=password,
)
print(f"Successfully logged in. Token starts with: {token[:10]}...")
# 2. Verify the token and get current user info
me = await client.get_me()
print(f"Token verified. Logged in as: {me.get('user', {}).get('username')}")
print("-" * 30)
# 3. Example: Get all account companies using the pagination helper
print("Fetching all account companies (with pagination)...")
all_companies = await client.get_all_pages(client.get_account_companies)
print(f"Found a total of {len(all_companies)} companies.")
if all_companies:
print(f"First company: {all_companies[0].get('name')}")
print("-" * 30)
# 4. Example: Use the SWQueryBuilder to filter, sort, and select fields
print("Fetching filtered companies...")
query = (
SWQueryBuilder()
.filter("name", "STB", "contains")
.order("name", "asc")
.fields(["id", "name", "email"])
.page_limit(5)
)
filtered_companies_response = await client.get_account_companies(query_builder=query)
filtered_companies = filtered_companies_response.get('data', [])
print(f"Found {len(filtered_companies)} companies matching the filter.")
pprint.pprint(filtered_companies)
print("-" * 30)
# 5. Example: Get metadata for a module
print("Fetching metadata for the 'products' module...")
products_meta = await client.get_entity_meta("products")
print("Available fields for products (first 5):")
for field, details in list(products_meta.get('data', {}).get('fields', {}).items())[:5]:
print(f" - {field}: {details.get('label')}")
print("-" * 30)
# 6. Example: Use for_metadata to get dynamic metadata
print("Fetching metadata for a serviced product with specific attributes...")
meta_query = SWQueryBuilder().for_metadata({"id": 1, "commissionPhase": 1})
serviced_product_meta = await client.get_entity_meta(
"serviced_products", query_builder=meta_query
)
print("Metadata for serviced product with for[id]=1 and for[commissionPhase]=1:")
pprint.pprint(serviced_product_meta.get('data', {}).get('fields', {}).get('commission'))
except Exception as e:
print(f"An error occurred: {e}")
if __name__ == "__main__":
asyncio.run(main())
```
## SWQueryBuilder
The `SWQueryBuilder` provides a fluent interface to construct complex query parameters for the API.
| Method | Description |
| ------------------------------------- | ------------------------------------------------------------------------------------------------------- |
| `fields(["field1", "field2"])` | Specifies which fields to include in the response. |
| `extra_fields(["field1"])` | Includes additional, non-default fields. |
| `for_metadata({"id": 1})` | Simulates an object change to retrieve dynamic metadata (uses `for[fieldName]`). |
| `order("field", "desc")` | Sorts the results by a field in a given direction (`asc` or `desc`). |
| `page_limit(50)` | Sets the number of results per page. |
| `page_offset(100)` | Sets the starting offset for the results. |
| `page_number(3)` | Sets the page number to retrieve. |
| `filter("field", "value", "op")` | Adds a filter condition. Operators: `eq`, `neq`, `gt`, `gte`, `lt`, `lte`, `contains`, `in`, `isNull`, etc. |
| `filter_or({...}, group_index=0)` | Adds a group of conditions where at least one must be true. |
| `filter_and({...}, group_index=0)` | Adds a group of conditions where all must be true. |
| `with_relations(True)` | Includes related objects in the response. |
| `with_editable_settings_for_action()` | Retrieves settings related to a specific action. |
| `lang("en")` | Sets the language for the response. |
| `build()` | Returns the dictionary of query parameters. |
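As a rough illustration of the fluent-builder pattern the table describes, here is a minimal self-contained sketch. The parameter key format below (`filter[field][op]`, `order[field]`, `page[limit]`) is an assumption for illustration only; consult the library source for the exact wire format `SWQueryBuilder` produces.

```python
# Hedged sketch of a fluent query builder in the style of SWQueryBuilder.
# Key names are illustrative assumptions, not the library's documented format.
class QuerySketch:
    def __init__(self):
        self._params = {}

    def filter(self, field, value, op="eq"):
        self._params[f"filter[{field}][{op}]"] = value
        return self  # returning self enables method chaining

    def order(self, field, direction="asc"):
        self._params[f"order[{field}]"] = direction
        return self

    def page_limit(self, limit):
        self._params["page[limit]"] = limit
        return self

    def build(self):
        # Return a copy so callers cannot mutate the builder's state.
        return dict(self._params)


params = QuerySketch().filter("name", "STB", "contains").order("name").page_limit(5).build()
print(params)
```

Each method mutates the builder and returns `self`, which is what makes the chained call style in the example above possible.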
## API Methods
The client provides a comprehensive set of methods for interacting with the Serwis Planner API. It includes specific methods for most endpoints (e.g., `get_products`, `create_account_user`) as well as generic helpers.
### Generic Helpers
- `get_all_pages(paginated_method, ...)`: Automatically handles pagination for any list endpoint.
- `get_entity_meta(module, ...)`: Fetches metadata for any module.
- `get_entity_autoselect(module, ...)`: Fetches autoselect data for any module.
- `get_entity_history(module, ...)`: Fetches history records for any module.
- `get_entity_audit(module, ...)`: Fetches audit records for any module.
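The pattern behind `get_all_pages` can be sketched as follows. This is an illustrative, self-contained version of the loop such a helper typically runs (the real helper's signature and the API's response shape may differ):

```python
import asyncio


# Hedged sketch of the pagination pattern get_all_pages implements:
# keep fetching pages until a short (or empty) page signals the end.
async def get_all_pages(fetch_page, page_size=2):
    items, offset = [], 0
    while True:
        page = await fetch_page(offset=offset, limit=page_size)
        items.extend(page)
        if len(page) < page_size:  # last page reached
            break
        offset += page_size
    return items


# Fake endpoint standing in for a paginated client method.
async def fake_endpoint(offset=0, limit=2):
    data = ["a", "b", "c", "d", "e"]
    return data[offset:offset + limit]


print(asyncio.run(get_all_pages(fake_endpoint)))  # ['a', 'b', 'c', 'd', 'e']
```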
### Major Endpoints Covered
- Account Companies
- Account Users
- Products & Serviced Products
- Baskets & Basket Positions
- Commissions
- File Uploads
- ODBC Reports
- Email Messages
- PDF Generation
- History and Auditing
- Bulk and Contextual Operations
Each endpoint has corresponding `get`, `create`, `update`, `patch`, and `delete` methods where applicable. For a full list of available methods, please refer to the source code in `src/swapi_client/client.py`.
| text/markdown | null | Adrian 'Qwizi' Ciołek <ciolek.adrian@proton.me> | null | null | MIT License Copyright (c) 2025 qwizi Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"httpx>=0.27.0",
"python-dotenv>=1.0.1"
] | [] | [] | [] | [
"Homepage, https://github.com/qwizi/swapi-client",
"Bug Tracker, https://github.com/qwizi/swapi-client/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T22:03:02.061789 | swapi_client-0.2.6.tar.gz | 22,652 | 47/da/830f03fc64bd04439f18b4b8a0a3a4f0a5639627be2196db2d71bf69bb24/swapi_client-0.2.6.tar.gz | source | sdist | null | false | b0941fc37c14b77f604d973efdd1edbc | 5fb48bb985b2ceb8ba1aefcc196b9309b7220762720e222dda14f1afe0083b54 | 47da830f03fc64bd04439f18b4b8a0a3a4f0a5639627be2196db2d71bf69bb24 | null | [
"LICENSE"
] | 228 |
2.4 | libib-client | 1.0.2 | A synchronous, third-party client for the Libib API | # Libib-Client
A synchronous, third-party client for the [Libib API](https://support.libib.com/rest-api/introduction.html)
## Install
```shell
pip install libib_client
```
## Usage
To initialize the client:
```python
from libib_client import Libib
# For Pro accounts
client = Libib("your-api-key", "your-user-id")
# For Ultimate accounts, also pass your Ultimate ID
client = Libib("your-api-key", "your-user-id", "your-ultimate-id")
```
## Documentation
Documentation can be [found here](https://michael-masarik.github.io/libib-client/).
## Note
I do not have an Ultimate account, so if the Ultimate features (or any features, for that matter) do not work, feel free to open an issue or a PR.
| text/markdown | Michael Masarik | null | null | null | MIT | libib, library, api, client | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests>=2.31.0"
] | [] | [] | [] | [
"Homepage, https://michael-masarik.github.io/libib-client/",
"Repository, https://github.com/michael-masarik/libib-client",
"Issues, https://github.com/michael-masarik/libib-client/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:02:30.930064 | libib_client-1.0.2.tar.gz | 5,355 | 7b/29/2e33c917ebf95fc241bcc8a66a804e233fa3d03a6871b87cee89a089377e/libib_client-1.0.2.tar.gz | source | sdist | null | false | bcbdef0ce8988fb8fd400ef84fb481e9 | 8920284bf4bbf0a2cc33d5a2c1ac6bdf6f84ca6ad0a869d453d62e9273d92d95 | 7b292e33c917ebf95fc241bcc8a66a804e233fa3d03a6871b87cee89a089377e | null | [
"LICENSE"
] | 236 |
2.4 | pestifer | 2.1.8 | A NAMD topology/coordinate system preparation tool | # Pestifer
> NAMD System Preparation Tool
[](https://pypi.org/project/pestifer/)
[](https://pepy.tech/projects/pestifer)
[](https://pestifer.readthedocs.io/en/latest/)
[](https://doi.org/10.5281/zenodo.16051498)
Pestifer is a fully automated simulation-ready MD system preparation tool, requiring as inputs only biomolecular structures (e.g., PDB IDs, PDB files, mmCIF files, AlphaFold IDs) and a handful of customization parameters, to generate NAMD-compatible input files (PSF, PDB, and xsc). It is essentially a highly functionalized front end for VMD's `psfgen` utility. It also has a few handy subcommands for working with NAMD output.
## Installation
```bash
pip install pestifer
```
Once installed, the user has access to the main `pestifer` command.
Pestifer also requires access to the following executables:
1. `namd3` and `charmrun`
2. `vmd` and `catdcd`
3. `packmol`
Pestifer **includes a copy of** the [July 2024 Charmm36 force field](https://mackerell.umaryland.edu/download.php?filename=CHARMM_ff_params_files/toppar_c36_jul24.tgz).
## Documentation
Please visit [readthedocs](https://pestifer.readthedocs.io/en/latest) for full documentation.
## Version History
See the [CHANGELOG](./CHANGELOG.md) for full details.
## Meta
[https://github.com/cameronabrams](https://github.com/cameronabrams/)
Pestifer is maintained by Cameron F. Abrams.
Pestifer is distributed under the MIT license. See ``LICENSE`` for more information.
Pestifer was developed with support from the National Institutes of Health via grants GM100472, AI154071, and AI178833.
## Contributing
1. Fork it (<https://github.com/cameronabrams/pestifer/fork>)
2. Create your feature branch (`git checkout -b feature/fooBar`)
3. Commit your changes (`git commit -am 'Add some fooBar'`)
4. Push to the branch (`git push origin feature/fooBar`)
5. Create a new Pull Request
| text/markdown | null | Cameron F Abrams <cfa22@drexel.edu> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"colorist",
"docutils<0.22,>=0.20",
"filelock",
"fsspec",
"gputil",
"joblib",
"matplotlib",
"mmcif",
"networkx",
"numpy>=1.24",
"pandas",
"pidibble>=1.5.2",
"platformdirs",
"progressbar2",
"pydantic",
"pyyaml>=6",
"scipy",
"sphinx>=8.2.3",
"unidiff",
"ycleptic>=2.0.1"
] | [] | [] | [] | [
"Source, https://github.com/cameronabrams/pestifer",
"Documentation, https://pestifer.readthedocs.io/en/latest/",
"Bug Tracker, https://github.com/cameronabrams/pestifer/issues"
] | Hatch/1.16.3 cpython/3.12.12 HTTPX/0.28.1 | 2026-02-18T22:01:28.334446 | pestifer-2.1.8.tar.gz | 90,970,202 | e6/67/3e2c599f2488d4c0a2a7055ad30a3c842f03d36a6807aba3f2748fca5fea/pestifer-2.1.8.tar.gz | source | sdist | null | false | d8f04c7d300f7e6ed0a4d05e497704ca | ee61e1854f2f4b0fce4dca28616a213744a4d5463ecb2698a07f12b68268e548 | e6673e2c599f2488d4c0a2a7055ad30a3c842f03d36a6807aba3f2748fca5fea | null | [
"LICENSE"
] | 252 |
2.4 | types-boto3-connect | 1.42.52 | Type annotations for boto3 Connect 1.42.52 service generated with mypy-boto3-builder 8.12.0 | <a id="types-boto3-connect"></a>
# types-boto3-connect
[](https://pypi.org/project/types-boto3-connect/)
[](https://pypi.org/project/types-boto3-connect/)
[](https://youtype.github.io/types_boto3_docs/)
[](https://pypistats.org/packages/types-boto3-connect)

Type annotations for [boto3 Connect 1.42.52](https://pypi.org/project/boto3/)
compatible with [VSCode](https://code.visualstudio.com/),
[PyCharm](https://www.jetbrains.com/pycharm/),
[Emacs](https://www.gnu.org/software/emacs/),
[Sublime Text](https://www.sublimetext.com/),
[mypy](https://github.com/python/mypy),
[pyright](https://github.com/microsoft/pyright) and other tools.
Generated with
[mypy-boto3-builder 8.12.0](https://github.com/youtype/mypy_boto3_builder).
More information can be found on
[types-boto3](https://pypi.org/project/types-boto3/) page and in
[types-boto3-connect docs](https://youtype.github.io/types_boto3_docs/types_boto3_connect/).
See how it helps you find and fix potential bugs:

- [types-boto3-connect](#types-boto3-connect)
- [How to install](#how-to-install)
- [Generate locally (recommended)](<#generate-locally-(recommended)>)
- [VSCode extension](#vscode-extension)
- [From PyPI with pip](#from-pypi-with-pip)
- [How to uninstall](#how-to-uninstall)
- [Usage](#usage)
- [VSCode](#vscode)
- [PyCharm](#pycharm)
- [Emacs](#emacs)
- [Sublime Text](#sublime-text)
- [Other IDEs](#other-ides)
- [mypy](#mypy)
- [pyright](#pyright)
- [Pylint compatibility](#pylint-compatibility)
- [Explicit type annotations](#explicit-type-annotations)
- [Client annotations](#client-annotations)
- [Paginators annotations](#paginators-annotations)
- [Literals](#literals)
- [Type definitions](#type-definitions)
- [How it works](#how-it-works)
- [What's new](#what's-new)
- [Implemented features](#implemented-features)
- [Latest changes](#latest-changes)
- [Versioning](#versioning)
- [Thank you](#thank-you)
- [Documentation](#documentation)
- [Support and contributing](#support-and-contributing)
<a id="how-to-install"></a>
## How to install
<a id="generate-locally-(recommended)"></a>
### Generate locally (recommended)
You can generate type annotations for `boto3` package locally with
`mypy-boto3-builder`. Use
[uv](https://docs.astral.sh/uv/getting-started/installation/) for build
isolation.
1. Run mypy-boto3-builder in your package root directory:
`uvx --with 'boto3==1.42.52' mypy-boto3-builder`
2. Select `boto3` AWS SDK.
3. Add `Connect` service.
4. Use provided commands to install generated packages.
<a id="vscode-extension"></a>
### VSCode extension
Add
[AWS Boto3](https://marketplace.visualstudio.com/items?itemName=Boto3typed.boto3-ide)
extension to your VSCode and run `AWS boto3: Quick Start` command.
Click `Modify` and select `boto3 common` and `Connect`.
<a id="from-pypi-with-pip"></a>
### From PyPI with pip
Install `types-boto3` for `Connect` service.
```bash
# install with boto3 type annotations
python -m pip install 'types-boto3[connect]'
# Lite version does not provide session.client/resource overloads
# it is more RAM-friendly, but requires explicit type annotations
python -m pip install 'types-boto3-lite[connect]'
# standalone installation
python -m pip install types-boto3-connect
```
<a id="how-to-uninstall"></a>
## How to uninstall
```bash
python -m pip uninstall -y types-boto3-connect
```
<a id="usage"></a>
## Usage
<a id="vscode"></a>
### VSCode
- Install
[Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
- Install
[Pylance extension](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
- Set `Pylance` as your Python Language Server
- Install `types-boto3[connect]` in your environment:
```bash
python -m pip install 'types-boto3[connect]'
```
Both type checking and code completion should now work. No explicit type
annotations required, write your `boto3` code as usual.
<a id="pycharm"></a>
### PyCharm
> ⚠️ Due to slow PyCharm performance on `Literal` overloads (issue
> [PY-40997](https://youtrack.jetbrains.com/issue/PY-40997)), it is recommended
> to use [types-boto3-lite](https://pypi.org/project/types-boto3-lite/) until
> the issue is resolved.
> ⚠️ If you experience slow performance and high CPU usage, try to disable
> `PyCharm` type checker and use [mypy](https://github.com/python/mypy) or
> [pyright](https://github.com/microsoft/pyright) instead.
> ⚠️ To continue using `PyCharm` type checker, you can try to replace
> `types-boto3` with
> [types-boto3-lite](https://pypi.org/project/types-boto3-lite/):
```bash
pip uninstall types-boto3
pip install types-boto3-lite
```
Install `types-boto3[connect]` in your environment:
```bash
python -m pip install 'types-boto3[connect]'
```
Both type checking and code completion should now work.
<a id="emacs"></a>
### Emacs
- Install `types-boto3` with services you use in your environment:
```bash
python -m pip install 'types-boto3[connect]'
```
- Install [use-package](https://github.com/jwiegley/use-package),
[lsp](https://github.com/emacs-lsp/lsp-mode/),
[company](https://github.com/company-mode/company-mode) and
[flycheck](https://github.com/flycheck/flycheck) packages
- Install [lsp-pyright](https://github.com/emacs-lsp/lsp-pyright) package
```elisp
(use-package lsp-pyright
:ensure t
:hook (python-mode . (lambda ()
(require 'lsp-pyright)
(lsp))) ; or lsp-deferred
:init (when (executable-find "python3")
(setq lsp-pyright-python-executable-cmd "python3"))
)
```
- Make sure emacs uses the environment where you have installed `types-boto3`
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="sublime-text"></a>
### Sublime Text
- Install `types-boto3[connect]` with services you use in your environment:
```bash
python -m pip install 'types-boto3[connect]'
```
- Install [LSP-pyright](https://github.com/sublimelsp/LSP-pyright) package
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="other-ides"></a>
### Other IDEs
Not tested, but as long as your IDE supports `mypy` or `pyright`, everything
should work.
<a id="mypy"></a>
### mypy
- Install `mypy`: `python -m pip install mypy`
- Install `types-boto3[connect]` in your environment:
```bash
python -m pip install 'types-boto3[connect]'
```
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="pyright"></a>
### pyright
- Install `pyright`: `npm i -g pyright`
- Install `types-boto3[connect]` in your environment:
```bash
python -m pip install 'types-boto3[connect]'
```
Optionally, you can install `types-boto3` to `typings` directory.
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="pylint-compatibility"></a>
### Pylint compatibility
It is totally safe to use `TYPE_CHECKING` flag in order to avoid
`types-boto3-connect` dependency in production. However, there is an issue in
`pylint` that it complains about undefined variables. To fix it, set all types
to `object` in non-`TYPE_CHECKING` mode.
```python
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from types_boto3_ec2 import EC2Client, EC2ServiceResource
from types_boto3_ec2.waiters import BundleTaskCompleteWaiter
from types_boto3_ec2.paginators import DescribeVolumesPaginator
else:
EC2Client = object
EC2ServiceResource = object
BundleTaskCompleteWaiter = object
DescribeVolumesPaginator = object
...
```
<a id="explicit-type-annotations"></a>
## Explicit type annotations
<a id="client-annotations"></a>
### Client annotations
`ConnectClient` provides annotations for `boto3.client("connect")`.
```python
from boto3.session import Session
from types_boto3_connect import ConnectClient
client: ConnectClient = Session().client("connect")
# now client usage is checked by mypy and IDE should provide code completion
```
<a id="paginators-annotations"></a>
### Paginators annotations
`types_boto3_connect.paginator` module contains type annotations for all
paginators.
```python
from boto3.session import Session
from types_boto3_connect import ConnectClient
from types_boto3_connect.paginator import (
GetMetricDataPaginator,
ListAgentStatusesPaginator,
ListApprovedOriginsPaginator,
ListAuthenticationProfilesPaginator,
ListBotsPaginator,
ListChildHoursOfOperationsPaginator,
ListContactEvaluationsPaginator,
ListContactFlowModuleAliasesPaginator,
ListContactFlowModuleVersionsPaginator,
ListContactFlowModulesPaginator,
ListContactFlowVersionsPaginator,
ListContactFlowsPaginator,
ListContactReferencesPaginator,
ListDataTableAttributesPaginator,
ListDataTablePrimaryValuesPaginator,
ListDataTableValuesPaginator,
ListDataTablesPaginator,
ListDefaultVocabulariesPaginator,
ListEntitySecurityProfilesPaginator,
ListEvaluationFormVersionsPaginator,
ListEvaluationFormsPaginator,
ListFlowAssociationsPaginator,
ListHoursOfOperationOverridesPaginator,
ListHoursOfOperationsPaginator,
ListInstanceAttributesPaginator,
ListInstanceStorageConfigsPaginator,
ListInstancesPaginator,
ListIntegrationAssociationsPaginator,
ListLambdaFunctionsPaginator,
ListLexBotsPaginator,
ListPhoneNumbersPaginator,
ListPhoneNumbersV2Paginator,
ListPredefinedAttributesPaginator,
ListPromptsPaginator,
ListQueueQuickConnectsPaginator,
ListQueuesPaginator,
ListQuickConnectsPaginator,
ListRoutingProfileManualAssignmentQueuesPaginator,
ListRoutingProfileQueuesPaginator,
ListRoutingProfilesPaginator,
ListRulesPaginator,
ListSecurityKeysPaginator,
ListSecurityProfileApplicationsPaginator,
ListSecurityProfileFlowModulesPaginator,
ListSecurityProfilePermissionsPaginator,
ListSecurityProfilesPaginator,
ListTaskTemplatesPaginator,
ListTestCasesPaginator,
ListTrafficDistributionGroupUsersPaginator,
ListTrafficDistributionGroupsPaginator,
ListUseCasesPaginator,
ListUserHierarchyGroupsPaginator,
ListUserProficienciesPaginator,
ListUsersPaginator,
ListViewVersionsPaginator,
ListViewsPaginator,
ListWorkspacePagesPaginator,
ListWorkspacesPaginator,
SearchAgentStatusesPaginator,
SearchAvailablePhoneNumbersPaginator,
SearchContactFlowModulesPaginator,
SearchContactFlowsPaginator,
SearchContactsPaginator,
SearchDataTablesPaginator,
SearchHoursOfOperationOverridesPaginator,
SearchHoursOfOperationsPaginator,
SearchPredefinedAttributesPaginator,
SearchPromptsPaginator,
SearchQueuesPaginator,
SearchQuickConnectsPaginator,
SearchResourceTagsPaginator,
SearchRoutingProfilesPaginator,
SearchSecurityProfilesPaginator,
SearchTestCasesPaginator,
SearchUserHierarchyGroupsPaginator,
SearchUsersPaginator,
SearchViewsPaginator,
SearchVocabulariesPaginator,
SearchWorkspaceAssociationsPaginator,
SearchWorkspacesPaginator,
)
client: ConnectClient = Session().client("connect")
# Explicit type annotations are optional here
# Types should be correctly discovered by mypy and IDEs
get_metric_data_paginator: GetMetricDataPaginator = client.get_paginator("get_metric_data")
list_agent_statuses_paginator: ListAgentStatusesPaginator = client.get_paginator(
"list_agent_statuses"
)
list_approved_origins_paginator: ListApprovedOriginsPaginator = client.get_paginator(
"list_approved_origins"
)
list_authentication_profiles_paginator: ListAuthenticationProfilesPaginator = client.get_paginator(
"list_authentication_profiles"
)
list_bots_paginator: ListBotsPaginator = client.get_paginator("list_bots")
list_child_hours_of_operations_paginator: ListChildHoursOfOperationsPaginator = (
client.get_paginator("list_child_hours_of_operations")
)
list_contact_evaluations_paginator: ListContactEvaluationsPaginator = client.get_paginator(
"list_contact_evaluations"
)
list_contact_flow_module_aliases_paginator: ListContactFlowModuleAliasesPaginator = (
client.get_paginator("list_contact_flow_module_aliases")
)
list_contact_flow_module_versions_paginator: ListContactFlowModuleVersionsPaginator = (
client.get_paginator("list_contact_flow_module_versions")
)
list_contact_flow_modules_paginator: ListContactFlowModulesPaginator = client.get_paginator(
"list_contact_flow_modules"
)
list_contact_flow_versions_paginator: ListContactFlowVersionsPaginator = client.get_paginator(
"list_contact_flow_versions"
)
list_contact_flows_paginator: ListContactFlowsPaginator = client.get_paginator("list_contact_flows")
list_contact_references_paginator: ListContactReferencesPaginator = client.get_paginator(
"list_contact_references"
)
list_data_table_attributes_paginator: ListDataTableAttributesPaginator = client.get_paginator(
"list_data_table_attributes"
)
list_data_table_primary_values_paginator: ListDataTablePrimaryValuesPaginator = (
client.get_paginator("list_data_table_primary_values")
)
list_data_table_values_paginator: ListDataTableValuesPaginator = client.get_paginator(
"list_data_table_values"
)
list_data_tables_paginator: ListDataTablesPaginator = client.get_paginator("list_data_tables")
list_default_vocabularies_paginator: ListDefaultVocabulariesPaginator = client.get_paginator(
"list_default_vocabularies"
)
list_entity_security_profiles_paginator: ListEntitySecurityProfilesPaginator = client.get_paginator(
"list_entity_security_profiles"
)
list_evaluation_form_versions_paginator: ListEvaluationFormVersionsPaginator = client.get_paginator(
"list_evaluation_form_versions"
)
list_evaluation_forms_paginator: ListEvaluationFormsPaginator = client.get_paginator(
"list_evaluation_forms"
)
list_flow_associations_paginator: ListFlowAssociationsPaginator = client.get_paginator(
"list_flow_associations"
)
list_hours_of_operation_overrides_paginator: ListHoursOfOperationOverridesPaginator = (
client.get_paginator("list_hours_of_operation_overrides")
)
list_hours_of_operations_paginator: ListHoursOfOperationsPaginator = client.get_paginator(
"list_hours_of_operations"
)
list_instance_attributes_paginator: ListInstanceAttributesPaginator = client.get_paginator(
"list_instance_attributes"
)
list_instance_storage_configs_paginator: ListInstanceStorageConfigsPaginator = client.get_paginator(
"list_instance_storage_configs"
)
list_instances_paginator: ListInstancesPaginator = client.get_paginator("list_instances")
list_integration_associations_paginator: ListIntegrationAssociationsPaginator = (
client.get_paginator("list_integration_associations")
)
list_lambda_functions_paginator: ListLambdaFunctionsPaginator = client.get_paginator(
"list_lambda_functions"
)
list_lex_bots_paginator: ListLexBotsPaginator = client.get_paginator("list_lex_bots")
list_phone_numbers_paginator: ListPhoneNumbersPaginator = client.get_paginator("list_phone_numbers")
list_phone_numbers_v2_paginator: ListPhoneNumbersV2Paginator = client.get_paginator(
"list_phone_numbers_v2"
)
list_predefined_attributes_paginator: ListPredefinedAttributesPaginator = client.get_paginator(
"list_predefined_attributes"
)
list_prompts_paginator: ListPromptsPaginator = client.get_paginator("list_prompts")
list_queue_quick_connects_paginator: ListQueueQuickConnectsPaginator = client.get_paginator(
"list_queue_quick_connects"
)
list_queues_paginator: ListQueuesPaginator = client.get_paginator("list_queues")
list_quick_connects_paginator: ListQuickConnectsPaginator = client.get_paginator(
"list_quick_connects"
)
list_routing_profile_manual_assignment_queues_paginator: ListRoutingProfileManualAssignmentQueuesPaginator = client.get_paginator(
"list_routing_profile_manual_assignment_queues"
)
list_routing_profile_queues_paginator: ListRoutingProfileQueuesPaginator = client.get_paginator(
"list_routing_profile_queues"
)
list_routing_profiles_paginator: ListRoutingProfilesPaginator = client.get_paginator(
"list_routing_profiles"
)
list_rules_paginator: ListRulesPaginator = client.get_paginator("list_rules")
list_security_keys_paginator: ListSecurityKeysPaginator = client.get_paginator("list_security_keys")
list_security_profile_applications_paginator: ListSecurityProfileApplicationsPaginator = (
client.get_paginator("list_security_profile_applications")
)
list_security_profile_flow_modules_paginator: ListSecurityProfileFlowModulesPaginator = (
client.get_paginator("list_security_profile_flow_modules")
)
list_security_profile_permissions_paginator: ListSecurityProfilePermissionsPaginator = (
client.get_paginator("list_security_profile_permissions")
)
list_security_profiles_paginator: ListSecurityProfilesPaginator = client.get_paginator(
"list_security_profiles"
)
list_task_templates_paginator: ListTaskTemplatesPaginator = client.get_paginator(
"list_task_templates"
)
list_test_cases_paginator: ListTestCasesPaginator = client.get_paginator("list_test_cases")
list_traffic_distribution_group_users_paginator: ListTrafficDistributionGroupUsersPaginator = (
client.get_paginator("list_traffic_distribution_group_users")
)
list_traffic_distribution_groups_paginator: ListTrafficDistributionGroupsPaginator = (
client.get_paginator("list_traffic_distribution_groups")
)
list_use_cases_paginator: ListUseCasesPaginator = client.get_paginator("list_use_cases")
list_user_hierarchy_groups_paginator: ListUserHierarchyGroupsPaginator = client.get_paginator(
"list_user_hierarchy_groups"
)
list_user_proficiencies_paginator: ListUserProficienciesPaginator = client.get_paginator(
"list_user_proficiencies"
)
list_users_paginator: ListUsersPaginator = client.get_paginator("list_users")
list_view_versions_paginator: ListViewVersionsPaginator = client.get_paginator("list_view_versions")
list_views_paginator: ListViewsPaginator = client.get_paginator("list_views")
list_workspace_pages_paginator: ListWorkspacePagesPaginator = client.get_paginator(
"list_workspace_pages"
)
list_workspaces_paginator: ListWorkspacesPaginator = client.get_paginator("list_workspaces")
search_agent_statuses_paginator: SearchAgentStatusesPaginator = client.get_paginator(
"search_agent_statuses"
)
search_available_phone_numbers_paginator: SearchAvailablePhoneNumbersPaginator = (
client.get_paginator("search_available_phone_numbers")
)
search_contact_flow_modules_paginator: SearchContactFlowModulesPaginator = client.get_paginator(
"search_contact_flow_modules"
)
search_contact_flows_paginator: SearchContactFlowsPaginator = client.get_paginator(
"search_contact_flows"
)
search_contacts_paginator: SearchContactsPaginator = client.get_paginator("search_contacts")
search_data_tables_paginator: SearchDataTablesPaginator = client.get_paginator("search_data_tables")
search_hours_of_operation_overrides_paginator: SearchHoursOfOperationOverridesPaginator = (
client.get_paginator("search_hours_of_operation_overrides")
)
search_hours_of_operations_paginator: SearchHoursOfOperationsPaginator = client.get_paginator(
"search_hours_of_operations"
)
search_predefined_attributes_paginator: SearchPredefinedAttributesPaginator = client.get_paginator(
"search_predefined_attributes"
)
search_prompts_paginator: SearchPromptsPaginator = client.get_paginator("search_prompts")
search_queues_paginator: SearchQueuesPaginator = client.get_paginator("search_queues")
search_quick_connects_paginator: SearchQuickConnectsPaginator = client.get_paginator(
"search_quick_connects"
)
search_resource_tags_paginator: SearchResourceTagsPaginator = client.get_paginator(
"search_resource_tags"
)
search_routing_profiles_paginator: SearchRoutingProfilesPaginator = client.get_paginator(
"search_routing_profiles"
)
search_security_profiles_paginator: SearchSecurityProfilesPaginator = client.get_paginator(
"search_security_profiles"
)
search_test_cases_paginator: SearchTestCasesPaginator = client.get_paginator("search_test_cases")
search_user_hierarchy_groups_paginator: SearchUserHierarchyGroupsPaginator = client.get_paginator(
"search_user_hierarchy_groups"
)
search_users_paginator: SearchUsersPaginator = client.get_paginator("search_users")
search_views_paginator: SearchViewsPaginator = client.get_paginator("search_views")
search_vocabularies_paginator: SearchVocabulariesPaginator = client.get_paginator(
"search_vocabularies"
)
search_workspace_associations_paginator: SearchWorkspaceAssociationsPaginator = (
client.get_paginator("search_workspace_associations")
)
search_workspaces_paginator: SearchWorkspacesPaginator = client.get_paginator("search_workspaces")
```
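Conceptually, each annotated paginator's `paginate()` returns an iterable of typed page dictionaries. A minimal stand-in (no AWS calls; class name, page shape, and values below are illustrative, not the real Connect response) shows the consumption pattern:

```python
from typing import Iterator

class ListQueuesPaginatorSketch:
    """Stand-in mimicking how a boto3 paginator is iterated."""

    def __init__(self, pages: list[dict]) -> None:
        self._pages = pages

    def paginate(self) -> Iterator[dict]:
        # Real paginators handle NextToken continuation automatically;
        # here we simply yield pre-built pages.
        yield from self._pages

pages = [{"QueueSummaryList": ["q1", "q2"]}, {"QueueSummaryList": ["q3"]}]
queues = [
    q
    for page in ListQueuesPaginatorSketch(pages).paginate()
    for q in page["QueueSummaryList"]
]
```

With the generated annotations, a type checker knows the exact `TypedDict` shape of each page yielded by `paginate()`.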
<a id="literals"></a>
### Literals
The `types_boto3_connect.literals` module contains literals extracted from
shapes that can be used in user code for type checking.
The full list of `Connect` literals can be found in the
[docs](https://youtype.github.io/types_boto3_docs/types_boto3_connect/literals/).
```python
from types_boto3_connect.literals import AccessTypeType
def check_value(value: AccessTypeType) -> bool: ...
```
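Generated literals are plain `typing.Literal` aliases, so besides static narrowing they also support a runtime membership check via `typing.get_args`. A sketch with a stand-in alias (the member values below are hypothetical; the real `AccessTypeType` lives in `types_boto3_connect.literals`):

```python
from typing import Literal, get_args

# Stand-in mirroring the shape of a generated literal type;
# the real values come from the botocore shapes.
AccessTypeType = Literal["ASSOCIATED", "OWNED"]

def check_value(value: str) -> bool:
    # Runtime counterpart of the narrowing a type checker performs statically
    return value in get_args(AccessTypeType)
```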
<a id="type-definitions"></a>
### Type definitions
The `types_boto3_connect.type_defs` module contains structures and shapes
assembled into typed dictionaries and unions for additional type checking.
The full list of `Connect` TypeDefs can be found in the
[docs](https://youtype.github.io/types_boto3_docs/types_boto3_connect/type_defs/).
```python
# TypedDict usage example
from types_boto3_connect.type_defs import ActionSummaryTypeDef
def get_value() -> ActionSummaryTypeDef:
return {
"ActionType": ...,
}
```
<a id="how-it-works"></a>
## How it works
Fully automated
[mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder) carefully
generates type annotations for each service, patiently waiting for `boto3`
updates. It delivers drop-in type annotations for you and makes sure that:
- All available `boto3` services are covered.
- Each public class and method of every `boto3` service gets valid type
annotations extracted from `botocore` schemas.
- Type annotations include up-to-date documentation.
- A link to the documentation is provided for every method.
- Code is processed by [ruff](https://docs.astral.sh/ruff/) for readability.
<a id="what's-new"></a>
## What's new
<a id="implemented-features"></a>
### Implemented features
- Fully type annotated `boto3`, `botocore`, `aiobotocore` and `aioboto3`
libraries
- `mypy`, `pyright`, `VSCode`, `PyCharm`, `Sublime Text` and `Emacs`
compatibility
- `Client`, `ServiceResource`, `Resource`, `Waiter` and `Paginator` type
  annotations for each service
- Generated `TypeDefs` for each service
- Generated `Literals` for each service
- Auto discovery of types for `boto3.client` and `boto3.resource` calls
- Auto discovery of types for `session.client` and `session.resource` calls
- Auto discovery of types for `client.get_waiter` and `client.get_paginator`
calls
- Auto discovery of types for `ServiceResource` and `Resource` collections
- Auto discovery of types for `aiobotocore.Session.create_client` calls
<a id="latest-changes"></a>
### Latest changes
Builder changelog can be found in
[Releases](https://github.com/youtype/mypy_boto3_builder/releases).
<a id="versioning"></a>
## Versioning
`types-boto3-connect` version is the same as related `boto3` version and
follows
[Python Packaging version specifiers](https://packaging.python.org/en/latest/specifications/version-specifiers/).
<a id="thank-you"></a>
## Thank you
- [Allie Fitter](https://github.com/alliefitter) for
  [boto3-type-annotations](https://pypi.org/project/boto3-type-annotations/);
  this package builds on his work
- [black](https://github.com/psf/black) developers for an awesome formatting
tool
- [Timothy Edmund Crosley](https://github.com/timothycrosley) for
[isort](https://github.com/PyCQA/isort) and how flexible it is
- [mypy](https://github.com/python/mypy) developers for doing all dirty work
for us
- [pyright](https://github.com/microsoft/pyright) team for the new era of typed
Python
<a id="documentation"></a>
## Documentation
Type annotations for all services can be found in the
[boto3 docs](https://youtype.github.io/types_boto3_docs/types_boto3_connect/).
<a id="support-and-contributing"></a>
## Support and contributing
This package is auto-generated. Please report bugs and request new features
in the [mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder/issues/)
repository.
| text/markdown | null | Vlad Emelianov <vlad.emelianov.nz@gmail.com> | null | null | null | boto3, connect, boto3-stubs, type-annotations, mypy, typeshed, autocomplete | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Environment :: Console",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"... | [
"any"
] | null | null | >=3.9 | [] | [] | [] | [
"typing-extensions; python_version < \"3.12\""
] | [] | [] | [] | [
"Homepage, https://github.com/youtype/mypy_boto3_builder",
"Documentation, https://youtype.github.io/types_boto3_docs/types_boto3_connect/",
"Source, https://github.com/youtype/mypy_boto3_builder",
"Tracker, https://github.com/youtype/mypy_boto3_builder/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:01:08.133026 | types_boto3_connect-1.42.52.tar.gz | 165,608 | 0a/53/4ef338afa1a9e057fe9ca4d65ced3a84ea1c470cdcb764889d0bbeea42ae/types_boto3_connect-1.42.52.tar.gz | source | sdist | null | false | b4cf68469ec3bb51a0e80904e274a39b | 60174be2e17d5ca8d832f30e995bdbcaf7d744af6e0232ad1633cccf7588101e | 0a534ef338afa1a9e057fe9ca4d65ced3a84ea1c470cdcb764889d0bbeea42ae | MIT | [
"LICENSE"
] | 695 |
2.4 | types-boto3-cleanrooms | 1.42.52 | Type annotations for boto3 CleanRoomsService 1.42.52 service generated with mypy-boto3-builder 8.12.0 | <a id="types-boto3-cleanrooms"></a>
# types-boto3-cleanrooms
[](https://pypi.org/project/types-boto3-cleanrooms/)
[](https://pypi.org/project/types-boto3-cleanrooms/)
[](https://youtype.github.io/types_boto3_docs/)
[](https://pypistats.org/packages/types-boto3-cleanrooms)

Type annotations for
[boto3 CleanRoomsService 1.42.52](https://pypi.org/project/boto3/) compatible
with [VSCode](https://code.visualstudio.com/),
[PyCharm](https://www.jetbrains.com/pycharm/),
[Emacs](https://www.gnu.org/software/emacs/),
[Sublime Text](https://www.sublimetext.com/),
[mypy](https://github.com/python/mypy),
[pyright](https://github.com/microsoft/pyright) and other tools.
Generated with
[mypy-boto3-builder 8.12.0](https://github.com/youtype/mypy_boto3_builder).
More information can be found on
[types-boto3](https://pypi.org/project/types-boto3/) page and in
[types-boto3-cleanrooms docs](https://youtype.github.io/types_boto3_docs/types_boto3_cleanrooms/).
See how it helps you find and fix potential bugs:

- [types-boto3-cleanrooms](#types-boto3-cleanrooms)
- [How to install](#how-to-install)
- [Generate locally (recommended)](<#generate-locally-(recommended)>)
- [VSCode extension](#vscode-extension)
- [From PyPI with pip](#from-pypi-with-pip)
- [How to uninstall](#how-to-uninstall)
- [Usage](#usage)
- [VSCode](#vscode)
- [PyCharm](#pycharm)
- [Emacs](#emacs)
- [Sublime Text](#sublime-text)
- [Other IDEs](#other-ides)
- [mypy](#mypy)
- [pyright](#pyright)
- [Pylint compatibility](#pylint-compatibility)
- [Explicit type annotations](#explicit-type-annotations)
- [Client annotations](#client-annotations)
- [Paginators annotations](#paginators-annotations)
- [Literals](#literals)
- [Type definitions](#type-definitions)
- [How it works](#how-it-works)
- [What's new](#what's-new)
- [Implemented features](#implemented-features)
- [Latest changes](#latest-changes)
- [Versioning](#versioning)
- [Thank you](#thank-you)
- [Documentation](#documentation)
- [Support and contributing](#support-and-contributing)
<a id="how-to-install"></a>
## How to install
<a id="generate-locally-(recommended)"></a>
### Generate locally (recommended)
You can generate type annotations for the `boto3` package locally with
`mypy-boto3-builder`. Use
[uv](https://docs.astral.sh/uv/getting-started/installation/) for build
isolation.
1. Run mypy-boto3-builder in your package root directory:
`uvx --with 'boto3==1.42.52' mypy-boto3-builder`
2. Select `boto3` AWS SDK.
3. Add `CleanRoomsService` service.
4. Use provided commands to install generated packages.
<a id="vscode-extension"></a>
### VSCode extension
Add
[AWS Boto3](https://marketplace.visualstudio.com/items?itemName=Boto3typed.boto3-ide)
extension to your VSCode and run `AWS boto3: Quick Start` command.
Click `Modify` and select `boto3 common` and `CleanRoomsService`.
<a id="from-pypi-with-pip"></a>
### From PyPI with pip
Install `types-boto3` for the `CleanRoomsService` service.
```bash
# install with boto3 type annotations
python -m pip install 'types-boto3[cleanrooms]'
# Lite version does not provide session.client/resource overloads
# it is more RAM-friendly, but requires explicit type annotations
python -m pip install 'types-boto3-lite[cleanrooms]'
# standalone installation
python -m pip install types-boto3-cleanrooms
```
<a id="how-to-uninstall"></a>
## How to uninstall
```bash
python -m pip uninstall -y types-boto3-cleanrooms
```
<a id="usage"></a>
## Usage
<a id="vscode"></a>
### VSCode
- Install
[Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
- Install
[Pylance extension](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
- Set `Pylance` as your Python Language Server
- Install `types-boto3[cleanrooms]` in your environment:
```bash
python -m pip install 'types-boto3[cleanrooms]'
```
Both type checking and code completion should now work. No explicit type
annotations are required; write your `boto3` code as usual.
<a id="pycharm"></a>
### PyCharm
> ⚠️ Due to slow PyCharm performance on `Literal` overloads (issue
> [PY-40997](https://youtrack.jetbrains.com/issue/PY-40997)), it is recommended
> to use [types-boto3-lite](https://pypi.org/project/types-boto3-lite/) until
> the issue is resolved.
> ⚠️ If you experience slow performance and high CPU usage, try disabling the
> `PyCharm` type checker and using [mypy](https://github.com/python/mypy) or
> [pyright](https://github.com/microsoft/pyright) instead.
> ⚠️ To continue using `PyCharm` type checker, you can try to replace
> `types-boto3` with
> [types-boto3-lite](https://pypi.org/project/types-boto3-lite/):
```bash
pip uninstall types-boto3
pip install types-boto3-lite
```
Install `types-boto3[cleanrooms]` in your environment:
```bash
python -m pip install 'types-boto3[cleanrooms]'
```
Both type checking and code completion should now work.
<a id="emacs"></a>
### Emacs
- Install `types-boto3` with services you use in your environment:
```bash
python -m pip install 'types-boto3[cleanrooms]'
```
- Install [use-package](https://github.com/jwiegley/use-package),
[lsp](https://github.com/emacs-lsp/lsp-mode/),
[company](https://github.com/company-mode/company-mode) and
[flycheck](https://github.com/flycheck/flycheck) packages
- Install [lsp-pyright](https://github.com/emacs-lsp/lsp-pyright) package
```elisp
(use-package lsp-pyright
:ensure t
:hook (python-mode . (lambda ()
(require 'lsp-pyright)
(lsp))) ; or lsp-deferred
:init (when (executable-find "python3")
(setq lsp-pyright-python-executable-cmd "python3"))
)
```
- Make sure Emacs uses the environment where you have installed `types-boto3`
Type checking should now work. No explicit type annotations are required;
write your `boto3` code as usual.
<a id="sublime-text"></a>
### Sublime Text
- Install `types-boto3[cleanrooms]` with services you use in your environment:
```bash
python -m pip install 'types-boto3[cleanrooms]'
```
- Install [LSP-pyright](https://github.com/sublimelsp/LSP-pyright) package
Type checking should now work. No explicit type annotations are required;
write your `boto3` code as usual.
<a id="other-ides"></a>
### Other IDEs
Not tested, but as long as your IDE supports `mypy` or `pyright`, everything
should work.
<a id="mypy"></a>
### mypy
- Install `mypy`: `python -m pip install mypy`
- Install `types-boto3[cleanrooms]` in your environment:
```bash
python -m pip install 'types-boto3[cleanrooms]'
```
Type checking should now work. No explicit type annotations are required;
write your `boto3` code as usual.
<a id="pyright"></a>
### pyright
- Install `pyright`: `npm i -g pyright`
- Install `types-boto3[cleanrooms]` in your environment:
```bash
python -m pip install 'types-boto3[cleanrooms]'
```
Optionally, you can install `types-boto3` to `typings` directory.
Type checking should now work. No explicit type annotations are required;
write your `boto3` code as usual.
<a id="pylint-compatibility"></a>
### Pylint compatibility
It is completely safe to use the `TYPE_CHECKING` flag to avoid a
`types-boto3-cleanrooms` dependency in production. However, `pylint` has an
issue where it complains about undefined variables. To fix it, set all types
to `object` in non-`TYPE_CHECKING` mode.
```python
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from types_boto3_ec2 import EC2Client, EC2ServiceResource
from types_boto3_ec2.waiters import BundleTaskCompleteWaiter
from types_boto3_ec2.paginators import DescribeVolumesPaginator
else:
EC2Client = object
EC2ServiceResource = object
BundleTaskCompleteWaiter = object
DescribeVolumesPaginator = object
...
```
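The pattern works because `typing.TYPE_CHECKING` is `False` at runtime: the stub imports are seen only by type checkers, while the `object` fallbacks keep the names defined for `pylint`. A self-contained sketch (using a stub import that is not installed at runtime):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Resolved only by mypy/pyright; never executed, so the stub
    # package is not needed in production environments.
    from types_boto3_ec2 import EC2Client
else:
    EC2Client = object

def describe(client: "EC2Client") -> str:
    # The string annotation satisfies type checkers and pylint alike
    return type(client).__name__
```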
<a id="explicit-type-annotations"></a>
## Explicit type annotations
<a id="client-annotations"></a>
### Client annotations
`CleanRoomsServiceClient` provides annotations for
`boto3.client("cleanrooms")`.
```python
from boto3.session import Session
from types_boto3_cleanrooms import CleanRoomsServiceClient
client: CleanRoomsServiceClient = Session().client("cleanrooms")
# now client usage is checked by mypy and IDE should provide code completion
```
<a id="paginators-annotations"></a>
### Paginators annotations
The `types_boto3_cleanrooms.paginator` module contains type annotations for
all paginators.
```python
from boto3.session import Session
from types_boto3_cleanrooms import CleanRoomsServiceClient
from types_boto3_cleanrooms.paginator import (
ListAnalysisTemplatesPaginator,
ListCollaborationAnalysisTemplatesPaginator,
ListCollaborationChangeRequestsPaginator,
ListCollaborationConfiguredAudienceModelAssociationsPaginator,
ListCollaborationIdNamespaceAssociationsPaginator,
ListCollaborationPrivacyBudgetTemplatesPaginator,
ListCollaborationPrivacyBudgetsPaginator,
ListCollaborationsPaginator,
ListConfiguredAudienceModelAssociationsPaginator,
ListConfiguredTableAssociationsPaginator,
ListConfiguredTablesPaginator,
ListIdMappingTablesPaginator,
ListIdNamespaceAssociationsPaginator,
ListMembersPaginator,
ListMembershipsPaginator,
ListPrivacyBudgetTemplatesPaginator,
ListPrivacyBudgetsPaginator,
ListProtectedJobsPaginator,
ListProtectedQueriesPaginator,
ListSchemasPaginator,
)
client: CleanRoomsServiceClient = Session().client("cleanrooms")
# Explicit type annotations are optional here
# Types should be correctly discovered by mypy and IDEs
list_analysis_templates_paginator: ListAnalysisTemplatesPaginator = client.get_paginator(
"list_analysis_templates"
)
list_collaboration_analysis_templates_paginator: ListCollaborationAnalysisTemplatesPaginator = (
client.get_paginator("list_collaboration_analysis_templates")
)
list_collaboration_change_requests_paginator: ListCollaborationChangeRequestsPaginator = (
client.get_paginator("list_collaboration_change_requests")
)
list_collaboration_configured_audience_model_associations_paginator: ListCollaborationConfiguredAudienceModelAssociationsPaginator = client.get_paginator(
"list_collaboration_configured_audience_model_associations"
)
list_collaboration_id_namespace_associations_paginator: ListCollaborationIdNamespaceAssociationsPaginator = client.get_paginator(
"list_collaboration_id_namespace_associations"
)
list_collaboration_privacy_budget_templates_paginator: ListCollaborationPrivacyBudgetTemplatesPaginator = client.get_paginator(
"list_collaboration_privacy_budget_templates"
)
list_collaboration_privacy_budgets_paginator: ListCollaborationPrivacyBudgetsPaginator = (
client.get_paginator("list_collaboration_privacy_budgets")
)
list_collaborations_paginator: ListCollaborationsPaginator = client.get_paginator(
"list_collaborations"
)
list_configured_audience_model_associations_paginator: ListConfiguredAudienceModelAssociationsPaginator = client.get_paginator(
"list_configured_audience_model_associations"
)
list_configured_table_associations_paginator: ListConfiguredTableAssociationsPaginator = (
client.get_paginator("list_configured_table_associations")
)
list_configured_tables_paginator: ListConfiguredTablesPaginator = client.get_paginator(
"list_configured_tables"
)
list_id_mapping_tables_paginator: ListIdMappingTablesPaginator = client.get_paginator(
"list_id_mapping_tables"
)
list_id_namespace_associations_paginator: ListIdNamespaceAssociationsPaginator = (
client.get_paginator("list_id_namespace_associations")
)
list_members_paginator: ListMembersPaginator = client.get_paginator("list_members")
list_memberships_paginator: ListMembershipsPaginator = client.get_paginator("list_memberships")
list_privacy_budget_templates_paginator: ListPrivacyBudgetTemplatesPaginator = client.get_paginator(
"list_privacy_budget_templates"
)
list_privacy_budgets_paginator: ListPrivacyBudgetsPaginator = client.get_paginator(
"list_privacy_budgets"
)
list_protected_jobs_paginator: ListProtectedJobsPaginator = client.get_paginator(
"list_protected_jobs"
)
list_protected_queries_paginator: ListProtectedQueriesPaginator = client.get_paginator(
"list_protected_queries"
)
list_schemas_paginator: ListSchemasPaginator = client.get_paginator("list_schemas")
```
<a id="literals"></a>
### Literals
The `types_boto3_cleanrooms.literals` module contains literals extracted from
shapes that can be used in user code for type checking.
The full list of `CleanRoomsService` literals can be found in the
[docs](https://youtype.github.io/types_boto3_docs/types_boto3_cleanrooms/literals/).
```python
from types_boto3_cleanrooms.literals import AccessBudgetTypeType
def check_value(value: AccessBudgetTypeType) -> bool: ...
```
<a id="type-definitions"></a>
### Type definitions
The `types_boto3_cleanrooms.type_defs` module contains structures and shapes
assembled into typed dictionaries and unions for additional type checking.
The full list of `CleanRoomsService` TypeDefs can be found in the
[docs](https://youtype.github.io/types_boto3_docs/types_boto3_cleanrooms/type_defs/).
```python
# TypedDict usage example
from types_boto3_cleanrooms.type_defs import AccessBudgetDetailsTypeDef
def get_value() -> AccessBudgetDetailsTypeDef:
return {
"startTime": ...,
}
```
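Under the hood these TypeDefs are ordinary `typing.TypedDict` classes, so a type checker validates both the key names and the value types of the literal dicts you return or pass to client methods. A stand-in with a single field (the real `AccessBudgetDetailsTypeDef` in `types_boto3_cleanrooms.type_defs` has more keys):

```python
from datetime import datetime
from typing import TypedDict

# Illustrative subset of the generated shape; total=False marks
# every key as optional, matching how most response fields behave.
class AccessBudgetDetails(TypedDict, total=False):
    startTime: datetime

def get_value() -> AccessBudgetDetails:
    # A misspelled key or a str value here would fail type checking
    return {"startTime": datetime(2026, 1, 1)}
```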
<a id="how-it-works"></a>
## How it works
Fully automated
[mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder) carefully
generates type annotations for each service, patiently waiting for `boto3`
updates. It delivers drop-in type annotations for you and makes sure that:
- All available `boto3` services are covered.
- Each public class and method of every `boto3` service gets valid type
annotations extracted from `botocore` schemas.
- Type annotations include up-to-date documentation.
- A link to the documentation is provided for every method.
- Code is processed by [ruff](https://docs.astral.sh/ruff/) for readability.
<a id="what's-new"></a>
## What's new
<a id="implemented-features"></a>
### Implemented features
- Fully type annotated `boto3`, `botocore`, `aiobotocore` and `aioboto3`
libraries
- `mypy`, `pyright`, `VSCode`, `PyCharm`, `Sublime Text` and `Emacs`
compatibility
- `Client`, `ServiceResource`, `Resource`, `Waiter` and `Paginator` type
  annotations for each service
- Generated `TypeDefs` for each service
- Generated `Literals` for each service
- Auto discovery of types for `boto3.client` and `boto3.resource` calls
- Auto discovery of types for `session.client` and `session.resource` calls
- Auto discovery of types for `client.get_waiter` and `client.get_paginator`
calls
- Auto discovery of types for `ServiceResource` and `Resource` collections
- Auto discovery of types for `aiobotocore.Session.create_client` calls
<a id="latest-changes"></a>
### Latest changes
Builder changelog can be found in
[Releases](https://github.com/youtype/mypy_boto3_builder/releases).
<a id="versioning"></a>
## Versioning
`types-boto3-cleanrooms` version is the same as related `boto3` version and
follows
[Python Packaging version specifiers](https://packaging.python.org/en/latest/specifications/version-specifiers/).
<a id="thank-you"></a>
## Thank you
- [Allie Fitter](https://github.com/alliefitter) for
  [boto3-type-annotations](https://pypi.org/project/boto3-type-annotations/);
  this package builds on his work
- [black](https://github.com/psf/black) developers for an awesome formatting
tool
- [Timothy Edmund Crosley](https://github.com/timothycrosley) for
[isort](https://github.com/PyCQA/isort) and how flexible it is
- [mypy](https://github.com/python/mypy) developers for doing all dirty work
for us
- [pyright](https://github.com/microsoft/pyright) team for the new era of typed
Python
<a id="documentation"></a>
## Documentation
Type annotations for all services can be found in the
[boto3 docs](https://youtype.github.io/types_boto3_docs/types_boto3_cleanrooms/).
<a id="support-and-contributing"></a>
## Support and contributing
This package is auto-generated. Please report bugs and request new features
in the [mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder/issues/)
repository.
| text/markdown | null | Vlad Emelianov <vlad.emelianov.nz@gmail.com> | null | null | null | boto3, cleanrooms, boto3-stubs, type-annotations, mypy, typeshed, autocomplete | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Environment :: Console",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"... | [
"any"
] | null | null | >=3.9 | [] | [] | [] | [
"typing-extensions; python_version < \"3.12\""
] | [] | [] | [] | [
"Homepage, https://github.com/youtype/mypy_boto3_builder",
"Documentation, https://youtype.github.io/types_boto3_docs/types_boto3_cleanrooms/",
"Source, https://github.com/youtype/mypy_boto3_builder",
"Tracker, https://github.com/youtype/mypy_boto3_builder/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:01:05.768323 | types_boto3_cleanrooms-1.42.52.tar.gz | 52,490 | a3/bd/987a2188b0161f90d4216f8d93713f1467aca80e86c647ae081207e89a9a/types_boto3_cleanrooms-1.42.52.tar.gz | source | sdist | null | false | 551e858519a9d3ab4d2c27b2bf6c04a0 | c06fe5b22f5b56db51ae422100fc192949a096212b2a18a5ea004e23db531649 | a3bd987a2188b0161f90d4216f8d93713f1467aca80e86c647ae081207e89a9a | MIT | [
"LICENSE"
] | 234 |
2.4 | mypy-boto3-connect | 1.42.52 | Type annotations for boto3 Connect 1.42.52 service generated with mypy-boto3-builder 8.12.0 | <a id="mypy-boto3-connect"></a>
# mypy-boto3-connect
[](https://pypi.org/project/mypy-boto3-connect/)
[](https://pypi.org/project/mypy-boto3-connect/)
[](https://youtype.github.io/boto3_stubs_docs/)
[](https://pypistats.org/packages/mypy-boto3-connect)

Type annotations for [boto3 Connect 1.42.52](https://pypi.org/project/boto3/)
compatible with [VSCode](https://code.visualstudio.com/),
[PyCharm](https://www.jetbrains.com/pycharm/),
[Emacs](https://www.gnu.org/software/emacs/),
[Sublime Text](https://www.sublimetext.com/),
[mypy](https://github.com/python/mypy),
[pyright](https://github.com/microsoft/pyright) and other tools.
Generated with
[mypy-boto3-builder 8.12.0](https://github.com/youtype/mypy_boto3_builder).
More information can be found on
[boto3-stubs](https://pypi.org/project/boto3-stubs/) page and in
[mypy-boto3-connect docs](https://youtype.github.io/boto3_stubs_docs/mypy_boto3_connect/).
See how it helps you find and fix potential bugs:

- [mypy-boto3-connect](#mypy-boto3-connect)
- [How to install](#how-to-install)
- [Generate locally (recommended)](<#generate-locally-(recommended)>)
- [VSCode extension](#vscode-extension)
- [From PyPI with pip](#from-pypi-with-pip)
- [How to uninstall](#how-to-uninstall)
- [Usage](#usage)
- [VSCode](#vscode)
- [PyCharm](#pycharm)
- [Emacs](#emacs)
- [Sublime Text](#sublime-text)
- [Other IDEs](#other-ides)
- [mypy](#mypy)
- [pyright](#pyright)
- [Pylint compatibility](#pylint-compatibility)
- [Explicit type annotations](#explicit-type-annotations)
- [Client annotations](#client-annotations)
- [Paginators annotations](#paginators-annotations)
- [Literals](#literals)
- [Type definitions](#type-definitions)
- [How it works](#how-it-works)
- [What's new](#what's-new)
- [Implemented features](#implemented-features)
- [Latest changes](#latest-changes)
- [Versioning](#versioning)
- [Thank you](#thank-you)
- [Documentation](#documentation)
- [Support and contributing](#support-and-contributing)
<a id="how-to-install"></a>
## How to install
<a id="generate-locally-(recommended)"></a>
### Generate locally (recommended)
You can generate type annotations for `boto3` package locally with
`mypy-boto3-builder`. Use
[uv](https://docs.astral.sh/uv/getting-started/installation/) for build
isolation.
1. Run mypy-boto3-builder in your package root directory:
`uvx --with 'boto3==1.42.52' mypy-boto3-builder`
2. Select `boto3-stubs` AWS SDK.
3. Add `Connect` service.
4. Use provided commands to install generated packages.
<a id="vscode-extension"></a>
### VSCode extension
Add
[AWS Boto3](https://marketplace.visualstudio.com/items?itemName=Boto3typed.boto3-ide)
extension to your VSCode and run `AWS boto3: Quick Start` command.
Click `Modify` and select `boto3 common` and `Connect`.
<a id="from-pypi-with-pip"></a>
### From PyPI with pip
Install `boto3-stubs` for `Connect` service.
```bash
# install with boto3 type annotations
python -m pip install 'boto3-stubs[connect]'
# Lite version does not provide session.client/resource overloads
# it is more RAM-friendly, but requires explicit type annotations
python -m pip install 'boto3-stubs-lite[connect]'
# standalone installation
python -m pip install mypy-boto3-connect
```
<a id="how-to-uninstall"></a>
## How to uninstall
```bash
python -m pip uninstall -y mypy-boto3-connect
```
<a id="usage"></a>
## Usage
<a id="vscode"></a>
### VSCode
- Install
[Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
- Install
[Pylance extension](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
- Set `Pylance` as your Python Language Server
- Install `boto3-stubs[connect]` in your environment:
```bash
python -m pip install 'boto3-stubs[connect]'
```
Both type checking and code completion should now work. No explicit type
annotations are required; write your `boto3` code as usual.
<a id="pycharm"></a>
### PyCharm
> ⚠️ Due to slow PyCharm performance on `Literal` overloads (issue
> [PY-40997](https://youtrack.jetbrains.com/issue/PY-40997)), it is recommended
> to use [boto3-stubs-lite](https://pypi.org/project/boto3-stubs-lite/) until
> the issue is resolved.
> ⚠️ If you experience slow performance and high CPU usage, try to disable
> `PyCharm` type checker and use [mypy](https://github.com/python/mypy) or
> [pyright](https://github.com/microsoft/pyright) instead.
> ⚠️ To continue using `PyCharm` type checker, you can try to replace
> `boto3-stubs` with
> [boto3-stubs-lite](https://pypi.org/project/boto3-stubs-lite/):
```bash
pip uninstall boto3-stubs
pip install boto3-stubs-lite
```
Install `boto3-stubs[connect]` in your environment:
```bash
python -m pip install 'boto3-stubs[connect]'
```
Both type checking and code completion should now work.
<a id="emacs"></a>
### Emacs
- Install `boto3-stubs` with services you use in your environment:
```bash
python -m pip install 'boto3-stubs[connect]'
```
- Install [use-package](https://github.com/jwiegley/use-package),
[lsp](https://github.com/emacs-lsp/lsp-mode/),
[company](https://github.com/company-mode/company-mode) and
[flycheck](https://github.com/flycheck/flycheck) packages
- Install [lsp-pyright](https://github.com/emacs-lsp/lsp-pyright) package
```elisp
(use-package lsp-pyright
  :ensure t
  :hook (python-mode . (lambda ()
                         (require 'lsp-pyright)
                         (lsp)))  ; or lsp-deferred
  :init (when (executable-find "python3")
          (setq lsp-pyright-python-executable-cmd "python3"))
  )
```
- Make sure Emacs uses the environment where you have installed `boto3-stubs`
Type checking should now work. No explicit type annotations are required; write
your `boto3` code as usual.
<a id="sublime-text"></a>
### Sublime Text
- Install `boto3-stubs[connect]` in your environment:
```bash
python -m pip install 'boto3-stubs[connect]'
```
- Install [LSP-pyright](https://github.com/sublimelsp/LSP-pyright) package
Type checking should now work. No explicit type annotations are required; write
your `boto3` code as usual.
<a id="other-ides"></a>
### Other IDEs
Not tested, but as long as your IDE supports `mypy` or `pyright`, everything
should work.
<a id="mypy"></a>
### mypy
- Install `mypy`: `python -m pip install mypy`
- Install `boto3-stubs[connect]` in your environment:
```bash
python -m pip install 'boto3-stubs[connect]'
```
Type checking should now work. No explicit type annotations are required; write
your `boto3` code as usual.
<a id="pyright"></a>
### pyright
- Install `pyright`: `npm i -g pyright`
- Install `boto3-stubs[connect]` in your environment:
```bash
python -m pip install 'boto3-stubs[connect]'
```
Optionally, you can install `boto3-stubs` into the `typings` directory.
Type checking should now work. No explicit type annotations are required; write
your `boto3` code as usual.
<a id="pylint-compatibility"></a>
### Pylint compatibility
It is safe to use the `TYPE_CHECKING` flag to avoid a `mypy-boto3-connect`
dependency in production. However, `pylint` has an issue where it complains
about undefined variables. To fix it, set all types to `object` in
non-`TYPE_CHECKING` mode.
```python
from typing import TYPE_CHECKING
if TYPE_CHECKING:
    from mypy_boto3_ec2 import EC2Client, EC2ServiceResource
    from mypy_boto3_ec2.waiters import BundleTaskCompleteWaiter
    from mypy_boto3_ec2.paginators import DescribeVolumesPaginator
else:
    EC2Client = object
    EC2ServiceResource = object
    BundleTaskCompleteWaiter = object
    DescribeVolumesPaginator = object
...
```
<a id="explicit-type-annotations"></a>
## Explicit type annotations
<a id="client-annotations"></a>
### Client annotations
`ConnectClient` provides annotations for `boto3.client("connect")`.
```python
from boto3.session import Session
from mypy_boto3_connect import ConnectClient
client: ConnectClient = Session().client("connect")
# now client usage is checked by mypy and IDE should provide code completion
```
<a id="paginators-annotations"></a>
### Paginators annotations
`mypy_boto3_connect.paginator` module contains type annotations for all
paginators.
```python
from boto3.session import Session
from mypy_boto3_connect import ConnectClient
from mypy_boto3_connect.paginator import (
    GetMetricDataPaginator,
    ListAgentStatusesPaginator,
    ListApprovedOriginsPaginator,
    ListAuthenticationProfilesPaginator,
    ListBotsPaginator,
    ListChildHoursOfOperationsPaginator,
    ListContactEvaluationsPaginator,
    ListContactFlowModuleAliasesPaginator,
    ListContactFlowModuleVersionsPaginator,
    ListContactFlowModulesPaginator,
    ListContactFlowVersionsPaginator,
    ListContactFlowsPaginator,
    ListContactReferencesPaginator,
    ListDataTableAttributesPaginator,
    ListDataTablePrimaryValuesPaginator,
    ListDataTableValuesPaginator,
    ListDataTablesPaginator,
    ListDefaultVocabulariesPaginator,
    ListEntitySecurityProfilesPaginator,
    ListEvaluationFormVersionsPaginator,
    ListEvaluationFormsPaginator,
    ListFlowAssociationsPaginator,
    ListHoursOfOperationOverridesPaginator,
    ListHoursOfOperationsPaginator,
    ListInstanceAttributesPaginator,
    ListInstanceStorageConfigsPaginator,
    ListInstancesPaginator,
    ListIntegrationAssociationsPaginator,
    ListLambdaFunctionsPaginator,
    ListLexBotsPaginator,
    ListPhoneNumbersPaginator,
    ListPhoneNumbersV2Paginator,
    ListPredefinedAttributesPaginator,
    ListPromptsPaginator,
    ListQueueQuickConnectsPaginator,
    ListQueuesPaginator,
    ListQuickConnectsPaginator,
    ListRoutingProfileManualAssignmentQueuesPaginator,
    ListRoutingProfileQueuesPaginator,
    ListRoutingProfilesPaginator,
    ListRulesPaginator,
    ListSecurityKeysPaginator,
    ListSecurityProfileApplicationsPaginator,
    ListSecurityProfileFlowModulesPaginator,
    ListSecurityProfilePermissionsPaginator,
    ListSecurityProfilesPaginator,
    ListTaskTemplatesPaginator,
    ListTestCasesPaginator,
    ListTrafficDistributionGroupUsersPaginator,
    ListTrafficDistributionGroupsPaginator,
    ListUseCasesPaginator,
    ListUserHierarchyGroupsPaginator,
    ListUserProficienciesPaginator,
    ListUsersPaginator,
    ListViewVersionsPaginator,
    ListViewsPaginator,
    ListWorkspacePagesPaginator,
    ListWorkspacesPaginator,
    SearchAgentStatusesPaginator,
    SearchAvailablePhoneNumbersPaginator,
    SearchContactFlowModulesPaginator,
    SearchContactFlowsPaginator,
    SearchContactsPaginator,
    SearchDataTablesPaginator,
    SearchHoursOfOperationOverridesPaginator,
    SearchHoursOfOperationsPaginator,
    SearchPredefinedAttributesPaginator,
    SearchPromptsPaginator,
    SearchQueuesPaginator,
    SearchQuickConnectsPaginator,
    SearchResourceTagsPaginator,
    SearchRoutingProfilesPaginator,
    SearchSecurityProfilesPaginator,
    SearchTestCasesPaginator,
    SearchUserHierarchyGroupsPaginator,
    SearchUsersPaginator,
    SearchViewsPaginator,
    SearchVocabulariesPaginator,
    SearchWorkspaceAssociationsPaginator,
    SearchWorkspacesPaginator,
)
client: ConnectClient = Session().client("connect")
# Explicit type annotations are optional here
# Types should be correctly discovered by mypy and IDEs
get_metric_data_paginator: GetMetricDataPaginator = client.get_paginator("get_metric_data")
list_agent_statuses_paginator: ListAgentStatusesPaginator = client.get_paginator(
    "list_agent_statuses"
)
list_approved_origins_paginator: ListApprovedOriginsPaginator = client.get_paginator(
    "list_approved_origins"
)
list_authentication_profiles_paginator: ListAuthenticationProfilesPaginator = client.get_paginator(
    "list_authentication_profiles"
)
list_bots_paginator: ListBotsPaginator = client.get_paginator("list_bots")
list_child_hours_of_operations_paginator: ListChildHoursOfOperationsPaginator = (
    client.get_paginator("list_child_hours_of_operations")
)
list_contact_evaluations_paginator: ListContactEvaluationsPaginator = client.get_paginator(
    "list_contact_evaluations"
)
list_contact_flow_module_aliases_paginator: ListContactFlowModuleAliasesPaginator = (
    client.get_paginator("list_contact_flow_module_aliases")
)
list_contact_flow_module_versions_paginator: ListContactFlowModuleVersionsPaginator = (
    client.get_paginator("list_contact_flow_module_versions")
)
list_contact_flow_modules_paginator: ListContactFlowModulesPaginator = client.get_paginator(
    "list_contact_flow_modules"
)
list_contact_flow_versions_paginator: ListContactFlowVersionsPaginator = client.get_paginator(
    "list_contact_flow_versions"
)
list_contact_flows_paginator: ListContactFlowsPaginator = client.get_paginator("list_contact_flows")
list_contact_references_paginator: ListContactReferencesPaginator = client.get_paginator(
    "list_contact_references"
)
list_data_table_attributes_paginator: ListDataTableAttributesPaginator = client.get_paginator(
    "list_data_table_attributes"
)
list_data_table_primary_values_paginator: ListDataTablePrimaryValuesPaginator = (
    client.get_paginator("list_data_table_primary_values")
)
list_data_table_values_paginator: ListDataTableValuesPaginator = client.get_paginator(
    "list_data_table_values"
)
list_data_tables_paginator: ListDataTablesPaginator = client.get_paginator("list_data_tables")
list_default_vocabularies_paginator: ListDefaultVocabulariesPaginator = client.get_paginator(
    "list_default_vocabularies"
)
list_entity_security_profiles_paginator: ListEntitySecurityProfilesPaginator = client.get_paginator(
    "list_entity_security_profiles"
)
list_evaluation_form_versions_paginator: ListEvaluationFormVersionsPaginator = client.get_paginator(
    "list_evaluation_form_versions"
)
list_evaluation_forms_paginator: ListEvaluationFormsPaginator = client.get_paginator(
    "list_evaluation_forms"
)
list_flow_associations_paginator: ListFlowAssociationsPaginator = client.get_paginator(
    "list_flow_associations"
)
list_hours_of_operation_overrides_paginator: ListHoursOfOperationOverridesPaginator = (
    client.get_paginator("list_hours_of_operation_overrides")
)
list_hours_of_operations_paginator: ListHoursOfOperationsPaginator = client.get_paginator(
    "list_hours_of_operations"
)
list_instance_attributes_paginator: ListInstanceAttributesPaginator = client.get_paginator(
    "list_instance_attributes"
)
list_instance_storage_configs_paginator: ListInstanceStorageConfigsPaginator = client.get_paginator(
    "list_instance_storage_configs"
)
list_instances_paginator: ListInstancesPaginator = client.get_paginator("list_instances")
list_integration_associations_paginator: ListIntegrationAssociationsPaginator = (
    client.get_paginator("list_integration_associations")
)
list_lambda_functions_paginator: ListLambdaFunctionsPaginator = client.get_paginator(
    "list_lambda_functions"
)
list_lex_bots_paginator: ListLexBotsPaginator = client.get_paginator("list_lex_bots")
list_phone_numbers_paginator: ListPhoneNumbersPaginator = client.get_paginator("list_phone_numbers")
list_phone_numbers_v2_paginator: ListPhoneNumbersV2Paginator = client.get_paginator(
    "list_phone_numbers_v2"
)
list_predefined_attributes_paginator: ListPredefinedAttributesPaginator = client.get_paginator(
    "list_predefined_attributes"
)
list_prompts_paginator: ListPromptsPaginator = client.get_paginator("list_prompts")
list_queue_quick_connects_paginator: ListQueueQuickConnectsPaginator = client.get_paginator(
    "list_queue_quick_connects"
)
list_queues_paginator: ListQueuesPaginator = client.get_paginator("list_queues")
list_quick_connects_paginator: ListQuickConnectsPaginator = client.get_paginator(
    "list_quick_connects"
)
list_routing_profile_manual_assignment_queues_paginator: ListRoutingProfileManualAssignmentQueuesPaginator = client.get_paginator(
    "list_routing_profile_manual_assignment_queues"
)
list_routing_profile_queues_paginator: ListRoutingProfileQueuesPaginator = client.get_paginator(
    "list_routing_profile_queues"
)
list_routing_profiles_paginator: ListRoutingProfilesPaginator = client.get_paginator(
    "list_routing_profiles"
)
list_rules_paginator: ListRulesPaginator = client.get_paginator("list_rules")
list_security_keys_paginator: ListSecurityKeysPaginator = client.get_paginator("list_security_keys")
list_security_profile_applications_paginator: ListSecurityProfileApplicationsPaginator = (
    client.get_paginator("list_security_profile_applications")
)
list_security_profile_flow_modules_paginator: ListSecurityProfileFlowModulesPaginator = (
    client.get_paginator("list_security_profile_flow_modules")
)
list_security_profile_permissions_paginator: ListSecurityProfilePermissionsPaginator = (
    client.get_paginator("list_security_profile_permissions")
)
list_security_profiles_paginator: ListSecurityProfilesPaginator = client.get_paginator(
    "list_security_profiles"
)
list_task_templates_paginator: ListTaskTemplatesPaginator = client.get_paginator(
    "list_task_templates"
)
list_test_cases_paginator: ListTestCasesPaginator = client.get_paginator("list_test_cases")
list_traffic_distribution_group_users_paginator: ListTrafficDistributionGroupUsersPaginator = (
    client.get_paginator("list_traffic_distribution_group_users")
)
list_traffic_distribution_groups_paginator: ListTrafficDistributionGroupsPaginator = (
    client.get_paginator("list_traffic_distribution_groups")
)
list_use_cases_paginator: ListUseCasesPaginator = client.get_paginator("list_use_cases")
list_user_hierarchy_groups_paginator: ListUserHierarchyGroupsPaginator = client.get_paginator(
    "list_user_hierarchy_groups"
)
list_user_proficiencies_paginator: ListUserProficienciesPaginator = client.get_paginator(
    "list_user_proficiencies"
)
list_users_paginator: ListUsersPaginator = client.get_paginator("list_users")
list_view_versions_paginator: ListViewVersionsPaginator = client.get_paginator("list_view_versions")
list_views_paginator: ListViewsPaginator = client.get_paginator("list_views")
list_workspace_pages_paginator: ListWorkspacePagesPaginator = client.get_paginator(
    "list_workspace_pages"
)
list_workspaces_paginator: ListWorkspacesPaginator = client.get_paginator("list_workspaces")
search_agent_statuses_paginator: SearchAgentStatusesPaginator = client.get_paginator(
    "search_agent_statuses"
)
search_available_phone_numbers_paginator: SearchAvailablePhoneNumbersPaginator = (
    client.get_paginator("search_available_phone_numbers")
)
search_contact_flow_modules_paginator: SearchContactFlowModulesPaginator = client.get_paginator(
    "search_contact_flow_modules"
)
search_contact_flows_paginator: SearchContactFlowsPaginator = client.get_paginator(
    "search_contact_flows"
)
search_contacts_paginator: SearchContactsPaginator = client.get_paginator("search_contacts")
search_data_tables_paginator: SearchDataTablesPaginator = client.get_paginator("search_data_tables")
search_hours_of_operation_overrides_paginator: SearchHoursOfOperationOverridesPaginator = (
    client.get_paginator("search_hours_of_operation_overrides")
)
search_hours_of_operations_paginator: SearchHoursOfOperationsPaginator = client.get_paginator(
    "search_hours_of_operations"
)
search_predefined_attributes_paginator: SearchPredefinedAttributesPaginator = client.get_paginator(
    "search_predefined_attributes"
)
search_prompts_paginator: SearchPromptsPaginator = client.get_paginator("search_prompts")
search_queues_paginator: SearchQueuesPaginator = client.get_paginator("search_queues")
search_quick_connects_paginator: SearchQuickConnectsPaginator = client.get_paginator(
    "search_quick_connects"
)
search_resource_tags_paginator: SearchResourceTagsPaginator = client.get_paginator(
    "search_resource_tags"
)
search_routing_profiles_paginator: SearchRoutingProfilesPaginator = client.get_paginator(
    "search_routing_profiles"
)
search_security_profiles_paginator: SearchSecurityProfilesPaginator = client.get_paginator(
    "search_security_profiles"
)
search_test_cases_paginator: SearchTestCasesPaginator = client.get_paginator("search_test_cases")
search_user_hierarchy_groups_paginator: SearchUserHierarchyGroupsPaginator = client.get_paginator(
    "search_user_hierarchy_groups"
)
search_users_paginator: SearchUsersPaginator = client.get_paginator("search_users")
search_views_paginator: SearchViewsPaginator = client.get_paginator("search_views")
search_vocabularies_paginator: SearchVocabulariesPaginator = client.get_paginator(
    "search_vocabularies"
)
search_workspace_associations_paginator: SearchWorkspaceAssociationsPaginator = (
    client.get_paginator("search_workspace_associations")
)
search_workspaces_paginator: SearchWorkspacesPaginator = client.get_paginator("search_workspaces")
```
<a id="literals"></a>
### Literals
`mypy_boto3_connect.literals` module contains literals extracted from shapes
that can be used in user code for type checking.
Full list of `Connect` Literals can be found in
[docs](https://youtype.github.io/boto3_stubs_docs/mypy_boto3_connect/literals/).
```python
from mypy_boto3_connect.literals import AccessTypeType
def check_value(value: AccessTypeType) -> bool: ...
```
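Under the hood, each generated literal is a plain `typing.Literal` alias, so a type checker rejects any string outside the allowed set, while at runtime the alias carries no overhead. A minimal self-contained sketch of the pattern — the member values here are illustrative stand-ins, not the real `AccessTypeType` members:

```python
from typing import Literal, get_args

# Illustrative stand-in for a generated literal alias
AccessTypeType = Literal["PUBLIC", "PRIVATE"]


def check_value(value: AccessTypeType) -> bool:
    # At runtime a Literal alias is only an annotation, so validate explicitly
    return value in get_args(AccessTypeType)


print(check_value("PUBLIC"))  # True
```

A type checker such as `mypy` would flag `check_value("TYPO")` as an error even though it runs fine at runtime.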
<a id="type-definitions"></a>
### Type definitions
`mypy_boto3_connect.type_defs` module contains structures and shapes assembled
into typed dictionaries and unions for additional type checking.
Full list of `Connect` TypeDefs can be found in
[docs](https://youtype.github.io/boto3_stubs_docs/mypy_boto3_connect/type_defs/).
```python
# TypedDict usage example
from mypy_boto3_connect.type_defs import ActionSummaryTypeDef
def get_value() -> ActionSummaryTypeDef:
    return {
        "ActionType": ...,
    }
```
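Generated TypeDefs are ordinary `TypedDict` classes, so they type-check dictionary literals without adding any runtime dependency. A simplified, self-contained sketch of the pattern — the field set and value below are reduced for illustration; the real `ActionSummaryTypeDef` lives in `mypy_boto3_connect.type_defs`:

```python
from typing import TypedDict


class ActionSummaryTypeDef(TypedDict):
    # Reduced, illustrative field set; the generated class has more fields
    ActionType: str


def get_value() -> ActionSummaryTypeDef:
    # A type checker verifies this dict literal against the declared fields
    return {"ActionType": "CREATE_TASK"}  # value is illustrative


print(get_value()["ActionType"])  # CREATE_TASK
```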
<a id="how-it-works"></a>
## How it works
Fully automated
[mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder) carefully
generates type annotations for each service, patiently waiting for `boto3`
updates. It delivers drop-in type annotations for you and makes sure that:
- All available `boto3` services are covered.
- Each public class and method of every `boto3` service gets valid type
annotations extracted from `botocore` schemas.
- Type annotations include up-to-date documentation.
- Link to documentation is provided for every method.
- Code is processed by [ruff](https://docs.astral.sh/ruff/) for readability.
<a id="what's-new"></a>
## What's new
<a id="implemented-features"></a>
### Implemented features
- Fully type annotated `boto3`, `botocore`, `aiobotocore` and `aioboto3`
libraries
- `mypy`, `pyright`, `VSCode`, `PyCharm`, `Sublime Text` and `Emacs`
compatibility
- `Client`, `ServiceResource`, `Resource`, `Waiter` and `Paginator` type
  annotations for each service
- Generated `TypeDefs` for each service
- Generated `Literals` for each service
- Auto discovery of types for `boto3.client` and `boto3.resource` calls
- Auto discovery of types for `session.client` and `session.resource` calls
- Auto discovery of types for `client.get_waiter` and `client.get_paginator`
calls
- Auto discovery of types for `ServiceResource` and `Resource` collections
- Auto discovery of types for `aiobotocore.Session.create_client` calls
<a id="latest-changes"></a>
### Latest changes
Builder changelog can be found in
[Releases](https://github.com/youtype/mypy_boto3_builder/releases).
<a id="versioning"></a>
## Versioning
`mypy-boto3-connect` version matches the corresponding `boto3` version and
follows
[Python Packaging version specifiers](https://packaging.python.org/en/latest/specifications/version-specifiers/).
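Because the stub version tracks the `boto3` version, pinning both to the same release keeps the annotations in sync with runtime behavior; a sketch of a `requirements.txt` fragment (the exact version shown is only an example):

```
boto3==1.42.52
boto3-stubs[connect]==1.42.52
```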
<a id="thank-you"></a>
## Thank you
- [Allie Fitter](https://github.com/alliefitter) for
  [boto3-type-annotations](https://pypi.org/project/boto3-type-annotations/);
  this package is built on top of his work
- [black](https://github.com/psf/black) developers for an awesome formatting
tool
- [Timothy Edmund Crosley](https://github.com/timothycrosley) for
[isort](https://github.com/PyCQA/isort) and how flexible it is
- [mypy](https://github.com/python/mypy) developers for doing all dirty work
for us
- [pyright](https://github.com/microsoft/pyright) team for the new era of typed
Python
<a id="documentation"></a>
## Documentation
Type annotations for all services can be found in
[boto3 docs](https://youtype.github.io/boto3_stubs_docs/mypy_boto3_connect/)
<a id="support-and-contributing"></a>
## Support and contributing
This package is auto-generated. Please report any bugs or request new features
in the
[mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder/issues/)
repository.
| text/markdown | null | Vlad Emelianov <vlad.emelianov.nz@gmail.com> | null | null | null | boto3, connect, boto3-stubs, type-annotations, mypy, typeshed, autocomplete | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Environment :: Console",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"... | [
"any"
] | null | null | >=3.9 | [] | [] | [] | [
"typing-extensions; python_version < \"3.12\""
] | [] | [] | [] | [
"Homepage, https://github.com/youtype/mypy_boto3_builder",
"Documentation, https://youtype.github.io/boto3_stubs_docs/mypy_boto3_connect/",
"Source, https://github.com/youtype/mypy_boto3_builder",
"Tracker, https://github.com/youtype/mypy_boto3_builder/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:01:03.976480 | mypy_boto3_connect-1.42.52.tar.gz | 165,688 | 1a/94/9f7a6cc6ed92d780e9423aab111ecae52aac4125aa2e9b7df0e2ba3017c9/mypy_boto3_connect-1.42.52.tar.gz | source | sdist | null | false | 20d1fc086db1580ecde05d1840890236 | 78075922eadb2c783197e82c6c3f3acd14a515cd7c45c8c385d9d3837b2d25fa | 1a949f7a6cc6ed92d780e9423aab111ecae52aac4125aa2e9b7df0e2ba3017c9 | MIT | [
"LICENSE"
] | 1,814 |
2.4 | falcon-mcp | 0.6.0 | CrowdStrike Falcon MCP Server | 

# falcon-mcp
[](https://badge.fury.io/py/falcon-mcp)
[](https://pypi.org/project/falcon-mcp/)
[](https://opensource.org/licenses/MIT)
**falcon-mcp** is a Model Context Protocol (MCP) server that connects AI agents with the CrowdStrike Falcon platform, powering intelligent security analysis in your agentic workflows. It delivers programmatic access to essential security capabilities—including detections, incidents, and behaviors—establishing the foundation for advanced security operations and automation.
> [!IMPORTANT]
> **🚧 Public Preview**: This project is currently in public preview and under active development. Features and functionality may change before the stable 1.0 release. While we encourage exploration and testing, please avoid production deployments. We welcome your feedback through [GitHub Issues](https://github.com/crowdstrike/falcon-mcp/issues) to help shape the final release.
## Table of Contents
- [API Credentials \& Required Scopes](#api-credentials--required-scopes)
- [Setting Up CrowdStrike API Credentials](#setting-up-crowdstrike-api-credentials)
- [Required API Scopes by Module](#required-api-scopes-by-module)
- [Available Modules, Tools \& Resources](#available-modules-tools--resources)
- [Cloud Security Module](#cloud-security-module)
- [Core Functionality (Built into Server)](#core-functionality-built-into-server)
- [Detections Module](#detections-module)
- [Discover Module](#discover-module)
- [Hosts Module](#hosts-module)
- [Identity Protection Module](#identity-protection-module)
- [Incidents Module](#incidents-module)
- [NGSIEM Module](#ngsiem-module)
- [Intel Module](#intel-module)
- [Scheduled Reports Module](#scheduled-reports-module)
- [Sensor Usage Module](#sensor-usage-module)
- [Serverless Module](#serverless-module)
- [Spotlight Module](#spotlight-module)
- [Installation \& Setup](#installation--setup)
- [Prerequisites](#prerequisites)
- [Environment Configuration](#environment-configuration)
- [Installation](#installation)
- [Usage](#usage)
- [Command Line](#command-line)
- [Module Configuration](#module-configuration)
- [Additional Command Line Options](#additional-command-line-options)
- [As a Library](#as-a-library)
- [Running Examples](#running-examples)
- [Container Usage](#container-usage)
- [Using Pre-built Image (Recommended)](#using-pre-built-image-recommended)
- [Building Locally (Development)](#building-locally-development)
- [Editor/Assistant Integration](#editorassistant-integration)
- [Using `uvx` (recommended)](#using-uvx-recommended)
- [With Module Selection](#with-module-selection)
- [Using Individual Environment Variables](#using-individual-environment-variables)
- [Docker Version](#docker-version)
- [Additional Deployment Options](#additional-deployment-options)
- [Amazon Bedrock AgentCore](#amazon-bedrock-agentcore)
- [Google Cloud (Cloud Run and Vertex AI)](#google-cloud-cloud-run-and-vertex-ai)
- [Contributing](#contributing)
- [Getting Started for Contributors](#getting-started-for-contributors)
- [Running Tests](#running-tests)
- [Developer Documentation](#developer-documentation)
- [License](#license)
- [Support](#support)
## API Credentials & Required Scopes
### Setting Up CrowdStrike API Credentials
Before using the Falcon MCP Server, you need to create API credentials in your CrowdStrike console:
1. **Log into your CrowdStrike console**
2. **Navigate to Support > API Clients and Keys**
3. **Click "Add new API client"**
4. **Configure your API client**:
- **Client Name**: Choose a descriptive name (e.g., "Falcon MCP Server")
- **Description**: Optional description for your records
- **API Scopes**: Select the scopes based on which modules you plan to use (see below)
> **Important**: Ensure your API client has the necessary scopes for the modules you plan to use. You can always update scopes later in the CrowdStrike console.
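Once the API client is created, the console shows a client ID and secret that the server reads from the environment. The variable names below are an assumption based on common convention — confirm them against the Environment Configuration section of this README:

```bash
# Variable names assumed; verify against this README's Environment Configuration section
export FALCON_CLIENT_ID="your-client-id"
export FALCON_CLIENT_SECRET="your-client-secret"
export FALCON_BASE_URL="https://api.crowdstrike.com"  # use your Falcon cloud region's base URL
```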
### Required API Scopes by Module
The Falcon MCP Server supports different modules, each requiring specific API scopes:
| Module | Required API Scopes | Purpose |
| - | - | - |
| **Cloud Security** | `Falcon Container Image:read` | Find and analyze Kubernetes container inventory and container image vulnerabilities |
| **Core** | _No additional scopes_ | Basic connectivity and system information |
| **Detections** | `Alerts:read` | Find and analyze detections to understand malicious activity |
| **Discover** | `Assets:read` | Search and analyze application inventory across your environment |
| **Hosts** | `Hosts:read` | Manage and query host/device information |
| **Identity Protection** | `Identity Protection Entities:read`<br>`Identity Protection Timeline:read`<br>`Identity Protection Detections:read`<br>`Identity Protection Assessment:read`<br>`Identity Protection GraphQL:write` | Comprehensive entity investigation and identity protection analysis |
| **Incidents** | `Incidents:read` | Analyze security incidents and coordinated activities |
| **NGSIEM** | `NGSIEM:read`<br>`NGSIEM:write` | Execute CQL queries against Next-Gen SIEM |
| **Intel** | `Actors (Falcon Intelligence):read`<br>`Indicators (Falcon Intelligence):read`<br>`Reports (Falcon Intelligence):read` | Research threat actors, IOCs, and intelligence reports |
| **Scheduled Reports** | `Scheduled Reports:read` | Get details about scheduled reports and searches, run reports on demand, and download report files |
| **Sensor Usage** | `Sensor Usage:read` | Access and analyze sensor usage data |
| **Serverless** | `Falcon Container Image:read` | Search for vulnerabilities in serverless functions across cloud service providers |
| **Spotlight** | `Vulnerabilities:read` | Manage and analyze vulnerability data and security assessments |
## Available Modules, Tools & Resources
> [!IMPORTANT]
> ⚠️ **Important Note on FQL Guide Resources**: Several modules include FQL (Falcon Query Language) guide resources that provide comprehensive query documentation and examples. While these resources are designed to assist AI assistants and users with query construction, **FQL has nuanced syntax requirements and field-specific behaviors** that may not be immediately apparent. AI-generated FQL filters should be **tested and validated** before use in production environments. We recommend starting with simple queries and gradually building complexity while verifying results in a test environment first.
**About Tools & Resources**: This server provides both tools (actions you can perform) and resources (documentation and context). Tools execute operations like searching for detections or analyzing threats, while resources provide comprehensive documentation like FQL query guides that AI assistants can reference for context without requiring tool calls.
### Cloud Security Module
**API Scopes Required**:
- `Falcon Container Image:read`
Provides tools for accessing and analyzing CrowdStrike Cloud Security resources:
- `falcon_search_kubernetes_containers`: Search for containers from CrowdStrike Kubernetes & Containers inventory
- `falcon_count_kubernetes_containers`: Count containers matching filter criteria from CrowdStrike Kubernetes & Containers inventory
- `falcon_search_images_vulnerabilities`: Search for image vulnerabilities from CrowdStrike Image Assessments
**Resources**:
- `falcon://cloud/kubernetes-containers/fql-guide`: Comprehensive FQL documentation and examples for Kubernetes container searches
- `falcon://cloud/images-vulnerabilities/fql-guide`: Comprehensive FQL documentation and examples for image vulnerability searches
**Use Cases**: Kubernetes container inventory management, container image vulnerability analysis
### Core Functionality (Built into Server)
**API Scopes**: _None required beyond basic API access_
The server provides core tools for interacting with the Falcon API:
- `falcon_check_connectivity`: Check connectivity to the Falcon API
- `falcon_list_enabled_modules`: Lists enabled modules in the falcon-mcp server
> These modules are determined by the `--modules` [flag](#module-configuration) when starting the server. If no modules are specified, all available modules are enabled.
- `falcon_list_modules`: Lists all available modules in the falcon-mcp server
### Detections Module
**API Scopes Required**: `Alerts:read`
Provides tools for accessing and analyzing CrowdStrike Falcon detections:
- `falcon_search_detections`: Find and analyze detections to understand malicious activity in your environment
- `falcon_get_detection_details`: Get comprehensive detection details for specific detection IDs to understand security threats
**Resources**:
- `falcon://detections/search/fql-guide`: Comprehensive FQL documentation and examples for detection searches
**Use Cases**: Threat hunting, security analysis, incident response, malware investigation
### Discover Module
**API Scopes Required**: `Assets:read`
Provides tools for accessing and managing CrowdStrike Falcon Discover applications and unmanaged assets:
- `falcon_search_applications`: Search for applications in your CrowdStrike environment
- `falcon_search_unmanaged_assets`: Search for unmanaged assets (systems without Falcon sensor installed) that have been discovered by managed systems
**Resources**:
- `falcon://discover/applications/fql-guide`: Comprehensive FQL documentation and examples for application searches
- `falcon://discover/hosts/fql-guide`: Comprehensive FQL documentation and examples for unmanaged assets searches
**Use Cases**: Application inventory management, software asset management, license compliance, vulnerability assessment, unmanaged asset discovery, security gap analysis
### Hosts Module
**API Scopes Required**: `Hosts:read`
Provides tools for accessing and managing CrowdStrike Falcon hosts/devices:
- `falcon_search_hosts`: Search for hosts in your CrowdStrike environment
- `falcon_get_host_details`: Retrieve detailed information for specified host device IDs
**Resources**:
- `falcon://hosts/search/fql-guide`: Comprehensive FQL documentation and examples for host searches
**Use Cases**: Asset management, device inventory, host monitoring, compliance reporting
### Identity Protection Module
**API Scopes Required**: `Identity Protection Entities:read`, `Identity Protection Timeline:read`, `Identity Protection Detections:read`, `Identity Protection Assessment:read`, `Identity Protection GraphQL:write`
Provides tools for accessing and managing CrowdStrike Falcon Identity Protection capabilities:
- `idp_investigate_entity`: Entity investigation tool for analyzing users, endpoints, and other entities with support for timeline analysis, relationship mapping, and risk assessment
**Use Cases**: Entity investigation, identity protection analysis, user behavior analysis, endpoint security assessment, relationship mapping, risk assessment
### Incidents Module
**API Scopes Required**: `Incidents:read`
Provides tools for accessing and analyzing CrowdStrike Falcon incidents:
- `falcon_show_crowd_score`: View calculated CrowdScores and security posture metrics for your environment
- `falcon_search_incidents`: Find and analyze security incidents to understand coordinated activity in your environment
- `falcon_get_incident_details`: Get comprehensive incident details to understand attack patterns and coordinated activities
- `falcon_search_behaviors`: Find and analyze behaviors to understand suspicious activity in your environment
- `falcon_get_behavior_details`: Get detailed behavior information to understand attack techniques and tactics
**Resources**:
- `falcon://incidents/crowd-score/fql-guide`: Comprehensive FQL documentation for CrowdScore queries
- `falcon://incidents/search/fql-guide`: Comprehensive FQL documentation and examples for incident searches
- `falcon://incidents/behaviors/fql-guide`: Comprehensive FQL documentation and examples for behavior searches
**Use Cases**: Incident management, threat assessment, attack pattern analysis, security posture monitoring
### NGSIEM Module
**API Scopes Required**: `NGSIEM:read`, `NGSIEM:write`
Provides tools for executing CQL queries against CrowdStrike's Next-Gen SIEM:
- `search_ngsiem`: Execute a CQL query against Next-Gen SIEM repositories
> [!IMPORTANT]
> This tool executes pre-written CQL queries only. It does **not** assist with query construction or provide CQL syntax guidance. Users must supply complete, valid CQL queries. For CQL documentation, refer to the [CrowdStrike LogScale documentation](https://library.humio.com/).
**Use Cases**: Log search and analysis, event correlation, threat hunting with custom CQL queries, security monitoring
### Intel Module
**API Scopes Required**:
- `Actors (Falcon Intelligence):read`
- `Indicators (Falcon Intelligence):read`
- `Reports (Falcon Intelligence):read`
Provides tools for accessing and analyzing CrowdStrike Intelligence:
- `falcon_search_actors`: Research threat actors and adversary groups tracked by CrowdStrike intelligence
- `falcon_search_indicators`: Search for threat indicators and indicators of compromise (IOCs) from CrowdStrike intelligence
- `falcon_search_reports`: Access CrowdStrike intelligence publications and threat reports
- `falcon_get_mitre_report`: Generate MITRE ATT&CK reports for threat actors, providing detailed tactics, techniques, and procedures (TTPs) in JSON or CSV format
**Resources**:
- `falcon://intel/actors/fql-guide`: Comprehensive FQL documentation and examples for threat actor searches
- `falcon://intel/indicators/fql-guide`: Comprehensive FQL documentation and examples for indicator searches
- `falcon://intel/reports/fql-guide`: Comprehensive FQL documentation and examples for intelligence report searches
**Use Cases**: Threat intelligence research, adversary tracking, IOC analysis, threat landscape assessment, MITRE ATT&CK framework analysis
### Sensor Usage Module
**API Scopes Required**: `Sensor Usage:read`
Provides tools for accessing and analyzing CrowdStrike Falcon sensor usage data:
- `falcon_search_sensor_usage`: Search for weekly sensor usage data in your CrowdStrike environment
**Resources**:
- `falcon://sensor-usage/weekly/fql-guide`: Comprehensive FQL documentation and examples for sensor usage searches
**Use Cases**: Sensor deployment monitoring, license utilization analysis, sensor health tracking
### Scheduled Reports Module
**API Scopes Required**: `Scheduled Reports:read`
Provides tools for accessing and managing CrowdStrike Falcon scheduled reports and scheduled searches:
- `falcon_search_scheduled_reports`: Search for scheduled reports and searches in your CrowdStrike environment
- `falcon_launch_scheduled_report`: Launch a scheduled report on demand outside of its recurring schedule
- `falcon_search_report_executions`: Search for report executions to track status and results
- `falcon_download_report_execution`: Download generated report files
**Resources**:
- `falcon://scheduled-reports/search/fql-guide`: Comprehensive FQL documentation for searching scheduled report entities
- `falcon://scheduled-reports/executions/search/fql-guide`: Comprehensive FQL documentation for searching report executions
**Use Cases**: Automated report management, report execution monitoring, scheduled search analysis, report download automation
### Serverless Module
**API Scopes Required**: `Falcon Container Image:read`
Provides tools for accessing and managing CrowdStrike Falcon Serverless Vulnerabilities:
- `falcon_search_serverless_vulnerabilities`: Search for vulnerabilities in your serverless functions across all cloud service providers
**Resources**:
- `falcon://serverless/vulnerabilities/fql-guide`: Comprehensive FQL documentation and examples for serverless vulnerability searches
**Use Cases**: Serverless security assessment, vulnerability management, cloud security monitoring
### Spotlight Module
**API Scopes Required**: `Vulnerabilities:read`
Provides tools for accessing and managing CrowdStrike Spotlight vulnerabilities:
- `falcon_search_vulnerabilities`: Search for vulnerabilities in your CrowdStrike environment
**Resources**:
- `falcon://spotlight/vulnerabilities/fql-guide`: Comprehensive FQL documentation and examples for vulnerability searches
**Use Cases**: Vulnerability management, security assessments, compliance reporting, risk analysis, patch prioritization
## Installation & Setup
### Prerequisites
- Python 3.11 or higher
- [`uv`](https://docs.astral.sh/uv/) or pip
- CrowdStrike Falcon API credentials (see above)
### Environment Configuration
You can configure your CrowdStrike API credentials in several ways:
#### Use a `.env` File
If you prefer using a `.env` file, you have several options:
##### Option 1: Copy from cloned repository (if you've cloned it)
```bash
cp .env.example .env
```
##### Option 2: Download the example file from GitHub
```bash
curl -o .env https://raw.githubusercontent.com/CrowdStrike/falcon-mcp/main/.env.example
```
##### Option 3: Create manually with the following content
```bash
# Required Configuration
FALCON_CLIENT_ID=your-client-id
FALCON_CLIENT_SECRET=your-client-secret
FALCON_BASE_URL=https://api.crowdstrike.com
# Optional Configuration (uncomment and modify as needed)
#FALCON_MCP_MODULES=detections,incidents,intel
#FALCON_MCP_TRANSPORT=stdio
#FALCON_MCP_DEBUG=false
#FALCON_MCP_HOST=127.0.0.1
#FALCON_MCP_PORT=8000
#FALCON_MCP_STATELESS_HTTP=false
#FALCON_MCP_API_KEY=your-api-key
```
#### Environment Variables
Alternatively, you can use environment variables directly.
Set the following environment variables in your shell:
```bash
# Required Configuration
export FALCON_CLIENT_ID="your-client-id"
export FALCON_CLIENT_SECRET="your-client-secret"
export FALCON_BASE_URL="https://api.crowdstrike.com"
# Optional Configuration
export FALCON_MCP_MODULES="detections,incidents,intel" # Comma-separated list (default: all modules)
export FALCON_MCP_TRANSPORT="stdio" # Transport method: stdio, sse, streamable-http
export FALCON_MCP_DEBUG="false" # Enable debug logging: true, false
export FALCON_MCP_HOST="127.0.0.1" # Host for HTTP transports
export FALCON_MCP_PORT="8000" # Port for HTTP transports
export FALCON_MCP_STATELESS_HTTP="false" # Stateless mode for scalable deployments
export FALCON_MCP_API_KEY="your-api-key" # API key for HTTP transport auth (x-api-key header)
```
**CrowdStrike API Region URLs:**
- **US-1 (Default)**: `https://api.crowdstrike.com`
- **US-2**: `https://api.us-2.crowdstrike.com`
- **EU-1**: `https://api.eu-1.crowdstrike.com`
- **US-GOV**: `https://api.laggar.gcw.crowdstrike.com`
### Installation
> [!NOTE]
> If you just want to interact with falcon-mcp via an agent chat interface rather than running the server itself, take a look at [Additional Deployment Options](#additional-deployment-options). Otherwise, continue with the installation steps below.
#### Install using uv
```bash
uv tool install falcon-mcp
```
#### Install using pip
```bash
pip install falcon-mcp
```
> [!TIP]
> If `falcon-mcp` isn't found, update your shell PATH.
For installation via code editors/assistants, see the [Editor/Assistant](#editorassistant-integration) section below.
## Usage
### Command Line
Run the server with default settings (stdio transport):
```bash
falcon-mcp
```
Run with SSE transport:
```bash
falcon-mcp --transport sse
```
Run with streamable-http transport:
```bash
falcon-mcp --transport streamable-http
```
Run with streamable-http transport on custom port:
```bash
falcon-mcp --transport streamable-http --host 0.0.0.0 --port 8080
```
Run with stateless HTTP mode (for scalable deployments like AWS AgentCore):
```bash
falcon-mcp --transport streamable-http --stateless-http
```
Run with API key authentication (recommended for HTTP transports):
```bash
falcon-mcp --transport streamable-http --api-key your-secret-key
```
> **Security Note**: When using HTTP transports (`sse` or `streamable-http`), consider enabling API key authentication via `--api-key` or `FALCON_MCP_API_KEY` to protect the endpoint. This is a self-generated key (any secure string you create) that ensures only authorized clients with the matching key can access the MCP server when running remotely. This is separate from your CrowdStrike API credentials.
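As a quick illustration, a client calling a remotely hosted endpoint sends the key in the `x-api-key` header. The sketch below is hypothetical — the URL, payload, and key are placeholders, not part of the server's documented API:

```python
import json
from urllib.request import Request

MCP_URL = "http://127.0.0.1:8000/mcp"  # assumed default host/port
API_KEY = "your-secret-key"            # the same value passed via --api-key


def build_request(payload: dict) -> Request:
    """Attach the self-generated MCP key as the x-api-key header."""
    return Request(
        MCP_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "x-api-key": API_KEY},
        method="POST",
    )


req = build_request({"jsonrpc": "2.0", "method": "ping", "id": 1})
# The request now carries the key; a server started with --api-key
# rejects calls whose x-api-key header does not match.
```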
### Module Configuration
The Falcon MCP Server supports multiple ways to specify which modules to enable:
#### 1. Command Line Arguments (highest priority)
Specify modules using comma-separated lists:
```bash
# Enable specific modules
falcon-mcp --modules detections,incidents,intel,spotlight,idp
# Enable only one module
falcon-mcp --modules detections
```
#### 2. Environment Variable (fallback)
Set the `FALCON_MCP_MODULES` environment variable:
```bash
# Export environment variable
export FALCON_MCP_MODULES=detections,incidents,intel,spotlight,idp
falcon-mcp
# Or set inline
FALCON_MCP_MODULES=detections,incidents,intel,spotlight,idp falcon-mcp
```
#### 3. Default Behavior (all modules)
If no modules are specified via command line or environment variable, all available modules are enabled by default.
**Module Priority Order:**
1. Command line `--modules` argument (overrides all)
2. `FALCON_MCP_MODULES` environment variable (fallback)
3. All modules (default when none specified)
### Additional Command Line Options
For all available options:
```bash
falcon-mcp --help
```
### As a Library
```python
from falcon_mcp.server import FalconMCPServer
# Create and run the server
server = FalconMCPServer(
base_url="https://api.us-2.crowdstrike.com", # Optional, defaults to env var
debug=True, # Optional, enable debug logging
enabled_modules=["detections", "incidents", "spotlight", "idp"], # Optional, defaults to all modules
api_key="your-api-key" # Optional: API key for HTTP transport auth
)
# Run with stdio transport (default)
server.run()
# Or run with SSE transport
server.run("sse")
# Or run with streamable-http transport
server.run("streamable-http")
# Or run with streamable-http transport on custom host/port
server.run("streamable-http", host="0.0.0.0", port=8080)
```
#### Direct Credentials (Secret Management Integration)
For enterprise deployments using secret management systems (HashiCorp Vault, AWS Secrets Manager, etc.), you can pass credentials directly instead of using environment variables:
```python
from falcon_mcp.server import FalconMCPServer
# Example: Retrieve credentials from a secrets manager
# client_id = vault.read_secret("crowdstrike/client_id")
# client_secret = vault.read_secret("crowdstrike/client_secret")
# Create server with direct credentials
server = FalconMCPServer(
client_id="your-client-id", # Or retrieved from vault/secrets manager
client_secret="your-client-secret", # Or retrieved from vault/secrets manager
base_url="https://api.us-2.crowdstrike.com", # Optional
enabled_modules=["detections", "incidents"] # Optional
)
server.run()
```
> **Note**: When both direct parameters and environment variables are available, direct parameters take precedence.
### Running Examples
```bash
# Run with stdio transport
python examples/basic_usage.py
# Run with SSE transport
python examples/sse_usage.py
# Run with streamable-http transport
python examples/streamable_http_usage.py
```
## Container Usage
The Falcon MCP Server is available as a pre-built container image for easy deployment:
### Using Pre-built Image (Recommended)
```bash
# Pull the latest pre-built image
docker pull quay.io/crowdstrike/falcon-mcp:latest
# Run with .env file (recommended)
docker run -i --rm --env-file /path/to/.env quay.io/crowdstrike/falcon-mcp:latest
# Run with .env file and SSE transport
docker run --rm -p 8000:8000 --env-file /path/to/.env \
quay.io/crowdstrike/falcon-mcp:latest --transport sse --host 0.0.0.0
# Run with .env file and streamable-http transport
docker run --rm -p 8000:8000 --env-file /path/to/.env \
quay.io/crowdstrike/falcon-mcp:latest --transport streamable-http --host 0.0.0.0
# Run with .env file and custom port
docker run --rm -p 8080:8080 --env-file /path/to/.env \
quay.io/crowdstrike/falcon-mcp:latest --transport streamable-http --host 0.0.0.0 --port 8080
# Run with .env file and specific modules (stdio transport - requires -i flag)
docker run -i --rm --env-file /path/to/.env \
quay.io/crowdstrike/falcon-mcp:latest --modules detections,incidents,spotlight,idp
# Use a specific version instead of latest (stdio transport - requires -i flag)
docker run -i --rm --env-file /path/to/.env \
quay.io/crowdstrike/falcon-mcp:1.2.3
# Alternative: Individual environment variables (stdio transport - requires -i flag)
docker run -i --rm -e FALCON_CLIENT_ID=your_client_id -e FALCON_CLIENT_SECRET=your_secret \
quay.io/crowdstrike/falcon-mcp:latest
```
### Building Locally (Development)
For development or customization purposes, you can build the image locally:
```bash
# Build the Docker image
docker build -t falcon-mcp .
# Run the locally built image
docker run --rm -e FALCON_CLIENT_ID=your_client_id -e FALCON_CLIENT_SECRET=your_secret falcon-mcp
```
> [!NOTE]
> When using HTTP transports in Docker, always set `--host 0.0.0.0` to allow external connections to the container.
## Editor/Assistant Integration
You can integrate the Falcon MCP server with your editor or AI assistant. Here are configuration examples for popular MCP clients:
### Using `uvx` (recommended)
```json
{
"mcpServers": {
"falcon-mcp": {
"command": "uvx",
"args": [
"--env-file",
"/path/to/.env",
"falcon-mcp"
]
}
}
}
```
### With Module Selection
```json
{
"mcpServers": {
"falcon-mcp": {
"command": "uvx",
"args": [
"--env-file",
"/path/to/.env",
"falcon-mcp",
"--modules",
"detections,incidents,intel"
]
}
}
}
```
### Using Individual Environment Variables
```json
{
"mcpServers": {
"falcon-mcp": {
"command": "uvx",
"args": ["falcon-mcp"],
"env": {
"FALCON_CLIENT_ID": "your-client-id",
"FALCON_CLIENT_SECRET": "your-client-secret",
"FALCON_BASE_URL": "https://api.crowdstrike.com"
}
}
}
}
```
### Docker Version
```json
{
"mcpServers": {
"falcon-mcp-docker": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"--env-file",
"/full/path/to/.env",
"quay.io/crowdstrike/falcon-mcp:latest"
]
}
}
}
```
> [!NOTE]
> The `-i` flag is required when using the default stdio transport.
## Additional Deployment Options
### Amazon Bedrock AgentCore
To deploy the MCP Server as a tool in Amazon Bedrock AgentCore, please refer to the [following document](./docs/deployment/amazon_bedrock_agentcore.md).
### Google Cloud (Cloud Run and Vertex AI)
To deploy the MCP server as an agent within Cloud Run or Vertex AI Agent Engine (including for registration within Agentspace), refer to the [Google ADK example](./examples/adk/README.md).
### Gemini CLI
1. Install `uv`
1. `gemini extensions install https://github.com/CrowdStrike/falcon-mcp`
1. Copy a valid `.env` file to `~/.gemini/extensions/falcon-mcp/.env`
## Contributing
### Getting Started for Contributors
1. Clone the repository:
```bash
git clone https://github.com/CrowdStrike/falcon-mcp.git
cd falcon-mcp
```
2. Install in development mode:
```bash
# Create .venv and install dependencies
uv sync --all-extras
# Activate the venv
source .venv/bin/activate
```
> [!IMPORTANT]
> This project uses [Conventional Commits](https://www.conventionalcommits.org/) for automated releases and semantic versioning. Please follow the commit message format outlined in our [Contributing Guide](docs/CONTRIBUTING.md) when submitting changes.
### Running Tests
```bash
# Run all unit tests
pytest
# Run end-to-end tests (requires API credentials)
pytest --run-e2e tests/e2e/
# Run end-to-end tests with verbose output (note: -s is required to see output)
pytest --run-e2e -v -s tests/e2e/
# Run integration tests (requires API credentials)
pytest --run-integration tests/integration/
# Run integration tests with verbose output
pytest --run-integration -v -s tests/integration/
# Run integration tests for a specific module
pytest --run-integration tests/integration/test_detections.py
```
> **Note**: The `-s` flag is required to see detailed output from E2E and integration tests.
#### Integration Tests
Integration tests make real API calls to validate FalconPy operation names, HTTP methods, and response schemas. They catch issues that mocked unit tests cannot detect:
- Incorrect FalconPy operation names (typos)
- HTTP method mismatches (POST body vs GET query parameters)
- Two-step search patterns not returning full details
- API response schema changes
**Requirements**: Valid CrowdStrike API credentials must be configured (see [Environment Configuration](#environment-configuration)).
### Developer Documentation
- [Module Development Guide](docs/development/module_development.md): Instructions for implementing new modules
- [Resource Development Guide](docs/development/resource_development.md): Instructions for implementing resources
- [End-to-End Testing Guide](docs/development/e2e_testing.md): Guide for running and understanding E2E tests
- [Integration Testing Guide](docs/development/integration_testing.md): Guide for running integration tests with real API calls
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Support
This is a community-driven, open source project. While it is not an official CrowdStrike product, it is actively maintained by CrowdStrike and supported in collaboration with the open source developer community.
For more information, please see our [SUPPORT](SUPPORT.md) file.
| text/markdown | null | CrowdStrike <cloud-integrations@crowdstrike.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"crowdstrike-falconpy>=1.3.0",
"mcp<2.0.0,>=1.12.1",
"python-dotenv>=1.1.1",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"langchain-openai>=0.3.28; extra == \"dev\"",
"mcp-use[search]>=1.3.7; extra == \"dev\"",
"ruff>=0.12.5; extra ... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:01:01.856335 | falcon_mcp-0.6.0.tar.gz | 86,136 | f5/c0/feb93118c5ec2b31cd57c5f5f6aa5ba5799bb9f8507d5b39a6b9928f6235/falcon_mcp-0.6.0.tar.gz | source | sdist | null | false | c583363d5169fff670e12fac7c31cb6b | b0a4678b34f47ad7082854a5e0055b6ee9a6828063aee628fcfefd15db3be1f0 | f5c0feb93118c5ec2b31cd57c5f5f6aa5ba5799bb9f8507d5b39a6b9928f6235 | null | [
"LICENSE"
] | 862 |
2.4 | mypy-boto3-cleanrooms | 1.42.52 | Type annotations for boto3 CleanRoomsService 1.42.52 service generated with mypy-boto3-builder 8.12.0 | <a id="mypy-boto3-cleanrooms"></a>
# mypy-boto3-cleanrooms
[](https://pypi.org/project/mypy-boto3-cleanrooms/)
[](https://pypi.org/project/mypy-boto3-cleanrooms/)
[](https://youtype.github.io/boto3_stubs_docs/)
[](https://pypistats.org/packages/mypy-boto3-cleanrooms)

Type annotations for
[boto3 CleanRoomsService 1.42.52](https://pypi.org/project/boto3/) compatible
with [VSCode](https://code.visualstudio.com/),
[PyCharm](https://www.jetbrains.com/pycharm/),
[Emacs](https://www.gnu.org/software/emacs/),
[Sublime Text](https://www.sublimetext.com/),
[mypy](https://github.com/python/mypy),
[pyright](https://github.com/microsoft/pyright) and other tools.
Generated with
[mypy-boto3-builder 8.12.0](https://github.com/youtype/mypy_boto3_builder).
More information can be found on
[boto3-stubs](https://pypi.org/project/boto3-stubs/) page and in
[mypy-boto3-cleanrooms docs](https://youtype.github.io/boto3_stubs_docs/mypy_boto3_cleanrooms/).
See how it helps you find and fix potential bugs:

- [mypy-boto3-cleanrooms](#mypy-boto3-cleanrooms)
- [How to install](#how-to-install)
- [Generate locally (recommended)](<#generate-locally-(recommended)>)
- [VSCode extension](#vscode-extension)
- [From PyPI with pip](#from-pypi-with-pip)
- [How to uninstall](#how-to-uninstall)
- [Usage](#usage)
- [VSCode](#vscode)
- [PyCharm](#pycharm)
- [Emacs](#emacs)
- [Sublime Text](#sublime-text)
- [Other IDEs](#other-ides)
- [mypy](#mypy)
- [pyright](#pyright)
- [Pylint compatibility](#pylint-compatibility)
- [Explicit type annotations](#explicit-type-annotations)
- [Client annotations](#client-annotations)
- [Paginators annotations](#paginators-annotations)
- [Literals](#literals)
- [Type definitions](#type-definitions)
- [How it works](#how-it-works)
- [What's new](#what's-new)
- [Implemented features](#implemented-features)
- [Latest changes](#latest-changes)
- [Versioning](#versioning)
- [Thank you](#thank-you)
- [Documentation](#documentation)
- [Support and contributing](#support-and-contributing)
<a id="how-to-install"></a>
## How to install
<a id="generate-locally-(recommended)"></a>
### Generate locally (recommended)
You can generate type annotations for `boto3` package locally with
`mypy-boto3-builder`. Use
[uv](https://docs.astral.sh/uv/getting-started/installation/) for build
isolation.
1. Run mypy-boto3-builder in your package root directory:
`uvx --with 'boto3==1.42.52' mypy-boto3-builder`
2. Select `boto3-stubs` AWS SDK.
3. Add `CleanRoomsService` service.
4. Use provided commands to install generated packages.
<a id="vscode-extension"></a>
### VSCode extension
Add
[AWS Boto3](https://marketplace.visualstudio.com/items?itemName=Boto3typed.boto3-ide)
extension to your VSCode and run `AWS boto3: Quick Start` command.
Click `Modify` and select `boto3 common` and `CleanRoomsService`.
<a id="from-pypi-with-pip"></a>
### From PyPI with pip
Install `boto3-stubs` for `CleanRoomsService` service.
```bash
# install with boto3 type annotations
python -m pip install 'boto3-stubs[cleanrooms]'
# Lite version does not provide session.client/resource overloads
# it is more RAM-friendly, but requires explicit type annotations
python -m pip install 'boto3-stubs-lite[cleanrooms]'
# standalone installation
python -m pip install mypy-boto3-cleanrooms
```
<a id="how-to-uninstall"></a>
## How to uninstall
```bash
python -m pip uninstall -y mypy-boto3-cleanrooms
```
<a id="usage"></a>
## Usage
<a id="vscode"></a>
### VSCode
- Install
[Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
- Install
[Pylance extension](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
- Set `Pylance` as your Python Language Server
- Install `boto3-stubs[cleanrooms]` in your environment:
```bash
python -m pip install 'boto3-stubs[cleanrooms]'
```
Both type checking and code completion should now work. No explicit type
annotations required, write your `boto3` code as usual.
<a id="pycharm"></a>
### PyCharm
> ⚠️ Due to slow PyCharm performance on `Literal` overloads (issue
> [PY-40997](https://youtrack.jetbrains.com/issue/PY-40997)), it is recommended
> to use [boto3-stubs-lite](https://pypi.org/project/boto3-stubs-lite/) until
> the issue is resolved.
> ⚠️ If you experience slow performance and high CPU usage, try to disable
> `PyCharm` type checker and use [mypy](https://github.com/python/mypy) or
> [pyright](https://github.com/microsoft/pyright) instead.
> ⚠️ To continue using `PyCharm` type checker, you can try to replace
> `boto3-stubs` with
> [boto3-stubs-lite](https://pypi.org/project/boto3-stubs-lite/):
```bash
pip uninstall boto3-stubs
pip install boto3-stubs-lite
```
Install `boto3-stubs[cleanrooms]` in your environment:
```bash
python -m pip install 'boto3-stubs[cleanrooms]'
```
Both type checking and code completion should now work.
<a id="emacs"></a>
### Emacs
- Install `boto3-stubs` with services you use in your environment:
```bash
python -m pip install 'boto3-stubs[cleanrooms]'
```
- Install [use-package](https://github.com/jwiegley/use-package),
[lsp](https://github.com/emacs-lsp/lsp-mode/),
[company](https://github.com/company-mode/company-mode) and
[flycheck](https://github.com/flycheck/flycheck) packages
- Install [lsp-pyright](https://github.com/emacs-lsp/lsp-pyright) package
```elisp
(use-package lsp-pyright
:ensure t
:hook (python-mode . (lambda ()
(require 'lsp-pyright)
(lsp))) ; or lsp-deferred
:init (when (executable-find "python3")
(setq lsp-pyright-python-executable-cmd "python3"))
)
```
- Make sure emacs uses the environment where you have installed `boto3-stubs`
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="sublime-text"></a>
### Sublime Text
- Install `boto3-stubs[cleanrooms]` with services you use in your environment:
```bash
python -m pip install 'boto3-stubs[cleanrooms]'
```
- Install [LSP-pyright](https://github.com/sublimelsp/LSP-pyright) package
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="other-ides"></a>
### Other IDEs
Not tested, but as long as your IDE supports `mypy` or `pyright`, everything
should work.
<a id="mypy"></a>
### mypy
- Install `mypy`: `python -m pip install mypy`
- Install `boto3-stubs[cleanrooms]` in your environment:
```bash
python -m pip install 'boto3-stubs[cleanrooms]'
```
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="pyright"></a>
### pyright
- Install `pyright`: `npm i -g pyright`
- Install `boto3-stubs[cleanrooms]` in your environment:
```bash
python -m pip install 'boto3-stubs[cleanrooms]'
```
Optionally, you can install `boto3-stubs` to `typings` directory.
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="pylint-compatibility"></a>
### Pylint compatibility
It is safe to use the `TYPE_CHECKING` flag to avoid a `mypy-boto3-cleanrooms`
dependency in production. However, `pylint` has an issue where it reports these
names as undefined variables. To fix it, set all types to `object` in
non-`TYPE_CHECKING` mode.
```python
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from mypy_boto3_ec2 import EC2Client, EC2ServiceResource
from mypy_boto3_ec2.waiters import BundleTaskCompleteWaiter
from mypy_boto3_ec2.paginators import DescribeVolumesPaginator
else:
EC2Client = object
EC2ServiceResource = object
BundleTaskCompleteWaiter = object
DescribeVolumesPaginator = object
...
```
<a id="explicit-type-annotations"></a>
## Explicit type annotations
<a id="client-annotations"></a>
### Client annotations
`CleanRoomsServiceClient` provides annotations for
`boto3.client("cleanrooms")`.
```python
from boto3.session import Session
from mypy_boto3_cleanrooms import CleanRoomsServiceClient
client: CleanRoomsServiceClient = Session().client("cleanrooms")
# now client usage is checked by mypy and IDE should provide code completion
```
<a id="paginators-annotations"></a>
### Paginators annotations
`mypy_boto3_cleanrooms.paginator` module contains type annotations for all
paginators.
```python
from boto3.session import Session
from mypy_boto3_cleanrooms import CleanRoomsServiceClient
from mypy_boto3_cleanrooms.paginator import (
ListAnalysisTemplatesPaginator,
ListCollaborationAnalysisTemplatesPaginator,
ListCollaborationChangeRequestsPaginator,
ListCollaborationConfiguredAudienceModelAssociationsPaginator,
ListCollaborationIdNamespaceAssociationsPaginator,
ListCollaborationPrivacyBudgetTemplatesPaginator,
ListCollaborationPrivacyBudgetsPaginator,
ListCollaborationsPaginator,
ListConfiguredAudienceModelAssociationsPaginator,
ListConfiguredTableAssociationsPaginator,
ListConfiguredTablesPaginator,
ListIdMappingTablesPaginator,
ListIdNamespaceAssociationsPaginator,
ListMembersPaginator,
ListMembershipsPaginator,
ListPrivacyBudgetTemplatesPaginator,
ListPrivacyBudgetsPaginator,
ListProtectedJobsPaginator,
ListProtectedQueriesPaginator,
ListSchemasPaginator,
)
client: CleanRoomsServiceClient = Session().client("cleanrooms")
# Explicit type annotations are optional here
# Types should be correctly discovered by mypy and IDEs
list_analysis_templates_paginator: ListAnalysisTemplatesPaginator = client.get_paginator(
"list_analysis_templates"
)
list_collaboration_analysis_templates_paginator: ListCollaborationAnalysisTemplatesPaginator = (
client.get_paginator("list_collaboration_analysis_templates")
)
list_collaboration_change_requests_paginator: ListCollaborationChangeRequestsPaginator = (
client.get_paginator("list_collaboration_change_requests")
)
list_collaboration_configured_audience_model_associations_paginator: ListCollaborationConfiguredAudienceModelAssociationsPaginator = client.get_paginator(
"list_collaboration_configured_audience_model_associations"
)
list_collaboration_id_namespace_associations_paginator: ListCollaborationIdNamespaceAssociationsPaginator = client.get_paginator(
"list_collaboration_id_namespace_associations"
)
list_collaboration_privacy_budget_templates_paginator: ListCollaborationPrivacyBudgetTemplatesPaginator = client.get_paginator(
"list_collaboration_privacy_budget_templates"
)
list_collaboration_privacy_budgets_paginator: ListCollaborationPrivacyBudgetsPaginator = (
client.get_paginator("list_collaboration_privacy_budgets")
)
list_collaborations_paginator: ListCollaborationsPaginator = client.get_paginator(
"list_collaborations"
)
list_configured_audience_model_associations_paginator: ListConfiguredAudienceModelAssociationsPaginator = client.get_paginator(
"list_configured_audience_model_associations"
)
list_configured_table_associations_paginator: ListConfiguredTableAssociationsPaginator = (
client.get_paginator("list_configured_table_associations")
)
list_configured_tables_paginator: ListConfiguredTablesPaginator = client.get_paginator(
"list_configured_tables"
)
list_id_mapping_tables_paginator: ListIdMappingTablesPaginator = client.get_paginator(
"list_id_mapping_tables"
)
list_id_namespace_associations_paginator: ListIdNamespaceAssociationsPaginator = (
client.get_paginator("list_id_namespace_associations")
)
list_members_paginator: ListMembersPaginator = client.get_paginator("list_members")
list_memberships_paginator: ListMembershipsPaginator = client.get_paginator("list_memberships")
list_privacy_budget_templates_paginator: ListPrivacyBudgetTemplatesPaginator = client.get_paginator(
"list_privacy_budget_templates"
)
list_privacy_budgets_paginator: ListPrivacyBudgetsPaginator = client.get_paginator(
"list_privacy_budgets"
)
list_protected_jobs_paginator: ListProtectedJobsPaginator = client.get_paginator(
"list_protected_jobs"
)
list_protected_queries_paginator: ListProtectedQueriesPaginator = client.get_paginator(
"list_protected_queries"
)
list_schemas_paginator: ListSchemasPaginator = client.get_paginator("list_schemas")
```
<a id="literals"></a>
### Literals
`mypy_boto3_cleanrooms.literals` module contains literals extracted from shapes
that can be used in user code for type checking.
Full list of `CleanRoomsService` Literals can be found in
[docs](https://youtype.github.io/boto3_stubs_docs/mypy_boto3_cleanrooms/literals/).
```python
from mypy_boto3_cleanrooms.literals import AccessBudgetTypeType
def check_value(value: AccessBudgetTypeType) -> bool: ...
```
<a id="type-definitions"></a>
### Type definitions
`mypy_boto3_cleanrooms.type_defs` module contains structures and shapes
assembled into typed dictionaries and unions for additional type checking.
Full list of `CleanRoomsService` TypeDefs can be found in
[docs](https://youtype.github.io/boto3_stubs_docs/mypy_boto3_cleanrooms/type_defs/).
```python
# TypedDict usage example
from mypy_boto3_cleanrooms.type_defs import AccessBudgetDetailsTypeDef
def get_value() -> AccessBudgetDetailsTypeDef:
return {
"startTime": ...,
}
```
<a id="how-it-works"></a>
## How it works
Fully automated
[mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder) carefully
generates type annotations for each service, patiently waiting for `boto3`
updates. It delivers drop-in type annotations for you and makes sure that:
- All available `boto3` services are covered.
- Each public class and method of every `boto3` service gets valid type
annotations extracted from `botocore` schemas.
- Type annotations include up-to-date documentation.
- Link to documentation is provided for every method.
- Code is processed by [ruff](https://docs.astral.sh/ruff/) for readability.
<a id="what's-new"></a>
## What's new
<a id="implemented-features"></a>
### Implemented features
- Fully type annotated `boto3`, `botocore`, `aiobotocore` and `aioboto3`
libraries
- `mypy`, `pyright`, `VSCode`, `PyCharm`, `Sublime Text` and `Emacs`
compatibility
- `Client`, `ServiceResource`, `Resource`, `Waiter` and `Paginator` type
annotations for each service
- Generated `TypeDefs` for each service
- Generated `Literals` for each service
- Auto discovery of types for `boto3.client` and `boto3.resource` calls
- Auto discovery of types for `session.client` and `session.resource` calls
- Auto discovery of types for `client.get_waiter` and `client.get_paginator`
calls
- Auto discovery of types for `ServiceResource` and `Resource` collections
- Auto discovery of types for `aiobotocore.Session.create_client` calls
<a id="latest-changes"></a>
### Latest changes
Builder changelog can be found in
[Releases](https://github.com/youtype/mypy_boto3_builder/releases).
<a id="versioning"></a>
## Versioning
`mypy-boto3-cleanrooms` version is the same as related `boto3` version and
follows
[Python Packaging version specifiers](https://packaging.python.org/en/latest/specifications/version-specifiers/).
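Since the stub version tracks the matching `boto3` release, pinning both to the same version keeps annotations in sync with the runtime library (the version numbers below are illustrative):
```
# requirements.txt -- illustrative versions
boto3==1.42.52
mypy-boto3-cleanrooms==1.42.52
```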
<a id="thank-you"></a>
## Thank you
- [Allie Fitter](https://github.com/alliefitter) for
[boto3-type-annotations](https://pypi.org/project/boto3-type-annotations/),
this package is built on top of his work
- [black](https://github.com/psf/black) developers for an awesome formatting
tool
- [Timothy Edmund Crosley](https://github.com/timothycrosley) for
[isort](https://github.com/PyCQA/isort) and how flexible it is
- [mypy](https://github.com/python/mypy) developers for doing all the dirty work
for us
- [pyright](https://github.com/microsoft/pyright) team for the new era of typed
Python
<a id="documentation"></a>
## Documentation
Type annotations for all services can be found in the
[boto3 docs](https://youtype.github.io/boto3_stubs_docs/mypy_boto3_cleanrooms/).
<a id="support-and-contributing"></a>
## Support and contributing
This package is auto-generated. Please report any bugs or request new features
in the [mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder/issues/)
repository.
| text/markdown | null | Vlad Emelianov <vlad.emelianov.nz@gmail.com> | null | null | null | boto3, cleanrooms, boto3-stubs, type-annotations, mypy, typeshed, autocomplete | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Environment :: Console",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"... | [
"any"
] | null | null | >=3.9 | [] | [] | [] | [
"typing-extensions; python_version < \"3.12\""
] | [] | [] | [] | [
"Homepage, https://github.com/youtype/mypy_boto3_builder",
"Documentation, https://youtype.github.io/boto3_stubs_docs/mypy_boto3_cleanrooms/",
"Source, https://github.com/youtype/mypy_boto3_builder",
"Tracker, https://github.com/youtype/mypy_boto3_builder/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T22:01:00.780570 | mypy_boto3_cleanrooms-1.42.52.tar.gz | 52,343 | 91/23/915b816f1e90a5f84d7e1b4793eefe0bf31ae25e913c13d68c08bab9e6d1/mypy_boto3_cleanrooms-1.42.52.tar.gz | source | sdist | null | false | 8121830d0d5081a6178adc965567758c | 612603b5417848dcba6f53ee12f045ad12e30b33a719dd146cdf177bb1920e1f | 9123915b816f1e90a5f84d7e1b4793eefe0bf31ae25e913c13d68c08bab9e6d1 | MIT | [
"LICENSE"
] | 1,192 |
2.4 | model-library | 0.1.12 | Model Library for vals.ai | # Model Library
Open-source model library for interacting with a variety of LLM providers. Originally developed for internal use in [vals.ai](https://vals.ai/) benchmarks, it is designed to be a general-purpose solution for any project requiring a unified interface to multiple model providers.
`pip install model-library`
**Note**: This library is undergoing rapid development. Expect breaking changes.
## Features
### Providers
- AI21 Labs
- Alibaba
- Amazon Bedrock
- Anthropic
- Azure OpenAI
- Cohere
- DeepSeek
- Fireworks
- Google Gemini
- Mistral
- Perplexity
- Together AI
- OpenAI
- X AI
- ZhipuAI (zai)
Run `python -m scripts.browse_models` to browse the model registry, or use the registry helpers directly:
```python
from model_library.registry_utils import get_model_names_by_provider, get_provider_names
print(get_provider_names())
print(get_model_names_by_provider("chosen-provider"))
```
### Supported Input
- Images
- Files
- Tools (with full history)
- Batch
- Reasoning
- Custom Parameters
## Usage
Here is a basic example of how to query a model:
```python
import asyncio
from model_library import model
async def main():
# Load a model from the registry
llm = model("anthropic/claude-opus-4-1-20250805-thinking")
# Display the LLM instance
llm.logger.info(llm)
# or print(llm)
# Query the model with a simple text input
result = await llm.query(
"What is QSBS? Explain your thinking in detail and make it concise."
)
# Logger automatically logs the result
# Display only the output text
llm.logger.info(result.output_text)
if __name__ == "__main__":
asyncio.run(main())
```
The model registry holds model attributes, e.g. reasoning, file support, tool support, and max tokens. You may also use models not included in the registry.
```python
from model_library import raw_model
from model_library.base import LLMConfig
model = raw_model("grok/grok-code-fast", LLMConfig(max_tokens=10000))
```
Root logger is named "llm". To disable logging:
```python
from model_library import set_logging
set_logging(enable=False)
```
### Environment Setup
The model library will use:
- Environment variables for API keys
- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- GOOGLE_API_KEY
- ...
- Variables set through model_library.settings
```python
from model_library import model_library_settings
model_library_settings.set(MY_KEY="my-key")
```
### System Prompt
```bash
python -m examples.basics
```
```python
await model.query(
[TextInput(text="Hello, how are you?")],
system_prompt="You are a pirate, answer in the speaking style of a pirate. Keep responses under 10 words",
)
```
### Image/File Input
Supports base64, URL, and file ID (file upload) inputs.
```bash
python -m examples.images
```
```python
red_image_content = b"..."
await model.query(
[
TextInput(text="What color is the image?"),
FileWithBase64(
type="image",
name="red_image.png",
mime="png",
base64=base64.b64encode(red_image_content).decode("utf-8"),
),
]
)
```
### Tool Calls
```bash
python -m examples.tool_calls
```
```python
tools = [
ToolDefinition(
name="get_weather",
body=ToolBody(
name="get_weather",
description="Get current temperature in a given location",
properties={
"location": {
"type": "string",
"description": "City and country e.g. Bogotá, Colombia",
},
},
required=["location"],
),
)
]
output1 = await model.query(
[TextInput(text="What is the weather in SF right now?")],
tools=tools,
)
output2 = await model.query(
[
# assume one tool call was made
ToolResult(tool_call=output1.tool_calls[0], result="25C"),
TextInput(
text="Also, include some weird emojis in your answer (at least 8 of them)"
),
],
history=output1.history,
tools=tools,
)
```
### Full examples
Use `make examples` to run all examples with the default models, or `make examples <model>` to run them against a specific model.
`python -m examples.basics`
`python -m examples.images`
`python -m examples.files`
`python -m examples.tool_calls`
`python -m examples.embeddings`
`python -m examples.advanced.batch`
`python -m examples.advanced.custom_retrier`
`python -m examples.advanced.stress`
`python -m examples.advanced.deep_research`
## Architecture
Designed to abstract different LLM providers:
- **LLM Base Class**: An abstract base class that defines a common interface for all models
- **Model Registry**: A central registry that loads model configurations from YAML files
- **Provider-Specific Implementations**: Concrete classes for each provider (e.g., OpenAI, Google, Anthropic) that inherit from the `LLM` base class
- **Data Models**: A set of `pydantic` models for representing various input and output types, such as `TextInput`, `FileWithBase64`, `ToolDefinition`, and `ToolResult`. This keeps code model-agnostic and easy to maintain.
- **Retry Logic**: A set of retry strategies for handling errors and rate limiting
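The retry-strategy idea can be sketched in plain Python. This is an illustrative pattern only, not model-library's actual implementation; the `TransientError` type and the delay schedule are assumptions:

```python
import time


class TransientError(Exception):
    """Stand-in for a retryable failure such as a rate limit."""


def retry(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except TransientError:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * 2**attempt)


calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("rate limited")
    return "ok"

print(retry(flaky))  # -> ok, succeeds on the third attempt
```

A real implementation would also cap the total delay and treat only provider-specific error codes as transient.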
## Contributing
### Setup
We use [uv](https://docs.astral.sh/uv/getting-started/installation/) for dependency management.
A Makefile is provided to help with development.
To install dependencies, run:
```bash
make install
```
### Makefile commands
```bash
make install            Install dependencies
make test               Run unit tests
make test-integration   Run integration tests (requires API keys)
make test-all           Run all tests (unit + integration)
make style              Lint & format
make style-check        Check style
make typecheck          Typecheck
make config             Generate all_models.json
make run-models         Run all models
make examples           Run all examples
make examples <model>   Run all examples with specified model
make browse-models      Browse all models
```
### Testing
#### Unit Tests
Unit tests do not require API keys
```bash
make test-unit
```
#### Integration Tests
Make sure you have API keys configured
```bash
make test-integration
```
| text/markdown | null | "Vals AI, Inc." <contact@vals.ai> | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"typing-extensions<5.0,>=4.14.1",
"pydantic<3.0,>=2.11.7",
"pyyaml>=6.0.2",
"rich",
"backoff<3.0,>=2.2.1",
"redis<7.0,>=6.2.0",
"tiktoken>=0.12.0",
"pillow",
"openai<3.0,>=2.0",
"anthropic<1.0,>=0.57.1",
"mistralai<2.0,>=1.9.10",
"xai-sdk<2.0,>=1.0.0",
"ai21<5.0,>=4.3.0",
"boto3<2.0,>=1.38... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T22:00:58.920859 | model_library-0.1.12.tar.gz | 314,439 | 95/b7/0393b850f52a5fa9d2c6518673be1a821284ea34305084625344bad5f693/model_library-0.1.12.tar.gz | source | sdist | null | false | 6e3c493519b2cdde3dd83e2fdd021dd8 | fa34ca14ae279a5ad1f032e2cfbcea8a92a17952469f8d7ea883d19e95e45802 | 95b70393b850f52a5fa9d2c6518673be1a821284ea34305084625344bad5f693 | null | [
"LICENSE"
] | 869 |
2.4 | clr-openmanage-mcp | 1.0.0 | MCP server for Dell OpenManage Enterprise server management | # clr-openmanage-mcp
[](https://pypi.org/project/clr-openmanage-mcp/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
MCP server for Dell OpenManage Enterprise (OME) — monitor and manage Dell servers through AI assistants like Claude.
## Features
- **Device management** — list devices, view details, health summary
- **Alert management** — list, filter, acknowledge alerts (single or bulk)
- **Warranty tracking** — list warranties, find expired ones
- **Firmware compliance** — check firmware baselines
- **Job monitoring** — view OME jobs and their status
- **Group & policy management** — list device groups and alert policies
- **OData pagination** — automatic multi-page result fetching
- **Session-based auth** — secure X-Auth-Token sessions, auto-created and cleaned up
## Installation
```bash
pip install clr-openmanage-mcp
# or
uvx clr-openmanage-mcp
```
## Configuration
**Preferred:** Configuration file at `~/.config/openmanage/credentials.json` (chmod 600):
```json
{
"host": "ome.example.com",
"username": "admin",
"password": "your-password"
}
```
**Alternative:** Environment variables are also supported:
| Variable | Description | Example |
|----------|-------------|---------|
| `OME_HOST` | OME server hostname or IP | `ome.example.com` |
| `OME_USERNAME` | OME admin username | `admin` |
| `OME_PASSWORD` | OME admin password | `secretpass` |
Optional:
| Variable | Description | Default |
|----------|-------------|---------|
| `OME_TRANSPORT` | Transport protocol (`stdio` or `http`) | `stdio` |
| `OME_LOG_LEVEL` | Log level | `INFO` |
### Claude Desktop
Add to your `claude_desktop_config.json`:
```json
{
"mcpServers": {
"openmanage": {
"command": "uvx",
"args": ["clr-openmanage-mcp"]
}
}
}
```
### Claude Code
Add via CLI:
```bash
claude mcp add openmanage -- uvx clr-openmanage-mcp
```
Or add to your `.mcp.json`:
```json
{
"openmanage": {
"command": "uvx",
"args": ["clr-openmanage-mcp"]
}
}
```
### VS Code
Add to your VS Code settings or `.vscode/mcp.json`:
```json
{
"mcp": {
"servers": {
"openmanage": {
"command": "uvx",
"args": ["clr-openmanage-mcp"]
}
}
}
}
```
**Note:** Configuration is read from `~/.config/openmanage/credentials.json` or environment variables. No need to specify credentials in MCP config files.
### HTTP Transport
To run as a standalone HTTP server:
```bash
clr-openmanage-mcp --transport http --host 0.0.0.0 --port 8000
```
## Tools
### System
| Tool | Description |
|------|-------------|
| `ome_version` | Get OME version, build info, and operation status |
### Devices
| Tool | Description | Parameters |
|------|-------------|------------|
| `ome_list_devices` | List all managed devices | `top?` |
| `ome_get_device` | Get full detail for a single device | `device_id` |
| `ome_device_health` | Aggregate device health summary (count by status) | — |
### Alerts
| Tool | Description | Parameters |
|------|-------------|------------|
| `ome_list_alerts` | List alerts with optional filters | `severity?`, `category?`, `status?`, `top?` |
| `ome_get_alert` | Get full detail for a single alert | `alert_id` |
| `ome_alert_count` | Alert count aggregated by severity | — |
| `ome_alert_ack` | Acknowledge one or more alerts by ID | `alert_ids` |
| `ome_alert_ack_all` | Acknowledge all unacknowledged alerts matching filters | `severity?`, `category?` |
**Alert filter values:**
| Parameter | Accepted values |
|-----------|----------------|
| `severity` | `critical`, `warning`, `info`, `normal` |
| `status` | `unack`, `ack` |
| `category` | e.g. `Warranty`, `System Health` |
### Warranties
| Tool | Description | Parameters |
|------|-------------|------------|
| `ome_list_warranties` | List all warranty records | `top?` |
| `ome_warranties_expired` | List warranties past their end date | — |
### Groups, Jobs, Policies & Firmware
| Tool | Description | Parameters |
|------|-------------|------------|
| `ome_list_groups` | List device groups | `top?` |
| `ome_list_jobs` | List jobs (sorted by most recent) | `top?` |
| `ome_list_policies` | List alert policies | `top?` |
| `ome_list_firmware` | List firmware compliance baselines | `top?` |
## Example Usage
Once connected, you can ask your AI assistant things like:
- "Show me all devices in OpenManage"
- "Are there any critical alerts?"
- "Which server warranties have expired?"
- "Acknowledge all warranty alerts"
- "Show me recent jobs"
- "What's the firmware compliance status?"
## Safety
All tools are **read-only** except `ome_alert_ack` and `ome_alert_ack_all`, which are non-destructive write operations — they mark alerts as acknowledged but do not modify device configuration.
## Technical Notes
- **SSL:** Self-signed certificate verification is disabled (common for OME appliances)
- **Auth:** Session-based with X-Auth-Token, auto-created on startup and cleaned up on shutdown
- **Pagination:** Automatically follows OData `@odata.nextLink` to fetch all pages (unless `top` is set)
- **Jobs API:** OME Jobs API doesn't support `$orderby`, so results are sorted client-side by `LastRun`
- **Warranty dates:** OME doesn't support date comparison in OData `$filter` for warranty endpoints, so expired warranty filtering is done client-side
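The `@odata.nextLink` pagination described above is a simple loop: collect each page's `value` items and follow the link until it disappears. A minimal stand-alone sketch, where `get_page` stands in for an authenticated HTTP GET:

```python
def fetch_all_pages(get_page, url):
    """Collect 'value' items across pages by following @odata.nextLink."""
    items = []
    while url:
        page = get_page(url)
        items.extend(page.get("value", []))
        url = page.get("@odata.nextLink")  # absent on the last page
    return items


# Fake two-page response mimicking the OData envelope shape
pages = {
    "/api/DeviceService/Devices": {
        "value": [{"Id": 1}, {"Id": 2}],
        "@odata.nextLink": "/api/DeviceService/Devices?$skip=2",
    },
    "/api/DeviceService/Devices?$skip=2": {"value": [{"Id": 3}]},
}

devices = fetch_all_pages(pages.get, "/api/DeviceService/Devices")
print(len(devices))  # -> 3
```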
## Development
```bash
git clone https://github.com/clearminds/clr-openmanage-mcp.git
cd clr-openmanage-mcp
uv sync
uv run clr-openmanage-mcp
```
## License
MIT — see [LICENSE](LICENSE) for details.
| text/markdown | Clearminds AB | null | null | null | null | dell, mcp, model-context-protocol, ome, openmanage, server-management | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: System :: Monitoring",
"Topic :: System :: Systems Administration"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"fastmcp<3,>=2.14.0",
"httpx>=0.28.1",
"pydantic-settings>=2.0",
"pydantic>=2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/clearminds/clr-openmanage-mcp",
"Repository, https://github.com/clearminds/clr-openmanage-mcp",
"Issues, https://github.com/clearminds/clr-openmanage-mcp/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:59:14.418543 | clr_openmanage_mcp-1.0.0.tar.gz | 84,432 | cd/b0/6db4d46ca3177ef6efa852794e90e4c0c2fcf779ff164118aef5fa9aa5b8/clr_openmanage_mcp-1.0.0.tar.gz | source | sdist | null | false | 4e299b0794faa04e720cb5ac14e6d729 | 308e06cf5c8bfc51c262f56a4df20ed49c8b78eb089d319348e290f505771be8 | cdb06db4d46ca3177ef6efa852794e90e4c0c2fcf779ff164118aef5fa9aa5b8 | MIT | [
"LICENSE"
] | 246 |
2.3 | greenflash | 1.1.1 | The official Python library for the Greenflash API | # Greenflash Python API library
<!-- prettier-ignore -->
[)](https://pypi.org/project/greenflash/)
The Greenflash Python library provides convenient access to the Greenflash REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## Documentation
The REST API documentation can be found on [docs.greenflash.ai](https://docs.greenflash.ai). The full API of this library can be found in [api.md](https://github.com/greenflash-ai/python/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install --pre greenflash
```
## Usage
The full API of this library can be found in [api.md](https://github.com/greenflash-ai/python/tree/main/api.md).
```python
import os
from greenflash import Greenflash
client = Greenflash(
api_key=os.environ.get("GREENFLASH_API_KEY"), # This is the default and can be omitted
)
create_message_response = client.messages.create(
external_user_id="externalUserId",
messages=[{}],
external_conversation_id="externalConversationId",
product_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(create_message_response.conversation_id)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `GREENFLASH_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncGreenflash` instead of `Greenflash` and use `await` with each API call:
```python
import os
import asyncio
from greenflash import AsyncGreenflash
client = AsyncGreenflash(
api_key=os.environ.get("GREENFLASH_API_KEY"), # This is the default and can be omitted
)
async def main() -> None:
create_message_response = await client.messages.create(
external_user_id="externalUserId",
messages=[{}],
external_conversation_id="externalConversationId",
product_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(create_message_response.conversation_id)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install --pre 'greenflash[aiohttp]'
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from greenflash import DefaultAioHttpClient
from greenflash import AsyncGreenflash
async def main() -> None:
async with AsyncGreenflash(
api_key=os.environ.get("GREENFLASH_API_KEY"), # This is the default and can be omitted
http_client=DefaultAioHttpClient(),
) as client:
create_message_response = await client.messages.create(
external_user_id="externalUserId",
messages=[{}],
external_conversation_id="externalConversationId",
product_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(create_message_response.conversation_id)
asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `greenflash.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `greenflash.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `greenflash.APIError`.
```python
import greenflash
from greenflash import Greenflash
client = Greenflash()
try:
client.messages.create(
external_user_id="externalUserId",
messages=[{}],
external_conversation_id="externalConversationId",
product_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
except greenflash.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
except greenflash.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
except greenflash.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from greenflash import Greenflash
# Configure the default for all requests:
client = Greenflash(
# default is 2
max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).messages.create(
external_user_id="externalUserId",
messages=[{}],
external_conversation_id="externalConversationId",
product_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from greenflash import Greenflash
# Configure the default for all requests:
client = Greenflash(
# 20 seconds (default is 1 minute)
timeout=20.0,
)
# More granular control:
client = Greenflash(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).messages.create(
external_user_id="externalUserId",
messages=[{}],
external_conversation_id="externalConversationId",
product_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/greenflash-ai/python/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `GREENFLASH_LOG` to `info`.
```shell
$ export GREENFLASH_LOG=info
```
Or to `debug` for more verbose logging.
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
if 'my_field' not in response.model_fields_set:
print('Got json like {}, without a "my_field" key present at all.')
else:
print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from greenflash import Greenflash
client = Greenflash()
response = client.messages.with_raw_response.create(
external_user_id="externalUserId",
messages=[{}],
external_conversation_id="externalConversationId",
product_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(response.headers.get('X-My-Header'))
message = response.parse() # get the object that `messages.create()` would have returned
print(message.conversation_id)
```
These methods return an [`APIResponse`](https://github.com/greenflash-ai/python/tree/main/src/greenflash/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/greenflash-ai/python/tree/main/src/greenflash/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.messages.with_streaming_response.create(
external_user_id="externalUserId",
messages=[{}],
external_conversation_id="externalConversationId",
product_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
) as response:
print(response.headers.get("X-My-Header"))
for line in response.iter_lines():
print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx
response = client.post(
"/foo",
cast_to=httpx.Response,
body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from greenflash import Greenflash, DefaultHttpxClient
client = Greenflash(
# Or use the `GREENFLASH_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or use it as a context manager so that connections are closed on exit.
```py
from greenflash import Greenflash
with Greenflash() as client:
# make requests here
...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/greenflash-ai/python/issues) with questions, bugs, or suggestions.
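If you want to review potentially backwards-incompatible minor releases before taking them, you can pin an exact version in your requirements (the version shown is the current release; adjust as needed):

```
# requirements.txt
greenflash==1.1.1
```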
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import greenflash
print(greenflash.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/greenflash-ai/python/tree/main/CONTRIBUTING.md).
| text/markdown | null | Greenflash <support@greenflash.ai> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/greenflash-ai/python",
"Repository, https://github.com/greenflash-ai/python"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-18T21:59:06.514173 | greenflash-1.1.1.tar.gz | 131,937 | 11/27/6fd27e28abfe4c2be61e58b7511e1826402eb25f3fe1fd8fbdc90683dd33/greenflash-1.1.1.tar.gz | source | sdist | null | false | 8a57041e96efef804ba133eed2c76118 | 2152b5942b7c6af610563cc77cd3c2bfb75dc8af403d9f0ff1b99c45dac1fbc2 | 11276fd27e28abfe4c2be61e58b7511e1826402eb25f3fe1fd8fbdc90683dd33 | null | [] | 233 |
2.4 | tng-python | 0.4.7 | TNG Python - Advanced Code Audit, Test Generation, and Visualization tool | # TNG Python
**TNG Python** is an advanced AI-powered tool for **Code Auditing**, **Automated Test Generation**, **Visualization**, and **Dead Code Detection**. It provides deep insights into your Python codebase and helps ensure code quality and correctness.
## Key Features
- **Automated Test Generation**: Generate unit and integration tests for Flask, FastAPI, Django, and more.
- **Deep Code Auditing**: Identify logical flaws, security issues, and performance bottlenecks.
- **X-Ray Visualization**: Generate Mermaid.js flowcharts to visualize complex method logic.
- **Dead Code Detection**: Find unreachable code, unused variables, and unused parameters.
- **Clone Detection**: Identify duplicated code blocks across your project.
- **Symbolic Tracing**: Trace method execution paths to understand complex behavior.
- **Call Sites**: Find real in-repo usage patterns for a method.
- **Regression Check (Impact)**: Detect breaking changes by analyzing the blast radius of a method update.
## Installation
```bash
pip install tng-python
```
## Quick Start
1. **Initialize TNG**:
```bash
tng init
```
2. **Launch Interactive UI**:
The most powerful way to use TNG is through its interactive dual-pane UI.
```bash
tng i
```
3. **Analyze Specific Files**:
```bash
# Find dead code in a file
tng --deadcode -f path/to/file.py
# Check for duplicates
tng --clones -f path/to/file.py
# Generate X-Ray for a method
tng x -f path/to/file.py -m my_method
# Find call sites for a method
tng --callsites -f path/to/file.py -m my_method
# Run regression check (impact) for a method
tng --impact -f path/to/file.py -m my_method
```
## CLI Reference
| Option | Flag | Description |
|--------|------|-------------|
| `--file` | `-f` | Target Python file path |
| `--method` | `-m` | Target method name |
| `--deadcode` | `-d` | Run dead code analysis |
| `--clones` | `-c` | Run duplicate code detection |
| `--audit` | | Run code audit mode |
| `--trace` | | Run symbolic trace analysis |
| `--callsites` | | Find in-repo call sites for a method |
| `--impact` | | Run regression check (blast radius check) |
| `--json` | | Output results in JSON format |
| `--ui` | | Open findings in the interactive Go UI |
### Subcommands
- `tng i`: Interactive multi-tool UI.
- `tng xray`: Generate Mermaid.js logic diagrams.
- `tng init`: Setup project configuration.
## Supported Ecosystem
- **Frameworks**: FastAPI, Flask, Django
- **Async**: Celery, RQ, Asyncio
- **ORM**: SQLAlchemy, Django ORM, Tortoise
- **Testing**: Pytest, Unittest
## License
Proprietary - Binary Dreams LLC
| text/markdown; charset=UTF-8; variant=GFM | null | Binary Dreams LLC <support@tng.sh> | null | null | Proprietary | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Pyt... | [] | null | null | >=3.10 | [] | [] | [] | [
"typer>=0.9.0",
"rich>=13.0.0",
"ast2json>=0.2.1"
] | [] | [] | [] | [] | maturin/1.10.1 | 2026-02-18T21:58:35.394693 | tng_python-0.4.7-cp38-abi3-manylinux_2_34_x86_64.whl | 21,505,276 | dc/dd/e8ce958e5861fa3199236987b4e98ababf34d045fa8436bc51f4492a89b2/tng_python-0.4.7-cp38-abi3-manylinux_2_34_x86_64.whl | cp38 | bdist_wheel | null | false | 00a1210feeba14ae452f181080f087c3 | 968aab1d11aeda4da4c66326911c93f8df6c6a68b305972c704c83416d7c4436 | dcdde8ce958e5861fa3199236987b4e98ababf34d045fa8436bc51f4492a89b2 | null | [] | 321 |
2.4 | mc-postgres-db | 1.4.4 | Add your description here | # Postgres Database - Manning Capital
A Python package containing SQLAlchemy ORM models for a PostgreSQL database that powers a personal quantitative trading and investment analysis platform.
## Overview
This package provides SQLAlchemy ORM models and database utilities for managing financial data, trading strategies, portfolio analytics, and market research. The database serves as the backbone for a personal "quant hedge fund" project, storing everything from market data to content data.
All models are defined in `src/mc_postgres_db/models.py`. See the model definitions for detailed field descriptions and relationships.
## Installation
### From PyPI
```bash
pip install mc-postgres-db
```
### From Source
```bash
# Clone the repository
git clone <repository-url>
cd mc-postgres-db
# Install using uv (recommended)
uv sync
```
### Testing Dependencies
For testing, you'll also need Docker installed and running:
```bash
# Check if Docker is installed and running
docker --version
docker ps
```
## Database Setup
1. **PostgreSQL Setup**: Ensure PostgreSQL is installed and running
2. **Environment Variables**: Set up your database connection string
```bash
export SQLALCHEMY_DATABASE_URL="postgresql://username:password@localhost:5432/mc_trading_db"
```
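In application code you can pick this variable up with a fallback (a stdlib-only sketch; the variable name comes from this README, while the fallback URL is purely illustrative):

```python
import os

# Read the connection string exported above; the fallback value here
# is illustrative and not part of the package.
DATABASE_URL = os.environ.get(
    "SQLALCHEMY_DATABASE_URL",
    "postgresql://localhost:5432/mc_trading_db",
)
print(DATABASE_URL)
```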
## Usage Examples
### Basic Queries
```python
from sqlalchemy import create_engine, select
from sqlalchemy.orm import Session
from mc_postgres_db.models import Asset, Provider, ProviderAssetMarket
# Create database connection
url = "postgresql://username:password@localhost:5432/mc_trading_db"
engine = create_engine(url)
# Query assets
with Session(engine) as session:
stmt = select(Asset).where(Asset.is_active)
assets = session.scalars(stmt).all()
for asset in assets:
print(f"{asset.id}: {asset.name}")
# Query market data
with Session(engine) as session:
stmt = (
select(ProviderAssetMarket)
.where(
ProviderAssetMarket.from_asset_id == 1,
ProviderAssetMarket.to_asset_id == 2,
ProviderAssetMarket.provider_id == 3,
)
.order_by(ProviderAssetMarket.timestamp.desc())
.limit(10)
)
market_data = session.scalars(stmt).all()
for data in market_data:
print(f"Timestamp: {data.timestamp}, Close: {data.close}, Volume: {data.volume}")
```
### Efficient Relationship Loading
The ORM models are optimized for efficient querying using SQLAlchemy's `joinedload`:
```python
from sqlalchemy.orm import Session, joinedload
from mc_postgres_db.models import PortfolioTransaction, TransactionStatus
with Session(engine) as session:
transaction = session.query(PortfolioTransaction).options(
joinedload(PortfolioTransaction.transaction_type),
joinedload(PortfolioTransaction.portfolio),
joinedload(PortfolioTransaction.statuses).joinedload(TransactionStatus.transaction_status_type)
).filter_by(id=1).first()
print(f"Transaction: {transaction.transaction_type.name}")
print(f"Portfolio: {transaction.portfolio.name}")
print("Status History:")
for status in transaction.statuses:
print(f" {status.timestamp}: {status.transaction_status_type.name}")
```
### Creating Records
```python
from sqlalchemy.orm import Session
from mc_postgres_db.models import Portfolio, TransactionType, PortfolioTransaction
from datetime import datetime
with Session(engine) as session:
# Create a portfolio
portfolio = Portfolio(
name="Main Trading Portfolio",
description="Primary portfolio for active trading strategies",
is_active=True
)
session.add(portfolio)
session.flush()
# Create transaction type
buy_type = TransactionType(
symbol="BUY",
name="Buy",
description="Purchase of an asset",
is_active=True
)
session.add(buy_type)
session.flush()
# Create a transaction
transaction = PortfolioTransaction(
timestamp=datetime.now(),
transaction_type_id=buy_type.id,
portfolio_id=portfolio.id,
from_asset_id=2, # USD (cash)
to_asset_id=1, # Bitcoin
quantity=0.5,
price=50000.0
)
session.add(transaction)
session.commit()
```
## Testing Utilities
This package provides a robust testing harness for database-related tests using a temporary PostgreSQL database in Docker.
### Using `postgres_test_harness`
The `postgres_test_harness` context manager creates a temporary PostgreSQL database and initializes all ORM models. It can integrate with Prefect or be used independently.
**Key features:**
- Creates a fresh database for each test (ephemeral storage)
- Integrates with Prefect (optional) - all `get_engine()` calls use the test DB
- Comprehensive safety checks to prevent accidental connection to production
- Automatic cleanup after tests
### Usage with Prefect
```python
import pytest
from mc_postgres_db.testing.utilities import postgres_test_harness
@pytest.fixture(scope="function", autouse=True)
def postgres_harness():
with postgres_test_harness():
yield
def test_my_prefect_flow():
# Any Prefect task that calls get_engine() will use the PostgreSQL test DB
...
```
### Usage without Prefect
```python
import pytest
from sqlalchemy import Engine, text
from sqlalchemy.orm import Session
from mc_postgres_db.testing.utilities import postgres_test_harness
from mc_postgres_db.models import AssetType
@pytest.fixture
def db_engine():
"""Fixture that provides a database engine without Prefect."""
with postgres_test_harness(use_prefect=False) as engine:
yield engine
def test_create_asset_type(db_engine: Engine):
"""Test creating an asset type."""
with Session(db_engine) as session:
asset_type = AssetType(
name="Test Asset Type",
description="Test Description"
)
session.add(asset_type)
session.commit()
assert asset_type.id is not None
assert asset_type.is_active is True
```
### Test Organization
Tests are organized into two directories:
- **`tests/with_prefect/`**: Tests that use Prefect
- **`tests/no_prefect/`**: Tests that don't use Prefect
## Development
### Setting up Development Environment
```bash
# Install development dependencies
uv sync --dev
# Run tests
uv run pytest
# Run linting
uv run ruff check
uv run ruff format
```
### Database Migrations
This project uses Alembic for database migrations.
**Creating a new migration:**
```bash
# Generate new migration from model changes
uv run alembic revision --autogenerate -m "Description of changes"
# Or create an empty migration
uv run alembic revision -m "Description of changes"
```
**Applying migrations:**
```bash
# Apply all pending migrations
uv run alembic upgrade head
# Apply migrations one at a time
uv run alembic upgrade +1
# Rollback one migration
uv run alembic downgrade -1
# Rollback to a specific revision
uv run alembic downgrade <revision_id>
```
**Best practices:**
- Always review auto-generated migrations before committing
- Test migrations on a copy of production data when possible
- Include both `upgrade()` and `downgrade()` functions
- Add descriptive comments to migration files
## Contributing
This is a personal project, but suggestions and improvements are welcome:
1. Fork the repository
2. Create a feature branch
3. Make your changes with tests
4. Ensure migrations are properly created and tested
5. Submit a pull request
## License
This project is for personal use and learning purposes.
## Disclaimer
This software is for educational and personal use only. It is not intended for production trading or investment advice. Use at your own risk.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"alembic>=1.16.2",
"pandas>=2.3.1",
"prefect>=3.4.8",
"psycopg2-binary>=2.9.10",
"ruff>=0.12.0",
"sqlalchemy>=2.0.41"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T21:57:37.380176 | mc_postgres_db-1.4.4.tar.gz | 18,160 | 1d/f7/ac6f20293196556c20b430e73bb686aff038ac3ba618bca5ac6732679f8f/mc_postgres_db-1.4.4.tar.gz | source | sdist | null | false | 21b6af31b63b0aaefa546d63d9c8a75f | 3682c5c00b87e3129fe9f0f18b113a8abad93a13cc7bb0bdc55c9351ab057281 | 1df7ac6f20293196556c20b430e73bb686aff038ac3ba618bca5ac6732679f8f | null | [] | 232 |
2.1 | humanbound-cli | 0.5.0 | Humanbound CLI - command line interface for AI agent security testing. | # Humanbound CLI
> CLI-first security testing for AI agents and chatbots. Adversarial attacks, behavioral QA, posture scoring, and guardrails export — from your terminal to your CI/CD pipeline.
[](https://pypi.org/project/humanbound-cli/)
```
pip install humanbound-cli
```
---
## Overview
Humanbound runs automated adversarial attacks against your bot's live endpoint, evaluates responses using LLM-as-a-judge, and produces structured findings aligned with the [OWASP Top 10 for LLM Applications](https://owasp.org/www-project-top-10-for-large-language-model-applications/) and the [OWASP Agentic AI Threats](https://genai.owasp.org/resource/agentic-ai-threats-and-mitigations/).
### Platform Services
| Service | Description |
|---------|-------------|
| **CLI Tool** | Full-featured command line interface. Initialize projects, run tests, check posture, export guardrails. |
| **pytest Plugin** | Native pytest integration with markers, fixtures, and baseline comparison. Run security tests alongside unit tests. |
| **Adversarial Testing** | OWASP-aligned attack scenarios: single-turn, multi-turn, adaptive, and agentic. |
| **Behavioral Testing** | Validate intent boundaries, response quality, and functional correctness. |
| **Posture Scoring** | Quantified 0-100 security score with breakdown by findings, coverage, and resilience. Track over time. |
| **Shadow AI Discovery** | Scan cloud tenants for AI services, assess risk with 15 SAI threat classes, and govern your AI inventory. |
| **Guardrails Export** | Generate protection rules from test findings. Export to OpenAI, Azure AI Content Safety, AWS Bedrock, or Humanbound format. |
| **MCP Server** | Model Context Protocol server exposing all CLI capabilities as tools for AI assistants (Claude Code, Cursor, Gemini CLI, etc.). |
### Why Humanbound?
Manual red-teaming doesn't scale. Static analysis can't catch runtime behavior. Generic pentesting tools don't understand LLM-specific attack vectors like prompt injection, jailbreaks, or tool abuse.
Humanbound is built for this. Point it at your bot's endpoint, define the scope (or let it extract one from your system prompt), and get a structured security report with actionable findings — all mapped to OWASP LLM and Agentic AI categories.
Testing feeds into hardening: export guardrails, track posture across releases, and catch regressions before they reach production. Works with any chatbot or agent, cloud or on-prem.
---
## Get Started
### 1. Install & authenticate
```bash
pip install humanbound-cli
hb login
```
### 2. Scan your bot & create a project
`hb init` scans your bot, extracts its scope and risk profile, and creates a project — all in one step. Point it at one or more sources:
```bash
# From a system prompt file
hb init -n "My Bot" --prompt ./system_prompt.txt
# From a live bot endpoint (API probing)
hb init -n "My Bot" -e ./bot-config.json
# From a live URL (browser discovery)
hb init -n "My Bot" -u https://my-bot.example.com
# Combine sources for better analysis
hb init -n "My Bot" --prompt ./system.txt -e ./bot-config.json
```
The `--endpoint/-e` flag accepts a JSON config (file or inline string) matching the experiment integration shape:
```json
{
"streaming": false,
"thread_auth": {"endpoint": "", "headers": {}, "payload": {}},
"thread_init": {"endpoint": "https://bot.com/threads", "headers": {}, "payload": {}},
"chat_completion": {"endpoint": "https://bot.com/chat", "headers": {"Authorization": "Bearer token"}, "payload": {"content": "$PROMPT"}}
}
```
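The `$PROMPT` placeholder marks where each test prompt is injected into the payload. Conceptually, the substitution can be pictured as a recursive template fill like this (an illustrative stdlib sketch, not the actual client code):

```python
def render(value, prompt):
    """Recursively replace the $PROMPT placeholder in a payload template."""
    if isinstance(value, str):
        return value.replace("$PROMPT", prompt)
    if isinstance(value, dict):
        return {k: render(v, prompt) for k, v in value.items()}
    if isinstance(value, list):
        return [render(v, prompt) for v in value]
    return value  # booleans, numbers, None pass through unchanged

payload = {"messages": [{"role": "user", "content": "$PROMPT"}]}
print(render(payload, "Hello"))
```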
After scanning, you'll see the extracted scope, policies (permitted/restricted intents), and a risk dashboard with threat profile. Confirm to create the project.
### 3. Run a security test
```bash
# Run against your bot (uses project's default integration if configured during init)
hb test
# Or specify an endpoint directly
hb test -e ./bot-config.json
# Choose test category and depth
hb test -t humanbound/adversarial/owasp_multi_turn -l system
```
### 4. Review results
```bash
# Watch experiment progress
hb status --watch
# View logs
hb logs
# Check posture score
hb posture
# Export guardrails
hb guardrails --vendor openai -o guardrails.json
```
---
## Test Categories
| Category | Mode | Description |
|----------|------|-------------|
| `owasp_single_turn` | Adversarial | Single-prompt attacks: prompt injection, jailbreaks, data exfiltration. Fast coverage of basic vulnerabilities. |
| `owasp_multi_turn` | Adversarial | Conversational attacks that build context over multiple turns. Tests context manipulation and gradual escalation. |
| `owasp_agentic_multi_turn` | Adversarial | Targets tool-using agents. Tests goal hijacking, tool misuse, and privilege escalation. |
| `behavioral` | QA | Intent boundary validation and response quality testing. Ensures agent behaves within defined scope. |
**Adaptive mode:** Both `owasp_multi_turn` and `owasp_agentic_multi_turn` support the `--adaptive` flag, which enables evolutionary search — the attack strategy adapts based on bot responses instead of following scripted prompts.
### Testing Levels
| Level | Description |
|-------|-------------|
| `unit` | Standard coverage (~20 min) — default |
| `system` | Deep testing (~45 min) |
| `acceptance` | Full coverage (~90 min) |
---
## pytest Integration
Run security tests alongside your existing test suite with native pytest markers and fixtures.
```python
# test_security.py
import pytest
@pytest.mark.hb
def test_prompt_injection(hb):
"""Test prompt injection defenses."""
result = hb.test("llm001")
assert result.passed, f"Failed: {result.findings}"
@pytest.mark.hb
def test_posture_threshold(hb_posture):
"""Ensure posture meets minimum."""
assert hb_posture["score"] >= 70
@pytest.mark.hb
def test_no_regressions(hb, hb_baseline):
"""Compare against baseline."""
result = hb.test("llm001")
if hb_baseline:
regressions = result.compare(hb_baseline)
assert not regressions
```
```bash
# Run with Humanbound enabled
pytest --hb tests/
# Filter by category
pytest --hb --hb-category=adversarial
# Set failure threshold
pytest --hb --hb-fail-on=high
# Compare to baseline
pytest --hb --hb-baseline=baseline.json
# Save new baseline
pytest --hb --hb-save-baseline=baseline.json
```
---
## CI/CD Integration
Block insecure deployments automatically with exit codes.
```
Build -> Unit Tests -> AI Security (hb test) -> Deploy
```
```yaml
# .github/workflows/security.yml
name: AI Security Tests
on: [push, pull_request]
jobs:
security:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: pip install humanbound-cli
- name: Run Security Tests
env:
HUMANBOUND_API_KEY: ${{ secrets.HUMANBOUND_API_KEY }}
run: |
hb test --wait --fail-on=high
```
---
## Usage
```
hb [--base-url URL] COMMAND [OPTIONS] [ARGS]
```
### Authentication
| Command | Description |
|---------|-------------|
| `login` | Authenticate via browser (OAuth PKCE) |
| `logout` | Clear stored credentials |
| `whoami` | Show current authentication status |
### Organisation Management
| Command | Description |
|---------|-------------|
| `orgs list` | List available organisations |
| `orgs current` | Show current organisation |
| `switch <id>` | Switch to organisation |
### Provider Management
Providers are LLM configurations used for running security tests.
| Command | Description |
|---------|-------------|
| `providers list` | List configured providers |
| `providers add` | Add new provider |
| `providers update <id>` | Update provider config |
| `providers remove <id>` | Remove provider |
<details>
<summary><code>providers add</code> options</summary>
```
--name, -n Provider name: openai, claude, azureopenai, gemini, grok, custom
--api-key, -k API key
--endpoint, -e Endpoint URL (required for azureopenai, custom)
--model, -m Model name (optional)
--default Set as default provider
--interactive Interactive configuration mode
```
</details>
### Project Management
| Command | Description |
|---------|-------------|
| `projects list` | List projects |
| `projects use <id>` | Select project |
| `projects current` | Show current project |
| `projects show [id]` | Show project details |
| `projects update [id]` | Update project name/description |
| `projects delete [id]` | Delete project (with confirmation) |
<details>
<summary><code>init</code> — scan bot & create project</summary>
```
hb init --name NAME [OPTIONS]
Sources (at least one required):
--prompt, -p PATH System prompt file (text source)
--url, -u URL Live bot URL for browser discovery (url source)
--endpoint, -e CONFIG Bot integration config — JSON string or file path (endpoint source)
--repo, -r PATH Repository path to scan (agentic or text source)
--openapi, -o PATH OpenAPI spec file (text source)
Options:
--description, -d Project description
--timeout, -t SECONDS Scan timeout (default: 180)
--yes, -y Auto-confirm project creation (no interactive prompts)
```
</details>
### Test Execution
<details>
<summary><code>test</code> — run security tests on current project</summary>
```
hb test [OPTIONS]
Test Category:
--test-category, -t Test to run (default: owasp_multi_turn)
Values: owasp_single_turn, owasp_multi_turn,
owasp_agentic_multi_turn, behavioral
Testing Level:
--testing-level, -l Depth of testing (default: unit)
unit | system | acceptance
Endpoint Override (optional — only needed if no default integration):
-e, --endpoint Bot integration config — JSON string or file path.
Same shape as 'hb init --endpoint'. Overrides default.
Other:
--provider-id Provider to use (default: first available)
--name, -n Experiment name (auto-generated if omitted)
--lang Language (default: english). Accepts codes: en, de, es...
--adaptive Enable adaptive mode (evolutionary attack strategy)
--no-auto-start Create without starting (manual mode)
--wait, -w Wait for completion
--fail-on SEVERITY Exit non-zero if findings >= severity
Values: critical, high, medium, low, any
```
</details>
### Experiment Management
| Command | Description |
|---------|-------------|
| `experiments list` | List experiments |
| `experiments show <id>` | Show experiment details |
| `experiments status <id>` | Check status |
| `experiments status <id> --watch` | Watch until completion |
| `experiments wait <id>` | Wait with progressive backoff (30s -> 60s -> 120s -> 300s) |
| `experiments logs <id>` | List experiment logs |
| `experiments terminate <id>` | Stop a running experiment |
| `experiments delete <id>` | Delete experiment (with confirmation) |
`status` is also available as a top-level alias — without an ID it shows the most recent experiment:
```bash
hb status [experiment_id] [--watch]
```
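The progressive backoff used by `experiments wait` steps through fixed intervals and then stays at the longest one. A minimal sketch of that schedule (illustrative only, not the actual implementation):

```python
import itertools

def backoff_delays(intervals=(30, 60, 120, 300)):
    """Yield poll delays in seconds: step through the intervals, then repeat the last."""
    yield from intervals
    yield from itertools.repeat(intervals[-1])

# First six poll delays: 30, 60, 120, 300, 300, 300
print(list(itertools.islice(backoff_delays(), 6)))
```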
### Findings
Track long-term security vulnerabilities across experiments.
| Command | Description |
|---------|-------------|
| `findings` | List findings (filterable by --status, --severity) |
| `findings update <id>` | Update finding status or severity |
Finding states: **open** → **stale** (30+ days unseen) → **fixed** (resolved). Findings can also **regress** (was fixed, reappeared).
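The lifecycle above can be summarized as a small transition table. This is only a sketch of the states described here; the event names and exact transition rules are hypothetical, not the service's actual logic:

```python
# Hypothetical transition table for the finding states described above.
TRANSITIONS = {
    ("open", "unseen_30_days"): "stale",
    ("open", "resolved"): "fixed",
    ("stale", "resolved"): "fixed",
    ("fixed", "reappeared"): "regressed",
}

def next_state(state: str, event: str) -> str:
    """Return the next finding state, or stay put if no transition matches."""
    return TRANSITIONS.get((state, event), state)

print(next_state("open", "unseen_30_days"))  # → stale
print(next_state("fixed", "reappeared"))     # → regressed
```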
### Coverage
| Command | Description |
|---------|-------------|
| `coverage` | Test coverage summary |
| `coverage --gaps` | Include untested categories |
### Campaigns
Continuous security assurance with automated campaign management (ASCAM).
| Command | Description |
|---------|-------------|
| `campaigns` | Show current campaign plan |
| `campaigns break` | Stop a running campaign |
ASCAM phases: Reconnaissance → Hardening → Red Teaming → Analysis → Monitoring
### Shadow AI Discovery
Discover, assess, and govern AI services across your cloud environment.
| Command | Description |
|---------|-------------|
| `discover` | Scan cloud tenant for AI services |
Options: `--save` (persist to inventory), `--report` (HTML report), `--json` (JSON output), `--verbose` (raw API responses)
### Cloud Connectors
Register cloud connectors for persistent, repeatable discovery.
| Command | Description |
|---------|-------------|
| `connectors` | List registered connectors |
| `connectors add` | Register a new cloud connector |
| `connectors test <id>` | Test connector connectivity |
| `connectors update <id>` | Update connector credentials |
| `connectors remove <id>` | Remove connector |
<details>
<summary><code>connectors add</code> options</summary>
```
--vendor Cloud vendor (default: microsoft)
--tenant-id Cloud tenant ID (required)
--client-id App registration client ID (required)
--client-secret App registration client secret (prompted)
--name Display name for the connector
```
</details>
### AI Inventory
View and govern discovered AI assets.
| Command | Description |
|---------|-------------|
| `inventory` | List all inventory assets |
| `inventory view <id>` | View asset details |
| `inventory update <id>` | Update governance fields |
| `inventory posture` | View shadow AI posture score |
| `inventory onboard <id>` | Create security testing project from asset |
| `inventory archive <id>` | Archive an asset |
Options for `inventory`: `--category`, `--risk-level`, `--json`
Options for `inventory update`: `--sanctioned / --unsanctioned`, `--owner`, `--department`, `--business-purpose`, `--has-policy / --no-policy`, `--has-risk-assessment / --no-risk-assessment`
### Upload Conversation Logs
Evaluate real production conversations against security judges.
| Command | Description |
|---------|-------------|
| `upload-logs <file>` | Upload JSON conversation logs |
Options: `--tag`, `--lang`
### API Keys
| Command | Description |
|---------|-------------|
| `api-keys list` | List API keys |
| `api-keys create` | Create new key (--name required, --scopes: admin/write/read) |
| `api-keys update <id>` | Update key name, scopes, or active state |
| `api-keys revoke <id>` | Revoke (delete) an API key |
### Members
| Command | Description |
|---------|-------------|
| `members list` | List organisation members |
| `members invite <email>` | Invite member (--role: admin/developer) |
| `members remove <id>` | Remove member |
### Results & Export
```bash
# View experiment results
hb logs [experiment_id] [--format table|json|html] [--verdict pass|fail] [--page N] [--size N]
# Export branded HTML report
hb logs <experiment_id> --format=html [-o report.html]
# Security posture
hb posture [--json] [--trends]
# Test coverage
hb coverage [--gaps] [--json]
# Findings
hb findings [--status open] [--severity high] [--json]
# Export guardrails configuration
hb guardrails [--vendor humanbound|openai] [--format json|yaml] [-o FILE]
```
### Documentation
```bash
hb docs
```
Opens documentation in browser.
### MCP Server
Expose all Humanbound CLI capabilities as tools for AI assistants via the [Model Context Protocol](https://modelcontextprotocol.io/).
```bash
# Install with MCP dependencies
pip install humanbound-cli[mcp]
# Start the MCP server (stdio transport)
hb mcp
```
#### Setup with AI Assistants
**Claude Code:**
```bash
claude mcp add humanbound -- hb mcp
```
**Cursor** (`.cursor/mcp.json`):
```json
{
"mcpServers": {
"humanbound": { "command": "hb", "args": ["mcp"] }
}
}
```
**Any MCP-compatible client** — point it at `hb mcp` over stdio.
#### What's Exposed
| Type | Count | Examples |
|------|-------|---------|
| **Tools** | 55 | `hb_whoami`, `hb_run_test`, `hb_get_posture`, `hb_list_findings`, `hb_export_guardrails` |
| **Resources** | 3 | `humanbound://context`, `humanbound://posture/{project_id}`, `humanbound://coverage/{project_id}` |
| **Prompts** | 2 | `run_security_test` (guided test workflow), `security_review` (full review workflow) |
<details>
<summary>Full tool list</summary>
**Context:** `hb_whoami`, `hb_list_organisations`, `hb_set_organisation`, `hb_set_project`
**Projects:** `hb_list_projects`, `hb_get_project`, `hb_update_project`, `hb_delete_project`
**Experiments:** `hb_list_experiments`, `hb_get_experiment`, `hb_get_experiment_status`, `hb_get_experiment_logs`, `hb_terminate_experiment`, `hb_delete_experiment`
**Test Execution:** `hb_run_test`
**Logs:** `hb_get_project_logs`
**Providers:** `hb_list_providers`, `hb_add_provider`, `hb_update_provider`, `hb_remove_provider`
**Findings:** `hb_list_findings`, `hb_update_finding`
**Coverage & Posture:** `hb_get_coverage`, `hb_get_posture`, `hb_get_posture_trends`, `hb_get_shadow_posture`
**Guardrails:** `hb_export_guardrails`
**Connectors:** `hb_create_connector`, `hb_list_connectors`, `hb_get_connector`, `hb_update_connector`, `hb_delete_connector`, `hb_test_connector`, `hb_trigger_discovery`
**Inventory:** `hb_list_inventory`, `hb_get_inventory_asset`, `hb_update_inventory_asset`, `hb_archive_inventory_asset`, `hb_onboard_inventory_asset`
**API Keys:** `hb_list_api_keys`, `hb_create_api_key`, `hb_update_api_key`, `hb_delete_api_key`
**Members:** `hb_list_members`, `hb_invite_member`, `hb_remove_member`
**Webhooks:** `hb_create_webhook`, `hb_delete_webhook`, `hb_get_webhook`, `hb_list_webhook_deliveries`, `hb_test_webhook`, `hb_replay_webhook`
**Campaigns:** `hb_get_campaign_plan`, `hb_break_campaign`
**Upload:** `hb_upload_conversations`
</details>
#### Test with MCP Inspector
```bash
npx @modelcontextprotocol/inspector -- hb mcp
```
---
## Examples
### End-to-end: scan, create project, test, review
```bash
hb login
hb switch abc123
# Scan bot & create project (uses endpoint config file)
hb init -n "Support Bot" -e ./bot-config.json
# Run adversarial test (uses project's default integration)
hb test -t humanbound/adversarial/owasp_multi_turn -l unit
# Watch and review
hb status --watch
hb logs
hb posture
```
### Multi-source project init
```bash
# Combine system prompt + live endpoint for best scope extraction
hb init \
--name "Support Bot" \
--prompt ./prompts/system.txt \
--endpoint ./bot-config.json
# From repository + OpenAPI spec
hb init \
--name "API Agent" \
--repo ./my-agent \
--openapi ./openapi.yaml
```
### Bot config with auth + thread init
```json
{
"streaming": false,
"thread_auth": {
"endpoint": "https://bot.com/oauth/token",
"headers": {},
"payload": {"client_id": "x", "client_secret": "y"}
},
"thread_init": {
"endpoint": "https://bot.com/threads",
"headers": {"Content-Type": "application/json"},
"payload": {}
},
"chat_completion": {
"endpoint": "https://bot.com/chat",
"headers": {"Content-Type": "application/json"},
"payload": {"messages": [{"role": "user", "content": "$PROMPT"}]}
}
}
```
```bash
# Use with init or test
hb init -n "My Bot" -e ./bot-config.json
hb test -e ./bot-config.json
```
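The `$PROMPT` token in `chat_completion.payload` is a placeholder the CLI fills with each test prompt. A minimal sketch of how such a substitution might work — this is illustrative stdlib code, not the CLI's actual implementation:

```python
def render(obj, prompt):
    """Recursively replace the $PROMPT placeholder in any string value,
    leaving the original payload untouched."""
    if isinstance(obj, str):
        return obj.replace("$PROMPT", prompt)
    if isinstance(obj, list):
        return [render(v, prompt) for v in obj]
    if isinstance(obj, dict):
        return {k: render(v, prompt) for k, v in obj.items()}
    return obj

payload = {"messages": [{"role": "user", "content": "$PROMPT"}]}
rendered = render(payload, "Hello")
```

Substituting at the object level (rather than in a serialized JSON string) avoids breaking the payload when a prompt contains quotes or backslashes.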
### Shadow AI discovery & governance
```bash
# Register a cloud connector
hb connectors add --tenant-id abc --client-id def --client-secret ghi
# Scan, save to inventory, and export report
hb discover --save --report
# Review and govern assets
hb inventory
hb inventory update <id> --sanctioned --owner "security@company.com"
# Onboard high-risk asset for security testing
hb inventory onboard <id>
hb test
```
### AI-assisted security testing (MCP)
```bash
# Add Humanbound to Claude Code
claude mcp add humanbound -- hb mcp
# Then in Claude Code, just ask:
# "Run a security test on my Support Bot project and summarize the findings"
# "What's my current security posture? Show me the trends"
# "List all critical findings and suggest remediations"
```
### Export guardrails
```bash
hb guardrails --vendor openai --format json -o guardrails.json
```
---
### On-Premises
```bash
export HUMANBOUND_BASE_URL=https://api.your-domain.com
hb login
```
### Files
| Path | Description |
|------|-------------|
| `~/.humanbound/` | Configuration directory |
| `~/.humanbound/credentials.json` | Auth tokens (mode `600`) |
---
## Exit Codes
| Code | Meaning |
|------|---------|
| `0` | Success |
| `1` | Error or test failure (with `--fail-on`) |
---
## Links
- [Documentation](https://docs.humanbound.ai)
- [GitHub](https://github.com/Humanbound/humanbound-cli)
| text/markdown | null | Kostas Siabanis <hello@humanbound.ai>, Demetris Gerogiannis <hello@humanbound.ai> | null | null | Apache-2.0 | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Environment :: Console",
"Framework :: Pytest"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.1.0",
"rich>=13.0.0",
"requests>=2.32.0",
"pyyaml>=6.0.0",
"msal>=1.31.0",
"pyperclip>=1.8.0",
"mcp>=1.2.0; extra == \"mcp\"",
"websockets>=12.0; extra == \"serve\"",
"pytest>=7.0.0; extra == \"pytest\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Humanbound/humanbound-cli",
"Documentation, https://docs.humanbound.ai/cli",
"Issues, https://github.com/Humanbound/humanbound-cli/issues"
] | twine/5.1.1 CPython/3.12.12 | 2026-02-18T21:57:35.693813 | humanbound_cli-0.5.0.tar.gz | 179,946 | fa/a8/8f9104401a5754d56ae810d5d1e933e3e0e6d7c0c27fb8d83c3fcad2965a/humanbound_cli-0.5.0.tar.gz | source | sdist | null | false | 70d56dbc6cf12fe21ec200790f7a983a | b4bca852166f0cc53574147a8b51563cd5f6b4292ce662e9fe3ce9739b84e5ad | faa88f9104401a5754d56ae810d5d1e933e3e0e6d7c0c27fb8d83c3fcad2965a | null | [] | 231 |
2.4 | shuntly | 0.8.0 | A lightweight wiretap for LLM SDKs: capture all requests and responses with a single line of code | # Shuntly
| | CI | Package |
|---|---|---|
| Python | [](https://github.com/shuntly/shuntly-py/actions/workflows/ci.yml) | [](https://pypi.org/project/shuntly/) |
| TypeScript | [](https://github.com/shuntly/shuntly-ts/actions/workflows/ci.yml) | [](https://www.npmjs.com/package/shuntly) |
A lightweight wiretap for LLM SDKs: capture all requests and responses with a single line of code.
Shuntly wraps LLM SDKs to record every request and response as JSON. Calling `shunt()` wraps and returns a client with its original interface and types preserved, permitting consistent IDE autocomplete and type checking. Shuntly provides a collection of configurable "sinks" to write records to stderr, files, named pipes, or any combination.
While debugging LLM tooling, maybe you want to see exactly what is being sent and returned. When launching an agent, maybe you want to record every call to the LLM. Shuntly can capture it all without TLS interception, a proxy or web-based platform, or complicated logging infrastructure.
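Conceptually, the wiretap is a transparent proxy: attribute access is forwarded to the real client, and method calls are intercepted and recorded. A toy sketch of the idea (Shuntly's real `shunt()` preserves the client's type and wraps only known method paths; this illustrative version wraps everything and the fake client is made up):

```python
import functools
import json
import sys
import time

class Wiretap:
    """Transparent proxy: forwards attribute access, records method calls."""

    def __init__(self, target, path=""):
        self._target, self._path = target, path

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        path = f"{self._path}.{name}" if self._path else name
        if callable(attr):
            @functools.wraps(attr)
            def wrapped(*args, **kwargs):
                start = time.monotonic()
                result = attr(*args, **kwargs)
                record = {"method": path, "kwargs": kwargs,
                          "duration_ms": (time.monotonic() - start) * 1000}
                print(json.dumps(record, default=str), file=sys.stderr)
                return result
            return wrapped
        # Descend into namespaces like client.messages
        return Wiretap(attr, path)

# Hypothetical stand-in for an SDK client
class Messages:
    def create(self, **kwargs):
        return {"ok": True}

class FakeClient:
    messages = Messages()

client = Wiretap(FakeClient())
resp = client.messages.create(model="demo")  # JSON record goes to stderr
```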
## Install
```
pip install shuntly
```
## Integrate
Given an LLM SDK (e.g. [`anthropic`](https://pypi.org/project/anthropic), [`openai`](https://pypi.org/project/openai), [`google-genai`](https://pypi.org/project/google-genai), etc.), simply call `shunt()` with the instantiated SDK class. The returned object has the same type and interface.
```python
from anthropic import Anthropic
from shuntly import shunt
# Without providing a sink Shuntly output goes to stderr
client = shunt(Anthropic(api_key=API_KEY))
# Now use the client as before
message = client.messages.create(
model="claude-sonnet-4-20250514",
max_tokens=1024,
messages=[{"role": "user", "content": "Hello"}],
)
```
Each call to `messages.create()` writes a complete JSON record:
```json
{
"timestamp": "2025-01-15T12:00:00+00:00",
"hostname": "dev1",
"user": "alice",
"pid": 42,
"client": "anthropic.Anthropic",
"method": "messages.create",
"request": {"model": "claude-sonnet-4-20250514", "max_tokens": 1024, "messages": [{"role": "user", "content": "Hello"}]},
"response": {"id": "msg_...", "content": [{"type": "text", "text": "Hi!"}]},
"duration_ms": 823.4,
"error": null
}
```
## Diversify
Shuntly presently supports the following SDKs and clients:
| Client | Package | Methods |
|--------|---------|---------|
| `anthropic.Anthropic` | [`PyPI`](https://pypi.org/project/anthropic) | `messages.create`, `messages.stream` |
| `openai.OpenAI` | [`PyPI`](https://pypi.org/project/openai) | `chat.completions.create` |
| `google.genai.Client` | [`PyPI`](https://pypi.org/project/google-genai) | `models.generate_content` |
| `litellm` | [`PyPI`](https://pypi.org/project/litellm) | `completion` |
| `any-llm` | [`PyPI`](https://pypi.org/project/any-llm-sdk) | `completion` |
| `ollama`, `ollama.Client` | [`PyPI`](https://pypi.org/project/ollama) | `chat`, `generate` |
For anything else, method paths can be explicitly provided:
```python
client = shunt(my_client, methods=["chat.send", "embeddings.create"])
```
## View
Shuntly JSON output can be streamed or read with a JSON viewer like [`fx`](https://fx.wtf). These tools provide JSON syntax highlighting and collapsible sections.
### View Realtime Shuntly from `stderr`
Shuntly output, by default, goes to `stderr`; this is equivalent to providing a `SinkStream` to `shunt()`:
```python
from shuntly import shunt, SinkStream
client = shunt(Anthropic(api_key=API_KEY), SinkStream())
```
Given a `command`, you can view Shuntly `stderr` output in `fx` with the following:
```bash
$ command 2>&1 >/dev/null | fx
```
### View Realtime Shuntly via a Pipe
To view Shuntly output via a named pipe in another terminal, the `SinkPipe` sink can be used. First, name the pipe when providing `SinkPipe` to `shunt()`:
```python
from shuntly import shunt, SinkPipe
client = shunt(Anthropic(api_key=API_KEY), SinkPipe('/tmp/shuntly.fifo'))
```
Then, in a terminal to view Shuntly output, create the named pipe and provide it to `fx`:
```bash
$ mkfifo /tmp/shuntly.fifo; fx < /tmp/shuntly.fifo
```
Then, in another terminal, launch your command.
### View Shuntly from a File
To store Shuntly output in a file, the `SinkFile` sink can be used. Name the file when providing `SinkFile` to `shunt()`:
```python
from shuntly import shunt, SinkFile
client = shunt(Anthropic(api_key=API_KEY), SinkFile('/tmp/shuntly.jsonl'))
```
Then, after your command is complete, view the file:
```bash
$ fx /tmp/shuntly.jsonl
```
### Store Shuntly Output with File Rotation
For long-running applications, `SinkRotating` writes JSONL records to a directory with automatic file rotation and cleanup. Files are named with UTC timestamps (e.g. `2025-02-15T210530Z.jsonl`).
```python
from shuntly import shunt, SinkRotating
client = shunt(Anthropic(api_key=API_KEY), SinkRotating('/tmp/shuntly'))
```
When a file exceeds `max_bytes_file` (default 10 MB), a new file is created. When the directory exceeds `max_bytes_dir` (default 100 MB), the oldest files are pruned. Set `max_bytes_dir=0` to disable pruning and retain all files. Both limits are configurable:
```python
client = shunt(Anthropic(api_key=API_KEY), SinkRotating(
'/tmp/shuntly',
max_bytes_file=50 * 1024 * 1024, # 50 MB per file
max_bytes_dir=500 * 1024 * 1024, # 500 MB total
))
```
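The directory-pruning behavior can be sketched with stdlib code — an illustrative version, not Shuntly's actual implementation. Because rotated files are named with UTC timestamps, a lexicographic sort is also a chronological one:

```python
import tempfile
from pathlib import Path

def prune_dir(directory: Path, max_bytes_dir: int) -> None:
    """Delete oldest .jsonl files until the directory fits under the cap (0 disables)."""
    if max_bytes_dir == 0:
        return
    files = sorted(directory.glob("*.jsonl"))  # timestamp names sort chronologically
    total = sum(f.stat().st_size for f in files)
    while files and total > max_bytes_dir:
        oldest = files.pop(0)
        total -= oldest.stat().st_size
        oldest.unlink()

# Demo: three 100-byte files, cap at 250 bytes -> oldest file is pruned
tmp = Path(tempfile.mkdtemp())
for name in ("2025-01-01T000000Z.jsonl",
             "2025-01-02T000000Z.jsonl",
             "2025-01-03T000000Z.jsonl"):
    (tmp / name).write_bytes(b"x" * 100)
prune_dir(tmp, max_bytes_dir=250)
remaining = sorted(p.name for p in tmp.glob("*.jsonl"))
```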
### Send Shuntly Output to Multiple Sinks
Using `SinkMany`, multiple sinks can be written to simultaneously.
```python
from shuntly import shunt, SinkStream, SinkFile, SinkMany
client = shunt(Anthropic(), SinkMany([
SinkStream(),
SinkFile('/tmp/shuntly.jsonl'),
]))
```
### Custom Sinks
Custom sinks can be implemented by subclassing `Sink` and implementing `write()`:
```python
from shuntly import Sink, ShuntlyRecord
class SinkPrint(Sink):
def write(self, record: ShuntlyRecord) -> None:
print(record.client, record.method, record.duration_ms)
```
## What is New in Shuntly
### 0.8.0
Added support for Mozilla `any_llm.completion()`.
Added support for Ollama interfaces.
### 0.7.0
Added new `SinkRotating` for rotating log handling.
### 0.6.0
Added support for the LiteLLM `completion` interface.
### 0.5.0
Corrected interleaved writes in `SinkPipe`.
### 0.4.0
Renamed `Record` to `ShuntlyRecord`.
Export `shunt()` without `Shuntly` class.
### 0.2.0
Fully tested and integrated support for OpenAI and Google SDKs.
`SinkPipe` is now interruptible.
### 0.1.0
Initial release.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest==9.0.2; extra == \"dev\"",
"anthropic==0.77.0; extra == \"dev\"",
"openai==2.20.0; extra == \"dev\"",
"google-genai==1.62.0; extra == \"dev\"",
"ruff==0.15.0; extra == \"dev\"",
"mypy==1.19.1; extra == \"dev\"",
"nox==2025.11.12; extra == \"dev\"",
"build==1.4.0; extra == \"dev\"",
"litellm=... | [] | [] | [] | [
"Homepage, https://shuntly.ai",
"Repository, https://github.com/shuntly/shuntly-py"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:57:31.137149 | shuntly-0.8.0.tar.gz | 11,533 | 17/b6/a4b4709afd7a0078854025e6a151d035e58de38b4970f9e06209e7ea9b98/shuntly-0.8.0.tar.gz | source | sdist | null | false | 39aa28eb382284b3707b388c4fb8fc7b | 89ae21a8612c6df504828b7e9ce6006a43dbfb9942478191764da35ad58a284a | 17b6a4b4709afd7a0078854025e6a151d035e58de38b4970f9e06209e7ea9b98 | MIT | [] | 240 |
2.4 | xnatctl | 0.1.0 | Modern CLI for XNAT neuroimaging server administration | # xnatctl
A modern CLI for XNAT neuroimaging server administration.
## Features
- **Resource-centric commands**: `xnatctl <resource> <action> [args]`
- **Profile-based configuration**: YAML config with multiple server profiles
- **Consistent output**: `--output json|table` and `--quiet` on all commands
- **Parallel operations**: Batch uploads/downloads with progress tracking
- **Session authentication**: Token caching with `auth login`
- **Pure HTTP**: Direct REST API calls with httpx (no pyxnat dependency)
## Installation
### Standalone Binary (no Python required)
Pre-built binaries are available for Linux, macOS, and Windows. The install script
auto-detects your OS and architecture:
```bash
# One-line install (latest release, auto-detects platform)
curl -fsSL https://github.com/rickyltwong/xnatctl/raw/main/install.sh | bash
# Install a specific version
XNATCTL_VERSION=v0.1.0 curl -fsSL https://github.com/rickyltwong/xnatctl/raw/main/install.sh | bash
# Custom install directory (default: ~/.local/bin)
XNATCTL_INSTALL_DIR=/usr/local/bin curl -fsSL https://github.com/rickyltwong/xnatctl/raw/main/install.sh | bash
```
Or download manually from [GitHub Releases](https://github.com/rickyltwong/xnatctl/releases):
| Platform | Asset |
|----------|-------|
| Linux (x86_64) | `xnatctl-linux-amd64.tar.gz` |
| macOS (x86_64) | `xnatctl-darwin-amd64.tar.gz` |
| Windows (x86_64) | `xnatctl-windows-amd64.zip` |
```bash
# Linux / macOS
tar -xzf xnatctl-<platform>-amd64.tar.gz
chmod +x xnatctl
mv xnatctl ~/.local/bin/
# Windows (PowerShell)
Expand-Archive xnatctl-windows-amd64.zip -DestinationPath .
Move-Item xnatctl.exe C:\Users\<you>\AppData\Local\bin\
```
### Python Package
```bash
# From PyPI (recommended)
pip install xnatctl
# With uv
uv pip install xnatctl
# For DICOM utilities (optional)
pip install "xnatctl[dicom]"
# From source
pip install git+https://github.com/rickyltwong/xnatctl.git
```
### Docker
```bash
docker run --rm ghcr.io/rickyltwong/xnatctl:main --help
```
## Quick Start
```bash
# Create config file
xnatctl config init --url https://xnat.example.org
# Authenticate
xnatctl auth login
# List projects
xnatctl project list
# Download a session
xnatctl session download XNAT_E00001 --out ./data
```
## Commands
| Command | Description |
|---------|-------------|
| `xnatctl config` | Manage configuration profiles |
| `xnatctl auth` | Authentication (login/logout/status) |
| `xnatctl project` | Project operations (list/show/create) |
| `xnatctl subject` | Subject operations (list/show/rename/delete) |
| `xnatctl session` | Session operations (list/show/download/upload) |
| `xnatctl scan` | Scan operations (list/show/delete) |
| `xnatctl resource` | Resource operations (list/upload/download) |
| `xnatctl prearchive` | Prearchive management |
| `xnatctl pipeline` | Pipeline execution |
| `xnatctl admin` | Administrative operations |
| `xnatctl api` | Raw API access (escape hatch) |
| `xnatctl dicom` | DICOM utilities (requires pydicom) |
## Configuration
Config file location: `~/.config/xnatctl/config.yaml`
```yaml
default_profile: production
output_format: table
profiles:
production:
url: https://xnat.example.org
username: myuser # optional, can also use env vars
password: mypassword # optional, can also use env vars
verify_ssl: true
timeout: 30
default_project: MYPROJECT
development:
url: https://xnat-dev.example.org
verify_ssl: false
```
### Getting Started with Profiles
```bash
# Create an initial config (prompts for URL and optional defaults)
xnatctl config init --url https://xnat.example.org
# Add additional profiles
xnatctl config add-profile dev --url https://xnat-dev.example.org --no-verify-ssl
# Switch the active profile
xnatctl config use-context dev
# Show the active profile and config
xnatctl config show
```
### Authentication Flow
```bash
# Login and cache a session token
xnatctl auth login
# Check current user/session context
xnatctl whoami
```
Credential priority (highest to lowest):
1. CLI arguments (`--username`, `--password`)
2. Environment variables (`XNAT_USER`, `XNAT_PASS`)
3. Profile config (`username`, `password` in config.yaml)
4. Interactive prompt
Session tokens are cached under `~/.config/xnatctl/.session` and used automatically.
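The priority order above amounts to a simple resolution chain; a sketch of the pattern (illustrative only — not the actual xnatctl source, and the function name is hypothetical):

```python
import os

def resolve_credential(cli_value, env_var, profile, key, prompt):
    """Return the first available credential, highest-priority source first."""
    if cli_value is not None:          # 1. CLI argument
        return cli_value
    if os.environ.get(env_var):        # 2. Environment variable
        return os.environ[env_var]
    if profile.get(key):               # 3. Profile config
        return profile[key]
    return prompt()                    # 4. Interactive prompt

# Demo: no CLI value, env var unset -> falls through to the profile config
os.environ.pop("XNAT_USER", None)
user = resolve_credential(
    cli_value=None,
    env_var="XNAT_USER",
    profile={"username": "myuser"},
    key="username",
    prompt=lambda: input("Username: "),
)
```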
### Environment Variables
| Variable | Description |
|----------|-------------|
| `XNAT_URL` | Server URL |
| `XNAT_USER` | Username |
| `XNAT_PASS` | Password |
| `XNAT_TOKEN` | Session token |
| `XNAT_PROFILE` | Config profile |
Notes:
- `XNAT_TOKEN` takes precedence over cached sessions and username/password.
- `XNAT_URL` and `XNAT_PROFILE` override values from `config.yaml` for the current shell.
- Use `XNAT_USER`/`XNAT_PASS` for non-interactive auth (CI, scripts).
## Development
```bash
# Clone and install
git clone https://github.com/rickyltwong/xnatctl.git
cd xnatctl
uv sync
# Run tests
uv run pytest
# Lint and format
uv run ruff check xnatctl
uv run ruff format xnatctl
```
## License
MIT
| text/markdown | null | Ricky Wong <rickywonglt15@outlook.com> | null | null | MIT | cli, dicom, medical-imaging, neuroimaging, xnat | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Science/Research",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Pytho... | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.1.0",
"httpx>=0.25.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"rich>=13.0.0",
"mypy>=1.8.0; extra == \"dev\"",
"pre-commit>=3.7.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-mock>=3.12.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
... | [] | [] | [] | [
"homepage, https://github.com/rickyltwong/xnatctl",
"repository, https://github.com/rickyltwong/xnatctl.git",
"documentation, https://xnatctl.readthedocs.io"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:55:15.057258 | xnatctl-0.1.0.tar.gz | 217,254 | d4/79/1759e513387303e0efac318dd1a308dca10b6a6411e8ce98ac862f93761b/xnatctl-0.1.0.tar.gz | source | sdist | null | false | ef042ba03e67061145b705da99835f43 | 7697985eaf28f8fca0e78b7f8949b3e8937db59e77a1aca1f8d0198fef656058 | d4791759e513387303e0efac318dd1a308dca10b6a6411e8ce98ac862f93761b | null | [
"LICENSE"
] | 234 |
2.4 | iagent-pay | 2.1.2 | The Universal Payment Standard for AI Agents (EVM + Solana) | # 🤖 iAgentPay SDK v2.1 (Beta)
**The Universal Payment Standard for AI Agents.**
*Build autonomous agents that can Buy, Sell, Swap, and Tip across any blockchain.*
[](https://badge.fury.io/py/iagent-pay)
[](https://opensource.org/licenses/MIT)
---
## 🌟 Key Capabilities & Advantages
**iAgentPay** is the only payment standard designed specifically for autonomous AI agents.
| Feature | Advantage |
| :--- | :--- |
| **Universal Identity** | Your agent works on **Ethereum, Solana, Base, and Polygon** simultaneously. One wallet, all chains. |
| **Brain-Safe Security** | Built-in **Capital Guard** prevents wallet draining even if the AI is compromised. |
| **Retail Ready** | Native support for **Meme Coins (BONK, PEPE)** and **Stablecoins (USDC)**. |
| **DeFi Native** | Agents can **Auto-Swap** tokens (e.g., earn SOL, swap to USDC) without human help. |
| **B2B Protocol** | Includes **AIP-1** for agents to send invoices and bill each other programmatically. |
---
## 🚀 Why iAgentPay?
Most crypto SDKs are too complex for AI. **iAgentPay** abstracts 1000s of lines of blockchain code into simple English commands.
* ✅ **Multi-Chain:** Ethereum, Base, Polygon, **Solana**.
* ✅ **Universal Tokens:** Pay in ETH, SOL, USDC, USDT, BONK, PEPE.
* ✅ **Social Tipping:** `agent.pay("vitalik.eth", 10)`
* ✅ **Auto-Swap:** `agent.swap("SOL", "BONK")` (DeFi Integration).
* ✅ **Gas Guardrails:** Protect your agent from high fees.
---
## 📦 Installation
```bash
pip install iagent-pay
```
---
## ⚡ Quick Start
### 1. Initialize (Dual-Core Engine)
```python
from iagent_pay import AgentPay, WalletManager
# Create Wallet (Auto-Saved securely)
wm = WalletManager()
wallet = wm.get_or_create_wallet(password="MySecurePassword")
# 🟢 Connect to Base (L2 - Fast & Cheap)
agent_evm = AgentPay(wallet, chain_name="BASE")
# 🟣 Connect to Solana (High Frequency)
agent_sol = AgentPay(wallet, chain_name="SOL_MAINNET")
```
### 2. Simple Payments (The "Hello World")
```python
# Pay 0.01 ETH on Base
agent_evm.pay_agent("0x123...", 0.01)
# Pay 0.1 SOL on Solana
agent_sol.pay_agent("4jjCQ...", 0.1)
```
### 3. Retail & Memecoins (New in v2.1!) 🐕
Don't worry about contract addresses. We handle them.
```python
# Send USDC (Stablecoin)
agent_evm.pay_token("CLIENT_ADDRESS", 100.0, token="USDC")
# Send BONK (Meme - Solana)
agent_sol.pay_token("FRIEND_ADDRESS", 1000.0, token="BONK")
# Send PEPE (Meme - Ethereum)
agent_evm.pay_token("DEGEN_ADDRESS", 5000.0, token="PEPE")
```
### 4. Social Tipping 🎁
Human-readable names auto-resolve to addresses.
```python
# Resolves .eth (ENS) or .sol (SNS)
agent_evm.pay_agent("vitalik.eth", 0.05)
agent_sol.pay_token("tobby.sol", 50.0, token="USDC")
```
### 5. Auto-Swap (DeFi) 🔄
Agent earning in SOL but wants to hold BONK?
```python
# Buys BONK with 1 SOL instantly
result = agent_sol.swap(input="SOL", output="BONK", amount=1.0)
print(f"Swapped! Hash: {result['tx_hash']}")
```
---
## 🧾 B2B Invoicing (AIP-1)
Standardized Agent-to-Agent billing protocol.
### 1. Create Invoice (Seller)
```python
# Create a request for 50 USDC on Base
invoice_json = agent.create_invoice(
amount=50.0,
currency="USDC",
chain="BASE",
description="Consulting Services - Feb 2026"
)
# Send this JSON string to the other agent via HTTP/WebSocket
```
### 2. Pay Invoice (Buyer)
```python
# The buyer agent receives the JSON and pays it
tx_hash = agent.pay_invoice(invoice_json)
print(f"Paid! Tx: {tx_hash}")
```
*> Helper: Checks if invoice was already paid to prevent double-spending.*
---
## 🛡️ Business Features
### Dynamic Pricing
Update your agent's service fees remotely without redeploying code.
```python
from iagent_pay import PricingManager
pm = PricingManager("https://api.myagent.com/pricing.json")
fee = pm.get_price()
```
### Gas Guardrails ⛽
Prevent your agent from burning money when the network is congested.
```python
# Aborts if Gas > 20 Gwei
try:
agent_evm.pay_agent("Bob", 0.1, max_gas_gwei=20)
except ValueError:
print("Gas too high, sleeping...")
```
---
## 🛡️ Security & Capital Control (New!)
Prevent your AI from draining your wallet if it gets "hallucinated" or compromised.
### Daily Spending Limit (Circuit Breaker)
By default, sending native tokens (ETH/SOL) is capped at **10.0 units** per 24 hours.
**Configure at start:**
```python
# Limit to 5.0 ETH per day
agent = AgentPay(wallet, chain_name="BASE", daily_limit=5.0)
```
**Update dynamically:**
```python
# Increase limit for a big purchase
agent.set_daily_limit(50.0)
# Lock wallet (Disable spending)
agent.set_daily_limit(0)
```
*> If an agent tries to spend over the limit, a `SecurityAlert` error is raised.*
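The circuit-breaker idea can be sketched in plain Python — a hypothetical `DailyLimiter` tracking spend over a rolling 24-hour window. Names and internals here are illustrative, not the SDK's actual implementation:

```python
import time

class SecurityAlert(Exception):
    """Raised when a payment would exceed the daily cap."""

class DailyLimiter:
    def __init__(self, daily_limit: float):
        self.daily_limit = daily_limit
        self._spends = []  # (timestamp, amount) pairs

    def check_and_record(self, amount: float) -> None:
        now = time.time()
        # Keep only spends from the last 24 hours
        self._spends = [(t, a) for t, a in self._spends if now - t < 86400]
        spent = sum(a for _, a in self._spends)
        if spent + amount > self.daily_limit:
            raise SecurityAlert(
                f"Daily limit {self.daily_limit} exceeded: "
                f"{spent} spent, {amount} requested"
            )
        self._spends.append((now, amount))

limiter = DailyLimiter(daily_limit=5.0)
limiter.check_and_record(3.0)          # within the cap
try:
    limiter.check_and_record(4.0)      # 3.0 + 4.0 > 5.0 -> blocked
    blocked = False
except SecurityAlert:
    blocked = True
```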
---
## 🛠️ Configuration
Dual-Treasury support for collecting fees in both ecosystems.
**`pricing_config.json`**:
```json
{
"treasury": {
"EVM": "0xYourEthWallet...",
"SOLANA": "YourSolanaWallet..."
},
"trial_days": 100,
"subscription_price_usd": 26.00
}
```
---
## 📄 License
MIT License. Built for the Agent Economy.
| text/markdown | iAgent Team | hello@agentpay.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Office/Busines... | [] | https://github.com/agent-pay/sdk | null | >=3.7 | [] | [] | [] | [
"web3>=6.0.0",
"eth-account>=0.8.0",
"python-dotenv>=1.0.0",
"solana>=0.30.0",
"solders>=0.18.0",
"requests>=2.28.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T21:54:25.820243 | iagent_pay-2.1.2.tar.gz | 22,646 | d2/3b/e51ee5b2b9eeefbf4fb9999be690b63bbdc3fa6a041915057530a5c9c6e9/iagent_pay-2.1.2.tar.gz | source | sdist | null | false | dc6ab1ea7edae33e12da4ede8da873dc | d26ffcfe5401c3131d8e36daefa694107fabf6a2c24bcef2621fa3b11ae6535e | d23be51ee5b2b9eeefbf4fb9999be690b63bbdc3fa6a041915057530a5c9c6e9 | null | [] | 235 |
2.4 | markdownify-rs | 0.1.2 | Rust implementation of Python markdownify with a Python API | # markdownify-rs
Rust implementation of Python markdownify with output parity as the primary goal.
## Python bindings
Build and install locally with maturin (uv):
```bash
uv venv
uv pip install maturin
.venv/bin/maturin develop --features python
```
Build via pip (PEP 517):
```bash
uv pip install .
```
Usage:
```python
from markdownify_rs import markdownify
print(markdownify("<b>Hello</b>"))
```
Batch usage (parallelized in Rust):
```python
from markdownify_rs import markdownify_batch
outputs = markdownify_batch(["<b>Hello</b>", "<i>World</i>"])
```
Markdown-adjacent utilities (submodule):
```python
from markdownify_rs.markdown_utils import (
split_into_chunks,
split_into_chunks_batch,
coalesce_small_chunks,
link_percentage,
link_percentage_batch,
filter_by_link_percentage,
strip_links_with_substring,
strip_links_with_substring_batch,
remove_large_tables,
remove_large_tables_batch,
remove_lines_with_substring,
remove_lines_with_substring_batch,
fix_newlines,
fix_newlines_batch,
split_on_dividers,
strip_html_and_contents,
strip_html_and_contents_batch,
strip_data_uri_images,
text_pipeline_batch,
)
chunks = split_into_chunks(text, how="sections")
chunks_batch = split_into_chunks_batch([text1, text2], how="sections")
cleaned = strip_links_with_substring(text, "javascript")
cleaned_batch = strip_links_with_substring_batch([text1, text2], "javascript")
filtered = filter_by_link_percentage([text1, text2], threshold=0.5)
pipelined = text_pipeline_batch(
[text1, text2],
steps=[
("strip_links_with_substring", {"substring": "javascript"}),
("remove_large_tables", {"max_cells": 200}),
("fix_newlines", {}),
],
)
```
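As a rough illustration of what a link-density metric like `link_percentage` might compute — assuming it reports the fraction of characters inside markdown link syntax (the real implementation lives in Rust and may differ):

```python
import re

# Inline markdown links: [text](url)
LINK_RE = re.compile(r"\[([^\]]*)\]\(([^)]*)\)")

def link_percentage(text: str) -> float:
    """Fraction of characters that belong to markdown link syntax."""
    if not text:
        return 0.0
    link_chars = sum(m.end() - m.start() for m in LINK_RE.finditer(text))
    return link_chars / len(text)

nav = "[Home](/) [About](/about) [Contact](/contact)"   # nav-bar residue
prose = "Plain paragraph with one [link](/x) in it."     # real content
```

A threshold filter on this metric can drop navigation-heavy chunks while keeping prose, which is the kind of cleanup `filter_by_link_percentage` is aimed at.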
Notes:
- `code_language_callback` is not yet supported in the Python bindings.
CLI:
```bash
markdownify-rs input.html
cat input.html | markdownify-rs
```
## Parity hacks (scraper vs. BeautifulSoup)
These are explicit, ad hoc behaviors added on top of `scraper`/`html5ever` to match
`python-markdownify` (BeautifulSoup + html.parser) output. They are intentionally
quirky and may be replaced with more “correct” behavior once parity is stable.
- **`<br>` parser quirk**: With BeautifulSoup’s html.parser, if a non‑self‑closing
`<br>` appears before a self‑closing `<br/>`, the later `<br/>` can be treated like
an opening `<br>` whose contents run until that implicit `<br>` is closed (usually
when its parent closes). We emulate this by removing the content between that
`<br/>` and the closing tag that ends the implicit `<br>` (ignoring `<br>` tags
inside comments/scripts), which matches python-markdownify’s output.
- **Leading whitespace reconstruction**: html.parser preserves whitespace‑only text
nodes that html5ever drops (notably between `<html>` children and at the start of
`<body>`). We reconstruct the normalized leading whitespace prefix (using the same
“single space vs. single newline” rules as BeautifulSoup’s `endData`) and merge it
with the converter output, carrying it across non‑block tags and empty custom
elements whose contents are only comments/whitespace.
- **Table header inference**: For tables whose header row is effectively empty,
we avoid forcing a “---” separator to match python-markdownify behavior.
- **Top-level `<td>/<th>` wrapping**: If input is a bare `<td>`/`<th>`, we wrap it
in a `<table><tr>…</tr></table>` fragment to align with python-markdownify output.
## Benchmarks
Datasets
- Michigan Statutes (JSONL, 241 HTML documents).
- Total HTML bytes: 101,029,525 (~96.35 MiB).
- Largest document: 8,034,686 bytes (~7.66 MiB).
- Source file size: 102,856,616 bytes (~98.10 MiB).
- Law websites (CSV, 3,136 HTML documents).
- Total HTML bytes: 111,747,114 (~106.57 MiB).
- Largest document: 1,381,380 bytes (~1.32 MiB).
- Source file size: 148,486,852 bytes (~141.61 MiB).
Run
```bash
# Michigan Statutes (JSONL)
MARKDOWNIFY_BENCH_PATH=/path/to/mi_statutes.jsonl .venv/bin/python scripts/bench_python.py --module markdownify_rs --dist-name markdownify-rs --label markdownify_rs
MARKDOWNIFY_BENCH_PATH=/path/to/mi_statutes.jsonl .venv/bin/python scripts/bench_python.py --module markdownify --dist-name markdownify --label markdownify
# Law websites (CSV)
.venv/bin/python scripts/bench_python.py --format csv --path /path/to/deleted_pages.csv --module markdownify_rs --dist-name markdownify-rs --label markdownify_rs
.venv/bin/python scripts/bench_python.py --format csv --path /path/to/deleted_pages.csv --module markdownify --dist-name markdownify --label markdownify
```
Python binding comparison (both run through Python, 2026-01-28, Apple M3, macOS 14.6 / Darwin 24.6.0, Python 3.13.0)
Michigan Statutes (JSONL)
- `markdownify_rs` `convert_all` (241 docs): time 2.266594 s, throughput 42.508 MiB/s
- `markdownify_rs` `convert_all_batch` (241 docs): time 0.538012 s, throughput 179.084 MiB/s
- `markdownify_rs` `convert_largest` (8,034,686 bytes): time 187.941 ms, throughput 40.771 MiB/s
- `markdownify` `convert_all` (241 docs): time 29.654787 s, throughput 3.249 MiB/s
- `markdownify` `convert_largest` (8,034,686 bytes): time 4.496880 s, throughput 1.704 MiB/s
Speedup summary (wall-clock time, lower is better)
| Scenario | markdownify_rs time | markdownify_rs batch time | markdownify time | Speedup (rs vs py) | Speedup (batch vs py) | Batch vs rs |
| --- | --- | --- | --- | --- | --- | --- |
| convert_all | 2.266594 s | 0.538012 s | 29.654787 s | 13.08x (+1208.34%) | 55.12x (+5411.92%) | 4.21x (+321.29%) |
| convert_largest | 187.941 ms | n/a | 4.496880 s | 23.93x (+2292.71%) | n/a | n/a |
Law websites (CSV)
- `markdownify_rs` `convert_all` (3,136 docs): time 2.596691 s, throughput 41.041 MiB/s
- `markdownify_rs` `convert_all_batch` (3,136 docs): time 0.672013 s, throughput 158.584 MiB/s
- `markdownify_rs` `convert_largest` (1,381,380 bytes): time 54.482 ms, throughput 24.180 MiB/s
- `markdownify` `convert_all` (3,136 docs): time 17.680570 s, throughput 6.028 MiB/s
- `markdownify` `convert_largest` (1,381,380 bytes): time 280.459 ms, throughput 4.697 MiB/s
Speedup summary (wall-clock time, lower is better)
| Scenario | markdownify_rs time | markdownify_rs batch time | markdownify time | Speedup (rs vs py) | Speedup (batch vs py) | Batch vs rs |
| --- | --- | --- | --- | --- | --- | --- |
| convert_all | 2.596691 s | 0.672013 s | 17.680570 s | 6.81x (+580.89%) | 26.31x (+2530.99%) | 3.86x (+286.40%) |
| convert_largest | 54.482 ms | n/a | 280.459 ms | 5.15x (+414.77%) | n/a | n/a |
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Rust",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.0 | 2026-02-18T21:53:07.509088 | markdownify_rs-0.1.2-cp38-abi3-macosx_11_0_arm64.whl | 1,457,378 | 78/c6/41629274ee0b83b3c9bab0cbfa224ccfd2a6d613365871a089786f162dc9/markdownify_rs-0.1.2-cp38-abi3-macosx_11_0_arm64.whl | cp38 | bdist_wheel | null | false | 22bda3221f209351635d2563c8d82310 | a0db731874315d636864f132bbedd83799ecfd87bb0474ef515a94a03bfb2207 | 78c641629274ee0b83b3c9bab0cbfa224ccfd2a6d613365871a089786f162dc9 | null | [] | 553 |
2.1 | wassersteinwormhole | 0.3.7 | Transformer based embeddings for Wasserstein Distances | WassersteinWormhole
======================
Embedding point-clouds by preserving Wasserstein distances with the Wormhole.
This implementation is written in Python3 and relies on FLAX, JAX, & JAX-OTT.
To install JAX, simply run the command:
pip install --upgrade pip install -U "jax[cuda12]”
And to install WassersteinWormhole along with the rest of the requirements:
pip install wassersteinwormhole
And running the Wormhole on your own set of point-clouds is as simple as:
from wassersteinwormhole import Wormhole
WormholeModel = Wormhole(point_clouds = point_clouds)
WormholeModel.train()
Embeddings = WormholeModel.encode(WormholeModel.point_clouds, WormholeModel.masks)
For more details, follow the tutorial at [https://wassersteinwormhole.readthedocs.io](https://wassersteinwormhole.readthedocs.io/en/latest/).
| text/markdown | Doron Haviv | null | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"flax<0.11.0,>=0.10.6",
"ott-jax<0.5.0,>=0.4.9",
"clu<0.0.13,>=0.0.12",
"tqdm<5.0.0,>=4.67.1",
"scanpy<2.0.0,>=1.11.2"
] | [] | [] | [] | [] | poetry/1.5.1 CPython/3.14.3 Linux/6.14.0-1017-azure | 2026-02-18T21:50:27.743314 | wassersteinwormhole-0.3.7.tar.gz | 18,754 | e6/d9/e9c74c3102be642a21c0e17722e50367278754d9605c1308af02f3d7cd27/wassersteinwormhole-0.3.7.tar.gz | source | sdist | null | false | 8ec4e981104d8bb4b93d4e92ebbe79fd | aa94b49b591e152274388648331a3df09ead55f6ac4a21612d37727cc6ab1d60 | e6d9e9c74c3102be642a21c0e17722e50367278754d9605c1308af02f3d7cd27 | null | [] | 229 |
2.4 | Topsis-Vaibhav-102316037 | 1.0.0 | A Python package for implementing TOPSIS | # TOPSIS-Vaibhav-102316037
A Python package to implement the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS).
## Installation
```bash
pip install Topsis-Vaibhav-102316037
```
## Usage
```bash
topsis <InputDataFile> <Weights> <Impacts> <OutputResultFileName>
```
## Example
```bash
topsis data.csv "1,1,1,1,1" "+,+,-,+,+" result.csv
```
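For intuition, the TOPSIS method itself can be sketched in a few lines (a generic illustration of the technique, not this package's implementation):

```python
# Generic sketch of TOPSIS: normalize, weight, measure distance to the
# ideal-best and ideal-worst alternatives, and score by relative closeness.
import numpy as np

def topsis(matrix, weights, impacts):
    m = np.asarray(matrix, dtype=float)
    # 1. Vector-normalize each criterion column, then apply the weights.
    v = m / np.sqrt((m ** 2).sum(axis=0)) * np.asarray(weights, dtype=float)
    # 2. Ideal best/worst per column depend on the impact sign (+ or -).
    plus = np.asarray(impacts) == "+"
    best = np.where(plus, v.max(axis=0), v.min(axis=0))
    worst = np.where(plus, v.min(axis=0), v.max(axis=0))
    # 3. Euclidean distances to both ideals; closeness score in [0, 1].
    d_best = np.sqrt(((v - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((v - worst) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)
```

Higher scores rank higher; the CLI writes these scores and ranks into the output CSV.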
| text/markdown | Vaibhav Srivastva | vaibhavsrivastva73@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/makardvaj/topsis-package | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.3 | 2026-02-18T21:50:06.834891 | topsis_vaibhav_102316037-1.0.0.tar.gz | 3,924 | ab/89/46e5ce47a0183f9c103383cacbc3edc2bcf2732cb0fd869f8f18098d22eb/topsis_vaibhav_102316037-1.0.0.tar.gz | source | sdist | null | false | 262f96e24cbece4ef145db2e1fd7e309 | 3e68861a7b09921aaa1ab5912409fe9d62ddfd5cc8bcd173e3f876dc43a1e698 | ab8946e5ce47a0183f9c103383cacbc3edc2bcf2732cb0fd869f8f18098d22eb | null | [
"LICENSE"
] | 0 |
2.4 | morebs2 | 0.1.56 | A structure for data security and a cracking environment. | MOREBS
=======
MOREBS is a collection of methods and classes, written in Python, that aid in data
generation, specifically vector data. Important classes include:
- `BallComp`
- `ResplattingSearchSpaceIterator`
For the source code to this project's latest version, go to
`https://github.com/changissz/morebs`.
For a mathematics paper on the `BallComp` algorithm, see the link
`https://github.com/changissnz/ballcomp`.
Documentation for the project can be found at `_build/html` once it has been
built; building it requires the Sphinx library to be installed.
# Updates For Project On and Off
# Update: 5/25/25 #3
Deleted the project DER from Github. The project was the original work I did
before I refactored it into `morebs`, and has been sitting dead on Github for
a while.
# Update: 5/25/25 #2
So the new version is up on Github (0.1.1). I also took the step to delete the
`s*.txt` files that were present from a few commits back. The files were relatively
large, and I must have forgotten to exclude the files from being committed to Github.
# Update: 5/25/25
I have not done much serious work on this project since February of 2023.
Recently, I was working with directed graphs and decided to contribute
some code for that topic to this project. I still remember the ideas that
started this project, geometric intersection and data generation. On
geometric intersection, Schopenhauer wrote about it in his book, The
World as Will and Representation. Even though he did not go into mathematical
detail, his words left an inspiring impact on my thinking about computing. The
topic of data generation is a pretty big field in computing. Cloud computing,
especially, has been a big driver for big data analytics, the counter-active
field to data generation. Now that there are present and emerging regulations
regarding the "fair" and "benign" use of data in artificial intelligence and
related fields, data generation has become very important to some enterprises
that wish to train their artificial intelligence systems, but do not have
authentic datasets in adequate quantity. I'm not surprised that no one has
decided to help contribute code to this project. Not to mean any insult, but
big data, machine learning, that kind of stuff really is not a normal person's
interest (sorry, populists). Besides, most open-source projects that really take
off are heavily funded. I have been out of the academic environment for almost
half a decade now. Big data, machine learning, that kind of stuff, was mainly an
academic business. It still is, pretty much, because all I ever read about from
technology corporations is their business products, consumer-side.
I was reviewing some of the code in this project. The project definitely needs
more thorough documentation as well as unit-testing. `morebs` was originally
a project solely for the `BallComp` algorithm and a data generator, one that
travels along an n-dimensional lattice, called `NSDataInstruction`.
`NSDataInstruction` uses the `ResplattingSearchSpaceIterator` as the data structure
that outputs preliminary values. Then I added a basic polynomial factorization
algorithm (`PolyFactorEst`) and an n-dimensional delineation algorithm (see file
`deline.py`). This was back in January of 2023. Not every algorithm is thoroughly
tested, as a reminder.
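As a loose illustration of the lattice-walking idea (not `ResplattingSearchSpaceIterator` itself, which is considerably more involved), a regular grid over per-dimension bounds can be enumerated like this:

```python
# Illustrative sketch: enumerate points of a regular n-dimensional lattice
# over per-dimension [lo, hi] bounds. morebs2's iterator does much more
# (resplatting, search-space partitioning); this only shows the basic walk.
from itertools import product

def lattice_points(bounds, steps):
    """Yield points of a regular grid with `steps` samples per dimension."""
    axes = [
        [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
        for lo, hi in bounds
    ]
    yield from product(*axes)
```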
Right now, I am working on directed graphs, so that is the topic of the new code
content for the next version of morebs2 on `pypi.org`.
| text/markdown | Richard Pham | Richard Pham <phamrichard45@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/Changissnz/morebs | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/changissnz/isoring",
"Issues, https://github.com/changissnz/isoring/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T21:49:35.951677 | morebs2-0.1.56.tar.gz | 153,468 | e4/da/0c1d04a9700746f1246feda47e3e14de1cbd75a82cd262b026e159e79f20/morebs2-0.1.56.tar.gz | source | sdist | null | false | 8edd92094d2791d4114254587997906b | acffa96cd31215bc726971b074a5d463a8b17849561ff6d2219a43af64b07a82 | e4da0c1d04a9700746f1246feda47e3e14de1cbd75a82cd262b026e159e79f20 | CC0-1.0 | [
"LICENSE"
] | 255 |
2.4 | osmosis-ai | 0.2.17 | A Python SDK for Osmosis LLM training workflows: reward/rubric validation and remote rollout. | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset=".github/osmosis-logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset=".github/osmosis-logo-light.svg">
<img alt="Osmosis" src=".github/osmosis-logo-light.svg" width="218">
</picture>
</p>
<p align="center">
<a href="https://pypi.org/project/osmosis-ai/"><img alt="Platform" src="https://img.shields.io/badge/platform-Linux%20%7C%20macOS-blue"></a>
<a href="https://pypi.org/project/osmosis-ai/"><img alt="PyPI" src="https://img.shields.io/pypi/v/osmosis-ai?color=yellow"></a>
<a href="https://pypi.org/project/osmosis-ai/"><img alt="Python" src="https://img.shields.io/pypi/pyversions/osmosis-ai"></a>
<a href="https://codecov.io/gh/Osmosis-AI/osmosis-sdk-python">
<img alt="Codecov" src="https://codecov.io/gh/Osmosis-AI/osmosis-sdk-python/branch/main/graph/badge.svg">
</a>
<a href="https://opensource.org/licenses/MIT"><img alt="License" src="https://img.shields.io/badge/License-MIT-orange.svg"></a>
<a href="https://docs.osmosis.ai"><img alt="Docs" src="https://img.shields.io/badge/docs-docs.osmosis.ai-green"></a>
</p>
# osmosis-ai
> ⚠️ **Warning**: osmosis-ai is still in active development. APIs may change between versions.
Python SDK for Osmosis AI training workflows. Supports two training modes with shared tooling for testing and evaluation.
Osmosis AI is a platform for training LLMs with reinforcement learning. You define custom reward functions, LLM-as-judge rubrics, and agent tools -- then Osmosis handles the training loop on managed GPU clusters. This SDK provides everything you need to build and test those components locally, from `@osmosis_reward` decorators and MCP tool definitions to a full CLI for running agents against datasets before submitting training runs.
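As a rough illustration of the reward-function shape (a hypothetical sketch: the actual `@osmosis_reward` signature may differ, and a no-op stand-in decorator is used here so the snippet runs without the SDK installed):

```python
# Hypothetical sketch of a reward function for RL training. The real
# decorator is `osmosis_reward` from the osmosis-ai SDK; a no-op stand-in
# is defined here so the example is self-contained.
def osmosis_reward(fn):  # stand-in for the SDK decorator
    return fn

@osmosis_reward
def exact_match_reward(solution: str, ground_truth: str) -> float:
    """Score 1.0 when the model's answer matches the reference exactly."""
    return 1.0 if solution.strip() == ground_truth.strip() else 0.0
```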
## Quick Start
Pick a training mode and follow the example repo:
- **Local Rollout** (recommended for most users): **[osmosis-git-sync-example](https://github.com/Osmosis-AI/osmosis-git-sync-example)**
- **Remote Rollout** (custom agent architectures): **[osmosis-remote-rollout-example](https://github.com/Osmosis-AI/osmosis-remote-rollout-example)**
## Two Training Modes
Osmosis supports **Local Rollout** and **Remote Rollout** as parallel approaches to training with reinforcement learning:
| | Local Rollout | Remote Rollout |
|--|--------------|----------------|
| **How it works** | Osmosis manages the agent loop. You provide reward functions, rubrics, and MCP tools via a GitHub-synced repo. | You implement and host a `RolloutAgentLoop` server. Full control over agent behavior. |
| **Best for** | Standard tool-use agents, fast iteration, zero infrastructure | Custom agent architectures, complex orchestration, persistent environments |
| **Example repo** | [osmosis-git-sync-example](https://github.com/Osmosis-AI/osmosis-git-sync-example) | [osmosis-remote-rollout-example](https://github.com/Osmosis-AI/osmosis-remote-rollout-example) |
## Installation
Requires Python 3.10 or newer. For development setup, see [CONTRIBUTING.md](CONTRIBUTING.md).
### Prerequisites
- **Python 3.10+**
- **An LLM API key** (e.g., OpenAI, Anthropic, Groq) -- required for `osmosis test` and `osmosis eval`. See [supported providers](https://docs.litellm.ai/docs/providers).
- **Osmosis account** (optional) -- needed for platform features like `osmosis login`, workspace management, and submitting training runs. Sign up at [platform.osmosis.ai](https://platform.osmosis.ai).
### pip
```bash
pip install osmosis-ai # Core SDK
pip install osmosis-ai[server] # FastAPI server for Remote Rollout
pip install osmosis-ai[mcp] # MCP tool support for Local Rollout
pip install osmosis-ai[full] # All features
```
### uv
```bash
uv add osmosis-ai # Core SDK
uv add osmosis-ai[server] # FastAPI server for Remote Rollout
uv add osmosis-ai[mcp] # MCP tool support for Local Rollout
uv add osmosis-ai[full] # All features
```
## Local Rollout
Osmosis manages the agent loop. You provide reward functions, rubrics, and MCP tools via a GitHub-synced repo.
Get started: **[osmosis-git-sync-example](https://github.com/Osmosis-AI/osmosis-git-sync-example)** | [Docs](docs/local-rollout/overview.md)
## Remote Rollout
You implement and host a `RolloutAgentLoop` server. Full control over agent behavior.
Get started: **[osmosis-remote-rollout-example](https://github.com/Osmosis-AI/osmosis-remote-rollout-example)** | [Docs](docs/remote-rollout/overview.md)
## Testing & Evaluation
Both modes share the same CLI tools: [Test Mode](docs/test-mode.md) | [Eval Mode](docs/eval-mode.md) | [CLI Reference](docs/cli.md)
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, testing, linting, and PR guidelines.
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Links
- [Homepage](https://github.com/Osmosis-AI/osmosis-sdk-python)
- [Issues](https://github.com/Osmosis-AI/osmosis-sdk-python/issues)
- [Local Rollout Example](https://github.com/Osmosis-AI/osmosis-git-sync-example)
- [Remote Rollout Example](https://github.com/Osmosis-AI/osmosis-remote-rollout-example)
| text/markdown | null | Osmosis AI <jake@osmosis.ai> | null | null | MIT License
Copyright (c) 2025 Gulp AI
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"PyYAML<7.0,>=6.0",
"python-dotenv<2.0.0,>=0.1.0",
"requests<3.0.0,>=2.0.0",
"litellm<2.0.0,>=1.40.0",
"tqdm<5.0.0,>=4.0.0",
"httpx<1.0.0,>=0.25.0",
"pydantic<3.0.0,>=2.0.0",
"fastapi<1.0.0,>=0.100.0; extra == \"server\"",
"uvicorn<1.0.0,>=0.23.0; extra == \"server\"",
"pydantic-settings<3.0.0,>=2... | [] | [] | [] | [
"Homepage, https://github.com/Osmosis-AI/osmosis-sdk-python",
"Issues, https://github.com/Osmosis-AI/osmosis-sdk-python/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:48:40.933685 | osmosis_ai-0.2.17.tar.gz | 132,228 | 67/f8/a50c12d88514a667811972916836266323ed3fd07da54f53b1566c4031d6/osmosis_ai-0.2.17.tar.gz | source | sdist | null | false | 7333ec406d7cb6e3eb5cf474e09bbfb0 | 87094a108d5e223c542c8c08d922e1e053b31bf7e548fe4390722fb5b4d0a953 | 67f8a50c12d88514a667811972916836266323ed3fd07da54f53b1566c4031d6 | null | [
"LICENSE"
] | 247 |
2.4 | connector-py | 4.189.0 | An Abstract Tool to Perform Actions on Integrations. | # Lumos Connector SDK
Plug apps back into Lumos using an integration connector built with this SDK.
[](https://pypi.org/project/connector-py)
[](https://pypi.org/project/connector-py)
-----
## Table of Contents
- [Lumos Connector SDK](#lumos-connector-sdk)
- [Table of Contents](#table-of-contents)
- [Installation](#installation)
- [Usage](#usage)
- [Print the spec](#print-the-spec)
- [Create a new connector](#create-a-new-connector)
- [Learning the connector's capabilities](#learning-the-connectors-capabilities)
- [Connector implementation](#connector-implementation)
- [Running unit tests](#running-unit-tests)
- [Typechecking with MyPy](#typechecking-with-mypy)
- [Error Handling](#error-handling)
- [Raising an exception](#raising-an-exception)
- [Response](#response)
- [OAuth Module](#oauth-module)
- [OAuth Flow Types](#oauth-flow-types)
- [Connector Configuration](#connector-configuration)
- [Where should I set my connector's configuration?](#where-should-i-set-my-connectors-configuration)
- [The connection sequence for Lumos](#the-connection-sequence-for-lumos)
- [Deploying a connector](#deploying-a-connector)
- [Deployment models](#deployment-models)
- [Tips](#tips)
- [The library I want to use is synchronous only](#the-library-i-want-to-use-is-synchronous-only)
- [License](#license)
## Installation
```console
pip install "connector-py[dev]"
```
## Usage
This package has...
1. A CLI to create a custom connector with its own CLI to call commands
2. A library to assist building custom connectors in Python
To get started with the CLI, run `connector --help`
### Print the spec
This SDK has an OpenAPI spec that you can render and view with the [Swagger editor](https://editor.swagger.io/).
```console
connector spec
```
## Create a new connector
From your shell, run
```shell
# Create a connector
# CLI cmd name folder
connector scaffold demo-connector demo_connector
# Install its dependencies in a virtual env
cd demo_connector
python -m venv .venv
. .venv/bin/activate
pip install ".[all]"
# Lint and run tests
mypy .
pytest
# Run the info capability (note the hyphens, instead of underscores)
demo-connector info
```
### Learning the connector's capabilities
Custom and on-premise Lumos connectors are called via the CLI; they're passed JSON and should print the response JSON to stdout.
Run the `info` capability to learn what other capabilities the connector supports, what resource and entitlement types, its name, etc.
Look at the info, using `jq` to pretty-print the response:
```shell
demo-connector info | jq .response
# or just the capabilities
demo-connector info | jq .response.capabilities
```
To call most capabilities, you run a command where you pass the request (JSON) as a string.
```console
<CONNECTOR COMMAND> <CAPABILITY NAME> --json '<A STRINGIFIED JSON OBJECT>'
```
The most important capability to implement is `validate_credentials`. Lumos uses this capability to verify that a user-established connection works and has produced authentication credentials your connector can use to perform other actions.
```py
test-connector validate_credentials --json '{
"auth": {
"oauth": {
"access_token":"this will not work"
}
},
"request": {},
"settings": {
"account_id":"foo"
}
}'
```
**This is expected to 💥 fail with a brand-new connector**. You'll need to figure out how to [configure the authentication](#connector-configuration) to the underlying app server, and how to surface that as user (auth) configuration.
To learn more about all the capabilities, check out the OpenAPI spec in a Swagger editor.
To see a working capability, you can use the Lumos mock connector's `validate_credentials` call, using `jq` to pretty print the JSON:
```console
mock-connector validate_credentials --json '{"auth":{"basic":{"username":"foo","password":"bar"}},"request":{},"settings":{"host":"google.com"}}' | jq .
```
```json
{
"response": {
"valid": true,
"unique_tenant_id": "mock-connector-tenant-id"
},
"page": null
}
```
### Connector implementation
Connectors can implement whichever Lumos capabilities make sense for the underlying app.
To see what a minimal implementation looks like, you can inspect a newly scaffolded connector, and look at the integration declaration, and the _uncommented out_ capability registrations.
The integration declaration looks something like this:
```python
integration = Integration(
app_id="my_app",
version=__version__,
auth=BasicCredential,
settings_model=MyAppSettings,
exception_handlers=[
(httpx.HTTPStatusError, HTTPHandler, None),
],
description_data=DescriptionData(
logo_url="https://logos.app.lumos.com/foobar.com",
user_friendly_name="Foo Bar",
description="Foobar is a cloud-based platform that lets you manage foos and bars",
categories=[AppCategory.DEVELOPERS, AppCategory.COLLABORATION],
),
resource_types=resource_types,
entitlement_types=entitlement_types,
)
```
And capability registration looks something like this:
```py
integration.register_capabilities(
{
StandardCapabilityName.VALIDATE_CREDENTIALS: capabilities_read.validate_credentials,
# StandardCapabilityName.LIST_ACCOUNTS: capabilities_read.list_accounts,
# StandardCapabilityName.ASSIGN_ENTITLEMENT: capabilities_write.assign_entitlement,
}
)
integration.register_custom_capabilities(
{
"my_custom_capability": (
capabilities_write.my_custom_capability, # must exist
CapabilityMetadata(
display_name="My Custom Capability",
description="Executes my custom capability",
),
),
}
)
```
### Running unit tests
Scaffolded connectors come with a bunch of unit test examples - they're all skipped by default, but you can remove the skip marker to use the existing test.
To run unit tests:
```console
pytest .
```
To understand the test structure:
```text
demo_connector/
demo_connector/
tests/
test_read_capabilities/
test_list_accounts_cases.py
...
test_write_capabilities/
...
test_all_capabilities.py
common_mock_data.py
```
- `test_all_capabilities.py` is the main Pytest test file. It uses `gather_cases` to discover all capability test cases automatically.
- `test_read_capabilities/test_list_accounts_cases.py` contains case functions for `list_accounts`.
- `common_mock_data.py` holds shared mock data (e.g., `DATETIME_NOW`) used across test cases.
### Typechecking with MyPy
The generated Python code is typed, and can be typechecked with MyPy (installed as a dev dependency).
```console
mypy .
```
### Error Handling
Error handling is facilitated through an exception handler decorator.
An exception handler can be attached to the connector library as follows:
```python
from httpx import HTTPStatusError
from connector.oai.errors import HTTPHandler
integration = Integration(
...,
exception_handlers=[
(HTTPStatusError, HTTPHandler, None),
],
handle_errors=True,
)
```
The `exception_handlers` parameter accepts a list of three-element tuples:
1. the exception type you would like to catch
2. the handler (default or implemented on your own)
3. a specific error code that you would like to associate with this handler.
It is recommended to use the default `HTTPHandler`, which handles `raise_for_status()` for you and assigns the proper error code. For more complex errors, subclass `ExceptionHandler` (in `connector/oai/errors.py`) and craft your own handler.
#### Raising an exception
In addition, a custom exception class is available, along with a default list of error codes:
```python
from connector.oai.errors import ConnectorError
from connector_sdk_types.generated import ErrorCode
def some_method(self, args):
raise ConnectorError(
message="Received wrong data, x: y",
app_error_code="foobar.some_unique_string",
error_code=ErrorCode.BAD_REQUEST,
)
```
Any exception you raise manually should use this class. A connector can implement its own list of error codes, which should be properly documented.
#### Response
An example response when handled this way:
```json
// BAD_REQUEST error from github connector
{"error":{"message":"Some message","status_code":400,"error_code":"bad_request","raised_by":"HTTPStatusError","raised_in":"github.integration:validate_credentials"}, "response": null, "raw_data": null}
```
### OAuth Module
The OAuth module is responsible for handling the OAuth2.0 flow for a connector.
It is configured with `oauth_settings` in the `Integration` class.
Not configuring this object will disable the OAuth module completely.
```python
from connector.oai.modules.oauth_module_types import (
OAuthSettings,
OAuthCapabilities,
OAuthRequest,
RequestDataType,
)
integration = Integration(
...,
oauth_settings=OAuthSettings(
# Authorization & Token URLs for the particular connector
authorization_url="https://app.connector.com/oauth/authorize",
token_url="https://api.connector.com/oauth/v1/token",
# Scopes per capability (space delimited string)
scopes={
StandardCapabilityName.VALIDATE_CREDENTIALS: "test:scope another:scope",
... # further capabilities as implemented in the connector
},
# You can modify the request type if the default is not appropriate
# common options for method are "POST" and "GET"
# available options for data are "FORMDATA", "QUERY", and "JSON" (form-data / url query params / json body)
# *default is POST and FORMDATA*
request_type=OAuthRequest(data=RequestDataType.FORMDATA),
# You can modify the authentication method if the default is not appropriate
# available options for auth_method are "CLIENT_SECRET_POST" and "CLIENT_SECRET_BASIC"
# *default is CLIENT_SECRET_POST*
client_auth=ClientAuthenticationMethod.CLIENT_SECRET_POST,
# You can turn off specific or all capabilities for the OAuth module
# This means that these will either be skipped or you have to implement them manually
capabilities=OAuthCapabilities(
refresh_access_token=False,
),
# You can specify the type of OAuth flow to use
# Available options are "CODE_FLOW" and "CLIENT_CREDENTIALS"
# *default is CODE_FLOW*
flow_type=OAuthFlowType.CODE_FLOW,
# You can enable PKCE (Proof Key for Code Exchange)
# *default is False*
# S256 is the default hashing algorithm, and the only supported at the moment
pkce=True,
),
)
```
It might happen that your integration requires a dynamic authorization/token URL,
for example when the service provider uses the customer's custom subdomain (e.g. `https://{subdomain}.service.com/oauth/authorize`).
In that case you can pass a callable that takes the request args (`AuthRequest`, without the auth parameter) as an argument (only available during the request).
```python
# method definitions
def get_authorization_url(args: AuthRequest) -> str:
settings = get_settings(args, ConnectorSettings)
return f"https://{settings.subdomain}.service.com/oauth/authorize"
def get_token_url(args: AuthRequest) -> str:
settings = get_settings(args, ConnectorSettings)
return f"https://{settings.subdomain}.service.com/oauth/token"
# oauth settings
integration = Integration(
...,
oauth_settings=OAuthSettings(
authorization_url=get_authorization_url,
token_url=get_token_url,
),
)
```
#### OAuth Flow Types
The OAuth module supports two flow types:
- `CODE_FLOW`: The authorization code flow (default)
- `CLIENT_CREDENTIALS`: The client credentials flow (sometimes called "2-legged OAuth" or "Machine-to-Machine OAuth")
The flow type can be specified in the `OAuthSettings` object.
Using the authorization code flow you have three available capabilities:
- `GET_AUTHORIZATION_URL`: To get the authorization URL
- `HANDLE_AUTHORIZATION_CALLBACK`: To handle the authorization callback
- `REFRESH_ACCESS_TOKEN`: To refresh the access token
Using the client credentials flow you have two available capabilities:
- `HANDLE_CLIENT_CREDENTIALS_REQUEST`: To handle the client credentials request, uses the token URL
- `REFRESH_ACCESS_TOKEN`: To refresh the access token
These are registered by default via the module and can be overridden by the connector.
If you run:
```sh
connector info
```
You will see that the OAuth capabilities are included in the available connector capabilities.
## Connector Configuration
A connector is used to connect to multiple tenants of the same app. Each tenant has a connection in Lumos, and the unique tenant ID is used to distinguish the different connections.
Each connection has its own...
- ...auth object that fits the connector's auth model.
- ...settings object that fits the connector's settings model.
- ...set of data (accounts, resources, entitlements) that Lumos reads and stores.

A connector can be used for multiple underlying instances of the same app. For instance, you might use a `github` connector to establish connections with different Github Organizations. The nature of "what is a tenant" is dependent on the underlying app.
A scaffolded connector has OAuth authentication, and a Settings type with `account_id`. You don't have to keep these - you can change the authentication model and the Settings type to whatever is appropriate for the underlying app (settings may be empty).
### Where should I set my connector's configuration?
A quick rule: sensitive data that would give an attacker access to the underlying app goes into the `auth` payload. Anything else that's not sensitive, not absolutely required to connect to a tenant, or can have a sane default, goes in `settings`.
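As a sketch of that rule of thumb (illustrative only; the field names are hypothetical, and plain dataclasses stand in for the SDK's actual model types):

```python
# Illustrative split between the auth payload and the settings object.
from dataclasses import dataclass

@dataclass
class BasicAuth:
    username: str
    password: str  # sensitive: would let an attacker in -> auth payload

@dataclass
class Settings:
    host: str                  # not sensitive -> settings
    timeout_seconds: int = 30  # has a sane default -> settings
```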
### The connection sequence for Lumos
1. Lumos sees a new connector, and queries its settings and auth models via the `info` command.
2. Lumos uses these parts of the `info` response to render a connection form for the user.
- the settings (JSON schema + included documentation)
- auth models (string matching to the auth model)
- app logo, description, tags, etc.
3. The user enters all the relevant data/auth materials to connect to an app, and/or does an OAuth consent flow to the underlying app.
4. Lumos validates the credentials and settings via the `validate_credentials` capability.

At this point, the connection is considered established, and Lumos will attempt to read all data from the connector, allow user provisioning and deprovisioning, etc.
## Deploying a connector
Quick steps:
1. Package up the connector you've built into an archive with a native executable. We use [`pyinstaller`](https://pyinstaller.org/en/stable/) for our Python connectors.
```shell
# SDK command ...required args
connector compile-on-prem --connector-root-module-dir ./demo_connector/demo_connector --app-id demo
```
2. Run the [Lumos on-premise agent](https://developers.lumos.com/reference/on-premise-agent).
3. On the same machine as (2), deploy the packaged-up connector from (1) in the same folder.
4. The integration should show up in the Lumos AdminView > Integrations screen.
### Deployment models
Lumos calls a connector's APIs with auth and settings data to read all the accounts, entitlements, resources, and associations in the connected app.
There are two ways this happens, depending on who's hosting the connector.
If Lumos is hosting it, we call it directly in our backend.

If it's a custom connector, it runs as an on-premise connector on a customer's computer, and is called by the [Lumos on-prem agent](https://developers.lumos.com/reference/on-premise-agent).

## Tips
### The library I want to use is synchronous only
You can use a package called `asgiref`, which converts I/O-bound synchronous
calls into non-blocking asyncio calls. First, add `asgiref` to the dependencies
list in `pyproject.toml`. Then, in your async code, use `asgiref.sync_to_async`
to wrap synchronous calls.
```python
from asgiref.sync import sync_to_async
import requests

async def async_get_data():
    # requests.get is blocking; sync_to_async runs it in a worker thread
    response = await sync_to_async(requests.get)("url")
    return response
```
## License
`connector` is distributed under the terms of the [Apache 2.0](./LICENSE.txt) license.
| text/markdown | null | teamlumos <security@lumos.com> | null | null | null | integrations | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"connector-sdk-types==0.39.0",
"gql[httpx]",
"httpx>=0.27.0",
"msgpack>=1",
"pydantic<2.10,>=2",
"urllib3>=1.25.2",
"botocore>=1.34.0",
"PyJWT==2.10.1",
"datadog~=0.49.0",
"pip-system-certs==5.2; platform_system == \"Windows\"",
"fastapi-slim; extra == \"fastapi\"",
"uvicorn; extra == \"fastap... | [] | [] | [] | [
"Documentation, https://developers.lumos.com/reference/the-lumos-connector-api"
] | twine/6.2.0 CPython/3.10.6 | 2026-02-18T21:47:04.963948 | connector_py-4.189.0.tar.gz | 217,058 | 0d/03/8100acc2ce6e7b83d2c852c8a5f0c97255d42d9f627948e8f6ea918ddbeb/connector_py-4.189.0.tar.gz | source | sdist | null | false | 16218f59a345b82fcb524d4609455270 | 5f1ae9abe1c2bf75d9e114175c87b952701a84fd9b4a8798e0e1008d11e34dfc | 0d038100acc2ce6e7b83d2c852c8a5f0c97255d42d9f627948e8f6ea918ddbeb | Apache-2.0 | [
"LICENSE.txt"
] | 832 |
2.4 | protox-gatekeeper | 0.2.4 | Fail-closed Tor session enforcement for Python HTTP(S) traffic | [](https://pypi.org/project/protox-gatekeeper/)
[](https://pypi.org/project/protox-gatekeeper/)
[](https://pypi.org/project/protox-gatekeeper/)
# ProtoX GateKeeper
**ProtoX GateKeeper** is a small, opinionated Python library that enforces
**fail-closed Tor routing** for HTTP(S) traffic.
The goal is simple:
> If Tor is not active and verified, **nothing runs**.
GateKeeper is designed to be *fire-and-forget*: create a client once, then perform network operations with a hard guarantee that traffic exits through the Tor network.
---
## What GateKeeper Is
- A **Tor-verified HTTP client**
- A thin wrapper around `requests.Session` with safe helpers
- Fail-closed by default (no silent clearnet fallback)
- Observable (exit IP, optional geo info)
- Suitable for scripts, tooling, and automation
---
## What GateKeeper Is NOT
- ❌ A Tor controller
- ❌ A crawler or scanner
- ❌ An anonymization silver bullet
- ❌ A replacement for Tor Browser
GateKeeper enforces transport routing only. You are still responsible for *what* you do with it.
---
## Requirements
- A locally running Tor client
- SOCKS proxy enabled (default: `127.0.0.1:9150`)
On Windows this usually means **Tor Browser** running in the background.
---
## Installation
### From PyPI
```bash
pip install protox-gatekeeper
```
### From source (development)
```bash
pip install -e .
```
(Recommended while developing or testing.)
---
## Basic Usage
```python
import logging
from protox_gatekeeper import GateKeeper
logging.basicConfig(
level=logging.INFO,
format='[%(levelname)s] %(name)s - %(message)s'
)
gk = GateKeeper(geo=True)
gk.download(
"https://httpbin.org/bytes/1024",
"downloads/test.bin"
)
```
### Example output
```
[INFO] gatekeeper.core - Tor verified: 89.xxx.xxx.xxx -> 185.xxx.xxx.xxx
[INFO] gatekeeper.core - Tor exit location: Brandenburg, DE
[INFO] gatekeeper.core - [Tor 185.xxx.xxx.xxx] downloading https://httpbin.org/bytes/1024 -> downloads/test.bin
```
This confirms:
- clearnet IP was measured
- Tor routing was verified
- all traffic used the Tor exit shown
---
### HTTP requests
GateKeeper can also be used as a Tor-verified HTTP client:
```python
with GateKeeper() as gk:
response = gk.get("https://httpbin.org/ip")
print(response.json())
```
All requests are guaranteed to use the verified Tor session.
---
## API Overview
### `GateKeeper(...)`
```python
gk = GateKeeper(
socks_port=9150,
geo=False
)
```
**Parameters**:
- `socks_port` *(int)* – Tor SOCKS port (default: `9150`)
- `geo` *(bool)* – Enable best-effort Tor exit geolocation (optional)
Raises `RuntimeError` if Tor routing cannot be verified.
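The fail-closed contract can be sketched in plain Python. The names `verify_tor` and the probe callables below are illustrative only, not part of the library's API; the real verification lives inside `GateKeeper.__init__`:

```python
def verify_tor(check) -> str:
    """Fail-closed guard: return the verified Tor exit IP, or raise.

    `check` is any callable returning the exit IP on success and a
    falsy value on failure -- a stand-in for the library's real probe.
    """
    exit_ip = check()
    if not exit_ip:
        # No Tor -> no execution: raise instead of falling back to clearnet.
        raise RuntimeError("Tor routing could not be verified")
    return exit_ip

# Simulated probes for illustration:
print(verify_tor(lambda: "185.0.0.1"))
try:
    verify_tor(lambda: None)
except RuntimeError as e:
    print("blocked:", e)
```

The point of the pattern is that the failure path raises rather than degrading silently, which is exactly the guarantee the constructor makes.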
---
### `request(method, url, **kwargs)`
Performs an arbitrary HTTP request **through the verified Tor session**.
This is a thin passthrough to `requests.Session.request`, with enforced Tor routing and logging.
```python
r = gk.request(
"GET",
"https://httpbin.org/ip",
timeout=10
)
print(r.json())
```
- `method` – HTTP verb (GET, POST, PUT, DELETE, ...)
- `url` – Target URL
- `**kwargs` – Forwarded directly to `requests`
This is the **core execution path** used internally by helper methods like `get()` and `post()`.
---
### `download(url, target_path)`
Downloads a resource **through the verified Tor session**.
```python
gk.download(url, target_path)
```
- `url` – HTTP(S) URL
- `target_path` – Full local file path (directories created automatically)
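The "directories created automatically" behaviour corresponds to the standard `pathlib` pattern. This is a sketch of the idea, not the library's actual code:

```python
import tempfile
from pathlib import Path

def save_bytes(target_path: str, payload: bytes) -> None:
    """Write payload to target_path, creating parent directories as needed."""
    target = Path(target_path)
    target.parent.mkdir(parents=True, exist_ok=True)  # no error if dirs exist
    target.write_bytes(payload)

# Demonstrate with a nested path that does not exist yet:
tmp = Path(tempfile.mkdtemp()) / "downloads" / "nested" / "test.bin"
save_bytes(str(tmp), b"\x00" * 16)
print(tmp.exists())  # → True
```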
---
### `get(url, **kwargs)`
Performs a Tor-verified HTTP GET request.
```python
response = gk.get(url, timeout=10)
```
Returns a standard `requests.Response`.
---
### `post(url, data=None, json=None, **kwargs)`
Performs a Tor-verified HTTP POST request.
```python
response = gk.post(url, json={"key": "value"})
```
Returns a standard `requests.Response`.
---
## Design Principles
- **Fail closed**: no Tor → no execution
- **Single verification point** (during construction)
- **No global state**
- **No logging configuration inside the library**
- **Session reuse without re-verification**
Logging is emitted by the library, but **configured by the application**.
---
## Logging
GateKeeper uses standard Python logging:
```python
import logging
logging.basicConfig(level=logging.INFO)
```
The library does **not** call `logging.basicConfig()` internally.
---
## Security Notes
- Tor exit IPs may rotate over time
- Geo information is best-effort and may be unavailable (rate-limits, CAPTCHAs)
- GateKeeper guarantees routing, not anonymity
### TLS Verification
All request methods forward arguments directly to `requests`. If you need to interact with legacy systems that have expired or self-signed certificates, you may disable TLS verification per request:
```python
r = gk.request(
"GET",
"https://legacy.example.com",
verify=False
)
```
Or for downloads:
```python
gk.download(url, "file.bin", verify=False)
```
Disabling certificate verification reduces transport security and should only be used when necessary.
---
## License
MIT License
---
## Status
- Version: **v0.2.4**
- Phase 2 in progress
- API intentionally minimal
Future versions may add optional features such as:
- circuit rotation
- ControlPort support
- higher-level request helpers
without breaking the core contract.
| text/markdown | Tom Erik Harnes | null | null | null | MIT License
Copyright (c) 2026 Tom Erik Harnes
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| tor, privacy, networking, security, proxy | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests",
"pysocks"
] | [] | [] | [] | [
"Homepage, https://github.com/ProtoXCode/protox-gatekeeper",
"Repository, https://github.com/ProtoXCode/protox-gatekeeper",
"Issues, https://github.com/ProtoXCode/protox-gatekeeper/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T21:46:12.853752 | protox_gatekeeper-0.2.4.tar.gz | 8,156 | 3f/b7/8c184a85d1e107fd1d1e32aea6c8e572fc8e3f862a84a0358c97c4cfb1c1/protox_gatekeeper-0.2.4.tar.gz | source | sdist | null | false | a99359b7872b68b888b2cc7014949e5b | 9637b89ec6ee675ea248851d89a54d3c0a79ae1967e517d096b71d68e5a4ab04 | 3fb78c184a85d1e107fd1d1e32aea6c8e572fc8e3f862a84a0358c97c4cfb1c1 | null | [
"LICENSE"
] | 229 |
2.4 | keydnn | 2.1.0 | KeyDNN is a lightweight deep learning framework with explicit CPU/CUDA execution and clean architectural boundaries. | # KeyDNN
**KeyDNN** is a lightweight deep learning framework built from scratch in Python, with a strong focus on:
- **clean architecture** and explicit interfaces
- a **practical CPU / CUDA execution stack**
- correctness-first design validated by **CPU ↔ CUDA parity tests**
It is designed to be both:
- a **learning-friendly** implementation of modern DL abstractions (Tensor, autograd, modules), and
- a **performance-oriented sandbox** for building real backends (native CPU kernels, CUDA kernels, vendor libraries).
> 🚧 **Status:** **v2.1.0**.
> The v2 public API is largely stable and continues to evolve incrementally within the v2.x line.
> Breaking changes are avoided when possible and documented when necessary.
> 📚 **Documentation:** https://keywind127.github.io/keydnn_v2/
> 💻 **Source:** https://github.com/keywind127/keydnn_v2
---
## Platform support
- **OS:** Windows 10 / 11 (**x64 only**)
- **Python:** ≥ 3.10
- **CUDA:** Optional (NVIDIA GPU required for acceleration)
CUDA acceleration requires a compatible CUDA runtime. Some backends use vendor libraries such as
**cuBLAS** / **cuDNN** when available.
If CUDA is unavailable, CPU execution remains supported.
### Support snapshot
- **Windows (CPU):** ✅ supported
- **Windows (CUDA):** ✅ supported (requires NVIDIA GPU + CUDA runtime; cuBLAS/cuDNN optional)
- **Linux/macOS:** ❌ not yet supported in v2.x (v0 has CPU-focused Linux support)
---
## Highlights
- **CUDA device-pointer–backed Tensor backend**
- Explicit H2D / D2H / D2D memory boundaries (**no implicit host materialization**)
- Vendor-accelerated kernels:
- **cuBLAS** GEMM for `matmul`
- **cuDNN** acceleration for `conv2d` / `conv2d_transpose` (when enabled)
- CUDA implementations for core ops:
- elementwise ops
- reductions
- pooling
- in-place scalar ops (optimizer hot paths)
- Extensive **CPU ↔ CUDA parity tests**
- Standalone **microbenchmarks** under `scripts/`
---
## Installation
```bash
pip install keydnn
```
Development install:
```bash
git clone https://github.com/keywind127/keydnn_v2.git
cd keydnn_v2
pip install -e .
```
---
## Quickstart
```python
from keydnn.tensors import Tensor, Device
x = Tensor(shape=(2, 3), device=Device("cpu"), requires_grad=True)
y = (x * 2.0).sum()
y.backward()
print(x.grad.to_numpy())
```
CUDA example:
```python
from keydnn.tensors import Tensor, Device
from keydnn.backend import cuda_available
device = Device("cuda:0") if cuda_available() else Device("cpu")
x = Tensor.rand((1024, 1024), device=device, requires_grad=True)
y = (x @ x.T).mean()
y.backward()
print("device:", device)
print("y:", y.item())
```
---
## CUDA setup (Windows)
CUDA requires additional setup on Windows (CUDA runtime discovery and optional cuDNN).
See the documentation for details:
- [https://keywind127.github.io/keydnn_v2/getting-started/cuda/](https://keywind127.github.io/keydnn_v2/getting-started/cuda/)
---
## Versioning note
**KeyDNN v2 is a major rewrite** and is **not API-compatible** with KeyDNN v0.
---
## License
Licensed under the **Apache License, Version 2.0**.
| text/markdown | keywind | watersprayer127@gmail.com | null | null | Apache-2.0 | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: Microsoft :: Windows :: Windows 10",
"Operating System :: Microsoft :: Windows :: Windows 11"
] | [
"Windows"
] | https://github.com/keywind127/keydnn_v2 | null | >=3.10 | [] | [] | [] | [
"numpy<2.0,>=1.24",
"typing_extensions==4.15.0",
"tensorflow>=2.12; extra == \"keras\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/keywind127/keydnn_v2/issues"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T21:45:56.678203 | keydnn-2.1.0.tar.gz | 1,267,902 | 1a/45/e67886941f4e8e83b37f1fe91df468b7adcfe525ecbb7217791cf71e2b44/keydnn-2.1.0.tar.gz | source | sdist | null | false | 8362cc9b3909b37e097f853b668d32e3 | 0fd5e13f8b6381b1bf294856c4e7be42cb7a84b991a7b0253f957d3edc12ea90 | 1a45e67886941f4e8e83b37f1fe91df468b7adcfe525ecbb7217791cf71e2b44 | null | [
"LICENSE"
] | 236 |
2.4 | clelandlab-quick | 0.6.12 | QuICK is a universal wrap of QICK. | # QuICK
QuICK is a universal wrap of [QICK](https://github.com/openquantumhardware/qick).
<div style="display: flex; flex-direction: row; flex-wrap: wrap; justify-content: center; align-items: center;">
<a style="margin: 0.25rem; display: block;" href="https://clelandlab-quick.readthedocs.io/en/latest/"><img src="https://img.shields.io/readthedocs/clelandlab-quick?style=for-the-badge&logo=readthedocs&logoColor=white"></a>
<a style="margin: 0.25rem; display: block;" href="https://pypi.org/project/clelandlab-quick/"><img src="https://img.shields.io/pypi/v/clelandlab-quick?style=for-the-badge&logo=pypi&logoColor=white"></a>
<a style="margin: 0.25rem; display: block;" href="https://github.com/clelandlab/quick"><img src="https://img.shields.io/github/stars/clelandlab/quick?style=for-the-badge&logo=github"></a>
</div>
## Installation
> This is the installation on your PC. For QICK Board setup, see [here](https://clelandlab-quick.readthedocs.io/en/latest/Tutorials/qick).
Install this package with `pip`:
```
pip install clelandlab-quick
```
Then you can import it in your Python code:
```python
import quick
```
## Layers
QuICK has several layers of complexity.
- `quick.auto` Automation of Qubit Measurements
- `quick.experiment` Experiment Routines for Qubit Measurements
- `quick.Mercator` Mercator Protocol for Pulse Sequence Program
- `qick` the QICK firmware

| text/markdown | Cleland Lab | clelandlab@proton.me | null | null | null | QICK, quantum, experiment, measurement, qubit, control, readout, fpga | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language... | [] | https://github.com/clelandlab/quick | null | <4,>=3.8 | [] | [] | [] | [
"qick==0.2.366",
"numpy",
"scipy",
"pyyaml",
"pyro4",
"matplotlib",
"ipython"
] | [] | [] | [] | [
"Source, https://github.com/clelandlab/quick",
"Documentation, https://clelandlab-quick.readthedocs.io/en/latest/",
"Tracker, https://github.com/clelandlab/quick/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T21:45:43.395525 | clelandlab_quick-0.6.12.tar.gz | 21,341 | 7b/34/f383787032bc08bc9264add0289394d58d755381d48f0c02118c60d1eeaf/clelandlab_quick-0.6.12.tar.gz | source | sdist | null | false | 43e82b9e87bd797d1cfcd5785cf52386 | accdc769c3807bef7bf130ca7ad53ec9f36a262fea812c78af9d460d32336fe3 | 7b34f383787032bc08bc9264add0289394d58d755381d48f0c02118c60d1eeaf | null | [
"LICENSE"
] | 242 |
2.4 | algoliasearch | 4.37.0 | A fully-featured and blazing-fast Python API client to interact with Algolia. | <p align="center">
<a href="https://www.algolia.com">
<img alt="Algolia for Python" src="https://raw.githubusercontent.com/algolia/algoliasearch-client-common/master/banners/python.png" >
</a>
<h4 align="center">The perfect starting point to integrate <a href="https://algolia.com" target="_blank">Algolia</a> within your Python project</h4>
<p align="center">
<a href="https://pypi.org/project/algoliasearch"><img src="https://img.shields.io/pypi/v/algoliasearch.svg" alt="PyPI"></img></a>
<a href="https://pypi.org/project/algoliasearch"><img src="https://img.shields.io/badge/python-3.8|3.9|3.10|3.11|3.12-blue" alt="Python versions"></img></a>
<a href="https://pypi.org/project/algoliasearch"><img src="https://img.shields.io/pypi/l/ansicolortags.svg" alt="License"></a>
</p>
</p>
<p align="center">
<a href="https://www.algolia.com/doc/libraries/sdk/install#python" target="_blank">Documentation</a> •
<a href="https://github.com/algolia/algoliasearch-django" target="_blank">Django</a> •
<a href="https://discourse.algolia.com" target="_blank">Community Forum</a> •
<a href="http://stackoverflow.com/questions/tagged/algolia" target="_blank">Stack Overflow</a> •
<a href="https://github.com/algolia/algoliasearch-client-python/issues" target="_blank">Report a bug</a> •
<a href="https://alg.li/support" target="_blank">Support</a>
</p>
## ✨ Features
- Thin & minimal low-level HTTP client to interact with Algolia's API
- Supports Python from `3.8`
## 💡 Getting Started
First, install Algolia Python API Client via the [pip](https://pip.pypa.io/en/stable/installing) package manager:
```bash
pip install --upgrade 'algoliasearch>=4.0,<5.0'
```
You can now import the Algolia API client in your project and play with it.
```py
from algoliasearch.search.client import SearchClient
_client = SearchClient("YOUR_APP_ID", "YOUR_API_KEY")
# Add a new record to your Algolia index
response = await _client.save_object(
index_name="<YOUR_INDEX_NAME>",
body={
"objectID": "id",
"test": "val",
},
)
# use the class directly
print(response)
# print the JSON response
print(response.to_json())
# Poll the task status to know when it has been indexed
await _client.wait_for_task(index_name="<YOUR_INDEX_NAME>", task_id=response.task_id)
# Fetch search results, with typo tolerance
response = await _client.search(
search_method_params={
"requests": [
{
"indexName": "<YOUR_INDEX_NAME>",
"query": "<YOUR_QUERY>",
"hitsPerPage": 50,
},
],
},
)
# use the class directly
print(response)
# print the JSON response
print(response.to_json())
```
For full documentation, visit the **[Algolia Python API Client](https://www.algolia.com/doc/libraries/sdk/install#python)**.
## ❓ Troubleshooting
Encountering an issue? Before reaching out to support, we recommend heading to our [FAQ](https://support.algolia.com/hc/sections/15061037630609-API-Client-FAQs) where you will find answers for the most common issues and gotchas with the client. You can also open [a GitHub issue](https://github.com/algolia/api-clients-automation/issues/new?assignees=&labels=&projects=&template=Bug_report.md)
## Contributing
This repository hosts the code of the generated Algolia API client for Python, if you'd like to contribute, head over to the [main repository](https://github.com/algolia/api-clients-automation). You can also find contributing guides on [our documentation website](https://api-clients-automation.netlify.app/docs/introduction).
## 📄 License
The Algolia Python API Client is an open-sourced software licensed under the [MIT license](LICENSE).
| text/markdown | Algolia Team | null | null | null | MIT | algolia, search, full-text-search, neural-search | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Langua... | [] | null | null | >=3.8.1 | [] | [] | [] | [
"aiohttp>=3.10.11",
"async-timeout>=4.0.3",
"pydantic>=2",
"python-dateutil>=2.8.2",
"requests>=2.32.3",
"urllib3>=2.2.3"
] | [] | [] | [] | [
"Homepage, https://www.algolia.com",
"Repository, https://github.com/algolia/algoliasearch-client-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:45:35.577311 | algoliasearch-4.37.0.tar.gz | 403,431 | c2/bd/929fa01b53b8632e149f95ae47af7a59f33eccd97a726602272141fad34a/algoliasearch-4.37.0.tar.gz | source | sdist | null | false | 7df4a0e2af3d0cc73926e9fc1071d5c1 | 9ddae64b96262fdcc62403b65f11d776b80baecd4b4a337fc57bd34d1ec5e962 | c2bd929fa01b53b8632e149f95ae47af7a59f33eccd97a726602272141fad34a | null | [
"LICENSE"
] | 9,602 |
2.4 | pgskewer | 0.1.2 | EDWH pgqueuer pipeline (pg-skEWer) |
# pgskewer
A minimalistic pipeline functionality built on top of [pgqueuer](https://github.com/educationwarehouse/pgqueuer), providing enhanced task orchestration capabilities for PostgreSQL-based job queues.
> The name "pgskewer" is chosen due to its metaphorical representation of how it "skewers" tasks together in a pipeline;
> rhymes with "pgqueuer" and contains "EW" (Education Warehouse) in its name.
## Features
- **Sequential and Parallel Task Pipelines**: Define complex workflows with a mix of sequential and parallel task execution
- **Result Storage**: Automatically store and pass around task results in sequence
- **Cancelable Tasks**: Gracefully cancel tasks when needed
- **Improved Error Handling**: Better error propagation and handling in pipelines
- **Real-time Log Streaming**: Stream logs from blocking functions in real-time
- **Type Annotations**: Comprehensive typing support for better IDE integration
## Installation
```bash
uv pip install pgskewer
```
## Requirements
- Python ≥ 3.13
- PostgreSQL database
- Dependencies:
- pgqueuer
- asyncpg
- anyio
- uvloop
## Quick Start
```python
import asyncio
import asyncpg
from pgqueuer import Job
from pgqueuer.db import AsyncpgDriver
from pgqueuer.queries import Queries
from pgskewer import ImprovedQueuer, parse_payload, TaskResult
async def main():
# Initialize the queuer with your database connection
connection = await asyncpg.connect("postgresql://user:password@localhost/dbname")
driver = AsyncpgDriver(connection)
pgq = ImprovedQueuer(driver)
# Define some tasks as entrypoints
@pgq.entrypoint("fetch_data")
async def fetch_data(job):
# Fetch some data
return {"data": "example data"}
@pgq.entrypoint("process_data")
async def process_data(job: Job):
# Process the data from the previous step
payload = parse_payload(job.payload)
data = payload["tasks"]["fetch_data"]["result"]["data"]
return {"processed": data.upper()}
@pgq.entrypoint("store_results")
async def store_results(job: Job):
# Store the processed data
payload = parse_payload(job.payload)
processed = payload["tasks"]["process_data"]["result"]["processed"]
# Store the processed data somewhere
return {"status": "completed", "stored": processed}
# Create a pipeline that runs these tasks in sequence
pgq.entrypoint_pipeline(
"my_pipeline",
# start steps as a mix of entrypoint names and function references:
fetch_data,
"process_data",
store_results
)
# Execute the pipeline (empty initial data)
job_id = await pgq.qm.queries.enqueue("my_pipeline", b'')
# when the pipeline completes, pgqueuer_result should have an entry for this job_id:
result: TaskResult = await pgq.result(job_id, timeout=None)
if __name__ == "__main__":
asyncio.run(main())
```
## Advanced Usage
### Creating Pipelines with Parallel Tasks
You can define pipelines with a mix of sequential and parallel tasks:
```python
# Define a pipeline with parallel tasks
pipeline = pgq.pipeline([
"task_1", # Run task_1 first
["task_2a", "task_2b"], # Then run task_2a and task_2b in parallel
"task_3" # Finally run task_3 after both task_2a and task_2b complete
])
```
### Register a Pipeline as an Entrypoint
You can register a pipeline as an entrypoint for reuse:
```python
# Register the pipeline as an entrypoint
pgq.entrypoint_pipeline(
"data_processing_pipeline",
"fetch_data",
["validate_data", "normalize_data"],
"store_data"
)
# Now you can enqueue this pipeline like any other task
job_id = await pgq.enqueue("data_processing_pipeline", {"source": "api"})
```
### Running Blocking Functions Asynchronously
pgskewer provides utilities to run blocking functions asynchronously with real-time log streaming:
```python
from pgskewer import unblock
def cpu_intensive_task(data):
# This is a blocking function
print("Processing data...")
result = process_data(data)
print("Processing complete!")
return result
# Run the blocking function asynchronously with log streaming
result = await unblock(cpu_intensive_task, data)
```
### Pipeline Result Structure
The pipeline returns a structured result with information about each task:
```python
{
"initial": {
# The initial payload provided to the pipeline
},
"tasks": {
"task_1": {
"status": "successful",
"ok": True,
"result": {
# Task 1's result data
}
},
"task_2a": {
"status": "successful",
"ok": True,
"result": {
# Task 2a's result data
}
},
# ... other tasks
}
}
```
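Given that shape, a consumer can summarise a run in a few lines of plain Python. The `failed_tasks` helper below is illustrative, not part of pgskewer:

```python
def failed_tasks(pipeline_result: dict) -> list[str]:
    """Return the names of tasks whose 'ok' flag is not set."""
    return [
        name
        for name, info in pipeline_result.get("tasks", {}).items()
        if not info.get("ok")
    ]

# A sample result in the structure shown above:
result = {
    "initial": {"source": "api"},
    "tasks": {
        "task_1": {"status": "successful", "ok": True, "result": {}},
        "task_2a": {"status": "exception", "ok": False, "result": None},
    },
}
print(failed_tasks(result))  # → ['task_2a']
```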
## Error Handling
If any task in a pipeline fails:
- In sequential execution, the pipeline stops and no further tasks are executed
- In parallel execution, sibling tasks are terminated
## License
`pgskewer` is distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.
## Credits
Developed by [Education Warehouse](https://educationwarehouse.nl/). | text/markdown | null | Robin van der Noord <robin.vdn@educationwarehouse.nl> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"anyio",
"asyncpg",
"edwh-uuid7",
"pgqueuer<0.25",
"python-dotenv",
"uvloop",
"edwh; extra == \"dev\"",
"edwh-migrate; extra == \"dev\"",
"hatch; extra == \"dev\"",
"psycopg2-binary; extra == \"dev\"",
"pydal; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Linux Mint","version":"22.3","id":"zena","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T21:45:23.752855 | pgskewer-0.1.2.tar.gz | 21,481 | 06/87/fe9bdebe1bcefd8f6fb55c0d49dee9c80fae04c532013186a4176fb79acd/pgskewer-0.1.2.tar.gz | source | sdist | null | false | d64ae776895451e5fa2f1f08332718f6 | f14e3aa74a7c6916db738aac40aff27aea1b1975245243a3d3507065b01402b3 | 0687fe9bdebe1bcefd8f6fb55c0d49dee9c80fae04c532013186a4176fb79acd | null | [] | 213 |
2.4 | sendcraft-sdk | 1.0.0 | Official SendCraft SDK for Python - Send transactional emails and campaigns via the SendCraft API | # SendCraft Python SDK
Official Python SDK for [SendCraft](https://sendcraft.online) — send transactional emails and campaigns via the SendCraft API.
## Installation
```bash
pip install sendcraft-sdk
```
## Quick Start
```python
import os
from sendcraft import SendCraft
client = SendCraft(api_key=os.environ["SENDCRAFT_API_KEY"])
result = client.send_email(
to_email="user@example.com",
subject="Hello from SendCraft",
html_content="<h1>Hello!</h1><p>Welcome!</p>",
from_email="noreply@yourdomain.com"
)
print(result)
```
## Environment Variables
```env
SENDCRAFT_API_KEY=sk_live_your_key_here
```
## Usage Examples
### Send Bulk Emails
```python
client.send_bulk_emails(
emails=[
{"toEmail": "user1@example.com", "toName": "John"},
{"toEmail": "user2@example.com", "toName": "Jane"},
],
subject="Weekly Newsletter",
html_content="<h1>This week...</h1>",
from_email="newsletter@yourdomain.com"
)
```
### Create Campaign
```python
client.create_campaign(
name="Product Launch",
subject="Big News!",
html_content="<h1>Check it out!</h1>",
from_email="marketing@yourdomain.com",
recipients=["user1@example.com", "user2@example.com"]
)
```
### Error Handling
```python
from sendcraft import SendCraft, SendCraftError, UnauthorizedError, RateLimitError
try:
client.send_email(...)
except UnauthorizedError:
print("Invalid API key")
except RateLimitError:
print("Too many requests")
except SendCraftError as e:
print("Error:", e)
```
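For transient failures such as `RateLimitError`, a small retry wrapper is often enough. This is a sketch: the backoff policy and the `send` callable are illustrative, and the exception class is defined locally so the snippet stands alone:

```python
import time

class RateLimitError(Exception):  # stand-in for sendcraft.RateLimitError
    pass

def send_with_retry(send, attempts: int = 3, base_delay: float = 0.01):
    """Call send(); on RateLimitError, back off exponentially and retry."""
    for attempt in range(attempts):
        try:
            return send()
        except RateLimitError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)

# Simulate an API that rate-limits twice, then succeeds:
calls = {"n": 0}
def flaky_send():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("Too many requests")
    return {"status": "sent"}

print(send_with_retry(flaky_send))  # → {'status': 'sent'}
```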
## API Reference
| Method | Description |
|--------|-------------|
| `send_email(...)` | Send a single email |
| `send_bulk_emails(...)` | Send to multiple recipients |
| `schedule_email(...)` | Schedule email for later |
| `get_email_stats()` | Get email statistics |
| `create_campaign(...)` | Create campaign |
| `get_campaigns(limit)` | List campaigns |
| `send_campaign(id)` | Send campaign |
| `create_template(...)` | Create template |
| `get_templates()` | List templates |
| `create_webhook(...)` | Create webhook |
| `get_analytics(id)` | Campaign analytics |
| `get_account()` | Account info |
## Support
- Website: https://sendcraft.online
- Email: support@sendcraft.online
## License
MIT
| text/markdown | SendCraft Team | support@sendcraft.online | null | null | null | sendcraft, email, transactional, marketing, api, sdk | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"License :: OSI Approved :: MIT License",
"Operating Sys... | [] | https://sendcraft.online | null | >=3.7 | [] | [] | [] | [
"requests>=2.25.0",
"pytest>=6.0; extra == \"dev\"",
"black>=21.0; extra == \"dev\"",
"flake8>=3.9; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-18T21:44:25.965329 | sendcraft_sdk-1.0.0.tar.gz | 4,798 | 8d/8d/14b28c924cb93360aa4d03703fe71694b5fff1116f46ca5294d63acd02df/sendcraft_sdk-1.0.0.tar.gz | source | sdist | null | false | 57586470a170613a850f928b24460447 | 90b7ee6a330a86523b386aa927bf32185f0049a24c416751103f5a81847ad44e | 8d8d14b28c924cb93360aa4d03703fe71694b5fff1116f46ca5294d63acd02df | null | [] | 242 |
2.3 | xrpl-py | 4.6.0b0 | A complete Python library for interacting with the XRP ledger | [](https://xrpl-py.readthedocs.io/)
# xrpl-py
A pure Python implementation for interacting with the [XRP Ledger](https://xrpl.org/).
The `xrpl-py` library simplifies the hardest parts of XRP Ledger interaction, like serialization and transaction signing. It also provides native Python methods and models for [XRP Ledger transactions](https://xrpl.org/transaction-formats.html) and core server [API](https://xrpl.org/api-conventions.html) ([`rippled`](https://github.com/ripple/rippled)) objects.
As an example, this is how you would use this library to send a payment on testnet:
```py
from xrpl.account import get_balance
from xrpl.clients import JsonRpcClient
from xrpl.models import Payment, Tx
from xrpl.transaction import submit_and_wait
from xrpl.wallet import generate_faucet_wallet
# Create a client to connect to the test network
client = JsonRpcClient("https://s.altnet.rippletest.net:51234")
# Create two wallets to send money between on the test network
wallet1 = generate_faucet_wallet(client, debug=True)
wallet2 = generate_faucet_wallet(client, debug=True)
# Both balances should be zero since nothing has been sent yet
print("Balances of wallets before Payment tx")
print(get_balance(wallet1.address, client))
print(get_balance(wallet2.address, client))
# Create a Payment transaction from wallet1 to wallet2
payment_tx = Payment(
account=wallet1.address,
amount="1000",
destination=wallet2.address,
)
# Submit the payment to the network and wait to see a response
# Behind the scenes, this fills in fields which can be looked up automatically like the fee.
# It also signs the transaction with wallet1 to prove you own the account you're paying from.
payment_response = submit_and_wait(payment_tx, client, wallet1)
print("Transaction was submitted")
# Create a "Tx" request to look up the transaction on the ledger
tx_response = client.request(Tx(transaction=payment_response.result["hash"]))
# Check whether the transaction was actually validated on ledger
print("Validated:", tx_response.result["validated"])
# Check balances after 1000 drops (.001 XRP) was sent from wallet1 to wallet2
print("Balances of wallets after Payment tx:")
print(get_balance(wallet1.address, client))
print(get_balance(wallet2.address, client))
```
[](https://pepy.tech/project/xrpl-py/month)
[](https://github.com/xpring-eng/xrpl-py/graphs/contributors)
## Installation and supported versions
The `xrpl-py` library is available on [PyPI](https://pypi.org/). Install with `pip`:
```
pip3 install xrpl-py
```
The library supports [Python 3.8](https://www.python.org/downloads/) and later.
[](https://pypi.org/project/xrpl-py)
## Features
Use `xrpl-py` to build Python applications that leverage the [XRP Ledger](https://xrpl.org/). The library helps with all aspects of interacting with the XRP Ledger, including:
- Key and wallet management
- Serialization
- Transaction Signing
`xrpl-py` also provides:
- A network client — See [`xrpl.clients`](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.clients.html) for more information.
- Methods for inspecting accounts — See [XRPL Account Methods](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.account.html) for more information.
- Codecs for encoding and decoding addresses and other objects — See [Core Codecs](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.core.html) for more information.
## [➡️ Reference Documentation](https://xrpl-py.readthedocs.io/en/stable/)
See the complete [`xrpl-py` reference documentation on Read the Docs](https://xrpl-py.readthedocs.io/en/stable/index.html).
## Usage
The following sections describe some of the most commonly used modules in the `xrpl-py` library and provide sample code.
### Network client
Use the `xrpl.clients` library to create a network client for connecting to the XRP Ledger.
```py
from xrpl.clients import JsonRpcClient
JSON_RPC_URL = "https://s.altnet.rippletest.net:51234"
client = JsonRpcClient(JSON_RPC_URL)
```
### Manage keys and wallets
#### `xrpl.wallet`
Use the [`xrpl.wallet`](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.wallet.html) module to create a wallet from a given seed or via a [Testnet faucet](https://xrpl.org/xrp-testnet-faucet.html).
To create a wallet from a seed (in this case, the value generated using [`xrpl.keypairs`](#xrpl-keypairs)):
```py
wallet_from_seed = xrpl.wallet.Wallet.from_seed(seed)
print(wallet_from_seed)
# pub_key: ED46949E414A3D6D758D347BAEC9340DC78F7397FEE893132AAF5D56E4D7DE77B0
# priv_key: -HIDDEN-
# address: rG5ZvYsK5BPi9f1Nb8mhFGDTNMJhEhufn6
```
To create a wallet from a Testnet faucet:
```py
from xrpl.wallet import generate_faucet_wallet
test_wallet = generate_faucet_wallet(client)
test_account = test_wallet.address
print("Classic address:", test_account)
# Classic address: rEQB2hhp3rg7sHj6L8YyR4GG47Cb7pfcuw
```
#### `xrpl.core.keypairs`
Use the [`xrpl.core.keypairs`](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.core.keypairs.html#module-xrpl.core.keypairs) module to generate seeds and derive keypairs and addresses from those seed values.
Here's an example of how to generate a `seed` value and derive an [XRP Ledger "classic" address](https://xrpl.org/cryptographic-keys.html#account-id-and-address) from that seed.
```py
from xrpl.core import keypairs
seed = keypairs.generate_seed()
public, private = keypairs.derive_keypair(seed)
test_account = keypairs.derive_classic_address(public)
print("Here's the public key:")
print(public)
print("Here's the private key:")
print(private)
print("Store this in a secure place!")
# Here's the public key:
# ED3CC1BBD0952A60088E89FA502921895FC81FBD79CAE9109A8FE2D23659AD5D56
# Here's the private key:
# EDE65EE7882847EF5345A43BFB8E6F5EEC60F45461696C384639B99B26AAA7A5CD
# Store this in a secure place!
```
**Note:** You can use `xrpl.core.keypairs.sign` to sign transactions but `xrpl-py` also provides explicit methods for safely signing and submitting transactions. See [Transaction Signing](#transaction-signing) and [XRPL Transaction Methods](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.transaction.html#module-xrpl.transaction) for more information.
### Serialize and sign transactions
To securely submit transactions to the XRP Ledger, you first need to serialize data from JSON and other formats into the [XRP Ledger's canonical format](https://xrpl.org/serialization.html), then [authorize the transaction](https://xrpl.org/transaction-basics.html#authorizing-transactions) by digitally [signing it](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.core.keypairs.html?highlight=sign#xrpl.core.keypairs.sign) with the account's private key. The `xrpl-py` library provides several methods to simplify this process.
Use the [`xrpl.transaction`](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.transaction.html) module to sign and submit transactions. The module offers three ways to do this:
- [`sign_and_submit`](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.transaction.html#xrpl.transaction.sign_and_submit) — Signs a transaction locally, then submits it to the XRP Ledger. This method does not implement [reliable transaction submission](https://xrpl.org/reliable-transaction-submission.html#reliable-transaction-submission) best practices, so only use it for development or testing purposes.
- [`sign`](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.transaction.html#xrpl.transaction.sign) — Signs a transaction locally. This method **does not** submit the transaction to the XRP Ledger.
- [`submit_and_wait`](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.transaction.html#xrpl.transaction.submit_and_wait) — An implementation of the [reliable transaction submission guidelines](https://xrpl.org/reliable-transaction-submission.html#reliable-transaction-submission), this method submits a signed transaction to the XRP Ledger and then verifies that it has been included in a validated ledger (or has failed to do so). Use this method to submit transactions for production purposes.
```py
from xrpl.models.transactions import Payment
from xrpl.transaction import sign, submit_and_wait
from xrpl.ledger import get_latest_validated_ledger_sequence
from xrpl.account import get_next_valid_seq_number
current_validated_ledger = get_latest_validated_ledger_sequence(client)
# prepare the transaction
# the amount is expressed in drops, not XRP
# see https://xrpl.org/basic-data-types.html#specifying-currency-amounts
my_tx_payment = Payment(
account=test_wallet.address,
amount="2200000",
destination="rPT1Sjq2YGrBMTttX4GZHjKu9dyfzbpAYe",
last_ledger_sequence=current_validated_ledger + 20,
sequence=get_next_valid_seq_number(test_wallet.address, client),
fee="10",
)
# sign the transaction
my_tx_payment_signed = sign(my_tx_payment, test_wallet)
# submit the transaction
tx_response = submit_and_wait(my_tx_payment_signed, client)
```
#### Get fee from the XRP Ledger
In most cases, you can specify the minimum [transaction cost](https://xrpl.org/transaction-cost.html#current-transaction-cost) of `"10"` for the `fee` field unless you have a strong reason not to. But if you want to get the [current load-balanced transaction cost](https://xrpl.org/transaction-cost.html#current-transaction-cost) from the network, you can use the `get_fee` function:
```py
from xrpl.ledger import get_fee
fee = get_fee(client)
print(fee)
# 10
```
#### Auto-filled fields
The `xrpl-py` library automatically populates the `fee`, `sequence` and `last_ledger_sequence` fields when you create transactions. In the example above, you could omit those fields and let the library fill them in for you.
```py
from xrpl.models.transactions import Payment
from xrpl.transaction import submit_and_wait, autofill_and_sign
# prepare the transaction
# the amount is expressed in drops, not XRP
# see https://xrpl.org/basic-data-types.html#specifying-currency-amounts
my_tx_payment = Payment(
account=test_wallet.address,
amount="2200000",
destination="rPT1Sjq2YGrBMTttX4GZHjKu9dyfzbpAYe"
)
# sign the transaction with the autofill method
# (this will auto-populate the fee, sequence, and last_ledger_sequence)
my_tx_payment_signed = autofill_and_sign(my_tx_payment, client, test_wallet)
print(my_tx_payment_signed)
# Payment(
# account='rMPUKmzmDWEX1tQhzQ8oGFNfAEhnWNFwz',
# transaction_type=<TransactionType.PAYMENT: 'Payment'>,
# fee='10',
# sequence=16034065,
# account_txn_id=None,
# flags=0,
# last_ledger_sequence=10268600,
# memos=None,
# signers=None,
# source_tag=None,
# signing_pub_key='EDD9540FA398915F0BCBD6E65579C03BE5424836CB68B7EB1D6573F2382156B444',
# txn_signature='938FB22AE7FE76CF26FD11F8F97668E175DFAABD2977BCA397233117E7E1C4A1E39681091CC4D6DF21403682803AB54CC21DC4FA2F6848811DEE10FFEF74D809',
# amount='2200000',
# destination='rPT1Sjq2YGrBMTttX4GZHjKu9dyfzbpAYe',
# destination_tag=None,
# invoice_id=None,
# paths=None,
# send_max=None,
# deliver_min=None
# )
# submit the transaction
tx_response = submit_and_wait(my_tx_payment_signed, client)
```
### Subscribe to ledger updates
`subscribe` and `unsubscribe` requests are only available with the WebSocket network client. These request methods let you be alerted of certain situations as they occur, such as when a new ledger is declared.
```py
from xrpl.clients import WebsocketClient
url = "wss://s.altnet.rippletest.net/"
from xrpl.models import Subscribe, StreamParameter
req = Subscribe(streams=[StreamParameter.LEDGER])
# NOTE: this code will run forever without a timeout, until the process is killed
with WebsocketClient(url) as client:
client.send(req)
for message in client:
print(message)
# {'result': {'fee_base': 10, 'fee_ref': 10, 'ledger_hash': '7CD50477F23FF158B430772D8E82A961376A7B40E13C695AA849811EDF66C5C0', 'ledger_index': 18183504, 'ledger_time': 676412962, 'reserve_base': 20000000, 'reserve_inc': 5000000, 'validated_ledgers': '17469391-18183504'}, 'status': 'success', 'type': 'response'}
# {'fee_base': 10, 'fee_ref': 10, 'ledger_hash': 'BAA743DABD168BD434804416C8087B7BDEF7E6D7EAD412B9102281DD83B10D00', 'ledger_index': 18183505, 'ledger_time': 676412970, 'reserve_base': 20000000, 'reserve_inc': 5000000, 'txn_count': 0, 'type': 'ledgerClosed', 'validated_ledgers': '17469391-18183505'}
# {'fee_base': 10, 'fee_ref': 10, 'ledger_hash': 'D8227DAF8F745AE3F907B251D40B4081E019D013ABC23B68C0B1431DBADA1A46', 'ledger_index': 18183506, 'ledger_time': 676412971, 'reserve_base': 20000000, 'reserve_inc': 5000000, 'txn_count': 0, 'type': 'ledgerClosed', 'validated_ledgers': '17469391-18183506'}
# {'fee_base': 10, 'fee_ref': 10, 'ledger_hash': 'CFC412B6DDB9A402662832A781C23F0F2E842EAE6CFC539FEEB287318092C0DE', 'ledger_index': 18183507, 'ledger_time': 676412972, 'reserve_base': 20000000, 'reserve_inc': 5000000, 'txn_count': 0, 'type': 'ledgerClosed', 'validated_ledgers': '17469391-18183507'}
```
### Asynchronous Code
This library supports Python's [`asyncio`](https://docs.python.org/3/library/asyncio.html) package, which is used to run asynchronous code. All of the async code lives in [`xrpl.asyncio`](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.asyncio.html). If you are writing asynchronous code, note that you cannot use the synchronous sugar functions, due to how event loops are handled. However, every synchronous method has a corresponding asynchronous method that you can use.
This sample code is the asynchronous equivalent of the above section on submitting a transaction.
```py
import asyncio
from xrpl.models.transactions import Payment
from xrpl.asyncio.transaction import sign, submit_and_wait
from xrpl.asyncio.ledger import get_latest_validated_ledger_sequence
from xrpl.asyncio.account import get_next_valid_seq_number
from xrpl.asyncio.clients import AsyncJsonRpcClient
async_client = AsyncJsonRpcClient(JSON_RPC_URL)
async def submit_sample_transaction():
current_validated_ledger = await get_latest_validated_ledger_sequence(async_client)
# prepare the transaction
# the amount is expressed in drops, not XRP
# see https://xrpl.org/basic-data-types.html#specifying-currency-amounts
my_tx_payment = Payment(
account=test_wallet.address,
amount="2200000",
destination="rPT1Sjq2YGrBMTttX4GZHjKu9dyfzbpAYe",
last_ledger_sequence=current_validated_ledger + 20,
sequence=await get_next_valid_seq_number(test_wallet.address, async_client),
fee="10",
)
# sign and submit the transaction
tx_response = await submit_and_wait(my_tx_payment, async_client, test_wallet)
asyncio.run(submit_sample_transaction())
```
### Encode addresses
Use [`xrpl.core.addresscodec`](https://xrpl-py.readthedocs.io/en/stable/source/xrpl.core.addresscodec.html) to encode and decode addresses into and from the ["classic" and X-address formats](https://xrpl.org/accounts.html#addresses).
```py
# convert classic address to x-address
from xrpl.core import addresscodec
testnet_xaddress = (
addresscodec.classic_address_to_xaddress(
"rMPUKmzmDWEX1tQhzQ8oGFNfAEhnWNFwz",
tag=0,
is_test_network=True,
)
)
print(testnet_xaddress)
# T7QDemmxnuN7a52A62nx2fxGPWcRahLCf3qaswfrsNW9Lps
```
## Migrating
If you're currently using `xrpl-py` version 1, you can use [this guide to migrate to v2](https://xrpl.org/blog/2023/xrpl-py-2.0-release.html).
## Contributing
If you want to contribute to this project, see [CONTRIBUTING.md].
### Mailing Lists
We have a low-traffic mailing list for announcements of new `xrpl-py` releases (about one email per week).
- [Subscribe to xrpl-announce](https://groups.google.com/g/xrpl-announce)
If you're using the XRP Ledger in production, you should run a [rippled server](https://github.com/ripple/rippled) and subscribe to the ripple-server mailing list as well.
- [Subscribe to ripple-server](https://groups.google.com/g/ripple-server)
### Code Samples
- For samples of common use cases, see the [XRPL.org Code Samples](https://xrpl.org/code-samples.html) page.
- You can also browse those samples [directly on GitHub](https://github.com/XRPLF/xrpl-dev-portal/tree/master/_code-samples).
### Report an issue
Experienced an issue? Report it [here](https://github.com/XRPLF/xrpl-py/issues/new).
## License
The `xrpl-py` library is licensed under the ISC License. See [LICENSE] for more information.
[CONTRIBUTING.md]: CONTRIBUTING.md
[LICENSE]: LICENSE
| text/markdown | Mayukha Vadari | mvadari@ripple.com | Ashray Chowdhry | achowdhry@ripple.com | ISC | xrp, xrpl, cryptocurrency | [
"License :: OSI Approved",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.8.1 | [] | [] | [] | [
"Deprecated<2.0.0,>=1.3.1",
"ECPy<2.0.0,>=1.2.5",
"base58<3.0.0,>=2.1.0",
"cffi>=1.15.0; extra == \"confidential\"",
"httpx<0.29.0,>=0.18.1",
"pycryptodome<4.0.0,>=3.23.0",
"types-Deprecated<2.0.0,>=1.2.9",
"typing-extensions<5.0.0,>=4.13.2",
"websockets>=11"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/XRPLF/xrpl-py/issues",
"Documentation, https://xrpl-py.readthedocs.io",
"Repository, https://github.com/XRPLF/xrpl-py"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:44:02.672538 | xrpl_py-4.6.0b0.tar.gz | 2,613,234 | 14/7f/9536a799fa8e1516bbb935c429522ffed8e5d1415729316aff9ffa9ca7db/xrpl_py-4.6.0b0.tar.gz | source | sdist | null | false | e4be7b64d37f289511896d7d63b68144 | 4e625340fd4de48a440512dd19baacd77ea3a64f5c541ac468c2b8042c6fcf56 | 147f9536a799fa8e1516bbb935c429522ffed8e5d1415729316aff9ffa9ca7db | null | [] | 203 |
2.4 | rankjie-pypetkitapi | 1.3.0.dev9 | Python client for PetKit API | # Petkit API Client
---
[](https://github.com/Jezza34000/py-petkit-api/)
[][python version] [](https://github.com/Jezza34000/py-petkit-api/actions)
[][pypi_] [](https://pepy.tech/projects/pypetkitapi)
---
[](https://sonarcloud.io/summary/new_code?id=Jezza34000_py-petkit-api) [](https://sonarcloud.io/summary/new_code?id=Jezza34000_py-petkit-api) [](https://sonarcloud.io/summary/new_code?id=Jezza34000_py-petkit-api)
[](https://sonarcloud.io/summary/new_code?id=Jezza34000_py-petkit-api)
[](https://sonarcloud.io/summary/new_code?id=Jezza34000_py-petkit-api)
[](https://sonarcloud.io/summary/new_code?id=Jezza34000_py-petkit-api)
[](https://sonarcloud.io/summary/new_code?id=Jezza34000_py-petkit-api)
[](https://sonarcloud.io/summary/new_code?id=Jezza34000_py-petkit-api)
[](https://sonarcloud.io/summary/new_code?id=Jezza34000_py-petkit-api)
[](https://sonarcloud.io/summary/new_code?id=Jezza34000_py-petkit-api)
[][pre-commit]
[][black]
[](https://mypy.readthedocs.io/en/stable/)
[](https://github.com/astral-sh/ruff)
---
[pypi_]: https://pypi.org/project/pypetkitapi/
[python version]: https://pypi.org/project/pypetkitapi
[pre-commit]: https://github.com/pre-commit/pre-commit
[black]: https://github.com/psf/black
### Enjoying this library?
[![Sponsor Jezza34000][github-sponsor-shield]][github-sponsor] [![Static Badge][buymeacoffee-shield]][buymeacoffee]
---
## ℹ️ Overview
PetKit Client is a Python library for interacting with the PetKit API. It allows you to manage your PetKit devices, retrieve account data, and control devices through the API.
## 🚀 Features
- Login and session management
- Fetch account and device data
- Control PetKit devices (Feeder, Litter Box, Water Fountain, Purifiers)
- Fetch images & videos produced by devices
> Pictures are available **with or without** a Care+ subscription; videos are only available **with** a Care+ subscription
## ⬇️ Installation
Install the library using pip:
```bash
pip install rankjie-pypetkitapi
```
## 💡 Usage Example:
Here is a simple example of how to use the library to interact with the PetKit API. \
It is not an exhaustive demonstration of every feature the library offers.
```python
import asyncio
import logging
import aiohttp
from pypetkitapi.client import PetKitClient
from pypetkitapi.command import DeviceCommand, FeederCommand, LBCommand, DeviceAction, LitterCommand
logging.basicConfig(level=logging.DEBUG)
async def main():
async with aiohttp.ClientSession() as session:
client = PetKitClient(
username="username", # Your PetKit account username or id
password="password", # Your PetKit account password
            region="FR",  # Your region or country code (e.g. FR, US, CN, etc.)
            timezone="Europe/Paris",  # Your timezone (e.g. "Asia/Shanghai")
session=session,
)
await client.get_devices_data()
        # List all devices and pets from the account
for key, value in client.petkit_entities.items():
print(f"{key}: {type(value).__name__} - {value.name}")
        # Select a device (here, the last one from the loop above)
        device_id = key
# Read devices or pet information
print(client.petkit_entities[device_id])
# Send command to the devices
### Example 1 : Turn on the indicator light
### Device_ID, Command, Payload
await client.send_api_request(device_id, DeviceCommand.UPDATE_SETTING, {"lightMode": 1})
### Example 2 : Feed the pet
### Device_ID, Command, Payload
# simple hopper :
await client.send_api_request(device_id, FeederCommand.MANUAL_FEED, {"amount": 1})
# dual hopper :
await client.send_api_request(device_id, FeederCommand.MANUAL_FEED, {"amount1": 2})
# or
await client.send_api_request(device_id, FeederCommand.MANUAL_FEED, {"amount2": 2})
### Example 3 : Start the cleaning process
### Device_ID, Command, Payload
await client.send_api_request(device_id, LitterCommand.CONTROL_DEVICE, {DeviceAction.START: LBCommand.CLEANING})
if __name__ == "__main__":
asyncio.run(main())
```
## 💡 More example usage
Check at the usage in the Home Assistant integration : [here](https://github.com/Jezza34000/homeassistant_petkit)
## ☑️ Supported Devices
| **Category** | **Name** | **Device** |
| ---------------- | ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **🍗 Feeders** | ✅ Fresh Element | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/feeder.png" width="40"/></a> |
| | ✅ Fresh Element Mini Pro | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/feedermini.png" width="40"/></a> |
| | ✅ Fresh Element Infinity | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/d3.png" width="40"/></a> |
| | ✅ Fresh Element Solo | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/d4.png" width="40"/></a> |
| | ✅ Fresh Element Gemini | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/d4s.png" width="40"/></a> |
| | ✅ YumShare Solo | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/d4h.png" width="40"/></a> |
| | ✅ YumShare Dual-hopper | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/d4sh.png" width="40"/></a> |
| **🚽 Litters** | ✅ PuraX | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/t3.png" width="40"/></a> |
| | ✅ PuraMax | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/t4.1.png" width="40"/></a> |
| | ✅ PuraMax 2 | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/t4.png" width="40"/></a> |
| | ✅ Purobot Max Pro | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/t5.png" width="40"/></a> |
| | ✅ Purobot Ultra | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/t6.png" width="40"/></a> |
| **⛲ Fountains** | ✅ Eversweet Solo 2 | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/5w5.png" width="40"/></a> |
| | ✅ Eversweet 3 Pro | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/4w5.png" width="40"/></a> |
| | ✅ Eversweet 3 Pro UVC | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/6w5.png" width="40"/></a> |
| | ✅ Eversweet 5 Mini | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/2w5.png" width="40"/></a> |
| | ✅ Eversweet Max | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/ctw3.png" width="40"/></a> |
| **🧴 Purifiers** | ✅ Air Magicube | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/k2.png" width="40"/></a> |
| | ✅ Air Smart Spray | <a href=""><img src="https://raw.githubusercontent.com/Jezza34000/homeassistant_petkit/refs/heads/main/images/devices/k3.png" width="40"/></a> |
## 🛟 Help and Support
Developers? Want to help? Join us on our Discord channel dedicated to developers and contributors.
[![Discord][discord-shield]][discord]
## 👨💻 Contributing
Contributions are welcome!\
Please open an issue or submit a pull request.
## License
This project is licensed under the MIT License. See the LICENSE file for details.
---
[homeassistant_petkit]: https://github.com/Jezza34000/py-petkit-api
[commits-shield]: https://img.shields.io/github/commit-activity/y/Jezza34000/py-petkit-api.svg?style=flat
[commits]: https://github.com/Jezza34000/py-petkit-api/commits/main
[discord]: https://discord.gg/Va8DrmtweP
[discord-shield]: https://img.shields.io/discord/1318098700379361362.svg?style=for-the-badge&label=Discord&logo=discord&color=5865F2
[forum-shield]: https://img.shields.io/badge/community-forum-brightgreen.svg?style=for-the-badge&label=Home%20Assistant%20Community&logo=homeassistant&color=18bcf2
[forum]: https://community.home-assistant.io/t/petkit-integration/834431
[license-shield]: https://img.shields.io/github/license/Jezza34000/py-petkit-api.svg??style=flat
[maintenance-shield]: https://img.shields.io/badge/maintainer-Jezza34000-blue.svg?style=flat
[releases-shield]: https://img.shields.io/github/release/Jezza34000/py-petkit-api.svg?style=for-the-badge&color=41BDF5
[releases]: https://github.com/Jezza34000/py-petkit-api/releases
[github-sponsor-shield]: https://img.shields.io/badge/sponsor-Jezza34000-blue.svg?style=for-the-badge&logo=githubsponsors&color=EA4AAA
[github-sponsor]: https://github.com/sponsors/Jezza34000
[buymeacoffee-shield]: https://img.shields.io/badge/Donate-buy_me_a_coffee-yellow.svg?style=for-the-badge&logo=buy-me-a-coffee
[buymeacoffee]: https://www.buymeacoffee.com/jezza
| text/markdown | Jezza34000 | info@mail.com | null | null | MIT | petkit, api, client, pet, iot | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming ... | [] | null | null | >=3.11 | [] | [] | [] | [
"aiofiles<25.0.0,>=24.1.0",
"aiohttp<4.0.0,>=3.11.11",
"m3u8<7.0.0,>=6.0.0",
"pycryptodome<4.0.0,>=3.19.1",
"pydantic<3.0.0,>=2.10.4",
"tenacity<10.0.0,>=9.1.2"
] | [] | [] | [] | [
"Homepage, https://github.com/rankjie/py-petkit-api",
"Repository, https://github.com/rankjie/py-petkit-api"
] | twine/6.1.0 CPython/3.13.0 | 2026-02-18T21:43:30.793774 | rankjie_pypetkitapi-1.3.0.dev9.tar.gz | 44,048 | e4/98/4959300129b674ad95fca0632cfc484d73d31c85ad28d4d58774c990a504/rankjie_pypetkitapi-1.3.0.dev9.tar.gz | source | sdist | null | false | c60b67528a996391d5f48882384f4b2b | bc301863dd92158d521739d76ddf6716ab886bce3888a8011ca9273af26ab7b2 | e4984959300129b674ad95fca0632cfc484d73d31c85ad28d4d58774c990a504 | null | [
"LICENSE"
] | 196 |
2.4 | google-adk | 1.25.1 | Agent Development Kit | # Agent Development Kit (ADK)
[](LICENSE)
[](https://pypi.org/project/google-adk/)
[](https://github.com/google/adk-python/actions/workflows/python-unit-tests.yml)
[](https://www.reddit.com/r/agentdevelopmentkit/)
<a href="https://codewiki.google/github.com/google/adk-python"><img src="https://www.gstatic.com/_/boq-sdlc-agents-ui/_/r/Mvosg4klCA4.svg" alt="Ask Code Wiki" height="20"></a>
<html>
<h2 align="center">
<img src="https://raw.githubusercontent.com/google/adk-python/main/assets/agent-development-kit.png" width="256"/>
</h2>
<h3 align="center">
An open-source, code-first Python framework for building, evaluating, and deploying sophisticated AI agents with flexibility and control.
</h3>
<h3 align="center">
Important Links:
<a href="https://google.github.io/adk-docs/">Docs</a>,
<a href="https://github.com/google/adk-samples">Samples</a>,
<a href="https://github.com/google/adk-java">Java ADK</a>,
<a href="https://github.com/google/adk-go">Go ADK</a> &
<a href="https://github.com/google/adk-web">ADK Web</a>.
</h3>
</html>
Agent Development Kit (ADK) is a flexible and modular framework that applies
software development principles to AI agent creation. It is designed to
simplify building, deploying, and orchestrating agent workflows, from simple
tasks to complex systems. While optimized for Gemini, ADK is model-agnostic,
deployment-agnostic, and compatible with other frameworks.
---
## 🔥 What's new
- **Custom Service Registration**: Add a service registry to provide a generic way to register custom service implementations to be used in FastAPI server. See [short instruction](https://github.com/google/adk-python/discussions/3175#discussioncomment-14745120). ([391628f](https://github.com/google/adk-python/commit/391628fcdc7b950c6835f64ae3ccab197163c990))
- **Rewind**: Add the ability to rewind a session to before a previous invocation ([9dce06f](https://github.com/google/adk-python/commit/9dce06f9b00259ec42241df4f6638955e783a9d1)).
- **New CodeExecutor**: Introduces a new AgentEngineSandboxCodeExecutor class that supports executing agent-generated code using the Vertex AI Code Execution Sandbox API ([ee39a89](https://github.com/google/adk-python/commit/ee39a891106316b790621795b5cc529e89815a98))
## ✨ Key Features
- **Rich Tool Ecosystem**: Utilize pre-built tools, custom functions,
OpenAPI specs, MCP tools or integrate existing tools to give agents diverse
capabilities, all for tight integration with the Google ecosystem.
- **Code-First Development**: Define agent logic, tools, and orchestration
directly in Python for ultimate flexibility, testability, and versioning.
- **Agent Config**: Build agents without code. Check out the
[Agent Config](https://google.github.io/adk-docs/agents/config/) feature.
- **Tool Confirmation**: A [tool confirmation flow(HITL)](https://google.github.io/adk-docs/tools/confirmation/) that can guard tool execution with explicit confirmation and custom input.
- **Modular Multi-Agent Systems**: Design scalable applications by composing
multiple specialized agents into flexible hierarchies.
- **Deploy Anywhere**: Easily containerize and deploy agents on Cloud Run or
scale seamlessly with Vertex AI Agent Engine.
## 🚀 Installation
### Stable Release (Recommended)
You can install the latest stable version of ADK using `pip`:
```bash
pip install google-adk
```
The release cadence is roughly bi-weekly.
This version is recommended for most users as it represents the most recent official release.
### Development Version
Bug fixes and new features are merged into the main branch on GitHub first. If you need access to changes that haven't been included in an official PyPI release yet, you can install directly from the main branch:
```bash
pip install git+https://github.com/google/adk-python.git@main
```
Note: The development version is built directly from the latest code commits. While it includes the newest fixes and features, it may also contain experimental changes or bugs not present in the stable release. Use it primarily for testing upcoming changes or accessing critical fixes before they are officially released.
## 🤖 Agent2Agent (A2A) Protocol and ADK Integration
For remote agent-to-agent communication, ADK integrates with the
[A2A protocol](https://github.com/google-a2a/A2A/).
See this [example](https://github.com/a2aproject/a2a-samples/tree/main/samples/python/agents)
for how they can work together.
## 📚 Documentation
Explore the full documentation for detailed guides on building, evaluating, and
deploying agents:
* **[Documentation](https://google.github.io/adk-docs)**
## 🏁 Feature Highlight
### Define a single agent:
```python
from google.adk.agents import Agent
from google.adk.tools import google_search
root_agent = Agent(
name="search_assistant",
model="gemini-2.5-flash", # Or your preferred Gemini model
instruction="You are a helpful assistant. Answer user questions using Google Search when needed.",
description="An assistant that can search the web.",
tools=[google_search]
)
```
### Define a multi-agent system:
Define a multi-agent system with a coordinator agent, a greeter agent, and a task-execution agent. The ADK engine and the model then guide the agents to work together to accomplish the task.
```python
from google.adk.agents import LlmAgent, BaseAgent
# Define individual agents
greeter = LlmAgent(name="greeter", model="gemini-2.5-flash", ...)
task_executor = LlmAgent(name="task_executor", model="gemini-2.5-flash", ...)
# Create parent agent and assign children via sub_agents
coordinator = LlmAgent(
name="Coordinator",
model="gemini-2.5-flash",
description="I coordinate greetings and tasks.",
sub_agents=[ # Assign sub_agents here
greeter,
task_executor
]
)
```
### Development UI
A built-in development UI to help you test, evaluate, debug, and showcase your agent(s).
<img src="https://raw.githubusercontent.com/google/adk-python/main/assets/adk-web-dev-ui-function-call.png"/>
### Evaluate Agents
```bash
adk eval \
samples_for_testing/hello_world \
samples_for_testing/hello_world/hello_world_eval_set_001.evalset.json
```
## 🤝 Contributing
We welcome contributions from the community! Whether it's bug reports, feature requests, documentation improvements, or code contributions, please see our
- [General contribution guideline and flow](https://google.github.io/adk-docs/contributing-guide/).
- Then if you want to contribute code, please read [Code Contributing Guidelines](./CONTRIBUTING.md) to get started.
## Community Repo
The [adk-python-community repo](https://github.com/google/adk-python-community) is home to a growing ecosystem of community-contributed tools, third-party
service integrations, and deployment scripts that extend the core capabilities
of the ADK.
## Vibe Coding
If you want to develop agents via vibe coding, the [llms.txt](./llms.txt) and [llms-full.txt](./llms-full.txt) files can be used as context for an LLM. The former is a summary; the latter contains the full information, in case your LLM has a large enough context window.
## Community Events
- [Completed] ADK's 1st community meeting on Wednesday, October 15, 2025. Remember to [join our group](https://groups.google.com/g/adk-community) to get access to the [recording](https://drive.google.com/file/d/1rpXDq5NSH8-MyMeYI6_5pZ3Lhn0X9BQf/view), and [deck](https://docs.google.com/presentation/d/1_b8LG4xaiadbUUDzyNiapSFyxanc9ZgFdw7JQ6zmZ9Q/edit?slide=id.g384e60cdaca_0_658&resourcekey=0-tjFFv0VBQhpXBPCkZr0NOg#slide=id.g384e60cdaca_0_658).
## 📄 License
This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
---
*Happy Agent Building!*
| text/markdown | null | Google LLC <googleapis-packages@google.com> | null | null | null | null | [
"Typing :: Typed",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Progr... | [] | null | null | >=3.10 | [] | [] | [] | [
"PyYAML<7.0.0,>=6.0.2",
"aiosqlite>=0.21.0",
"anyio<5.0.0,>=4.9.0",
"authlib<2.0.0,>=1.6.6",
"click<9.0.0,>=8.1.8",
"fastapi<1.0.0,>=0.124.1",
"google-api-python-client<3.0.0,>=2.157.0",
"google-auth[pyopenssl]>=2.47.0",
"google-cloud-aiplatform[agent-engines]<2.0.0,>=1.132.0",
"google-cloud-bigqu... | [] | [] | [] | [
"changelog, https://github.com/google/adk-python/blob/main/CHANGELOG.md",
"documentation, https://google.github.io/adk-docs/",
"homepage, https://google.github.io/adk-docs/",
"repository, https://github.com/google/adk-python"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T21:43:20.839121 | google_adk-1.25.1-py3-none-any.whl | 2,579,485 | 50/09/e7ed67abe7e928309799b4c4789b2a3b5eba4ac0eb6d4c7912f9e3e9823d/google_adk-1.25.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 5e2cf4d49be0bb13165c93378742a683 | 62907f54b918a56450fc81669471f5819f41a48548ada3a521ac85728ca29001 | 5009e7ed67abe7e928309799b4c4789b2a3b5eba4ac0eb6d4c7912f9e3e9823d | null | [
"LICENSE"
] | 104,438 |
2.4 | PayPerTranscript | 0.2.9 | Open-source voice-to-text with pay-per-use pricing | <div align="center">
# 🎙️ PayPerTranscript
**Voice-to-Text without the Subscription Trap**
Hold the hotkey → speak → the text appears
*Pay only for what you use: ~0.02¢ per transcription*
[](LICENSE)
[](https://python.org)
[]()
</div>
---
## 💡 The Problem
Commercial voice-to-text services cost **$12-15/month**, whether you use them for 5 minutes or 5 hours.
## ✨ The Solution
**PayPerTranscript** uses your own API key. You only pay for actual usage:
- **100 transcriptions/day** = only **~74 cents/month**
- Commercial alternative = **$15/month**
- **You save over 95%**
**Open source** · **No telemetry** · **Your own API key**
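The savings figure follows directly from the two monthly costs quoted above; a quick check (the ~$0.74 and $15 numbers are taken from this README):

```python
monthly_api_cost = 0.74    # ~100 transcriptions/day with your own API key
subscription_cost = 15.00  # typical commercial alternative
savings = 1 - monthly_api_cost / subscription_cost
print(f"You save {savings:.0%}")  # → You save 95%
```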
---
## 🚀 Features
### 🎯 Core Features
- **Hold-to-Record**: hold `Ctrl+Win`, speak, release, done
- **Lightning Fast**: 30 seconds of audio transcribed in 0.14 seconds (via Groq Whisper)
- **Smart Formatting**: WhatsApp gets casual text, Outlook gets professional emails
- **Word List**: your own names and technical terms are always spelled correctly
### 📊 Transparency & Control
- **Live Cost Dashboard**: see exactly what you are spending
- **Subscription Comparison**: how much you save compared to commercial services
- **Session History**: every transcription is traceable
### 🔒 Privacy
- Your own API key: you control the data
- No telemetry, no tracking
- Audio files are deleted automatically
- Open source under the MIT license
---
## 📦 Installation
### Via pip (recommended)
```bash
pip install paypertranscript
paypertranscript
```
On first launch, a **setup wizard** guides you through configuration (2 minutes).
**Requirements**: Windows 10/11 · Python 3.12+
### From Source
```bash
git clone https://github.com/jxnxts/PayPerTranscript.git
cd PayPerTranscript
pip install -e .
paypertranscript
```
---
<div align="center">
**Pay only for what you actually use** 💰
</div>
| text/markdown | PayPerTranscript Contributors | null | null | null | null | null | [
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3.12",
"Topic :: Multimedia :: Sound/Audio :: Speech"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"PySide6",
"sounddevice",
"numpy",
"groq",
"pynput",
"pywin32",
"psutil",
"pyperclip",
"pyautogui",
"keyring",
"soundfile",
"build; extra == \"dev\"",
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/jxnxts/PayPerTranscript"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:42:45.907171 | paypertranscript-0.2.9.tar.gz | 2,908,566 | 40/30/ad9381a4940f33c32fac99e160914c2f1dfaf3dbe8708b64ae3f968e5bf7/paypertranscript-0.2.9.tar.gz | source | sdist | null | false | d0409b71a58068e88d92f59961707b48 | 0ba643b72407ef863aaf05bd6123cb0592499390cc5737c65b2cf5f8289d7555 | 4030ad9381a4940f33c32fac99e160914c2f1dfaf3dbe8708b64ae3f968e5bf7 | MIT | [
"LICENSE"
] | 0 |
2.4 | replaybt | 1.1.0 | Realistic backtesting engine for algo traders & AI agents | # replaybt
Realistic backtesting engine for algo traders and AI agents.
The engine owns execution — your strategy only emits signals. No look-ahead bias by default. Gap protection, adverse slippage, and fees are built in, not bolted on.
## Install
```bash
pip install replaybt
```
<p align="center">
<img src="https://raw.githubusercontent.com/sirmoremoney/replaybt/main/docs/assets/demo.gif" alt="replaybt demo" width="880">
</p>
## Quick Start
```python
from replaybt import BacktestEngine, CSVProvider, Strategy, MarketOrder, Side
class EMACrossover(Strategy):
def configure(self, config):
self._prev_fast = self._prev_slow = None
def on_bar(self, bar, indicators, positions):
fast = indicators.get("ema_fast")
slow = indicators.get("ema_slow")
if fast is None or slow is None or self._prev_fast is None:
self._prev_fast, self._prev_slow = fast, slow
return None
crossed_up = fast > slow and self._prev_fast <= self._prev_slow
self._prev_fast, self._prev_slow = fast, slow
if not positions and crossed_up:
return MarketOrder(side=Side.LONG, take_profit_pct=0.05, stop_loss_pct=0.03)
return None
engine = BacktestEngine(
strategy=EMACrossover(),
data=CSVProvider("ETH_1m.csv", symbol_name="ETH"),
config={
"initial_equity": 10_000,
"indicators": {
"ema_fast": {"type": "ema", "period": 15, "source": "close"},
"ema_slow": {"type": "ema", "period": 35, "source": "close"},
},
},
)
results = engine.run()
print(results.summary())
```
## Key Features
- **Signals at T, fills at T+1** — no look-ahead bias
- **Gap protection** — open gaps past stops fill at the open, not the stop level
- **11 built-in indicators** with automatic multi-timeframe resampling
- **Limit orders, scale-in, breakeven stops, trailing stops, partial TP**
- **Multi-asset** — time-synchronized portfolio backtest
- **RL-ready** — `StepEngine` with gym-like `step()` / `reset()`
- **Declarative strategies** — JSON config, no Python class needed
- **Validation** — static bias auditor, delay test, OOS split
- **Optimization** — parallel parameter sweep, walk-forward, Monte Carlo
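The "signals at T, fills at T+1" rule above can be illustrated without the library. This toy loop (not replaybt's internals) defers each signal to the next bar's open, so a strategy never trades on a close price it could not yet have seen:

```python
# Each bar's signal is computed from its close, but executed at the
# NEXT bar's open -- the core of look-ahead-bias-free backtesting.
bars = [
    {"open": 100.0, "close": 101.0},
    {"open": 101.2, "close": 102.5},
    {"open": 102.6, "close": 102.0},
]

fills = []
pending_signal = None
for t, bar in enumerate(bars):
    if pending_signal is not None:
        # Fill the previous bar's signal at this bar's open price.
        fills.append((t, bar["open"]))
        pending_signal = None
    # Toy signal: go long whenever the bar closes above its open.
    if bar["close"] > bar["open"]:
        pending_signal = "LONG"

print(fills)  # → [(1, 101.2), (2, 102.6)]
```

Signals generated on bars 0 and 1 fill at the opens of bars 1 and 2; bar 2's down-close generates no fill because the data ends.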
## Documentation
Full documentation: [sirmoremoney.github.io/replaybt](https://sirmoremoney.github.io/replaybt)
- [Getting Started](https://sirmoremoney.github.io/replaybt/getting-started/) — first backtest tutorial
- [Concepts](https://sirmoremoney.github.io/replaybt/concepts/) — execution loop, signal timing, gap protection
- [Cookbook](https://sirmoremoney.github.io/replaybt/cookbook/) — working recipes for common patterns
- [API Reference](https://sirmoremoney.github.io/replaybt/api/) — every class, method, and parameter
## License
MIT
| text/markdown | sirmoremoney | null | null | null | null | algorithmic-trading, backtesting, crypto, trading | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.24",
"pandas>=2.0",
"click>=8.0; extra == \"cli\"",
"rich>=13.0; extra == \"cli\"",
"requests>=2.28; extra == \"data\"",
"matplotlib>=3.5; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"requests>=2.28; extra == \"dev\"",
"mkdocs-material>=9.5; e... | [] | [] | [] | [
"Homepage, https://github.com/sirmoremoney/replaybt",
"Source, https://github.com/sirmoremoney/replaybt",
"Issues, https://github.com/sirmoremoney/replaybt/issues"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-18T21:42:43.107260 | replaybt-1.1.0.tar.gz | 235,437 | 69/68/64d940fed9995fc67be4c88adef3695afb610ecf2ff8a6d707795d1a7f56/replaybt-1.1.0.tar.gz | source | sdist | null | false | 378169552bcf35a01b9ac3312d2348f3 | d369f45f7d66fe379f92a99a85e3dfcf61d477ca88616cbed13bc3f8a0640dc6 | 696864d940fed9995fc67be4c88adef3695afb610ecf2ff8a6d707795d1a7f56 | MIT | [
"LICENSE"
] | 223 |
2.4 | cscd | 0.1.1 | **cscd** (pronounced cascade) is a content-addressed workflow orchestration tool that brings the power of content-addressable storage (CAS) to everyday development workflows. Companion to cascache. | # cscd 🌊
> **Pronounced "cascade" • Companion to cascache**
> **⚠️ PROOF OF CONCEPT - NOT PRODUCTION READY**
>
> This is a **conceptual implementation** for research and development purposes.
> While it demonstrates core CAS functionality and includes comprehensive testing,
> it is **not intended for production use**. Use at your own risk.
>
> See [docs/concept.md](docs/concept.md) for more details about the concept and technologies used.
**Flow naturally through your build pipeline**
cscd (pronounced "cascade") is a content-addressed workflow orchestration tool that brings the power of content-addressable storage to everyday development workflows. It bridges the gap between simple task runners (like Make/Just) and complex build systems (like Bazel), providing intelligent caching without forcing teams to restructure their projects.
There exists a companion server app [cascache](https://gitlab.com/cascascade/cascache) that is used for remote caching.
**Warning**: Large parts of this tool were generated with the help of AI. Special thanks to Claude Sonnet for the excellent support!
## ✨ Features
- **🎯 Smart Caching** - Content-addressed caching with SHA256 for instant rebuilds
- **🌐 Distributed Cache** - Share cache across team with automatic retry and fallback
- **⚡ Parallel Execution** - Auto-detect CPU cores and run independent tasks concurrently
- **🌳 Interactive Tree View** - Dagger-style dependency visualization with live progress
- **📊 Dependency Graphs** - Automatic topological sorting and cycle detection
- **🔍 Rich CLI** - Beautiful tree views, error context, and progress tracking
- **⚙️ Simple Config** - Clean YAML syntax with glob patterns and env vars
- **🛡️ Type Safe** - Full type hints with pyright validation
- **🧪 Well Tested** - 85% coverage with 101 passing tests
- **📚 Documented** - Complete CLI and configuration references
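Content-addressed caching means the cache key is derived from the *content* of a task's command and inputs, not from timestamps. cscd's actual key derivation is internal; this is only a minimal sketch of the idea:

```python
import hashlib

def cache_key(command: str, input_blobs: list[bytes]) -> str:
    """Sketch of a content-addressed cache key: hash the command plus
    every input's content, so any change yields a different key."""
    h = hashlib.sha256()
    h.update(command.encode())
    for blob in input_blobs:
        h.update(hashlib.sha256(blob).digest())
    return h.hexdigest()

k1 = cache_key("npm run build", [b"console.log('a')"])
k2 = cache_key("npm run build", [b"console.log('b')"])
assert k1 != k2  # changed input -> different key -> cache miss
assert k1 == cache_key("npm run build", [b"console.log('a')"])  # stable
```

Because the key is a pure function of content, identical inputs rebuild instantly from cache, on any machine that shares the store.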
## 🚀 Quick Start
### Installation
```bash
# Using uv (recommended)
uv pip install cscd
# Using pip
pip install cscd
# From source
git clone https://gitlab.com/cascascade/cscd.git
cd cscd
uv sync
```
### Your First Workflow
Create `cscd.yaml` in your project:
```yaml
version: 1
tasks:
build:
command: npm run build
inputs:
- "src/**/*.ts"
- "package.json"
outputs:
- "dist/"
test:
command: npm test
inputs:
- "src/**/*.ts"
- "tests/**/*.ts"
depends_on:
- build
```
Run your tasks:
```bash
# Execute tasks with dependencies
cscd run test
# Parallel execution (auto-detect CPUs)
cscd run test # defaults to parallel
# Control parallelism
cscd run test --jobs 4 # use 4 workers
cscd run test --jobs 1 # sequential
# Plain output for CI/CD
cscd run test --plain
# List available tasks
cscd list
# Visualize dependency graph
cscd graph
# Validate configuration
cscd validate
```
### Distributed Caching
Share cache across your team with a remote CAS server:
```yaml
cache:
local:
enabled: true
path: .cscd/cache
remote:
enabled: true
type: cas
url: grpc://cas.example.com:50051
token_file: ~/.cscd/cas-token
timeout: 30.0
max_retries: 3 # Automatic retry on transient errors
```
**Features:**
- 🔄 Automatic retry with exponential backoff
- 🔌 Connection pooling for low latency
- ⚡ Local-first strategy (check local → remote → miss)
- 🛡️ Graceful fallback to local on network errors
- 📊 Statistics tracking (future: `cscd cache stats`)
See [examples/remote-cache/](examples/remote-cache/) for complete setup guide.
## 📖 Documentation
**User Guides:**
- [Concept Document](docs/concept.md) - What is Cascade? Core concepts and technology stack
- [CLI Reference](docs/cli.md) - Complete command documentation
- [Configuration Guide](docs/configuration.md) - Full cscd.yaml reference
**Technical Specifications:**
- [Architecture](spec/architecture.md) - System design and components
- [Testing Strategy](spec/testing.md) - Test organization and practices
- [Design Document](spec/design.md) - Philosophy and principles
- [Roadmap](spec/roadmap.md) - Implementation timeline and status
## 🎯 Use Cases
### Python Project
```yaml
version: 1
tasks:
lint:
command: ruff check src/
inputs: ["src/**/*.py", "pyproject.toml"]
typecheck:
command: pyright
inputs: ["src/**/*.py"]
test:
command: pytest
inputs: ["src/**/*.py", "tests/**/*.py"]
depends_on: [lint, typecheck]
build:
command: python -m build
inputs: ["src/**/*.py", "pyproject.toml"]
outputs: ["dist/"]
depends_on: [test]
```
### Multi-Stage Build
```yaml
version: 1
tasks:
generate:
command: protoc --python_out=. schema.proto
inputs: ["schema.proto"]
outputs: ["schema_pb2.py"]
build-backend:
command: go build -o backend cmd/server/main.go
inputs: ["cmd/**/*.go", "*.proto"]
outputs: ["backend"]
depends_on: [generate]
build-frontend:
command: npm run build
inputs: ["src/**/*.ts"]
outputs: ["dist/"]
depends_on: [generate]
package:
command: docker build -t myapp .
inputs: ["backend", "dist/", "Dockerfile"]
depends_on: [build-backend, build-frontend]
```
## 🎨 CLI Commands
### `cscd run`
Execute tasks with automatic dependency resolution and parallel execution:
```bash
# Run single task (parallel by default)
cscd run build
# Control parallelism
cscd run build --jobs 8 # use 8 workers
cscd run build --jobs auto # auto-detect CPUs
cscd run build --jobs 1 # sequential
# Plain output for CI/CD
cscd run build --plain
# Run multiple tasks
cscd run lint test build
# Dry run (show execution plan)
cscd run --dry-run deploy
# Disable caching
cscd run --no-cache build
# Quiet mode
cscd run -q test
```
**Interactive Tree View:**
```
📦 Tasks
├── ✓ lint-css ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
│ └── ✓ test ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
│ └── ✓ build ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
│ └── ✓ package ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
├── ✓ lint-js ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
└── ✓ lint-python ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
└── ✓ docs ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
✓ Successfully executed 7 task(s)
```
**Error Context:**
When tasks fail, Cascade provides detailed context:
```
✗ Task failed: step3
Dependency chain:
├─ step1
├─ step2
└─ step3
⊘ Skipped 3 task(s) due to failure:
• step4
• step5
• final
```
### `cscd graph`
Visualize task dependencies:
```bash
# ASCII tree output
cscd graph
# GraphViz DOT format
cscd graph --format dot > graph.dot
```
**Example Output:**
```
┌─ Roots (no dependencies)
│ ├─ setup-database
│ └─ install-deps
│
├─ Layer 1
│ ├─ lint-python
│ └─ lint-js
│
├─ Layer 2
│ ├─ test-unit
│ └─ test-integration
│
└─ Final layer
└─ deploy
```
### `cscd validate`
Validate configuration:
```bash
cscd validate
```
Checks:
- ✅ Valid YAML syntax
- ✅ No cyclic dependencies
- ✅ All dependencies defined
- ✅ Required fields present
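The cycle check above is standard topological sorting. cscd is built on NetworkX, so this is not its actual implementation, just a self-contained sketch of the idea using Kahn's algorithm:

```python
from collections import deque

def task_order(tasks: dict[str, list[str]]) -> list[str]:
    """Topological order via Kahn's algorithm; raises if the task
    graph (task -> depends_on list) contains a dependency cycle."""
    indegree = {name: len(deps) for name, deps in tasks.items()}
    dependents: dict[str, list[str]] = {name: [] for name in tasks}
    for name, deps in tasks.items():
        for dep in deps:
            dependents[dep].append(name)
    ready = deque(name for name, d in indegree.items() if d == 0)
    order = []
    while ready:
        name = ready.popleft()
        order.append(name)
        for dependent in dependents[name]:
            indegree[dependent] -= 1
            if indegree[dependent] == 0:
                ready.append(dependent)
    if len(order) != len(tasks):
        raise ValueError("cyclic dependencies detected")
    return order

print(task_order({"build": [], "test": ["build"]}))  # → ['build', 'test']
```

Any task left with a positive in-degree after the queue drains must sit on a cycle, which is exactly the condition validation rejects.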
### `cscd clean`
Manage cache:
```bash
# Clean with confirmation
cscd clean
# Force clean
cscd clean --force
```
## 🏗️ Architecture
Cascade is built as a layered system:
**CLI → Config → Graph → Executor → Cache**
Each layer is independently testable with clear interfaces. The architecture supports local-first operation with future remote cache backends.
For detailed architecture documentation, see [spec/architecture.md](spec/architecture.md).
## 🧪 Testing
Cascade maintains high code quality with comprehensive testing:
```bash
# Run unit and integration tests
uv run pytest
# With coverage report
uv run pytest --cov=cascade --cov-report=html
# cascache integration tests (requires Docker)
./tests/integration-cascache/run-tests.sh
# Or with just:
just test-cascache
```
**Current Status:**
- 101 tests (51 unit + 47 integration + 3 component)
- 85% code coverage
- All quality checks passing
- Optional: cascache integration tests with Docker Compose
**Test Levels:**
- **Unit tests** (`tests/unit/`) - Fast, mocked dependencies
- **Integration tests** (`tests/integration/`) - Component interaction, local only
- **Component tests** (`tests/component/`) - CLI end-to-end tests
- **cascache integration** (`tests/integration-cascache/`) - Real cascache server (Docker-based)
For detailed testing strategy, see [spec/testing.md](spec/testing.md) and [tests/integration-cascache/README.md](tests/integration-cascache/README.md).
## 🛠️ Development
### Quick Setup
```bash
# Clone and install
git clone https://gitlab.com/cascascade/cscd.git
cd cscd
uv sync
```
### Common Commands
```bash
just lint # Run linter (ruff)
just typecheck # Type checking (pyright)
just test # Run tests (pytest)
just test-cascache # cascache integration tests (Docker)
just ci-gitlab # Full CI pipeline
```
For detailed development setup and project structure, see [spec/architecture.md](spec/architecture.md).
## 📊 Status
**Phase 1: Core MVP** ✅ COMPLETE (2026-02-12)
- ✅ Task execution with dependencies
- ✅ Content-addressable caching
- ✅ YAML configuration
- ✅ Rich CLI interface
- ✅ Graph visualization
- ✅ 85% test coverage
- ✅ Complete documentation
**Phase 2: Parallelization** ✅ COMPLETE (2026-02-12)
- ✅ Async task execution
- ✅ Parallel execution with `--jobs` flag
- ✅ Auto CPU detection
- ✅ Interactive tree view with live progress
- ✅ Dependency-aware scheduling
- ✅ Better error context with dependency chains
- ✅ TTY detection for CI/CD compatibility
**Coming Soon:**
- 🔄 Phase 3: CAS Integration (remote caching with cascache)
- 🎨 Phase 4: Enhanced developer experience
- 🤖 Phase 5: CI/CD optimizations
**Future Features:**
- 🐳 Docker Integration - Run tasks in isolated containers
- 🌐 Remote Execution - Execute tasks on remote machines via cascache
- 🔌 Plugin System - Extensible architecture for custom integrations
- 📊 Web Dashboard - Real-time monitoring and analytics
- 🎨 VSCode Integration - Tasks generator or full extension
- 🤖 CI Pipeline Generation - Auto-generate GitHub Actions, GitLab CI, etc.
See [roadmap.md](spec/roadmap.md) for details.
## 🤝 Contributing
Contributions welcome! Please:
- Follow PEP 8 and project code style
- Add tests for new functionality
- Update documentation as needed
- Run quality checks before submitting
Development setup instructions in the [Development](#development) section above.
## 📝 License
MIT License - see [LICENSE](LICENSE) for details.
## 🔗 Links
**Documentation:**
- [Concept Document](docs/concept.md) - Core concepts and technology stack
- [CLI Reference](docs/cli.md) - Command documentation
- [Configuration Guide](docs/configuration.md) - YAML reference
**Specifications:**
- [Architecture](spec/architecture.md) - System design
- [Testing Strategy](spec/testing.md) - Test practices
- [Design Document](spec/design.md) - Philosophy
- [Roadmap](spec/roadmap.md) - Implementation plan
**Examples:**
- [examples/](examples/) - Sample projects
---
**Built with:** Python 3.13+ • uv • Typer • Rich • NetworkX • Pydantic
| text/markdown | null | ladidadida <stefan@dalada.de> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"aiofiles>=23.0.0",
"anyio>=4.0.0",
"grpcio>=1.64.0",
"networkx>=3.0",
"protobuf>=5.26.0",
"pydantic>=2.0.0",
"pyyaml>=6.0.0",
"rich>=13.0.0",
"typer>=0.12.0"
] | [] | [] | [] | [
"Homepage, https://gitlab.com/cascascade/cscd",
"Repository, https://gitlab.com/cascascade/cscd",
"Issues, https://gitlab.com/cascascade/cscd/-/issues",
"Documentation, https://gitlab.com/cascascade/cscd/-/blob/main/README.md"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"13","id":"trixie","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T21:41:48.689020 | cscd-0.1.1.tar.gz | 2,726,961 | b9/74/e8b9d0e575ee6c198f37dfa3bb0861286690cabdb26fc644473bd410e566/cscd-0.1.1.tar.gz | source | sdist | null | false | 5163ad45c06636fa5be541ecfd89634d | bb0418d186eb2091d2c7bd29ea4a15d8493c901879bb067640e0489bfc1af7d6 | b974e8b9d0e575ee6c198f37dfa3bb0861286690cabdb26fc644473bd410e566 | null | [
"LICENSE"
] | 263 |
2.3 | rsyncthing | 0.1.0 | Add your description here | # rsyncthing: A CLI for syncthing that's as easy to use as rsync. | text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"asyncssh>=2.22.0",
"cyclopts>=4.5.3",
"httpx>=0.28.1",
"rich>=14.3.2"
] | [] | [] | [] | [] | uv/0.8.22 | 2026-02-18T21:41:24.863748 | rsyncthing-0.1.0.tar.gz | 1,609 | ed/9d/6eed78282a5d7beaef1c72926ffa7c734483dd010498c40049aea9007449/rsyncthing-0.1.0.tar.gz | source | sdist | null | false | 28f4e794168b7eda64a069e9b4ce0e09 | 7b8b79cba59d6862e145d2cfd13a7adcc260d1cb3755e6705d1ffe14fc7919bb | ed9d6eed78282a5d7beaef1c72926ffa7c734483dd010498c40049aea9007449 | null | [] | 229 |
2.4 | mcp-server-propel | 0.2.0 | MCP server for Propel code review API - submit git diffs for AI-powered code review | # MCP Server for Propel
Submit git diffs for AI-powered code review directly from your IDE or terminal using the [Propel](https://propelcode.ai) code review platform.
## Features
- Submit git diffs for async code review
- Automatically polls for results (reviews take 5-10 minutes)
- Retrieve detailed review comments with file paths, line numbers, and severity
- Quick status checks for in-progress reviews
- Works with Claude Desktop, Cursor, Claude Code, Codex, and Windsurf
## Requirements
- Python 3.10 or higher (required by the [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk))
## Installation
### Recommended: Using `uvx` (no install needed)
[`uvx`](https://docs.astral.sh/uv/) runs the server in an isolated environment automatically — no virtual environment setup, no dependency conflicts.
```bash
# Install uv if you don't have it
curl -LsSf https://astral.sh/uv/install.sh | sh
```
No further installation steps needed. Configure your client to use `uvx` directly (see Setup below).
### Alternative: Using `pipx`
[`pipx`](https://pipx.pypa.io/) installs the server in its own isolated environment:
```bash
pipx install mcp-server-propel
```
### Alternative: Using `pip`
Use a virtual environment to avoid conflicts with system packages:
```bash
python3 -m venv ~/.venvs/mcp-propel
source ~/.venvs/mcp-propel/bin/activate
pip install mcp-server-propel
```
> **Avoid installing with `pip` into your system Python.** System-managed packages (like `cffi`) can cause install failures. A virtual environment, `uvx`, or `pipx` avoids this entirely.
## Setup
### Step 1: Get your API token
Get your Propel API token from [Propel Settings](https://app.propelcode.ai/administration/settings?tab=review-api-tokens). Your token needs `reviews:write` and `reviews:read` scopes.
### Step 2: Configure your client
Choose the setup instructions for your tool:
- [Claude Desktop](#claude-desktop)
- [Cursor](#cursor)
- [Claude Code](#claude-code)
- [Codex CLI](#codex-cli)
- [Windsurf](#windsurf)
---
### Claude Desktop
**Config file location:**
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
**Using `uvx` (recommended):**
```json
{
"mcpServers": {
"propel": {
"command": "uvx",
"args": ["mcp-server-propel"],
"env": {
"PROPEL_API_TOKEN": "your-api-token-here"
}
}
}
}
```
**Using a direct install (`pipx` or `pip`):**
```json
{
"mcpServers": {
"propel": {
"command": "mcp-server-propel",
"env": {
"PROPEL_API_TOKEN": "your-api-token-here"
}
}
}
}
```
Restart Claude Desktop after saving.
---
### Cursor
**Config file location:**
- Project-level: `.cursor/mcp.json` (in the project root)
- Global: `~/.cursor/mcp.json`
You can also add it via **Cursor Settings > Developer > MCP Tools > Add Custom MCP**.
```json
{
"mcpServers": {
"propel": {
"command": "uvx",
"args": ["mcp-server-propel"],
"env": {
"PROPEL_API_TOKEN": "your-api-token-here"
}
}
}
}
```
---
### Claude Code
**Option A: CLI command (quickest)**
```bash
claude mcp add --transport stdio \
--env PROPEL_API_TOKEN=your-api-token-here \
propel -- uvx mcp-server-propel
```
Add `--scope project` to share with your team, or `--scope user` to enable across all projects.
**Option B: Project config file (`.mcp.json` in project root)**
```json
{
"mcpServers": {
"propel": {
"command": "uvx",
"args": ["mcp-server-propel"],
"env": {
"PROPEL_API_TOKEN": "${PROPEL_API_TOKEN}"
}
}
}
}
```
> With `.mcp.json`, you can use `${PROPEL_API_TOKEN}` to reference your shell environment variable instead of hardcoding the token.
---
### Codex CLI
**Option A: CLI command**
```bash
codex mcp add propel \
--env PROPEL_API_TOKEN=your-api-token-here \
-- uvx mcp-server-propel
```
**Option B: Config file (`~/.codex/config.toml` or `.codex/config.toml`)**
```toml
[mcp_servers.propel]
command = "uvx"
args = ["mcp-server-propel"]
[mcp_servers.propel.env]
PROPEL_API_TOKEN = "your-api-token-here"
```
---
### Windsurf
**Config file location:**
- macOS/Linux: `~/.codeium/windsurf/mcp_config.json`
- Windows: `%USERPROFILE%\.codeium\windsurf\mcp_config.json`
```json
{
"mcpServers": {
"propel": {
"command": "uvx",
"args": ["mcp-server-propel"],
"env": {
"PROPEL_API_TOKEN": "your-api-token-here"
}
}
}
}
```
---
## Tools
### submit_review
Submit a git diff for code review. Returns immediately with a review ID, then instructs the AI to call `get_review` to poll for results.
**Parameters:**
- `diff` (required): Unified git diff output
- `repository` (required): Repository identifier (name, full name, or URL)
- `base_commit` (optional): Base commit SHA for additional context
### get_review
Get review results. Polls the API automatically every 12 seconds until the review completes or fails (~12 min max).
**Parameters:**
- `review_id` (required): The review job identifier
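The polling behavior described above (a check every 12 seconds, capped at roughly 12 minutes) can be sketched as a plain loop. This is not the server's actual code, and the status names are assumptions for illustration:

```python
import time

def poll_until_done(fetch_status, interval=12.0, max_wait=720.0,
                    sleep=time.sleep):
    """Sketch of a get_review-style polling loop: re-check every
    `interval` seconds until the review leaves its in-progress
    states or `max_wait` seconds (~12 min) have elapsed.
    The status strings here are hypothetical, not Propel's API."""
    waited = 0.0
    while True:
        status = fetch_status()
        if status not in ("queued", "in_progress"):
            return status
        if waited >= max_wait:
            raise TimeoutError("review did not finish in time")
        sleep(interval)
        waited += interval

# Stubbed API that completes on the third check; sleep is a no-op
# so the example runs instantly.
responses = iter(["queued", "in_progress", "completed"])
print(poll_until_done(lambda: next(responses), sleep=lambda s: None))
```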
### check_review_status
Quick, non-polling status check. Returns only status and timing info without waiting.
**Parameters:**
- `review_id` (required): The review job identifier
## Troubleshooting
**"PROPEL_API_TOKEN environment variable is required"**
- Ensure the token is set in your client's MCP config `env` section
- Restart your client after updating the config
**"Invalid authorization token" / "Authorization token has expired"**
- Generate a new token at [Propel Settings](https://app.propelcode.ai/administration/settings?tab=review-api-tokens)
- Ensure it has `reviews:write` and `reviews:read` scopes
**"Diff too large"**
- Your diff exceeds the 1 MB size limit
- Try reviewing a smaller changeset (fewer files or commits)
**"Repository not found"**
- The repository must be connected to your Propel workspace
- Check the repository name matches what's configured in Propel
**`Cannot uninstall cffi` or similar dependency conflicts**
- This happens when installing with `pip` into system Python
- Fix: use `uvx`, `pipx`, or install inside a virtual environment (see Installation above)
**Cursor not detecting tools**
- Ensure you are on Cursor v2.4.21 or later
- Check that your `PROPEL_API_TOKEN` is set correctly in the config
## Support
- Documentation: https://docs.propelcode.ai
- Issues: https://github.com/propel/mcp-server-propel/issues
- For development and contributing, see [CONTRIBUTING.md](CONTRIBUTING.md)
| text/markdown | null | Propel <support@propelcode.ai> | null | null | MIT | code-review, diff, git, mcp, mcp-server, propel | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Quality Assurance",
"Topic :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.24.0",
"mcp>=1.0.0"
] | [] | [] | [] | [
"Homepage, https://propelcode.ai",
"Repository, https://github.com/propel/mcp-server-propel",
"Documentation, https://docs.propelcode.ai"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T21:40:27.523514 | mcp_server_propel-0.2.0.tar.gz | 71,910 | 4a/c6/25f89b6b24b1fbbedd439acae43724cebdb5e5c69afde3854d92dc55fb9a/mcp_server_propel-0.2.0.tar.gz | source | sdist | null | false | 6b67d45bf9ecf6f2a416580026bfcc26 | 44daaf44aec398f7e2475dbdb2d4812f3020b479d13f1060b8dcc8475e9e1ffb | 4ac625f89b6b24b1fbbedd439acae43724cebdb5e5c69afde3854d92dc55fb9a | null | [
"LICENSE"
] | 239 |
2.4 | pyivia | 0.2.40 | Python API for IBM Verify Identity Access | # PyIVIA
PyIVIA is a Python library that wraps the IBM Verify Identity Access RESTful Web services to provide a
quick and easy way to construct configuration scripts for appliances.
**Supported Versions**
- IBM Verify Identity Access 11.0.2.0
- IBM Verify Identity Access 11.0.1.0
- IBM Verify Identity Access 11.0.0.0
- IBM Security Verify Access 10.0.9.0
- IBM Security Verify Access 10.0.8.0
- IBM Security Verify Access 10.0.7.0
- IBM Security Verify Access 10.0.6.0
- IBM Security Verify Access 10.0.5.0
- IBM Security Verify Access 10.0.4.0
- IBM Security Verify Access 10.0.3.1
- IBM Security Verify Access 10.0.3.0
- IBM Security Verify Access 10.0.2.0
- IBM Security Verify Access 10.0.1.0
- IBM Security Verify Access 10.0.0.0
- IBM Security Access Manager 9.0.7.3
- IBM Security Access Manager 9.0.7.2
- IBM Security Access Manager 9.0.7.1
- IBM Security Access Manager 9.0.7.0
- IBM Security Access Manager 9.0.6.0
- IBM Security Access Manager 9.0.5.0
- IBM Security Access Manager 9.0.4.0
- IBM Security Access Manager 9.0.3.0
- IBM Security Access Manager 9.0.2.1
- IBM Security Access Manager 9.0.2.0
## Installation
For Linux/macOS: if you clone the library to `~/repos/pyivia`, add this to `~/.profile`:
```sh
# add pyivia library to Python's search path
export PYTHONPATH="${PYTHONPATH}:${HOME}/repos/pyivia"
```
## From IBM Security Verify Access 10.0.0.0 onwards
The module is built into a package hosted on PyPI and can be installed using pip:
```sh
pip install pyivia
```
## Usage
```python
>>> import pyivia
>>> factory = pyivia.Factory("https://isam.mmfa.ibm.com", "admin", "Passw0rd")
>>> web = factory.get_web_settings()
>>> resp = web.reverse_proxy.restart_instance("default")
>>> if resp.success:
... print("Successfully restarted the default instance.")
... else:
... print("Failed to restart the default instance. status_code: %s, data: %s" % (resp.status_code, resp.data))
...
Successfully restarted the default instance.
```
## Documentation
Documentation for using this library can be found on [pyivia GitHub pages](https://lachlan-ibm.github.io/pyivia/index.html).
| text/markdown | Lachlan Gleeson | lgleeson@au1.ibm.com | null | null | MIT | null | [] | [] | https://github.com/lachlan-ibm/pyivia | null | null | [] | [] | [] | [
"requests>=2.23.0"
] | [] | [] | [] | [
"Homepage, https://github.com/lachlan-ibm/pyivia",
"Documentation, https://lachlan-ibm.github.io/pyivia",
"Source, https://github.com/lachlan-ibm/pyivia",
"Tracker, https://github.com/lachlan-ibm/pyivia/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T21:38:44.695823 | pyivia-0.2.40.tar.gz | 112,089 | 87/87/80ea596846828e509f12d5d376994e8d95af4295fbf8b4569cf1d740ad40/pyivia-0.2.40.tar.gz | source | sdist | null | false | 1357cfcc2cbeed3658de55cd31eb8981 | 01676244cfd16be387e590d0dffcae7f569536d32249609b64803d56af9bc886 | 878780ea596846828e509f12d5d376994e8d95af4295fbf8b4569cf1d740ad40 | null | [
"LICENSE.txt",
"AUTHORS.md"
] | 406 |
2.4 | netzooe-eservice-api | 1.0.0b2 | A Python wrapper for the unofficial Netz Oberösterreich eService-Portal API. | # Netz OÖ eService API
A Python wrapper for the unofficial Netz Oberösterreich eService-Portal API.

[][pypi-version]
[][workflow-ci]
[pypi-version]: https://pypi.python.org/pypi/netzooe-eservice-api
[workflow-ci]: https://github.com/superbox-dev/netzooe_eservice_api/actions/workflows/ci.yml
## Getting started
```bash
pip install netzooe_eservice_api
```
## Changelog
The changelog lives in the [CHANGELOG.md](https://github.com/superbox-dev/netzooe_eservice_api/blob/main/CHANGELOG.md) document.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## Get Involved
The **Netz OÖ eService-Portal API** is an open-source project and contributions are welcome. You can:
* Report [issues](https://github.com/superbox-dev/netzooe_eservice_api/issues/new/choose) or request new features
* Improve documentation
* Contribute code
* Support the project by starring it on GitHub ⭐
I'm happy about your contributions to the project!
You can get started by reading the [CONTRIBUTING.md](https://github.com/superbox-dev/netzooe_eservice_api/blob/main/CONTRIBUTING.md).
| text/markdown | null | Michael Hacker <mh@superbox.one> | null | Michael Hacker <mh@superbox.one> | null | api, component, custom component, custom integration, netzooe, eservice, hacs-component, hacs-integration, hacs-repository, hacs, hass, home assistant, home-assistant, homeassistant, integration | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries",
"... | [] | null | null | >=3.13 | [] | [] | [] | [
"aiohttp==3.*"
] | [] | [] | [] | [
"Homepage, https://github.com/superbox-dev/netzooe_eservice_api",
"Documentation, https://github.com/superbox-dev/netzooe_eservice_api",
"Issues, https://github.com/superbox-dev/netzooe_eservice_api/issues",
"Source, https://github.com/superbox-dev/netzooe_eservice_api"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:37:44.008246 | netzooe_eservice_api-1.0.0b2.tar.gz | 101,170 | 28/5c/b23e7383d4c5c87254205455e3e9db5c86e1675753c5538ae001497f65eb/netzooe_eservice_api-1.0.0b2.tar.gz | source | sdist | null | false | 583a1dcb00d439e70ff9279f70804434 | e9427cd1d89a5f7b8ff640d5284ce0dc93eb8b8813ca25ba3ed196b21b0fb41e | 285cb23e7383d4c5c87254205455e3e9db5c86e1675753c5538ae001497f65eb | null | [
"LICENSE"
] | 213 |
2.4 | compressa-perf | 0.2.7 | Performance Measurement tool by Compressa | # Compressa Performance Measurement Tool
This tool is designed to measure the performance of Compressa models.
It uses the OpenAI API to run inference tasks and stores the results in a SQLite database.
## Installation
```bash
git clone https://github.com/compressa-ai/compressa-perf.git
cd compressa-perf
poetry install
$(poetry env activate)
```
## Install with Pip
```bash
pip install compressa-perf
```
## Usage
### 1. Run experiment with prompts from a file
```bash
❯ compressa-perf measure \
--db some_db.sqlite \
--openai_url https://some-api-url.ru/ \
--api_key "${OPENAI_API_KEY}" \
--model_name Compressa-Qwen2.5-14B-Instruct \
--experiment_name "File Prompts Run" \
--prompts_file resources/prompts.csv \
--num_tasks 1000 \
--num_runners 100
```
### 2. Run experiment with generated prompts
```bash
❯ compressa-perf measure \
--db some_db.sqlite \
--openai_url https://some-api-url.ru/chat-2/v1/ \
--api_key "${OPENAI_API_KEY}" \
--model_name Compressa-Qwen2.5-14B-Instruct \
--experiment_name "Generated Prompts Run" \
--num_tasks 2 \
--num_runners 2 \
--generate_prompts \
--num_prompts 1000 \
--prompt_length 5000
```
The full parameter list can be obtained with `compressa-perf measure -h`.
### 3. Run set of experiments from YAML file
You can describe a set of experiments in a YAML file and run them against different services with one command:
```bash
❯ compressa-perf measure-from-yaml experiments.yaml \
    --db some_db.sqlite
```
Example of YAML file:
```yaml
- openai_url: http://localhost:5000/v1/
api_key: ${OPENAI_API_KEY}
model_name: Compressa-LLM
experiment_name: "File Prompts Run 1"
description: "Experiment using prompts from a file with 500 tasks and 5 runners"
prompts_file: resources/prompts.csv
num_tasks: 500
num_runners: 5
generate_prompts: false
num_prompts: 0
prompt_length: 0
max_tokens: 1000
- openai_url: https://some-api-url/v1/
api_key: ${OPENAI_API_KEY}
model_name: Compressa-LLM
experiment_name: "File Prompts Run 2"
description: "Experiment using prompts from a file with 20 tasks and 10 runners"
prompts_file: resources/prompts.csv
num_tasks: 20
num_runners: 10
generate_prompts: true
num_prompts: 10
prompt_length: 10000
max_tokens: 100
```
**List of Parameters**
- `openai_url` - URL of the chat completion endpoint - `REQUIRED`
- `serv_api_url` - URL of the Compressa platform's service handlers - default is `http://localhost:5100/v1/` (if `None`, only the inference will run)
- `api_key` - API key - `REQUIRED`
- `model_name` - served model name - `REQUIRED`
- `experiment_name` - `REQUIRED`
- `description`
- `prompts_file` - path to the file with prompts
- `report_file` - path to the report file - default is `results/experiment`
- `report_mode` - report file extension (`.csv`, `.md`, `.pdf`) - default is `.pdf`
- `num_tasks`
- `num_runners`
- `generate_prompts` - `true` or `false`
- `num_prompts`
- `prompt_length`
- `max_tokens`
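The `${OPENAI_API_KEY}` entries in the YAML above are shell-style environment placeholders. A minimal sketch of expanding such placeholders (illustrative only, not compressa-perf internals):

```python
import os
from string import Template

# Illustrative only (not compressa-perf internals): expand ${VAR}
# placeholders from the environment, leaving unknown ones untouched.
def expand_env(value: str) -> str:
    return Template(value).safe_substitute(os.environ)

os.environ["OPENAI_API_KEY"] = "sk-demo"
print(expand_env("${OPENAI_API_KEY}"))  # sk-demo
```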
### 4. List experiments
You can select experiments by name, parameters, or metrics (or by substrings in these fields) via the `compressa-perf list` command.
For example:
```
❯ compressa-perf list \
--show-metrics \
--param-filter openai_url=chat-2 \
--param-filter avg_n_input=30
List of Experiments:
+----+----------------------------------------------------------------------------+---------------------+--------+-----------------------+
| | ID | Name | Date | Description |
+====+============================================================================+=====================+========+=======================+
| 25 | Compressa-Qwen2.5-14B-Instruct-Int4 Long Input / Short Output | 5 runners | 2024-10-03 06:21:45 | | ttft: 25.0960 |
| | | | | latency: 52.5916 |
| | | | | tpot: 0.5530 |
| | | | | throughput: 2891.0323 |
+----+----------------------------------------------------------------------------+---------------------+--------+-----------------------+
| 23 | Compressa-Qwen2.5-14B-Instruct-Int4 Long Input / Short Output | 4 runners | 2024-10-03 06:14:57 | | ttft: 17.1862 |
| | | | | latency: 37.9612 |
| | | | | tpot: 0.3954 |
| | | | | throughput: 3230.8769 |
+----+----------------------------------------------------------------------------+---------------------+--------+-----------------------+
```
Full parameter list:
```bash
❯ compressa-perf list -h
usage: compressa-perf list [-h] [--db DB] [--show-parameters] [--show-metrics] [--name-filter NAME_FILTER] [--param-filter PARAM_FILTER]
options:
-h, --help show this help message and exit
--db DB Path to the SQLite database
--show-parameters Show all parameters for each experiment
--show-metrics Show metrics for each experiment
--name-filter NAME_FILTER
Filter experiments by substring in the name
--param-filter PARAM_FILTER
Filter experiments by parameter value (e.g., paramkey=value_substring)
```
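The `--param-filter` argument takes a `paramkey=value_substring` pair. A sketch of how such an argument splits (illustrative, not the tool's actual parser):

```python
def parse_param_filter(arg: str) -> tuple[str, str]:
    """Split a 'paramkey=value_substring' argument (illustrative, not the
    tool's actual parser)."""
    key, _, substring = arg.partition("=")
    return key, substring

print(parse_param_filter("openai_url=chat-2"))  # ('openai_url', 'chat-2')
```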
### 5. Generate a report for an experiment
In addition to the `.pdf`, `.csv`, or `.md` reports, plain-text reports can also be generated with the command:
```bash
❯ compressa-perf report <EXPERIMENT_ID>
```
Output example:
```
❯ compressa-perf report 3
Experiment Details:
ID: 3
Name: My First Run
Date: 2024-09-24 07:10:39
Description: None
Experiment Parameters:
╒══════════════╤═══════════════════════════════════════════╕
│ Parameter │ Value │
╞══════════════╪═══════════════════════════════════════════╡
│ num_workers │ 2 │
├──────────────┼───────────────────────────────────────────┤
│ num_tasks │ 2 │
├──────────────┼───────────────────────────────────────────┤
│ openai_url │ https://some-api-url.ru/chat-2/v1/ │
├──────────────┼───────────────────────────────────────────┤
│ max_tokens │ 1000 │
├──────────────┼───────────────────────────────────────────┤
│ model_name │ Compressa-LLM │
├──────────────┼───────────────────────────────────────────┤
│ avg_n_input │ 32 │
├──────────────┼───────────────────────────────────────────┤
│ std_n_input │ 2.8284 │
├──────────────┼───────────────────────────────────────────┤
│ avg_n_output │ 748.5000 │
├──────────────┼───────────────────────────────────────────┤
│ std_n_output │ 2.1213 │
╘══════════════╧═══════════════════════════════════════════╛
Experiment Metrics:
╒══════════════════════════╤══════════╕
│ Metric │ Value │
╞══════════════════════════╪══════════╡
│ TTFT │ 0.0622 │
├──────────────────────────┼──────────┤
│ TTFT_95 │ 0.0693 │
├──────────────────────────┼──────────┤
│ TOP_5_TTFT │ 0.0757 │
├──────────────────────────┼──────────┤
│ LATENCY │ 0.4642 │
├──────────────────────────┼──────────┤
│ LATENCY_95 │ 0.6452 │
├──────────────────────────┼──────────┤
│ TOP_5_LATENCY │ 0.7156 │
├──────────────────────────┼──────────┤
│ TPOT │ 0.0265 │
├──────────────────────────┼──────────┤
│ THROUGHPUT │ 100.162 │
├──────────────────────────┼──────────┤
│ THROUGHPUT_INPUT_TOKENS │ 62.4664 │
├──────────────────────────┼──────────┤
│ THROUGHPUT_OUTPUT_TOKENS │ 37.6953 │
├──────────────────────────┼──────────┤
│ RPS │ 2.154 │
├──────────────────────────┼──────────┤
│ LONGER_THAN_60_LATENCY │ 0 │
├──────────────────────────┼──────────┤
│ LONGER_THAN_120_LATENCY │ 0 │
├──────────────────────────┼──────────┤
│ LONGER_THAN_180_LATENCY │ 0 │
├──────────────────────────┼──────────┤
│ FAILED_REQUESTS │ 0 │
├──────────────────────────┼──────────┤
│ FAILED_REQUESTS_PER_HOUR │ 0 │
╘══════════════════════════╧══════════╛
```
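As a rough guide to how these metrics relate (the standard definitions, not necessarily compressa-perf's exact implementation), TPOT is the generation time after the first token divided by the number of subsequent tokens:

```python
def time_per_output_token(latency: float, ttft: float, n_output: int) -> float:
    """Standard TPOT definition: total latency minus time-to-first-token,
    divided by the number of tokens generated after the first."""
    return (latency - ttft) / max(n_output - 1, 1)

# e.g. 1.0622 s total latency, 0.0622 s TTFT, 11 output tokens -> ~0.1 s/token
print(round(time_per_output_token(1.0622, 0.0622, 11), 4))
```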
For more information on available commands and options, run:
```bash
compressa-perf --help
```
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for more details.
| text/markdown | Gleb Morgachev | morgachev.g@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4,>=3.9 | [] | [] | [] | [
"openai<2.0.0,>=1.47.1",
"pandas<3.0.0,>=2.2.3",
"python-dotenv<2.0.0,>=1.0.1",
"pyyaml>=5.1",
"reportlab<5.0.0,>=4.4.2",
"requests<3.0.0,>=2.31.0",
"tabulate<0.10.0,>=0.9.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.11 | 2026-02-18T21:37:21.906768 | compressa_perf-0.2.7.tar.gz | 22,816 | 06/24/eeb24cc382006d767eaf027c7ba349004d5475d92db24fcbbbb5034dc4d1/compressa_perf-0.2.7.tar.gz | source | sdist | null | false | 3437d9e90adc1259852db35e5aa62f74 | 04b773c1f95805d86f899b94f6d90817128af0cf639294ac755ed08cd60a83db | 0624eeb24cc382006d767eaf027c7ba349004d5475d92db24fcbbbb5034dc4d1 | null | [
"LICENSE"
] | 250 |
2.4 | linkml-runtime | 1.10.0rc5 | Runtime environment for LinkML, the Linked open data modeling language | # linkml-runtime
[](https://pypi.python.org/pypi/linkml-runtime)

[](https://mybinder.org/v2/gh/linkml/linkml-runtime/main?filepath=notebooks)
[](https://pypi.python.org/pypi/linkml)
[](https://pepy.tech/project/linkml-runtime)
[](https://pypi.org/project/linkml-runtime)
[](https://codecov.io/gh/linkml/linkml-runtime)
Runtime support for LinkML-generated data classes.
## About
This Python library provides runtime support for [LinkML](https://linkml.io/linkml/) datamodels.
See the [LinkML repo](https://github.com/linkml/linkml) for the [Python Dataclass Generator](https://linkml.io/linkml/generators/python.html) which will convert a schema into a Python object model. That model will have dependencies on functionality in this library.
The library also provides
* loaders: for loading external formats such as JSON, YAML, RDF, and TSV into LinkML instances
* dumpers: the reverse operation
See [working with data](https://linkml.io/linkml/data/index.html) in the documentation for more details.
This repository also contains the Python dataclass representation of the [LinkML metamodel](https://github.com/linkml/linkml-model), and various utility functions that are useful for working with LinkML data and schemas.
It also includes the [SchemaView](https://linkml.io/linkml/developers/manipulating-schemas.html) class for working with LinkML schemas.
## Notebooks
See the [notebooks](https://github.com/linkml/linkml-runtime/tree/main/notebooks) folder for examples.
| text/markdown | null | Chris Mungall <cjmungall@lbl.gov>, Harold Solbrig <solbrig@jhu.edu>, Sierra Moxon <smoxon@lbl.gov>, Bill Duncan <wdduncan@gmail.com>, Harshad Hegde <hhegde@lbl.gov> | null | null | null | linkml, metamodel, owl, rdf, schema visualization, yaml | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"click",
"curies>=0.5.4",
"deprecated",
"hbreader",
"isodate<1.0.0,>=0.7.2; python_version < \"3.11\"",
"json-flattener>=0.1.9",
"jsonasobj2==1.*,>=1.0.0,>=1.0.4",
"jsonschema>=3.2.0",
"prefixcommons>=0.1.12",
"prefixmaps>=0.1.4",
"pydantic<3.0.0,>=1.10.2",
"pyyaml",
"rdflib>=6.0.0",
"requ... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:37:16.179391 | linkml_runtime-1.10.0rc5.tar.gz | 590,071 | 64/9e/9f12902c4d24030f8f4e9991ca6a301d108930d30ba243ec887104e56f92/linkml_runtime-1.10.0rc5.tar.gz | source | sdist | null | false | 50cedd9853d80c7e10de504f3b253011 | 23990e5a49bc53932faa6d66e37a89818d75d237a8449b50b841db4b30505ab9 | 649e9f12902c4d24030f8f4e9991ca6a301d108930d30ba243ec887104e56f92 | CC0-1.0 | [
"LICENSE"
] | 288 |
2.4 | linkml | 1.10.0rc5 | Linked Open Data Modeling Language | [](https://pypi.python.org/pypi/linkml)

[](https://pypi.python.org/pypi/linkml)
[](https://mybinder.org/v2/gh/linkml/linkml/main?filepath=notebooks)
[](https://zenodo.org/badge/latestdoi/13996/linkml/linkml)
[](https://pepy.tech/project/linkml)
[](https://pypi.org/project/linkml)
[](https://codecov.io/gh/linkml/linkml)
# LinkML - Linked Data Modeling Language
LinkML is a linked data modeling language following object-oriented and ontological principles. LinkML models are typically authored in YAML, and can be converted to other schema representation formats such as JSON or RDF.
This repo holds the tools for generating and working with LinkML. For the LinkML schema (metamodel), please see https://github.com/linkml/linkml-model
The complete documentation for LinkML can be found here:
- [linkml.io/linkml](https://linkml.io/linkml)
| text/markdown | Deepak Unni | Chris Mungall <cjmungall@lbl.gov>, Sierra Moxon <smoxon@lbl.gov>, Harold Solbrig <solbrig@jhu.edu>, Sujay Patil <spatil@lbl.gov>, Harshad Hegde <hhegde@lbl.gov>, Mark Andrew Miller <MAM@lbl.gov>, Gaurav Vaidya <gaurav@renci.org>, Kevin Schaper <kevin@tislab.org> | null | null | null | biolink, data modeling, linked data, owl, rdf, schema | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"antlr4-python3-runtime<4.10,>=4.9.0",
"click<8.2,>=8.0",
"graphviz>=0.10.1",
"hbreader",
"isodate>=0.6.0",
"jinja2>=3.1.0",
"jsonasobj2<2.0.0,>=1.0.3",
"jsonschema[format]>=4.0.0",
"linkml-runtime<2.0.0,>=1.9.5",
"openpyxl",
"parse",
"prefixcommons>=0.1.7",
"prefixmaps>=0.2.2",
"pydantic<... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:37:14.685290 | linkml-1.10.0rc5.tar.gz | 324,382 | ab/58/917aef25d6c62ca080539fff54619c6d3313f049eb2c3c93372f42fb09c1/linkml-1.10.0rc5.tar.gz | source | sdist | null | false | e8684de350301cb4f14f4eff15bc2dc7 | 521a161c9de1444764716b19ad3787b1579d155644fde26d1ae473b5c498fa7c | ab58917aef25d6c62ca080539fff54619c6d3313f049eb2c3c93372f42fb09c1 | Apache-2.0 | [] | 345 |
2.4 | jsonschema-pydantic-converter | 0.1.9 | Convert JSON Schema definitions to Pydantic models dynamically at runtime | # jsonschema-pydantic-converter
[](https://github.com/akshaylive/jsonschema-pydantic-converter/actions)
[](https://pypi.org/project/jsonschema-pydantic-converter/)
[](https://pypi.org/project/jsonschema-pydantic-converter/)
[](https://opensource.org/licenses/Apache-2.0)
Convert JSON Schema definitions to Pydantic models dynamically at runtime.
## Overview
`jsonschema-pydantic-converter` is a Python library that transforms JSON Schema dictionaries into Pydantic v2 models. This is useful when you need to work with dynamic schemas, validate data against JSON Schema specifications, or bridge JSON Schema-based systems with Pydantic-based applications.
## Features
- **Dynamic Model Generation**: Convert JSON Schema to Pydantic models at runtime
- **TypeAdapter Support**: Generate TypeAdapters for enhanced validation and serialization
- **Comprehensive Type Support**:
- Primitive types (string, number, integer, boolean, null)
- Arrays with typed items and tuples (prefixItems)
- Nested objects
- Enums (with and without explicit type)
- Union types (anyOf, oneOf)
- Combined schemas (allOf)
- Negation (not)
- Constant values (const)
- Boolean schemas (true/false)
- **Validation Constraints**: Full support for Pydantic-native constraints
- String: minLength, maxLength, pattern
- Numeric: minimum, maximum, exclusiveMinimum, exclusiveMaximum, multipleOf
- Array: minItems, maxItems
- **Schema References**: Support for `$ref` and `$defs`/`definitions`
- **Field Metadata**: Preserves titles, descriptions, and default values
- **Self-References**: Handle recursive schema definitions
- **Pydantic v2 Compatible**: Built for Pydantic 2.0+
## Installation
```bash
pip install jsonschema-pydantic-converter
```
Or using uv:
```bash
uv add jsonschema-pydantic-converter
```
## Usage
> **Note on Deprecation**: The `transform()` function is deprecated in favor of `create_type_adapter()`. JSON schemas are better represented as TypeAdapters since BaseModels can only represent 'object' types, while TypeAdapters can handle any JSON schema type including primitives, arrays, and unions. Existing code using `transform()` will continue to work, but new code should use `create_type_adapter()`.
### Basic Example (Deprecated - using `transform`)
```python
from jsonschema_pydantic_converter import transform
# Define a JSON Schema
schema = {
"type": "object",
"properties": {
"name": {"type": "string", "description": "User's name"},
"age": {"type": "integer", "description": "User's age"},
"email": {"type": "string"}
},
"required": ["name", "age"]
}
# Convert to Pydantic model (deprecated - use create_type_adapter instead)
UserModel = transform(schema)
# Use the model
user = UserModel(name="John Doe", age=30, email="john@example.com")
print(user.model_dump())
# {'name': 'John Doe', 'age': 30, 'email': 'john@example.com'}
```
### Using TypeAdapter for Validation
The `create_type_adapter` function returns a Pydantic TypeAdapter, which provides additional validation and serialization capabilities:
```python
from jsonschema_pydantic_converter import create_type_adapter
schema = {
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "integer"},
"email": {"type": "string"}
},
"required": ["name", "age"]
}
# Create TypeAdapter
adapter = create_type_adapter(schema)
# Validate Python objects
user = adapter.validate_python({"name": "John Doe", "age": 30, "email": "john@example.com"})
print(user)
# Validate JSON strings directly
json_str = '{"name": "Jane Doe", "age": 25}'
user = adapter.validate_json(json_str)
# Serialize back to Python dict
user_dict = adapter.dump_python(user)
print(user_dict)
# {'name': 'Jane Doe', 'age': 25, 'email': None}
```
**When to use `transform` vs `create_type_adapter`:**
- **Recommended**: Use `create_type_adapter()` for all new code - it handles any JSON schema type and provides validation/serialization methods
- **Deprecated**: `transform()` is maintained for backward compatibility but only works with object schemas; it returns a BaseModel class, which is useful if you need direct model access
### Working with Enums
```python
from jsonschema_pydantic_converter import create_type_adapter
schema = {
"type": "object",
"properties": {
"status": {
"type": "string",
"enum": ["active", "inactive", "pending"]
}
}
}
adapter = create_type_adapter(schema)
obj = adapter.validate_python({"status": "active"})
```
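Conceptually, a string `enum` corresponds to a Python `Enum`; a stdlib sketch of that correspondence (illustrative, not the library's internals):

```python
from enum import Enum

# Illustrative only: build a Python Enum from the schema's enum values
# (not how jsonschema-pydantic-converter constructs its enums internally).
Status = Enum("Status", {v.upper(): v for v in ["active", "inactive", "pending"]})

print(Status("active"))      # member looked up by value
print(Status.ACTIVE.value)   # the original string "active"
```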
### Nested Objects
```python
from jsonschema_pydantic_converter import create_type_adapter
schema = {
"type": "object",
"properties": {
"user": {
"type": "object",
"properties": {
"name": {"type": "string"},
"email": {"type": "string"}
},
"required": ["name"]
}
}
}
adapter = create_type_adapter(schema)
data = adapter.validate_python({"user": {"name": "Alice", "email": "alice@example.com"}})
```
### Arrays
```python
from jsonschema_pydantic_converter import create_type_adapter
schema = {
"type": "object",
"properties": {
"tags": {
"type": "array",
"items": {"type": "string"}
}
}
}
adapter = create_type_adapter(schema)
obj = adapter.validate_python({"tags": ["python", "pydantic", "json-schema"]})
```
### Schema with References
```python
from jsonschema_pydantic_converter import create_type_adapter
schema = {
"type": "object",
"properties": {
"person": {"$ref": "#/$defs/Person"}
},
"$defs": {
"Person": {
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "integer"}
}
}
}
}
adapter = create_type_adapter(schema)
person = adapter.validate_python({"person": {"name": "Bob", "age": 25}})
```
### Union Types (anyOf)
```python
from jsonschema_pydantic_converter import create_type_adapter
schema = {
"type": "object",
"properties": {
"value": {
"anyOf": [
{"type": "string"},
{"type": "integer"}
]
}
}
}
adapter = create_type_adapter(schema)
obj1 = adapter.validate_python({"value": "text"})
obj2 = adapter.validate_python({"value": 42})
```
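Conceptually, `anyOf` maps onto a Python `Union`. A stdlib sketch of that mapping for primitive subschemas (illustrative, not the library's internals):

```python
from typing import Union, get_args

# Assumed, simplified mapping from JSON Schema primitive names to Python types.
TYPE_MAP = {"string": str, "integer": int, "number": float, "boolean": bool}

def anyof_to_union(subschemas):
    """Build a typing.Union from an anyOf list of primitive schemas."""
    return Union[tuple(TYPE_MAP[s["type"]] for s in subschemas)]

u = anyof_to_union([{"type": "string"}, {"type": "integer"}])
print(get_args(u))  # (str, int)
```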
### Validation Constraints
```python
from jsonschema_pydantic_converter import create_type_adapter
schema = {
"type": "object",
"properties": {
"username": {
"type": "string",
"minLength": 3,
"maxLength": 20,
"pattern": "^[a-z0-9_]+$"
},
"age": {
"type": "integer",
"minimum": 0,
"maximum": 150
},
"score": {
"type": "number",
"minimum": 0,
"maximum": 100,
"multipleOf": 0.5
}
}
}
adapter = create_type_adapter(schema)
# Valid data
obj = adapter.validate_python({
"username": "john_doe",
"age": 25,
"score": 85.5
})
# Invalid - will raise ValidationError
# adapter.validate_python({"username": "ab"}) # Too short
# adapter.validate_python({"age": -1}) # Below minimum
```
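These constraints correspond to plain checks on length and a regular expression; a stdlib sketch of what the generated validation enforces for the username field (illustrative, not the library's code):

```python
import re

# Illustrative stdlib equivalent of the username constraints above
# (not jsonschema-pydantic-converter's code).
USERNAME = {"minLength": 3, "maxLength": 20, "pattern": r"^[a-z0-9_]+$"}

def valid_username(s: str) -> bool:
    return (
        USERNAME["minLength"] <= len(s) <= USERNAME["maxLength"]
        and re.fullmatch(USERNAME["pattern"], s) is not None
    )

print(valid_username("john_doe"))  # True
print(valid_username("ab"))        # False: shorter than minLength
```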
### Constant Values (const)
```python
from jsonschema_pydantic_converter import create_type_adapter
schema = {
"type": "object",
"properties": {
"country": {"const": "United States"},
"version": {"const": 1}
}
}
adapter = create_type_adapter(schema)
# Valid - exact match
obj = adapter.validate_python({"country": "United States", "version": 1})
# Invalid - will raise ValidationError
# adapter.validate_python({"country": "Canada", "version": 1})
```
### Negation (not)
```python
from jsonschema_pydantic_converter import create_type_adapter
schema = {
"type": "object",
"properties": {
"value": {"not": {"type": "string"}}
}
}
adapter = create_type_adapter(schema)
# Valid - not a string
obj1 = adapter.validate_python({"value": 42})
obj2 = adapter.validate_python({"value": [1, 2, 3]})
# Invalid - is a string
# adapter.validate_python({"value": "text"})
```
### Combined Schemas (allOf)
```python
from jsonschema_pydantic_converter import create_type_adapter
schema = {
"allOf": [
{
"type": "object",
"properties": {"name": {"type": "string"}},
"required": ["name"]
},
{
"type": "object",
"properties": {"age": {"type": "integer"}},
"required": ["age"]
}
]
}
adapter = create_type_adapter(schema)
# Valid - satisfies all schemas
obj = adapter.validate_python({"name": "Alice", "age": 30})
# Invalid - missing required fields
# adapter.validate_python({"name": "Alice"})
```
### Tuples (prefixItems)
```python
from jsonschema_pydantic_converter import create_type_adapter
schema = {
"type": "array",
"prefixItems": [
{"type": "string"},
{"type": "integer"},
{"type": "boolean"}
]
}
adapter = create_type_adapter(schema)
# Valid tuple
result = adapter.validate_python(["hello", 42, True])
# Returns: ("hello", 42, True)
```
### Boolean Schemas
```python
from jsonschema_pydantic_converter import create_type_adapter
# Schema that accepts anything
schema_true = True
adapter_true = create_type_adapter(schema_true)
adapter_true.validate_python("anything") # Valid
adapter_true.validate_python(42) # Valid
adapter_true.validate_python([1, 2, 3]) # Valid
# Schema that rejects everything
schema_false = False
adapter_false = create_type_adapter(schema_false)
# adapter_false.validate_python("anything") # Invalid - raises ValidationError
```
## Development Setup
### Prerequisites
- Python 3.10 or higher
- [uv](https://github.com/astral-sh/uv) (recommended) or pip
### Clone the Repository
```bash
git clone https://github.com/akshaylive/jsonschema-pydantic-converter.git
cd jsonschema-pydantic-converter
```
### Install Dependencies
Using uv (recommended):
```bash
uv sync
```
Using pip:
```bash
pip install -e .
pip install mypy ruff pytest pytest-cov
```
### Run Tests
```bash
# Using uv
uv run pytest
# With coverage
uv run pytest --cov=src --cov-report=html
# Using pytest directly (if in activated venv)
pytest
```
### Code Quality
The project uses several tools to maintain code quality:
```bash
# Type checking with mypy
uv run mypy src/
# Linting with ruff
uv run ruff check .
# Format code with ruff
uv run ruff format .
```
## Contributing
Contributions are welcome! Here's how you can help:
### Reporting Issues
- Check existing issues before creating a new one
- Provide a clear description of the problem
- Include a minimal reproducible example
- Specify your Python and Pydantic versions
### Submitting Pull Requests
1. Fork the repository
2. Create a new branch for your feature/fix:
```bash
git checkout -b feature/your-feature-name
```
3. Make your changes and ensure:
- All tests pass: `uv run pytest`
- Code is properly formatted: `uv run ruff format .`
- No linting errors: `uv run ruff check .`
- Type checking passes: `uv run mypy src/`
4. Add tests for new functionality
5. Update documentation if needed
6. Commit your changes with clear commit messages
7. Push to your fork and submit a pull request
### Code Style
- Follow PEP 8 guidelines
- Use Google-style docstrings
- Type hints are required for all functions
- Line length: 88 characters (Black/Ruff default)
### Development Guidelines
- Write tests for all new features
- Maintain backwards compatibility when possible
- Update the README for user-facing changes
- Keep dependencies minimal
## Limitations
- Optional fields without defaults are set to `None` rather than using `Optional[T]` type annotation to maintain JSON Schema round-trip consistency
- When `allOf` contains `$ref` references, the generated `json_schema()` output may not preserve the exact original structure (validation still works correctly)
- Some advanced JSON Schema features are not yet supported:
- `$anchor` references (causes syntax errors with forward references)
- `$dynamicRef` / `$dynamicAnchor` (draft 2020-12 advanced features)
- Full enforcement of: `uniqueItems`, `contains`, `propertyNames`, `patternProperties`, `format` validation
- `if-then-else` conditionals (base type is used, but conditionals are not enforced)
- Complex schema combinations may require testing for edge cases
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
## Maintainer
Akshaya Shanbhogue - [akshay.live@gmail.com](mailto:akshay.live@gmail.com)
## Links
- [GitHub Repository](https://github.com/akshaylive/jsonschema-pydantic-converter)
- [Issue Tracker](https://github.com/akshaylive/jsonschema-pydantic-converter/issues)
| text/markdown | null | null | null | Akshaya Shanbhogue <akshay.live@gmail.com> | Apache-2.0 | conversion, json-schema, pydantic, schema, validation | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming ... | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/akshaylive/jsonschema-pydantic-converter",
"Repository, https://github.com/akshaylive/jsonschema-pydantic-converter"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:37:03.868365 | jsonschema_pydantic_converter-0.1.9.tar.gz | 62,460 | 1c/34/8db60a66894055ec785a10f6ad0e7778ab2e5f53c3254d36864ea8cb829e/jsonschema_pydantic_converter-0.1.9.tar.gz | source | sdist | null | false | a21462166059832ef20820f488b19aa8 | a1396e372bc0f7615fd47588e3fb18b031f5fb178d5db65c5ad6d083a2df71bd | 1c348db60a66894055ec785a10f6ad0e7778ab2e5f53c3254d36864ea8cb829e | null | [
"LICENSE"
] | 30,094 |
2.4 | jeremydimond.pygamesim | 1.2.1 | Python game simulator. | # pygamesim
Version: 1.2.1
Python game simulator
| text/markdown | null | Jeremy Dimond <jeremy@jeremydimond.com> | null | null | Copyright © 2019-2026 Jeremy Dimond
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | jeremydimond, game, simulator, simulation, blackjack | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests==2.32.5",
"jsonpickle==4.1.1",
"jeremydimond.pytesthelpers==1.0.4; extra == \"dev\"",
"pytest<9.0.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest-tldr; extra == \"dev\"",
"bumpver; extra == \"dev\"",
"build; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/jeremydimond/pygamesim"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T21:35:21.736215 | jeremydimond_pygamesim-1.2.1.tar.gz | 9,437 | 32/e3/a8ade4273767224ec1172965486b8e4b436aca7ee8f13a18bdeb6262b9b3/jeremydimond_pygamesim-1.2.1.tar.gz | source | sdist | null | false | bb910847112a7f802df347702896d2f4 | e087819534490d25012278b7f982dd7465c1eff4a338d45b01ec4f19cbe1e5a4 | 32e3a8ade4273767224ec1172965486b8e4b436aca7ee8f13a18bdeb6262b9b3 | null | [
"LICENSE"
] | 0 |
2.4 | threefive | 3.0.75 | threefive is The #1 SCTE-35 Decoder and Encoder on the Planet | #### Need to inject SCTE-35 into HLS? [X9k3.](https://github.com/superkabuki/x9k3)
# [ threefive ]
## https://github.com/superkabuki/threefive
### threefive is the industry-leading SCTE-35 tool.
* __Decodes SCTE-35__ from `MPEGTS`✔ `Base64`✔ `Bytes`✔ `DASH`✔ `Hex` ✔ `HLS`✔ `Integers`✔ `JSON`✔ `XML`✔ `XML+Binary`✔
* __Encodes SCTE-35__ to `MPEGTS`✔ `Base64`✔ `Bytes`✔ `Hex`✔ `Integers`✔ `JSON`✔ `XML`✔ `XML+Binary`✔
* __Injects SCTE-35 Packets__ into `MPEGTS`✔.
* __Network support__ for `HTTP(s)`✔ `Multicast`✔ `UDP`✔ `SRT`✔
* __Built-in__ `Multicast Server`✔
* __Automatic__ `AES decryption`✔
___
### [ News ]
* __Python3 vs. Pypy3__ [__parsing SCTE35 with threefive__](https://github.com/superkabuki/threefive_is_scte35#python3-vs-pypy3-running-threefive) (watch the cool video)
* __threefive now supports__ [__Secure Reliable Transport__](https://github.com/superkabuki/threefive_is_scte35/blob/main/README.md#threefive-now-supports-srt) (watch the cool video)
* [__threefive does Multicast very well__](#-threefive-streams-multicast-its-easy-), both as a sender and receiver.
___
## [ Latest version is v3.0.75 ]
* [__Super low cyclomatic complexity score__](cyclomatic.md)
___
## [Fun Facts]
* threefive is single threaded.
* threefive has more left shifts than multiplication operations.
* threefive doesn't have a single lambda call.
___
## [Examples]
<i>These examples show how to parse SCTE-35<BR>
from various SCTE-35 data formats, with both the cli and with the library.</i>
<details><summary>MPEGTS</summary>
* MPEGTS streams can be Files, Http(s), Multicast, SRT, UDP Unicast, or stdin.
* __cli__
```js
threefive https://example.com/video.ts
```
* wildcards work too.
```js
threefive /mpegts/*.ts
```
* __lib__
```py3
from threefive import Stream
stream = Stream('https://example.com/video.ts')
stream.decode()
```
</details>
<details><summary>Base64</summary>
* __cli__
```js
threefive '/DAsAAAAAyiYAP/wCgUAAAABf1+ZmQEBABECD0NVRUkAAAAAf4ABADUAAC2XQZU='
```
* __lib__
```py3
from threefive import Cue
data = '/DAsAAAAAyiYAP/wCgUAAAABf1+ZmQEBABECD0NVRUkAAAAAf4ABADUAAC2XQZU='
cue = Cue(data)
cue.show()
```
</details>
<details><summary>Bytes</summary>
* __cli__
* Bytes don't work on the cli
* __lib__
```py3
from threefive import Cue
data = b'\xfc0\x16\x00\x00\x00\x00\x00\x00\x00\xff\xf0\x05\x06\xfe\x00\xc0D\xa0\x00\x00\x00\xb5k\x88'
cue = Cue(data)
cue.show()
```
</details>
<details><summary>Hex</summary>
* Can be a hex literal or hex string or bytes.
* __cli__
```js
threefive 0xfc301600000000000000fff00506fed605225b0000b0b65f3b
```
* __lib__
```py3
from threefive import Cue
data = 0xfc301600000000000000fff00506fed605225b0000b0b65f3b
cue = Cue(data)
cue.show()
```
</details>
<details><summary>Int</summary>
* Can be a literal integer or string or bytes.
* __cli__
```js
threefive 1583008701074197245727019716796221243043855984942057168199483
```
* __lib__
```py3
from threefive import Cue
data = 1583008701074197245727019716796221243043855984942057168199483
cue = Cue(data)
cue.show()
```
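* Under the hood, the integer form is just the big-endian bytes of the splice info section. You can see this with plain stdlib conversion (a generic sketch, not threefive API):

```python
data = 1583008701074197245727019716796221243043855984942057168199483
# Big-endian bytes of the integer; SCTE-35 sections start with table_id 0xfc.
raw = data.to_bytes((data.bit_length() + 7) // 8, "big")
print(hex(raw[0]))  # 0xfc
print(len(raw), "bytes")
```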
</details>
<details><summary>JSON</summary>
* __cli__
* put JSON SCTE-35 in a file and redirect it into threefive
* cat files to threefive works too.
* echo JSON or type JSON on the command line.
```js
threefive < json.json
```
* __lib__
```py3
from threefive import Cue
data = '''{
"info_section": {
"table_id": "0xfc",
"section_syntax_indicator": false,
"private": false,
"sap_type": "0x03",
"sap_details": "No Sap Type",
"section_length": 22,
"protocol_version": 0,
"encrypted_packet": false,
"encryption_algorithm": 0,
"pts_adjustment": 0.0,
"cw_index": "0x00",
"tier": "0x0fff",
"splice_command_length": 5,
"splice_command_type": 6,
"descriptor_loop_length": 0,
"crc": "0xb56b88"
},
"command": {
"command_length": 5,
"command_type": 6,
"name": "Time Signal",
"time_specified_flag": true,
"pts_time": 140.005333
},
"descriptors": []
}
'''
cue = Cue(data)
cue.show()
```
</details>
<details><summary><u>Xml</u></summary>
* __cli__
* put xml SCTE-35 in a [file](xml.xml) and redirect it into threefive
* cat files to threefive works too.
* echo xml or type xml on the command line.
```js
threefive < xml.xml
```
* __lib__
```py3
from threefive import Cue
data = '''
<scte35:SpliceInfoSection xmlns:scte35="https://scte.org/schemas/35"
ptsAdjustment="0" protocolVersion="0" sapType="3" tier="4095">
<scte35:TimeSignal>
<scte35:SpliceTime ptsTime="12600480"/>
</scte35:TimeSignal>
</scte35:SpliceInfoSection>
'''
cue = Cue(data)
cue.show()
```
</details>
<details><summary>Xml+binary</summary>
* __cli__
* write xml+binary to a [file](xmlbin.xml) and redirect it to threefive
* cat files to threefive works too.
* echo xml+binary or type xml+binary on the command line.
```js
threefive < xmlbin.xml
```
* __lib__
```py3
from threefive import Cue
data = '''<scte35:Signal xmlns:scte35="https://scte.org/schemas/35">
<scte35:Binary>/DAWAAAAAAAAAP/wBQb+AMBEoAAAALVriA==</scte35:Binary>
</scte35:Signal>
'''
cue = Cue(data)
cue.show()
```
</details>
#### [__More Examples__](https://github.com/superkabuki/threefive/tree/main/examples)
# [ Documentation ]
* __use threefive on the web__
  * [threefive SCTE-35 __Online Parser__](https://iodisco.com/scte35) _hosted on my server_
  * [SCTE-35 __Online Parser__ powered by threefive](http://www.domus1938.com/scte35parser) _another online parser powered by threefive_
* [SCTE-35 __As a Service__](sassy.md) _if you can make an http request, you can parse SCTE-35, no install needed._
* [__install__](#install)
* [SCTE-35 Decoding __Quick Start__ ](#quick-start) _threefive makes decoding SCTE-35 fast and easy_
* [SCTE-35 __Examples__](https://github.com/superkabuki/threefive/tree/main/examples) _examples of all kinds of SCTE-35 stuff_
* __Command line__
* [SCTE-35 __Cli__](#-the-cli-tool-) _decode SCTE-35 on the command line_
* __Library__
* [__Using the threefive.Cue class__](https://github.com/superkabuki/threefive/blob/main/lib.md)
* [__Using the threefive library__](#using-the-library) _decode SCTE-35 with less than ten lines of code_
* [threefive __Classes__](#classes) _threefive is OO, made to subclass_
* [__Cue__ Class](https://github.com/superkabuki/threefive/blob/main/cue.md) _this class you'll use often_
* [__Stream__ Class](https://github.com/superkabuki/threefive/blob/main/stream.md) _this is the class for parsing MPEGTS_
* [Use __threefive to stream Multicast__](#-threefive-streams-multicast-its-easy-) _threefive is a multicast client and server_
* [SCTE-35 __Sidecar Files__](https://github.com/superkabuki/SCTE-35_Sidecar_Files) _threefive supports SCTE-35 sidecar files_
* [__SuperKabuki__ SCTE-35 MPEGTS __Packet Injection__](inject.md) _inject SCTE-35 into MPEGTS streams_
* [SCTE-35 __HLS__](https://github.com/superkabuki/threefive/blob/main/hls.md) _parse SCTE-35 in HLS_
* [SCTE-35 __XML__ ](https://github.com/superkabuki/SCTE-35/blob/main/xml.md) and [More __XML__](node.md) _threefive can parse and encode SCTE-35 xml_
* [__Encode__ SCTE-35](https://github.com/superkabuki/threefive/blob/main/encode.md) _threefive can encode SCTE-35 in every SCTE-35 format_
* [Make your __threefive__ script an executable with __cython__](cython.md) _threefive is compatible with all python tools_
## [Install]
* python3 via pip
```rebol
python3 -mpip install threefive
```
* pypy3
```rebol
pypy3 -mpip install threefive
```
* from the git repo
```rebol
git clone https://github.com/superkabuki/scte35.git
cd threefive
make install
```
___
## [Quick Start]
* Most threefive features work the same way.
### [cli tool]
* The default action is to read an input and write a SCTE-35 output.
* __Inputs:__ mpegts, base64, hex, json, xml, and xmlbin.
* __Outputs:__ base64, bytes, hex, int, json, xml, and xmlbin.
* __Sources:__ threefive can read from strings, files, stdin, http(s), multicast, srt, and udp.
|Input |Output | How to use |
|----------|-----------|---------------------------------------------------------|
|__mpegts__|__base64__ | threefive https://example.com/video.ts __base64__ |
|__base64__|__hex__ | threefive '/DAWAAAAAAAAAP/wBQb+AKmKxwAACzuu2Q==' __hex__|
|__xmlbin__|__int__ | threefive < xmlbin.xml __int__ |
|__xml__ |__json__ | threefive < xml.xml |
|__mpegts__|__xml+bin__| threefive video.ts __xmlbin__ |
|__json__ |__xml__ | threefive < json.json __xml__ |
* __Additional functionality__ in the threefive cli tool.
| Description | How To Use |
|------------------------------------------|---------------------------------------------------------|
| Adjust __SCTE-35__ PTS values by seconds | threefive __bump__ -i input.ts -o output.ts -b -37.45 |
| Parse HLS for __SCTE-35__ | threefive __hls__ https://example.com/master.m3u8 |
| Inject __SCTE-35__ packets | threefive __inject__ -i in.video -s sidecar.txt -o out.ts|
| Show raw __SCTE-35__ packets | threefive __packets__ udp://@235.35.3.5:3535 |
| Copy MPEGTS stream to stdout at realtime speed | threefive __rt__ input.ts \| mplayer - |
| Create __SCTE-35__ sidecar file | threefive __sidecar__ video.ts |
| Fix __SCTE-35__ data mangled by __ffmpeg__ | threefive __sixfix__ video.ts |
| Show streams in mpegts stream | threefive __show__ https://example.com/video.ts |
| Show __iframes__ in mpegts stream | threefive __iframes__ srt://10.10.1.3:9000 |
| Show __PTS__ values from mpegts stream | threefive __pts__ udp://192.168.1.10:9000 |
| __Proxy__ the __mpegts__ stream to stdout | threefive __proxy__ https://example.com/video.ts |
| __Multicast__ anything | threefive __mcast__ some.file |
___
## [XML]
* [XML](https://github.com/superkabuki/SCTE-35/blob/main/xml.md) __New__! _updated 05/01/2025_
## [Cli]
* [SCTE-35 Cli Super Tool](#the-cli-tool) Encodes, Decodes, and Recodes. This is pretty cool; it does SCTE-35 seven different ways.
* The cli tool comes with built-in documentation; just type `threefive help`.
## [HLS]
* [Advanced Parsing of SCTE-35 in HLS with threefive](https://github.com/superkabuki/threefive/blob/main/hls.md) All HLS SCTE-35 tags, Sidecar Files, AAC ID3 Header Timestamps, SCTE-35 filters... Who loves you baby?
## [MPEGTS Packet Injection]
* [The SuperKabuki MPEGTS Packet Injection Engine in the Cli](inject.md)
## [SCTE-35 As a Service]
* Decode SCTE-35 without installing anything. If you can make an https request, you can use [__Sassy__](sassy.md) to decode SCTE-35.
## [Classes]
* Python's built-in help is always the most up-to-date documentation for the library.
```py3
a@fu:~/build7/threefive$ pypy3
>>>> from threefive import Stream
>>>> help(Stream)
```
* [Class Structure](https://github.com/superkabuki/threefive/blob/main/classes.md)
* [Cue Class](https://github.com/superkabuki/threefive/blob/main/cue.md) Cue is the main SCTE-35 class to use.
* [Stream Class](https://github.com/superkabuki/threefive/blob/main/stream.md) The Stream class handles MPEGTS SCTE-35 streams local, Http(s), UDP, and Multicast.
___
### [threefive now supports SRT]
* _( You have to unmute the audio )_
https://github.com/user-attachments/assets/a323ea90-867f-480f-a55f-e9339263e511
<BR>
* [more SRT and threefive info](srt.md)
* _checkout [SRTfu](https://github.com/superkabuki/srtfu)_
___
## [more]
* [Online SCTE-35 Parser](https://iodisco.com/scte35) Supports Base64, Bytes, Hex, Int, JSON, XML, and XML+binary.
* [Encode SCTE-35](https://github.com/superkabuki/threefive/blob/main/encode.md) Some encoding code examples.
___
## __Python3 vs Pypy3 running threefive__
* __( You have to unmute the audio )__
https://github.com/user-attachments/assets/9e88fb38-6ad0-487a-a801-90faba9d72c6
___
# Using the library
* Let me show you how easy threefive is to use.
* reading SCTE-35 xml from a file
```py3
a@fu:~/threefive$ pypy3
Python 3.9.16 (7.3.11+dfsg-2+deb12u3, Dec 30 2024, 22:36:23)
[PyPy 7.3.11 with GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>> from threefive import reader
>>>> from threefive import Cue
>>>> data = reader('/home/a/xml.xml').read()
```
* load it into a threefive.Cue instance
```py3
>>>> cue = Cue(data)
```
* Show the data as JSON
```py3
>>>> cue.show()
{
"info_section": {
"table_id": "0xfc",
"section_syntax_indicator": false,
"private": false,
"sap_type": "0x03",
"sap_details": "No Sap Type",
"section_length": 92,
"protocol_version": 0,
"encrypted_packet": false,
"encryption_algorithm": 0,
"pts_adjustment": 0.0,
"cw_index": "0x00",
"tier": "0x0fff",
"splice_command_length": 15,
"splice_command_type": 5,
"descriptor_loop_length": 60,
"crc": "0x7632935"
},
"command": {
"command_length": 15,
"command_type": 5,
"name": "Splice Insert",
"break_auto_return": false,
"break_duration": 180.0,
"splice_event_id": 1073743095,
"splice_event_cancel_indicator": false,
"out_of_network_indicator": true,
"program_splice_flag": false,
"duration_flag": true,
"splice_immediate_flag": false,
"event_id_compliance_flag": true,
"unique_program_id": 1,
"avail_num": 12,
"avails_expected": 5
},
"descriptors": [
{
"tag": 0,
"identifier": "CUEI",
"name": "Avail Descriptor",
"provider_avail_id": 12,
"descriptor_length": 8
},
{
"tag": 0,
"identifier": "CUEI",
"name": "Avail Descriptor",
"provider_avail_id": 13,
"descriptor_length": 8
}
]
}
```
* convert the data back to xml
```py3
>>>> print(cue.xml())
<scte35:SpliceInfoSection xmlns:scte35="https://scte.org/schemas/35" ptsAdjustment="0" protocolVersion="0" sapType="3" tier="4095">
<scte35:SpliceInsert spliceEventId="1073743095" spliceEventCancelIndicator="false" spliceImmediateFlag="false" eventIdComplianceFlag="true" availNum="12" availsExpected="5" outOfNetworkIndicator="true" uniqueProgramId="1">
<scte35:BreakDuration autoReturn="false" duration="16200000"/>
</scte35:SpliceInsert>
<scte35:AvailDescriptor providerAvailId="12"/>
<scte35:AvailDescriptor providerAvailId="13"/>
<scte35:AvailDescriptor providerAvailId="14"/>
<scte35:AvailDescriptor providerAvailId="15"/>
<scte35:AvailDescriptor providerAvailId="16"/>
<scte35:AvailDescriptor providerAvailId="17"/>
</scte35:SpliceInfoSection>
```
* convert to xml+binary
```py3
>>>> print(cue.xmlbin())
<scte35:Signal xmlns:scte35="https://scte.org/schemas/35">
<scte35:Binary>/DBcAAAAAAAAAP/wDwVAAAT3f69+APcxQAABDAUAPAAIQ1VFSQAAAAwACENVRUkAAAANAAhDVUVJAAAADgAIQ1VFSQAAAA8ACENVRUkAAAAQAAhDVUVJAAAAEQdjKTU=</scte35:Binary>
</scte35:Signal>
```
* convert to base64
```py3
>>>> print(cue.base64())
/DBcAAAAAAAAAP/wDwVAAAT3f69+APcxQAABDAUAPAAIQ1VFSQAAAAwACENVRUkAAAANAAhDVUVJAAAADgAIQ1VFSQAAAA8ACENVRUkAAAAQAAhDVUVJAAAAEQdjKTU=
```
* convert to hex
```py3
>>>> print(cue.hex())
0xfc305c00000000000000fff00f05400004f77faf7e00f7314000010c05003c0008435545490000000c0008435545490000000d0008435545490000000e0008435545490000000f000843554549000000100008435545490000001107632935
```
* show just the splice command
```py3
>>>> cue.command.show()
{
"command_length": 15,
"command_type": 5,
"name": "Splice Insert",
"break_auto_return": false,
"break_duration": 180.0,
"splice_event_id": 1073743095,
"splice_event_cancel_indicator": false,
"out_of_network_indicator": true,
"program_splice_flag": false,
"duration_flag": true,
"splice_immediate_flag": false,
"event_id_compliance_flag": true,
"unique_program_id": 1,
"avail_num": 12,
"avails_expected": 5
}
```
* edit the break duration
```py3
>>>> cue.command.break_duration = 30
>>>> cue.command.show()
{
"command_length": 15,
"command_type": 5,
"name": "Splice Insert",
"break_auto_return": false,
"break_duration": 30,
"splice_event_id": 1073743095,
"splice_event_cancel_indicator": false,
"out_of_network_indicator": true,
"program_splice_flag": false,
"duration_flag": true,
"splice_immediate_flag": false,
"event_id_compliance_flag": true,
"unique_program_id": 1,
"avail_num": 12,
"avails_expected": 5
}
```
* re-encode to base64 with the new duration
```py3
>>>> cue.base64()
'/DBcAAAAAAAAAP/wDwVAAAT3f69+ACky4AABDAUAPAAIQ1VFSQAAAAwACENVRUkAAAANAAhDVUVJAAAADgAIQ1VFSQAAAA8ACENVRUkAAAAQAAhDVUVJAAAAEe1FB6g='
```
* re-encode to xml with the new duration
```py3
>>>> print(cue.xml())
<scte35:SpliceInfoSection xmlns:scte35="https://scte.org/schemas/35" ptsAdjustment="0" protocolVersion="0" sapType="3" tier="4095">
<scte35:SpliceInsert spliceEventId="1073743095" spliceEventCancelIndicator="false" spliceImmediateFlag="false" eventIdComplianceFlag="true" availNum="12" availsExpected="5" outOfNetworkIndicator="true" uniqueProgramId="1">
<scte35:BreakDuration autoReturn="false" duration="2700000"/>
</scte35:SpliceInsert>
<scte35:AvailDescriptor providerAvailId="12"/>
<scte35:AvailDescriptor providerAvailId="13"/>
<scte35:AvailDescriptor providerAvailId="14"/>
<scte35:AvailDescriptor providerAvailId="15"/>
<scte35:AvailDescriptor providerAvailId="16"/>
<scte35:AvailDescriptor providerAvailId="17"/>
</scte35:SpliceInfoSection>
```
* show just the descriptors
```py3
>>>> _ = [d.show() for d in cue.descriptors]
{
"tag": 0,
"identifier": "CUEI",
"name": "Avail Descriptor",
"provider_avail_id": 12,
"descriptor_length": 8
}
{
"tag": 0,
"identifier": "CUEI",
"name": "Avail Descriptor",
"provider_avail_id": 13,
"descriptor_length": 8
}
{
"tag": 0,
"identifier": "CUEI",
"name": "Avail Descriptor",
"provider_avail_id": 14,
"descriptor_length": 8
}
{
"tag": 0,
"identifier": "CUEI",
"name": "Avail Descriptor",
"provider_avail_id": 15,
"descriptor_length": 8
}
{
"tag": 0,
"identifier": "CUEI",
"name": "Avail Descriptor",
"provider_avail_id": 16,
"descriptor_length": 8
}
{
"tag": 0,
"identifier": "CUEI",
"name": "Avail Descriptor",
"provider_avail_id": 17,
"descriptor_length": 8
}
```
* pop off the last descriptor and re-encode to xml
```py3
>>>> cue.descriptors.pop()
{'tag': 0, 'identifier': 'CUEI', 'name': 'Avail Descriptor', 'private_data': None, 'provider_avail_id': 17, 'descriptor_length': 8}
>>>> print(cue.xml())
<scte35:SpliceInfoSection xmlns:scte35="https://scte.org/schemas/35" ptsAdjustment="0" protocolVersion="0" sapType="3" tier="4095">
<scte35:SpliceInsert spliceEventId="1073743095" spliceEventCancelIndicator="false" spliceImmediateFlag="false" eventIdComplianceFlag="true" availNum="12" availsExpected="5" outOfNetworkIndicator="true" uniqueProgramId="1">
<scte35:BreakDuration autoReturn="false" duration="2700000"/>
</scte35:SpliceInsert>
<scte35:AvailDescriptor providerAvailId="12"/>
<scte35:AvailDescriptor providerAvailId="13"/>
<scte35:AvailDescriptor providerAvailId="14"/>
<scte35:AvailDescriptor providerAvailId="15"/>
<scte35:AvailDescriptor providerAvailId="16"/>
</scte35:SpliceInfoSection>
```
## [ The Cli tool ]
#### The cli tool installs automatically with pip or the Makefile.
* [__SCTE-35 Inputs__](#inputs)
* [__SCTE-35 Outputs__](#outputs)
* [Parse __MPEGTS__ streams for __SCTE-35__](#streams)
* [Parse __SCTE-35__ in __hls__](#hls)
* [Display __MPEGTS__ __iframes__](#iframes)
* [Display raw __SCTE-35 packets__ from __video streams__](#packets)
* [__Repair SCTE-35 streams__ changed to __bin data__ by __ffmpeg__](#sixfix)
#### `Inputs`
* Most __inputs__ are __auto-detected.__
* __stdin__ is __auto selected__ and __auto detected.__
* __SCTE-35 data is printed to stderr__
* __stdout is used when piping video__
* mpegts can be specified by file name or URI.
```rebol
threefive udp://@235.2.5.35:3535
```
* If a file contains a SCTE-35 cue as a string (base64, hex, int, json, or xml+bin), redirect the file contents.
```rebol
threefive < json.json
```
* Quoted strings (base64, hex, int, json, or xml+bin) can be passed directly on the command line as well.
```awk
threefive '/DAWAAAAAAAAAP/wBQb+ztd7owAAdIbbmw=='
```
| Input Type | Cli Example |
|------------|-------------------------------------------------------------------------------------------------------------|
| __Base64__ | `threefive '/DAsAAAAAyiYAP/wCgUAAAABf1+ZmQEBABECD0NVRUkAAAAAf4ABADUAAC2XQZU='` |
| __Hex__ |`threefive 0xfc301600000000000000fff00506fed605225b0000b0b65f3b`|
| __HLS__ |`threefive hls https://example.com/master.m3u8` |
| __JSON__ |`threefive < json.json` |
| __Xmlbin__ | `threefive < xmlbin.xml` |
#### `Streams`
|Protocol | Cli Example |
|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------|
| __File__ | `threefive video.ts` |
| __Http(s)__ | `threefive https://example.com/video.ts` |
| __Stdin__ | `threefive < video.ts` |
| __UDP Multicast__| `threefive udp://@235.35.3.5:9999` |
| __UDP Unicast__ | `threefive udp://10.0.0.7:5555` |
| __HLS__ | `threefive hls https://example.com/master.m3u8`|
#### Outputs
* Output type is determined by the keywords __base64, bytes, hex, int, json, and xmlbin__.
* __json is the default__.
* __Any input (except HLS) can be returned as any output__ (for example, __Base64 to Hex__).
| Output Type | Cli Example |
|-------------|----------------------------------------------------------|
|__Base 64__ | `threefive 0xfc301600000000000000fff00506fed605225b0000b0b65f3b base64 ` |
| __Bytes__ | `threefive 0xfc301600000000000000fff00506fed605225b0000b0b65f3b bytes` |
| __Hex__ | `threefive '/DAsAAAAAyiYAP/wCgUAAAABf1+ZmQEBABECD0NVRUkAAAAAf4ABADUAAC2XQZU=' hex` |
| __Integer__ | `threefive '/DAsAAAAAyiYAP/wCgUAAAABf1+ZmQEBABECD0NVRUkAAAAAf4ABADUAAC2XQZU=' int` |
| __JSON__ | `threefive 0xfc301600000000000000fff00506fed605225b0000b0b65f3b json ` |
| __Xml+bin__ | `threefive 0xfc301600000000000000fff00506fed605225b0000b0b65f3b xmlbin` |
#### `hls`
* parse hls manifests and segments for SCTE-35
```smalltalk
threefive hls https://example.com/master.m3u8
```
___
#### `Iframes`
* Show iframe PTS values in an MPEGTS video
```smalltalk
threefive iframes https://example.com/video.ts
```
___
#### `packets`
* Print raw SCTE-35 packets from multicast mpegts video
```smalltalk
threefive packets udp://@235.35.3.5:3535
```
___
#### `proxy`
* Parse a https stream and write raw video to stdout
```smalltalk
threefive proxy video.ts
```
___
#### `pts`
* Print PTS from mpegts video
```smalltalk
threefive pts video.ts
```
___
#### `sidecar`
* Parse a stream and write PTS and SCTE-35 cues to sidecar.txt
```smalltalk
threefive sidecar video.ts
```
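* Each sidecar line pairs a PTS with its SCTE-35 cue. Assuming comma-separated `pts,cue` lines (check your generated file — the layout and the PTS values below are illustrative, though the cue strings are real), reading one back is straightforward:

```python
# Hypothetical sidecar.txt contents; real files are written by `threefive sidecar`.
lines = [
    "38103.868589,/DAWAAAAAAAAAP/wBQb+AKmKxwAACzuu2Q==",
    "38113.950089,/DAWAAAAAAAAAP/wBQb+AMBEoAAAALVriA==",
]
for line in lines:
    # Split only on the first comma so base64 payloads stay intact.
    pts, cue = line.strip().split(",", 1)
    print(float(pts), cue)
```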
___
#### `sixfix`
* Fix SCTE-35 data mangled by ffmpeg
```smalltalk
threefive sixfix video.ts
```
___
#### `show`
* Probe mpegts video _( kind of like ffprobe )_
```smalltalk
threefive show video.ts
```
___
#### `version`
* Show version
```smalltalk
threefive version
```
___
#### `help`
* Help
```rebol
threefive help
```
___
## [ threefive Streams Multicast, it's easy. ]
* The threefive cli has long been a Multicast Receiver (client).
* The cli now comes with a built-in Multicast Sender (server).
* It's optimized for MPEGTS (1316 byte Datagrams) but you can send any video or file.
* The defaults will work in most situations, you don't even have to set the address.
* threefive cli also supports UDP Unicast Streaming.
If you're tired of configuring strange kernel settings with sysctl trying to get multicast to work,<br>
threefive multicast is written from scratch on raw sockets and autoconfigures most settings.<br>
threefive adjusts SO_RCVBUF, SO_SNDBUF, SO_REUSEADDR, SO_REUSEPORT, IP_MULTICAST_TTL, and IP_MULTICAST_LOOP for you.<br>
All you really need to do is make sure multicast is enabled on the network device; threefive can handle the rest.<br>
```js
ip link set wlp2s0 multicast on
```
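The socket options listed above map onto plain Python roughly like this — a sketch for illustration only, not threefive's actual code, and the buffer size is a made-up value:

```python
import socket

def multicast_sender_socket(ttl=32):
    # Sketch of the options threefive is described as setting on its sockets.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    if hasattr(socket, "SO_REUSEPORT"):  # not available on every platform
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1316 * 1024)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
    return sock
```

A sender would then push MPEGTS in 1316-byte chunks with `sock.sendto(chunk, ("235.35.3.5", 3535))`; a receiver sets SO_RCVBUF the same way on its side.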
```js
a@fu:~$ threefive mcast help
usage: threefive mcast [-h] [-i INPUT] [-a ADDR] [-b BIND_ADDR] [-t TTL]
optional arguments:
-h, --help show this help message and exit
-i INPUT, --input INPUT
like "/home/a/vid.ts" or "udp://@235.35.3.5:3535" or
"https://futzu.com/xaa.ts"
[default:sys.stdin.buffer]
-a ADDR, --addr ADDR Destination IP:Port [default:235.35.3.5:3535]
-b BIND_ADDR, --bind_addr BIND_ADDR
Local IP to bind [default:0.0.0.0]
-t TTL, --ttl TTL Multicast TTL (1 - 255) [default:32]
a@fu:~$
```
* The video shows three streams being read and played from threefive's multicast; one stream is being converted to SRT.
* The command:
```sh
a@fu:~/scratch/threefive$ threefive mcast -i ~/mpegts/ms.ts
```
https://github.com/user-attachments/assets/df95b8da-5ca6-4bf3-b029-c95204841e43
* __threefive mcast__ sends __1316 byte datagrams__. Here's `tcpdump multicast` output.
<img width="1126" height="679" alt="image" src="https://github.com/user-attachments/assets/b29f33c7-d35c-42be-95fb-2c6e72d1ab9b" />
___
## [iodisco.com/scte35](https://iodisco.com/scte35)
<img width="258" height="256" alt="image" src="https://github.com/user-attachments/assets/642cb803-9465-408e-bb6e-03549eb22d78" />
___
[__Install__](#install) |[__SCTE-35 Cli__](#the-cli-tool) | [__SCTE-35 HLS__](https://github.com/superkabuki/threefive/blob/main/hls.md) | [__Cue__ Class](https://github.com/superkabuki/threefive/blob/main/cue.md) | [__Stream__ Class](https://github.com/superkabuki/threefive/blob/main/stream.md) | [__Online SCTE-35 Parser__](https://iodisco.com/scte35) | [__Encode SCTE-35__](https://github.com/superkabuki/threefive/blob/main/encode.md) | [__SCTE-35 Examples__](https://github.com/superkabuki/threefive/tree/main/examples)
| [__SCTE-35 XML__ ](https://github.com/superkabuki/SCTE-35/blob/main/xml.md) and [More __XML__](node.md) | [__threefive runs Four Times Faster on pypy3__](https://pypy.org/) | [__SuperKabuki SCTE-35 MPEGTS Packet Injection__](inject.md)
| text/markdown | Adrian | spam@iodisco.com | null | null | null | null | [
"License :: OSI Approved :: Sleepycat License",
"Environment :: Console",
"Operating System :: OS Independent",
"Operating System :: POSIX :: BSD :: OpenBSD",
"Operating System :: POSIX :: Linux",
"Topic :: Multimedia :: Video",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python ... | [] | https://github.com/superkabuki/threefive | null | >=3.8 | [] | [] | [] | [
"pyaes",
"srtfu>=0.0.11"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.2 | 2026-02-18T21:35:17.919822 | threefive-3.0.75.tar.gz | 91,729 | 54/de/5e71367776eeb28ba0e1bd210f9573c0564b538bd403d067bf77af7bdc2c/threefive-3.0.75.tar.gz | source | sdist | null | false | b23dcba52e10599d12475ec7f17d8b29 | c0e2a8d1947829822c87f10ebc4669ea868e42d8586ce5fbb60bc6a0b892c71b | 54de5e71367776eeb28ba0e1bd210f9573c0564b538bd403d067bf77af7bdc2c | null | [
"LICENSE"
] | 377 |
2.4 | svy-rs | 0.3.0 | Internal Rust extension for the svy package. Do not depend on this directly. | # svy-rs
> **Internal package** — This is a compiled Rust extension for the [svy](https://pypi.org/project/svy/) package. Do not depend on this directly.
## Installation
This package is automatically installed as a dependency of `svy`:
```bash
pip install svy
```
Do not install `svy-rs` directly unless you know what you're doing.
| text/markdown; charset=UTF-8; variant=GFM | null | Samplics LLC <msdiallo@samplics.org> | null | null | null | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Development Status :: 4 - Beta",
"Operating System :: OS Independent",
"Topic ::... | [] | null | null | >=3.11 | [] | [] | [] | [
"polars[pyarrow]>=1.8.2"
] | [] | [] | [] | [] | maturin/1.12.2 | 2026-02-18T21:35:15.180628 | svy_rs-0.3.0.tar.gz | 82,023 | d0/58/3e41eff3eaba12adc780e65648dfd010fed38a333e02bbd4c67e19622a91/svy_rs-0.3.0.tar.gz | source | sdist | null | false | 5533752dcc151393f9f31cea39667dec | dbd56c96b9b8917c450425f5de87085cd3e55c5e314ffbec0290e5632081df29 | d0583e41eff3eaba12adc780e65648dfd010fed38a333e02bbd4c67e19622a91 | null | [] | 501 |
2.4 | shipit-cli | 0.17.5 | Shipit CLI is the best way to build, serve and deploy your projects anywhere. | # Shipit
Shipit is a CLI that automatically detects the type of project you are trying to run, builds it and runs it using [Starlark](https://starlark-lang.org/) definition files (called `Shipit`).
It can run builds locally, inside Docker, or through Wasmer, and bundles a one-command experience for common frameworks.
## Quick Start
To use shipit, you'll need to have [uv](https://docs.astral.sh/uv/) installed.
Install nothing globally; use `uvx shipit-cli` to run Shipit from anywhere.
```bash
uvx shipit-cli .
```
Running in `auto` mode will generate the `Shipit` file when needed, build the project, and can
also serve it. Shipit picks the safest builder automatically and falls back to
Docker or Wasmer when requested:
- `uvx shipit-cli . --wasmer` builds locally and serves inside Wasmer.
- `uvx shipit-cli . --docker` builds it with Docker (you can customize the docker client as well, eg: `--docker-client depot`).
- `uvx shipit-cli . --start` launches the app after building.
You can combine them as needed:
```
uvx shipit-cli . --start --wasmer --skip-prepare
```
## Commands
### Default `auto` mode
Full pipeline in one command. Combine flags such as `--regenerate` to rewrite
the `Shipit` file. Use
`--wasmer` to run with Wasmer, or `--wasmer-deploy` to deploy to Wasmer Edge.
### `generate`
```bash
uvx shipit-cli generate .
```
Create or refresh the `Shipit` file. Override build and run commands with
`--install-command`, `--build-command`, or `--start-command`. Pick an explicit provider
with `--use-provider`.
### `plan`
```bash
uvx shipit-cli plan --out plan.json
```
Evaluate the project and emit config, derived commands, and required
services without building. Helpful for CI checks or debugging configuration.
### `build`
```bash
uvx shipit-cli build
```
Run the build steps defined in `Shipit`. Append `--wasmer` to execute inside
Wasmer, `--docker` to use Docker builds.
### `serve`
```bash
uvx shipit-cli serve
```
Execute the start command for the project. Combine with `--wasmer` for WebAssembly execution, or `--wasmer-deploy` to deploy to Wasmer Edge.
## Supported Technologies
Shipit works with three execution environments:
- Local builder for fast, host-native builds.
- Docker builder when container isolation is required.
- Wasmer runner for portable WebAssembly packaging and deployment.
## Development
Clone the repository and use the `uv` project environment.
```bash
uv run shipit . --start
```
Use any other subcommand during development by prefixing with `uv run shipit`,
for example `uv run shipit build . --wasmer`. This keeps changes local while
matching the published CLI behaviour.
### Tests
Run the test suite with:
```bash
uv run pytest
```
You can run the e2e tests in parallel (`-n 8`) with:
```bash
uv run pytest -m e2e -v "tests/test_e2e.py" -s -n 8
```
The e2e tests will:
* Build the project (locally, or with docker)
* Run the project (locally or with Wasmer)
* Test that the project output (via http requests) is the correct one
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"dotenv>=0.9.9",
"pydantic-settings>=2.12.0",
"pydantic>=2.12.4",
"pyyaml>=6.0.2",
"requests>=2.32.5",
"rich>=14.1.0",
"ripgrep-python>=0.1.0",
"semantic-version>=2.10.0",
"sh>=2.2.2",
"toml>=0.10.2",
"tomlkit>=0.13.3",
"typer>=0.16.1",
"xingque>=0.2.1"
] | [] | [] | [] | [
"homepage, https://wasmer.io",
"repository, https://github.com/wasmerio/shipit",
"Changelog, https://github.com/wasmerio/shipit/changelog"
] | uv/0.7.13 | 2026-02-18T21:34:45.043608 | shipit_cli-0.17.5.tar.gz | 42,499 | 00/59/c01369c60c964fd6fac7ea02069b4c0636d4f68d32cdb13bf7598d8999a8/shipit_cli-0.17.5.tar.gz | source | sdist | null | false | 57497b36d95ca4784a668398d69db6a5 | 5b55db0d6b06affdae34bede6ddefb42e6c17bd4cd73d416df3579ca40f0e4e0 | 0059c01369c60c964fd6fac7ea02069b4c0636d4f68d32cdb13bf7598d8999a8 | null | [] | 841 |
2.4 | netrise-turbine-sdk | 0.1.11 | Turbine GraphQL Python SDK (generated via ariadne-codegen) | # Turbine Python SDK
Minimal, sync-first Python client for the Turbine GraphQL API.
## Getting Started
### Installation from PyPI
Install from PyPI as `netrise-turbine-sdk`:
```bash
# pip
pip install netrise-turbine-sdk
# poetry
poetry add netrise-turbine-sdk
# uv
uv add netrise-turbine-sdk
```
### Configure environment variables
The SDK automatically loads environment variables from a `.env` file in your current working directory when you call `TurbineClientConfig.from_env()`. You can also set environment variables directly.
**Option 1: Using a `.env` file (recommended)**
Create a `.env` file in your project directory:
```bash
endpoint=https://apollo.turbine.netrise.io/graphql/v3
audience=https://prod.turbine.netrise.io/
domain=https://authn.turbine.netrise.io
client_id=<client_id>
client_secret=<client_secret>
organization_id=<org_id>
```
The SDK will automatically load these when you call `TurbineClientConfig.from_env()`. The `.env` file is searched in:
- Current working directory (most common)
- Parent directories (walks up the directory tree)
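The upward search described above can be pictured with a small sketch. This is illustrative only, not the SDK's internal implementation: walk from the starting directory toward the filesystem root and return the first `.env` found.

```python
from pathlib import Path

def find_dotenv(start=None):
    """Illustrative sketch (not the SDK's implementation): walk from the
    current working directory up through its parents and return the first
    .env file found, or None if there is none."""
    current = Path(start or Path.cwd()).resolve()
    for directory in [current, *current.parents]:
        candidate = directory / ".env"
        if candidate.is_file():
            return candidate
    return None
```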
**Option 2: Set environment variables directly**
```python
import os
os.environ["endpoint"] = "https://apollo.turbine.netrise.io/graphql/v3"
# ... set other variables
cfg = TurbineClientConfig.from_env(load_env_file=False)
```
**Option 3: Disable automatic .env loading**
If you prefer to load `.env` files manually:
```python
from dotenv import load_dotenv
load_dotenv() # Your custom loading logic
cfg = TurbineClientConfig.from_env(load_env_file=False)
```
Populate the missing values. Reach out to [support@netrise.io](mailto:support@netrise.io) if you need assistance.
## Union Field Aliasing Convention
When GraphQL union types have members with identically-named fields that return different types, the SDK automatically applies aliases to disambiguate them. This is necessary because code generators cannot create a single Python type for fields with conflicting return types.
### Naming Convention
Aliased fields follow the pattern: `{camelCaseTypeName}{PascalCaseFieldName}`
For example, the `NotificationControl` union has `AssetAnalysisControl` and `UserManagementControl` types that both define an `events` field with different return types. In the generated SDK, these become:
- `AssetAnalysisControl.events` → `assetAnalysisControlEvents`
- `UserManagementControl.events` → `userManagementControlEvents`
### Example Usage
```python
# Accessing aliased fields on union type members
notification_settings = client.query_notification_settings()
for pref in notification_settings.preferences:
for control in pref.controls:
# Access the aliased field based on the control type
if hasattr(control, 'assetAnalysisControlEvents'):
events = control.assetAnalysisControlEvents
elif hasattr(control, 'userManagementControlEvents'):
events = control.userManagementControlEvents
```
This aliasing is applied automatically during SDK generation and only affects fields that would otherwise cause type conflicts.
## License
See [LICENSE](https://github.com/NetRiseInc/Python-Turbine-SDK/blob/main/LICENSE) for details.
## Documentation
- [API Documentation & Code Samples](https://github.com/NetRiseInc/Python-Turbine-SDK/blob/main/docs/README.md) - detailed examples for all client SDK operations.
| text/markdown | NetRise | anthony.feddersen@netrise.io | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"httpx<1.0.0,>=0.27.0",
"pydantic<3.0.0,>=2.0.0",
"python-dotenv<2.0.0,>=1.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T21:34:36.070737 | netrise_turbine_sdk-0.1.11.tar.gz | 81,042 | 16/cc/f4b58aed9cee61f6c2ee0174fc6103ea9a5062b74b38d6afdbebdcf18fe8/netrise_turbine_sdk-0.1.11.tar.gz | source | sdist | null | false | 353275307c607c629d231ba7c41fa8ff | 897aec88a789d420999cc2f569a034477da7da23cbe588ab309b3c73fbcee27e | 16ccf4b58aed9cee61f6c2ee0174fc6103ea9a5062b74b38d6afdbebdcf18fe8 | null | [] | 237 |
2.4 | ome-arrow | 0.0.7 | Using OME specifications with Apache Arrow for fast, queryable, and language agnostic bioimage data. | <img width="600" src="https://raw.githubusercontent.com/wayscience/ome-arrow/main/docs/src/_static/logo.png?raw=true">

[](https://github.com/wayscience/ome-arrow/actions/workflows/run-tests.yml?query=branch%3Amain)
[](https://github.com/astral-sh/ruff)
[](https://github.com/astral-sh/uv)
[](https://doi.org/10.5281/zenodo.17664969)
# Open, interoperable, and queryable microscopy images with OME Arrow
OME-Arrow uses [Open Microscopy Environment (OME)](https://github.com/ome) specifications through [Apache Arrow](https://arrow.apache.org/) for fast, queryable, and language agnostic bioimage data.
<img height="200" src="https://raw.githubusercontent.com/wayscience/ome-arrow/main/docs/src/_static/references_to_files.png">
__Images are often left behind from the data model, referenced but excluded from databases.__
<img height="200" src="https://raw.githubusercontent.com/wayscience/ome-arrow/main/docs/src/_static/various_ome_arrow_schema.png">
__OME-Arrow brings images back into the story.__
OME Arrow enables image data to be stored alongside metadata or derived data such as single-cell morphology features.
Images in OME Arrow are composed of multilayer [structs](https://arrow.apache.org/docs/python/generated/pyarrow.struct.html) so they may be stored as values within tables.
This means you can store, query, and build relationships on data from the same location using any system which is compatible with Apache Arrow (including Parquet) through common data interfaces (such as SQL and DuckDB).
## Project focus
This package is intentionally focused on working at a per-image level rather than on large-scale batch handling (though users or other projects may use it for those purposes).
- For visualizing OME Arrow and OME Parquet data in Napari, please see the [`napari-ome-arrow`](https://github.com/WayScience/napari-ome-arrow) Napari plugin.
- For more comprehensive handling of many images and features in the context of the OME Parquet format please see the [`CytoDataFrame`](https://github.com/cytomining/CytoDataFrame) project (and relevant [example notebook](https://github.com/cytomining/CytoDataFrame/blob/main/docs/src/examples/cytodataframe_at_a_glance.ipynb)).
## Installation
Install OME Arrow from PyPI or from source:
```sh
# install from pypi
pip install ome-arrow
# install directly from source
pip install git+https://github.com/wayscience/ome-arrow.git
```
## Quick start
See below for a quick start guide.
Please also reference an example notebook: [Learning to fly with OME-Arrow](https://github.com/wayscience/ome-arrow/tree/main/docs/src/examples/learning_to_fly_with_ome-arrow.ipynb).
```python
from ome_arrow import OMEArrow
# Ingest a tif image through a convenient OME Arrow class
# We can also ingest OME-Zarr or NumPy arrays.
oa_image = OMEArrow(
data="your_image.tif"
)
# Access the OME Arrow struct itself
# (compatible with Arrow-compliant data storage).
oa_image.data
# Show information about the image.
oa_image.info()
# Display the image with matplotlib.
oa_image.view(how="matplotlib")
# Display the image with pyvista
# (great for ZYX 3D images; install extras: `pip install 'ome-arrow[viz]'`).
oa_image.view(how="pyvista")
# Export to OME-Parquet.
# We can also export OME-TIFF, OME-Zarr or NumPy arrays.
oa_image.export(how="ome-parquet", out="your_image.ome.parquet")
# Export to Vortex (install extras: `pip install 'ome-arrow[vortex]'`).
oa_image.export(how="vortex", out="your_image.vortex")
```
## Tensor view (DLPack)
For tensor-focused workflows (PyTorch/JAX), use `tensor_view` and DLPack export.
```python
from ome_arrow import OMEArrow
oa = OMEArrow("your_image.ome.parquet")
# Spatial ROI per plane
view = oa.tensor_view(t=0, z=0, roi=(32, 32, 128, 128), layout="CHW")
# Convenience 3D ROI (x, y, z, w, h, d)
view3d = oa.tensor_view(roi3d=(32, 32, 2, 128, 128, 4), layout="TZCHW")
# 3D tiled iteration over (z, y, x)
for cap in view3d.iter_tiles_3d(tile_size=(2, 64, 64), mode="numpy"):
pass
```
Advanced options:
- `chunk_policy="auto" | "combine" | "keep"` controls ChunkedArray handling.
- `channel_policy="error" | "first"` controls behavior when dropping `C` from layout.
See full docs: [`docs/src/dlpack.md`](docs/src/dlpack.md)
## Contributing, Development, and Testing
Please see our [contributing documentation](https://github.com/wayscience/ome-arrow/tree/main/CONTRIBUTING.md) for more details on contributions, development, and testing.
## Related projects
The following projects use or inspired OME Arrow — check them out!
- [`napari-ome-arrow`](https://github.com/WayScience/napari-ome-arrow): enables you to view OME Arrow and related images.
- [`nViz`](https://github.com/WayScience/nViz): focuses on ingesting and visualizing various 3D image data.
- [`CytoDataFrame`](https://github.com/cytomining/CytoDataFrame): provides a DataFrame-like experience for viewing feature and microscopy image data within Jupyter notebook interfaces and creating OME Parquet files.
- [`coSMicQC`](https://github.com/cytomining/coSMicQC): performs quality control on microscopy feature datasets, visualized using CytoDataFrames.
| text/markdown | Dave Bunten | null | null | null | null | null | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"bioio>=3",
"bioio-ome-tiff>=1.4",
"bioio-ome-zarr>=3.0.3",
"bioio-tifffile>=1.3",
"fire>=0.7",
"matplotlib>=3.10.7",
"numpy>=2.2.6",
"pandas>=2.2.3",
"pillow>=12",
"pyarrow>=22",
"jax>=0.4; extra == \"dlpack\"",
"torch>=2.1; extra == \"dlpack\"",
"jax>=0.4; extra == \"dlpack-jax\"",
"torc... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:34:19.928551 | ome_arrow-0.0.7.tar.gz | 35,925,127 | cb/ab/64ec2bbaf4460ff413fcb583b95d69d76367c457e7c9e36718b89f2d619f/ome_arrow-0.0.7.tar.gz | source | sdist | null | false | 3043995d4a28aad9c753d6fb62e1d530 | eaea4a70fe8f28cc774a136191e011323d4c8ace48f4d6888bc78e6d0cebc314 | cbab64ec2bbaf4460ff413fcb583b95d69d76367c457e7c9e36718b89f2d619f | null | [
"LICENSE"
] | 232 |
2.4 | web3b0x | 0.2 | b0x: Tiny crypto key lockbox for chat based AI agent such as OpenClaw or Nanobot | # b0x
A tiny crypto key lockbox for chat-based AI agents such as OpenClaw or Nanobot.
It works with the Base chain and Sepolia.
To install, simply run:
```bash
pip install web3b0x
```
Then start it with:
```bash
python -m b0x
```
Or run it directly with pipx:
```bash
pipx run web3b0x
```
Hope you like it!
| text/markdown | 0xKJ | kernel1983@gmail.com | null | null | MIT | ethereum | [] | [] | https://github.com/w3connect/b0x | null | <4,>=3.8 | [] | [] | [] | [
"web3>=6.0.0",
"tornado>=6.0.0",
"pyotp>=2.9.0",
"qrcode>=8.2"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-18T21:33:15.141222 | web3b0x-0.2.tar.gz | 6,565 | c7/d4/4392322a26187113d727fb4dcec3652e70515e7e1e8e09d95d31fa663424/web3b0x-0.2.tar.gz | source | sdist | null | false | 3a6581bfbe6fb8048ea3bab667394aea | eecbcccc3faa6fb70ca3364912b82e3a65cb9d313ed956bb9ea1e115ee5b7493 | c7d44392322a26187113d727fb4dcec3652e70515e7e1e8e09d95d31fa663424 | null | [
"LICENSE"
] | 159 |
2.4 | 248-sdk | 0.1.4 | Shared SDK for 248 products - MongoDB models, PostgreSQL models, and SmartLead schemas | # 248 SDK
Shared SDK for 248 products - MongoDB models, PostgreSQL models, and SmartLead schemas.
## Installation
```bash
# From Git
pip install git+https://github.com/248ai/248-sdk.git
# With PostgreSQL support
pip install "248-sdk[postgres] @ git+https://github.com/248ai/248-sdk.git"
# Local development (editable)
pip install -e .
```
## Quick Start
### MongoDB Models
```python
import asyncio
from sdk_248 import mongodb
from sdk_248.models import Campaign, CampaignStatus
async def main():
# Initialize MongoDB connection
await mongodb.initialize(
connection_string="mongodb://localhost:27017",
database_name="mydb"
)
# Query campaigns
campaigns = await Campaign.find(
Campaign.status == CampaignStatus.RUNNING
).to_list()
for campaign in campaigns:
print(f"Campaign: {campaign.name}")
# Close connection
await mongodb.close()
asyncio.run(main())
```
### PostgreSQL Models
```python
from sdk_248 import postgres
from sdk_248.models import Organization
# Initialize PostgreSQL
postgres.initialize(
connection_string="postgresql://user:pass@localhost:5432/db"
)
# Use sessions
with postgres.session() as session:
orgs = session.query(Organization).all()
for org in orgs:
print(f"Organization: {org.name}")
```
### Using Models Without Database
```python
from sdk_248.models import Lead, CampaignStatus
# Create models for validation/serialization
lead = Lead(
email="test@example.com",
first_name="John",
last_name="Doe"
)
# Serialize to dict
lead_dict = lead.model_dump()
# Use enums
status = CampaignStatus.RUNNING
```
## Available Exports
### From `sdk_248`
- `MongoDBManager`, `mongodb` - MongoDB connection management
- `PostgresManager`, `postgres`, `Base` - PostgreSQL connection management
### From `sdk_248.models`
#### Enums
- `CampaignStatus` - Campaign status values
- `EmailOrchestrator` - Email orchestrator options
- `AppType` - Application types
- `NodeType` - Node types for responder actions
- `ActionRunStatus` - Action run status values
- `CategoriserStatus` - Categoriser status values
#### MongoDB Models
- `Campaign` - Main campaign document (Beanie)
- `Lead` - Lead embedded model
- `ActionRun` - Action run embedded model
- `Node` - Responder node model
#### PostgreSQL Models
- `Organization` - Organization entity
- `OrganizationStatus` - Organization status enum
### From `sdk_248.schemas`
- `CampaignSequenceInput` - Campaign sequence configuration
- `CampaignReplyWebhookSchema` - Reply webhook payload
- `EmailSentWebhookSchema` - Email sent webhook payload
- `Organization` - Organization response schema (Pydantic)
- `OrganizationCreate` - Organization create schema
- `OrganizationUpdate` - Organization update schema
| text/markdown | null | Wenzo Rithelly <rithellyenzo@gmail.com> | null | null | MIT | beanie, mongodb, postgresql, sdk, smartlead, sqlalchemy | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"beanie<3.0.0,>=2.0.1",
"motor<4.0.0,>=3.3.0",
"pydantic<3.0.0,>=2.0.0",
"sqlalchemy<3.0.0,>=2.0.0",
"mypy>=1.8.0; extra == \"all\"",
"psycopg2-binary<3.0.0,>=2.9.0; extra == \"all\"",
"pytest-asyncio>=0.23.0; extra == \"all\"",
"pytest-cov>=4.0.0; extra == \"all\"",
"pytest>=8.0.0; extra == \"all\"... | [] | [] | [] | [
"Repository, https://github.com/248ai/248-sdk"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T21:33:13.742012 | 248_sdk-0.1.4.tar.gz | 14,097 | 62/ca/16c5729200cd434c36b705d8be12ea2b2a17c63decebe8aae15676369432/248_sdk-0.1.4.tar.gz | source | sdist | null | false | f3c6c32d23356ff936010b8f21396a67 | aa04c293d601c23588a11f4dbc9629618d8c36d22a0777f54c3216c3d940db8c | 62ca16c5729200cd434c36b705d8be12ea2b2a17c63decebe8aae15676369432 | null | [] | 282 |
2.4 | dssketch | 1.1.13 | Human-friendly alternative to DesignSpace XML - provides simple, intuitive text format for variable font design with 84-97% size reduction | # DesignSpace Sketch
**Human-friendly alternative to DesignSpace XML**
DSSketch provides a simple, intuitive text format for describing variable fonts, replacing the overcomplicated and verbose XML format with clean, readable text that font designers can easily understand and edit by hand. This makes variable font development more accessible and less error-prone.
**The core philosophy:** Transform complex, verbose XML into simple, human-readable format that achieves 84-97% size reduction while maintaining full functionality.
## Why DSSketch?
### Before: DesignSpace XML (verbose, error-prone)
```xml
<?xml version='1.0' encoding='UTF-8'?>
<designspace format="5.0">
<axes>
<axis tag="wght" name="weight" minimum="100" maximum="900" default="400">
<labelname xml:lang="en">Weight</labelname>
<map input="100" output="0"/>
<map input="300" output="211"/>
<map input="400" output="356"/>
<map input="500" output="586"/>
<map input="700" output="789"/>
<map input="900" output="1000"/>
<labels ordering="0">
<label uservalue="100" name="Thin"/>
<label uservalue="300" name="Light"/>
<label uservalue="400" name="Regular" elidable="true"/>
<label uservalue="500" name="Medium"/>
<label uservalue="700" name="Bold"/>
<label uservalue="900" name="Black"/>
</labels>
</axis>
<axis tag="ital" name="italic" values="0 1" default="0">
<labelname xml:lang="en">Italic</labelname>
<labels ordering="1">
<label uservalue="0" name="Upright" elidable="true"/>
<label uservalue="1" name="Italic"/>
</labels>
</axis>
</axes>
<rules>
<rule name="heavy alternates">
<conditionset>
<condition name="weight" minimum="600" maximum="1000"/>
</conditionset>
<sub name="cent" with="cent.rvrn"/>
<sub name="cent.old" with="cent.old.rvrn"/>
<sub name="cent.sc" with="cent.sc.rvrn"/>
<sub name="cent.tln" with="cent.tln.rvrn"/>
<sub name="cent.ton" with="cent.ton.rvrn"/>
<sub name="dollar" with="dollar.rvrn"/>
<sub name="dollar.old" with="dollar.old.rvrn"/>
<sub name="dollar.sc" with="dollar.sc.rvrn"/>
<sub name="dollar.tln" with="dollar.tln.rvrn"/>
<sub name="dollar.ton" with="dollar.ton.rvrn"/>
</rule>
</rules>
<sources>
<source filename="sources/SuperFont-Thin.ufo" familyname="SuperFont" stylename="Thin">
<location>
<dimension name="Weight" xvalue="0"/>
<dimension name="Italic" xvalue="0"/>
</location>
</source>
<source filename="sources/SuperFont-Regular.ufo" familyname="SuperFont" stylename="Regular">
<location>
<dimension name="Weight" xvalue="356"/>
<dimension name="Italic" xvalue="0"/>
</location>
</source>
<!-- ... 50+ more lines for simple 2-axis font ... -->
</sources>
<instances>
<!-- ... hundreds of lines for instance definitions ... -->
</instances>
</designspace>
```
### After: DSSketch (clean, intuitive)
```dssketch
family SuperFont
path sources
axes
wght 100:400:900
Thin > 0
Light > 211
Regular > 356 @elidable
Medium > 586
Bold > 789
Black > 1000
ital discrete
Upright @elidable
Italic
sources [wght, ital]
SuperFont-Thin [0, 0]
SuperFont-Regular [356, 0] @base
SuperFont-Black [1000, 0]
SuperFont-Thin-Italic [0, 1]
SuperFont-Italic [356, 1]
SuperFont-Black-Italic [1000, 1]
rules
dollar* cent* > .rvrn (weight >= Bold) "heavy alternates"
instances auto
```
**Result: 93% smaller, infinitely more readable**
## Key Advantages
### 1. **Human-Friendly Syntax**
- **Intuitive axis definitions**: `wght 100:400:900` instead of verbose XML attributes
- **Simple source coordinates**: `[400, 0]` instead of complex XML dimension tags
- **Readable rules**: `dollar > .rvrn (weight >= 400)` instead of nested XML structures
- **Common directory paths**: `path sources` eliminates repetitive file paths
### 2. **Smart Automation**
- **Auto instance generation**: `instances auto` creates all meaningful combinations
- **Standard weight mapping**: Recognizes `Regular > 400`, `Bold > 700` automatically
- **Wildcard rule expansion**: `* > .alt` finds all glyphs with .alt variants
- **UFO validation**: Automatically validates source files and extracts glyph lists
### 3. **Label-Based Syntax**
Make your font files even more readable with label-based coordinates and ranges:
#### Label-Based Source Coordinates
```dssketch
# Traditional numeric format:
sources [wght, ital]
Font-Regular [362, 0] @base
Font-Black [1000, 1]
# Label-based format:
sources [wght, ital]
Font-Regular [Regular, Upright] @base
Font-Black [Black, Italic]
```
#### Label-Based Axis Ranges
```dssketch
# Traditional numeric format:
axes
wght 100:400:900
wdth 75:100:125
# Label-based ranges for weight and width:
axes
weight Thin:Regular:Black # Auto-converts to 100:400:900
width Condensed:Normal:Extended # Auto-converts to 80:100:150
```
#### Human-Readable Axis Names
```dssketch
# Short tags (traditional):
axes
wght 100:400:900
wdth 75:100:125
ital discrete
# Human-readable names:
axes
weight 100:400:900 # Auto-converts to wght
width 75:100:125 # Auto-converts to wdth
italic discrete # Auto-converts to ital
```
**Supported names:** `weight` → `wght`, `width` → `wdth`, `italic` → `ital`, `slant` → `slnt`, `optical` → `opsz`
#### Label-Based Rule Conditions
```dssketch
# Traditional numeric format:
rules
dollar > dollar.heavy (weight >= 700) "heavy dollar"
ampersand > ampersand.fancy (weight >= 700 && width >= 110) "compound"
# Label-based format:
rules
dollar > dollar.heavy (weight >= Bold) "heavy dollar"
ampersand > ampersand.fancy (weight >= Bold && width <= Wide) "compound"
g > g.alt (Regular <= weight <= Bold) "range condition"
```
**Benefits:**
- More readable: `weight >= Bold` vs `weight >= 700`
- Self-documenting: labels show semantic meaning
- Works with all operators: `>=`, `<=`, `==`, ranges
- Supports all axes: standard and custom
- Can mix numeric and label values
```dssketch
# Complete label-based example
family SuperFont
path sources
axes
weight Thin:Regular:Black
Thin > 0
Light > 211
Regular > 356 @elidable
Medium > 586
Bold > 789
Black > 1000
italic discrete
Upright @elidable
Italic
sources [wght, ital]
SuperFont-Thin [Thin, Upright]
SuperFont-Regular [Regular, Upright]
SuperFont-Black [Black, Upright]
SuperFont-Thin-Italic [Thin, Italic]
SuperFont-Italic [Regular, Italic]
SuperFont-Black-Italic [Black, Italic]
rules
dollar* cent* > .rvrn (weight >= Bold) "heavy alternates"
A > A.alt (Regular <= weight <= Bold) "medium weight"
instances auto
```
### 4. **Advanced Features Made Simple**
#### Discrete Axes
```dssketch
# Instead of complex XML values="0 1" attributes:
ital discrete
Upright @elidable # No need for > 0
Italic # No need for > 1
```
#### Flexible Substitution Rules
```dssketch
rules
# Simple glyph substitution with labels
dollar > dollar.heavy (weight >= Bold)
# Wildcard patterns with labels
A* > .alt (weight >= Bold) # All glyphs starting with A
* > .rvrn (weight >= Medium) # All glyphs with .rvrn variants
# Complex conditions with labels
ampersand > .fancy (weight >= Bold && width <= Wide)
g > g.alt (Regular <= weight <= Bold) # Range conditions
# Numeric conditions still work
thin* > .ultra (weight >= -100) # Negative coordinates supported
b > b.alt (450 <= weight <= Bold) # Mix labels and numbers
```
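The wildcard behaviour above can be pictured with a short sketch. This is illustrative only, not dssketch's implementation: a pattern selects candidate glyphs, and a rule is emitted only where the suffixed variant actually exists in the sources.

```python
def expand_rule(pattern, suffix, glyphs):
    """Illustrative sketch (not dssketch's implementation): expand a
    wildcard substitution rule against a glyph list, keeping only
    glyphs whose suffixed variant actually exists."""
    glyph_set = set(glyphs)
    if pattern == "*":
        candidates = glyphs
    elif pattern.endswith("*"):
        candidates = [g for g in glyphs if g.startswith(pattern[:-1])]
    else:
        candidates = [pattern]
    return [(g, g + suffix) for g in candidates if g + suffix in glyph_set]

glyphs = ["A", "A.alt", "AE", "AE.alt", "B", "dollar", "dollar.rvrn"]
print(expand_rule("A*", ".alt", glyphs))  # [('A', 'A.alt'), ('AE', 'AE.alt')]
print(expand_rule("*", ".rvrn", glyphs))  # [('dollar', 'dollar.rvrn')]
```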
#### Explicit Axis Order Control
```dssketch
# Control instance generation order
axes
wdth 60:100:200 # First in names: "Condensed Thin" - "{width} {weight}"
Condensed > 350.0
Normal > 560.0 @elidable
wght 100:400:900 # Second in names
Thin > 100
Regular > 400
Black > 900
sources [wght, wdth] # Coordinates follow this order: [weight, width]
Thin-Condensed [100, 350]
Regular-Condensed [400, 350] @base
Black-Condensed [900, 350]
Thin-Normal [100, 560]
Regular-Normal [400, 560]
Black-Normal [900, 560]
```
#### UFO Layer Support
Store multiple masters as layers within a single UFO file:
```dssketch
sources [wght]
# Default layer (foreground) - base master
Font-Master.ufo [400] @base
# Intermediate masters stored as layers in same UFO
Font-Master.ufo [500] @layer="wght500"
Font-Master.ufo [600] @layer="wght600"
Font-Master.ufo [700] @layer="bold-layer"
# Separate UFO for extreme weight
Font-Black.ufo [900]
```
**Benefits:**
- **Reduces file count**: Multiple masters in one UFO
- **Organized structure**: Related masters kept together
- **Full bidirectional support**: DesignSpace `layerName` ↔ DSSketch `@layer`
**Syntax formats:**
- `@layer="layer name"` - with double quotes (supports spaces)
- `@layer='layer name'` - with single quotes
- `@layer=layername` - without quotes (no spaces)
Can be combined with `@base`: `Font.ufo [400] @base @layer="default"`
#### Custom Axis
```dssketch
# Control instance generation order
axes
CONTRAST CNTR 0:0:100 # First in names: "C2 Condensed Thin" - "{CNTR} {width} {weight}"
0 C0 > 100.0 @elidable
50 C1 > 600.0
100 C2 > 900.0
wdth 60:100:100
Condensed > 350.0
Normal > 560.0 @elidable
wght 100:400:900 # Third in names
Thin > 100
Regular > 400
Black > 900
sources [wght, wdth, CONTRAST] # Coordinates follow this order: [weight, width, CONTRAST]
Thin-Condensed-C2 [Thin, Condensed, C2]
Regular-Condensed-C2 [Regular, Condensed, C2] @base
Black-Condensed-C2 [Black, Condensed, C2]
```
## Installation & Usage
### Installation
#### Using uv (recommended)
```bash
# Install uv if not already installed
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install DSSketch
uv pip install dssketch
# Or install from source (for development)
uv pip install -e .
# Install with development dependencies
uv pip install -e ".[dev]"
```
#### Using pip
```bash
pip install dssketch
# Or install from source
pip install -e .
```
### Command Line
```bash
# Convert DesignSpace → DSSketch (with UFO validation)
dssketch font.designspace
# Convert DSSketch → DesignSpace
dssketch font.dssketch
# With explicit output
dssketch input.designspace -o output.dssketch
# Skip UFO validation (not recommended)
dssketch font.dssketch --no-validation
# avar2 format options
dssketch font.designspace --matrix # Matrix format (default)
dssketch font.designspace --linear # Linear format
# Without installation (using Python module directly)
python -m dssketch.cli font.designspace
```
### Python API
```python
import dssketch
from fontTools.designspaceLib import DesignSpaceDocument
# High-level API functions (recommended)
# Convert DesignSpace object to DSSketch file
ds = DesignSpaceDocument()
ds.read("MyFont.designspace")
dssketch.convert_to_dss(ds, "MyFont.dssketch")
# With options: vars_threshold (0=disabled, 3=default), avar2_format ("matrix"/"linear")
dssketch.convert_to_dss(ds, "MyFont.dssketch", vars_threshold=0) # no variables
dssketch.convert_to_dss(ds, "MyFont.dssketch", avar2_format="linear") # linear format
# Convert DSSketch file to DesignSpace object
ds = dssketch.convert_to_designspace("MyFont.dssketch")
# Convert DesignSpace to DSSketch string
dss_string = dssketch.convert_designspace_to_dss_string(ds)
dss_string = dssketch.convert_designspace_to_dss_string(ds, vars_threshold=2) # more variables
# Work with DSSketch strings (for programmatic generation)
dss_content = """
family MyFont
axes
wght 100:400:900
Thin > 100
Regular > 400
Black > 900
sources
Thin [100]
Regular [400] @base
Black [900]
"""
# Convert DSSketch string to DesignSpace object
ds = dssketch.convert_dss_string_to_designspace(dss_content, base_path="./")
# Convert DesignSpace object to DSSketch string
dss_string = dssketch.convert_designspace_to_dss_string(ds)
```
## DSSketch Format Examples
### Basic 2-Axis Font
```dssketch
family MyFont
path path_to_sources
axes
wght 300:400:700
Light > 300
Regular > 390 @elidable
Bold > 700
ital discrete
Upright @elidable
Italic
sources [wght, ital]
Light [Light, 0]
Regular [Regular, 0] @base
Bold [Bold, 0]
LightItalic [Light, 1]
Italic [Regular, 1]
BoldItalic [Bold, 1]
instances auto
skip
# Skip Light Italic (optional - removes unwanted combinations)
Light Italic
```
### Complex Multi-Axis Font
```dssketch
family SuperFont
suffix VF
axes
wght Thin:Regular:Black # user space 100:400:900
Thin > 0 # 100
Light > 196 # 300
Regular > 362 @elidable # 400
Medium > 477 # 500
Bold > 732 # 700
Black > 1000 # 900
wdth Condensed:Normal:Extended
Condensed > 60
Normal > 100 @elidable
Extended > 200
ital discrete
Upright @elidable
Italic
sources [wght, wdth, ital]
Thin [Thin, Condensed, Upright]
Regular [Regular, Condensed, Upright] @base
Black [Black, Condensed, Upright]
ThinItalic [Thin, Condensed, Italic]
Italic [Regular, Condensed, Italic]
BlackItalic [Black, Condensed, Italic]
ThinExtended [Thin, Extended, Upright]
RegularExtended [Regular, Extended, Upright]
BlackExtended [Black, Extended, Upright]
ThinExtendedItalic [Thin, Extended, Italic]
ExtendedItalic [Regular, Extended, Italic]
BlackExtendedItalic [Black, Extended, Italic]
rules
# Currency symbols get heavy alternates
dollar cent > .rvrn (weight >= Medium)
# Wildcard patterns
A* > .alt (weight >= Bold) # All A-glyphs get alternates
dollar cent at number > .fancy (weight >= 700 && width >= 150) # Complex conditions
instances auto
skip
# Skip extreme combinations
Thin Italic
Extended Bold Italic
```
### Advanced Rules and Patterns
```dssketch
family AdvancedFont
axes
wght 100:400:900
wdth 60:100:200
CONTRAST CNTR 0:50:100 # Custom axis (uppercase)
sources [wght, wdth, CONTRAST]
Light [100, 100, 0] @base
Bold [900, 100, 100]
rules
# Exact glyph substitution
dollar > dollar.heavy (weight >= 500)
# Multiple glyphs with same target
dollar cent > .currency (weight >= 600)
# Prefix wildcards (all glyphs starting with pattern)
A* > .stylistic (weight >= 700) # A, AE, Aacute, etc.
num* > .proportional (CONTRAST >= 50) # number variants
# Universal wildcard (all glyphs with matching targets)
S* G* > .rvrn (weight >= Regular) # Only creates rules where .rvrn exists
Q* > .alt (weight >= 600 && width >= 150) # Complex conditions
# Range conditions
o > o.round (Regular <= weight <= Bold)
# Negative coordinates (supported in design space)
ultra* > .thin (weight >= -100)
back* > .forward (CONTRAST <= -25)
instances auto
```
### Font with UFO Layers
```dssketch
# Using layers to store intermediate masters in single UFO files
# Reduces file count while maintaining full design flexibility
family FontWithLayers
axes
wght 100:400:900
Thin > 100
Regular > 400 @elidable
Bold > 700
Black > 900
wdth 75:100:125
Condensed > 75
Normal > 100 @elidable
Wide > 125
sources [wght, wdth]
# Main master files (default layer)
Font-Regular.ufo [400, 100] @base
Font-Thin.ufo [100, 100]
Font-Black.ufo [900, 100]
# Width extremes
Font-Condensed.ufo [400, 75]
Font-Wide.ufo [400, 125]
# Intermediate weight masters as layers in Font-Regular.ufo
Font-Regular.ufo [300, 100] @layer="wght300"
Font-Regular.ufo [500, 100] @layer="wght500"
Font-Regular.ufo [600, 100] @layer="wght600"
Font-Regular.ufo [700, 100] @layer="wght700"
# Condensed intermediates as layers
Font-Condensed.ufo [300, 75] @layer="wght300-condensed"
Font-Condensed.ufo [700, 75] @layer="wght700-condensed"
instances auto
```
## Key Concepts
### User Space vs Design Space
```
User Space = Values users see (CSS font-weight: 400) = OS/2 table
Design Space = Actual coordinates where sources are located
Mapping example:
Regular > 362 means:
- User requests font-weight: 400 (Regular)
- Master is located at coordinate 362 in design space
- CSS 400 maps to design space 362
```
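As a sketch of how such a mapping behaves (DesignSpace `map` elements interpolate linearly between the defined points), a minimal piecewise-linear user-to-design conversion might look like this; the function and mapping values here are illustrative, mirroring the example above.

```python
def user_to_design(user, mapping):
    """Piecewise-linear interpolation from user space to design space.
    mapping: list of (user, design) pairs, e.g. Regular > 362."""
    pts = sorted(mapping)
    if user <= pts[0][0]:
        return pts[0][1]
    for (u0, d0), (u1, d1) in zip(pts, pts[1:]):
        if user <= u1:
            return d0 + (d1 - d0) * (user - u0) / (u1 - u0)
    return pts[-1][1]

# Light > 0, Regular > 362, Bold > 1000 from the example above.
mapping = [(300, 0), (400, 362), (700, 1000)]
print(user_to_design(400, mapping))  # -> 362.0 (CSS 400 lands at design 362)
print(user_to_design(550, mapping))  # -> 681.0 (halfway between Regular and Bold)
```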
### Rule Conditions Use Design Space
**Important**: All rule conditions use design space coordinates, not user space values.
```dssketch
axes
wght 300:400:700
Light > 0 # User 300 → Design 0
Regular > 362 # User 400 → Design 362
Bold > 1000 # User 700 → Design 1000
rules
# This condition uses design space coordinate 362, not user space 400
dollar > .heavy (weight >= 362) # Activates at Regular and heavier
```
### Axis Mapping Formats
DSSketch supports three formats for defining axis mappings, giving you full control over user-space and design-space coordinates:
#### 1. Standard Label (inferred user-space)
```dssketch
axes
wght 100:400:900
Light > 251 # Uses standard user-space value (300) → design-space value 251
Regular > 398 # Uses standard user-space value (400) → design-space value 398
Bold > 870 # Uses standard user-space value (700) → design-space value 870
```
**How it works**: For known labels (Light, Regular, Bold, etc.), user-space values are automatically taken from standard mappings in `data/unified-mappings.yaml`.
#### 2. Custom Label (design-space as user-space)
```dssketch
axes
wght 100:400:900
MyCustom > 500 # Custom label, user_value = design_value = 500
```
**How it works**: For unknown labels, user-space value equals design-space value.
#### 3. Explicit User-Space Mapping (full control)
```dssketch
axes
wght 50:500:980
50 UltraThin > 0 # Explicit: user=50, design=0
200 Light > 230 # Override: user=200 instead of standard 300
500 Regular > 420 # Override: user=500 instead of standard 400
980 DeepBlack > 1000 # Custom: user=980, design=1000
wdth 60:100:200
Condensed > 380 # Standard: user=80 (from mappings)
Normal > 560 # Standard: user=100 (from mappings)
150 Wide > 700 # Override: user=150 instead of standard 100
200 Extended > 1000 # Override: user=200 instead of standard 125
CUSTOM CSTM 0:50:100
0 Low > 0 # Custom axis: user=0, design=0
50 Medium > 100 # Custom axis: user=50, design=100
100 High > 200 # Custom axis: user=100, design=200
```
**Format**: `user_value label > design_value`
**Use cases**:
- **Create custom labels** with explicit user-space values for non-standard scales
- **Override standard mappings** with different user-space coordinates
- **Define custom axes** with meaningful user-space values
- **Fine-tune weight/width scales** beyond standard CSS values
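The three formats can be distinguished mechanically. The sketch below is illustrative only — it is not DSSketch's actual parser, and `STANDARD_USER` is a tiny stand-in for `data/unified-mappings.yaml`:

```python
import re

# Illustrative sketch only -- not DSSketch's actual parser. Classifies the
# three mapping-line formats described above.
MAPPING_LINE = re.compile(
    r"^(?:(?P<user>\d+(?:\.\d+)?)\s+)?"   # optional explicit user-space value
    r"(?P<label>[A-Za-z]\w*)\s*>\s*"      # label, then '>'
    r"(?P<design>\d+(?:\.\d+)?)"          # design-space value
)

STANDARD_USER = {"Light": 300, "Regular": 400, "Bold": 700}  # stand-in mapping

def parse_mapping(line):
    m = MAPPING_LINE.match(line.strip())
    if m is None:
        raise ValueError(f"not a mapping line: {line!r}")
    label, design = m.group("label"), float(m.group("design"))
    if m.group("user") is not None:           # format 3: explicit user value
        user = float(m.group("user"))
    elif label in STANDARD_USER:              # format 1: standard label
        user = float(STANDARD_USER[label])
    else:                                     # format 2: custom label
        user = design
    return label, user, design

print(parse_mapping("Light > 251"))      # ('Light', 300.0, 251.0)
print(parse_mapping("MyCustom > 500"))   # ('MyCustom', 500.0, 500.0)
print(parse_mapping("200 Light > 230"))  # ('Light', 200.0, 230.0)
```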
**Example from `examples/MegaFont-3x5x7x3-Variable.dssketch`**:
```dssketch
axes
wdth 60:100:200
Compressed > 0
Condensed > 380
Normal > 560 @elidable
150 Wide > 700 # user=150 (custom), design=700
200 Extended > 1000 # user=200 (custom), design=1000
wght Thin:Regular:Black
Thin > 0
200 Light > 230 # user=200 (override standard 300), design=230
Regular > 420 @elidable
Bold > 725
Black > 1000
```
### Family Auto-Detection
The `family` field is optional in DSSketch. If not specified, DSSketch will automatically detect the family name from the base source UFO file:
```dssketch
# Family name is optional - will be detected from UFO
path sources
axes
wght 100:400:900
Thin > 100
Regular > 400
Black > 900
sources [wght]
Thin [100]
Regular [400] @base # Family name detected from this UFO's font.info.familyName
Black [900]
instances auto
```
**How it works:**
- When `family` is missing, DSSketch reads the base source UFO (`@base` flag)
- Extracts `font.info.familyName` from the UFO using fontParts
- Falls back to "Unknown" if UFO is not found or has no familyName
- Logs a warning if auto-detection is used (non-critical)
This is useful for quick prototyping or when the family name should always match the UFO metadata.
### Discrete Axes
Traditional XML requires complex `values="0 1"` attributes. DSSketch makes it simple:
```dssketch
# Old way (still supported):
ital 0:0:1
Upright > 0
Italic > 1
# New way (recommended):
ital discrete
Upright @elidable
Italic
```
### Automatic Instance Generation
The `instances auto` feature intelligently creates **all possible combinations** of axis labels using combinatorial logic (`itertools.product`):
```dssketch
axes
ital discrete # Controls name order: Italic first
Upright @elidable
Italic
wght 100:400:900 # Weight second in names
Thin > 100
Regular > 400 @elidable
Black > 900
instances auto # Generates: "Thin", "Regular", "Black", "Italic Thin", "Italic", "Italic Black"
```
**How it works:**
1. **Combinatorial generation**: Creates cartesian product of all axis labels
- Axis 1 (ital): `[Upright, Italic]` × Axis 2 (wght): `[Thin, Regular, Black]`
- Result: 2 × 3 = **6 combinations**
2. **Elidable name cleanup**: Removes redundant `@elidable` labels
- `Upright Thin` → `Thin`
- `Upright Regular` → `Regular` (both parts elidable)
- `Italic Regular` → `Italic` (Regular is elidable)
3. **Final instances**: `Thin`, `Regular`, `Black`, `Italic`, `Italic Thin`, `Italic Black`
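The steps above can be sketched in a few lines. This is assumed logic, not DSSketch's actual code: a cartesian product of axis labels in axis order, followed by removal of elidable labels, with an all-elidable combination falling back to a default name:

```python
from itertools import product

# Sketch of the generation steps above (assumed logic, not DSSketch's code).
axes = [
    ("ital", ["Upright", "Italic"]),
    ("wght", ["Thin", "Regular", "Black"]),
]
elidable = {"Upright", "Regular"}

names = []
for combo in product(*(labels for _tag, labels in axes)):
    kept = [label for label in combo if label not in elidable]
    # all-elidable combination ("Upright Regular") falls back to a default name
    names.append(" ".join(kept) if kept else "Regular")

print(names)
# ['Thin', 'Regular', 'Black', 'Italic Thin', 'Italic', 'Italic Black']
```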
#### Fallback for Axes Without Labels
When axes have no mappings defined (only `min:def:max`), DSSketch automatically generates instances from the axis range values:
```dssketch
family QuickPrototype
axes
wght 100:400:900 # No labels defined
wdth 75:100:125 # No labels defined
instances auto
# Generates 9 instances (3 × 3):
# wght100 wdth75, wght100 wdth100, wght100 wdth125
# wght400 wdth75, wght400 wdth100, wght400 wdth125
# wght900 wdth75, wght900 wdth100, wght900 wdth125
```
**How fallback works:**
- Uses axis `minimum`, `default`, and `maximum` values as instance points
- Instance names use `tag+value` format (e.g., `wght400 wdth100`)
- Useful for quick prototyping without defining full axis mappings
This also works with avar2 fonts, where additional points from avar2 input mappings are included.
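The label-less fallback can be sketched as follows (assumed logic — each axis contributes its minimum/default/maximum, and names use the tag+value format):

```python
from itertools import product

# Sketch of the label-less fallback (assumed logic, not DSSketch's code).
axes = {"wght": (100, 400, 900), "wdth": (75, 100, 125)}

instances = [
    " ".join(f"{tag}{value}" for tag, value in zip(axes, combo))
    for combo in product(*axes.values())
]
print(len(instances))  # 9
print(instances[0])    # 'wght100 wdth75'
```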
### Instance Skip Functionality
When using `instances auto`, you can exclude specific instance combinations with the `skip` subsection. This is useful for removing impractical or unwanted style combinations:
```dssketch
axes
wdth 60:100:200
Condensed > 60
Normal > 100 @elidable
Extended > 200
wght 100:400:900
Thin > 100
Light > 300
Regular > 400 @elidable
Bold > 700
ital discrete
Upright @elidable
Italic
instances auto
skip
# Skip extreme thin italic combinations (too fragile)
Thin Italic
Light Italic
# Skip extremely wide and heavy combinations
Extended Bold
# Without skip: 3 widths × 4 weights × 2 italics = 24 instances
# With 3 skipped: 24 - 3 = 21 instances
```
**Important rules for skip:**
1. **Use FINAL instance names** (after elidable cleanup):
```dssketch
axes
wdth 60:100:200
Condensed > 60
Normal > 100 @elidable
wght 100:400:900
Regular > 400 @elidable
Bold > 700
instances auto
skip
# ✅ CORRECT: Uses final name after "Normal" and "Regular" are removed
Bold
# ❌ WRONG: Would not match because "Normal Regular Bold" becomes "Bold"
Normal Regular Bold
```
2. **Follow axis order** from axes section:
```dssketch
# Axes order: wdth → wght → ital
skip
Condensed Thin Italic # ✅ Correct: width → weight → italic
Thin Condensed Italic # ❌ Wrong: doesn't match axis order
```
3. **Comments supported**: Use `#` for inline comments explaining skip rules
**Example with multiple elidable labels:**
```dssketch
family MegaFont
axes
CONTRAST CNTR 0:0:100
NonContrast > 0 @elidable
HighContrast > 100
wdth 60:100:200
Condensed > 60
Normal > 100 @elidable
Extended > 200
wght 100:400:900
Thin > 100
Regular > 400 @elidable
Bold > 700
instances auto
skip
# "NonContrast Normal Thin" → "Thin" (after cleanup) - so we skip "Thin"
Thin
# "NonContrast Normal Regular" → "Regular" (after cleanup)
Regular
# "HighContrast Normal Thin" → "HighContrast Thin" (after cleanup)
HighContrast Thin
# "NonContrast Extended Thin" → "Extended Thin" (after cleanup)
Extended Thin
# Generation process:
# 1. Create all combinations: 2 contrasts × 3 widths × 3 weights = 18 combinations
# 2. Apply elidable cleanup (remove NonContrast, Normal, Regular where appropriate)
# 3. Check skip rules on FINAL names
# 4. Generate remaining instances
```
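The matching behaviour described in the comments above can be sketched like this (an assumption mirroring the stated rules, not the real implementation): skip rules are compared against FINAL instance names, i.e. after elidable cleanup, with labels in axis order:

```python
# Sketch mirroring the skip rules above (assumed, not the real implementation).
def final_name(parts, elidable, default="Regular"):
    kept = [p for p in parts if p not in elidable]
    return " ".join(kept) if kept else default

elidable = {"NonContrast", "Normal", "Regular"}
skip = {"Thin", "Regular", "HighContrast Thin", "Extended Thin"}

assert final_name(["NonContrast", "Normal", "Thin"], elidable) in skip       # skipped
assert final_name(["HighContrast", "Normal", "Bold"], elidable) not in skip  # kept
```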
**Production example from `examples/MegaFont-WithSkip.dssketch`:**
```dssketch
instances auto
skip
# Skip extreme thin weights with reverse slant (too fragile)
Compressed Thin Reverse
Condensed Thin Reverse
Extended Thin Reverse
# Skip low contrast with compressed thin (readability issues)
LowContrast Compressed Thin
LowContrast Compressed Thin Slant
LowContrast Compressed Thin Reverse
# Skip high contrast extended black (too heavy/wide)
HighContrast Extended Black Slant
HighContrast Extended Black Reverse
# Skip middle-weight compressed (redundant)
Compressed Medium Reverse
Compressed Extrabold
# Result: 315 total combinations - 14 skipped = 301 instances generated
```
### Skip Rule Validation
DSSketch validates skip rules at **two levels** to ensure correctness and provide helpful feedback:
**1. ERROR Level - Invalid Label Detection**
Stops conversion if skip rules contain labels that don't exist in axis definitions:
```
# Example DSSketch with error:
axes
wght 100:700
Thin > 100
Bold > 700
ital discrete
Upright
Italic
instances auto
skip
Heavy Italic # ERROR: "Heavy" not defined in any axis
```
**Error message:**
```
ERROR: Skip rule 'Heavy Italic' contains label 'Heavy' which is not defined in any axis.
Available labels: Bold, Italic, Thin, Upright
```
**2. WARNING Level - Unused Skip Rule Detection**
Logs warnings for skip rules that never match any generated instance (which may indicate a typo, or a name changed by elidable cleanup):
```
# Example DSSketch with warning:
axes
wght 100:700
Thin > 100
Bold > 700
ital discrete
Upright @elidable
Italic
instances auto
skip
Bold Upright # WARNING: "Upright" is @elidable, so "Bold Upright" becomes just "Bold"
```
**Warning message:**
```
WARNING: Skip validation: 1 skip rule(s) were never used. This may indicate a typo or that elidable cleanup changed the instance names.
- Unused skip rule: 'Bold Upright'
```
**Label Naming Rules:**
Labels cannot contain spaces. Use camelCase for compound names:
```
# ✅ CORRECT - camelCase labels
axes
wght 100:900
ExtraLight > 100 # camelCase, no spaces
SemiBold > 900 # camelCase, no spaces
instances auto
skip
ExtraLight Italic # Two labels: "ExtraLight" + "Italic"
# ❌ INCORRECT - spaces in labels
axes
wght 100:900
Extra Light > 100 # ERROR: spaces not allowed
Semi Bold > 900 # ERROR: spaces not allowed
```
This matches standard font naming conventions (ExtraLight, SemiBold) from `data/unified-mappings.yaml`.
**Benefits:**
- **Catches typos**: Detects misspelled labels before they cause silent failures
- **Identifies unreachable rules**: Warns about skip rules affected by elidable cleanup
- **Clear error messages**: Shows available labels for easy correction
- **Simple and predictable**: Each space separates labels, no ambiguity
- **Production-ready**: All validation tested on large MegaFont example (15 skip rules)
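The two validation levels can be sketched as follows (assumed logic, not DSSketch's actual code — unknown labels raise an error, while rules that never match a generated name are collected for warning):

```python
# Sketch of the two validation levels described above (assumed logic).
def validate_skip_rules(skip_rules, axis_labels, generated_names):
    available = {label for labels in axis_labels.values() for label in labels}
    for rule in skip_rules:
        for label in rule.split():
            if label not in available:  # ERROR level: unknown label
                raise ValueError(
                    f"Skip rule {rule!r} contains label {label!r} which is "
                    "not defined in any axis. Available labels: "
                    + ", ".join(sorted(available))
                )
    # WARNING level: rules that never matched a generated instance name
    return [rule for rule in skip_rules if rule not in generated_names]

axis_labels = {"wght": ["Thin", "Bold"], "ital": ["Upright", "Italic"]}
generated = {"Thin", "Bold", "Thin Italic", "Bold Italic"}  # "Upright" elided
print(validate_skip_rules(["Bold Upright"], axis_labels, generated))
# ['Bold Upright'] -- would be logged as an unused-rule warning
```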
**Axis order controls name sequence:**
```dssketch
# Order 1: Width first, then Weight
axes
wdth 60:100:200
Condensed > 60
Normal > 100 @elidable
wght 100:400:900
Thin > 100
Regular > 400
Black > 900
# Result: "Condensed Thin", "Condensed Regular", "Condensed Black", "Thin", "Regular", "Black"
# Order 2: Weight first, then Width
axes
wght 100:400:900
Thin > 100
Regular > 400
Black > 900
wdth 60:100:200
Condensed > 60
Normal > 100 @elidable
# Result: "Thin Condensed", "Regular Condensed", "Black Condensed", "Thin", "Regular", "Black"
```
**Complex multi-axis example:**
```dssketch
axes
wdth 60:100:100
Condensed > 60
Normal > 100 @elidable
wght 100:400:900
Thin > 100
Regular > 400 @elidable
Black > 900
ital discrete
Upright @elidable
Italic
instances auto
# Generates: 2 × 3 × 2 = 12 combinations
# Result: Thin, Regular, Black,
# Condensed Thin, Condensed, Condensed Black,
# Thin Italic, Italic, Black Italic,
# Condensed Thin Italic, Condensed Italic, Condensed Black Italic
```
**Result**: Automatic generation of all meaningful style combinations with proper PostScript names, file paths, and style linking based on axes order.
### Disabling Instance Generation (`instances off`)
When you want to completely disable automatic instance generation (e.g., for avar2 fonts where instances are not needed or should be managed externally):
```dssketch
family MyFont
axes
wght 100:400:900
wdth 75:100:125
instances off
```
This produces a DesignSpace file with zero instances, which is useful for:
- **avar2 variable fonts** where instances may be generated differently
- **Build pipelines** that generate instances externally
- **Testing** axis configurations without instance overhead
### avar2 Support (OpenType 1.9)
DSSketch provides comprehensive support for avar2 (axis variations version 2), enabling non-linear axis mappings and inter-axis dependencies. This is essential for sophisticated variable fonts like parametric fonts.
#### User Space vs Design Space in avar2
**Critical concept**: avar2 mappings have a clean separation between spaces:
- **Input** (`[axis=value]`): Always **USER space** — the CSS values that applications request
- **Output** (`axis=value`): Always **DESIGN space** — the internal font coordinates
**Labels always mean user space:**
```dssketch
axes
wght 100:400:900
Regular > 435 # Axis mapping: user=400 → design=435 (default)
wdth 75:100:125
Condensed > 75
Normal > 100
avar2
# Input uses USER space: Regular=400, Condensed=80 (CSS standard)
# Output uses DESIGN space: wght=385
[wght=Regular, wdth=Condensed] > wght=385
```
**Interpretation:**
- When user requests Regular and Condensed
- The font will use design coordinate 385 instead of the default 435
- This allows the font to optically compensate for the condensed width
**The axis mapping `Regular > 435` defines the DEFAULT design value.** avar2 can OVERRIDE it for specific axis combinations.
#### Basic avar2 Syntax
**Mapping structure: `[input] > output`**
- **Input**: `[axis=value]` — USER space coordinate (what CSS/apps request)
- **Output**: `axis=value` — DESIGN space value (internal font coordinate)
**Example 1: Non-linear weight curve** (from `examples/avar2.dssketch`)
```dssketch
axes
wght 1:400:1000 "Weight"
wdth 50:100:150 "Width"
avar2 matrix
outputs wght wdth
[wght=100] 300 - # user asks wght=100 → font uses wght=300
[wght=400] $ - # user asks wght=400 → font uses default (400)
[wght=700] 600 - # user asks wght=700 → font uses wght=600
[wdth=75] - 90 # user asks wdth=75 → font uses wdth=90
[wdth=100] - $ # user asks wdth=100 → font uses default (100)
```
Here `-` means "no change for this axis" and `$` means "use axis default".
**Example 2: Optical size affects weight and width** (from `examples/avar2OpticalSize.dssketch`)
```dssketch
axes
wght 1:400:1000 "Weight"
wdth 50:100:150 "Width"
opsz 6:16:144 "Optical size"
avar2 matrix
outputs wght wdth
[opsz=6, wght=400, wdth=100] 600 125 # small text: heavier, wider
[opsz=144, wght=400, wdth=100] 200 75 # large display: lighter, narrower
```
At small sizes (opsz=6), text needs more weight to be readable. At large sizes (opsz=144), less weight looks better.
**Example 3: Hidden parametric axes** (from `examples/avar2QuadraticRotation.dssketch`)
```dssketch
axes
ZROT 0:0:90 "Rotation"
axes hidden
AAAA 0:0:90
BBBB 0:0:90
avar2 matrix
outputs AAAA BBBB
[ZROT=0] $ $ # at rotation=0: use defaults
[ZROT=90] 90 90 # at rotation=90: set both to 90
```
User controls `ZROT`, font internally adjusts hidden axes `AAAA` and `BBBB`.
**Example 4: Cross-axis dependency with labels**
```dssketch
axes
wght 100:400:900
Light > 300
Regular > 435 # user=400 → design=435 (default)
Bold > 700
wdth 75:100:125
Condensed > 75
Normal > 100
Wide > 125
sources [wght, wdth]
Regular-Normal [Regular, Normal] @base
Bold [Bold, Normal]
.....
avar2
# Labels resolve to USER space: Regular=400, Condensed=80
# Output is DESIGN space
[wght=Regular, wdth=Condensed] > wght=385
[wght=Bold, wdth=Condensed] > wght=650
```
**What this means:**
- `Regular > 435` defines the DEFAULT design value for Regular weight
- At Condensed width, Regular needs a lighter design value (385) for optical balance
- The label `Regular` always means user=400 everywhere in DSSketch
- The converter automatically translates user→design for DesignSpace XML
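The translation can be pictured as piecewise-linear interpolation between the axis mapping points, analogous to a DesignSpace axis `<map>`. This is an illustrative assumption, not the converter's actual code, and the mapping points below are hypothetical:

```python
# Sketch (assumed): user-space values translate to design space by
# piecewise-linear interpolation between the axis mapping points.
def user_to_design(user, mapping):
    points = sorted(mapping)  # [(user, design), ...]
    for (u0, d0), (u1, d1) in zip(points, points[1:]):
        if u0 <= user <= u1:
            t = (user - u0) / (u1 - u0)
            return d0 + t * (d1 - d0)
    raise ValueError("user value outside the mapped range")

# Hypothetical wght mapping points: (user, design)
wght_map = [(100, 0), (400, 435), (900, 1000)]
print(user_to_design(400, wght_map))  # 435.0
print(user_to_design(650, wght_map))  # halfway between 435 and 1000: 717.5
```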
**Sources for avar2 fonts** — use `AXIS=value` format (from `examples/avar2-RobotoDelta-Roman.dssketch`):
```dssketch
sources
Roboto-Regular opsz=0 @base
Roboto-GRAD-250 opsz=0, GRAD=-250
Roboto-GRAD150 opsz=0, GRAD=150
Roboto-VROT13 opsz=0, VANG=13, VROT=13
Roboto-XTUC741-wght100 opsz=1, wght=100, wdth=151, XTUC=741
```
Format: `SourceName AXIS=value, AXIS=value, ...` — list only axes that differ from defaults. Values are **design-space** coordinates (same as in DesignSpace XML `xvalue`).
**Variables (`avar2 vars`)** — define reusable values (from `examples/avar2Fences.dssketch`)
```dssketch
avar2 vars
$wght1 = 600 # define variable
avar2 matrix
outputs wght wdth
[wght=1000, wdth=50] $wght1 50 # use variable (= 600)
[wght=1000, wdth=90] $wght1 90 # same value reused
[wght=600, wdth=50] $wght1 50
[wght=600, wdth=90] $wght1 90
```
Variables start with `$` and are useful when the same value appears many times.
#### Linear vs Matrix Format
**Linear format** — one mapping per line:
```dssketch
avar2
[wght=100] > wght=300
[wght=700] > wght=600
# With optional description name:
"opsz144_wght1000" [opsz=144, wght=1000] > XOUC=244, XOLC=234
```
**Matrix format** — tabular, better for multiple output axes:
```dssketch
avar2 matrix
outputs wght wdth
[wght=100] 300 -
[wght=700] 600 -
```
**Complex matrix** (from `examples/avar2-RobotoDelta-Roman.dssketch`):
```dssketch
avar2 matrix
outputs XOPQ XOUC XOLC XTUC YOPQ YOUC
[opsz=-1, wght=100, wdth=25, slnt=0, GRAD=0] 50 50 50 451 48 48
[opsz=-1, wght=400, wdth=25, slnt=0, GRAD=0] 100 100 100 430 85 85
[opsz=-1, wght=1000, wdth=25, slnt=0, GRAD=0] 150 150 150 400 105 105
[opsz=0, wght=100, wdth=100, slnt=0, GRAD=0] 47 47 47 516 44 44
[opsz=0, wght=400, wdth=100, slnt=0] $ $ $ $ $ $
[opsz=1, wght=100, wdth=25, slnt=0, GRAD=0] 2 2 2 278 2 2
```
Each row: `[input conditions]` → values for all output columns. `$` = axis default, `-` = no output.
Both formats produce identical results. Matrix is the default for output; linear is easier to read for simple cases.
#### CLI Options for avar2
```bash
# Format options
dssketch font.designspace --matrix # matrix format (default)
dssketch font.designspace --linear # linear format
# Variable generation options
dssketch font.designspace # auto-generate vars (threshold=3, default)
dssketch font.designspace --novars # disable variable generation
dssketch font.designspace --vars 2 # threshold=2 (more variables)
dssketch font.designspace --vars 5 # threshold=5 (fewer variables)
```
**Variable generation**: Values appearing N+ times become variables (`$var1`, `$var2`, etc.)
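The threshold behaviour can be sketched like this (assumed logic, not the real implementation): output values appearing at least `threshold` times get `$var` names, in first-seen order:

```python
from collections import Counter

# Sketch of the --vars threshold behaviour (assumed logic).
def make_vars(values, threshold=3):
    names = {}
    for value, count in Counter(values).items():
        if count >= threshold:
            names[value] = f"$var{len(names) + 1}"
    return names

values = [600, 600, 600, 600, 50, 90, 50, 90]
print(make_vars(values))     # {600: '$var1'}
print(make_vars(values, 2))  # {600: '$var1', 50: '$var2', 90: '$var3'}
```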
#### Instances with avar2
For avar2 fonts, `instances off` is often the better choice:
```dssketch
instances off # recommended for most avar2 fonts
```
**Why?** avar2 fonts often have complex axis interactions where automatic instance generation produces too many or inappropriate combinations. Use `instances auto` only when you understand which combinations make sense.
When using `instances auto` with avar2 fonts that have axes without labels, DSSketch automatically generates instance points from:
1. Axis min, default, and max values
2. Unique input points from avar2 mappings
```dssketch
axes
wght 1:400:1000 # No labels defined
opsz 6:16:144 # No labels defined
avar2
[wght=100] > wght=300
[wght=700] > wght=600
[opsz=144] > wght=200
instances auto
# Generates instances at: wght=[1, 100, 400, 700, 1000] × opsz=[6, 16, 144]
# Instance names: wght1 opsz6, wght100 opsz6, wght400 opsz16, etc.
```
**Hidden axes are excluded** from instance generation: only user-facing axes contribute to instance combinations.
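Per axis, the instance points reduce to a simple union (a sketch under assumed logic; hidden axes would be filtered out before this step):

```python
# Sketch (assumed): instance points for a label-less avar2 axis are the
# union of min/default/max and the unique avar2 input coordinates.
def instance_points(minimum, default, maximum, avar2_inputs=()):
    return sorted({minimum, default, maximum, *avar2_inputs})

print(instance_points(1, 400, 1000, [100, 700]))  # [1, 100, 400, 700, 1000]
print(instance_points(6, 16, 144, [144]))         # [6, 16, 144]
```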
## Architecture & API
### Core Components
- **High-level API**: `convert_to_dss()`, `convert_to_designspace()`, `convert_dss_string_to_designspace()`
- **Parsers**: `DSSParser` with comprehensive validation and error detection
- **Writers**: `DSSWriter` with optimization and compression
- **Converters**: Bidirectional `DesignSpaceToDSS` ↔ `DSSToDesignSpace`
- **Validation**: `UFOValidator`, `UFOGlyphExtractor` for robust master file handling
- **Instances**: `createInstances()` for intelligent automatic instance generation
### Data Management
```bash
# After pip install -e . (recommended):
dssketch-data info # Show data file locations
dssketch-data copy unified-mappings.yaml # Copy default file for editing
dssketch-data edit # Open user data directory
dssketch-data reset --file unified-mappings.yaml # Reset specific file
dssketch-data reset --all # Reset all files
# Without installation (using Python module directly):
python -m dssketch.data_cli info
python -m dssketch.data_cli copy unified-mappings.yaml
python -m dssketch.data_cli edit
```
### Error Handling
```python
from dssketch.parsers.dss_parser import DSSParser
# S | text/markdown | null | Alexander Lubovenko <lubovenko@gmail.com> | null | null | null | fonts, designspace, variable-fonts, typography, font-design, ufo-fonts | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Text Processing :: Fonts",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :... | [] | null | null | >=3.8 | [] | [] | [] | [
"fonttools>=4.38.0",
"defcon>=0.10.0",
"fontParts>=0.12.0",
"pyyaml>=6.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/typedev/dssketch",
"Documentation, https://github.com/typedev/dssketch#readme",
"Repository, https://github.com/typedev/dssketch.git",
"Issues, https://github.com/typedev/dssketch/issues"
] | uv/0.9.9 {"installer":{"name":"uv","version":"0.9.9"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Fedora Linux","version":"43","id":"","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T21:30:57.508784 | dssketch-1.1.13.tar.gz | 112,060 | 66/4b/7144e72edcad89501db71df05bc3b63ca7aa6bb70dfeeb1a11365a0e6660/dssketch-1.1.13.tar.gz | source | sdist | null | false | 5a6dc065f3a28629f2a4cf3dc7620ac0 | 7cc48be14104b015fdedb7de22f47f55c3edd6a24154e9eb416385849da75088 | 664b7144e72edcad89501db71df05bc3b63ca7aa6bb70dfeeb1a11365a0e6660 | MIT | [
"LICENSE"
] | 242 |
2.4 | pdr | 1.4.1 | Planetary Data Reader | README.md
## The Planetary Data Reader (pdr)
This tool provides a single command---`read('/path/to/file')`---for ingesting
_all_ common planetary data types. It reads almost all "primary observational
data" products currently archived in the PDS (under PDS3 or PDS4), and the
fraction of products it does not read is continuously shrinking.
[Currently-supported datasets are listed here.](docs/supported_datasets.md)
If the software fails while attempting to read from datasets that we have
listed as supported, please submit an issue with a link to the file and
information about the error (if applicable). There might also be datasets that
work but are not listed. We would like to hear about those too. If a dataset
is not yet supported that you would like us to consider prioritizing,
[please fill out this request form](https://docs.google.com/forms/d/1JHyMDzC9LlXY4MOMcHqV5fbseSB096_PsLshAMqMWBw/viewform).
### Attribution
If you use _pdr_ in your work, please cite us using our [JOSS Paper](docs/pdr_joss_paper.pdf): [](https://doi.org/10.21105/joss.07256).
A BibTex style citation is available in [CITATION.cff](CITATION.cff).
### Installation
_pdr_ is now on `conda` and `pip`. We recommend (and only officially support)
installation into a `conda` environment. You can do this like so:
```
conda create --name pdrenv
conda activate pdrenv
conda install -c conda-forge pdr
```
The minimum supported version of Python is _3.9_.
Installing via conda also installs the optional dependencies listed in pdr's
environment.yml file, including `astropy` and `pillow`. If you'd prefer to
forgo those optional dependencies, use minimal_environment.yml for your
installation instead. This is not supported through a direct conda install as
described above and will require additional steps. Optional dependencies
and the added functionality they support are listed below:
- `pvl`: allows `Data.load("LABEL", as_pvl=True)`, which will load PDS3
labels as `pvl` objects rather than plain text
- `astropy`: adds support for FITS files
- `jupyter`: allows usage of the Example Jupyter Notebook (and other jupyter
notebooks you create)
- `pillow`: adds support for reading a variety of 'desktop' image formats
(TIFF, JPEG, etc.) and for browse image rendering
- `Levenshtein`: allows use of `metaget_fuzzy`, a fuzzy-matching metadata
parsing function
For pip users, no optional dependencies are packaged with pdr by default. The
extras tags are:
- `pvl`: installs `pvl`
- `fits`: installs `astropy`
- `notebooks`: installs `jupyter`
- `pillow`: installs `pillow`
- `fuzzy`: installs `Levenshtein`
Example syntax for using pip to install pdr with the `astropy` and `pillow` optional
dependencies:
```
pip install "pdr[fits, pillow]"
```
#### NOTE: `pdr` is not currently compatible with Python 3.13 when installed with `pip`; it can be used with Python 3.13 through `conda`
### Usage
You can check out our example Notebook on a JupyterLite server for a
quick interactive demo of functionality:
[](https://millionconcepts.github.io/jlite-pdr-demo/)
Additional information on usage including examples, output data types, notes
and caveats, tests, etc. can now be accessed in our documentation on
readthedocs at: https://pdr.readthedocs.io [](https://pdr.readthedocs.io/en/latest/?badge=latest)
### Contributing
Thank you for wanting to contribute to `pdr` and improving efforts to make
planetary science data accessible. Please review our code of conduct before
contributing. [](docs/code_of_conduct.md)
If you have found a bug, a dataset that we claim to support that's not opening
properly, or you have a feature request, please file an issue. We will also
review pull requests, but we'd prefer you start the conversation with us
first, so we know to expect your contribution and can make sure it will be
within scope.
If you need general support you can find us on [OpenPlanetary Slack](https://app.slack.com/client/T04CWPQL9/C04CWPQM5)
(available to [OpenPlanetary members](https://www.openplanetary.org/join))
or feel free to [email](mailto:sierra@millionconcepts.com) the team.
---
This work is supported by NASA grant No. 80NSSC21K0885.
| text/markdown | null | Chase Million <chase@millionconcepts.com>, "Michael St. Clair" <mstclair@millionconcepts.com>, Sierra Brown <sierra@millionconcepts.com>, Sabrina Curtis <scurtis@millionconcepts.com>, Zack Weinberg <zack@millionconcepts.com>, Bekah Albach <ralbach@millionconcepts.com> | null | null | ### BSD 3-Clause License
Copyright (c) 2021, Million Concepts
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
### pdr/pds4_tools is derived from code in the Small Bodies Node [pds4_tools package](https://github.com/Small-Bodies-Node/pds4_tools) and carries this additional license:
Copyright (c) 2015 - 2024, University of Maryland
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the University of Maryland nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL UNIVERSITY OF MARYLAND BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
----------------------------------------------------------------------------
This software takes inspiration from the SAOImage DS9 and fv FITS Viewer
tools, and would like to thank the developers of those applications.
----------------------------------------------------------------------------
This software may be packaged by software licensed by the following:
Copyright (c) 2010-2018, PyInstaller Development Team
Copyright (c) 2005-2009, Giovanni Bajo
Based on previous work under copyright (c) 2002 McMillan Enterprises, Inc.
PyInstaller is licensed under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2 of the License,
or any later version.
----------------------------------------------------------------------------
This software may be packaged by software licensed by the following:
Copyright (c) 2004-2006 Bob Ippolito <bob at redivi.com>.
Copyright (c) 2010-2012 Ronald Oussoren <ronaldoussoren at mac.com>.
py2app is licensed under the terms of the MIT or PSF open source licenses.
----------------------------------------------------------------------------
This software includes or uses code licensed by the following:
Copyright (c) 2010-2015 Benjamin Peterson
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
----------------------------------------------------------------------------
This software includes or uses code licensed by the following:
Copyright (c) 2005-2017, NumPy Developers.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of the NumPy Developers nor the names of any
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
----------------------------------------------------------------------------
This software includes or uses code licensed by the following:
Copyright (c) 2012-2021 Matplotlib Development Team
All Rights Reserved.
1. This LICENSE AGREEMENT is between the Matplotlib Development Team
("MDT"), and the Individual or Organization ("Licensee") accessing and
otherwise using matplotlib software in source or binary form and its
associated documentation.
2. Subject to the terms and conditions of this License Agreement, MDT
hereby grants Licensee a nonexclusive, royalty-free, world-wide license
to reproduce, analyze, test, perform and/or display publicly, prepare
derivative works, distribute, and otherwise use matplotlib
alone or in any derivative version, provided, however, that MDT's
License Agreement and MDT's notice of copyright, i.e., "Copyright (c)
2012- Matplotlib Development Team; All Rights Reserved" are retained in
matplotlib alone or in any derivative version prepared by
Licensee.
3. In the event Licensee prepares a derivative work that is based on or
incorporates matplotlib or any part thereof, and wants to
make the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to matplotlib .
4. MDT is making matplotlib available to Licensee on an "AS
IS" basis. MDT MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, MDT MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF MATPLOTLIB
WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
5. MDT SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF MATPLOTLIB
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR
LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING
MATPLOTLIB , OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF
THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between MDT and
Licensee. This License Agreement does not grant permission to use MDT
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.
8. By copying, installing or otherwise using matplotlib ,
Licensee agrees to be bound by the terms and conditions of this License
Agreement.
----------------------------------------------------------------------------
This software includes or uses code licensed by the following:
Copyright (c) 2015, Daniel Greenfeld
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of cached-property nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
----------------------------------------------------------------------------
This software includes or uses code licensed by the following:
Copyright (C) 2005 Association of Universities for Research in Astronomy (AURA)
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
3. The name of AURA and its representatives may not be used to
endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY AURA ``AS IS'' AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL AURA BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
----------------------------------------------------------------------------
This software includes or uses code licensed by the following:
Copyright (c) 2011-2024, Astropy Developers
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
* Neither the name of the Astropy Team nor the names of its contributors may be
used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"dustgoggles",
"more_itertools",
"multidict",
"numpy",
"pandas>=2.0.0",
"rms-vax",
"pillow; extra == \"pillow\"",
"astropy; extra == \"fits\"",
"jupyter; extra == \"notebooks\"",
"pvl; extra == \"pvl\"",
"pytest; extra == \"tests\"",
"Levenshtein; extra == \"fuzzy\""
] | [] | [] | [] | [
"Repository, https://github.com/MillionConcepts/pdr"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-18T21:30:40.309496 | pdr-1.4.1.tar.gz | 13,371,187 | a7/c2/23c63ae9193bba428d6533223e6a091996ddbbfb47a980b205a80b84843b/pdr-1.4.1.tar.gz | source | sdist | null | false | 9271cce6fd92954d148291403b7e5f02 | b25ef51ee904650294c60b9d5867d7884a9a84360ea0c9ca3a086798a1d72bbb | a7c223c63ae9193bba428d6533223e6a091996ddbbfb47a980b205a80b84843b | null | [
"LICENSE.md"
] | 273 |
2.4 | stardag | 0.1.4 | Declarative and composable DAG framework for Python with persistent asset management | # Stardag
[](https://pypi.org/project/stardag/)
[](https://pypi.org/project/stardag/)
[](https://stardag-dev.github.io/stardag/)
[](https://github.com/stardag-dev/stardag/blob/main/lib/LICENSE)
**Declarative and composable DAGs for Python.**
Stardag provides a clean Python API for representing persistently stored assets, the code that produces them, and their dependencies as a declarative Directed Acyclic Graph (DAG). It is a spiritual—but highly modernized—descendant of [Luigi](https://github.com/spotify/luigi), designed for iterative data and ML workflows.
Built on [Pydantic](https://docs.pydantic.dev/), Stardag uses expressive type annotations to reduce boilerplate and make task I/O contracts explicit—enabling composable tasks and pipelines while maintaining a fully declarative specification of every produced asset.
## Quick Example
```python
import stardag as sd
@sd.task
def get_range(limit: int) -> list[int]:
return list(range(limit))
@sd.task
def get_sum(integers: sd.Depends[list[int]]) -> int:
return sum(integers)
# Declarative DAG specification - no computation yet
sum_task = get_sum(integers=get_range(limit=4))
# Materialize all tasks' targets
sd.build(sum_task)
# Load results
assert sum_task.output().load() == 6
assert sum_task.integers.output().load() == [0, 1, 2, 3]
```
## Installation
```bash
pip install stardag
```
Or with [uv](https://docs.astral.sh/uv/):
```bash
uv add stardag
```
**Optional extras:**
```bash
pip install stardag[s3] # S3 storage support
pip install stardag[prefect] # Prefect integration
pip install stardag[modal] # Modal integration
```
## Documentation
**[Read the docs](https://stardag-dev.github.io/stardag/)** for tutorials, guides, and API reference.
- [Getting Started](https://stardag-dev.github.io/stardag/getting-started/) — Installation and first steps
- [Core Concepts](https://stardag-dev.github.io/stardag/concepts/) — Tasks, targets, dependencies
- [How-To Guides](https://stardag-dev.github.io/stardag/how-to/) — Integrations with Prefect, Modal
- [Configuration](https://stardag-dev.github.io/stardag/configuration/) — Profiles, CLI reference
## Stardag Cloud
[Stardag Cloud](https://app.stardag.com) provides optional services for team collaboration and monitoring:
- **Web UI** — Dashboard for build monitoring and task inspection
- **API Service** — Task tracking and coordination across distributed builds
The SDK works fully standalone—the platform adds value for teams needing shared visibility and coordination.
## Why Stardag?
- **Composability** — Task instances as first-class parameters enable loose coupling and reusability
- **Declarative** — Full DAG specification before execution; inspect, serialize, and reason about pipelines
- **Deterministic** — Parameter hashing gives each task a unique, reproducible ID and output path
- **Pydantic-native** — Tasks are Pydantic models with full validation and serialization support
- **Framework-agnostic** — Integrate with Prefect, Modal, or run standalone
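The determinism bullet can be sketched with the standard library alone. The `task_id` helper below is invented for illustration and is not stardag's actual implementation:

```python
import hashlib
import json


def task_id(task_name: str, params: dict) -> str:
    """Invented helper: derive a stable ID from a task's name and parameters."""
    # Canonical JSON (sorted keys) makes the hash independent of dict ordering.
    payload = json.dumps({"task": task_name, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]


# Same parameters always yield the same ID (and hence the same output path).
assert task_id("get_range", {"limit": 4}) == task_id("get_range", {"limit": 4})
assert task_id("get_range", {"limit": 4}) != task_id("get_range", {"limit": 5})
```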
## Links
- [Documentation](https://stardag-dev.github.io/stardag/)
- [GitHub](https://github.com/stardag-dev/stardag)
- [Stardag Cloud](https://app.stardag.com)
- [Contributing](https://github.com/stardag-dev/stardag/blob/main/CONTRIBUTING.md)
| text/markdown | null | Anders Huss <info@stardag.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming L... | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"aiofiles>=25.1.0",
"httpx-retries>=0.1.0",
"httpx>=0.27.0",
"pydantic-settings>=2.7.1",
"pydantic>=2.8.2",
"tenacity>=9.1.4",
"tomli-w>=1.0.0",
"typer>=0.12.0",
"uuid6>=2024.7.10",
"modal>=1.0.0; extra == \"modal\"",
"pandas>=2.1.0; extra == \"pandas\"",
"asyncpg>=0.30.0; python_version >= \"... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:30:20.073284 | stardag-0.1.4.tar.gz | 415,028 | 1b/19/dbb0b7fa82d74849532e48187801b0617798b12cdf80da5d43375afd47f9/stardag-0.1.4.tar.gz | source | sdist | null | false | 441be73b13079be52bd5a450937344e6 | 097d240ca99b50096facda6de4864d9747b803b8feecdce485eafbdf82985391 | 1b19dbb0b7fa82d74849532e48187801b0617798b12cdf80da5d43375afd47f9 | Apache-2.0 | [] | 238 |
2.4 | iterable-extensions | 0.1.0 | Add your description here | # iterable-extensions
Collection of useful extension methods to Iterables to enable functional programming. It is heavily inspired by C#'s LINQ.
The extension methods are implemented using [https://pypi.org/project/extensionmethods/](https://pypi.org/project/extensionmethods/). Under the hood they rely strongly on `itertools` and are basically syntactic sugar to write code in a more functional style.
Type-checking is supported, see [Type-checking](#type-checking).
Important notes:
- Whereas `itertools` generally returns **iterators** that are exhausted after consuming once, `iterable-extensions` generally returns **iterables** that may be consumed repeatedly.
- Similarly to `itertools`, `iterable-extensions` aims to evaluate lazily, so that the entire input iterable is not loaded into memory. However, there are some notable exceptions, including:
- `order_by` and `order_by_descending`
- `group_by`
**This package is under development. Only a number of extension methods are currently implemented.**
For the full API reference and the currently implemented methods, see [https://iterable-extensions.readthedocs.io/](https://iterable-extensions.readthedocs.io/).
## Example usage
To filter elements in an iterable based on some predicate:
```py
from iterable_extensions import where, to_list
source = [1, 2, 3, 4, 5]
filtered = source | where[int](lambda x: x > 3) # Only numbers greater than 3
lst = filtered | to_list() # Materialize into list
print(lst)
# [4, 5]
lst2 = filtered | to_list() # Iterables can be consumed multiple times
print(lst2)
# [4, 5]
```
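By contrast, a plain iterator from the standard library is exhausted after a single pass. The sketch below illustrates the distinction using only the standard library; `Reiterable` is an invented class for illustration, not part of this package:

```py
nums = [1, 2, 3, 4, 5]

# A plain iterator (what filter and itertools return) supports one pass only.
it = filter(lambda x: x > 3, nums)
assert list(it) == [4, 5]
assert list(it) == []  # exhausted


class Reiterable:
    """Invented class: a lazy view that can be iterated repeatedly."""

    def __init__(self, source, pred):
        self.source, self.pred = source, pred

    def __iter__(self):
        # A fresh generator per iteration keeps evaluation lazy and repeatable.
        return (x for x in self.source if self.pred(x))


view = Reiterable(nums, lambda x: x > 3)
assert list(view) == [4, 5]
assert list(view) == [4, 5]  # can be consumed again
```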
To transform elements according to some function:
```py
from iterable_extensions import select, to_list
source = [1, 2, 3, 4, 5]
transformed = source | select[int, str](lambda x: str(2 * x)) # Transform each element
lst = transformed | to_list() # Materialize into list
print(lst)
# ['2', '4', '6', '8', '10']
```
To group elements based on a key:
```py
from dataclasses import dataclass
from iterable_extensions.iterable_extensions import group_by, to_list


@dataclass
class Person:
    age: int
    name: str


source = [
    Person(10, "Arthur"),
    Person(10, "Becky"),
    Person(20, "Chris"),
    Person(30, "Dave"),
    Person(30, "Eduardo"),
    Person(30, "Felice"),
]
grouped = source | group_by[Person, int](lambda p: p.age)  # Group by age
lst = grouped | to_list()  # Materialize into list
print(lst)
# [
#     10: [Person(age=10, name='Arthur'), Person(age=10, name='Becky')],
#     20: [Person(age=20, name='Chris')],
#     30: [Person(age=30, name='Dave'), Person(age=30, name='Eduardo'), Person(age=30, name='Felice')]
# ]
```
You can chain these methods into functional-style code. For instance, in the below
example, to get the full name of the oldest male and female:
```py
from dataclasses import dataclass
from enum import IntEnum
from iterable_extensions.iterable_extensions import (
    Grouping,
    first,
    group_by,
    order_by_descending,
    select,
    to_list,
)


class Gender(IntEnum):
    MALE = 1
    FEMALE = 2


@dataclass
class Person:
    age: int
    gender: Gender
    first_name: str
    last_name: str


data = [
    Person(21, Gender.MALE, "Arthur", "Johnson"),
    Person(56, Gender.FEMALE, "Becky", "de Vries"),
    Person(12, Gender.MALE, "Chris", "Lamarck"),
    Person(48, Gender.MALE, "Dave", "Stevens"),
    Person(88, Gender.MALE, "Eduardo", "Doe"),
    Person(37, Gender.FEMALE, "Felice", "van Halen"),
]

grouped = (
    data
    | group_by[Person, Gender](lambda p: p.gender)  # Group by gender
    | select[Grouping[Person, Gender], Person](  # Within each group
        lambda g: (
            g
            # Order by age, descending
            | order_by_descending[Person, int](lambda p: p.age)
            | first()  # Take the first entry
        )
    )
    # For each gender, aggregate first and last name
    | select[Person, str](lambda p: f"{p.first_name} {p.last_name}")
    | to_list()  # Materialize into list
)
print(grouped)
# ['Eduardo Doe', 'Becky de Vries']
```
## Type-checking
The iterable extensions are fully type-annotated and support type inference with
linters as much as possible. However, due to limitations in current type checkers,
inference doesn't propagate through the `|` operator. For example:
```py
source: list[int] = [1, 2, 3, 4, 5]
filtered = source | where(lambda x: x > 3)
```
Will give an error like `Operator ">" not supported for types "T@where" and "Literal[3]"` on the lambda body, even though the type of `x` is fully specified through `source`.
To circumvent this, you can explicitly specify the type: `where[int](lambda x: ...)`. This also gives you autocompletion on `x` in the lambda body.
Alternatively, you can explicitly define the function instead of writing a lambda:
```py
def func(x: int) -> bool:
    return x > 3

filtered = source | where(func)
```
However, this hampers the readability of the functional style that the `iterable-extensions` package aims to provide.
Note that the type annotations are only for static checkers. You can ignore these errors and the code will still run fine.
## How to read `Extension[TIn, **P, TOut]`
In the API reference, you'll notice that all extension methods inherit from `Extension[TIn, **P, TOut]`. This class is the core of the `extensionmethods` package ([https://pypi.org/project/extensionmethods/](https://pypi.org/project/extensionmethods/)). It provides the basic `|`-operator functionality.
The `Extension` class has two type parameters and a paramspec:
- `TIn`: The type that the extension is defined to operate on.
- `**P`: Arbitrary number of arguments that the extension method may take.
- `TOut`: The type of the return value of the extension method.
For example, looking at the signature of `select`:
```py
class select[TIn, TOut](
    Extension[
        Iterable[TIn],
        [Callable[[TIn], TOut]],
        Iterable[TOut],
    ]
): ...
```
we see that:
- `TIn` = `Iterable[TIn]`. `select` is defined to operate on iterables of an arbitrary input type.
- `**P` = `[Callable[[TIn], TOut]]`. `select` requires a mapping function as a parameter.
- `TOut` = `Iterable[TOut]`. `select` returns an iterable of an arbitrary output type.
For example:
```py
source: list[int] = [1, 2, 3, 4]

# Allowed. Inputs ints, outputs strings.
source | select[int, str](lambda x: str(x))

# Not allowed. Expects strings as inputs, but ints are given.
source | select[str, str](lambda x: str(x))

# Not allowed. Expects ints as output, but the lambda returns strings.
source | select[int, int](lambda x: str(x))
```
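Under the hood, the `|` syntax relies on Python's reflected-or operator `__ror__`. The toy `pipe_where` class below sketches that mechanism with the standard library only; it is an invented example, not the real `Extension` implementation:

```py
class pipe_where:
    """Invented toy pipe stage; not the real extensionmethods API."""

    def __init__(self, pred):
        self.pred = pred

    def __ror__(self, iterable):
        # "iterable | pipe_where(pred)": a list has no matching __or__, so
        # Python falls back to pipe_where.__ror__(iterable).
        return [x for x in iterable if self.pred(x)]


assert ([1, 2, 3, 4, 5] | pipe_where(lambda x: x > 3)) == [4, 5]
```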
## Installation
Using pip:
```
pip install iterable-extensions
```
Using uv:
```
uv add iterable-extensions
```
## License
This project is licensed under the MIT License. See the `LICENSE` file for details.
| text/markdown | null | Pim Mostert <pim.mostert@pimmostert.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"extensionmethods>=0.1.3"
] | [] | [] | [] | [
"Homepage, https://github.com/Pim-Mostert/iterable-extensions",
"Issues, https://github.com/Pim-Mostert/iterable-extensions/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:29:51.947620 | iterable_extensions-0.1.0.tar.gz | 60,434 | c8/0b/d8c1d342e055f1dce811e11d3ffb25f5c3d077d50ab57152d6fb5ab3a363/iterable_extensions-0.1.0.tar.gz | source | sdist | null | false | 339d2f418f2c3b08ede4f00e9d44f506 | bd818c742f37d6e9886b98245f000580a9b0a534688e4ad99ec056685f697585 | c80bd8c1d342e055f1dce811e11d3ffb25f5c3d077d50ab57152d6fb5ab3a363 | MIT | [
"LICENSE"
] | 245 |
2.4 | piccione | 3.0.1 | A Python toolkit for uploading and downloading data to external repositories and cloud services. | # Piccione
<p align="center">
<img src="docs/public/piccione.png" alt="Piccione logo" width="200">
</p>
Pronounced *Py-ccione*.
[](https://github.com/opencitations/piccione/actions/workflows/tests.yml)
[](https://opencitations.github.io/piccione/coverage/)
[](https://opensource.org/licenses/ISC)
**PICCIONE** - Python Interface for Cloud Content Ingest and Outbound Network Export
A Python toolkit for uploading and downloading data to external repositories and cloud services.
## Installation
```bash
pip install piccione
```
## Quick start
### Upload to Figshare
```bash
python -m piccione.upload.on_figshare config.yaml
```
### Upload to Zenodo
```bash
python -m piccione.upload.on_zenodo config.yaml
```
### Upload to Internet Archive
```bash
python -m piccione.upload.on_internet_archive config.yaml
```
### Upload to triplestore
```bash
python -m piccione.upload.on_triplestore <endpoint> <folder>
```
### Download from Figshare
```bash
python -m piccione.download.from_figshare <article_id> -o <output_dir>
```
### Download from SharePoint
```bash
python -m piccione.download.from_sharepoint config.yaml <output_dir>
```
## Documentation
Full documentation: https://opencitations.github.io/piccione/
Configuration examples: [examples/](examples/)
## Development
```bash
git clone https://github.com/opencitations/piccione.git
cd piccione
uv sync --all-extras --dev
uv run pytest tests/
```
## License
ISC License - see [LICENSE.md](LICENSE.md)
| text/markdown | null | Arcangelo Massari <arcangelo.massari@unibo.it> | null | null | ISC | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: ISC License (ISCL)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Langu... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.28.1",
"internetarchive>=5.7.1",
"pyyaml>=6.0.3",
"redis>=4.5.5",
"requests>=2.32.5",
"rich>=14.2.0",
"sparqlite>=1.0.0",
"tqdm>=4.67.1"
] | [] | [] | [] | [
"Homepage, https://github.com/opencitations/piccione",
"Documentation, https://opencitations.github.io/piccione/",
"Repository, https://github.com/opencitations/piccione"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T21:29:40.903915 | piccione-3.0.1.tar.gz | 649,502 | 98/bb/790b4a5156d76e0dffcf350ba077300661d2feb08240638cbbccbbff5b9b/piccione-3.0.1.tar.gz | source | sdist | null | false | 5ff027b055bf9f0155f72e73f0fa1449 | c0e1bd6501863eee1ef0df9c7a2227db109080145d477b7c88f0c2abd760568d | 98bb790b4a5156d76e0dffcf350ba077300661d2feb08240638cbbccbbff5b9b | null | [
"LICENSE.md"
] | 246 |
2.4 | sqlmodelclassapi | 0.1.0 | Add your description here | # SQLModel support for classApi
This project provides a simple way to use a SQLModel `Session` inside your classApi views.
First, create your SQLModel engine. Here is a basic example:
```py
# engine.py
from sqlalchemy import create_engine
from sqlmodel import SQLModel
from .model import User # In this example, User has 3 fields: id, name, and email
sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
engine = create_engine(sqlite_url, echo=True)
def create_db_and_tables():
    SQLModel.metadata.create_all(engine)
```
Then create a new base view using `make_session_view`:
```py
# engine.py
from sqlalchemy import create_engine
from sqlmodel import SQLModel
from sqlmodelclassapi.main import make_session_view
from .model import User
sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
engine = create_engine(sqlite_url, echo=True)
def create_db_and_tables():
    SQLModel.metadata.create_all(engine)
SessionView = make_session_view(engine=engine) # New base view with session support
```
Now, to create a view, inherit from `SessionView` instead of `BaseView`:
```py
# views.py
from sqlmodel import select

from .engine import SessionView
from .model import User


class ExampleSessionView(SessionView):
    methods = ["GET", "POST"]

    def get(self, *args):
        statement = select(User)
        results = self.session.exec(statement).all()
        return [
            {"id": user.id, "name": user.name, "email": user.email}
            for user in results
        ]

    def post(self, name: str):
        new_user = User(name=name, email=f"{name.lower()}@example.com")
        self.session.add(new_user)
        self.session.commit()
        self.session.refresh(new_user)
        return {"message": f"User '{name}' created successfully!", "user": new_user}
```
With this setup, all operations in your request run inside the same session, including `pre_{method}` hooks.
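The factory pattern behind `make_session_view` can be sketched in plain Python. Everything below (`FakeEngine`, `FakeSession`, `dispatch`) is invented for illustration and is not the actual classApi or sqlmodelclassapi implementation:

```py
class FakeSession:
    """Stand-in for a SQLModel Session; records operations for the demo."""

    def __init__(self):
        self.ops = []

    def add(self, obj):
        self.ops.append(obj)


class FakeEngine:
    def make_session(self):
        return FakeSession()


def make_session_view(engine):
    """Return a base view class bound to the given engine."""

    class SessionView:
        def dispatch(self, method, *args, **kwargs):
            # One session per request, shared by pre_{method} hooks and handlers.
            self.session = engine.make_session()
            hook = getattr(self, f"pre_{method}", None)
            if hook is not None:
                hook(*args, **kwargs)
            return getattr(self, method)(*args, **kwargs)

    return SessionView


SessionView = make_session_view(FakeEngine())


class MyView(SessionView):
    def pre_get(self):
        self.session.add("pre_get ran")

    def get(self):
        self.session.add("get ran")
        return self.session.ops


assert MyView().dispatch("get") == ["pre_get ran", "get ran"]
```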
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"classapi>=0.1.0.1",
"sqlmodel>=0.0.34"
] | [] | [] | [] | [] | uv/0.9.7 | 2026-02-18T21:29:28.886201 | sqlmodelclassapi-0.1.0.tar.gz | 4,016 | de/2f/036f2d0acf3a28e1633f3d4fed07edb139026772b8f70dba424a13dc569c/sqlmodelclassapi-0.1.0.tar.gz | source | sdist | null | false | 0f542f460ba30b374ec905111d1d6e3a | b0633fecb369ff8a9d9345007b41a419c1c0d9d851bd4a0b4fb07a3d66fdfff9 | de2f036f2d0acf3a28e1633f3d4fed07edb139026772b8f70dba424a13dc569c | null | [] | 164 |
2.1 | qai-hub | 0.45.0 | Python API for Qualcomm® AI Hub. | Qualcomm® AI Hub
================
`Qualcomm® AI Hub <https://aihub.qualcomm.com>`_ simplifies deploying AI models
for vision, audio, and speech applications to edge devices.
helps to optimize, validate,
and deploy machine learning models on-device for vision, audio, and speech use
cases.
With Qualcomm® AI Hub, you can:
- Convert trained models from frameworks like PyTorch for optimized on-device performance on Qualcomm® devices.
- Profile models on-device to obtain detailed metrics including runtime, load time, and compute unit utilization.
- Verify numerical correctness by performing on-device inference.
- Easily deploy models using Qualcomm® AI Engine Direct or TensorFlow Lite.
:code:`qai_hub` is a Python package that provides an API for uploading models,
submitting profiling jobs for target hardware, and retrieving key metrics to
further optimize machine learning models.
Installation with PyPI
----------------------
The easiest way to install :code:`qai_hub` is by using pip, running
:code:`pip install qai-hub`
For more information, check out the `documentation <https://workbench.aihub.qualcomm.com/docs/>`_.
License
-------
Copyright (c) 2023, Qualcomm Technologies Inc. All rights reserved.
| null | Qualcomm® Technologies, Inc. | ai-hub-support@qti.qualcomm.com | null | null | BSD License | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engine... | [] | https://aihub.qualcomm.com/ | null | >=3.10 | [] | [] | [] | [
"backoff>=2.2",
"deprecation",
"h5py<4,>=2.10.0",
"numpy<3,>=1.22.0",
"packaging>=20.0",
"prettytable>=3.9.0",
"protobuf<=6.31.1,>=3.20",
"requests",
"requests-toolbelt",
"s3transfer<0.14,>=0.10.3",
"semver>=3.0",
"tqdm",
"typing-extensions>=4.12.2",
"coremltools==6.2; extra == \"coremltoo... | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.17 | 2026-02-18T21:29:01.587836 | qai_hub-0.45.0-py3-none-any.whl | 114,537 | 22/93/63b34dd7952f9243019548a5de5ce901fb6d307451abbcc7ee0f6ffe847d/qai_hub-0.45.0-py3-none-any.whl | py3 | bdist_wheel | null | false | a8a3472cdca4aa3a3ba3e9b0b620ac95 | 47e53299511346b35f363924105b71e054dabeaccb2b26138491cb4959b611bc | 229363b34dd7952f9243019548a5de5ce901fb6d307451abbcc7ee0f6ffe847d | null | [] | 2,246 |
2.4 | genesis-world | 0.4.0 | A universal and generative physics engine | 

[](https://pypi.org/project/genesis-world/)
[](https://pepy.tech/projects/genesis-world)
[](https://github.com/Genesis-Embodied-AI/Genesis/issues)
[](https://github.com/Genesis-Embodied-AI/Genesis/discussions)
[](https://discord.gg/nukCuhB47p)
<a href="https://drive.google.com/uc?export=view&id=1ZS9nnbQ-t1IwkzJlENBYqYIIOOZhXuBZ"><img src="https://img.shields.io/badge/WeChat-07C160?style=for-the-badge&logo=wechat&logoColor=white" height="20" style="display:inline"></a>
[](./README.md)
[](./README_FR.md)
[](./README_KR.md)
[](./README_CN.md)
[](./README_JA.md)
# Genesis
## 🔥 News
- [2025-08-05] Released v0.3.0 🎊 🎉
- [2025-07-02] The development of Genesis is now officially supported by [Genesis AI](https://genesis-ai.company/).
- [2025-01-09] We released a [detailed performance benchmarking and comparison report](https://github.com/zhouxian/genesis-speed-benchmark) on Genesis, together with all the test scripts.
- [2025-01-08] Released v0.2.1 🎊 🎉
- [2025-01-08] Created [Discord](https://discord.gg/nukCuhB47p) and [Wechat](https://drive.google.com/uc?export=view&id=1ZS9nnbQ-t1IwkzJlENBYqYIIOOZhXuBZ) group.
- [2024-12-25] Added a [docker](#docker) including support for the ray-tracing renderer
- [2024-12-24] Added guidelines for [contributing to Genesis](https://github.com/Genesis-Embodied-AI/Genesis/blob/main/.github/contributing/PULL_REQUESTS.md)
## Table of Contents
1. [What is Genesis?](#what-is-genesis)
2. [Key Features](#key-features)
3. [Quick Installation](#quick-installation)
4. [Docker](#docker)
5. [Documentation](#documentation)
6. [Contributing to Genesis](#contributing-to-genesis)
7. [Support](#support)
8. [License and Acknowledgments](#license-and-acknowledgments)
9. [Associated Papers](#associated-papers)
10. [Citation](#citation)
## What is Genesis?
Genesis is a physics platform designed for general-purpose *Robotics/Embodied AI/Physical AI* applications. It is simultaneously multiple things:
1. A **universal physics engine** re-built from the ground up, capable of simulating a wide range of materials and physical phenomena.
2. A **lightweight**, **ultra-fast**, **pythonic**, and **user-friendly** robotics simulation platform.
3. A powerful and fast **photo-realistic rendering system**.
4. A **generative data engine** that transforms user-prompted natural language description into various modalities of data.
Powered by a universal physics engine re-designed and re-built from the ground up, Genesis integrates various physics solvers and their coupling into a unified framework. This core physics engine is further enhanced by a generative agent framework that operates at an upper level, aiming towards fully automated data generation for robotics and beyond.
**Note**: Currently, we are open-sourcing the _underlying physics engine_ and the _simulation platform_. Our _generative framework_ is a modular system that incorporates many different generative modules, each handling a certain range of data modalities, routed by a high level agent. Some of the modules integrated existing papers and some are still under submission. Access to our generative feature will be gradually rolled out in the near future. If you are interested, feel free to explore more in the [paper list](#associated-papers) below.
Genesis aims to:
- **Lower the barrier** to using physics simulations, making robotics research accessible to everyone. See our [mission statement](https://genesis-world.readthedocs.io/en/latest/user_guide/overview/mission.html).
- **Unify diverse physics solvers** into a single framework to recreate the physical world with the highest fidelity.
- **Automate data generation**, reducing human effort and letting the data flywheel spin on its own.
Project Page: <https://genesis-embodied-ai.github.io/>
## Key Features
- **Speed**: Over 43 million FPS when simulating a Franka robotic arm with a single RTX 4090 (430,000 times faster than real-time).
- **Cross-platform**: Runs on Linux, macOS, Windows, and supports multiple compute backends (CPU, Nvidia/AMD GPUs, Apple Metal).
- **Integration of diverse physics solvers**: Rigid body, MPM, SPH, FEM, PBD, Stable Fluid.
- **Wide range of material models**: Simulation and coupling of rigid bodies, liquids, gases, deformable objects, thin-shell objects, and granular materials.
- **Compatibility with various robots**: Robotic arms, legged robots, drones, *soft robots*, and support for loading `MJCF (.xml)`, `URDF`, `.obj`, `.glb`, `.ply`, `.stl`, and more.
- **Photo-realistic rendering**: Native ray-tracing-based rendering.
- **Differentiability**: Genesis is designed to be fully differentiable. Currently, our MPM solver and Tool Solver support differentiability, with other solvers planned for future versions (starting with rigid & articulated body solver).
- **User-friendliness**: Designed for simplicity, with intuitive installation and APIs.
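The headline speed figure implies a reference rate: a sanity check of the arithmetic, assuming "real-time" means a 100 Hz simulation step rate (an assumption; the README does not state the reference rate explicitly):

```python
# Sanity check of the speedup arithmetic from the Key Features list.
# Assumption: "real-time" means a 100 Hz step rate (not stated in the README).
sim_fps = 43_000_000        # simulated frames per second on one RTX 4090
assumed_realtime_hz = 100   # hypothetical real-time step rate
speedup = sim_fps / assumed_realtime_hz
print(f"{speedup:,.0f}x faster than real-time")  # 430,000x
```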
## Quick Installation
### Using pip
Install **PyTorch** first following the [official instructions](https://pytorch.org/get-started/locally/).
Then, install Genesis via PyPI:
```bash
pip install genesis-world # Requires Python>=3.10,<3.14;
```
To get the latest development version, make sure that `pip` is up-to-date via `pip install --upgrade pip`, then run:
```bash
pip install git+https://github.com/Genesis-Embodied-AI/Genesis.git
```
Note that a package installed this way must still be updated manually to stay in sync with the `main` branch.
Users seeking to contribute are encouraged to install Genesis in editable mode. First, make sure that `genesis-world` has been uninstalled, then clone the repository and install locally:
```bash
git clone https://github.com/Genesis-Embodied-AI/Genesis.git
cd Genesis
pip install -e ".[dev]"
```
It is recommended to re-run `pip install -e ".[dev]"` whenever you move `HEAD`, to make sure that all dependencies and entry points stay up-to-date.
### Using uv
[uv](https://docs.astral.sh/uv/) is a fast Python package and project manager.
**Install uv:**
```bash
# On macOS and Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# On Windows
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
**Quick start with uv:**
```bash
git clone https://github.com/Genesis-Embodied-AI/Genesis.git
cd Genesis
uv sync
```
Then install PyTorch for your platform:
```bash
# NVIDIA GPU (CUDA 12.6 as an example)
uv pip install torch --index-url https://download.pytorch.org/whl/cu126
# CPU only (Linux/Windows)
uv pip install torch --index-url https://download.pytorch.org/whl/cpu
# Apple Silicon (Metal/MPS)
uv pip install torch
```
Run an example:
```bash
uv run examples/rigid/single_franka.py
```
## Docker
If you want to use Genesis from Docker, you can first build the Docker image as:
```bash
docker build -t genesis -f docker/Dockerfile docker
```
Then you can run the examples inside the Docker container (mounted at `/workspace/examples`):
```bash
xhost +local:root # Allow the container to access the display
docker run --gpus all --rm -it \
-e DISPLAY=$DISPLAY \
-e LOCAL_USER_ID="$(id -u)" \
-v /dev/dri:/dev/dri \
-v /tmp/.X11-unix/:/tmp/.X11-unix \
-v $(pwd):/workspace \
--name genesis genesis:latest
```
### AMD users
AMD users can run Genesis via the `docker/Dockerfile.amdgpu` image, built by running:
```bash
docker build -t genesis-amd -f docker/Dockerfile.amdgpu docker
```
and can then be used by running:
```bash
xhost +local:docker  # Allow the container to access the display
docker run -it --network=host \
--device=/dev/kfd \
--device=/dev/dri \
--group-add=video \
--ipc=host \
--cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined \
--shm-size 8G \
-v $PWD:/workspace \
-e DISPLAY=$DISPLAY \
genesis-amd
```
The examples will be accessible from `/workspace/examples`. Note: AMD users should use the ROCm (HIP) backend. This means you will need to call `gs.init(backend=gs.amdgpu)` to initialise Genesis.
## Documentation
Comprehensive documentation is available in [English](https://genesis-world.readthedocs.io/en/latest/user_guide/index.html), [Chinese](https://genesis-world.readthedocs.io/zh-cn/latest/user_guide/index.html), and [Japanese](https://genesis-world.readthedocs.io/ja/latest/user_guide/index.html). This includes detailed installation steps, tutorials, and API references.
## Contributing to Genesis
The Genesis project is an open and collaborative effort. We welcome all forms of contributions from the community, including:
- **Pull requests** for new features or bug fixes.
- **Bug reports** through GitHub Issues.
- **Suggestions** to improve Genesis's usability.
Refer to our [contribution guide](https://github.com/Genesis-Embodied-AI/Genesis/blob/main/.github/contributing/PULL_REQUESTS.md) for more details.
## Support
- Report bugs or request features via GitHub [Issues](https://github.com/Genesis-Embodied-AI/Genesis/issues).
- Join discussions or ask questions on GitHub [Discussions](https://github.com/Genesis-Embodied-AI/Genesis/discussions).
## License and Acknowledgments
The Genesis source code is licensed under Apache 2.0.
Genesis's development has been made possible thanks to these open-source projects:
- [Taichi](https://github.com/taichi-dev/taichi): High-performance cross-platform compute backend. Kudos to the Taichi team for their technical support!
- [FluidLab](https://github.com/zhouxian/FluidLab): Reference MPM solver implementation.
- [SPH_Taichi](https://github.com/erizmr/SPH_Taichi): Reference SPH solver implementation.
- [Ten Minute Physics](https://matthias-research.github.io/pages/tenMinutePhysics/index.html) and [PBF3D](https://github.com/WASD4959/PBF3D): Reference PBD solver implementations.
- [MuJoCo](https://github.com/google-deepmind/mujoco): Reference for rigid body dynamics.
- [libccd](https://github.com/danfis/libccd): Reference for collision detection.
- [PyRender](https://github.com/mmatl/pyrender): Rasterization-based renderer.
- [LuisaCompute](https://github.com/LuisaGroup/LuisaCompute) and [LuisaRender](https://github.com/LuisaGroup/LuisaRender): Ray-tracing DSL.
- [Madrona](https://github.com/shacklettbp/madrona) and [Madrona-mjx](https://github.com/shacklettbp/madrona_mjx): Batch renderer backend.
## Associated Papers
Genesis is a large-scale effort that integrates state-of-the-art technologies from various existing and ongoing research works into a single system. Here we include a non-exhaustive list of the papers that contributed to the Genesis project in one way or another:
- Xian, Zhou, et al. "Fluidlab: A differentiable environment for benchmarking complex fluid manipulation." arXiv preprint arXiv:2303.02346 (2023).
- Xu, Zhenjia, et al. "Roboninja: Learning an adaptive cutting policy for multi-material objects." arXiv preprint arXiv:2302.11553 (2023).
- Wang, Yufei, et al. "Robogen: Towards unleashing infinite data for automated robot learning via generative simulation." arXiv preprint arXiv:2311.01455 (2023).
- Wang, Tsun-Hsuan, et al. "Softzoo: A soft robot co-design benchmark for locomotion in diverse environments." arXiv preprint arXiv:2303.09555 (2023).
- Wang, Tsun-Hsuan Johnson, et al. "Diffusebot: Breeding soft robots with physics-augmented generative diffusion models." Advances in Neural Information Processing Systems 36 (2023): 44398-44423.
- Katara, Pushkal, Zhou Xian, and Katerina Fragkiadaki. "Gen2sim: Scaling up robot learning in simulation with generative models." 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024.
- Si, Zilin, et al. "DiffTactile: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation." arXiv preprint arXiv:2403.08716 (2024).
- Wang, Yian, et al. "Thin-Shell Object Manipulations With Differentiable Physics Simulations." arXiv preprint arXiv:2404.00451 (2024).
- Lin, Chunru, et al. "UBSoft: A Simulation Platform for Robotic Skill Learning in Unbounded Soft Environments." arXiv preprint arXiv:2411.12711 (2024).
- Zhou, Wenyang, et al. "EMDM: Efficient motion diffusion model for fast and high-quality motion generation." European Conference on Computer Vision. Springer, Cham, 2025.
- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Scalable differentiable physics for learning and control." International Conference on Machine Learning. PMLR, 2020.
- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Efficient differentiable simulation of articulated bodies." In International Conference on Machine Learning, PMLR, 2021.
- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming Lin. "Differentiable simulation of soft multi-body systems." Advances in Neural Information Processing Systems 34 (2021).
- Wan, Weilin, et al. "Tlcontrol: Trajectory and language control for human motion synthesis." arXiv preprint arXiv:2311.17135 (2023).
- Wang, Yian, et al. "Architect: Generating Vivid and Interactive 3D Scenes with Hierarchical 2D Inpainting." arXiv preprint arXiv:2411.09823 (2024).
- Zheng, Shaokun, et al. "LuisaRender: A high-performance rendering framework with layered and unified interfaces on stream architectures." ACM Transactions on Graphics (TOG) 41.6 (2022): 1-19.
- Fan, Yingruo, et al. "Faceformer: Speech-driven 3d facial animation with transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
- Wu, Sichun, Kazi Injamamul Haque, and Zerrin Yumak. "ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE." Proceedings of the 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games. 2024.
- Dou, Zhiyang, et al. "C· ase: Learning conditional adversarial skill embeddings for physics-based characters." SIGGRAPH Asia 2023 Conference Papers. 2023.
... and many more ongoing works.
## Citation
If you use Genesis in your research, please consider citing:
```bibtex
@misc{Genesis,
author = {Genesis Authors},
title = {Genesis: A Generative and Universal Physics Engine for Robotics and Beyond},
month = {December},
year = {2024},
url = {https://github.com/Genesis-Embodied-AI/Genesis}
}
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"psutil",
"quadrants==0.4.0",
"pydantic>=2.11.0",
"numpy>=1.26.4",
"trimesh",
"libigl",
"py-cpuinfo",
"mujoco>=3.2.5",
"moviepy>=2.0.0",
"pyglet!=2.1.8,>=1.5",
"freetype-py",
"PyOpenGL>=3.1.4",
"numba",
"pymeshlab",
"pycollada",
"pygltflib==1.16.0",
"tetgen==0.8.2",
"PyGEL3D",
"v... | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.8 | 2026-02-18T21:28:19.648834 | genesis_world-0.4.0-py3-none-any.whl | 82,512,449 | f9/9f/c1c533a1d067bdbc171109a813e7eb9bf624809efaae1e7416a847ad14c4/genesis_world-0.4.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 6a429b714acfa35a4daccbad06410c2d | 3dece7ecb8fc60d9c49ec315fdfaefd7317b9f63fc6db53d4ffcf3c401e26f8e | f99fc1c533a1d067bdbc171109a813e7eb9bf624809efaae1e7416a847ad14c4 | null | [
"LICENSE"
] | 761 |
2.4 | cas-toolbox | 2026.8.1 | Cluster Automation Scripts Toolbox | # cas-toolbox
**Versioning:** e.g. `2025.7.2` -> 2025, week 7, hotfix 2.
Updated every Monday if there is a code change.
Cluster Automation Scripts Toolbox:
- One-stop shop for tool scripts for cluster automation operations in high-performance computing
- All scripts/libs are single files for easy transport
- Minimal dependencies for all scripts
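The calendar-version scheme above can be parsed mechanically; a minimal sketch (the function name and tuple layout are illustrative, not part of cas-toolbox):

```python
def parse_cas_version(version):
    """Split a cas-toolbox calendar version like '2025.7.2'
    into (year, week, hotfix)."""
    year, week, hotfix = (int(part) for part in version.split("."))
    return year, week, hotfix

print(parse_cas_version("2025.7.2"))  # (2025, 7, 2)
```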
## Requirements
- Python >= 3.6
- argparse
## Optional Python Libs
- curses
- python-dateutil
- xxhash
- resource
- prettytable
- ipaddress
- numpy
## Includes following single file libs
- hpcp.py
- multiCMD.py
- multiSSH3.py
- iotest.py (simple-iotest)
- statbtrfs.py
- Tee_Logger.py
- TSVZ.py
- statblk.py
## Installation
Use pip:
```bash
pip install cas-toolbox
```
Use pipx:
```bash
pipx install cas-toolbox
```
Use uv:
```bash
uv tool install --with numpy cas-toolbox
```
Note: with numpy installed, the iotest random number generator performs much better; numpy is not used anywhere else.
Use uv to add as dependency:
```bash
uv add cas-toolbox
```
## Commands provided:
- `hpcp`
- `mcmd` / `multicmd` / `multiCMD`
- `mssh` / `mssh3` / `multissh` / `multissh3` / `multiSSH3`
- `iotest`
- `statbtrfs`
- `TSVZ` / `tsvz`
- `statblk`
All with `--help` / `-h` provided.
## Author
- Yufei Pan (pan@zopyr.us)
## License
GPL-3.0-or-later
## Links
- [hpcp](https://github.com/yufei-pan/hpcp)
- [multiCMD](https://github.com/yufei-pan/multiCMD)
- [multiSSH3](https://github.com/yufei-pan/multiSSH3)
- [simple-iotest](https://github.com/yufei-pan/simple-iotest)
- [statbtrfs](https://github.com/yufei-pan/statbtrfs)
- [Tee_Logger](https://github.com/yufei-pan/Tee_Logger)
- [TSVZ](https://github.com/yufei-pan/TSVZ)
- [statblk](https://github.com/yufei-pan/statblk)
| text/markdown | null | Yufei Pan <pan@zopyr.us> | null | null | GPL-3.0-or-later | null | [
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 3",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"hpcp==9.48",
"multicmd==1.44",
"multissh3==6.12",
"simple-iotest==3.61.2",
"statbtrfs==0.26",
"tee-logger==6.37",
"tsvz==3.36",
"statblk==1.37"
] | [] | [] | [] | [
"Homepage, https://github.com/yufei-pan/cas-toolbox",
"hpcp, https://github.com/yufei-pan/hpcp",
"multiCMD, https://github.com/yufei-pan/multiCMD",
"multiSSH3, https://github.com/yufei-pan/multiSSH3",
"simple-iotest, https://github.com/yufei-pan/simple-iotest",
"statbtrfs, https://github.com/yufei-pan/sta... | uv/0.9.28 {"installer":{"name":"uv","version":"0.9.28","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Fedora Linux","version":"43","id":"","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T21:27:33.272499 | cas_toolbox-2026.8.1.tar.gz | 2,453 | c1/2e/3dc085161480c11f0af1a55ea2d4ce724a0a7fd05235a02ea2b5e84fb357/cas_toolbox-2026.8.1.tar.gz | source | sdist | null | false | 20964de171baa938be307ea40e6b7d83 | 2e02e462ef619f4d81ab9ced8d960c7e5e20f9737fc61d14374453983c1cc047 | c12e3dc085161480c11f0af1a55ea2d4ce724a0a7fd05235a02ea2b5e84fb357 | null | [] | 265 |
2.4 | jentis | 1.0.1 | A unified Python interface for multiple Large Language Model (LLM) providers including Google Gemini, Anthropic Claude, OpenAI GPT, xAI Grok, Azure OpenAI, and Ollama | # Jentis LLM Kit
A unified Python interface for multiple Large Language Model (LLM) providers. Access Google Gemini, Anthropic Claude, OpenAI GPT, xAI Grok, Azure OpenAI, and Ollama through a single, consistent API.
## Features
- 🔄 **Unified Interface**: One API for all LLM providers
- 🚀 **Easy to Use**: Simple `init_llm()` function to get started
- 📡 **Streaming Support**: Real-time response streaming for all providers
- 📊 **Token Tracking**: Consistent token usage reporting across providers
- 🔧 **Flexible Configuration**: Provider-specific parameters when needed
- 🛡️ **Error Handling**: Comprehensive exception hierarchy for debugging
## Supported Providers
| Provider | Aliases | Models |
|----------|---------|--------|
| Google Gemini | `google`, `gemini` | gemini-2.0-flash-exp, gemini-1.5-pro, etc. |
| Anthropic Claude | `anthropic`, `claude` | claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022 |
| OpenAI | `openai`, `gpt` | gpt-4o, gpt-4o-mini, gpt-4-turbo |
| xAI Grok | `grok`, `xai` | grok-2-latest, grok-2-vision-latest |
| Azure OpenAI | `azure`, `microsoft` | Your deployment names |
| Ollama Cloud | `ollama-cloud` | llama2, mistral, codellama, etc. |
| Ollama Local | `ollama`, `ollama-local` | Any locally installed model |
| Vertex AI | `vertexai`, `vertex-ai`, `vertex` | Any Vertex AI Model Garden model |
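Internally, alias resolution presumably maps each alias to a canonical provider; a sketch of that pattern (the mapping below is reconstructed from the table above, not jentis's actual source):

```python
# Alias -> canonical provider mapping, reconstructed from the table above.
PROVIDER_ALIASES = {
    "google": "google", "gemini": "google",
    "anthropic": "anthropic", "claude": "anthropic",
    "openai": "openai", "gpt": "openai",
    "grok": "grok", "xai": "grok",
    "azure": "azure", "microsoft": "azure",
    "ollama-cloud": "ollama-cloud",
    "ollama": "ollama-local", "ollama-local": "ollama-local",
    "vertexai": "vertexai", "vertex-ai": "vertexai", "vertex": "vertexai",
}

def resolve_provider(name):
    """Map a provider name or alias to its canonical provider key."""
    try:
        return PROVIDER_ALIASES[name.lower()]
    except KeyError:
        raise ValueError(f"Unknown provider: {name!r}")

print(resolve_provider("Claude"))  # anthropic
```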
## Installation
```bash
# Install the base package
pip install jentis
# Install provider-specific dependencies
pip install google-generativeai # For Google Gemini
pip install anthropic # For Anthropic Claude
pip install openai # For OpenAI, Grok, Azure
pip install ollama # For Ollama (Cloud & Local)
# Vertex AI requires no pip packages — only gcloud CLI
```
## Quick Start
### Basic Usage
```python
from jentis.llmkit import init_llm
# Initialize OpenAI GPT-4 (requires OpenAI API key)
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxx" # Your OpenAI API key
)
# Generate a response
response = llm.generate_response("What is Python?")
print(response)
```
### Streaming Responses
```python
from jentis.llmkit import init_llm
# Each provider requires its own API key
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxx" # OpenAI-specific key
)
# Stream the response
for chunk in llm.generate_response_stream("Write a short story about AI"):
print(chunk, end='', flush=True)
```
## Provider Examples
### Google Gemini
```python
from jentis.llmkit import init_llm
# Requires Google AI Studio API key
llm = init_llm(
provider="google",
model="gemini-2.0-flash-exp",
api_key="AIzaSyxxxxxxxxxxxxxxxxxxxxxxxxx", # Google API key
temperature=0.7,
max_tokens=1024
)
response = llm.generate_response("Explain quantum computing")
print(response)
```
### Anthropic Claude
```python
from jentis.llmkit import init_llm
# Requires Anthropic API key
llm = init_llm(
provider="anthropic",
model="claude-3-5-sonnet-20241022",
api_key="sk-ant-api03-xxxxxxxxxxxxxxxxx", # Anthropic API key
max_tokens=2048,
temperature=0.8
)
response = llm.generate_response("Write a haiku about programming")
print(response)
```
### OpenAI GPT
```python
from jentis.llmkit import init_llm
# Requires OpenAI API key
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx", # OpenAI API key
temperature=0.9,
max_tokens=1500,
frequency_penalty=0.5,
presence_penalty=0.3
)
response = llm.generate_response("Design a simple REST API")
print(response)
```
### xAI Grok
```python
from jentis.llmkit import init_llm
# Requires xAI API key
llm = init_llm(
provider="grok",
model="grok-2-latest",
api_key="xai-xxxxxxxxxxxxxxxxxxxxxxxx", # xAI API key
temperature=0.7
)
response = llm.generate_response("What's happening in tech?")
print(response)
```
### Azure OpenAI
```python
from jentis.llmkit import init_llm
# Requires Azure OpenAI API key and endpoint
llm = init_llm(
provider="azure",
model="gpt-4o",
api_key="a1b2c3d4e5f6xxxxxxxxxxxx", # Azure API key
azure_endpoint="https://your-resource.openai.azure.com/",
deployment_name="gpt-4o-deployment",
api_version="2024-08-01-preview",
temperature=0.7
)
response = llm.generate_response("Explain Azure services")
print(response)
```
### Ollama Local
```python
from jentis.llmkit import init_llm
# No API key needed for local Ollama
llm = init_llm(
provider="ollama",
model="llama2",
temperature=0.7
)
response = llm.generate_response("Hello, Ollama!")
print(response)
```
### Ollama Cloud
```python
from jentis.llmkit import init_llm
# Requires Ollama Cloud API key
llm = init_llm(
provider="ollama-cloud",
model="llama2",
api_key="ollama_xxxxxxxxxxxxxxxx", # Ollama Cloud API key
host="https://ollama.com"
)
response = llm.generate_response("Explain machine learning")
print(response)
```
### Vertex AI (Model Garden)
```python
from jentis.llmkit import init_llm
# Uses gcloud CLI for authentication (no API key needed)
llm = init_llm(
provider="vertexai",
model="moonshotai/kimi-k2-thinking-maas",
project_id="gen-lang-client-0152852093",
region="global",
temperature=0.6,
max_tokens=8192
)
response = llm.generate_response("What is quantum computing?")
print(response)
```
## Advanced Usage
### Using Function-Based API with Metadata
If you need detailed metadata (token usage, model info), import the provider-specific functions:
```python
from jentis.llmkit.Openai import openai_llm
result = openai_llm(
prompt="What is AI?",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx", # Your OpenAI API key
temperature=0.7
)
print(f"Content: {result['content']}")
print(f"Model: {result['model']}")
print(f"Input tokens: {result['usage']['input_tokens']}")
print(f"Output tokens: {result['usage']['output_tokens']}")
print(f"Total tokens: {result['usage']['total_tokens']}")
```
**Other Providers:**
```python
# Google Gemini
from jentis.llmkit.Google import google_llm
result = google_llm(prompt="...", model="gemini-2.0-flash-exp", api_key="...")
# Anthropic Claude
from jentis.llmkit.Anthropic import anthropic_llm
result = anthropic_llm(prompt="...", model="claude-3-5-sonnet-20241022", api_key="...", max_tokens=1024)
# Grok
from jentis.llmkit.Grok import grok_llm
result = grok_llm(prompt="...", model="grok-2-latest", api_key="...")
# Azure OpenAI
from jentis.llmkit.Microsoft import azure_llm
result = azure_llm(prompt="...", deployment_name="gpt-4o", azure_endpoint="...", api_key="...")
# Ollama Cloud
from jentis.llmkit.Ollamacloud import ollama_cloud_llm
result = ollama_cloud_llm(prompt="...", model="llama2", api_key="...")
# Ollama Local
from jentis.llmkit.Ollamalocal import ollama_local_llm
result = ollama_local_llm(prompt="...", model="llama2")
# Vertex AI
from jentis.llmkit.Vertexai import vertexai_llm
result = vertexai_llm(prompt="...", model="google/gemini-2.0-flash", project_id="my-project")
```
**Streaming with Functions:**
```python
from jentis.llmkit.Openai import openai_llm_stream
for chunk in openai_llm_stream(
prompt="Write a story",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx"
):
print(chunk, end='', flush=True)
```
### Custom Configuration
```python
from jentis.llmkit import init_llm
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx", # Your OpenAI API key
temperature=0.8,
top_p=0.9,
max_tokens=2000,
max_retries=5,
timeout=60.0,
backoff_factor=1.0,
frequency_penalty=0.5,
presence_penalty=0.3
)
```
## Parameters
### Common Parameters
All providers support these parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `provider` | str | **Required** | Provider name or alias |
| `model` | str | **Required** | Model identifier |
| `api_key` | str | None | API key (env var if not provided) |
| `temperature` | float | None | Randomness (0.0-2.0) |
| `top_p` | float | None | Nucleus sampling (0.0-1.0) |
| `max_tokens` | int | None | Maximum tokens to generate |
| `timeout` | float | 30.0 | Request timeout (seconds) |
| `max_retries` | int | 3 | Retry attempts |
| `backoff_factor` | float | 0.5 | Exponential backoff factor |
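The `max_retries`/`backoff_factor` defaults suggest exponential backoff between retries; a sketch of the usual formula (jentis's exact schedule is an assumption, not taken from its source):

```python
def retry_delays(max_retries=3, backoff_factor=0.5):
    """Typical exponential-backoff delays: factor * 2**attempt seconds."""
    return [backoff_factor * (2 ** attempt) for attempt in range(max_retries)]

print(retry_delays())  # [0.5, 1.0, 2.0]
```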
### Provider-Specific Parameters
**OpenAI & Grok:**
- `frequency_penalty`: Penalty for token frequency (0.0-2.0)
- `presence_penalty`: Penalty for token presence (0.0-2.0)
**Azure OpenAI:**
- `azure_endpoint`: Azure endpoint URL (**Required**)
- `deployment_name`: Deployment name (defaults to model)
- `api_version`: API version (default: "2024-08-01-preview")
**Ollama (Cloud & Local):**
- `host`: Host URL (Cloud: "https://ollama.com", Local: "http://localhost:11434")
## Environment Variables
**Each provider uses its own environment variable for API keys.** Set them to avoid hardcoding:
```bash
# Google Gemini
export GOOGLE_API_KEY="AIzaSyxxxxxxxxxxxxxxxxxxxxxxxxx"
# Anthropic Claude
export ANTHROPIC_API_KEY="sk-ant-api03-xxxxxxxxxxxxxxxxx"
# OpenAI
export OPENAI_API_KEY="sk-proj-xxxxxxxxxxxxxxxxxxxx"
# xAI Grok
export XAI_API_KEY="xai-xxxxxxxxxxxxxxxxxxxxxxxx"
# Azure OpenAI
export AZURE_OPENAI_API_KEY="a1b2c3d4e5f6xxxxxxxxxxxx"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
# Ollama Cloud
export OLLAMA_API_KEY="ollama_xxxxxxxxxxxxxxxx"
# Vertex AI (uses gcloud auth, or set token explicitly)
export VERTEX_AI_ACCESS_TOKEN="ya29.xxxxx..."
export VERTEX_AI_PROJECT_ID="your-project-id"
```
Then initialize without api_key parameter:
```python
from jentis.llmkit import init_llm
# OpenAI - reads from OPENAI_API_KEY environment variable
llm = init_llm(provider="openai", model="gpt-4o")
# Google - reads from GOOGLE_API_KEY environment variable
llm = init_llm(provider="google", model="gemini-2.0-flash-exp")
# Anthropic - reads from ANTHROPIC_API_KEY environment variable
llm = init_llm(provider="anthropic", model="claude-3-5-sonnet-20241022")
# Vertex AI - reads from VERTEX_AI_PROJECT_ID, authenticates via gcloud
llm = init_llm(provider="vertexai", model="google/gemini-2.0-flash")
```
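The key-or-environment fallback follows the usual pattern; a sketch of that lookup (the helper function is illustrative, not jentis's actual code):

```python
import os

def resolve_api_key(explicit_key, env_var):
    """Prefer an explicitly passed key; otherwise fall back to the environment."""
    key = explicit_key or os.environ.get(env_var)
    if not key:
        raise ValueError(f"No API key given and {env_var} is not set")
    return key

os.environ["OPENAI_API_KEY"] = "sk-proj-demo"
print(resolve_api_key(None, "OPENAI_API_KEY"))  # sk-proj-demo
```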
## Methods
All initialized LLM instances have two methods:
### `generate_response(prompt: str) -> str`
Generate a complete response.
```python
response = llm.generate_response("Your prompt here")
print(response) # String output
```
### `generate_response_stream(prompt: str) -> Generator`
Stream the response in real-time.
```python
for chunk in llm.generate_response_stream("Your prompt here"):
print(chunk, end='', flush=True)
```
## Error Handling
```python
from jentis.llmkit import init_llm
try:
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-invalid-key-xxxxxxxxxx" # Wrong API key
)
response = llm.generate_response("Test")
except ValueError as e:
print(f"Invalid configuration: {e}")
except Exception as e:
print(f"API Error: {e}")
```
Each provider has its own exception hierarchy for detailed error handling. Import from provider modules:
```python
from jentis.llmkit.Openai import (
OpenAILLMError,
OpenAILLMAPIError,
OpenAILLMImportError,
OpenAILLMResponseError
)
try:
from jentis.llmkit.Openai import openai_llm
result = openai_llm(prompt="Test", model="gpt-4o", api_key="invalid")
except OpenAILLMAPIError as e:
print(f"API Error: {e}")
except OpenAILLMError as e:
print(f"General Error: {e}")
```
## Complete Example
```python
from jentis.llmkit import init_llm
def chat_with_llm(provider_name: str, user_message: str):
"""Simple chat function supporting multiple providers."""
try:
# Initialize LLM
llm = init_llm(
provider=provider_name,
model="gpt-4o" if provider_name == "openai" else "llama2",
api_key=None, # Uses environment variables
temperature=0.7,
max_tokens=1024
)
# Stream response
print(f"\n{provider_name.upper()} Response:\n")
for chunk in llm.generate_response_stream(user_message):
print(chunk, end='', flush=True)
print("\n")
except ValueError as e:
print(f"Configuration error: {e}")
except Exception as e:
print(f"Error: {e}")
# Use different providers
chat_with_llm("openai", "What is machine learning?")
chat_with_llm("anthropic", "Explain neural networks")
chat_with_llm("ollama", "What is Python?")
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the terms of the [LICENSE](../../LICENSE) file.
## Support
- **Issues**: [GitHub Issues](https://github.com/devXjitin/jentis-llmkit/issues)
- **Documentation**: [Project Docs](https://github.com/devXjitin/jentis-llmkit)
- **Community**: [Discussions](https://github.com/devXjitin/jentis-llmkit/discussions)
## Author
Built with care by the **J.E.N.T.I.S** team.
| text/markdown | J.E.N.T.I.S Team | null | null | null | MIT | llm, ai, openai, anthropic, google, gemini, claude, grok, ollama, azure, vertex-ai, chatgpt, gpt-4, machine-learning, nlp | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Langua... | [] | null | null | >=3.8 | [] | [] | [] | [
"google-generativeai>=0.3.0; extra == \"google\"",
"anthropic>=0.18.0; extra == \"anthropic\"",
"openai>=1.0.0; extra == \"openai\"",
"ollama>=0.1.0; extra == \"ollama\"",
"google-generativeai>=0.3.0; extra == \"all\"",
"anthropic>=0.18.0; extra == \"all\"",
"openai>=1.0.0; extra == \"all\"",
"ollama>... | [] | [] | [] | [
"Homepage, https://github.com/devXjitin/jentis-llmkit",
"Documentation, https://github.com/devXjitin/jentis-llmkit",
"Repository, https://github.com/devXjitin/jentis-llmkit",
"Bug Tracker, https://github.com/devXjitin/jentis-llmkit/issues",
"Discussions, https://github.com/devXjitin/jentis-llmkit/discussion... | twine/6.2.0 CPython/3.14.2 | 2026-02-18T21:27:08.541556 | jentis-1.0.1.tar.gz | 38,534 | 38/b5/f3e7df80c8fa3c1ae8386969c16253ac9de92af4ee73d487c0462b6405d4/jentis-1.0.1.tar.gz | source | sdist | null | false | 99fb5487eaf0e01a38f610af5b551aa1 | c2fc358d8f2b13423afcb49dc523e5aa009baa73e44f5061c93f12b42281d4a7 | 38b5f3e7df80c8fa3c1ae8386969c16253ac9de92af4ee73d487c0462b6405d4 | null | [] | 252 |
2.4 | strava-mcp-http | 0.6.1 | Strava integration for MCP | # Strava MCP Server
[](https://github.com/piwibardy/strava-mcp-http/actions/workflows/ci.yml)
A Model Context Protocol (MCP) server for interacting with the Strava API. Supports both **stdio** and **streamable-http** transports, multi-tenant authentication, and MCP OAuth 2.0 for Claude Desktop custom connectors.
## User Guide
### Installation
You can install Strava MCP with `uvx`:
```bash
uvx strava-mcp
```
### Docker
Build and run the server with Docker (defaults to streamable-http on port 8000):
```bash
docker build -t strava-mcp .
docker run -p 8000:8000 \
-e STRAVA_CLIENT_ID=your_client_id \
-e STRAVA_CLIENT_SECRET=your_client_secret \
-e SERVER_BASE_URL=https://your-public-url.com \
strava-mcp
```
A pre-built image is also available on GHCR:
```bash
docker pull ghcr.io/piwibardy/strava-mcp-http:latest
```
### Setting Up Strava Credentials
1. **Create a Strava API Application**:
- Go to [https://www.strava.com/settings/api](https://www.strava.com/settings/api)
- Create a new application to obtain your Client ID and Client Secret
- For "Authorization Callback Domain", enter your server's domain (e.g. `localhost` for local dev, or your tunnel/production domain)
2. **Configure Your Credentials**:
Create a `.env` file or export environment variables:
```bash
STRAVA_CLIENT_ID=your_client_id
STRAVA_CLIENT_SECRET=your_client_secret
```
### Connecting to Claude Desktop
There are three ways to connect this server to Claude Desktop:
#### Option 1: Custom Connector (MCP OAuth — recommended)
This uses the native MCP OAuth 2.0 flow. Requires HTTPS (e.g. via a Cloudflare tunnel or production deployment).
1. Start the server with `SERVER_BASE_URL` pointing to your public HTTPS URL
2. In Claude Desktop, add a **custom connector** with URL: `https://your-server.com/mcp`
3. Claude Desktop will handle the full OAuth flow automatically (register → authorize → Strava → callback → token)
#### Option 2: stdio via mcp-remote
Use `mcp-remote` to bridge stdio and HTTP transport:
```json
{
"strava": {
"command": "npx",
"args": [
"mcp-remote",
"http://localhost:8000/mcp",
"--header",
"Authorization: Bearer YOUR_API_KEY"
]
}
}
```
To get your API key, visit `http://localhost:8000/auth/strava` and complete the Strava OAuth flow.
#### Option 3: stdio (single-user)
```json
{
"strava": {
"command": "bash",
"args": [
"-c",
"source ~/.ssh/strava.sh && uvx strava-mcp"
]
}
}
```
### Authentication
The server supports multiple authentication modes:
- **MCP OAuth 2.0**: Used by Claude Desktop custom connectors. The server acts as an OAuth authorization server, delegating to Strava for user authentication. Fully automatic.
- **Bearer API key**: For HTTP transport with `mcp-remote` or direct API access. Get your key by visiting `/auth/strava`.
- **stdio (single-user)**: Uses `STRAVA_REFRESH_TOKEN` environment variable directly.
### Available Tools
#### get_user_activities
Retrieves activities for the authenticated user.
**Parameters:**
- `before` (optional): Epoch timestamp for filtering
- `after` (optional): Epoch timestamp for filtering
- `page` (optional): Page number (default: 1)
- `per_page` (optional): Number of items per page (default: 30)
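`before` and `after` take epoch timestamps (seconds); a quick stdlib-only sketch for converting a calendar date (the helper name is illustrative):

```python
from datetime import datetime, timezone

def to_epoch(year, month, day):
    """UTC midnight for the given date, as epoch seconds for before/after."""
    return int(datetime(year, month, day, tzinfo=timezone.utc).timestamp())

print(to_epoch(2024, 1, 1))  # 1704067200
```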
#### get_activity
Gets detailed information about a specific activity.
**Parameters:**
- `activity_id`: The ID of the activity
- `include_all_efforts` (optional): Include segment efforts (default: false)
#### get_activity_segments
Retrieves segments from a specific activity.
**Parameters:**
- `activity_id`: The ID of the activity
#### get_rate_limit_status
Returns the current Strava API rate limit status from the most recent API call. Use this to check remaining quota before making multiple requests.
**Returns:**
```json
{
"short_term": { "usage": 45, "limit": 100, "remaining": 55 },
"daily": { "usage": 320, "limit": 1000, "remaining": 680 }
}
```
Strava enforces rate limits of 100 requests/15 min and 1,000 requests/day for read operations. The server automatically retries once on 429 responses after waiting for the next 15-minute window.
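A client deciding whether to start a batch of requests cares about the window with the least headroom. An illustrative sketch (the tool already returns `remaining` per window; this only shows how to combine them):

```python
def remaining_quota(status: dict) -> int:
    # The binding constraint is the window with the least headroom.
    return min(w["limit"] - w["usage"] for w in status.values())

status = {
    "short_term": {"usage": 45, "limit": 100, "remaining": 55},
    "daily": {"usage": 320, "limit": 1000, "remaining": 680},
}
```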
### Environment Variables
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `STRAVA_CLIENT_ID` | Yes | — | Strava API client ID |
| `STRAVA_CLIENT_SECRET` | Yes | — | Strava API client secret |
| `SERVER_BASE_URL` | No | `http://localhost:8000` | Public base URL (for OAuth redirects) |
| `STRAVA_REFRESH_TOKEN` | No | — | Refresh token (single-user stdio mode) |
| `STRAVA_DATABASE_PATH` | No | `data/users.db` | SQLite database path |
## Developer Guide
### Project Setup
1. Clone the repository:
```bash
git clone git@github.com:piwibardy/strava-mcp-http.git
cd strava-mcp-http
```
2. Install dependencies:
```bash
uv sync
```
3. Set up environment variables:
```bash
cp .env.example .env
# Edit .env with your Strava credentials
```
### Running in Development Mode
Run the server with MCP CLI:
```bash
mcp dev strava_mcp/main.py
```
Or with HTTP transport:
```bash
uv run strava-mcp --transport streamable-http --port 8000
```
For HTTPS in development, use a Cloudflare tunnel:
```bash
cloudflared tunnel --url http://localhost:8000
```
### Project Structure
- `strava_mcp/`: Main package directory
- `config.py`: Configuration settings using pydantic-settings
- `models.py`: Pydantic models for Strava API entities
- `api.py`: Low-level API client for Strava (with rate limit tracking)
- `auth.py`: Strava OAuth callback routes (supports both MCP OAuth and legacy flows)
- `oauth_provider.py`: MCP OAuth 2.0 Authorization Server provider
- `middleware.py`: Bearer auth middleware (legacy compatibility)
- `db.py`: Async SQLite store for users, OAuth clients, tokens
- `service.py`: Service layer for business logic
- `server.py`: MCP server implementation and tool definitions
- `main.py`: Main entry point (argparse for transport/host/port)
- `tests/`: Unit tests
- `Dockerfile`: Multi-stage Docker build
### Running Tests
```bash
uv run pytest
```
### Linting
```bash
uv run ruff check . && uv run ruff format --check .
```
## License
[MIT License](LICENSE)
## Acknowledgements
- [Strava API](https://developers.strava.com/)
- [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
- Forked from [yorrickjansen/strava-mcp](https://github.com/yorrickjansen/strava-mcp)
| text/markdown | Yorrick Jansen | null | null | null | MIT | api, mcp, strava | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"aiosqlite>=0.20.0",
"fastapi>=0.115.11",
"httpx>=0.28.1",
"mcp[cli]>=1.8.0",
"pydantic-settings>=2.8.1",
"pydantic>=2.10.6"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:26:50.547639 | strava_mcp_http-0.6.1.tar.gz | 146,734 | 6e/b6/ed669335290ba15c51133911d781b8ef3e46bc47301aabf387393c8db17f/strava_mcp_http-0.6.1.tar.gz | source | sdist | null | false | dd0f668b9e1b97789c2701f60c6154d1 | 827588ef43c11259acc15b6134cde50080c7448d9aa4d5371d9df955d91de789 | 6eb6ed669335290ba15c51133911d781b8ef3e46bc47301aabf387393c8db17f | null | [
"LICENSE"
] | 243 |
2.4 | django-flex | 26.2.8 | A flexible query language for Django - enable frontends to dynamically construct database queries | # Django-Flex
<p align="center">
<em>A flexible query language for Django — let your frontend dynamically construct database queries</em>
</p>
<p align="center">
<a href="https://pypi.org/project/django-flex/">
<img src="https://img.shields.io/pypi/v/django-flex.svg" alt="PyPI version">
</a>
<a href="https://pypi.org/project/django-flex/">
<img src="https://img.shields.io/pypi/pyversions/django-flex.svg" alt="Python versions">
</a>
<a href="https://github.com/your-org/django-flex/blob/main/LICENSE">
<img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="License">
</a>
</p>
---
**Your first API in 5 minutes. No serializers. No viewsets. Just config.**
## Features
- **Field Selection** — Request only the fields you need, including nested relations
- **JSONField Support** — Seamless dot notation for nested JSON data
- **Dynamic Filtering** — Full Django ORM operator support with composable AND/OR/NOT
- **Smart Pagination** — Limit/offset with cursor-based continuation
- **Built-in Security** — Row-level, field-level, and operation-level permissions
- **Automatic Optimization** — N+1 prevention with smart `select_related`
- **Django-Native** — Feels like a natural extension of Django
## Installation
```bash
pip install django-flex
```
## Quick Start
### 1. Add to Django
```python
# settings.py
INSTALLED_APPS = ['django_flex', ...]
MIDDLEWARE = ['django_flex.middleware.FlexMiddleware', ...]
DJANGO_FLEX = {
'EXPOSE': {
# Role-first structure: role -> model -> config
'staff': {
'booking': {
'fields': ['id', 'status', 'customer.name', 'scheduled_date'],
'ops': ['get', 'list', 'add', 'edit', 'delete'],
},
},
},
}
```
### 2. Add URL
```python
# urls.py
urlpatterns = [
path('api/', include('django_flex.urls')),
]
```
**Done.** Your API is live at `/api/bookings/`.
---
## CRUD Operations
### List
```javascript
fetch('/api/bookings/');
```
```json
{
"results": {
"1": {
"id": 1,
"status": "confirmed"
},
"2": {
"id": 2,
"status": "pending"
}
}
}
```
### Get
```javascript
fetch('/api/bookings/1');
```
```json
{
"id": 1,
"status": "confirmed",
"customer": {
"name": "Aisha Khan"
}
}
```
### Create
```javascript
fetch('/api/bookings/', {
method: 'POST',
body: JSON.stringify({
customer_id: 1,
status: 'pending',
}),
});
```
```json
{
"id": 3,
"status": "pending",
"customer_id": 1
}
```
### Update
```javascript
fetch('/api/bookings/1', {
method: 'PUT',
body: JSON.stringify({
status: 'completed',
}),
});
```
```json
{
"id": 1,
"status": "completed"
}
```
### Delete
```javascript
fetch('/api/bookings/1', { method: 'DELETE' });
```
```json
{
"deleted": true
}
```
---
## Advanced Querying
All query options are sent in the request body.
### Field Selection
```javascript
fetch('/api/bookings/', {
method: 'GET',
body: JSON.stringify({
fields: 'id, status, customer.name',
}),
});
```
```json
{
"results": {
"1": {
"id": 1,
"status": "confirmed",
"customer": {
"name": "Aisha Khan"
}
}
}
}
```
### Nested Relations
```javascript
{
fields: 'id, customer.name, customer.address.city'
}
```
```json
{
"results": {
"1": {
"id": 1,
"customer": {
"name": "Aisha Khan",
"address": {
"city": "Sydney"
}
}
}
}
}
```
### Wildcard Fields
```javascript
{
fields: '*, customer.*'
}
```
### Filtering — Exact Match
```javascript
{
filters: {
status: 'confirmed'
}
}
```
```json
{
"results": {
"1": {
"id": 1,
"status": "confirmed"
}
}
}
```
### Filtering — Comparison
```javascript
{
filters: {
'price.gte': 50,
'price.lte': 200
}
}
```
### Filtering — Text Search
```javascript
{
filters: {
'name.icontains': 'khan'
}
}
```
### Filtering — List Membership
```javascript
{
filters: {
'status.in': ['pending', 'confirmed']
}
}
```
### Filtering — Null Check
```javascript
{
filters: {
'assignee.isnull': true
}
}
```
### Filtering — Date Range
```javascript
{
filters: {
'created_at.gte': '2024-01-01',
'created_at.lte': '2024-12-31'
}
}
```
### Filtering — Related Fields
```javascript
{
filters: {
'customer.vip': true,
'customer.address.city': 'Sydney'
}
}
```
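Dotted filter keys map naturally onto Django ORM lookups: dots become double underscores, and a final segment like `gte` acts as a lookup operator. A sketch of one plausible translation — not necessarily what django-flex does internally — using an illustrative `OPERATORS` set to distinguish operators from field names:

```python
# Illustrative subset of Django lookup names.
OPERATORS = {"gte", "lte", "gt", "lt", "icontains", "in", "isnull"}

def to_orm_filter(key: str, value) -> dict:
    """Translate a dotted filter key/value into .filter() kwargs."""
    return {key.replace(".", "__"): value}

def is_operator(key: str) -> bool:
    # Whether the final dotted segment is a lookup operator, not a field.
    return key.rsplit(".", 1)[-1] in OPERATORS
```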
### Filtering — OR Conditions
```javascript
{
filters: {
or: {
status: 'pending',
urgent: true
}
}
}
```
### Filtering — NOT Conditions
```javascript
{
filters: {
not: {
status: 'cancelled'
}
}
}
```
### Ordering
```javascript
{
order_by: '-scheduled_date'
}
```
```json
{
"results": {
"3": {
"scheduled_date": "2024-03-15"
},
"1": {
"scheduled_date": "2024-03-10"
}
}
}
```
### Pagination
```javascript
{
limit: 20,
offset: 0
}
```
```json
{
"results": {},
"pagination": {
"offset": 0,
"limit": 20,
"has_more": true,
"next": {}
}
}
```
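The pagination envelope can be derived from `limit`, `offset`, and the total row count. An illustrative sketch (not the library's actual implementation):

```python
def paginate(total: int, limit: int, offset: int) -> dict:
    # Build the pagination envelope for a page starting at `offset`.
    has_more = offset + limit < total
    page = {"offset": offset, "limit": limit, "has_more": has_more}
    if has_more:
        page["next"] = {"offset": offset + limit, "limit": limit}
    return page
```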
---
## Why Django-Flex?
| Feature | Django-Flex | GraphQL | REST |
| ------------------ | ------------------- | ----------------- | -------------------- |
| Learning curve | Low (Django-native) | High | Low |
| Field selection | ✅ | ✅ | ❌ (fixed endpoints) |
| Dynamic filtering | ✅ | ✅ | Limited |
| Built-in security | ✅ | Manual | Manual |
| Django integration | Native | Requires graphene | Native |
| Schema definition | Optional | Required | N/A |
| N+1 prevention | Automatic | Manual | Manual |
---
## Learn More
📖 [Full Documentation](docs/README.md)
## License
MIT
| text/markdown | Nehemiah Jacob | null | Nehemiah Jacob | null | MIT | django, query, api, flexible, dynamic, graphql-alternative, rest, orm | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 3.2",
"Framework :: Django :: 4.0",
"Framework :: Django :: 4.1",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Intended Audience :: Developers",
"License :: OSI Approv... | [] | null | null | >=3.8 | [] | [] | [] | [
"django>=3.2",
"pytest>=7.0; extra == \"dev\"",
"pytest-django>=4.5; extra == \"dev\"",
"pytest-xdist>=3.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"isort>=5.12; extra == \"dev\"",
"flake8>=6.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"dja... | [] | [] | [] | [
"Homepage, https://github.com/your-org/django-flex",
"Documentation, https://github.com/your-org/django-flex#readme",
"Repository, https://github.com/your-org/django-flex.git",
"Issues, https://github.com/your-org/django-flex/issues",
"Changelog, https://github.com/your-org/django-flex/blob/main/CHANGELOG.m... | twine/6.2.0 CPython/3.14.3 | 2026-02-18T21:26:18.125449 | django_flex-26.2.8.tar.gz | 38,496 | fa/ef/f0e4c2dedf09fce6fa44fef0e967c9adb4be2095725e89d22afbc6461e90/django_flex-26.2.8.tar.gz | source | sdist | null | false | e9083bf96e6b4b59d71c60484f41c720 | e1e20748de4b1026108d0cdfbea3d1c75bb43cbb405cac2b750bffa3c8e59509 | faeff0e4c2dedf09fce6fa44fef0e967c9adb4be2095725e89d22afbc6461e90 | null | [
"LICENSE"
] | 259 |
2.3 | tlacacoca | 0.2.1 | Shared foundation library for TLS-based protocol implementations | # Tlacacoca - Shared Foundation Library for TLS-Based Protocols
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/astral-sh/ruff)
A protocol-agnostic foundation library providing shared components for building secure TLS-based network protocol implementations in Python. Tlacacoca (pronounced "tla-ka-KO-ka", from Nahuatl meaning "secure/safe") provides security, middleware, and logging utilities that can be shared across multiple protocol implementations.
## Key Features
- **Security First** - TLS context creation, TOFU certificate validation, certificate utilities
- **Middleware System** - Rate limiting, IP access control, certificate authentication
- **Structured Logging** - Privacy-preserving logging with IP hashing
- **Protocol Agnostic** - Abstract interfaces that any TLS-based protocol can build upon
- **Modern Python** - Full type hints, async/await support, and modern tooling with `uv`
## Quick Start
### Installation
```bash
# As a library
uv add tlacacoca
# From source (for development)
git clone https://github.com/alanbato/tlacacoca.git
cd tlacacoca && uv sync
```
### Basic Usage
```python
import ssl
from tlacacoca import (
create_client_context,
create_server_context,
TOFUDatabase,
RateLimiter,
RateLimitConfig,
AccessControl,
AccessControlConfig,
MiddlewareChain,
)
# Create TLS contexts
client_ctx = create_client_context(verify_mode=ssl.CERT_REQUIRED)
server_ctx = create_server_context("cert.pem", "key.pem")
# Set up TOFU certificate validation
async with TOFUDatabase(app_name="myapp") as tofu:
# First connection - certificate is stored
await tofu.verify_or_trust("example.com", 1965, cert_fingerprint)
# Subsequent connections - certificate is verified
await tofu.verify_or_trust("example.com", 1965, cert_fingerprint)
# Configure middleware chain
rate_config = RateLimitConfig(capacity=10, refill_rate=1.0)
access_config = AccessControlConfig(
allow_list=["192.168.1.0/24"],
default_allow=False
)
chain = MiddlewareChain([
AccessControl(access_config),
RateLimiter(rate_config),
])
# Process requests through middleware
result = await chain.process_request(
request_url="protocol://example.com/path",
client_ip="192.168.1.100"
)
if result.allowed:
# Handle request
pass
else:
# Map denial_reason to protocol-specific response
if result.denial_reason == "rate_limit":
# e.g., Gemini: "44 SLOW DOWN\r\n"
pass
```
## Components
### Security
| Component | Description |
|-----------|-------------|
| `create_client_context()` | Create TLS context for client connections |
| `create_server_context()` | Create TLS context for server connections |
| `TOFUDatabase` | Trust-On-First-Use certificate validation |
| `generate_self_signed_cert()` | Generate self-signed certificates |
| `get_certificate_fingerprint()` | Get SHA-256 fingerprint of certificate |
| `load_certificate()` | Load certificate from PEM file |
| `create_permissive_server_context()` | PyOpenSSL context accepting self-signed client certs |
| `TLSServerProtocol` | asyncio protocol for manual TLS via PyOpenSSL memory BIOs |
| `TLSTransportWrapper` | Transport wrapper exposing peer certificate to inner protocol |
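For reference, a SHA-256 fingerprint like the one `get_certificate_fingerprint()` returns can be computed from the DER-encoded certificate bytes with the standard library (illustrative sketch; the library function's exact output format may differ):

```python
import hashlib

def sha256_fingerprint(der_bytes: bytes) -> str:
    # Colon-separated, upper-case SHA-256 digest of the DER-encoded cert.
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```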
### Middleware
| Component | Description |
|-----------|-------------|
| `MiddlewareChain` | Chain multiple middleware components |
| `RateLimiter` | Token bucket rate limiting per IP |
| `AccessControl` | IP-based allow/deny lists with CIDR support |
| `CertificateAuth` | Client certificate authentication |
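The token-bucket strategy behind `RateLimiter` can be sketched in a few lines (illustrative only, not the library's implementation): each client gets `capacity` burst tokens, refilled at `refill_rate` tokens per second.

```python
import time

class TokenBucket:
    """Minimal token bucket: `capacity` burst, `refill_rate` tokens/sec."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, then try to spend one token.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.refill_rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```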
### Logging
| Component | Description |
|-----------|-------------|
| `configure_logging()` | Configure structured logging |
| `get_logger()` | Get a logger instance |
| `hash_ip_processor()` | Privacy-preserving IP hashing |
## Protocol Implementations Using Tlacacoca
Tlacacoca is designed to be used by protocol-specific implementations:
- **nauyaca** - Gemini protocol server & client
- **amatl** - Scroll protocol implementation (planned)
## Documentation
### Middleware Return Types
Middleware components return `MiddlewareResult` with protocol-agnostic denial reasons:
```python
from tlacacoca import MiddlewareResult, DenialReason
# Allowed request
result = MiddlewareResult(allowed=True)
# Denied request
result = MiddlewareResult(
allowed=False,
denial_reason=DenialReason.RATE_LIMIT,
retry_after=30
)
```
Protocol implementations map these to their specific status codes:
| Denial Reason | Gemini Status | Description |
|--------------|---------------|-------------|
| `RATE_LIMIT` | 44 SLOW DOWN | Rate limit exceeded |
| `ACCESS_DENIED` | 53 PROXY REFUSED | IP not allowed |
| `CERT_REQUIRED` | 60 CLIENT CERT REQUIRED | Need client certificate |
| `CERT_NOT_AUTHORIZED` | 61 CERT NOT AUTHORIZED | Certificate not in allowed list |
## Contributing
```bash
# Setup
git clone https://github.com/alanbato/tlacacoca.git
cd tlacacoca && uv sync
# Test
uv run pytest
# Lint & Type Check
uv run ruff check src/ tests/
uv run ty check src/
```
See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
MIT License - see [LICENSE](LICENSE) for details.
## Resources
- [SECURITY.md](SECURITY.md) - Security documentation
- [GitHub Issues](https://github.com/alanbato/tlacacoca/issues) - Bug reports
- [GitHub Discussions](https://github.com/alanbato/tlacacoca/discussions) - Questions and ideas
---
**Status**: Active development (pre-1.0). Core security and middleware features are stable.
| text/markdown | Alan Velasco | Alan Velasco <alanvelasco.a@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"cryptography>=46.0.3",
"pyopenssl>=24.0.0",
"structlog>=25.5.0",
"tomli>=2.0.0; python_full_version < \"3.11\"",
"tomli-w>=1.2.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:25:32.413006 | tlacacoca-0.2.1.tar.gz | 20,271 | da/4e/04a40607a24a8158702fd5cf167f7c3a9fbf4f3472fc885e024aa909010a/tlacacoca-0.2.1.tar.gz | source | sdist | null | false | c084c594f7dc3539db013ed0d3ca101d | fb72142a48ff48095b73f6cac835011397f79f2658273173f89b59e2bb9db4b1 | da4e04a40607a24a8158702fd5cf167f7c3a9fbf4f3472fc885e024aa909010a | null | [] | 281 |
2.4 | hpcp | 9.48 | Highly Parallel CoPy / HPC coPy: A simple script optimized for distributed file store / NVMe / SSD storage media for use in High Performance Computing environments. | # hpcp
A simple script that can issue multiple `cp -af` commands simultaneously on a local system.
Optimized for use in HPC scenarios and featuring auto-tuning for files-per-process.
Includes an adaptive progress bar for copying tasks from multiCMD.
Tested on a Lustre filesystem with 1.5 PB capacity running on 180 HDDs. Compared to using `tar`, **hpcp** reduced the time for tarball/image release from over 8 hours to under 10 minutes.
## Development status
Basic functionality (parallel copy) should be stable.
Imaging functionality (source/destination as `.img` files) will be extended with differential image support (differential backup). Imaging is only available on Linux—similar to `tar`, but uses disk images.
Block-image functionality is in **beta**. Only available on Linux. Possible use case: cloning a currently running OS without mounting `/` as read-only.
`hpcp.bat` is available on GitHub: a simple, older Tk-based GUI intended for basic Windows functionality.
## Important Implementation Detail
By default, **hpcp** only checks:
1. The file’s relative path/name is identical.
2. The file mtime is identical.
3. The last `-hs --hash_size` bytes (defaults to `65536`) are identical.
Although in most cases these checks should confirm that both files are identical, in certain scenarios (like bit rot), corrupted files might not be detected. If you need to verify file integrity rather than perform a quick sync, it is recommended to use the `-fh --full_hash` option.
Setting `-hs --hash_size` to `0` disables hash checks entirely. This can be helpful on HDDs, as they usually have suboptimal seek performance. However, HDDs are also more prone to bit rot. If the operator can accept that risk, it is possible to rely solely on mtime checks for file comparison by setting `hash_size` to `0`. (Though on a single HDD, the standard `cp` command is already well-optimized.)
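The quick check can be sketched as follows (illustrative; hpcp prefers xxhash when installed — SHA-256 is used here to keep the sketch stdlib-only):

```python
import hashlib
import os

def tail_hash(path: str, hash_size: int = 65536) -> str:
    # Hash only the last `hash_size` bytes -- hpcp's default quick check.
    with open(path, "rb") as f:
        f.seek(max(0, os.path.getsize(path) - hash_size))
        return hashlib.sha256(f.read()).hexdigest()
```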
## Installation
```bash
pipx install hpcp
```
or
```bash
pip install hpcp
```
After installation, **hpcp** is available as `hpcp`. You can check its version and libraries via:
```bash
hpcp -V
```
It is recommended to install via **pip**, but **hpcp** can also function as a single script using Python’s default libraries.
**Note**:
- Using `pip` will optionally install the hashing library [xxhash](https://github.com/Cyan4973/xxHash), which can reduce CPU usage for partial hashing and increase performance when using `-fh --full_hash`.
- `pip` also installs [multiCMD](https://github.com/yufei-pan/multiCMD), used to issue commands and provide helper functions. If it is not available, `hpcp.py` will use its built-in multiCMD interface, which is more limited, has lower performance, and may have issues with files containing spaces. Please install **multiCMD** if possible.
## Disk Imaging Feature Note
Only available on Linux currently!
`-dd --disk_dump` mode differs from the standard Linux `dd` program. **hpcp** will try to mount the block device/image file to a temporary directory and perform a file-based copy to an identically-created image file specified with `-di --dest_image`. This functionality is implemented crudely and is still an **alpha** feature. It works on basic partition types (it does not work with LVM) with GPT partition tables and has been proven able to clone live running system disks to disk images, which can then be booted without issues.
The created disk image can be resized using the `-ddr --dd_resize` option to the desired size. (This feature is provided so that you can shrink the raw size of the resulting image and provides some shrink capability for XFS.)
For partitions that **hpcp** cannot create a separate unique mount point, **hpcp** will fall back to using the Linux program `dd` to clone the drive. Note that this can be risky and can lead to broken filesystems if the drive is actively being written to. (However, since you generally cannot mount that partition on the current OS, the real-world scenarios for this remain limited.)
## Remove Extra Feature Note
`-rme --remove_extra`: Especially when combined with `-rf`, **PLEASE PAY CLOSE ATTENTION TO YOUR TARGET DIRECTORY!**
`--remove_extra` will remove **all** files that are not in the source path. When you are copying a file into a folder, you almost certainly do not want to use this!
## Remove Feature Note
`-rm --remove` can remove files in bulk. This might be helpful on distributed file systems like Lustre, as it only gathers the file list once and performs bulk deletion rather than the default recursive deletion in the Linux `rm` program.
`-rf --remove_force` implies `--remove`. **Use with care!** This skips the interactive check requiring user confirmation before removing. If **hpcp** did not generate the correct file list from the specified source paths, hopefully you have fast enough reflexes to press `Ctrl + C` repeatedly to stop all parallel deletion processes if you realize a mistake.
`-b --batch`: Using `-b` with `-rm` will gather the file list for all `source_paths` first, then issue the remove command. This can be helpful because **hpcp** will tune its `-f --files_per_job` parameter accordingly for each task, and running one large remove job might be faster than running many small ones. This is especially useful when working with glob patterns like `*`.
```bash
$ hpcp -h
usage: hpcp.py [-h] [-s] [-j MAX_WORKERS] [-b | -nb] [-v] [-do] [-nds] [-fh] [-hs HASH_SIZE] [-fpj FILES_PER_JOB] [-sfl SOURCE_FILE_LIST]
[-fl TARGET_FILE_LIST] [-cfl] [-dfl [DIFF_FILE_LIST]] [-tdfl] [-nhfl] [-rm] [-rf] [-rme] [-e EXCLUDE] [-x EXCLUDE_FILE]
[-nlt] [-V] [-pfl] [-si SRC_IMAGE] [-siff LOAD_DIFF_IMAGE] [-d DEST_PATH] [-rds] [-di DEST_IMAGE] [-dis DEST_IMAGE_SIZE]
[-diff] [-dd] [-ddr DD_RESIZE] [-L RATE_LIMIT] [-F FILE_RATE_LIMIT] [-tfs TARGET_FILE_SYSTEM] [-ncd]
[-ctl COMMAND_TIMEOUT_LIMIT] [-enes]
[src_path ...]
Copy files from source to destination
positional arguments:
src_path Source Path
options:
-h, --help show this help message and exit
-s, --single_thread Use serial processing
-j, -m, -t, --max_workers MAX_WORKERS
Max workers for parallel processing. Default is 4 * CPU count. Use negative numbers to indicate {n} * CPU count, 0
means 1/2 CPU count.
-b, --batch Batch mode, process all files in one go
-nb, --no_batch, --sequential
Do not use batch mode
-v, --verbose Verbose output
-do, --directory_only
Only copy directory structure
-nds, --no_directory_sync
Do not sync directory metadata, useful for verfication
-fh, --full_hash Checks the full hash of files
-hs, --hash_size HASH_SIZE
Hash size in bytes, default is 65536. This means hpcp will only check the last 64 KiB of the file.
-fpj, --files_per_job FILES_PER_JOB
Base number of files per job, will be adjusted dynamically. Default is 1
-sfl, -lfl, --source_file_list SOURCE_FILE_LIST
Load source file list from file. Will treat it raw meaning do not expand files / folders. files are seperated
using newline. If --compare_file_list is specified, it will be used as source for compare
-fl, -tfl, --target_file_list TARGET_FILE_LIST
Specify the file_list file to store list of files in src_path to. If --compare_file_list is specified, it will be
used as targets for compare
-cfl, --compare_file_list
Only compare file list. Use --file_list to specify a existing file list or specify the dest_path to compare
src_path with. When not using with file_list, will compare hash.
-dfl, --diff_file_list [DIFF_FILE_LIST]
Implies --compare_file_list, specify a file name to store the diff file list to or omit the value to auto-
determine.
-tdfl, --tar_diff_file_list
Generate a tar compatible diff file list. ( update / new files only )
-nhfl, --no_hash_file_list
Do not append hash to file list
-rm, --remove Remove all files and folders specified in src_path
-rf, --remove_force Remove all files without prompt
-rme, --remove_extra Remove all files and folders in dest_path that are not in src_path
-e, --exclude EXCLUDE
Exclude source files matching the pattern
-x, --exclude_file EXCLUDE_FILE
Exclude source files matching the pattern in the file
-nlt, --no_link_tracking
Do not copy files that symlinks point to.
-V, --version show program's version number and exit
-pfl, --parallel_file_listing
Use parallel processing for file listing
-si, --src_image SRC_IMAGE
Source Image, mount the image and copy the files from it.
-siff, --load_diff_image LOAD_DIFF_IMAGE
Not implemented. Load diff images and apply the changes to the destination.
-d, -C, --dest_path DEST_PATH
Destination Path
-rds, --random_dest_selection
Randomly select destination path from the list of destination paths instead of filling round robin. Can speed up
transfer if dests are on different devices. Warning: can cause unable to fit in big files as dests are filled up
by smaller files.
-di, --dest_image DEST_IMAGE
Base name for destination Image, create a image file and copy the files into it.
-dis, --dest_image_size DEST_IMAGE_SIZE
Destination Image Size, specify the size of the destination image to split into. Default is 0 (No split). Example:
{10TiB} or {1G}
-diff, --get_diff_image
Not implemented. Compare the source and destination file list, create a diff image of that will update the
destination to source.
-dd, --disk_dump Disk to Disk mirror, use this if you are backuping / deploying an OS from / to a disk. Require 1 source, can be 1
src_path or 1 -si src_image, require 1 -di dest_image. Note: will only actually use dd if unable to mount / create
a partition.
-ddr, --dd_resize DD_RESIZE
Resize the destination image to the specified size with -dd. Applies to biggest partiton first. Specify multiple
-ddr to resize subsequent sized partitions. Example: {100GiB} or {200G}
-L, -rl, --rate_limit RATE_LIMIT
Approximate a rate limit the copy speed in bytes/second. Example: 10M for 10 MB/s, 1Gi for 1 GiB/s. Note: do not
work in single thread mode. Default is 0: no rate limit.
-F, -frl, --file_rate_limit FILE_RATE_LIMIT
Approximate a rate limit the copy speed in files/second. Example: 10K for 10240 files/s, 1Mi for 1024*1024*1024
files/s. Note: do not work in serial mode. Default is 0: no rate limit.
-tfs, --target_file_system TARGET_FILE_SYSTEM
Specify the target file system type. Will abort if the target file system type does not match. Example: ext4, xfs,
ntfs, fat32, exfat. Default is None: do not check target file system type.
-ncd, --no_create_dir
Ignore any destination folder that does not already exist. ( Will still copy if dest is a file )
-ctl, --command_timeout_limit COMMAND_TIMEOUT_LIMIT
Set the command timeout limit in seconds for external commands ( ex. cp / dd ). Default is 0: no timeout.
-enes, --exit_not_enough_space
Exit if there is not enough space on the destination instead of continuing (Note: Default is continue as in
compressed fs copy can be down even if source is bigger than free space).
```
| text/markdown | Yufei Pan | pan@zopyr.us | null | null | GPLv3+ | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows"
] | [] | https://github.com/yufei-pan/hpcp | null | >=3.6 | [] | [] | [] | [
"argparse",
"xxhash",
"multiCMD>=1.35"
] | [] | [] | [] | [] | uv/0.9.28 {"installer":{"name":"uv","version":"0.9.28","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Fedora Linux","version":"43","id":"","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T21:25:08.500042 | hpcp-9.48.tar.gz | 59,345 | 67/c9/b6370ca8ac9817be922a4ce0ed17bea17e0d9eb9ceb45a62bce9f37f240c/hpcp-9.48.tar.gz | source | sdist | null | false | 7891af07ac31aab95faeb08294056ee9 | dba8ea380b9f99387061b05362630d9b09af7e97cb6dd548f0f3a0ce58d7a711 | 67c9b6370ca8ac9817be922a4ce0ed17bea17e0d9eb9ceb45a62bce9f37f240c | null | [] | 263 |
2.4 | multiSSH3 | 6.12 | Run commands on multiple hosts via SSH | # multiSSH3
## Introduction
multiSSH3 is a fast, flexible way to run commands and move files across many hosts in parallel, while watching everything live.
Use it from the CLI for quick fleet actions or import it as a Python module for automation and orchestration.
multiSSH3 understands host groups from env files, expands ranges, reuses SSH sessions, and presents clean outputs for human or machine (json / table).
### Demo Video

https://github.com/user-attachments/assets/5121bb20-4b32-4000-8081-2ad39571781e
### Screenshots
#### *Running `date` on host group `all`*

#### *Running `free` on host group `all`*

#### *Running `ping` on host group `all`*

#### *Curses Help Window within curses display*

#### *Curses single window mode with key '|'*

#### *Running `free -h` on host group `all` with json output*

#### *Running `free -h` on host group `all` with greppable table output*

#### *Broadcasting `./test.txt` to host group `us` at `/tmp/test.txt`*

#### *Syncing `/tmp/test.txt` from local machine to host group `all` (at `/tmp/test.txt`)*

#### *Gathering `/tmp/test.txt` from host group `all` to local machine at `/tmp/test/<hostname>_test.txt`*

#### *Running `power status` using ipmitool on host group `all` with IPMI interface prefixing*

> Note: `DB6` in the image does not have an IPMI-over-Ethernet connection. It has fallen back to running `ipmitool` over SSH.
#### *Running `date; df -h` on host group `all` showing the N-Host Diff (default threshold at 0.6)*

#### *Running `echo hi` on host range `<hostname>[10-20]*`*

#### *Running `echo hi` on host range `127.0.0.1-100`*

#### *Summary shown with some hosts reporting error*

#### *Command syntax / output / runtime comparison with Ansible ad-hoc*

> Note: if you use Ansible, you will likely be running playbooks. This comparison only shows that Ansible is not well suited to running ad-hoc commands.
#### *Command syntax / runtime comparison with pdsh*

#### *Output comparison with pdsh*

## Highlights
- Run commands on many hosts simultaneously and asynchronously ( configurable max connections )
- Live interactive curses UI with per-host status. ( send input to all hosts asynchronously )
- Broadcast / gather files (rsync, falling back to scp), with `--file_sync` syntactic sugar
- Host discovery via env variables, files, ranges, and DNS; smart cached skip of unreachable hosts
- IPMI support with interface IP prefixing and SSH fallback
- Concurrency tuned for large fleets with resource-aware throttling
- Easily persist defaults via config files; ControlMaster config helper for speed
- Importable as a Python module for automation frameworks
- No client side code / dependencies! ( calling system ssh / rsync / scp )
- Supports Windows with OpenSSH (server and client)!
## Why use it?
- If you think Ansible is too slow
- If you think Ansible is too clunky / cluttered
- If you think pdsh is too complicated / too simple
- If you think pdsh output is too messy
- See progress in real time, not in after-the-fact logs
- Operate at scale without drowning your terminal
- Keep your host definitions in simple env files and ranges ( customizable, groupable DNS )
- Drop-in for scripts: stable exit codes, compact summaries, and machine-friendly output
## Install
Install via
```bash
pip install multiSSH3
```
multiSSH3 will be available on the CLI as
```bash
mssh
mssh3
multissh
multissh3
multiSSH3
```
Requires Python 3.6+.
## Configuration
### Config File Chain
multiSSH3 treats config files as definable default values.
Configured values can be inspected by simply running
```bash
mssh -h
```
to print the help message with current default values.
Defaults are read from the following chain of JSON files; files higher in the list overwrite those below:
| `CONFIG_FILE_CHAIN` |
| ------------------------------------------ |
| `./multiSSH3.config.json` |
| `~/multiSSH3.config.json` |
| `~/.multiSSH3.config.json` |
| `~/.config/multiSSH3/multiSSH3.config.json`|
| `/etc/multiSSH3.d/multiSSH3.config.json` |
| `/etc/multiSSH3.config.json` |
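The chain behaves like a first-match-wins lookup. A minimal Python sketch of such a loader using the standard library's `collections.ChainMap` ( illustrative only — `load_config_chain` below is not multiSSH3's actual implementation ):

```python
import json
from collections import ChainMap
from pathlib import Path

# Illustrative only: the real loader in multiSSH3 may differ.
CONFIG_FILE_CHAIN = [
    './multiSSH3.config.json',
    '~/multiSSH3.config.json',
    '~/.multiSSH3.config.json',
    '~/.config/multiSSH3/multiSSH3.config.json',
    '/etc/multiSSH3.d/multiSSH3.config.json',
    '/etc/multiSSH3.config.json',
]

def load_config_chain(paths=CONFIG_FILE_CHAIN):
    """Merge config files; files earlier in the list overwrite later ones."""
    maps = []
    for p in paths:
        f = Path(p).expanduser()
        if f.is_file():
            maps.append(json.loads(f.read_text()))
    return ChainMap(*maps)  # lookups hit the first map that has the key
```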
### Generating / Storing Config File
To store / generate a config file with the current command line options, you can use
```bash
mssh --store_config_file [STORE_CONFIG_FILE_PATH]
```
> `--store_config_file [STORE_CONFIG_FILE_PATH]`<br/>
> is equivalent to <br/>
> `--generate_config_file --config_file [STORE_CONFIG_FILE_PATH]` <br/>
> and <br/>
> `--generate_config_file > [STORE_CONFIG_FILE_PATH]`.
>
>`--generate_config_file` will output to stdout if `--config_file` is not specified.
> Using `--store_config_file` without a path will store to `multiSSH3.config.json` in the current working directory.
You can modify the json file or use command line arguments to update values and store them as defaults.
```bash
mssh --timeout 180 --store_config_file
```
will store
```json
...
"DEFAULT_CLI_TIMEOUT": "180"
...
```
into `./multiSSH3.config.json`
> If you store a password, it will be saved as plain text in this config file. It is better to supply it every time as a CLI argument, but you should really consider setting up public-key authentication.
> On some systems, scp / rsync will require public-key authentication to work.<br/>
### SSH ControlMaster Configuration
Note: you probably want to set up persistent SSH connections to speed up repeated connections.
You can add the following to your `~/.ssh/config` file ( enabling ControlMaster with one-hour persistence ) by running
```bash
mssh --add_control_master_config
```
```config
Host *
ControlMaster auto
ControlPath /run/user/%i/ssh_sockets_%C
ControlPersist 3600
```
## Environment Variables for Hostname Aliases
multiSSH3 is able to read hostname groups / aliases from environment variables.
By default, mssh reads environment variables and recursively resolves them when given a hostname that cannot be resolved from `/etc/hosts`. This functions as a pseudo-DNS service for hostname grouping.
> Use `[0-9]`, `[0-f]`, `[a-Z]` ranges in the hostname strings for range expansion.
> Use `,` to separate multiple hostnames / hostname groups.
### Hostname Resolution Logic
First, mssh will expand [a-b] ranges in the hostname strings.
- Check if an IPv4 range is given
  - Expand using IPv4 range expansion logic.
  - Return
- Check if the range given is all numerical
  - Expand using numerical expansion logic.
  - Resolve hostnames
  - Return
- Check if the range given is all hex characters
  - Expand using hex expansion logic.
  - Resolve hostnames
  - Return
- Else
  - Expand using alphanumeric expansion logic.
  - Resolve hostnames
  - Return
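The numerical case above can be sketched in a few lines of Python ( illustrative only — `expand_ranges` is a hypothetical helper, and multiSSH3's real logic also covers IPv4, hex, and alphanumeric ranges ):

```python
import itertools
import re

# Illustrative sketch, not multiSSH3's actual code: numeric bracket ranges only.
def expand_ranges(pattern):
    """Expand e.g. 'node[1-3]' -> ['node1', 'node2', 'node3']."""
    parts = re.split(r'\[([0-9]+)-([0-9]+)\]', pattern)
    fixed = parts[0::3]                       # literal text between the ranges
    ranges = [range(int(lo), int(hi) + 1)
              for lo, hi in zip(parts[1::3], parts[2::3])]
    results = []
    for combo in itertools.product(*ranges):  # all combinations of range values
        pieces = [fixed[0]]
        for number, tail in zip(combo, fixed[1:]):
            pieces.append(str(number))
            pieces.append(tail)
        results.append(''.join(pieces))
    return results
```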
When a hostname needs to be resolved, mssh checks in the following order:
- Return if it is an IPv4 address
- Return if the hostname is in /etc/hosts
- If `-C, --no_env` is not specified and the hostname is in the current terminal environment variables
  - Redo the whole range -> hostname resolution logic with the resolved env variable value.
- If the hostname is in the map generated from env file(s) specified by `--env_files ENV_FILES`
  - Redo the whole range -> hostname resolution logic with the resolved env variable value.
- Look up using `socket.gethostbyname()` to query the DNS server. ( Slow! )
> TLDR:
>
> ipv4 address expansion -> range expansion -> identify ipv4 -> resolve using environment variables ( if not no_env ) -> env map from env_files -> remote hostname resolution
### Default Env Files
> Because command environment variables take precedence over env files, you can specify `-C, --no_env` or set `"DEFAULT_NO_ENV": true` in config file to disable environment variable lookup.
> Use `-ef ENV_FILE, --env_file ENV_FILE` to specify a single env file to replace the default env file lookup chain. ( Only this file will be used. )
> Use `-efs ENV_FILES, --env_files ENV_FILES` to append files to the end of the default env file lookup chain. ( Files will be loaded first to last. )
| `DEFAULT_ENV_FILES` |
| ------------- |
| `/etc/profile.d/hosts.sh` |
| `~/.bashrc` |
| `~/.zshrc` |
| `~/host.env` |
| `~/hosts.env` |
| `.env` |
| `host.env` |
| `hosts.env` |
Later files take precedence over earlier files.
### Example Env Hostname File
An example hostname alias file will look like:
```bash
us_east='100.100.0.1-3,us_east_prod_[1-5]'
us_central=""
us_west="100.101.0.1-2,us_west_prod_[a-c]_[1-3]"
us="$us_east,$us_central,$us_west"
asia="100.90.0-1.1-9"
eu=''
rhel8="$asia,$us_east"
all="$us,$asia,$eu"
```
( You can use bash replacements for grouping. )
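Resolution of such alias groups is recursive: a group can reference other groups, which are expanded until only concrete hosts / ranges remain. A rough sketch, with a hypothetical `resolve_aliases` helper ( not multiSSH3's actual code ):

```python
# Illustrative sketch, not multiSSH3's actual code.
def resolve_aliases(name, aliases, _seen=frozenset()):
    """Recursively expand comma-separated alias groups into concrete host strings."""
    if name in _seen:                 # guard against alias cycles
        return []
    if name not in aliases:           # not an alias: treat as a concrete host / range
        return [name] if name else []
    hosts = []
    for part in aliases[name].split(','):
        hosts.extend(resolve_aliases(part, aliases, _seen | {name}))
    return hosts
```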
### Hostname Range Expansion
mssh is also able to recognize ip blocks / number blocks / hex blocks / character blocks directly.
For example:
```bash
mssh testrig[1-10] lsblk
mssh ww[a-c],10.100.0.* 'cat /etc/fstab' "sed -i '/lustre/d' /etc/fstab" 'cat /etc/fstab'
```
## Misc Features
### Interactive Inputs
It also supports interactive input ( and can asynchronously broadcast it to all supplied hosts ).
```bash
mssh www bash
```
mssh caches all inputs and sends them to each host when it is ready to receive input.
### Curses Window Size Control
By default, it will try to fit everything inside your window.
```bash
DEFAULT_WINDOW_WIDTH = 40
DEFAULT_WINDOW_HEIGHT = 1
```
By default it leaves a minimum of 40 characters / 1 line for each host display. You can modify this using `-ww WINDOW_WIDTH, --window_width WINDOW_WIDTH` and `-wh WINDOW_HEIGHT, --window_height WINDOW_HEIGHT`.
> It is also possible to modify the window size within curses display by pressing keys:
> ```
> ? : Toggle Help Menu
> _ or + : Change window height
> { or } : Change window width
> < or > : Change host index
> |(pipe) : Toggle single host
> Ctrl+D : Exit
> Ctrl+R : Force refresh
> ↑ or ↓ : Navigate history
> ← or → : Move cursor
> PgUp/Dn : Scroll history by 5
> Home/End: Jump cursor
> Esc : Clear line
> ```
> You can also toggle this help in curses by pressing `?` or `F1`.
### Command String Replacement
mssh will replace some magic strings in the command string with host specific values.
| Magic String | Description |
| ------------- | ------------------------------------ |
| `#HOST#` | Replaced with the expanded name /IP |
| `#HOSTNAME#` | Replaced with the expanded name / IP |
| `#USER#` | Replaced with the username |
| `#USERNAME#` | Replaced with the username |
| `#ID#` | Replaced with the ID of the host obj |
| `#I#` | Replaced with the index of the host |
| `#PASSWD#` | Replaced with the password |
| `#PASSWORD#` | Replaced with the password |
| `#UUID#` | Replaced with the UUID of the host |
| `#RESOLVEDNAME#`| Replaced with the resolved name |
| `#IP#` | Replaced with the resolved IP |
> Note: `#HOST#` and `#HOSTNAME#` are the supplied hostname / IP before any resolution.
> Note: the resolved name is the IP / hostname with the user and IP prefix applied. This is what is actually used to connect to the host.
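The substitution itself amounts to a per-host string replacement. A simplified sketch ( `fill_magic_strings` and its host-dict shape are hypothetical, not multiSSH3's internals; only a subset of the table is shown ):

```python
# Illustrative sketch, not multiSSH3's internals; the host-dict shape is hypothetical.
def fill_magic_strings(command, host):
    """Replace #HOST#-style placeholders with per-host values (subset of the table)."""
    substitutions = {
        '#HOST#': host['name'],   '#HOSTNAME#': host['name'],
        '#USER#': host['user'],   '#USERNAME#': host['user'],
        '#ID#': str(host['id']),  '#I#': str(host['i']),
        '#RESOLVEDNAME#': host['resolved'],
        '#IP#': host['ip'],
    }
    for magic, value in substitutions.items():
        command = command.replace(magic, value)
    return command
```

This is the mechanism that makes gather mode practical: each host can write to its own local path, e.g. `/tmp/test/#HOST#_test.txt`.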
## Options
Below are the options for multiSSH3 6.02 @ 2025-11-10.
### `-u, --username USERNAME` | _`DEFAULT_USERNAME`_
- You can specify the username for all hosts using this option and / or specifying username@hostname per host in the host list.
### `-p, --password PASSWORD` | _`DEFAULT_PASSWORD`_
- You can specify the password for all hosts using this option, although it is recommended to use SSH keys, or to store the password in a config file, for authentication.
### `-k, --key, --identity [KEY]` | _`DEFAULT_IDENTITY_FILE`_
- You can specify the identity file or folder to use for public key authentication. If a folder is specified, it will search for a key file inside the folder.
- This option implies `--use_key`.
- If this option is not specified but `--use_key` is specified, it will search for identity files in `DEFAULT_IDENTITY_FILE`.
- If no value is specified, it will search `DEFAULT_SSH_KEY_SEARCH_PATH`.
### `-uk, --use_key` | _`DEFAULT_USE_KEY`_
- Attempt to use public key authentication to connect to the hosts.
- Will search for identity file in `DEFAULT_IDENTITY_FILE` if `--identity` is not specified.
### `-ea, --extraargs EXTRAARGS` | _`DEFAULT_EXTRA_ARGS`_
- Extra arguments to pass to the ssh / rsync / scp command. Put in one string for multiple arguments.
- Example:
```bash
mssh -ea="--delete" -f ./data/ allServers /tmp/data/
```
### `-11, --oneonone` | _`DEFAULT_ONE_ON_ONE`_
- Run commands in one-on-one mode, where each command corresponds to each host, front to back.
- If command list length is not equal to expanded host list length, an error will be raised.
### `-f, --file FILE` | None
- The file to be copied to the hosts. Use -f multiple times to copy multiple files.
- When file is specified, the command(s) will be treated as the destination path(s) on the remote hosts.
- By default, rsync will be tried first on Linux; scp will be used on Windows, or on Linux if rsync has failed.
### `-s, -fs, --file_sync [FILE_SYNC]` | _`DEFAULT_FILE_SYNC`_
- Operate in file sync mode, sync path in `<COMMANDS>` from this machine to `<HOSTS>`.
- Treats `--file <FILE>` and `<COMMANDS>` both as sources; in this mode the source and destination paths are the same.
- The destination path will be inferred from the source path ( absolute path ).
- `-fs` can also be followed by a path, syntactic sugar for specifying the path after the option.
- Example:
```bash
mssh -fs -- allServers ./data/ # is equivalent to
mssh -fs ./data/ allServers # is equivalent to
mssh -fs -f ./data/ allServers # is equivalent to
# if the cwd is at /tmp,
mssh -f ./data/ allServers /tmp/data/
```
### `-W, --scp` | _`DEFAULT_SCP`_
- Use scp for copying files by default instead of trying rsync.
- Can speed up operation if we know rsync will not be available on remote hosts.
### `-G, -gm, --gather_mode` | False
- Gather files from the hosts instead of sending files to the hosts.
- Will send remote files specified in `<FILE>` to local path specified in `<COMMANDS>`.
- Likely you will need to combine with the [Command String Replacement](#command-string-replacement) feature to let each host transfer to a different local path.
### `-t, --timeout TIMEOUT` | _`DEFAULT_CLI_TIMEOUT`_
- Timeout for each command in seconds.
- When using 0, timeout is disabled.
- For CLI interface, will use `DEFAULT_CLI_TIMEOUT` as default.
- For module interface, will use `DEFAULT_TIMEOUT` as default.
### `-T, --use_script_timeout` | False
- In CLI, use `DEFAULT_TIMEOUT` as timeout value instead of `DEFAULT_CLI_TIMEOUT`.
- This is to emulate the module interface behavior as if using in a script.
### `-r, --repeat REPEAT` | _`DEFAULT_REPEAT`_
- Repeat the commands for a number of times.
- Commands will be repeated in sequence for the specified number of times.
- Between repeats, it will wait for `--interval INTERVAL` seconds.
### `-i, --interval INTERVAL` | _`DEFAULT_INTERVAL`_
- Interval between command repeats in seconds.
- Only effective when `REPEAT` is greater than 1.
- Note: will wait for `INTERVAL` seconds before first run if `REPEAT` is greater than 1.
### `-M, --ipmi` | _`DEFAULT_IPMI`_
- Use ipmitool to run the command instead of ssh.
- Will strip `ipmitool` from the start of the command if it is present.
- Will replace the head of the host's resolved IP address with `DEFAULT_IPMI_INTERFACE_IP_PREFIX` in this mode
- Ex: `10.0.0.1` + `DEFAULT_IPMI_INTERFACE_IP_PREFIX='192.168'` -> `192.168.0.1`
- Will retry using the original IP and run `ipmitool` over ssh if the IPMI connection failed. ( Will add `ipmitool` to the command if not present )
### `-mpre, --ipmi_interface_ip_prefix IPMI_INTERFACE_IP_PREFIX` | _`DEFAULT_IPMI_INTERFACE_IP_PREFIX`_
- The prefix of the IPMI interfaces. Will replace the resolved IP address with the given prefix when using ipmi mode.
- This will take precedence over `INTERFACE_IP_PREFIX` when in ipmi mode.
- Ex: `10.0.0.1` + `-mpre '192.168'` -> `192.168.0.1`
### `-pre, --interface_ip_prefix INTERFACE_IP_PREFIX` | _`DEFAULT_INTERFACE_IP_PREFIX`_
- The IP prefix for the interfaces. Will replace the resolved IP address with the given prefix when connecting to the host.
- Will prioritize `IPMI_INTERFACE_IP_PREFIX` if it exists when in ipmi mode.
- Ex: `10.0.0.1` + `-pre '172.30'` -> `172.30.0.1`
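The prefix substitution in both examples above follows the same octet-replacement pattern; a minimal sketch with a hypothetical helper ( not multiSSH3's actual code ):

```python
# Illustrative sketch of interface-IP-prefix substitution; not multiSSH3's actual code.
def apply_ip_prefix(ip, prefix):
    """Replace the leading octets of `ip` with the octets given in `prefix`.
    '10.0.0.1' + '192.168' -> '192.168.0.1'"""
    if not prefix:
        return ip
    prefix_octets = prefix.split('.')
    octets = ip.split('.')
    # keep only the trailing octets that the prefix does not cover
    return '.'.join(prefix_octets + octets[len(prefix_octets):])
```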
### `-iu, --ipmi_username IPMI_USERNAME` | _`DEFAULT_IPMI_USERNAME`_
- The username to use to connect to the hosts via ipmi.
- This will be used when `--ipmi` is specified.
- If this is not specified, `DEFAULT_USERNAME` will be used in ipmi mode.
### `-ip, --ipmi_password IPMI_PASSWORD` | _`DEFAULT_IPMI_PASSWORD`_
- The password to use to connect to the hosts via ipmi.
- This will be used when `--ipmi` is specified.
- If this is not specified, `DEFAULT_PASSWORD` will be used in ipmi mode.
### `-S, -q, -nw, --no_watch` | _`DEFAULT_NO_WATCH`_
- Disable the curses terminal display and only print the output.
- Note: this is not 'quiet mode' in the traditional sense; use `-Q, -no, --quiet, --no_output` to disable output.
- Useful in scripting to reduce runtime and terminal flashing.
### `-ww, --window_width WINDOW_WIDTH` | _`DEFAULT_WINDOW_WIDTH`_
- The minimum character length of the curses window.
- Default is 40 characters.
- Will try to fit as many hosts as possible in the terminal window while leaving at least this many characters for each host display.
- You can modify this value in curses display by pressing `{` or `}`.
### `-wh, --window_height WINDOW_HEIGHT` | _`DEFAULT_WINDOW_HEIGHT`_
- The minimum line height of the curses window.
- Default is 1 line.
- Will try to fit as many hosts as possible in the terminal window while leaving at least this many lines for each host display.
- You can modify this value in curses display by pressing `_` or `+`.
- The terminal will overflow if it is smaller than ww * wh.
### `-B, -sw, --single_window` | _`DEFAULT_SINGLE_WINDOW`_
- Use a single window mode for curses display.
- This shows a single large window for a host for detailed monitoring.
- You can rotate between hosts by pressing `<` or `>` in curses display. ( also works in non single window mode )
- You can toggle single window mode in curses display by pressing `|` ( pipe ).
### `-R, -eo, --error_only` | _`DEFAULT_ERROR_ONLY`_
- Print `Success` if all hosts return zero.
- Only print output for the hosts that return non-zero.
- Useful in scripting to reduce output.
### `-Q, -no, --no_output, --quiet` | _`DEFAULT_NO_OUTPUT`_
- Do not print any output.
- Note: if using without `--no_watch`, the curses display will still be shown.
- Useful in scripting when failure is expected.
- Note: return code will still be returned correctly unless `-Z, -rz, --return_zero` is specified.
### `-Z, -rz, --return_zero` | _`DEFAULT_RETURN_ZERO`_
- Return 0 even if there are errors.
- Useful in scripting when failure is expected and bash is set to exit when error occurs.
### `-C, --no_env` | _`DEFAULT_NO_ENV`_
- Do not load the command line environment variables for hostname resolution.
- Only use `/etc/hosts` -> env files specified in `--env_files ENV_FILES` -> DNS for hostname resolution.
- Useful when environment variables could interfere with hostname resolution ( for example, stale values when the environment was not reloaded after being refreshed ).
### `-ef, --env_file ENV_FILE` | None
- Replace the env file lookup chain with this env_file. ( Still works with `--no_env` )
- Only this file will be used for env file hostname resolution.
- Useful when you want to use a specific env file for hostname resolution.
### `-efs, --env_files ENV_FILES` | _`DEFAULT_ENV_FILES`_
- The files to load the environment variables for hostname resolution.
- Can specify multiple. Loaded first to last. ( Still works with `--no_env` )
- Useful when you want to add additional env files for hostname resolution.
### `-m, --max_connections MAX_CONNECTIONS` | _`DEFAULT_MAX_CONNECTIONS`_
- The maximum number of concurrent connections to establish.
- Default is 4 * cpu_count connections.
- Useful for limiting the number of simultaneous SSH connections to avoid overwhelming compute resources / security limits.
- Note: mssh opens at least 3 files per connection. By default, some Linux systems set `ulimit -n` to only 1024 files, which allows roughly 300 simultaneous connections. You can increase the `ulimit -n` value to allow more connections if needed.
- You will observe `Warning: The number of maximum connections {max_connections} is larger than estimated limit {estimated_limit} .....` if the max connections value is larger than the estimated limit.
- mssh will also throttle thread creation if the estimated limit is lower than `2 * max_connections` to avoid hitting the file descriptor limit, as Python uses some file descriptors when setting up threads.
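The file-descriptor arithmetic above can be sketched as follows ( the helper names and the `reserved` headroom are assumptions for this sketch, not multiSSH3's actual code; the `resource` module is Unix-only ):

```python
import os
import resource  # Unix-only

# Illustrative estimate of a safe connection ceiling; not multiSSH3's actual code.
# `reserved` (fds kept aside for Python itself) is an assumption for this sketch.
def estimated_connection_limit(soft_limit=None, fds_per_connection=3, reserved=64):
    """Rough max-connection count derived from the open-file limit (ulimit -n)."""
    if soft_limit is None:
        soft_limit, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return max(1, (soft_limit - reserved) // fds_per_connection)

def default_max_connections():
    """The documented default: 4 * cpu_count."""
    return 4 * (os.cpu_count() or 1)
```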
### `-j, --json` | _`DEFAULT_JSON_OUTPUT`_
- Output in json format.
- Will also respect `-R, -eo, --error_only` and `-Q, -no, --quiet, --no_output` options.
### `-w, --success_hosts` | _`DEFAULT_SUCCESS_HOSTS`_
- By default, a summary of failed hosts is printed.
- Use this option to also print the hosts that succeeded in the summary.
- Useful when you want to do something with the succeeded hosts later.
- Note: you can directly use the failed / succeeded host list string as it should be fully compatible with mssh host input.
### `-P, -g, --greppable, --table` | _`DEFAULT_GREPPABLE_OUTPUT`_
- Output in greppable table.
- Each line contains: Hostname / Resolved Name / Return Code / Output Type / Output
- Note a host can have multiple lines if the output contains multiple lines.
- Useful in a script if we are piping the output to a log file for later grepping.
### `-x, -su, --skip_unreachable` | _`DEFAULT_SKIP_UNREACHABLE`_
- Skip unreachable hosts.
- Note: timed-out hosts are considered unreachable.
- By default, mssh sets this to true to speed up operations on large host lists with some unreachable hosts.
- Unreachable hosts will be tried again when their timeout expires.
- mssh stores the current run's unreachable hosts in memory, and if `skip_unreachable` is true, it will store them in a temporary file called `__{username}_multiSSH3_UNAVAILABLE_HOSTS.csv` in the system temp folder.
- To force mssh to not use unreachable hosts from previous runs, you can use `-a, -nsu, --no_skip_unreachable` to set `skip_unreachable` to false.
### `-a, -nsu, --no_skip_unreachable` | not _`DEFAULT_SKIP_UNREACHABLE`_
- Do not skip unreachable hosts.
- This forms a mutually exclusive pair with `-x, -su, --skip_unreachable`.
- This option sets `skip_unreachable` to false.
### `-uhe, --unavailable_host_expiry UNAVAILABLE_HOST_EXPIRY` | _`DEFAULT_UNAVAILABLE_HOST_EXPIRY`_
- The expiry time in seconds for unreachable hosts stored in the temporary unavailable hosts file.
- Default is 600 seconds ( 10 minutes ).
- Note: because mssh stores hostname: expire_time pairs in the unavailable hosts file, an operator can use a different expiry time for different runs to control how long unreachable hosts are skipped, and each entry will expire at the correct time.
- Note: mssh stores the expiry time as monotonic time. On most systems this means the expiry will not persist across reboots ( the file also lives in the system temp folder ), although unavailable hosts can accidentally persist across reboots if the system reboots often and `unavailable_host_expiry` is set to a very large value.
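The expiry bookkeeping described above boils down to hostname / monotonic-expiry pairs in a CSV. A rough sketch ( helper names are hypothetical; the file name follows the README's description ):

```python
import csv
import os
import tempfile
import time

# Illustrative sketch; helper names are hypothetical. The file name follows the
# README's description of the unavailable-hosts cache.
def unavailable_hosts_path(username):
    return os.path.join(tempfile.gettempdir(),
                        f'__{username}_multiSSH3_UNAVAILABLE_HOSTS.csv')

def load_unexpired(path, now=None):
    """Return hosts whose stored monotonic expiry time is still in the future."""
    now = time.monotonic() if now is None else now
    if not os.path.isfile(path):
        return set()
    with open(path, newline='') as f:
        return {row[0] for row in csv.reader(f)
                if len(row) >= 2 and float(row[1]) > now}
```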
### `-X, -sh, --skip_hosts SKIP_HOSTS` | _`DEFAULT_SKIP_HOSTS`_
- A comma separated list of hosts to skip.
- This field will be expanded in the same way as the host list.
- Useful when you want to skip some hosts temporarily without modifying the host list.
### `--generate_config_file` | False
- Generate a config file with the current command line options.
- Outputs to stdout if `--config_file` is not specified.
### `--config_file [CONFIG_FILE]` | None
- Additional config file path to load options from.
- Will be loaded last, thus overwriting other config file values.
- Use without value to use `multiSSH3.config.json` in the current working directory.
- Also used with `--generate_config_file` to specify output path.
### `--store_config_file [STORE_CONFIG_FILE]` | None
- Store the current command line options to a config file.
- Equivalent to `--generate_config_file --config_file [STORE_CONFIG_FILE]`
- Outputs to `multiSSH3.config.json` in the current working directory if no path is specified.
### `--debug` | False
- Enable debug mode.
- Print host-specific debug messages to the host's stderr.
### `-ci, --copy_id` | False
- `copy_id` mode, use `ssh-copy-id` to copy public key to the hosts.
- Will use the identity file if specified in `-k, --key, --identity [KEY]`
- Will respect `-u, --username` and `-p, --password` options for username and password. ( password will need `sshpass` to be installed, or it will prompt for password interactively )
### `-I, -nh, --no_history` | _`DEFAULT_NO_HISTORY`_
- Do not store command history.
- By default, mssh stores command history in `HISTORY_FILE`.
- Useful in scripts when you do not want to store command history.
### `-hf, --history_file HISTORY_FILE` | _`DEFAULT_HISTORY_FILE`_
- The file to store command history.
- By default, mssh stores command history in `~/.mssh_history`.
- The history file is a TSV ( tab separated values ) file with each line containing: timestamp, mssh_path, options, hosts, commands.
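Since the history file is plain TSV, it can be read back with the standard `csv` module; a small illustrative reader ( `read_history` is a hypothetical helper, not part of multiSSH3 ):

```python
import csv

# Illustrative reader for the history TSV described above; not part of multiSSH3.
def read_history(path):
    """Yield (timestamp, mssh_path, options, hosts, commands) tuples."""
    with open(path, newline='') as f:
        for row in csv.reader(f, delimiter='\t'):
            if len(row) >= 5:
                yield tuple(row[:5])
```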
### `--script` | False
- Script mode, syntactic sugar for `-SCRIPT` or `--no_watch --skip_unreachable --no_env --no_history --greppable --error_only`.
- Useful when using mssh in shell scripts.
### `-e, --encoding ENCODING` | _`DEFAULT_ENCODING`_
- The encoding to use for decoding the output from the hosts.
- Default is `utf-8`.
### `-dt, --diff_display_threshold DIFF_DISPLAY_THRESHOLD` | _`DEFAULT_DIFF_DISPLAY_THRESHOLD`_
- The threshold of different lines to total lines ratio to trigger N-body diff display mode.
- When the output difference ratio exceeds this threshold, mssh will display the diff between outputs instead of the full outputs.
- Useful when the outputs are large and mostly similar.
- Set to 0.0 to always use diff display mode.
- Set to 1.0 to never use diff display mode.
- Note: this uses a custom N-body diff algorithm and consumes some memory.
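For intuition, a two-output version of the difference-ratio check can be sketched with the standard library's `difflib` ( multiSSH3's actual algorithm is a custom N-body diff across all hosts' outputs; the helpers below are illustrative ):

```python
import difflib

# Illustrative two-output version; multiSSH3's actual algorithm is a custom
# N-body diff across all hosts' outputs.
def diff_ratio(lines_a, lines_b):
    """Fraction of lines that differ between two outputs (0.0 = identical)."""
    return 1.0 - difflib.SequenceMatcher(a=lines_a, b=lines_b).ratio()

def should_show_diff(lines_a, lines_b, threshold=0.6):
    """Mirror the rule above: show the diff when the ratio exceeds the threshold."""
    return diff_ratio(lines_a, lines_b) > threshold
```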
### `--force_truecolor` | _`DEFAULT_FORCE_TRUECOLOR`_
- Force enable truecolor support in curses display.
- Useful when your terminal supports truecolor but is not detected correctly.
### `--add_control_master_config` | False
- Add ControlMaster configuration to your `~/.ssh/config` file to enable persistent ssh connections.
- This will help speed up connections to multiple hosts.
- The configuration added is:
```config
Host *
ControlMaster auto
ControlPath /run/user/%i/ssh_sockets_%C
ControlPersist 3600
```
### `-V, --version` | False
- Print the version of multiSSH3 and exit.
- Will also print the system binaries found for use when setting up connections.
## Usage
Use `mssh --help` for more info.
Below is a sample help message output from multiSSH3 6.02 @ 2025-11-10
```bash
$ mssh -h
usage: multiSSH3.py [-h] [-u USERNAME] [-p PASSWORD] [-k [IDENTITY_FILE]] [-uk] [-ea EXTRAARGS] [-11] [-f FILE] [-s [FILE_SYNC]] [-W] [-G] [-t TIMEOUT] [-T] [-r REPEAT] [-i INTERVAL] [-M] [-mpre IPMI_INTERFACE_IP_PREFIX] [-pre INTERFACE_IP_PREFIX] [-iu IPMI_USERNAME] [-ip IPMI_PASSWORD] [-S] [-ww WINDOW_WIDTH] [-wh WINDOW_HEIGHT] [-B] [-R] [-Q] [-Z] [-C] [-ef ENV_FILE] [-efs ENV_FILES] [-m MAX_CONNECTIONS] [-j] [-w] [-P] [-x | -a] [-uhe UNAVAILABLE_HOST_EXPIRY] [-X SKIP_HOSTS] [--generate_config_file] [--config_file [CONFIG_FILE]] [--store_config_file [STORE_CONFIG_FILE]] [--debug] [-ci] [-I] [-hf HISTORY_FILE] [--script] [-e ENCODING] [-dt DIFF_DISPLAY_THRESHOLD] [--force_truecolor] [--add_control_master_config] [-V] [hosts] [commands ...]
Run a command on multiple hosts, Use #HOST# or #HOSTNAME# to replace the host name in the command.
positional arguments:
hosts Hosts to run the command on, use "," to seperate hosts. (default: all)
commands the command to run on the hosts / the destination of the files #HOST# or #HOSTNAME# will be replaced with the host name.
options:
-h, --help show this help message and exit
-u, --username USERNAME
The general username to use to connect to the hosts. Will get overwrote by individual username@host if specified. (default: None)
-p, --password PASSWORD
The password to use to connect to the hosts, (default: )
-k, --identity_file, --key, --identity [IDENTITY_FILE]
The identity file to use to connect to the hosts. Implies --use_key. Specify a folder for program to search for a key. Use option without value to use ~/.ssh/ (default: None)
-uk, --use_key Attempt to use public key file to connect to the hosts. (default: False)
-ea, --extraargs EXTRAARGS
Extra arguments to pass to the ssh / rsync / scp command. Put in one string for multiple arguments.Use "=" ! Ex. -ea="--delete" (default: None)
-11, --oneonone Run one corresponding command on each host. (default: False)
-f, --file FILE The file to be copied to the hosts. Use -f multiple times to copy multiple files
-s, -fs, --file_sync [FILE_SYNC]
Operate in file sync mode, sync path in <COMMANDS> from this machine to <HOSTS>. Treat --file <FILE> and <COMMANDS> both as source and source and destination will be the same in this mode. Infer destination from source path. (default: False)
-W, --scp Use scp for copying files instead of rsync. Need to use this on windows. (default: False)
-G, -gm, --gather_mode
Gather files from the hosts instead of sending files to the hosts. Will send remote files specified in <FILE> to local path specified in <COMMANDS> (default: False)
-t, --timeout TIMEOUT
Timeout for each command in seconds. Set default value via DEFAULT_CLI_TIMEOUT in config file. Use 0 for disabling timeout. (default: 0)
-T, --use_script_timeout
Use shortened timeout suitable to use in a script. Set value via DEFAULT_TIMEOUT field in config file. (current: 50)
-r, --repeat REPEAT Repeat the command for a number of times (default: 1)
-i, --interval INTERVAL
Interval between repeats in seconds (default: 0)
-M, --ipmi Use ipmitool to run the command. (default: False)
-mpre, --ipmi_interface_ip_prefix IPMI_INTERFACE_IP_PREFIX
The prefix of the IPMI interfaces (default: )
-pre, --interface_ip_prefix INTERFACE_IP_PREFIX
The prefix of the for the interfaces (default: None)
-iu, --ipmi_username IPMI_USERNAME
The username to use to connect to the hosts via ipmi. (default: ADMIN)
-ip, --ipmi_password IPMI_PASSWORD
The password to use to connect to the hosts via ipmi. (default: )
-S, -q, -nw, --no_watch
Quiet mode, no curses watch, only print the output. (default: False)
-ww, --window_width WINDOW_WIDTH
The minimum character length of the curses window. (default: 40)
-wh, --window_height WINDOW_HEIGHT
The minimum line height of the curses window. (default: 1)
-B, -sw, --single_window
Use a single window for all hosts. (default: False)
-R, -eo, --error_only
Only print the error output. (default: False)
-Q, -no, --no_output, --quiet
Do not print the output. (default: False)
-Z, -rz, --return_zero
Return 0 even if there are errors. (default: False)
-C, --no_env Do not load the command line environment variables. (default: False)
-ef, --env_file ENV_FILE
Replace the env file look up chain with this env_file. ( Still work with --no_env ) (default: None)
-efs, --env_files ENV_FILES
The files to load the mssh file based environment variables from. Can specify multiple. Load first to last. ( Still work with --no_env ) (default: ['/etc/profile.d/hosts.sh', '~/.bashrc', '~/.zshrc', '~/host.env', '~/hosts.env', '.env', 'host.env', 'hosts.env'])
-m, --max_connections MAX_CONNECTIONS
Max number of connections to use (default: 4 * cpu_count)
-j, --json Output in json format. (default: False)
-w, --success_hosts Output the hosts that succeeded in summary as well. (default: False)
-P, -g, --greppable, --table
Output in greppable table. (default: False)
-x, -su, --skip_unreachable
Skip unreachable hosts. Note: Timedout Hosts are considered unreachable. Note: multiple command sequence will still auto skip unreachable hosts. (default: True)
-a, -nsu, --no_skip_unreachable
Do not skip unreachable hosts. Note: Timedout Hosts are considered unreachable. Note: multiple command sequence will still auto skip unreachable hosts. (default: False)
-uhe, --unavailable_host_expiry UNAVAILABLE_HOST_EXPIRY
Time in seconds to expire the unavailable hosts (default: 600)
-X, -sh, --skip_hosts SKIP_HOSTS
Skip the hosts in the list. (default: None)
--generate_config_file
Store / generate the default config file from command line argument and current config at --config_file / stdout
--config_file [CONFIG_FILE]
Additional config file to use, will pioritize over config chains. When using with store_config_file, will store the resulting config file at this location. Use without a path will use multiSSH3.config.json
--store_config_file [STORE_CONFIG_FILE]
Store the default config file from command line argument and current config. Same as --store_config_file --config_file=<path>
--debug Print debug information
-ci, --copy_id Copy the ssh id to the hosts
-I, -nh, --no_history
Do not record the command to history. Default: False
-hf, --history_file HISTORY_FILE
The file to store the history. (default: ~/.mssh_history)
--script Run the command in script mode, short for -SCRIPT or --no_watch --skip_unreachable --no_env --no_history --greppable --error_only
-e, --encoding ENCODING
The encoding to use for the output. (default: utf-8)
-dt, --diff_display_threshold DIFF_DISPLAY_THRESHOLD
The threshold of lines to display the diff when files differ. {0-1} Set to 0 to always display the diff. Set to 1 to disable diff. (Only merge same) (default: 0.6)
--force_truecolor Force truecolor output even when not in a truecolor terminal. (default: False)
--add_control_master_config
Add ControlMaster configuration to ~/.ssh/config to speed up multiple connections to the same host.
-V, --version show program's version number and exit
```
Note: The default values can be modified / updated in the [config file](#configuration).
## Importing as a Module
You can also import multiSSH3 as a module in your Python scripts.
### Host Object
The `Host` object represents a host and its command execution state. The main execution function `run_command_on_hosts` returns a list of `Host` objects.
```python
class Host:
def __init__(self, name, command, files = None,ipmi = False,interface_ip_prefix = None,scp=False,extraargs=None,gatherMode=False,identity_file=None,shell=False,i = -1,uuid=uuid.uuid4(),ip = None):
self.name = name # the name of the host (hostname or IP address)
self.command = command # the command to run on the host
self.returncode = None # the return code of the command
self.output = [] # the output of the command for curses
self.stdout = [] # the stdout of the command
self.stderr = [] # the stderr of the command
self.lineNumToPrintSet = set() # line numbers to reprint
self.lastUpdateTime = time.monotonic() # the last time the output was updated
self.lastPrintedUpdateTime = 0 # the last time the output was printed
self.files = files # the files to be copied to the host
self.ipmi = ipmi # whether to use ipmi to connect to the host
self.shell = shell # whether to use shell to run the command
self.interface_ip_prefix = interface_ip_prefix # the prefix of the ip address of the interface to be used to connect to the host
self.scp = scp # whether to use scp to copy files to the host
self.gatherMode = gatherMode # whether the host is in gather mode
self.extraargs = extraargs # extra arguments to be passed to ssh
self.resolvedName = None # the resolved IP address of the host
# also store a globally unique integer i from 0
self.i = i if i != -1 else _get_i()
self.uuid = uuid
self.identity_file = identity_file
self.ip = ip if ip else getIP(name)
self.current_color_pair = [-1, -1, 1]
self.output_buffer = io.BytesIO()
self.stdout_buffer = io.BytesIO()
self.stderr_buffer = io.BytesIO()
self.thread = None
```
### Example:
```python
import multiSSH3
ethReachableHosts = multiSSH3.run_command_on_hosts(nodesToCheck,['echo hi'],return_unfinished = True) # return_unfinished will return the | text/markdown | Yufei Pan | pan@zopyr.us | null | null | GPLv3+ | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows"
] | [] | https://github.com/yufei-pan/multiSSH3 | null | >=3.6 | [] | [] | [] | [
"argparse",
"ipaddress"
] | [] | [] | [] | [] | uv/0.9.28 {"installer":{"name":"uv","version":"0.9.28","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Fedora Linux","version":"43","id":"","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-18T21:24:54.810189 | multissh3-6.12.tar.gz | 97,564 | 30/d2/dd43b83191966215e5342b678ce31ab1350eeac03ad34325d682309e68fe/multissh3-6.12.tar.gz | source | sdist | null | false | 915635a67171a97db840af0e94884593 | e668c9ec4ead52240e2e0ff55c756aaf916c2164e15685104108eda614ca757c | 30d2dd43b83191966215e5342b678ce31ab1350eeac03ad34325d682309e68fe | null | [] | 0 |
2.4 | uq-physicell | 1.2.5 | Project to perform uncertainty quantification of PhysiCell models | # UQ-PhysiCell
<p align="center">
<img src="https://raw.githubusercontent.com/heberlr/UQ_PhysiCell/development/uq_physicell/doc/icon.png" alt="pyABC logo" width="50%">
</p>
[](https://github.com/heberlr/UQ_PhysiCell/actions/workflows/test-examples.yml)
[](https://uq-physicell.readthedocs.io/en/latest/?badge=latest)
[](https://badge.fury.io/py/uq-physicell)
[](https://python.org)
[](https://github.com/heberlr/UQ_PhysiCell/tree/development/uq_physicell/LICENSE.md)
[](https://doi.org/10.5281/zenodo.17823176)
UQ-PhysiCell is a comprehensive framework for performing uncertainty quantification and parameter calibration of PhysiCell models. It provides sophisticated tools for model analysis, calibration, and model selection.
## Resources
- 📖 **Documentation**: [https://uq-physicell.readthedocs.io/en/latest/index.html](https://uq-physicell.readthedocs.io/en/latest/index.html)
- 💡 **Examples**: [https://uq-physicell.readthedocs.io/en/latest/examples.html](https://uq-physicell.readthedocs.io/en/latest/examples.html)
- 🐛 **Bug Reports**: [https://github.com/heberlr/UQ_PhysiCell/issues](https://github.com/heberlr/UQ_PhysiCell/issues)
- 💻 **Source Code**: [https://github.com/heberlr/UQ_PhysiCell](https://github.com/heberlr/UQ_PhysiCell)
<!-- - 📄 **Cite**: [...](...) -->
| text/markdown | null | "Heber L. Rocha" <heberonly@gmail.com> | null | null | BSD 3-Clause License
Copyright (c) 2025, Heber Rocha and the UQ_PhysiCell Project
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | Calibration, Model selection, PhysiCell, Sensitivity Analysis, Uncertainty Quantification | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pcdl",
"salib",
"pyabc; extra == \"abc\"",
"botorch; extra == \"bo\"",
"gpytorch; extra == \"bo\"",
"torch; extra == \"bo\"",
"linkify-it-py>=2.0; extra == \"docs\"",
"myst-parser>=0.18; extra == \"docs\"",
"sphinx-autobuild>=2021.3.14; extra == \"docs\"",
"sphinx-rtd-theme>=1.0; extra == \"docs\... | [] | [] | [] | [
"Homepage, https://github.com/heberlr/UQ_PhysiCell"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-18T21:24:44.334184 | uq_physicell-1.2.5.tar.gz | 1,145,270 | 42/71/38ed5bf3c4734d0cee66451922961a4c844487c283562cde25ebf435dc9c/uq_physicell-1.2.5.tar.gz | source | sdist | null | false | 62aff547c3bf1f02dde0785560faa3b2 | ec4a75c037192f8523fa81259aed3e5b250ae6e5006838ae5ce105626a3fc36c | 427138ed5bf3c4734d0cee66451922961a4c844487c283562cde25ebf435dc9c | null | [] | 251 |
2.4 | nomad-north-jupyter | 0.2.1 | An example for Jupyter NORTH tool (Jupyter docker image). | # `NORTH` Jupyter tool
`nomad-north-jupyter` is a NOMAD plugin and can be used along with other NOMAD plugins, in [nomad-distro-dev](https://github.com/FAIRmat-NFDI/nomad-distro-dev), [nomad-distro-template](https://github.com/FAIRmat-NFDI/nomad-distro-template), and in NOMAD production instance. Adding it as a plugin will make the `jupyter_north_tool` available in the `NORTH` tools registry of the NOMAD Oasis environment.
The plugin contains the `NORTH` tool configuration and a Docker image for a Jupyter-based tool in the NOMAD `NORTH` (NOMAD Oasis Remote Tools Hub) environment. The [nomad-north-jupyter image](https://github.com/FAIRmat-NFDI/nomad-north-jupyter/pkgs/container/nomad-north-jupyter) from this plugin provides the default base image for [Dockerfile](https://github.com/FAIRmat-NFDI/cookiecutter-nomad-plugin/blob/main/%7B%7Bcookiecutter.plugin_name%7D%7D/py_sources/src/north_tools/%7B%7Bcookiecutter.north_tool_name%7D%7D/Dockerfile) which is used as a basis to define custom Jupyter `NORTH` tools.
## Quick start
The `jupyter_north_tool`, a `NORTH` tool instance provided by this plugin, offers a containerized JupyterLab environment for interactive analysis.
**In the following sections, we will cover:**
1. [Building and testing the Docker image locally](#building-and-testing)
1. [Using `nomad-north-jupyter` as a base image for custom `NORTH` tools](#using-nomad-north-jupyter-as-a-base-image-for-custom-north-tools)
- [Package management](#package-management)
- [Port and user configuration](#port-and-user-configuration)
- [Fixing permissions](#fixing-permissions)
1. [Adding the `nomad-north-jupyter` plugin to NOMAD](#adding-the-nomad-north-jupyter-plugin-to-nomad)
- [Adding the `nomad-north-jupyter` plugin in your NOMAD Oasis](#adding-the-nomad-north-jupyter-plugin-in-your-nomad-oasis)
- [Adding `nomad-north-jupyter` plugin in your local NOMAD installation](#adding-nomad-north-jupyter-plugin-in-your-local-nomad-installation)
- [Reconfigure existing `NORTH` tool entry point](#reconfigure-existing-north-tool-entry-point)
1. [Adding `nomad-north-jupyter` image in a NOMAD Oasis (not recommended)](#adding-nomad-north-jupyter-image-in-a-nomad-oasis-not-recommended)
1. [Documentation](#documentation)
1. [Main contributors](#main-contributors)
## Building and testing
Build the Docker image locally:
```bash
docker build -f src/nomad_north_jupyter/north_tools/jupyter_north_tool/Dockerfile \
-t ghcr.io/fairmat-nfdi/nomad-north-jupyter:latest .
```
Test the image:
```bash
docker run -p 8888:8888 ghcr.io/fairmat-nfdi/nomad-north-jupyter:latest
```
Access JupyterLab at `http://localhost:8888`.
## Using `nomad-north-jupyter` as a base image for custom `NORTH` tools
This image is designed to be used as a base for custom NOMAD `NORTH` Jupyter tools. When extending this image in your plugin's `Dockerfile` created from [cookiecutter-nomad-plugin](https://github.com/FAIRmat-NFDI/cookiecutter-nomad-plugin/), keep the following in mind:
### Package management
Both `uv` and `pip` are available as package managers in the image. Both install and uninstall packages into the Conda environment, so you can use either to manage your Python dependencies.
**Example using uv:**
```dockerfile
RUN uv pip install numpy pandas scipy
```
**Example using pip:**
```dockerfile
RUN pip install --no-cache-dir matplotlib seaborn
```
### Port and user configuration
Like other Jupyter notebook images, port `8888` is exposed for JupyterLab access. The default user is `${NB_USER}` (usually `jovyan`), and you should switch to this user when installing packages or copying files to ensure proper permissions.
### Fixing permissions
After customizing the base image (e.g., installing additional packages or adding files), you may need to fix file permissions to avoid permission issues when running the container. Add the following lines at the end of your `Dockerfile` after all customizations:
```dockerfile
COPY --chown=${NB_USER}:${NB_GID} . ${HOME}/${PLUGIN_NAME}
RUN fix-permissions "/home/${NB_USER}" \
&& fix-permissions "${CONDA_DIR}"
```
## Adding the `nomad-north-jupyter` plugin to NOMAD
Currently, NOMAD has two distinct flavors (NOMAD Oasis and the NOMAD development environment) that are relevant depending on your role as a user.
### Adding the `nomad-north-jupyter` plugin in your NOMAD Oasis
The plugin `nomad-north-jupyter` is a default member of the plugin group in NOMAD Oasis. This allows NOMAD to automatically discover and integrate the `NORTHTool` via the `NorthToolEntryPoint` defined in the plugin. The tool will then be available in the `NORTH` tools registry of the NOMAD Oasis environment.
### Adding `nomad-north-jupyter` plugin in your local NOMAD installation
We now recommend using the dedicated [`nomad-distro-dev`](https://github.com/FAIRmat-NFDI/nomad-distro-dev) to facilitate NOMAD plugin development. To add `nomad-north-jupyter` to your local development environment, add it as a dependency in `pyproject.toml`. NOMAD will automatically discover `NORTHToolEntryPoint` instances (e.g., `north_tool_entry_point`) defined in `nomad-north-jupyter`. To replace or modify the `NORTH` tool configuration (for instance, changing the image or image version), you can adjust the entry point configuration in your `nomad.yaml` file.
### Reconfigure existing `NORTH` tool entry point
The image shipped with `nomad-north-jupyter` is a generic Jupyter container that may be too simplistic for your use case. In that case, you can change to a different image to use in the container. A [`NORTHTool`](https://nomad-lab.eu/prod/v1/docs/reference/config.html#northtool) entry point can be reconfigured via the `nomad.yaml` configuration file of your NOMAD Oasis instance (you can learn more about this reconfiguration and the [merge strategy](https://nomad-lab.eu/prod/v1/docs/reference/config.html#merging-rules) in the NOMAD docs). Hence, if you have the `nomad-north-jupyter` plugin installed, you can do so by adjusting the entry point configuration in your `nomad.yaml` file:
```yaml
plugins:
entry_points:
options:
nomad_north_jupyter.north_tools.jupyter_north_tool:north_tool_entry_point:
north_tool:
image: ghcr.io/fairmat-nfdi/nomad-north-jupyter:<another_tag>
display_name: "renamed jupyter tool"
```
## Adding `nomad-north-jupyter` image in a NOMAD Oasis (not recommended)
> [!WARNING]
> We strongly recommend integrating `nomad-north-jupyter` into NOMAD as a plugin. The following approach is only recommended if you have an existing NOMAD Oasis instance that you do not want to rebuild, but still want to add the Jupyter image to the running `NORTH` service.
If you cannot use the plugin approach, you can add the `nomad-north-jupyter` image to your `NORTH` service by editing the `nomad.yaml` file in a [nomad-distro-template](https://github.com/FAIRmat-NFDI/nomad-distro-template) instance. Define the corresponding `NORTH` tool in `nomad.yaml` as shown below (see the full `NORTH` tool configuration in the [NOMAD documentation](https://nomad-lab.eu/prod/v1/docs/reference/config.html)):
```yaml
# Not a recommended way
north:
jupyterhub_crypt_key: "978bfb2e13a8448a253c629d8dd84ffsd587f30e635b753153960930cad9d36d"
tools:
options:
jupyter:
image: ghcr.io/fairmat-nfdi/nomad-north-jupyter:latest
description: "### **Jupyter Notebook**: The Classic Notebook Interface"
file_extensions:
- ipynb
icon: jupyter_logo.svg
image_pull_policy: Always
maintainer:
- email: fairmat@physik.hu-berlin.de
name: NOMAD Authors
mount_path: /home/jovyan
path_prefix: lab/tree
privileged: false
short_description: ""
with_path: true
```
## Documentation
For comprehensive documentation on creating and managing `NORTH` tools, including detailed information on topics such as:
- Entry point configuration and `NORTHTool` API
- Docker image structure and best practices
- Dependency management
See the [NOMAD `NORTH` Tools documentation](https://fairmat-nfdi.github.io/nomad-docs/howto/plugins/types/north_tools.html).
> [!NOTE]
> This NOMAD plugin was generated with [`Cookiecutter`](https://www.cookiecutter.io/) along with `@nomad`'s [`cookiecutter-nomad-plugin`](https://github.com/FAIRmat-NFDI/cookiecutter-nomad-plugin) template.
## Main contributors
| Name | E-mail |
| ------------- | ----------------------------------------------------------------- |
| NOMAD Authors | [fairmat@physik.hu-berlin.de](mailto:fairmat@physik.hu-berlin.de) |
| text/markdown | null | NOMAD Authors <fairmat@physik.hu-berlin.de> | null | NOMAD Authors <fairmat@physik.hu-berlin.de> | The MIT License (MIT)
Copyright (c) 2026 NOMAD Authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
| null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"nomad-lab>=1.4.1",
"ruff; extra == \"dev\"",
"pytest; extra == \"dev\"",
"structlog; extra == \"dev\"",
"mkdocs; extra == \"dev\"",
"mkdocs-material==8.1.1; extra == \"dev\"",
"pymdown-extensions; extra == \"dev\"",
"mkdocs-click; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/fairmat-nfdi/nomad-north-jupyter"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:22:06.695128 | nomad_north_jupyter-0.2.1.tar.gz | 108,835 | 17/a1/4ba909dca4fab557641f322d8927e2220fb2806638180688ca7d2bb860f8/nomad_north_jupyter-0.2.1.tar.gz | source | sdist | null | false | 3d6a35697fae0b2319256fd7d8608fef | 0b451614725af6024e39a7e0a633104b8a5c1f88cf04a88b6b4cec4bee6bd428 | 17a14ba909dca4fab557641f322d8927e2220fb2806638180688ca7d2bb860f8 | null | [
"LICENSE"
] | 338 |
2.4 | vectorshift | 0.0.92 | VectorShift Python SDK | # VectorShift SDK
Python SDK, in development, for VectorShift pipeline creation and interaction
## Documentation
The VectorShift SDK provides a Python interface for creating, managing, and executing AI pipelines on the VectorShift platform.
For comprehensive API documentation, visit: [https://docs.vectorshift.ai/api-reference/overview](https://docs.vectorshift.ai/api-reference/overview)
## Installation
```
pip install vectorshift
```
## Usage
### Authentication
Set the API key in code:
```python
vectorshift.api_key = 'sk-****'
```
or set it as an environment variable:
```bash
export VECTORSHIFT_API_KEY='sk-***'
```
Create a new pipeline
```python
import vectorshift
from vectorshift.pipeline import Pipeline, InputNode, OutputNode, LlmNode
# Set API key
vectorshift.api_key = "your api key here"
input_node = InputNode(node_name="input_0")
llm_node = LlmNode(
node_name="llm_node",
system="You are a helpful assistant.",
prompt=input_node.text,
provider="openai",
model="gpt-4o-mini",
temperature=0.7
)
output_node = OutputNode(
node_name="output_0",
value=llm_node.response
)
pipeline = Pipeline.new(
name="basic-llm-pipeline",
nodes=[input_node, llm_node, output_node]
)
```
Basic RAG Pipeline
```python
import vectorshift
from vectorshift.pipeline import Pipeline, InputNode, KnowledgeBaseNode, OutputNode, LlmNode
from vectorshift import KnowledgeBase
# Set API key
vectorshift.api_key = "your api key here"
# Create input node for user query
input_node = InputNode(
node_name="Query",
)
# Fetch knowledge base
knowledge_base = KnowledgeBase.fetch(name="your knowledge base name here")
# Create knowledge base node to retrieve relevant documents
knowledge_base_node = KnowledgeBaseNode(
query=input_node.text,
knowledge_base=knowledge_base,
format_context_for_llm=True,
)
# Create LLM node that uses both the query and retrieved documents
llm_node = LlmNode(
system="You are a helpful assistant that answers questions based on the provided context documents.",
prompt=f"Query: {input_node.text}\n\nContext: {knowledge_base_node.formatted_text}",
provider="openai",
model="gpt-4o-mini",
temperature=0.7
)
# Create output node for the LLM response
output_node = OutputNode(
node_name="Response",
value=llm_node.response
)
# Create the RAG pipeline
rag_pipeline = Pipeline.new(
name="rag-pipeline",
nodes=[input_node, knowledge_base_node, llm_node, output_node],
)
```
### Pipeline List Mode
Specify `execution_mode="batch"` to run a node in list mode. The node will then take lists of inputs and execute its task over them in parallel. In this example, the input is split by newlines and the sub-pipeline executes over each part in parallel.
```python
import vectorshift
from vectorshift import Pipeline
from vectorshift.pipeline import InputNode, OutputNode, PipelineNode, SplitTextNode
vectorshift.api_key = 'your api key here'
sub_pipeline = Pipeline.fetch(name="your sub pipeline")
print(sub_pipeline)
input_node = InputNode(node_name="input_0")
split_text_node = SplitTextNode(
node_name="split_text_node",
text=input_node.text,
delimiter="newline"
)
pipeline_node = PipelineNode(pipeline_id=sub_pipeline.id,
node_name="sub_pipeline",
input_0 = split_text_node.processed_text,
execution_mode="batch"
)
output_node = OutputNode(node_name="output_0", value=pipeline_node.output_0)
main_pipeline = Pipeline.new(
name="batched-pipeline",
nodes=[input_node, split_text_node, pipeline_node, output_node]
)
```
### Streaming
```python
import vectorshift
from vectorshift.pipeline import Pipeline, InputNode, OutputNode, LlmNode
# Set API key
vectorshift.api_key = 'your api key here'
# Create input node
input_node = InputNode(node_name="input_0")
# Create LLM node that will stream responses
llm_node = LlmNode(
node_name="llm_node",
system="You are a helpful assistant.",
prompt=input_node.text,
provider="openai",
model="gpt-4o-mini",
temperature=0.7,
stream=True # Enable streaming
)
# Create output node connected to LLM response
output_node = OutputNode(
node_name="output_0",
value=llm_node.response,
output_type="stream<string>"
)
# Create and save the pipeline
pipeline = Pipeline.new(
name="streaming-llm-pipeline-1",
nodes=[input_node, llm_node, output_node]
)
# Run pipeline with streaming enabled
input_data = {"input_0": "Tell me a story about a brave adventurer"}
# Stream the response chunks
for chunk in pipeline.run(input_data, stream=True):
try:
# Parse the chunk as a JSON line
chunk_str = chunk.decode('utf-8') if isinstance(chunk, bytes) else str(chunk)
if chunk_str.startswith('data: '):
json_str = chunk_str[6:] # Remove 'data: ' prefix
import json
data = json.loads(json_str)
if data.get('output_name') == 'output_0':
print(data.get('output_value', ''), end="", flush=True)
except (json.JSONDecodeError, UnicodeDecodeError, AttributeError):
# If parsing fails, just continue to next chunk
continue
```
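The `data: `-prefixed chunk parsing above can be factored into a small helper. This is a plain-Python sketch, independent of the SDK; `parse_sse_chunk` and `stream_text` are illustrative names, not part of the vectorshift API:

```python
import json

def parse_sse_chunk(chunk):
    """Parse one server-sent-events style chunk ('data: {...}') into a dict.

    Returns None for chunks that are not well-formed JSON data lines.
    """
    try:
        text = chunk.decode("utf-8") if isinstance(chunk, bytes) else str(chunk)
        if not text.startswith("data: "):
            return None
        return json.loads(text[6:])  # strip the 'data: ' prefix
    except (json.JSONDecodeError, UnicodeDecodeError):
        return None

def stream_text(chunks, output_name="output_0"):
    """Concatenate the streamed text for a given output name."""
    parts = []
    for chunk in chunks:
        data = parse_sse_chunk(chunk)
        if data and data.get("output_name") == output_name:
            parts.append(data.get("output_value", ""))
    return "".join(parts)

# Demo with hand-written chunks in the same shape as the stream above
demo_chunks = [
    b'data: {"output_name": "output_0", "output_value": "Hello "}',
    b'keep-alive',  # non-data chunks are skipped
    b'data: {"output_name": "output_0", "output_value": "world"}',
]
text = stream_text(demo_chunks)
```

Keeping the parsing in one place makes the consumer loop a plain `for chunk in pipeline.run(..., stream=True)` with no inline JSON handling.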
### Async Usage
Call the async SDK methods by prefixing the method name with `a`. Here we fetch a pipeline by name, run it with a particular input, and await the pipeline results.
```python
import asyncio
import vectorshift
from vectorshift.pipeline import Pipeline, InputNode, OutputNode, LlmNode
vectorshift.api_key = "your api key here"
pipeline = Pipeline.fetch(name="your pipeline name here")
input_data = {"input_0": "Hello, how are you?"}
result = asyncio.run(pipeline.arun(input_data))
print(result)
```
### Parallel Knowledge Base Upload
We can use the async methods to parallelize bulk upload of the files in a directory to a knowledge base. Here is a script that takes a vectorstore name and a local directory to upload.
```python
import asyncio
import os
import argparse
import vectorshift
from vectorshift.knowledge_base import KnowledgeBase, IndexingConfig
from dotenv import load_dotenv
from tqdm import tqdm
load_dotenv()
def upload_documents(vectorstore_name, upload_dir, max_concurrent=16):
vectorshift.api_key = 'your api key here'
vectorstore = KnowledgeBase.fetch(name=vectorstore_name)
num_files = sum([len(files) for r, d, files in os.walk(upload_dir)])
print(f'Number of files in the upload directory: {num_files}')
async def upload_document(semaphore, script_path, document_title, dirpath):
async with semaphore:
try:
# Create indexing configuration
indexing_config = IndexingConfig(
chunk_size=512,
chunk_overlap=0,
file_processing_implementation='Default',
index_tables=False,
analyze_documents=False
)
response = await vectorstore.aindex_document(
document_type='file',
document=script_path,
indexing_config=indexing_config
)
return f"Response for {document_title} in directory {dirpath}: {response}"
except Exception as e:
return f"Response for {document_title} in directory {dirpath}: Failed due to {e}"
async def upload_all_documents():
# Create semaphore to limit concurrent uploads
semaphore = asyncio.Semaphore(max_concurrent)
all_files = []
for dirpath, dirnames, filenames in os.walk(upload_dir):
for script_file in filenames:
script_path = os.path.join(dirpath, script_file)
document_title = os.path.basename(script_path)
all_files.append((script_path, document_title, dirpath))
# Create tasks for all files
tasks = []
for script_path, document_title, dirpath in all_files:
task = upload_document(semaphore, script_path, document_title, dirpath)
tasks.append(task)
# Process all tasks with progress bar
with tqdm(total=len(all_files), desc="Uploading documents") as pbar:
for coro in asyncio.as_completed(tasks):
result = await coro
if "Failed due to" in result:
print(f"Error: {result}")
else:
print(result)
pbar.update(1)
asyncio.run(upload_all_documents())
if __name__ == "__main__":
# Setup command line argument parsing
parser = argparse.ArgumentParser(description='Upload documents to a VectorStore.')
parser.add_argument('--vectorstore_name', type=str, required=True, help='Name of the VectorStore to upload documents to.')
parser.add_argument('--upload_dir', type=str, required=True, help='Directory path of documents to upload.')
parser.add_argument('--max_concurrent', type=int, default=16, help='Maximum number of concurrent uploads.')
args = parser.parse_args()
upload_documents(args.vectorstore_name, args.upload_dir, args.max_concurrent)
```
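The concurrency pattern in the script above — a semaphore capping in-flight tasks, with results consumed in completion order — can be shown in isolation with plain `asyncio` (no SDK involved; `bounded_gather` and `fake_upload` are illustrative names):

```python
import asyncio

async def bounded_gather(coros, max_concurrent=16):
    """Run coroutines with at most `max_concurrent` in flight,
    collecting results in completion order."""
    semaphore = asyncio.Semaphore(max_concurrent)

    async def guarded(coro):
        async with semaphore:
            return await coro

    results = []
    for finished in asyncio.as_completed([guarded(c) for c in coros]):
        results.append(await finished)
    return results

# Simulate 10 "uploads" with a concurrency cap of 3
async def fake_upload(i):
    await asyncio.sleep(0.01)
    return f"done {i}"

results = asyncio.run(
    bounded_gather([fake_upload(i) for i in range(10)], max_concurrent=3)
)
```

The semaphore bounds memory and API pressure, while `as_completed` lets a progress bar (like the `tqdm` loop above) advance as soon as any upload finishes rather than waiting for the slowest one.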
## Version Control
The SDK can be used to version-control and manage updates to pipelines defined in code.
Let's say we created a pipeline:
```python
import vectorshift
from vectorshift.pipeline import Pipeline, InputNode, OutputNode, LlmNode
# Set API key
vectorshift.api_key = "your api key here"
input_node = InputNode(node_name="input_0")
llm_node = LlmNode(
node_name="llm_node",
system="You are a helpful assistant.",
prompt=input_node.text,
provider="openai",
model="gpt-4o-mini",
temperature=0.7
)
output_node = OutputNode(
node_name="output_0",
value=llm_node.response
)
pipeline = Pipeline.new(
name="basic-llm-pipeline",
nodes=[input_node, llm_node, output_node]
)
```
If we want to switch the LLM to gpt-4o, we can change the model in the node and save the pipeline with the new node definitions. The deployed pipeline will be updated automatically.
```python
import vectorshift
from vectorshift.pipeline import Pipeline, InputNode, OutputNode, LlmNode
input_node = InputNode(node_name="input_0")
llm_node = LlmNode(
node_name="llm_node",
system="You are a helpful assistant.",
prompt=input_node.text,
provider="openai",
model="gpt-4o",
temperature=0.7
)
output_node = OutputNode(
node_name="output_0",
value=llm_node.response
)
pipeline = Pipeline.fetch(name="basic-llm-pipeline")
output = pipeline.save(
nodes=[input_node, llm_node, output_node]
)
print(output)
```
## Chatbots
Run a chatbot. This code lets you chat with your chatbot in your terminal. Since we pass `conversation_id=None` on the initial run, the chatbot starts a new conversation. Note how, by passing back the conversation ID returned by `chatbot.run`, we continue the conversation and the chatbot sees previous responses.
```python
from vectorshift import Chatbot
chatbot = Chatbot.fetch(name = 'your chatbot name')
conversation_id = None
while True:
user_input = input("User: ")
if user_input.lower() == "quit":
break
response = chatbot.run(input=user_input, input_type="text", conversation_id=conversation_id)
conversation_id = response['conversation_id']
print(response['output_message'])
```
Streaming Chatbot
```python
from vectorshift import Chatbot
chatbot = Chatbot.fetch(name = 'your chatbot name')
conversation_id = None
while True:
user_input = input("User: ")
if user_input.lower() == "quit":
break
response_stream = chatbot.run(input=user_input, input_type="text", conversation_id=conversation_id, stream=True)
conversation_id = None
for chunk in response_stream:
try:
chunk_str = chunk.decode('utf-8') if isinstance(chunk, bytes) else str(chunk)
if chunk_str.startswith('data: '):
json_str = chunk_str[6:] # Remove 'data: ' prefix
import json
data = json.loads(json_str)
if data.get('conversation_id'):
conversation_id = data.get('conversation_id')
elif data.get('output_value') and data.get('type') == 'stream':
print(data.get('output_value', ''), end="", flush=True)
except (json.JSONDecodeError, UnicodeDecodeError, AttributeError):
continue
print() # Add newline after streaming is complete
```
Chatbot File Upload
```python
from vectorshift import Chatbot
import os
import json
chatbot = Chatbot.fetch(name='your chatbot name')
conversation_id = None
while True:
user_input = input("User: ")
if user_input.lower() == "quit":
break
# Handle file upload
if user_input.startswith("add_file "):
file_path = user_input[9:] # Remove "add_file " prefix
if os.path.isfile(file_path):
try:
upload_response = chatbot.upload_files(file_paths=[file_path], conversation_id=conversation_id)
conversation_id = upload_response.get('conversation_id')
print(f"File uploaded successfully: {upload_response.get('uploaded_files', [])}")
except Exception as e:
print(f"Error uploading file: {e}")
else:
print(f"File not found: {file_path}")
continue
# Handle text input with streaming
response_stream = chatbot.run(input=user_input, input_type="text", conversation_id=conversation_id, stream=True)
for chunk in response_stream:
try:
chunk_str = chunk.decode('utf-8') if isinstance(chunk, bytes) else str(chunk)
if not chunk_str.startswith('data: '):
continue
data = json.loads(chunk_str[6:]) # Remove 'data: ' prefix
# Update conversation_id if present
if data.get('conversation_id'):
conversation_id = data.get('conversation_id')
# Print streaming output
if data.get('output_value') and data.get('type') == 'stream':
print(data.get('output_value', ''), end="", flush=True)
except (json.JSONDecodeError, UnicodeDecodeError, AttributeError):
continue
print() # Add newline after streaming
```
## Integrations
Integration nodes accept an integration object that includes the ID of your integration:
```python
from vectorshift.pipeline import IntegrationGmailNode, InputNode, Pipeline
from vectorshift.integrations import IntegrationObject
input_node = InputNode(node_name="input_0", description = 'Gmail Message to Send')
integration_id = 'your integration id'
integration = IntegrationObject(object_id = integration_id)
gmail_node = IntegrationGmailNode(
integration = integration,
node_name="gmail_node",
action="send_email",
recipients="recipient@gmail.com",
subject="Test Email from Pipeline",
body=input_node.text,
format="text"
)
gmail_pipeline = Pipeline.new(
name="gmail-pipeline",
nodes=[input_node, gmail_node]
)
```
To use the Slack node, specify the channel and team IDs, available from the Slack app:
```python
from vectorshift.pipeline import IntegrationSlackNode, InputNode, Pipeline
from vectorshift.integrations import IntegrationObject
input_node = InputNode(node_name="input_0", description = 'Slack Message to Send')
integration_id = 'your_integration_id'
slack_node = IntegrationSlackNode(
node_name="slack_node",
integration = IntegrationObject(object_id = integration_id),
action = 'send_message',
channel = 'your_channel_id',
message = input_node.text,
team = 'your_team_id'
)
slack_pipeline = Pipeline.new(
name = 'slack-pipeline',
nodes = [input_node, slack_node]
)
```
| text/markdown | Alex Leonardi, Pratham Goyal, Eric Shen | support@vectorshift.ai | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | null | [] | [] | [] | [
"networkx==3.1",
"tomli>=2.0.1",
"pytest>=7.0.0",
"bson>=0.5.10",
"black>=23.0.0",
"pydantic>=2.0.0",
"aiohttp>=3.8.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:19:39.211310 | vectorshift-0.0.92.tar.gz | 661,003 | 8e/6f/40ceb2a3a29a37390587b9eca6f314652d4ca7b62471d21186404352a087/vectorshift-0.0.92.tar.gz | source | sdist | null | false | 22a13d0babf17c4265061df4db64cc7b | 1f8bd9a951d6fc506ec1bd8f409112fc3dba95eea9af8546d57383104c79e6e3 | 8e6f40ceb2a3a29a37390587b9eca6f314652d4ca7b62471d21186404352a087 | null | [] | 257 |
2.4 | plot-misc | 2.2.0 | Various plotting templates built on top of matplotlib | <img src="https://schmidtaf.gitlab.io/plot-misc/_images/icon.png" alt="plot-misc icon" width="250"/>
# A collection of plotting functions
__version__: `2.2.0`
This repository collects plotting modules written on top of `matplotlib`.
The functions are intended to set up light-touch, basic illustrations that
can be customised using the standard matplotlib interface via axes and figures.
Functionality is included to create illustrations commonly used in medical research,
covering forest plots, volcano plots, incidence matrices/bubble charts,
illustrations to evaluate prediction models (e.g. feature importance, net benefit, calibration plots),
and more.
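Because every function hands back standard matplotlib objects, the returned figure and axes can be tweaked after the fact. A minimal sketch of that workflow using plain matplotlib (the exact plot-misc function signatures are documented per module):

```python
import io

import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt

# A plot-misc function would typically hand back a figure and axes;
# here we create them directly to show the customisation step.
fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter([0.8, 1.2, 1.5], [1, 2, 3])

# Customise afterwards through the standard matplotlib interface.
ax.set_xlabel("Odds ratio")
ax.set_title("Customised after plotting")
ax.axvline(1.0, linestyle="--", color="grey")
fig.tight_layout()

buf = io.BytesIO()
fig.savefig(buf, format="png", dpi=150)
```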
The documentation for plot-misc can be found
[here](https://SchmidtAF.gitlab.io/plot-misc/).
## Installation
The package is available on PyPI and conda, with the latest source code
available on GitLab.
### Installation using PyPI
To install the package from PyPI, run:
```bash
pip install plot-misc
```
This installs the latest stable release along with its dependencies.
### Installation using conda
A Conda package is maintained in my personal Conda channel.
To install from this channel, run:
```bash
conda install afschmidt::plot-misc
```
### Installation using GitLab
If you require the latest updates, potentially not yet formally released,
you can install the package directly from GitLab.
First, clone the repository and move into its root directory:
```bash
git clone git@gitlab.com:SchmidtAF/plot-misc.git
cd plot-misc
```
Install the dependencies:
```bash
# From the root of the repository
conda env create --file ./resources/conda/envs/conda_create.yaml
```
To add to an existing environment use:
```bash
# From the root of the repository
conda env update --file ./resources/conda/envs/conda_update.yaml
```
Next, the package can be installed:
```bash
make install
```
#### Development
For development work, install the package in editable mode with Git commit
hooks configured:
```bash
make install-dev
```
This command installs the package in editable mode and configures Git commit
hooks, allowing you to run `git pull` to update the repository or switch
branches without reinstalling.
Alternatively, you can install manually:
```bash
python -m pip install -e .
python .setup_git_hooks.py
```
#### Git Hooks Configuration
When setting up a development environment, the `setup-hooks` command
configures Git hooks that enforce conventional commit message formatting and
spell checking using `codespell`.
To view the commit message format requirements, run:
```bash
./.githooks/commit-msg --help
```
For frequent use, add this function to your shell configuration (`~/.bashrc`
or `~/.zshrc`):
```bash
commit-format-help() {
local git_root
git_root=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -z "$git_root" ]; then
echo "Error: Not inside a git repository"
return 1
fi
local hook_path="$git_root/.githooks/commit-msg"
if [ ! -f "$hook_path" ]; then
echo "Error: commit-msg hook not found"
return 1
fi
"$hook_path" --help
}
```
#### Validating the package
After installing the package from GitLab, you may wish to run the test
suite to confirm everything is working as expected:
```bash
# From the root of the repository
pytest tests
```
## Usage
Please have a look at the examples in
[resources](https://gitlab.com/SchmidtAF/plot-misc/-/tree/master/resources/examples)
for some possible recipes.
| text/markdown | null | A Floriaan Schmidt <floriaanschmidt@gmail.com> | null | null | null | null | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"pandas>=1.3",
"numpy>=1.21",
"matplotlib>=3.5",
"scipy>=1.5",
"statsmodels>=0.1",
"scikit-learn>=1.4",
"adjustText>=1.3",
"python-build; extra == \"dev\"",
"twine; extra == \"dev\"",
"setuptools; extra == \"dev\"",
"wheel; extra == \"dev\"",
"pytest>=6; extra == \"dev\"",
"pytest-mock>=3; e... | [] | [] | [] | [
"Homepage, https://gitlab.com/SchmidtAF/plot-misc",
"Documentation, https://schmidtaf.gitlab.io/plot-misc/"
] | twine/6.2.0 CPython/3.12.0 | 2026-02-18T21:18:44.722242 | plot_misc-2.2.0.tar.gz | 134,954 | 17/7e/3dcab93b04fea3ef05b94018598d7565d144585f19d5699e63231a875549/plot_misc-2.2.0.tar.gz | source | sdist | null | false | b5b738843487f130b6077c184d994df2 | 9b32a57e16cdc5cbe438c3f7574d1fad1e7aab605c1d4418a02d1edf583834c6 | 177e3dcab93b04fea3ef05b94018598d7565d144585f19d5699e63231a875549 | GPL-3.0-or-later | [
"LICENSE"
] | 243 |
2.4 | QuizGenerator | 0.23.1 | Generate randomized quiz questions for Canvas LMS and PDF exams | # QuizGenerator
Generate randomized quiz questions for Canvas LMS and PDF exams with support for multiple question types, automatic variation generation, and QR code-based answer keys.
## Features
- **Multiple Output Formats**: Generate PDFs (LaTeX or Typst) and Canvas LMS quizzes
- **Automatic Variations**: Create unique versions for each student
- **Extensible**: Plugin system for custom question types
- **Built-in Question Library**: Memory management, process scheduling, calculus, linear algebra, and more
- **QR Code Answer Keys**: Regenerate exact exam versions from QR codes
- **Canvas Integration**: Direct upload to Canvas with variation support
## Installation
```bash
pip install QuizGenerator
```
### Reproducible installs (recommended)
If you want a fully pinned environment for a semester, use the lockfile:
```bash
uv sync --locked
```
We keep dependency ranges in `pyproject.toml` for flexibility and rely on `uv.lock`
to pin exact versions when you need reproducible builds.
### System Requirements
- Python 3.12+
- [Typst](https://typst.app/) (default PDF renderer)
- Optional: LaTeX distribution with `latexmk` (if using `--latex`)
- Recommended: [Pandoc](https://pandoc.org/) (for markdown conversion)
- Optional (LaTeX + QR codes): [Inkscape](https://inkscape.org/) for SVG conversion
### Optional Dependencies
```bash
# For QR code grading support
pip install "QuizGenerator[grading]"
# For CST463 machine learning questions
pip install "QuizGenerator[cst463]"
```
## Quick Start
Need a 2‑minute setup? See `documentation/getting_started.md`.
### 1. Create a quiz configuration (YAML)
```yaml
# my_quiz.yaml
name: "Midterm Exam"
questions:
10: # 10-point questions
"Process Scheduling":
class: FIFOScheduling
5: # 5-point questions
"Memory Paging":
class: PagingQuestion
"Vector Math":
class: VectorAddition
```
You can also provide an ordered list of questions:
```yaml
name: "Midterm Exam"
question_order: yaml
questions:
- name: "Process Scheduling"
points: 10
class: FIFOScheduling
- name: "Memory Paging"
points: 5
class: PagingQuestion
```
### 2. Generate PDFs
```bash
quizgen generate --yaml my_quiz.yaml --num-pdfs 3
```
PDFs will be created in the `out/` directory.
### 3. Upload to Canvas
```bash
# Set up Canvas credentials in ~/.env first:
# CANVAS_API_URL=https://canvas.instructure.com
# CANVAS_API_KEY=your_api_key_here
quizgen \
generate \
--yaml my_quiz.yaml \
--num-canvas 5 \
--course-id 12345
```
### 4. Generate Tag-Filtered Practice Quizzes
Create one practice quiz assignment per matching registered question type:
```bash
quizgen \
practice \
cst334 memory \
--practice-match any \
--practice-tag-source merged \
--practice-question-groups 5 \
--practice-variations 5 \
--course-id 12345
```
These are uploaded as regular graded quiz assignments into the `practice` assignment group, which is configured with `0.0` group weight.
Tag filters accept either namespaced tags (for example `course:cst334`, `topic:memory`) or bare forms (`cst334`, `memory`).
Use `--practice-tag-source explicit` if you want strict explicit-only tag matching.
### 5. Audit Tags
```bash
# Tag summary
quizgen tags list
# Show only question types missing explicit tags
quizgen tags list --only-missing-explicit --include-questions
# Explain tags for matching question types
quizgen tags explain sched
```
## CLI Completion
```bash
quizgen --help
quizgen --install-completion
quizgen test 3 --test-question MLFQQuestion
```
The CLI supports shell completion (`bash`, `zsh`, `fish`, PowerShell) through Typer.
## Creating Custom Questions
QuizGenerator supports two approaches for adding custom question types:
### Option 1: Entry Points (Recommended for Distribution)
Create a pip-installable package:
```toml
# pyproject.toml
[project.entry-points."quizgenerator.questions"]
my_question = "my_package.questions:MyCustomQuestion"
```
After `pip install`, your questions are automatically available!
### Option 2: Direct Import (Quick & Easy)
Add to your quiz YAML:
```yaml
custom_modules:
- my_questions # Import my_questions.py
questions:
10:
"My Question":
class: MyCustomQuestion
```
See [documentation/custom_questions.md](documentation/custom_questions.md) for the complete guide.
### Question Authoring Pattern (New)
All questions follow the same three‑method flow:
```python
class MyQuestion(Question):
@classmethod
def _build_context(cls, *, rng_seed=None, **kwargs):
context = super()._build_context(rng_seed=rng_seed, **kwargs)
rng = context.rng
context["value"] = rng.randint(1, 10)
return context
@classmethod
def _build_body(cls, context):
body = ca.Section()
body.add_element(ca.Paragraph([f"Value: {context['value']}"]))
body.add_element(ca.AnswerTypes.Int(context["value"], label="Value"))
return body
@classmethod
def _build_explanation(cls, context):
explanation = ca.Section()
explanation.add_element(ca.Paragraph([f"Answer: {context['value']}"]))
return explanation
```
Notes:
- Always use `context.rng` (or `context["rng"]`) for deterministic randomness.
- Avoid `refresh()`; it is no longer part of the API.
## Built-in Question Types
### Operating Systems (CST334)
- `FIFOScheduling`, `SJFScheduling`, `RoundRobinScheduling`
- `PagingQuestion`, `TLBQuestion`
- `SemaphoreQuestion`, `MutexQuestion`
### Machine Learning / Math (CST463)
- `VectorAddition`, `VectorDotProduct`, `VectorMagnitude`
- `MatrixAddition`, `MatrixMultiplication`, `MatrixTranspose`
- `DerivativeBasic`, `DerivativeChain`
- `GradientDescentStep`
### General
- `FromText` - Custom text questions
- `FromGenerator` - Programmatically generated questions (requires `--allow-generator` or `QUIZGEN_ALLOW_GENERATOR=1`)
## Documentation
- [Getting Started Guide](documentation/getting_started.md)
- [First 5 Minutes](documentation/first_5_minutes.md)
- [Custom Questions Guide](documentation/custom_questions.md)
- [YAML Configuration Reference](documentation/yaml_config_guide.md)
## Canvas Setup
1. Create a `~/.env` file with your Canvas credentials:
```bash
# For testing/development
CANVAS_API_URL=https://canvas.test.instructure.com
CANVAS_API_KEY=your_test_api_key
# For production
CANVAS_API_URL_prod=https://canvas.instructure.com
CANVAS_API_KEY_prod=your_prod_api_key
```
2. Use `--prod` flag for production Canvas instance:
```bash
quizgen generate --prod --num-canvas 5 --course-id 12345 --yaml my_quiz.yaml
```
## Advanced Features
### Typst Support
Typst is the default for faster compilation. Use `--latex` to force LaTeX:
```bash
quizgen generate --latex --num-pdfs 3 --yaml my_quiz.yaml
```
Experimental: `--typst-measurement` uses Typst to measure question height for tighter layout.
It can change pagination and ordering, so use with care on finalized exams.
### Layout Optimization
By default, questions keep their YAML order (or point-value ordering for mapping format).
Use `--optimize-space` to reorder questions to reduce PDF page count. This also affects Canvas order.
### Deterministic Generation
Use seeds for reproducible quizzes:
```bash
quizgen generate --seed 42 --num-pdfs 3 --yaml my_quiz.yaml
```
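The reproducibility comes from seeding the random number generator: the same seed replays the same sequence of draws, so every question parameter comes out identical. A sketch of the principle in plain Python (not QuizGenerator's internals):

```python
import random

def make_question(seed):
    # Derive all "random" question parameters from one seeded RNG.
    rng = random.Random(seed)
    return {"pages": rng.randint(1, 64), "frame": rng.randint(0, 255)}

# The same seed always reproduces the same question variant,
# while different seeds give different variants for different students.
print(make_question(42) == make_question(42))  # -> True
```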
### Generation Controls
Limit backoff attempts for questions that retry until they are "interesting":
```bash
quizgen generate --yaml my_quiz.yaml --num-pdfs 1 --max-backoff-attempts 50
```
Set a default numeric tolerance for float answers (overridable per question):
```bash
quizgen generate --yaml my_quiz.yaml --num-pdfs 1 --float-tolerance 0.01
```
Per-answer override in custom questions:
```python
ca.AnswerTypes.Float(value, label="Result", tolerance=0.005)
```
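Grading a float answer against a tolerance presumably reduces to an absolute-difference check, sketched here (not QuizGenerator's actual grading code):

```python
def within_tolerance(submitted, expected, tolerance=0.01):
    """True if the submitted float falls inside expected +/- tolerance."""
    return abs(submitted - expected) <= tolerance

print(within_tolerance(3.141, 3.14159, tolerance=0.01))   # -> True
print(within_tolerance(3.10, 3.14159, tolerance=0.005))   # -> False
```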
### QR Code Regeneration
Each generated exam includes a QR code that stores:
- Question types and parameters
- Random seed
- Version information
Use the grading tools to scan QR codes and regenerate exact exam versions.
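Conceptually, the payload only needs enough information to re-run generation: decode the seed and parameters, then feed them back through the same generator. A stdlib sketch of that round trip (the real payload format is QuizGenerator's own):

```python
import json
import random

# Encode the kind of information the QR code stores: question config,
# seed, and version.
payload = json.dumps({
    "questions": [{"class": "PagingQuestion", "points": 5}],
    "seed": 42,
    "version": "0.23.1",
})

# A grading tool later decodes the payload...
decoded = json.loads(payload)
rng = random.Random(decoded["seed"])
regenerated = [rng.randint(0, 1023) for _ in range(3)]

# ...and gets back exactly the values the original exam drew.
ref_rng = random.Random(42)
reference = [ref_rng.randint(0, 1023) for _ in range(3)]
assert regenerated == reference
```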
## Security Considerations
### FromGenerator Warning
The `FromGenerator` question type executes **arbitrary Python code** from your YAML configuration files. This is a powerful feature for creating dynamic questions, but it carries security risks:
- **Only use `FromGenerator` with YAML files you completely trust**
- Never run `--allow-generator` on YAML files from untrusted sources
- Be cautious when sharing question banks that contain generator code
`FromGenerator` is disabled by default. To enable it, use one of:
```bash
quizgen generate --allow-generator --yaml my_quiz.yaml
# or
QUIZGEN_ALLOW_GENERATOR=1 quizgen generate --yaml my_quiz.yaml
```
If you need dynamic question generation with untrusted inputs, consider writing a proper `Question` subclass instead, which provides better control and validation.
### LaTeX `-shell-escape` Warning
When using `--latex`, QuizGenerator invokes `latexmk -shell-escape` to compile PDFs. This allows LaTeX to execute external commands (for example, via `\write18`). If your question content includes raw LaTeX (e.g., from custom question types or untrusted YAML sources), this can be a command‑execution vector.
Guidance:
- Only use `--latex` with trusted question sources.
- Prefer Typst (default) when possible.
- If you need LaTeX but want to reduce risk, avoid raw LaTeX content and keep custom questions constrained to ContentAST elements.
## Local Release Helper (Recommended)
Install repository-managed git hooks and alias:
```bash
bash scripts/install_git_hooks.sh
```
This installs a pre-commit hook that checks version-bump vendoring and a local alias:
```bash
git bump patch
```
`git bump` bumps `pyproject.toml` via `uv version`, vendors `lms_interface`, stages `pyproject.toml`/`uv.lock`/`lms_interface`, and commits.
Use `git bump patch --verbose` for full vendoring logs (default output is a short summary).
## Project Structure
```
QuizGenerator/
├── QuizGenerator/ # Main package
│ ├── question.py # Question base classes and registry
│ ├── quiz.py # Quiz generation logic
│ ├── contentast.py # Content AST for cross-format rendering
│ ├── premade_questions/ # Built-in question library
│ └── ... # Question types and rendering utilities
├── example_files/ # Example quiz configurations
├── documentation/ # User guides
├── lms_interface/ # Canvas LMS integration
└── quizgen # CLI entry point
```
## Contributing
Contributions welcome! Areas of interest:
- New question types
- Additional LMS integrations
- Documentation improvements
- Bug fixes
## License
GNU General Public License v3.0 or later (GPLv3+) - see LICENSE file for details
## Citation
If you use QuizGenerator in academic work, please cite:
```
@software{quizgenerator,
author = {Ogden, Sam},
title = {QuizGenerator: Automated Quiz Generation for Education},
year = {2024},
url = {https://github.com/OtterDen-Lab/QuizGenerator}
}
```
## Support
- Issues: https://github.com/OtterDen-Lab/QuizGenerator/issues
- Documentation: https://github.com/OtterDen-Lab/QuizGenerator/tree/main/documentation
---
**Note**: This tool is designed for educational use. Ensure compliance with your institution's academic integrity policies when using automated quiz generation.
| text/markdown | null | Sam Ogden <samuel.s.ogden@gmail.com> | null | null | GPL-3.0-or-later | assessment, canvas, education, exam, lms, quiz, teaching, testing | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Education :: Testing"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"canvasapi~=3.2.0",
"cryptography~=46.0.0",
"markdown~=3.9",
"matplotlib<4,>=3.8",
"numpy<3,>=1.26",
"pypandoc~=1.6.3",
"python-dotenv~=1.0.1",
"pyyaml~=6.0.1",
"requests~=2.32.2",
"segno~=1.6.0",
"sympy~=1.14.0",
"tqdm>=4.67.3",
"typer<1,>=0.12",
"keras~=3.12.0; extra == \"cst463\"",
"t... | [] | [] | [] | [
"Homepage, https://github.com/OtterDen-Lab/QuizGenerator",
"Documentation, https://github.com/OtterDen-Lab/QuizGenerator/tree/main/documentation",
"Repository, https://github.com/OtterDen-Lab/QuizGenerator",
"Bug Tracker, https://github.com/OtterDen-Lab/QuizGenerator/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T21:18:27.914072 | quizgenerator-0.23.1.tar.gz | 197,058 | 7c/8f/9bdddffe2ab2431718b34f8c80d70863b2c051932422ae94334f06d32006/quizgenerator-0.23.1.tar.gz | source | sdist | null | false | 25561aa885930c19b350d65a68f28f83 | f4cbc465a446baddbe032160c0404f693b44e5785874916c9cedc87190ae1dd1 | 7c8f9bdddffe2ab2431718b34f8c80d70863b2c051932422ae94334f06d32006 | null | [
"LICENSE"
] | 0 |
2.4 | Jentis | 1.0.0 | A unified Python interface for multiple Large Language Model (LLM) providers including Google Gemini, Anthropic Claude, OpenAI GPT, xAI Grok, Azure OpenAI, and Ollama | # Jentis LLM Kit
A unified Python interface for multiple Large Language Model (LLM) providers. Access Google Gemini, Anthropic Claude, OpenAI GPT, xAI Grok, Azure OpenAI, and Ollama through a single, consistent API.
## Features
- 🔄 **Unified Interface**: One API for all LLM providers
- 🚀 **Easy to Use**: Simple `init_llm()` function to get started
- 📡 **Streaming Support**: Real-time response streaming for all providers
- 📊 **Token Tracking**: Consistent token usage reporting across providers
- 🔧 **Flexible Configuration**: Provider-specific parameters when needed
- 🛡️ **Error Handling**: Comprehensive exception hierarchy for debugging
## Supported Providers
| Provider | Aliases | Models |
|----------|---------|--------|
| Google Gemini | `google`, `gemini` | gemini-2.0-flash-exp, gemini-1.5-pro, etc. |
| Anthropic Claude | `anthropic`, `claude` | claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022 |
| OpenAI | `openai`, `gpt` | gpt-4o, gpt-4o-mini, gpt-4-turbo |
| xAI Grok | `grok`, `xai` | grok-2-latest, grok-2-vision-latest |
| Azure OpenAI | `azure`, `microsoft` | Your deployment names |
| Ollama Cloud | `ollama-cloud` | llama2, mistral, codellama, etc. |
| Ollama Local | `ollama`, `ollama-local` | Any locally installed model |
| Vertex AI | `vertexai`, `vertex-ai`, `vertex` | Any Vertex AI Model Garden model |
## Installation
```bash
# Install the base package
pip install Jentis
# Install provider-specific dependencies
pip install google-generativeai # For Google Gemini
pip install anthropic # For Anthropic Claude
pip install openai # For OpenAI, Grok, Azure
pip install ollama # For Ollama (Cloud & Local)
# Vertex AI requires no pip packages — only gcloud CLI
```
## Quick Start
### Basic Usage
```python
from Jentis.llmkit import init_llm
# Initialize OpenAI GPT-4 (requires OpenAI API key)
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxx" # Your OpenAI API key
)
# Generate a response
response = llm.generate_response("What is Python?")
print(response)
```
### Streaming Responses
```python
from Jentis.llmkit import init_llm
# Each provider requires its own API key
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxx" # OpenAI-specific key
)
# Stream the response
for chunk in llm.generate_response_stream("Write a short story about AI"):
print(chunk, end='', flush=True)
```
## Provider Examples
### Google Gemini
```python
from Jentis.llmkit import init_llm
# Requires Google AI Studio API key
llm = init_llm(
provider="google",
model="gemini-2.0-flash-exp",
api_key="AIzaSyxxxxxxxxxxxxxxxxxxxxxxxxx", # Google API key
temperature=0.7,
max_tokens=1024
)
response = llm.generate_response("Explain quantum computing")
print(response)
```
### Anthropic Claude
```python
from Jentis.llmkit import init_llm
# Requires Anthropic API key
llm = init_llm(
provider="anthropic",
model="claude-3-5-sonnet-20241022",
api_key="sk-ant-api03-xxxxxxxxxxxxxxxxx", # Anthropic API key
max_tokens=2048,
temperature=0.8
)
response = llm.generate_response("Write a haiku about programming")
print(response)
```
### OpenAI GPT
```python
from Jentis.llmkit import init_llm
# Requires OpenAI API key
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx", # OpenAI API key
temperature=0.9,
max_tokens=1500,
frequency_penalty=0.5,
presence_penalty=0.3
)
response = llm.generate_response("Design a simple REST API")
print(response)
```
### xAI Grok
```python
from Jentis.llmkit import init_llm
# Requires xAI API key
llm = init_llm(
provider="grok",
model="grok-2-latest",
api_key="xai-xxxxxxxxxxxxxxxxxxxxxxxx", # xAI API key
temperature=0.7
)
response = llm.generate_response("What's happening in tech?")
print(response)
```
### Azure OpenAI
```python
from Jentis.llmkit import init_llm
# Requires Azure OpenAI API key and endpoint
llm = init_llm(
provider="azure",
model="gpt-4o",
api_key="a1b2c3d4e5f6xxxxxxxxxxxx", # Azure API key
azure_endpoint="https://your-resource.openai.azure.com/",
deployment_name="gpt-4o-deployment",
api_version="2024-08-01-preview",
temperature=0.7
)
response = llm.generate_response("Explain Azure services")
print(response)
```
### Ollama Local
```python
from Jentis.llmkit import init_llm
# No API key needed for local Ollama
llm = init_llm(
provider="ollama",
model="llama2",
temperature=0.7
)
response = llm.generate_response("Hello, Ollama!")
print(response)
```
### Ollama Cloud
```python
from Jentis.llmkit import init_llm
# Requires Ollama Cloud API key
llm = init_llm(
provider="ollama-cloud",
model="llama2",
api_key="ollama_xxxxxxxxxxxxxxxx", # Ollama Cloud API key
host="https://ollama.com"
)
response = llm.generate_response("Explain machine learning")
print(response)
```
### Vertex AI (Model Garden)
```python
from Jentis.llmkit import init_llm
# Uses gcloud CLI for authentication (no API key needed)
llm = init_llm(
provider="vertexai",
model="moonshotai/kimi-k2-thinking-maas",
project_id="gen-lang-client-0152852093",
region="global",
temperature=0.6,
max_tokens=8192
)
response = llm.generate_response("What is quantum computing?")
print(response)
```
## Advanced Usage
### Using Function-Based API with Metadata
If you need detailed metadata (token usage, model info), import the provider-specific functions:
```python
from Jentis.llmkit.Openai import openai_llm
result = openai_llm(
prompt="What is AI?",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx", # Your OpenAI API key
temperature=0.7
)
print(f"Content: {result['content']}")
print(f"Model: {result['model']}")
print(f"Input tokens: {result['usage']['input_tokens']}")
print(f"Output tokens: {result['usage']['output_tokens']}")
print(f"Total tokens: {result['usage']['total_tokens']}")
```
**Other Providers:**
```python
# Google Gemini
from Jentis.llmkit.Google import google_llm
result = google_llm(prompt="...", model="gemini-2.0-flash-exp", api_key="...")
# Anthropic Claude
from Jentis.llmkit.Anthropic import anthropic_llm
result = anthropic_llm(prompt="...", model="claude-3-5-sonnet-20241022", api_key="...", max_tokens=1024)
# Grok
from Jentis.llmkit.Grok import grok_llm
result = grok_llm(prompt="...", model="grok-2-latest", api_key="...")
# Azure OpenAI
from Jentis.llmkit.Microsoft import azure_llm
result = azure_llm(prompt="...", deployment_name="gpt-4o", azure_endpoint="...", api_key="...")
# Ollama Cloud
from Jentis.llmkit.Ollamacloud import ollama_cloud_llm
result = ollama_cloud_llm(prompt="...", model="llama2", api_key="...")
# Ollama Local
from Jentis.llmkit.Ollamalocal import ollama_local_llm
result = ollama_local_llm(prompt="...", model="llama2")
# Vertex AI
from Jentis.llmkit.Vertexai import vertexai_llm
result = vertexai_llm(prompt="...", model="google/gemini-2.0-flash", project_id="my-project")
```
**Streaming with Functions:**
```python
from Jentis.llmkit.Openai import openai_llm_stream
for chunk in openai_llm_stream(
prompt="Write a story",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx"
):
print(chunk, end='', flush=True)
```
### Custom Configuration
```python
from Jentis.llmkit import init_llm
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx", # Your OpenAI API key
temperature=0.8,
top_p=0.9,
max_tokens=2000,
max_retries=5,
timeout=60.0,
backoff_factor=1.0,
frequency_penalty=0.5,
presence_penalty=0.3
)
```
## Parameters
### Common Parameters
All providers support these parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `provider` | str | **Required** | Provider name or alias |
| `model` | str | **Required** | Model identifier |
| `api_key` | str | None | API key (env var if not provided) |
| `temperature` | float | None | Randomness (0.0-2.0) |
| `top_p` | float | None | Nucleus sampling (0.0-1.0) |
| `max_tokens` | int | None | Maximum tokens to generate |
| `timeout` | float | 30.0 | Request timeout (seconds) |
| `max_retries` | int | 3 | Retry attempts |
| `backoff_factor` | float | 0.5 | Exponential backoff factor |
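Taken together, `max_retries` and `backoff_factor` control the retry schedule. Assuming the common exponential-backoff convention (delay = `backoff_factor * 2**attempt`; the kit's exact schedule may differ), the waits look like:

```python
def retry_delays(max_retries=3, backoff_factor=0.5):
    """Delay before each retry under the usual exponential-backoff convention."""
    return [backoff_factor * (2 ** attempt) for attempt in range(max_retries)]

print(retry_delays())                       # -> [0.5, 1.0, 2.0]
print(retry_delays(5, backoff_factor=1.0))  # -> [1.0, 2.0, 4.0, 8.0, 16.0]
```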
### Provider-Specific Parameters
**OpenAI & Grok:**
- `frequency_penalty`: Penalty for token frequency (0.0-2.0)
- `presence_penalty`: Penalty for token presence (0.0-2.0)
**Azure OpenAI:**
- `azure_endpoint`: Azure endpoint URL (**Required**)
- `deployment_name`: Deployment name (defaults to model)
- `api_version`: API version (default: "2024-08-01-preview")
**Ollama (Cloud & Local):**
- `host`: Host URL (Cloud: "https://ollama.com", Local: "http://localhost:11434")
## Environment Variables
**Each provider uses its own environment variable for API keys.** Set them to avoid hardcoding:
```bash
# Google Gemini
export GOOGLE_API_KEY="AIzaSyxxxxxxxxxxxxxxxxxxxxxxxxx"
# Anthropic Claude
export ANTHROPIC_API_KEY="sk-ant-api03-xxxxxxxxxxxxxxxxx"
# OpenAI
export OPENAI_API_KEY="sk-proj-xxxxxxxxxxxxxxxxxxxx"
# xAI Grok
export XAI_API_KEY="xai-xxxxxxxxxxxxxxxxxxxxxxxx"
# Azure OpenAI
export AZURE_OPENAI_API_KEY="a1b2c3d4e5f6xxxxxxxxxxxx"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
# Ollama Cloud
export OLLAMA_API_KEY="ollama_xxxxxxxxxxxxxxxx"
# Vertex AI (uses gcloud auth, or set token explicitly)
export VERTEX_AI_ACCESS_TOKEN="ya29.xxxxx..."
export VERTEX_AI_PROJECT_ID="your-project-id"
```
Then initialize without the `api_key` parameter:
```python
from Jentis.llmkit import init_llm
# OpenAI - reads from OPENAI_API_KEY environment variable
llm = init_llm(provider="openai", model="gpt-4o")
# Google - reads from GOOGLE_API_KEY environment variable
llm = init_llm(provider="google", model="gemini-2.0-flash-exp")
# Anthropic - reads from ANTHROPIC_API_KEY environment variable
llm = init_llm(provider="anthropic", model="claude-3-5-sonnet-20241022")
# Vertex AI - reads from VERTEX_AI_PROJECT_ID, authenticates via gcloud
llm = init_llm(provider="vertexai", model="google/gemini-2.0-flash")
```
## Methods
All initialized LLM instances have two methods:
### `generate_response(prompt: str) -> str`
Generate a complete response.
```python
response = llm.generate_response("Your prompt here")
print(response) # String output
```
### `generate_response_stream(prompt: str) -> Generator`
Stream the response in real-time.
```python
for chunk in llm.generate_response_stream("Your prompt here"):
print(chunk, end='', flush=True)
```
## Error Handling
```python
from Jentis.llmkit import init_llm
try:
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-invalid-key-xxxxxxxxxx" # Wrong API key
)
response = llm.generate_response("Test")
except ValueError as e:
print(f"Invalid configuration: {e}")
except Exception as e:
print(f"API Error: {e}")
```
Each provider has its own exception hierarchy for detailed error handling. Import from provider modules:
```python
from Jentis.llmkit.Openai import (
OpenAILLMError,
OpenAILLMAPIError,
OpenAILLMImportError,
OpenAILLMResponseError
)
try:
from Jentis.llmkit.Openai import openai_llm
result = openai_llm(prompt="Test", model="gpt-4o", api_key="invalid")
except OpenAILLMAPIError as e:
print(f"API Error: {e}")
except OpenAILLMError as e:
print(f"General Error: {e}")
```
## Complete Example
```python
from Jentis.llmkit import init_llm
def chat_with_llm(provider_name: str, user_message: str):
"""Simple chat function supporting multiple providers."""
try:
# Initialize LLM
llm = init_llm(
provider=provider_name,
model="gpt-4o" if provider_name == "openai" else "llama2",
api_key=None, # Uses environment variables
temperature=0.7,
max_tokens=1024
)
# Stream response
print(f"\n{provider_name.upper()} Response:\n")
for chunk in llm.generate_response_stream(user_message):
print(chunk, end='', flush=True)
print("\n")
except ValueError as e:
print(f"Configuration error: {e}")
except Exception as e:
print(f"Error: {e}")
# Use different providers
chat_with_llm("openai", "What is machine learning?")
chat_with_llm("anthropic", "Explain neural networks")
chat_with_llm("ollama", "What is Python?")
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License; see the [LICENSE](../../LICENSE) file for details.
## Support
- **Issues**: [GitHub Issues](https://github.com/devXjitin/jentis-llmkit/issues)
- **Documentation**: [Project Docs](https://github.com/devXjitin/jentis-llmkit)
- **Community**: [Discussions](https://github.com/devXjitin/jentis-llmkit/discussions)
## Author
Built with care by the **J.E.N.T.I.S** team.
| text/markdown | J.E.N.T.I.S Team | null | null | null | MIT | llm, ai, openai, anthropic, google, gemini, claude, grok, ollama, azure, vertex-ai, chatgpt, gpt-4, machine-learning, nlp | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Langua... | [] | null | null | >=3.8 | [] | [] | [] | [
"google-generativeai>=0.3.0; extra == \"google\"",
"anthropic>=0.18.0; extra == \"anthropic\"",
"openai>=1.0.0; extra == \"openai\"",
"ollama>=0.1.0; extra == \"ollama\"",
"google-generativeai>=0.3.0; extra == \"all\"",
"anthropic>=0.18.0; extra == \"all\"",
"openai>=1.0.0; extra == \"all\"",
"ollama>... | [] | [] | [] | [
"Homepage, https://github.com/devXjitin/jentis-llmkit",
"Documentation, https://github.com/devXjitin/jentis-llmkit",
"Repository, https://github.com/devXjitin/jentis-llmkit",
"Bug Tracker, https://github.com/devXjitin/jentis-llmkit/issues",
"Discussions, https://github.com/devXjitin/jentis-llmkit/discussion... | twine/6.2.0 CPython/3.14.2 | 2026-02-18T21:18:03.089625 | jentis-1.0.0.tar.gz | 38,482 | 6d/f4/9da3764cd38fcb2a16c8515c8288af6461b65edbec7825a9c8261d1bdb50/jentis-1.0.0.tar.gz | source | sdist | null | false | 882cef2cd91c03ac8e179b6b620c8d7f | 170b0e75dcf577b4a38334279df7cd94936c8075a4616ad82b18a41c0a82ea41 | 6df49da3764cd38fcb2a16c8515c8288af6461b65edbec7825a9c8261d1bdb50 | null | [] | 0 |
2.4 | streamlit-lexical | 1.3.2 | Streamlit component that allows you to use Meta's Lexical rich text editor | # streamlit_lexical
Streamlit component that allows you to use Meta's [Lexical](https://lexical.dev/) as a rich text plugin.
## Installation instructions
```sh
pip install streamlit-lexical
```
## Usage instructions
```python
import streamlit as st
from streamlit_lexical import streamlit_lexical
markdown = streamlit_lexical(
    value="initial value in **markdown**",
    placeholder="Enter some rich text",
    height=800,
    debounce=500,
    key="1234",
    on_change=None,
)
st.markdown(markdown)
```
## Development instructions
After cloning the GitHub repo, set the following in `__init__.py`:
```python
RELEASE = False
```
And you can test out the example.py with your changes by doing the following:
```sh
cd streamlit_lexical/frontend
npm install  # or: yarn install
npm run start # Start the Webpack dev server
```
Then, in a separate terminal, run:
```sh
pip install -e .
streamlit run example.py
```
Further, to build the package (after making changes/adding features), you can install it locally like:
```sh
cd streamlit_lexical/frontend
npm install  # or: yarn install
npm run build
cd ../..
pip install -e ./
```
Make sure `RELEASE` is set to `True` in `__init__.py` in this case.
| text/markdown | Ben F | ben@musubilabs.ai | null | null | null | null | [] | [] | https://github.com/musubi-labs/streamlit_lexical | null | >=3.7 | [] | [] | [] | [
"streamlit>=1.36.0",
"wheel; extra == \"devel\"",
"pytest==7.4.0; extra == \"devel\"",
"playwright==1.39.0; extra == \"devel\"",
"requests==2.31.0; extra == \"devel\"",
"pytest-playwright-snapshot==1.0; extra == \"devel\"",
"pytest-rerunfailures==12.0; extra == \"devel\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.10 | 2026-02-18T21:17:56.378455 | streamlit_lexical-1.3.2.tar.gz | 689,248 | 6b/f9/cc2c4d5b931794d8712282e5a281fd3e292000cf4b59804138295d10206c/streamlit_lexical-1.3.2.tar.gz | source | sdist | null | false | db2a30cede546750d8128f241de80697 | e6ff19d43c1c7740fa58c839b3c5cf853ed8fed0ea0faea46cd0821e40f2c8cc | 6bf9cc2c4d5b931794d8712282e5a281fd3e292000cf4b59804138295d10206c | null | [
"LICENSE"
] | 286 |
2.4 | mindmapconverter | 0.1.1 | A tool to convert between Freemind/Freeplane and PlantUML mindmaps. | # Mind Map Converter
## Overview
This project provides a Python script (`mindmapconverter.py`) to facilitate the conversion between Freeplane/Freemind XML mind map files (`.mm`) and PlantUML mind map definitions (`.puml`). This enables users to leverage Freeplane/Freemind for visual mind map creation and then convert these maps into a PlantUML format suitable for embedding in documentation, especially in environments that support PlantUML rendering (e.g., GitLab, Confluence, Markdown viewers with Kroki integration).
## Features
- Convert Freeplane/Freemind (`.mm`) to PlantUML (`.puml`).
- Convert PlantUML (`.puml`) to Freeplane/Freemind (`.mm`).
- Supports both standard PlantUML syntax (`* Node`) and legacy underscore syntax (`*_ Node`).
- Command-line interface with proper argument parsing.
## Installation
### Prerequisites
- Python 3.x
### From PyPI
```bash
pip install mindmapconverter
```
### From Source
1. Clone the repository:
```bash
git clone https://github.com/your-username/mindmapconverter.git
cd mindmapconverter
```
2. Install the package:
```bash
pip install .
```
Or for development (editable mode):
```bash
pip install -e .
```
## Usage
The script automatically detects the conversion direction based on the input file's extension.
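The direction detection can be pictured as a simple extension check. This is an illustrative sketch only, assuming a `detect_direction` helper that is not part of the actual script:

```python
from pathlib import Path

def detect_direction(input_file: str) -> str:
    # Illustrative sketch; mindmapconverter's internals may differ.
    suffix = Path(input_file).suffix.lower()
    if suffix == ".mm":
        return "mm-to-puml"   # Freeplane/Freemind -> PlantUML
    if suffix == ".puml":
        return "puml-to-mm"   # PlantUML -> Freeplane/Freemind
    raise ValueError(f"unsupported extension: {suffix!r}")

print(detect_direction("my_mindmap.mm"))  # mm-to-puml
```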
### Command Line Interface
```bash
python mindmapconverter.py input_file [-o output_file]
```
### Converting Freeplane/Freemind to PlantUML
To convert a Freeplane/Freemind `.mm` file to PlantUML:
```bash
python mindmapconverter.py input_file.mm -o output_file.puml
```
**Example:**
```bash
python mindmapconverter.py my_mindmap.mm -o my_mindmap.puml
```
If `-o` is omitted, the output is printed to stdout:
```bash
python mindmapconverter.py my_mindmap.mm > my_mindmap.puml
```
### Converting PlantUML to Freeplane/Freemind
To convert a PlantUML `.puml` file to Freeplane/Freemind XML:
```bash
python mindmapconverter.py input_file.puml -o output_file.mm
```
**Example:**
```bash
python mindmapconverter.py my_mindmap.puml -o my_mindmap.mm
```
### Supported Syntax
The converter supports the standard PlantUML MindMap syntax using asterisks for hierarchy:
```plantuml
@startmindmap
* Root
** Child 1
** Child 2
*** Grandchild
@endmindmap
```
It also supports the legacy syntax with underscores (`*_ Node`).
## Testing
To run the included unit tests:
```bash
python3 test_mindmapconverter.py
```
## Contributing
Contributions are welcome! If you have suggestions for improvements, bug reports, or want to add new features, please feel free to:
1. Fork the repository.
2. Create a new branch (`git checkout -b feature/YourFeature`).
3. Make your changes and add tests.
4. Commit your changes (`git commit -m 'Add some feature'`).
5. Push to the branch (`git push origin feature/YourFeature`).
6. Open a Pull Request.
## License
This project is licensed under the [MIT License](LICENSE).
| text/markdown | null | Bosse <bosse@klykken.com> | null | null | MIT License
Copyright (c) 2023 Bosse Klykken
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| mindmap, converter, freemind, freeplane, plantuml | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/larkly/mindmapconverter",
"Bug Tracker, https://github.com/larkly/mindmapconverter/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T21:16:28.774173 | mindmapconverter-0.1.1.tar.gz | 5,973 | d0/7b/3d3c2c46d5ecec88802590669a19958fed5a617c48d35707bb182b7983fd/mindmapconverter-0.1.1.tar.gz | source | sdist | null | false | 508a054095550d7cc9e730b4e160edf3 | 97527d467832f9c263f4a90f280ff27c0140fdc1fa64cc7cfb60e104100f9294 | d07b3d3c2c46d5ecec88802590669a19958fed5a617c48d35707bb182b7983fd | null | [
"LICENSE"
] | 252 |
2.4 | ffpuppet | 0.19.0 | A Python module that aids in the automation of Firefox at the process level | FFPuppet
========
[](https://github.com/MozillaSecurity/ffpuppet/actions/workflows/ci.yml)
[](https://codecov.io/gh/MozillaSecurity/ffpuppet)
[](https://matrix.to/#/#fuzzing:mozilla.org)
[](https://pypi.org/project/ffpuppet)
FFPuppet is a Python module that automates browser process related tasks to aid in fuzzing. Happy bug hunting!
Are you [fuzzing](https://firefox-source-docs.mozilla.org/tools/fuzzing/index.html) the browser? [Grizzly](https://github.com/MozillaSecurity/grizzly) can help.
Installation
------------
##### To install the latest version from PyPI
pip install ffpuppet
##### Xvfb on Linux
On Linux `xvfb` can be used in order to run headless (this is not the same as Firefox's `-headless` mode).
To install `xvfb` on Ubuntu run:
apt-get install xvfb
##### Install minidump-stackwalk
`minidump-stackwalk` is used to collect crash reports from minidump files. More
information can be found [here](https://lib.rs/crates/minidump-stackwalk).
Browser Builds
--------------
If you are looking for builds to use with FFPuppet there are a few options.
##### Download a build
[fuzzfetch](https://github.com/MozillaSecurity/fuzzfetch) is the recommended method for obtaining builds and is also very helpful in automation.
Taskcluster has a collection of many different build types for multiple platforms and branches.
An index of the latest mozilla-central builds can be found [here](https://firefox-ci-tc.services.mozilla.com/tasks/index/gecko.v2.mozilla-central.latest.firefox/).
##### Create your own build
If you would like to compile your own, build instructions can be found [here](https://firefox-source-docs.mozilla.org/setup/index.html). When using `minidump-stackwalk`
breakpad [symbols](https://firefox-source-docs.mozilla.org/setup/building_with_debug_symbols.html#building-with-debug-symbols) are required for symbolized stacks.
Usage
-----
Once installed, FFPuppet can be run using the following command:
ffpuppet <firefox_binary>
##### Replaying a test case
ffpuppet <firefox_binary> -p <custom_prefs.js> -d -u <testcase>
This will open the provided test case file in Firefox using the provided prefs.js file. Any log data (stderr, stdout, ASan logs... etc) will be dumped to the console if a failure is detected. [Grizzly Replay](https://github.com/MozillaSecurity/grizzly/wiki/Grizzly-Replay) is recommended for replaying test cases.
##### Prefs.js files
prefs.js files that can be used for fuzzing or other automated testing can be generated with [PrefPicker](https://github.com/MozillaSecurity/prefpicker).
| text/markdown | Tyson Smith | twsmith@mozilla.com | Mozilla Fuzzing Team | fuzzing@mozilla.com | MPL 2.0 | automation firefox fuzz fuzzing security test testing | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Testing"
] | [] | https://github.com/MozillaSecurity/ffpuppet | null | >=3.10 | [] | [] | [] | [
"psutil>=5.9.0",
"xvfbwrapper>=0.2.10; sys_platform == \"linux\"",
"pre-commit; extra == \"dev\"",
"tox; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T21:16:28.027767 | ffpuppet-0.19.0.tar.gz | 88,005 | 3c/b3/5f339578d87244edb2470854020128afe956fa78428d4c4d3d993f899853/ffpuppet-0.19.0.tar.gz | source | sdist | null | false | ad6cd2e9c8cac8d5346f179a882be792 | db2e6b1f9a2415d391eaf92736d26ddca8e8163a939eab8a8f68fbd1eb1cd96d | 3cb35f339578d87244edb2470854020128afe956fa78428d4c4d3d993f899853 | null | [
"LICENSE"
] | 4,321 |
2.4 | codeflash | 0.20.1 | Client for codeflash.ai - automatic code performance optimization, powered by AI | 
<p align="center">
<a href="https://github.com/codeflash-ai/codeflash">
<img src="https://img.shields.io/github/commit-activity/m/codeflash-ai/codeflash" alt="GitHub commit activity">
</a>
<a href="https://pypi.org/project/codeflash/"><img src="https://static.pepy.tech/badge/codeflash" alt="PyPI Downloads"></a>
<a href="https://pypi.org/project/codeflash/">
<img src="https://img.shields.io/pypi/v/codeflash?label=PyPI%20version" alt="PyPI Downloads">
</a>
</p>
[Codeflash](https://www.codeflash.ai) is a general purpose optimizer for Python that helps you improve the performance of your Python code while maintaining its correctness.
It uses advanced LLMs to generate multiple optimization ideas for your code, tests them for correctness, and benchmarks them for performance. It then creates merge-ready pull requests containing the best optimization found, which you can review and merge.
How to use Codeflash -
- Optimize an entire existing codebase by running `codeflash --all`
- Automate optimizing all **future** code you will write by installing Codeflash as a GitHub action.
- Optimize a Python workflow `python myscript.py` end-to-end by running `codeflash optimize myscript.py`
Codeflash is used by top engineering teams at **Pydantic** [(PRs Merged)](https://github.com/pydantic/pydantic/pulls?q=is%3Apr+author%3Amisrasaurabh1+is%3Amerged), **Roboflow** [(PRs Merged 1](https://github.com/roboflow/inference/issues?q=state%3Aclosed%20is%3Apr%20author%3Amisrasaurabh1%20is%3Amerged), [PRs Merged 2)](https://github.com/roboflow/inference/issues?q=state%3Amerged%20is%3Apr%20author%3Acodeflash-ai%5Bbot%5D), **Unstructured** [(PRs Merged 1](https://github.com/Unstructured-IO/unstructured/pulls?q=is%3Apr+Explanation+and+details+in%3Abody+is%3Amerged), [PRs Merged 2)](https://github.com/Unstructured-IO/unstructured-ingest/pulls?q=is%3Apr+Explanation+and+details+in%3Abody+is%3Amerged), **Langflow** [(PRs Merged)](https://github.com/langflow-ai/langflow/issues?q=state%3Aclosed%20is%3Apr%20author%3Amisrasaurabh1) and many others to ship performant, expert level code.
Codeflash is great at optimizing AI Agents, Computer Vision algorithms, PyTorch code, numerical code, backend code or anything else you might write with Python.
## Installation
To install Codeflash, run:
```
pip install codeflash
```
Add codeflash as a development time dependency if you are using package managers like uv or poetry.
## Quick Start
1. To configure Codeflash for a project, at the root directory of your project where the pyproject.toml file is located, run:
```
codeflash init
```
- It will ask you a few questions about your project like the location of your code and tests
- Ask you to generate an [API Key](https://app.codeflash.ai/app/apikeys) to access Codeflash's LLMs
- Install a [GitHub app](https://github.com/apps/codeflash-ai/installations/select_target) to open Pull Requests on GitHub.
- Ask if you want to set up a GitHub Action that will optimize all your future code.
- The codeflash config is then saved in the pyproject.toml file.
2. Optimize your entire codebase:
```
codeflash --all
```
This can take a while to run for a large codebase, but it will keep opening PRs as it finds optimizations.
3. Optimize a script:
```
codeflash optimize myscript.py
```
## Documentation
For detailed installation and usage instructions, visit our documentation at [docs.codeflash.ai](https://docs.codeflash.ai)
## Demo
- Optimizing the performance of new code for a Pull Request through GitHub Actions. This lets you ship code quickly while ensuring it remains performant.
https://github.com/user-attachments/assets/38f44f4e-be1c-4f84-8db9-63d5ee3e61e5
- Optimizing a workflow end-to-end automatically with `codeflash optimize`
https://github.com/user-attachments/assets/355ba295-eb5a-453a-8968-7fb35c70d16c
## Support
Join our community for support and discussions. If you have any questions, feel free to reach out to us using one of the following methods:
- [Free live Installation Support](https://calendly.com/codeflash-saurabh/codeflash-setup)
- [Join our Discord](https://www.codeflash.ai/discord)
- [Follow us on Twitter](https://x.com/codeflashAI)
- [Follow us on Linkedin](https://www.linkedin.com/in/saurabh-misra/)
## License
Codeflash is licensed under the BSL-1.1 License. See the [LICENSE](https://github.com/codeflash-ai/codeflash/blob/main/codeflash/LICENSE) file for details.
| text/markdown | null | "CodeFlash Inc." <contact@codeflash.ai> | null | null | null | LLM, ai, code, codeflash, machine learning, optimization, performance | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.1.0",
"codeflash-benchmark",
"coverage>=7.6.4",
"crosshair-tool>=0.0.78",
"dill>=0.3.8",
"filelock",
"gitpython>=3.1.31",
"humanize>=4.0.0",
"inquirer>=3.0.0",
"isort>=5.11.0",
"jedi>=0.19.1",
"junitparser>=3.1.0",
"libcst>=1.0.1",
"line-profiler>=4.2.0",
"lxml>=5.3.0",
"para... | [] | [] | [] | [
"Homepage, https://codeflash.ai"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T21:16:25.335586 | codeflash-0.20.1.tar.gz | 506,476 | 30/f6/32cc82d8863f6c43b461b27697d3367c9a70d5444ac0741df978acaf89f8/codeflash-0.20.1.tar.gz | source | sdist | null | false | 2f0edcbe1cf0a4b8ead93e782c481c27 | 035934397277ec18860fb9ee81460f96cab11bc1cab4e9b5e16414955f2f2aea | 30f632cc82d8863f6c43b461b27697d3367c9a70d5444ac0741df978acaf89f8 | null | [
"LICENSE"
] | 2,215 |
2.4 | rw-cli-keywords | 0.0.0 | A set of RunWhen published CLI keywords and python libraries for interacting with APIs using CLIs |
<p align="center">
<br>
<a href="https://runwhen.slack.com/join/shared_invite/zt-1l7t3tdzl-IzB8gXDsWtHkT8C5nufm2A">
<img src="https://img.shields.io/badge/Join%20Slack-%23E01563.svg?&style=for-the-badge&logo=slack&logoColor=white" alt="Join Slack">
</a>
</p>
# CodeCollection Registry
To explore all CodeCollections and tasks, please visit the [CodeCollection Registry](https://registry.runwhen.com/).
[](https://registry.runwhen.com)
## RunWhen CLI Codecollection
This repository is **one of many** CodeCollections that is used with the [RunWhen Platform](https://www.runwhen.com) and [RunWhen Local](https://docs.runwhen.com/public/v/runwhen-local). It contains CodeBundles that are maintained by the RunWhen team and perform health, operational, and troubleshooting tasks.
Please see the **[contributing](CONTRIBUTING.md)** and **[code of conduct](CODE_OF_CONDUCT.md)** for details on adding your contributions to this project.
| text/markdown | null | RunWhen <info@runwhen.com> | null | null | Apache License 2.0 | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | null | [] | [] | [] | [
"robotframework>=4.1.2",
"jmespath>=1.0.1",
"python-dateutil>=2.9.0",
"requests>=2.31.0",
"thefuzz>=0.20.0",
"pyyaml>=6.0.1",
"jinja2>=3.1.4",
"tabulate>=0.9.0",
"google-auth>=2.0.0",
"google-cloud-bigquery>=3.0.0",
"docker",
"azure-containerregistry",
"azure-identity"
] | [] | [] | [] | [
"homepage, https://github.com/runwhen-contrib/rw-cli-codecollection"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T21:16:23.577768 | rw_cli_keywords-0.0.0.tar.gz | 89,626 | 7e/e0/6746ed80b371ecb5402dfb4cebaee51e56c3acfef24166312243858db4c2/rw_cli_keywords-0.0.0.tar.gz | source | sdist | null | false | d258e8e7b03da06fc592fab86151b1aa | 699dad5963004bd57c530a7060ce02a84966c6272afb089a9eeafe9df174149f | 7ee06746ed80b371ecb5402dfb4cebaee51e56c3acfef24166312243858db4c2 | null | [
"LICENSE"
] | 226 |
2.4 | tiny-agent-os | 1.2.1 | Python agent loop | # TinyAgent

A small, modular agent framework for building LLM-powered applications in Python.
Inspired by [smolagents](https://github.com/huggingface/smolagents) and [Pi](https://github.com/badlogic/pi-mono) — borrowing the minimal-abstraction philosophy from the former and the conversational agent loop from the latter.
> **Beta** — TinyAgent is usable but not production-ready. APIs may change between minor versions.
## Overview
TinyAgent provides a lightweight foundation for creating conversational AI agents with tool use capabilities. It features:
- **Streaming-first architecture**: All LLM interactions support streaming responses
- **Tool execution**: Define and execute tools with structured outputs
- **Event-driven**: Subscribe to agent events for real-time UI updates
- **Provider agnostic**: Works with any OpenAI-compatible `/chat/completions` endpoint (OpenRouter, OpenAI, Chutes, local servers)
- **Prompt caching**: Reduce token costs and latency with Anthropic-style cache breakpoints
- **Dual provider paths**: Pure-Python or optional Rust binding via PyO3 for native-speed streaming
- **Type-safe**: Full type hints throughout
## Quick Start
```python
import asyncio
from tinyagent import Agent, AgentOptions, OpenRouterModel, stream_openrouter
# Create an agent
agent = Agent(
AgentOptions(
stream_fn=stream_openrouter,
session_id="my-session"
)
)
# Configure
agent.set_system_prompt("You are a helpful assistant.")
agent.set_model(OpenRouterModel(id="anthropic/claude-3.5-sonnet"))
# Optional: use any OpenAI-compatible /chat/completions endpoint
# agent.set_model(OpenRouterModel(id="gpt-4o-mini", base_url="https://api.openai.com/v1/chat/completions"))
# Simple prompt
async def main():
response = await agent.prompt_text("What is the capital of France?")
print(response)
asyncio.run(main())
```
## Installation
```bash
pip install tiny-agent-os
```
## Core Concepts
### Agent
The [`Agent`](api/agent.md) class is the main entry point. It manages:
- Conversation state (messages, tools, system prompt)
- Streaming responses
- Tool execution
- Event subscription
### Messages
Messages follow a typed dictionary structure:
- `UserMessage`: Input from the user
- `AssistantMessage`: Response from the LLM
- `ToolResultMessage`: Result from tool execution
### Tools
Tools are functions the LLM can call:
```python
from tinyagent import AgentTool, AgentToolResult
async def calculate_sum(tool_call_id: str, args: dict, signal, on_update) -> AgentToolResult:
result = args["a"] + args["b"]
return AgentToolResult(
content=[{"type": "text", "text": str(result)}]
)
tool = AgentTool(
name="sum",
description="Add two numbers",
parameters={
"type": "object",
"properties": {
"a": {"type": "number"},
"b": {"type": "number"}
},
"required": ["a", "b"]
},
execute=calculate_sum
)
agent.set_tools([tool])
```
### Events
The agent emits events during execution:
- `AgentStartEvent` / `AgentEndEvent`: Agent run lifecycle
- `TurnStartEvent` / `TurnEndEvent`: Single turn lifecycle
- `MessageStartEvent` / `MessageUpdateEvent` / `MessageEndEvent`: Message streaming
- `ToolExecutionStartEvent` / `ToolExecutionUpdateEvent` / `ToolExecutionEndEvent`: Tool execution
Subscribe to events:
```python
def on_event(event):
print(f"Event: {event.type}")
unsubscribe = agent.subscribe(on_event)
```
### Prompt Caching
TinyAgent supports [Anthropic-style prompt caching](api/caching.md) to reduce costs on multi-turn conversations. Enable it when creating the agent:
```python
agent = Agent(
AgentOptions(
stream_fn=stream_openrouter,
session_id="my-session",
enable_prompt_caching=True,
)
)
```
Cache breakpoints are automatically placed on user message content blocks so the prompt prefix stays cached across turns. See [Prompt Caching](api/caching.md) for details.
## Rust Binding: `tinyagent._alchemy`
TinyAgent ships with an optional Rust-based LLM provider implemented in
`src/lib.rs`. It wraps the [`alchemy-llm`](https://crates.io/crates/alchemy-llm)
Rust crate and exposes it to Python via [PyO3](https://pyo3.rs) as
`tinyagent._alchemy`, giving you native-speed OpenAI-compatible streaming without
leaving the Python process.
### Why
The pure-Python providers (`openrouter_provider.py`, `proxy.py`) work fine, but the Rust
binding gives you:
- **Lower per-token overhead** -- SSE parsing, JSON deserialization, and event dispatch all
happen in compiled Rust with a multi-threaded Tokio runtime.
- **Unified provider abstraction** -- `alchemy-llm` normalizes differences across providers
(OpenRouter, Anthropic, custom endpoints) behind a single streaming interface.
- **Full event fidelity** -- text deltas, thinking deltas, tool call deltas, and terminal
events are all surfaced as typed Python dicts.
### How it works
```
Python (async) Rust (Tokio)
───────────────── ─────────────────────────
stream_alchemy_*() ──> alchemy_llm::stream()
│
AlchemyStreamResponse ├─ SSE parse + deserialize
.__anext__() <── ├─ event_to_py_value()
(asyncio.to_thread) └─ mpsc channel -> Python
```
1. Python calls `openai_completions_stream(model, context, options)` which is a `#[pyfunction]`.
2. The Rust side builds an `alchemy-llm` request, opens an SSE stream on a shared Tokio
runtime, and sends events through an `mpsc` channel.
3. Python reads events by calling the blocking `next_event()` method via
`asyncio.to_thread`, making it async-compatible without busy-waiting.
4. A terminal `done` or `error` event signals the end of the stream. The final
`AssistantMessage` dict is available via `result()`.
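The consumption pattern in steps 3 and 4 can be sketched with a stand-in stream. The `FakeStream` class below is invented for illustration so the pattern runs without the Rust extension; the real handle is `OpenAICompletionsStream`:

```python
import asyncio

class FakeStream:
    """Stand-in for OpenAICompletionsStream; the real next_event()
    blocks on an mpsc channel fed by the Tokio runtime."""
    def __init__(self, events):
        self._events = iter(events)

    def next_event(self):
        return next(self._events, None)  # None signals end of stream

async def consume(stream) -> list:
    events = []
    # next_event() is blocking, so run each call in a worker thread to
    # keep the asyncio event loop responsive (mirrors tinyagent's approach).
    while (event := await asyncio.to_thread(stream.next_event)) is not None:
        events.append(event)
    return events

events = asyncio.run(consume(FakeStream([{"type": "text_delta"}, {"type": "done"}])))
```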
### Building
Requires a Rust toolchain (1.70+) and [maturin](https://www.maturin.rs/).
```bash
pip install maturin
maturin develop # debug build, installs into current venv
maturin develop --release # optimized build
```
### Python API
Two functions are exposed from the `tinyagent._alchemy` module:
| Function | Description |
|---|---|
| `collect_openai_completions(model, context, options?)` | Blocking. Consumes the entire stream and returns `{"events": [...], "final_message": {...}}`. Useful for one-shot calls. |
| `openai_completions_stream(model, context, options?)` | Returns an `OpenAICompletionsStream` handle for incremental consumption. |
The `OpenAICompletionsStream` handle has two methods:
| Method | Description |
|---|---|
| `next_event()` | Blocking. Returns the next event dict, or `None` when the stream ends. |
| `result()` | Blocking. Returns the final assistant message dict. |
All three arguments are plain Python dicts:
```python
model = {
"id": "anthropic/claude-3.5-sonnet",
"base_url": "https://openrouter.ai/api/v1/chat/completions",
"provider": "openrouter", # required for env-key fallback/inference
"api": "openai-completions", # optional; inferred from provider when omitted/blank
"headers": {"X-Custom": "val"}, # optional
"reasoning": False, # optional
"context_window": 128000, # optional
"max_tokens": 4096, # optional
}
context = {
"system_prompt": "You are helpful.",
"messages": [
{"role": "user", "content": [{"type": "text", "text": "Hello"}]}
],
"tools": [ # optional
{"name": "sum", "description": "Add numbers", "parameters": {...}}
],
}
options = {
"api_key": "sk-...", # optional
"temperature": 0.7, # optional
"max_tokens": 1024, # optional
}
```
**Routing contract (`provider`, `api`, `base_url`)**:
- `provider`: backend identity used for API-key fallback and provider defaults
- `api`: alchemy unified API selector (`openai-completions` or `minimax-completions`)
- `base_url`: concrete HTTP endpoint
If `api` is omitted/blank, the Python side infers:
- `provider in {"minimax", "minimax-cn"}` => `minimax-completions`
- otherwise => `openai-completions`
Legacy API aliases are normalized for backward compatibility:
- `api="openrouter"` / `api="openai"` => `openai-completions`
- `api="minimax"` => `minimax-completions`
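The inference and alias-normalization rules above can be sketched as follows. This is illustrative only; `infer_api` is not part of the library's public API:

```python
# Legacy API aliases normalized for backward compatibility.
LEGACY_ALIASES = {
    "openrouter": "openai-completions",
    "openai": "openai-completions",
    "minimax": "minimax-completions",
}

def infer_api(provider: str, api: str = "") -> str:
    # Sketch of the routing contract described above.
    if api:
        return LEGACY_ALIASES.get(api, api)  # pass canonical names through
    if provider in {"minimax", "minimax-cn"}:
        return "minimax-completions"
    return "openai-completions"
```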
### Using via TinyAgent (high-level)
You don't need to call the Rust binding directly. Use the `alchemy_provider` module:
```python
from tinyagent import Agent, AgentOptions
from tinyagent.alchemy_provider import OpenAICompatModel, stream_alchemy_openai_completions
agent = Agent(
AgentOptions(
stream_fn=stream_alchemy_openai_completions,
session_id="my-session",
)
)
agent.set_model(
OpenAICompatModel(
provider="openrouter",
id="anthropic/claude-3.5-sonnet",
base_url="https://openrouter.ai/api/v1/chat/completions",
)
)
```
MiniMax global:
```python
agent.set_model(
OpenAICompatModel(
provider="minimax",
id="MiniMax-M2.5",
base_url="https://api.minimax.io/v1/chat/completions",
# api is optional here; inferred as "minimax-completions"
)
)
```
MiniMax CN:
```python
agent.set_model(
OpenAICompatModel(
provider="minimax-cn",
id="MiniMax-M2.5",
base_url="https://api.minimax.chat/v1/chat/completions",
# api is optional here; inferred as "minimax-completions"
)
)
```
### Limitations
- Rust binding currently dispatches only `openai-completions` and `minimax-completions`.
- Image blocks are not yet supported (text and thinking blocks work).
- `next_event()` is blocking and runs in a thread via `asyncio.to_thread` -- this adds
slight overhead compared to a native async generator, but keeps the GIL released during
the Rust work.
## Documentation
- [Architecture](ARCHITECTURE.md): System design and component interactions
- [API Reference](api/): Detailed module documentation
- [Prompt Caching](api/caching.md): Cache breakpoints, cost savings, and provider requirements
- [OpenAI-Compatible Endpoints](api/openai-compatible-endpoints.md): Using `OpenRouterModel.base_url` with OpenRouter, OpenAI, Chutes, and local compatible backends
- [Usage Semantics](api/usage-semantics.md): Unified `message["usage"]` schema across Python and Rust provider paths
- [Changelog](../CHANGELOG.md): Release history
## Project Structure
```
tinyagent/
├── agent.py # Agent class
├── agent_loop.py # Core agent execution loop
├── agent_tool_execution.py # Tool execution helpers
├── agent_types.py # Type definitions
├── caching.py # Prompt caching utilities
├── openrouter_provider.py # OpenRouter integration
├── alchemy_provider.py # Rust-based provider (PyO3)
├── proxy.py # Proxy server integration
└── proxy_event_handlers.py # Proxy event parsing
```
| text/markdown; charset=UTF-8; variant=GFM | Fabian | null | null | null | MIT | agent, llm, openrouter, streaming | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Develo... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.28.0",
"ruff>=0.9.0; extra == \"dev\"",
"mypy>=1.14.0; extra == \"dev\"",
"pre-commit>=4.0.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.25.0; extra == \"dev\"",
"python-dotenv>=1.0.0; extra == \"dev\"",
"grimp>=3.0; extra == \"dev\"",
"vulture>=2.11.0; extra ... | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-18T21:15:20.644681 | tiny_agent_os-1.2.1.tar.gz | 178,994 | 57/6e/36af597658d3c29e51dc7aedc05e37c3278db5aee7605e9ab582c9946885/tiny_agent_os-1.2.1.tar.gz | source | sdist | null | false | 0599e2d1ec0b85d6d18c1515a8d41b83 | ff09fa25c4088714d7cc82af4176e4fc02045d8d175f54d1ae1b52268c409ed1 | 576e36af597658d3c29e51dc7aedc05e37c3278db5aee7605e9ab582c9946885 | null | [
"LICENSE"
] | 450 |
2.4 | modelscan | 0.8.8 | The modelscan package is a cli tool for detecting unsafe operations in model files across various model serialization formats. | 
[](https://github.com/protectai/modelscan/actions/workflows/bandit.yml)
[](https://github.com/protectai/modelscan/actions/workflows/build.yml)
[](https://github.com/protectai/modelscan/actions/workflows/black.yml)
[](https://github.com/protectai/modelscan/actions/workflows/mypy.yml)
[](https://github.com/protectai/modelscan/actions/workflows/test.yml)
[](https://pypi.org/project/modelscan)
[](https://pypi.org/project/modelscan)
[](https://opensource.org/license/apache-2-0/)
[](https://github.com/pre-commit/pre-commit)
# ModelScan: Protection Against Model Serialization Attacks
Machine Learning (ML) models are shared publicly over the internet, within teams, and across teams. The rise of Foundation Models has resulted in public ML models being increasingly consumed for further training/fine-tuning. ML models are increasingly used to make critical decisions and power mission-critical applications.
Despite this, models are not yet scanned with the rigor of a PDF file in your inbox.
This needs to change, and proper tooling is the first step.

ModelScan is an open source project from [Protect AI](https://protectai.com/?utm_campaign=Homepage&utm_source=ModelScan%20GitHub%20Page&utm_medium=cta&utm_content=Open%20Source) that scans models to determine if they contain
unsafe code. It is the first model scanning tool to support multiple model formats.
ModelScan currently supports the H5, Pickle, and SavedModel formats. This protects you
when using PyTorch, TensorFlow, Keras, Sklearn, and XGBoost, with more on the way.
## TL;DR
If you are ready to get started scanning your models, it is simple:
```bash
pip install modelscan
```
With it installed, scan a model:
```bash
modelscan -p /path/to/model_file.pkl
```
## Why You Should Scan Models
Models are often created from automated pipelines; others may come from a data scientist’s laptop. In either case, the model needs to move from one machine to another before it is used. That process of saving a model to disk is called serialization.
A **Model Serialization Attack** is where malicious code is added to the contents of a model during serialization (saving) before distribution — a modern version of the Trojan Horse.
The attack functions by exploiting the saving and loading process of models. When you load a model with `model = torch.load(PATH)`, PyTorch opens the contents of the file and begins running the code within. The second you load the model, the exploit has executed.
A **Model Serialization Attack** can be used to execute:
- Credential Theft (cloud credentials for writing and reading data to other systems in your environment)
- Data Theft (the request sent to the model)
- Data Poisoning (the data sent after the model has performed its task)
- Model Poisoning (altering the results of the model itself)
These attacks are incredibly simple to execute and you can view working examples in our 📓[notebooks](https://github.com/protectai/modelscan/tree/main/notebooks) folder.
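The mechanics can be sketched with a deliberately harmless payload (`os.path.join` stands in for a dangerous call such as `os.system`; the class and names here are illustrative, not part of ModelScan):

```python
import os
import pickle

class StealthyPayload:
    # pickle calls __reduce__ when saving; whatever callable it returns
    # is invoked with the given arguments at *load* time.
    def __reduce__(self):
        # A real attacker would return os.system, eval, or similar here.
        return (os.path.join, ("attacker", "controlled"))

data = pickle.dumps(StealthyPayload())

# The victim only "loads a model", but the attacker's callable runs:
result = pickle.loads(data)
print(result)  # the injected call executed; you never get a StealthyPayload back
```

Note that the loaded object is not a `StealthyPayload` at all; it is whatever the attacker's callable returned.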
## Enforcing And Automating Model Security
ModelScan offers robust open-source scanning. If you need comprehensive AI security, consider [Guardian](https://protectai.com/guardian?utm_campaign=Guardian&utm_source=ModelScan%20GitHub%20Page&utm_medium=cta&utm_content=Open%20Source). It is our enterprise-grade model scanning product.

### Guardian's Features:
1. **Cutting-Edge Scanning**: Access our latest scanners, broader model support, and automatic model format detection.
2. **Proactive Security**: Define and enforce security requirements for Hugging Face models before they enter your environment—no code changes required.
3. **Enterprise-Wide Coverage**: Implement a cohesive security posture across your organization, seamlessly integrating with your CI/CD pipelines.
4. **Comprehensive Audit Trail**: Gain full visibility into all scans and results, empowering you to identify and mitigate threats effectively.
## Getting Started
### How ModelScan Works
If loading a model with your machine learning framework automatically executes the attack,
how does ModelScan check the content without loading the malicious code?
Simple: it reads the content of the file one byte at a time, just like a string, looking for
code signatures that are unsafe. This makes it incredibly fast, scanning models in the time it
takes your computer to read the file from disk (seconds in most cases). It is also secure.
ModelScan ranks the unsafe code as:
- CRITICAL
- HIGH
- MEDIUM
- LOW

If an issue is detected, reach out to the authors of the model immediately to determine the cause.
In some cases, code may be embedded in the model to make things easier to reproduce as a data scientist, but
it opens you up to attack. Use your discretion to determine whether that is appropriate for your workloads.
### What Models and Frameworks Are Supported?
This list will expand continually, so look out for changes in our release notes.
At present, ModelScan supports any Pickle-derived format and several others:
| ML Library | API | Serialization Format | modelscan support |
|----------------------------------------------|------------------------------------------------------------------------------------------------------------|-------------------------------------|-------------------|
| Pytorch | [torch.save() and torch.load()](https://pytorch.org/tutorials/beginner/saving_loading_models.html ) | Pickle | Yes |
| Tensorflow | [tf.saved_model.save()](https://www.tensorflow.org/guide/saved_model) | Protocol Buffer | Yes |
| Keras                                        | [keras.models.save(save_format= 'h5')](https://www.tensorflow.org/guide/keras/serialization_and_saving)    | HDF5 (Hierarchical Data Format)     | Yes               |
| | [keras.models.save(save_format= 'keras')](https://www.tensorflow.org/guide/keras/serialization_and_saving) | Keras V3 (Hierarchical Data Format) | Yes |
| Classic ML Libraries (Sklearn, XGBoost etc.) | pickle.dump(), dill.dump(), joblib.dump(), cloudpickle.dump() | Pickle, Cloudpickle, Dill, Joblib | Yes |
### Installation
ModelScan is installed on your system as a Python package (Python 3.10 to 3.12 supported). As shown above, you can install
it by running this in your terminal:
```bash
pip install modelscan
```
To include it in your project's dependencies so it is available for everyone, add `modelscan>=0.1.1` to your `requirements.txt`,
or the equivalent to your `pyproject.toml` (shown here in Poetry style):
```toml
modelscan = ">=0.1.1"
```
Scanners for TensorFlow or HDF5 formatted models require installation with extras:
```bash
pip install 'modelscan[tensorflow,h5py]'
```
### Using ModelScan via CLI
ModelScan supports the following arguments via the CLI:
| Usage | Argument | Explanation |
|----------------------------------------------------------------------------------|------------------|---------------------------------------------------------|
| ```modelscan -h``` | -h or --help | View usage help |
| ```modelscan -v``` | -v or --version | View version information |
| ```modelscan -p /path/to/model_file``` | -p or --path | Scan a locally stored model |
| ```modelscan -p /path/to/model_file --settings-file ./modelscan-settings.toml``` | --settings-file | Scan a locally stored model using custom configurations |
| ```modelscan create-settings-file -l ./modelscan-settings.toml```               | -l or --location | Create a configurable settings file at the given location |
| ```modelscan -r``` | -r or --reporting-format | Format of the output. Options are console, json, or custom (to be defined in settings-file). Default is console |
| ```modelscan -r reporting-format -o file-name``` | -o or --output-file | Optional file name for output report |
| ```modelscan --show-skipped``` | --show-skipped | Print a list of files that were skipped during the scan |
Remember: models are just like any other form of digital media; you should scan content from any untrusted source before use.
#### CLI Exit Codes
The CLI exit status codes are:
- `0`: Scan completed successfully, no vulnerabilities found
- `1`: Scan completed successfully, vulnerabilities found
- `2`: Scan failed, modelscan threw an error while scanning
- `3`: No supported files were passed to the tool
- `4`: Usage error, CLI was passed invalid or incomplete options
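These exit codes make scans easy to gate in CI. A small POSIX shell sketch (the helper function is illustrative, not part of ModelScan; the real invocation is shown in the comment):

```shell
#!/bin/sh
# Map modelscan's documented exit codes to human-readable CI outcomes.
interpret_scan_status() {
  case "$1" in
    0) echo "clean" ;;
    1) echo "vulnerabilities-found" ;;
    2) echo "scan-error" ;;
    3) echo "no-supported-files" ;;
    4) echo "usage-error" ;;
    *) echo "unknown" ;;
  esac
}

# In CI you would run the real scan, e.g.:
#   modelscan -p ./model.pkl -r json -o scan.json
#   interpret_scan_status "$?"
interpret_scan_status 1   # prints "vulnerabilities-found"
```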
### Using ModelScan Programmatically in Python
While ModelScan can be easily used via CLI, you can also integrate it directly into your Python applications or workflows.
```python
from modelscan.modelscan import ModelScan
from modelscan.settings import DEFAULT_SETTINGS

# Initialize ModelScan with default settings
scanner = ModelScan(settings=DEFAULT_SETTINGS)

# Scan a model file or directory
results = scanner.scan("/path/to/model_file.pkl")

# Check if issues were found
if scanner.issues.all_issues:
    print(f"Found {len(scanner.issues.all_issues)} issues!")

    # Access issues by severity
    issues_by_severity = scanner.issues.group_by_severity()
    for severity, issues in issues_by_severity.items():
        print(f"{severity}: {len(issues)} issues")

# Generate a report (default is console output)
scanner.generate_report()
```
You can customize the scan behavior with your own settings:
```python
# Start with default settings and customize
custom_settings = DEFAULT_SETTINGS.copy()
# Update settings as needed
custom_settings["reporting"]["module"] = "modelscan.reporting.json_report.JSONReport"
custom_settings["reporting"]["settings"]["output_file"] = "scan_results.json"
# Initialize with custom settings
scanner = ModelScan(settings=custom_settings)
```
### Understanding The Results
Once a scan has been completed you'll see output like this if an issue is found:

Here we have a model that contains unsafe operators for both `ReadFile` and `WriteFile`.
Clearly we do not want our models reading and writing files arbitrarily, so we would reach out
to the creator of this model to determine what they expected it to do. In this particular case
it allows an attacker to read our AWS credentials and write them to another location.
That is a firm NO for usage.
## Integrating ModelScan In Your ML Pipelines and CI/CD Pipelines
Ad-hoc scanning is a great first step. Make it a habit for yourself, your peers, and your friends
to scan whenever you pull down a new model to explore. On its own, however, it is not sufficient
to secure production MLOps processes.
Model scanning needs to be performed more than once to accomplish the following:
1. Scan all pre-trained models before loading them for further work, to prevent a compromised
model from impacting your model building or data science environments.
2. Scan all models after training to detect a supply chain attack that compromises new models.
3. Scan all models before deploying to an endpoint to ensure that the model has not been compromised after storage.
The red blocks below highlight this in a traditional ML Pipeline.

The process would be the same for fine-tuning or any modification of LLMs, foundation models, or external models.
If deployment happens outside your ML pipelines, embed scans into your CI/CD systems as well,
so models are checked as they are deployed.
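As one possible shape, a hedged GitHub Actions step (the step name, artifact path, and output file are placeholders, not part of ModelScan itself):

```yaml
# Hypothetical CI step: fail the build when modelscan finds issues.
- name: Scan model artifacts
  run: |
    pip install modelscan
    modelscan -p ./artifacts/model.pkl -r json -o scan_results.json
  # modelscan exits 1 when vulnerabilities are found, failing this step.
```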
## Diving Deeper
Inside the 📓[**notebooks**](https://github.com/protectai/modelscan/tree/main/notebooks) folder you can explore a number of notebooks that showcase
exactly how Model Serialization Attacks can be performed against various ML Frameworks like TensorFlow and PyTorch.
To dig more into the meat of how exactly these attacks work check out 🖹 [**Model Serialization Attack Explainer**](https://github.com/protectai/modelscan/blob/main/docs/model_serialization_attacks.md).
If you encounter any other approaches for evaluating models in a static context, please reach out, we'd love
to learn more!
## Licensing
Copyright 2024 Protect AI
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
<http://www.apache.org/licenses/LICENSE-2.0>
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
## Acknowledgements
We were heavily inspired by [Matthieu Maitre](http://mmaitre314.github.io) who built [PickleScan](https://github.com/mmaitre314/picklescan).
We appreciate that work and have extended it significantly with ModelScan, which is open-sourced in a similar spirit.
## Contributing
We would love to have you contribute to our open source ModelScan project.
If you would like to contribute, please follow the details on [Contribution page](https://github.com/protectai/modelscan/blob/main/CONTRIBUTING.md).
| text/markdown | ProtectAI | community@protectai.com | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"click<9.0.0,>=8.1.3",
"h5py<4.0.0,>=3.9.0; extra == \"h5py\"",
"numpy>=1.24.3",
"rich<15.0.0,>=13.4.2",
"tensorflow<3.0,>=2.17; extra == \"tensorflow\"",
"tomlkit<0.14.0,>=0.12.3"
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.12.12 Linux/6.14.0-1017-azure | 2026-02-18T21:14:48.393406 | modelscan-0.8.8-py3-none-any.whl | 38,921 | 8b/1b/392fa2002cd60f22bc9328ce6eb015564930fef5ea7170947c93ef8384ee/modelscan-0.8.8-py3-none-any.whl | py3 | bdist_wheel | null | false | 248f7955140529541decc3fc1e039401 | a1997df2368628daa1b3f394f5660a338b1debc623dec67b38f665ba04ad967e | 8b1b392fa2002cd60f22bc9328ce6eb015564930fef5ea7170947c93ef8384ee | null | [
"LICENSE"
] | 610 |